Paging is one of the most prominent problems in the field of online algorithms. We have to serve a sequence of page requests using a cache that can hold up to k pages. If the currently requested page is in the cache we have a cache hit; otherwise a cache miss occurs, and the requested page needs to be loaded into the cache. The goal is to minimize the number of cache misses by providing a good page-replacement strategy. This problem arises in memory management when data is stored in a two-level memory hierarchy, consisting of a small and fast memory (cache) and a slow but large memory (disk). The most important application area is the virtual memory management of operating systems: accessed pages are either already in RAM or need to be loaded from the hard disk into RAM using expensive I/O. The time needed to access RAM is insignificant compared to an I/O operation, which takes several milliseconds.
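The setting can be made concrete with a minimal simulation that serves a request sequence and counts cache misses. The sketch below is illustrative Python, not code from the thesis, and uses simple FIFO replacement as a placeholder policy.

from collections import deque

def count_misses_fifo(requests, k):
    """Serve a request sequence with a cache of k pages and count the
    cache misses, here with the simple FIFO replacement strategy."""
    cache = set()
    order = deque()              # pages in the order they entered the cache
    misses = 0
    for page in requests:
        if page in cache:        # cache hit: nothing to do
            continue
        misses += 1              # cache miss: page must be loaded
        if len(cache) == k:      # cache full: evict the oldest page
            cache.remove(order.popleft())
        cache.add(page)
        order.append(page)
    return misses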
The traditional evaluation framework for online algorithms is competitive analysis, where the online algorithm is compared to the optimal offline solution. A shortcoming of competitive analysis is its overly pessimistic worst-case guarantees. For example, LRU has a theoretical competitive ratio of k, but in practice the observed ratio rarely exceeds 4.
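To make such empirical ratios tangible, the following sketch (illustrative Python, not from the thesis) counts LRU misses and the misses of Belady's offline optimum on a concrete request trace; dividing the two gives the observed ratio to which the worst-case bound of k refers.

from collections import OrderedDict

def lru_misses(requests, k):
    """LRU: on a miss, evict the least recently used page."""
    cache, misses = OrderedDict(), 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)           # hit: page becomes most recently used
        else:
            misses += 1
            if len(cache) == k:
                cache.popitem(last=False)     # evict the least recently used page
            cache[page] = True
    return misses

def opt_misses(requests, k):
    """Belady's offline optimum: evict the cached page whose next use lies furthest in the future."""
    cache, misses = set(), 0
    for i, page in enumerate(requests):
        if page in cache:
            continue
        misses += 1
        if len(cache) == k:
            def next_use(q):
                j = i + 1
                while j < len(requests) and requests[j] != q:
                    j += 1
                return j                      # len(requests) if q is never requested again
            cache.remove(max(cache, key=next_use))
        cache.add(page)
    return misses

# observed ratio on a trace, e.g. lru_misses(trace, k) / opt_misses(trace, k)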
Reducing this gap between theory and practice has been an active research topic in recent years. Motivated by the assumption that LRU is one of the best algorithms in practice, more recent evaluation models have been used to prove that LRU is an optimal online algorithm, or belongs to a class of optimal algorithms. Most of the newer models make LRU-friendly assumptions about the input, thus leaving little room for new algorithms.
Only a few works in the field of online paging have introduced new algorithms that can compete with LRU with respect to the number of cache misses.
In the first part of this thesis we study strongly competitive randomized paging algorithms, i.e. algorithms with optimal competitive guarantees. Although the tight bound for the competitive ratio has been known for decades, current algorithms matching this bound are complex and have high running times and memory requirements. We propose the algorithm OnlineMin which processes a page request in O(log k/log log k) time in the worst case. The best previously known solution requires O(k^2) time.
Usually the memory requirement of a paging algorithm is measured by the maximum number of pages that the algorithm keeps track of. Any algorithm stores information about the k pages in the cache. In addition, it can also store information about pages not in the cache, called bookmarks. We answer the open question of Bein et al. '07 of whether strongly competitive randomized paging algorithms using only o(k) bookmarks exist. To do so, we modify the Partition algorithm of McGeoch and Sleator '85, which has an unbounded bookmark complexity, and obtain Partition2, which uses O(k/log k) bookmarks.
In the second part we extract ideas from the theoretical analysis of randomized paging algorithms in order to design deterministic algorithms that perform well in practice. We refine competitive analysis by introducing the attack rate parameter r, which ranges between 1 and k. We show that r is a tight bound on the competitive ratio of deterministic algorithms.
We give empirical evidence that r is usually much smaller than k and that r-competitive algorithms therefore perform reasonably on real-world traces. By introducing the r-competitive priority-based algorithm class OnOPT, we obtain a collection of promising algorithms to beat the LRU standard. We single out the new algorithm RDM and show that it outperforms LRU and some of its variants on a wide range of real-world traces.
Since RDM is more complex than LRU, one might suspect at first sight that the gain from the lower number of cache misses is cancelled out by the higher runtime for processing page requests. We engineer a fast implementation of RDM and compare it to LRU and the very fast FIFO algorithm in an overall evaluation scheme, where we measure the runtime of the algorithms and add a penalty for each cache miss. Experimental results show that for realistic penalties RDM still outperforms these two algorithms, even if we grant the competitors an idealized runtime of 0.
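A minimal sketch of this overall evaluation metric (illustrative only; the 5 ms default is an assumed stand-in for the cost of one disk I/O, not a value from the thesis):

def overall_cost(runtime_seconds, cache_misses, miss_penalty_seconds=0.005):
    """Measured runtime of the paging algorithm plus a fixed penalty per cache miss."""
    return runtime_seconds + cache_misses * miss_penalty_seconds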
Algorithms for the Maximum Cardinality Matching Problem that greedily add edges to the solution enjoy great popularity. We systematically study the strengths and limitations of such algorithms, in particular of those that use node degree information to select the next edge. Concentrating on nodes of small degree is a promising approach: it has been shown, experimentally and analytically, that very good approximate solutions are obtained for restricted classes of random graphs. Results achieved under these idealized conditions, however, have remained unsupported by guarantees that rely on less optimistic assumptions.
The KarpSipser algorithm and 1-2-Greedy, which is a simplified variant of the well-known MinGreedy algorithm, proceed as follows. In each step, if a node of degree one (resp. at most two) exists, then an edge incident with a minimum degree node is picked, otherwise an arbitrary edge is added to the solution.
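A compact sketch of 1-2-Greedy (illustrative Python, not code from the thesis; KarpSipser differs only in preferring degree-1 nodes and otherwise always taking an arbitrary edge):

def one_two_greedy(adj):
    """adj: undirected graph as {node: set of neighbors}.
    Repeatedly pick an edge at a node of minimum degree if that degree is at
    most 2, otherwise an arbitrary edge; matched nodes are removed."""
    graph = {u: set(nbrs) for u, nbrs in adj.items()}    # work on a copy
    matching = []

    def remove(u):                        # delete u together with its incident edges
        for v in graph.pop(u):
            graph[v].discard(u)

    while True:
        live = [u for u, nbrs in graph.items() if nbrs]  # nodes that still have an edge
        if not live:
            break
        u = min(live, key=lambda x: len(graph[x]))       # a node of minimum degree
        if len(graph[u]) > 2:
            u = live[0]                                  # no small-degree node: arbitrary edge
        v = next(iter(graph[u]))                         # any edge incident with u
        matching.append((u, v))
        remove(u)
        remove(v)
    return matching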
We analyze the approximation ratio of both algorithms on graphs of degree at most D. Families of graphs are known for which the expected approximation ratio converges to 1/2 as D grows to infinity, even if randomization against the worst case is used. If randomization is not allowed, then we show guarantees that also converge to 1/2: the 1-2-Greedy algorithm achieves approximation ratio (D-1)/(2D-3); if the graph is bipartite, then the more restricted KarpSipser algorithm achieves the even stronger factor D/(2D-2). These guarantees set both algorithms apart from other famous matching heuristics such as Greedy or MRG: those algorithms depend on randomization to break the 1/2-barrier even for paths with D=2. Moreover, for any D our guarantees are strictly larger than the best known bounds on the expected performance of the randomized variants of Greedy and MRG.
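As a quick illustration of these guarantees (computed directly from the formulas above), both bounds stay strictly above 1/2 and approach it as D grows:

# 1-2-Greedy on general graphs vs. KarpSipser on bipartite graphs, degree bound D
for D in (2, 3, 5, 10, 100):
    print(D, (D - 1) / (2 * D - 3), D / (2 * D - 2))
# D=2 gives 1.0 for both; D=100 gives roughly 0.503 and 0.505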
To investigate whether KarpSipser or 1-2-Greedy can be refined to achieve better performance, or be simplified without loss of approximation quality, we systematically study entire classes of deterministic greedy-like algorithms for matching. To this end we employ the adaptive priority algorithm framework of Borodin, Nielsen, and Rackoff: in each round, an adaptive priority algorithm requests one or more edges by formulating their properties (such as "is incident with a node of minimum degree") and adds the received edges to the solution. No constraints on time and space usage are imposed, hence an adaptive priority algorithm is restricted only by its nature of picking edges in a greedy-like fashion.
If an adaptive priority algorithm requests edges by processing degree information, then we show that it does not surpass the performance of KarpSipser: our D/(2D-2) guarantee for bipartite graphs is tight, and KarpSipser is optimal among all such "degree-sensitive" algorithms even though it uses degree information merely to detect degree-1 nodes. Moreover, we show that if the degrees of both endpoints of an edge may be processed, as e.g. the Double-MinGreedy algorithm does, then the performance of KarpSipser can only be increased marginally, if at all. Of special interest is the capability of requesting edges not only by specifying the degree of a node but additionally its set of neighbors; this enables an adaptive priority algorithm to "traverse" the input graph. We show that on general degree-bounded graphs no such algorithm can beat factor (D-1)/(2D-3). Hence our bound for 1-2-Greedy is tight, and this algorithm performs optimally even though it ignores neighbor information. Furthermore, we show that an adaptive priority algorithm deteriorates to approximation ratio exactly 1/2 if it does not request small-degree nodes. This tremendous decline of approximation quality happens on graphs for which 1-2-Greedy and KarpSipser perform optimally, namely paths with D=2. Consequently, requesting small-degree nodes is vital to beat factor 1/2.
Summarizing, our results show that 1-2-Greedy and KarpSipser stand out from known and hypothetical algorithms as an intriguing combination of both approximation quality and conceptual simplicity.
Computing the diameter of a graph is a fundamental part of network analysis. Even if the data fits into main memory, the best known algorithm needs O(n²) time with high probability to compute the exact diameter [3]. In practice this is usually too costly. Therefore, heuristics have been developed to approximate the diameter much faster. The heuristic “double sweep lower bound” (dslb) yields reasonably good results and needs only two Breadth-First Searches (BFS). Hence, dslb has a complexity of O(n+m). If the data does not fit into main memory, an external-memory algorithm is needed. In this thesis the I/O model by Vitter and Shriver [4] is used; it is widely accepted and has produced suitable results in the past. The best known external-memory BFS implementation has an I/O complexity of Ω(n/√B + sort(n)) for sparse graphs [5]. This is still very expensive compared to the I/O complexity of sorting, sort(N) = O((N/B) · log_{M/B}(N/B)). While there is no improvement for the external-memory computation of BFS yet, Meyer published a different approach called “Parallel clustering growing approach” (PAR_APPROX) that trades off I/O complexity against the approximation guarantee [6].
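The dslb heuristic itself is easy to state; the following sketch (illustrative Python on an in-memory adjacency list, not the external-memory setting of the thesis) performs the two BFS runs and returns the resulting lower bound on the diameter of a connected, unweighted graph.

from collections import deque

def bfs_farthest(adj, source):
    """BFS from source; returns a node at maximum distance and that distance."""
    dist = {source: 0}
    queue = deque([source])
    far_node, far_dist = source, 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > far_dist:
                    far_node, far_dist = v, dist[v]
                queue.append(v)
    return far_node, far_dist

def double_sweep_lower_bound(adj, start):
    """dslb: BFS from an arbitrary start node, then a second BFS from the node
    found farthest away; the largest distance seen is a lower bound on the diameter."""
    a, _ = bfs_farthest(adj, start)
    _, lower_bound = bfs_farthest(adj, a)
    return lower_bound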
In this thesis, different existing approaches will be evaluated. In addition, PAR_APPROX will be implemented and analyzed to determine whether it is viable in practice. One main result will be that it is difficult to choose the parameter such that PAR_APPROX is reasonably fast for every graph class without using the semi-external-memory Single Source Shortest Path (SSSP) implementation of [1]. With that implementation, however, the gain over external-memory BFS is small. Therefore, the approach PAR_APPROX_R will be developed. Furthermore, a lower bound on the expected error of PAR_APPROX_R will be proved on a carefully chosen difficult input class. With PAR_APPROX_R the desired gain will be reached.