Results 1–10 of 11
On adequate performance measures for paging
 In Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC ’06)
, 2006
Abstract

Cited by 7 (1 self)
Memory management is a fundamental problem in computer architecture and operating systems. We consider a two-level memory system with a fast but small cache and a slow but large main memory. The underlying theoretical problem is known as the paging problem. A sequence of requests to pages has to be served by making each requested page available in the cache. A paging strategy replaces pages in the cache with requested ones. The aim is to minimize the number of page faults, which occur whenever a requested page is not in the cache. Experience shows that the Least-Recently-Used (LRU) paging strategy usually achieves a factor of around 2 to 3 compared to the optimum number of faults. This contrasts with the theoretical worst case, in which this factor can be as large as the cache size k. One difficulty in analyzing the paging problem was the lack of an appropriate lower bound for the minimum number of page faults. We address this issue and propose a general lower bound which provides insight into the global structure of a given request sequence. In addition, we derive a characterization of the number of faults incurred by LRU. We give a theoretical explanation of why LRU performs well in practice. We classify the set
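The paging model described in this abstract is easy to make concrete. The sketch below (function name and toy request sequence are ours, not from the paper) counts the page faults LRU incurs on a request sequence with a cache of size k:

```python
from collections import OrderedDict

def lru_faults(requests, k):
    """Count page faults when an LRU cache of size k serves the sequence."""
    cache = OrderedDict()          # keys ordered from least to most recently used
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # hit: page becomes most recently used
        else:
            faults += 1                    # fault: page must be brought in
            if len(cache) == k:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = True
    return faults

print(lru_faults([1, 2, 3, 1, 2, 4, 1, 2], 3))  # 4 faults on this toy sequence
```

An offline optimum for the same sequence would also fault 4 times here; the factor-2-to-3 gap the abstract mentions only shows up on longer, realistic traces.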
List update with locality of reference
 In Proceedings of the 8th Latin American Theoretical Informatics Symposium
, 2008
Abstract

Cited by 4 (3 self)
It is known that in practice, request sequences for the list update problem exhibit a certain degree of locality of reference. Motivated by this observation, we apply the locality of reference model for the paging problem due to Albers et al. [STOC 2002/JCSS 2005] in conjunction with bijective analysis [SODA 2007] to list update. Using this framework, we prove that Move-to-Front (MTF) is the unique optimal algorithm for list update. This addresses the open question of defining an appropriate model for capturing locality of reference in the context of list update [Hester and Hirschberg, ACM Comp. Surv. 1985]. Our results hold both for the standard cost function of Sleator and Tarjan [CACM 1985] and for the improved cost function proposed independently by Martínez and Roura [TCS 2000] and Munro [ESA 2000]. This resolves an open problem of Martínez and Roura, namely proposing a measure which can successfully separate MTF from all other list-update algorithms.
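The Move-to-Front rule and the standard Sleator–Tarjan cost model mentioned above can be sketched in a few lines (the function name and example are ours): accessing the item at 1-indexed position i costs i, and moving the accessed item to the front afterwards is free.

```python
def mtf_cost(requests, initial_list):
    """Total access cost of Move-to-Front under the Sleator-Tarjan full cost
    model: accessing the item at (1-indexed) position i costs i, and moving
    the accessed item to the front afterwards is free."""
    lst = list(initial_list)
    total = 0
    for x in requests:
        i = lst.index(x)           # 0-based position of the requested item
        total += i + 1             # pay the 1-indexed access cost
        lst.insert(0, lst.pop(i))  # move the accessed item to the front
    return total

print(mtf_cost([3, 3, 1], [1, 2, 3]))  # 3 + 1 + 2 = 6
```

On sequences with locality of reference, repeated requests find their item near the front, which is the intuition behind MTF's optimality in this model.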
Paging and List Update under Bijective Analysis
, 2009
Abstract

Cited by 4 (0 self)
It has long been known that for the paging problem in its standard form, competitive analysis cannot adequately distinguish algorithms based on their performance: there exists a vast class of algorithms which achieve the same competitive ratio, ranging from extremely naive and inefficient strategies (such as Flush-When-Full) to strategies with excellent performance in practice (such as Least-Recently-Used and some of its variants). A similar situation arises in the list update problem: in particular, under the cost formulation studied by Martínez and Roura [TCS 2000] and Munro [ESA 2000], every list update algorithm has, asymptotically, the same competitive ratio. Several refinements of competitive analysis, as well as alternative performance measures, have been introduced in the literature, with varying degrees of success in narrowing this disconnect between theoretical analysis and empirical evaluation. In this paper we study these two fundamental online problems under the framework of bijective analysis [Angelopoulos, Dorrigiv and López-Ortiz, SODA 2007 and LATIN 2008]. This is an intuitive technique based on pairwise comparison of the costs incurred by two algorithms on sets of request sequences of the same size. Coupled with a well-established model of locality of reference due to Albers, Favrholdt and Giel [JCSS 2005], we show that Least-Recently-Used and Move-to-Front are the unique optimal algorithms for paging and list update, respectively. Prior to this work, only measures based on average-cost analysis had separated LRU and MTF from all other algorithms. Given that bijective analysis is a fairly stringent measure (and also subsumes average-cost analysis), we prove that in a strong sense LRU and MTF stand out as the best (deterministic) algorithms.
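The pairwise-comparison idea behind bijective analysis can be checked by brute force on tiny instances: over a finite set of sequences, a cost-non-increasing bijection from algorithm A to algorithm B exists exactly when A's sorted cost list is pointwise at most B's. The sketch below (our compact stand-in implementations, not the paper's; unrestricted sequences rather than the locality-restricted ones the paper analyzes) compares LRU against Flush-When-Full this way.

```python
from collections import OrderedDict
from itertools import product

def lru_faults(requests, k):
    cache, faults = OrderedDict(), 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)
            cache[p] = True
    return faults

def fwf_faults(requests, k):
    """Flush-When-Full: on a fault with a full cache, empty the whole cache."""
    cache, faults = set(), 0
    for p in requests:
        if p not in cache:
            if len(cache) == k:
                cache = set()
            cache.add(p)
            faults += 1
    return faults

def bijectively_no_worse(alg_a, alg_b, universe, length, k):
    """A cost-non-increasing bijection from alg_a's to alg_b's sequences exists
    iff the sorted cost lists over all sequences compare pointwise."""
    seqs = list(product(universe, repeat=length))
    costs_a = sorted(alg_a(list(s), k) for s in seqs)
    costs_b = sorted(alg_b(list(s), k) for s in seqs)
    return all(a <= b for a, b in zip(costs_a, costs_b))

print(bijectively_no_worse(lru_faults, fwf_faults, [1, 2, 3], 5, 2))  # True
```

FWF's cache always holds a subset of the k most recently used pages, so LRU faults at most as often on every single sequence; the bijective comparison therefore succeeds trivially here. Separating LRU from other *lazy* algorithms is the hard part and needs the locality restriction.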
Parameterized Analysis of Paging and List Update Algorithms
Abstract

Cited by 2 (0 self)
It is well-established that input sequences for paging and list update have locality of reference. In this paper we analyze the performance of algorithms for these problems in terms of the amount of locality in the input sequence. We define a measure for locality that is based on Denning’s working set model and express the performance of well-known algorithms in terms of this parameter. This introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning’s working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their intuitive relative strengths. It also reflects the intuition that a larger cache leads to better performance. We obtain a similar separation for list update algorithms. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results.
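Denning's working set at window size w is the set of distinct pages among the last w requests. A simple statistic in that spirit (our illustration; the paper's exact parameter is defined differently) measures locality as the largest working set over all windows:

```python
def working_set_sizes(requests, w):
    """Number of distinct pages in each window of w consecutive requests
    (Denning's working set, evaluated along the sequence)."""
    return [len(set(requests[i:i + w])) for i in range(len(requests) - w + 1)]

def max_working_set(requests, w):
    """A simple locality statistic: the largest working set over all windows.
    Sequences with strong locality keep this well below w."""
    return max(working_set_sizes(requests, w))

print(max_working_set([1, 1, 2, 1, 3, 3], 3))  # 3, from the window [2, 1, 3]
```

A parameterized bound then expresses an algorithm's fault count as a function of the cache size k and such a locality parameter, rather than as a ratio against OPT.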
The Cooperative Ratio of Online Algorithms
, 2007
Abstract
Online algorithms are usually analyzed using competitive analysis, in which the performance of an online algorithm on a sequence is normalized by the performance of the optimal offline algorithm on that sequence. In this paper we introduce cooperative analysis as an alternative general framework for the analysis of online algorithms. The idea is to normalize the performance of an online algorithm by a measure other than the performance of the offline optimal algorithm OPT. We show that in many instances the performance of OPT on a sequence is a coarse approximation of the difficulty or complexity of a given input. Using a finer, more natural measure, we can separate paging and list update algorithms which were otherwise indistinguishable under the classical model. This creates a performance hierarchy of algorithms which better reflects the intuitive relative strengths between them. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the cooperative case, which matches experimental results. This confirms that the ability of the online cooperative algorithm to ignore pathological worst cases can lead to algorithms that are more efficient in practice.
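The core move of the abstract, normalizing by something other than OPT, can be shown side by side on a toy paging instance. Belady's farthest-in-future rule plays OPT; the number of distinct pages serves as a crude stand-in for an input-difficulty measure (our choice for illustration only, far coarser than anything the paper proposes):

```python
from collections import OrderedDict

def lru_faults(requests, k):
    cache, faults = OrderedDict(), 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)
            cache[p] = True
    return faults

def opt_faults(requests, k):
    """Belady's offline optimum: on a fault, evict the cached page whose next
    request lies farthest in the future (or never comes)."""
    cache, faults = set(), 0
    for i, p in enumerate(requests):
        if p in cache:
            continue
        faults += 1
        if len(cache) == k:
            def next_use(q):
                later = [j for j in range(i + 1, len(requests)) if requests[j] == q]
                return later[0] if later else float('inf')
            cache.remove(max(cache, key=next_use))
        cache.add(p)
    return faults

seq, k = [1, 2, 3, 1, 2, 4, 1, 2], 3
competitive = lru_faults(seq, k) / opt_faults(seq, k)  # classical normalization
cooperative = lru_faults(seq, k) / len(set(seq))       # normalize by a difficulty proxy
print(competitive, cooperative)
```

Both ratios happen to be 1.0 on this short sequence; the point is only the shape of the comparison, the measure in the denominator being the object of study.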
Online Occlusion Culling
Abstract
Modern computer graphics systems are able to render sophisticated 3D scenes consisting of millions of polygons. For most camera positions, only a small collection of these polygons is visible. We address the problem of occlusion culling, i.e., determining hidden primitives. Aila, Miettinen, and Nordlund suggested implementing a FIFO buffer on graphics cards which is able to delay polygons before drawing them [2]. When one of the polygons within the buffer is occluded or masked by another polygon arriving later from the application, the rendering engine can drop the occluded one without rendering it, saving important rendering time. We introduce a theoretical online model to analyse these problems using competitive analysis. For different cost measures we present the first competitive algorithms for online occlusion culling. Our implementation shows that these algorithms outperform the FIFO strategy for real 3D scenes as well.
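The delay-buffer mechanism the abstract describes can be simulated in a deliberately tiny 1-D model (the data layout and the occlusion rule below are our simplification, not the paper's model): each polygon carries a depth and a screen interval, and a buffered polygon is dropped when a later arrival is strictly nearer and covers its whole span.

```python
from collections import deque

def render_with_delay_buffer(polygons, buffer_size):
    """Toy model of the FIFO delay buffer: each polygon is (depth, lo, hi) on a
    1-D screen. A buffered polygon is dropped, unrendered, when a later arrival
    is strictly nearer and covers its whole span; otherwise polygons leave the
    buffer in FIFO order and are drawn."""
    buf, rendered = deque(), []
    for depth, lo, hi in polygons:
        # drop buffered polygons that the newcomer fully occludes
        buf = deque(p for p in buf
                    if not (depth < p[0] and lo <= p[1] and p[2] <= hi))
        buf.append((depth, lo, hi))
        if len(buf) > buffer_size:
            rendered.append(buf.popleft())  # oldest survivor is rendered
    rendered.extend(buf)                    # flush the buffer at frame end
    return rendered

# The far polygon (depth 5) is masked by a nearer full-screen one and never drawn.
print(render_with_delay_buffer([(5, 0, 10), (1, 0, 10), (3, 2, 4)], 2))
```

The competitive question the paper studies is how much rendering cost such a bounded buffer can save compared to an offline strategy that sees the whole polygon stream in advance.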
On the Separation and Equivalence of Paging Strategies
Abstract
It has been experimentally observed that LRU and variants thereof are the preferred strategies for online paging. However, under most proposed performance measures for online algorithms, the performance of LRU is the same as that of many other strategies which are inferior in practice. In this paper we first show that any performance measure which does not include a partition or implied distribution of the input sequences of a given length is unlikely to distinguish between any two lazy paging algorithms, as their performance is identical in a very strong sense. This provides a theoretical justification for the use of a more refined measure. Building upon the ideas of concave analysis by Albers et al. [AFG05], we prove strict separation between LRU and all other paging strategies. That is, we show that LRU is the unique optimal strategy for paging under a deterministic model. This provides full theoretical backing to the empirical observation that LRU is preferable in practice.
The Relative Worst Order Ratio Applied to Paging
Abstract
The relative worst order ratio, a relatively new measure of the quality of online algorithms, is extended and applied to the paging problem. We obtain results significantly different from those obtained with the competitive ratio. First, we devise a new deterministic paging algorithm, Retrospective-LRU, and show that, according to the relative worst order ratio, it performs better than LRU. This is supported by experimental results, but contrasts with the competitive ratio. Furthermore, the relative worst order ratio (and practice) indicates that LRU is better than FWF, though all deterministic marking algorithms have the same competitive ratio. Lookahead is also shown to be a significant advantage with this new measure, whereas the competitive ratio does not reflect that lookahead can be helpful. Finally, as with the competitive ratio, no deterministic marking algorithm can be significantly better than LRU, but the randomized algorithm MARK is better than LRU.
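The relative worst order ratio compares two algorithms on their respective worst *orderings* of the same multiset of requests. A brute-force finite snapshot of that comparison (the real measure is asymptotic, with additive constants; the compact fault counters are our stand-ins) looks like this:

```python
from collections import OrderedDict
from itertools import permutations

def lru_faults(requests, k):
    cache, faults = OrderedDict(), 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)
            cache[p] = True
    return faults

def fwf_faults(requests, k):
    cache, faults = set(), 0
    for p in requests:
        if p not in cache:
            if len(cache) == k:
                cache = set()
            cache.add(p)
            faults += 1
    return faults

def worst_perm_cost(alg, seq, k):
    """Worst cost of alg over all reorderings of the multiset of requests."""
    return max(alg(list(p), k) for p in set(permutations(seq)))

seq, k = [1, 2, 3, 1, 2, 1], 2
ratio = worst_perm_cost(fwf_faults, seq, k) / worst_perm_cost(lru_faults, seq, k)
print(ratio)  # >= 1: FWF's worst ordering is never better than LRU's
```

Because the comparison reorders the input rather than measuring against OPT, it can separate LRU from FWF even though both have competitive ratio k.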
CS369N: Beyond Worst-Case Analysis Lecture #5: Self-Improving Algorithms
, 2010
Abstract
Last lecture concluded with a discussion of semi-random graph models, an interpolation between worst-case analysis and average-case analysis designed to identify robust algorithms in the face of strong impossibility results for worst-case guarantees. This lecture and the next two give three more analysis frameworks that blend aspects of worst- and average-case analysis. Today’s model, of self-improving algorithms, is the closest to traditional average-case analysis. The model and results are by Ailon, Chazelle, Comandur, and Liu [1]. The Setup. For a given computational problem, we posit a distribution over instances. The difference between today’s model and traditional average-case analysis is that the distribution is unknown. The goal is to design an algorithm that, given an online sequence of instances, each an independent and identically distributed (i.i.d.) sample, quickly converges to an algorithm that is optimal for the underlying distribution. Thus the algorithm is “automatically self-tuning.” The challenge is to accomplish this goal with fewer “training samples” and smaller space than a brute-force “learn the data model” approach. Main Example: Sorting. The obvious first problem to apply the self-improving paradigm to is sorting in the comparison model, and that’s what we do here. Each instance is an array of n elements, with the ith element drawn from a real-valued distribution Di. A key assumption is that the Di’s are independent distributions; Section 5.3 discusses this assumption. The distributions need not be identical. Identical distributions are uninteresting in our context, since in that case the relative order of the elements is a uniformly random permutation; every correct sorting algorithm then requires Ω(n log n) expected comparisons, and a matching upper bound is achieved by MergeSort (say).
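A heavily simplified sketch of the self-improving sorting idea follows (our simplification: the actual algorithm maintains a separate search structure per input position, whereas we pool all positions into one set of bucket boundaries). The training phase learns empirical quantile boundaries from sample instances; later instances are then sorted by bucketing against those boundaries and sorting the small buckets.

```python
import bisect
import random

class SelfImprovingSorter:
    """Learn bucket boundaries from training instances drawn from the (unknown)
    input distribution, then sort later instances by bucketing + small sorts.
    Correct on any input; fast when instances really follow the distribution."""

    def __init__(self, training_instances):
        n = len(training_instances[0])
        pooled = sorted(x for inst in training_instances for x in inst)
        step = max(1, len(pooled) // n)
        self.boundaries = pooled[step::step][: n - 1]  # n-1 empirical quantiles

    def sort(self, instance):
        buckets = [[] for _ in range(len(self.boundaries) + 1)]
        for x in instance:
            buckets[bisect.bisect_right(self.boundaries, x)].append(x)
        out = []
        for b in buckets:           # buckets hold O(1) elements in expectation,
            out.extend(sorted(b))   # so the per-bucket sorts are cheap
        return out

random.seed(0)
train = [[random.random() for _ in range(8)] for _ in range(50)]
sorter = SelfImprovingSorter(train)
inst = [random.random() for _ in range(8)]
print(sorter.sort(inst) == sorted(inst))  # True: bucketing preserves order
```

When the learned boundaries match the true quantiles, each bucket receives a constant expected number of elements, which is how the algorithm beats the Ω(n log n) comparison bound in expectation over the distribution.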
Algorithms for Memory Hierarchies
, 2004
Abstract
This report illustrates the area on which my scientific research is focused, in the framework of my doctorate program, also showing my research directions and a few preliminary results. The document is organized as follows. Section 1 motivates the research, presenting the hierarchical memory organization of modern computer systems and giving an overview of different methodological approaches to the analysis of algorithms for such systems. Subsequent sections deepen two of these approaches. Section 2 shows a possible formalization of the concept of spatial locality, which I shall consider as a possible modelling tool on which to base my future research. Another worthy candidate for defining a design/analysis framework is the ideal-cache model. Section 3 introduces this model and cache-oblivious algorithms, drawing the state of the art of this recent yet amazingly developed field. Sections 2 and 3 are concerned with ongoing research; at the end of each section the most prominent future research directions are discussed.