On the Separation and Equivalence of Paging Strategies
Abstract

Cited by 20 (7 self)
It has been experimentally observed that LRU and variants thereof are the preferred strategies for online paging. However, under most proposed performance measures for online algorithms, the performance of LRU is the same as that of many other strategies which are inferior in practice. In this paper we first show that any performance measure which does not include a partition or implied distribution of the input sequences of a given length is unlikely to distinguish between any two lazy paging algorithms, as their performance is identical in a very strong sense. This provides a theoretical justification for the use of a more refined measure. Building upon the ideas of concave analysis by Albers et al. [AFG05], we prove strict separation between LRU and all other paging strategies. That is, we show that LRU is the unique optimum strategy for paging under a deterministic model. This provides full theoretical backing to the empirical observation that LRU is preferable in practice.
On adequate performance measures for paging
 In Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC ’06)
, 2006
Abstract

Cited by 13 (1 self)
Memory management is a fundamental problem in computer architecture and operating systems. We consider a two-level memory system with a fast but small cache and a slow but large main memory. The underlying theoretical problem is known as the paging problem. A sequence of requests to pages has to be served by making each requested page available in the cache. A paging strategy replaces pages in the cache with requested ones. The aim is to minimize the number of page faults, which occur whenever a requested page is not in the cache. Experience shows that the Least-Recently-Used (LRU) paging strategy usually achieves a factor of around 2 to 3 compared to the optimum number of faults. This contrasts with the theoretical worst case, in which this factor can be as large as the cache size k. One difficulty in analyzing the paging problem was the lack of an appropriate lower bound for the minimum number of page faults. We address this issue and propose a general lower bound which provides insight into the global structure of a given request sequence. In addition, we derive a characterization of the number of faults incurred by LRU. We give a theoretical explanation of why LRU performs well in practice. We classify the set
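The LRU-versus-optimum factor described above can be measured directly on a request sequence. The sketch below (an illustration, not the paper's lower-bound construction) counts faults for LRU and for Belady's offline-optimal rule, which on a fault evicts the cached page whose next request lies farthest in the future:

```python
def lru_faults(requests, k):
    """Page faults of LRU with cache size k."""
    cache, faults = [], 0              # ordered least- to most-recently used
    for p in requests:
        if p in cache:
            cache.remove(p)            # will be re-appended as most recent
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)           # evict the least-recently-used page
        cache.append(p)
    return faults

def opt_faults(requests, k):
    """Page faults of Belady's offline-optimal algorithm (OPT)."""
    cache, faults = set(), 0
    for i, p in enumerate(requests):
        if p not in cache:
            faults += 1
            if len(cache) == k:
                rest = requests[i + 1:]
                # evict the page whose next request is farthest away
                # (pages never requested again count as farthest)
                def next_use(q):
                    return rest.index(q) if q in rest else len(rest)
                cache.remove(max(cache, key=next_use))
            cache.add(p)
    return faults
```

On the sequence 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with k = 3, LRU incurs 10 faults against OPT's 7, a ratio well below the worst-case factor k.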
Paging and List Update under Bijective Analysis
, 2009
Abstract

Cited by 7 (0 self)
It has long been known that for the paging problem in its standard form, competitive analysis cannot adequately distinguish algorithms based on their performance: there exists a vast class of algorithms which achieve the same competitive ratio, ranging from extremely naive and inefficient strategies (such as Flush-When-Full) to strategies of excellent performance in practice (such as Least-Recently-Used and some of its variants). A similar situation arises in the list update problem: in particular, under the cost formulation studied by Martínez and Roura [TCS 2000] and Munro [ESA 2000], every list update algorithm has, asymptotically, the same competitive ratio. Several refinements of competitive analysis, as well as alternative performance measures, have been introduced in the literature, with varying degrees of success in narrowing this disconnect between theoretical analysis and empirical evaluation. In this paper we study these two fundamental online problems under the framework of bijective analysis [Angelopoulos, Dorrigiv and López-Ortiz, SODA 2007 and LATIN 2008]. This is an intuitive technique based on pairwise comparison of the costs incurred by two algorithms on sets of request sequences of the same size. Coupled with a well-established model of locality of reference due to Albers, Favrholdt and Giel [JCSS 2005], we show that Least-Recently-Used and Move-to-Front are the unique optimal algorithms for paging and list update, respectively. Prior to this work, only measures based on average-cost analysis have separated LRU and MTF from all other algorithms. Given that bijective analysis is a fairly stringent measure (and also subsumes average-cost analysis), we prove that in a strong sense LRU and MTF stand out as the best (deterministic) algorithms.
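Bijective analysis pairs up the request sequences of each length and compares the two algorithms' costs under that pairing. A brute-force sketch of the idea (my own illustration, not the paper's proof, which also incorporates the locality model): a bijection under which LRU is never worse than Flush-When-Full exists exactly when the sorted fault-count lists dominate elementwise.

```python
from itertools import product

def lru_faults(seq, k):
    cache, faults = [], 0              # ordered least- to most-recently used
    for p in seq:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)           # evict least-recently-used page
        cache.append(p)
    return faults

def fwf_faults(seq, k):
    cache, faults = set(), 0
    for p in seq:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache.clear()          # Flush-When-Full: empty the whole cache
            cache.add(p)
    return faults

def bijectively_no_worse(pages, n, k):
    """A bijection pi with LRU(s) <= FWF(pi(s)) for every sequence s of
    length n exists iff the sorted fault counts dominate elementwise."""
    lru = sorted(lru_faults(s, k) for s in product(pages, repeat=n))
    fwf = sorted(fwf_faults(s, k) for s in product(pages, repeat=n))
    return all(a <= b for a, b in zip(lru, fwf))
```

For example, over all 3^6 sequences of length 6 on three pages with k = 2, `bijectively_no_worse((1, 2, 3), 6, 2)` returns `True`.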
Parameterized Analysis of Paging and List Update Algorithms
Abstract

Cited by 3 (1 self)
It is well-established that input sequences for paging and list update have locality of reference. In this paper we analyze the performance of algorithms for these problems in terms of the amount of locality in the input sequence. We define a measure for locality that is based on Denning’s working set model and express the performance of well-known algorithms in terms of this parameter. This introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning’s working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their intuitive relative strengths. It also reflects the intuition that a larger cache leads to better performance. We obtain a similar separation for list update algorithms. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results.
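Denning’s working set at time t with window size τ is the set of distinct pages among the last τ requests. One locality parameter in this spirit, the average working-set size, can be sketched as follows (an illustration only; the paper’s exact measure may be defined differently):

```python
def avg_working_set_size(requests, tau):
    """Average number of distinct pages in each trailing window of tau
    requests: smaller values indicate more locality of reference."""
    sizes = [len(set(requests[max(0, t - tau + 1): t + 1]))
             for t in range(len(requests))]
    return sum(sizes) / len(sizes)
```

A highly local sequence such as 1, 1, 1, 2, 2, 2 scores lower under this measure than 1, 2, 3, 4, 5, 6, matching the intuition that the former is easier for a small cache.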
Simple optimality proofs for least recently used in the presence of locality of reference
 Research Memoranda 055, Maastricht: METEOR, Maastricht Research School of Economics of Technology and Organization
, 2009
unknown title
Abstract
In this paper we give a finer separation of several known paging algorithms using a new technique called relative interval analysis. This technique compares the fault rate of two paging algorithms across the entire range of inputs of a given size rather than in the worst case alone. Using this technique we characterize the relative performance of LRU and LRU-2, as well as LRU and FWF, among others. We also show that lookahead is beneficial for a paging algorithm, a fact that is well known in practice but was, until recently, not verified by theory.
FIFO anomaly is unbounded
, 2010
"... Abstract. Virtual memory of computers is usually implemented by demand paging. For some page replacement algorithms the number of page faults may increase as the number of page frames increases. Belady, Nelson and Shedler [5] constructed reference strings for which page replacement algorithm FIFO [ ..."
Abstract
Virtual memory of computers is usually implemented by demand paging. For some page replacement algorithms the number of page faults may increase as the number of page frames increases. Belady, Nelson and Shedler [5] constructed reference strings for which the page replacement algorithm FIFO [10,
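The anomaly is easy to reproduce. A quick check using Belady's classic reference string, on which FIFO with 4 frames incurs more faults than with 3:

```python
def fifo_faults(requests, frames):
    """Page faults of FIFO replacement with the given number of frames."""
    queue, faults = [], 0
    for p in requests:
        if p not in queue:
            faults += 1
            if len(queue) == frames:
                queue.pop(0)       # evict the page resident longest
            queue.append(p)
    return faults

belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# fifo_faults(belady, 3) -> 9 faults; fifo_faults(belady, 4) -> 10 faults
```

Adding a frame here makes FIFO strictly worse, while stack algorithms such as LRU can never exhibit this behavior.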
Algorithmic research for the 21st century
 Economic Globalization & the Choice of Asia: Shanghai Forum 2005
, 2005
Variations on the Theme of Caching
, 2005
Abstract
This thesis is concerned with caching algorithms. We investigate three variations of the caching problem: web caching in the Torng framework, relative competitiveness, and caching with request reordering. In the first variation we define different cost models involving page sizes and page costs. We also present the Torng cost framework introduced by Torng in [29]. We analyze the competitive ratio of online deterministic marking algorithms in the Bit cost model combined with the Torng framework. We show that, given some specific restrictions on the set of possible request sequences, any marking algorithm is 2-competitive. The second variation consists of using the relative competitiveness ratio on an access graph as a complexity measure. We use the concept of access graphs introduced by Borodin
The Cooperative Ratio of Online Algorithms
, 2007
"... Online algorithms are usually analyzed using competitive analysis, in which the performance of an online algorithm on a sequence is normalized by the performance of the optimal offline algorithm on that sequence. In this paper we introduce cooperative analysis as an alternative general framework ..."
Abstract
Online algorithms are usually analyzed using competitive analysis, in which the performance of an online algorithm on a sequence is normalized by the performance of the optimal offline algorithm on that sequence. In this paper we introduce cooperative analysis as an alternative general framework for the analysis of online algorithms. The idea is to normalize the performance of an online algorithm by a measure other than the performance of the offline optimal algorithm OPT. We show that in many instances the performance of OPT on a sequence is a coarse approximation of the difficulty or complexity of a given input. Using a finer, more natural measure we can separate paging and list update algorithms which were otherwise indistinguishable under the classical model. This creates a performance hierarchy of algorithms which better reflects the intuitive relative strengths between them. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the cooperative case, which matches experimental results. This confirms that the ability of the cooperative online algorithm to ignore pathological worst cases can lead to algorithms that are more efficient in practice.