Results 1-5 of 5
On adequate performance measures for paging
In Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC ’06), 2006.
"... Memory management is a fundamental problem in computer architecture and operating systems. We consider a twolevel memory system with fast, but small cache and slow, but large main memory. The underlying theoretical problem is known as the paging problem. A sequence of requests to pages has to be se ..."
Cited by 7 (1 self)
Abstract:
Memory management is a fundamental problem in computer architecture and operating systems. We consider a two-level memory system with a fast but small cache and a slow but large main memory. The underlying theoretical problem is known as the paging problem. A sequence of requests to pages has to be served by making each requested page available in the cache. A paging strategy replaces pages in the cache with requested ones. The aim is to minimize the number of page faults, which occur whenever a requested page is not in the cache. Experience shows that the Least-Recently-Used (LRU) paging strategy usually achieves a factor of around 2 to 3 compared to the optimum number of faults. This contrasts with the theoretical worst case, in which this factor can be as large as the cache size k. One difficulty in analyzing the paging problem was the lack of an appropriate lower bound for the minimum number of page faults. We address this issue and propose a general lower bound which provides insight into the global structure of a given request sequence. In addition, we derive a characterization of the number of faults incurred by LRU. We give a theoretical explanation of why LRU performs well in practice. We classify the set ...
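As a concrete illustration of the quantities in this abstract (a minimal Python sketch, not code from the paper), the snippet below counts the faults of LRU and of Belady's offline-optimal eviction rule on a small request sequence; the ratio of the two values is the factor the abstract refers to.

    from collections import OrderedDict

    def lru_faults(requests, k):
        """Count LRU page faults for a cache of size k."""
        cache = OrderedDict()              # cached pages, ordered by recency
        faults = 0
        for p in requests:
            if p in cache:
                cache.move_to_end(p)       # p becomes most recently used
            else:
                faults += 1
                if len(cache) == k:
                    cache.popitem(last=False)   # evict the least recently used page
                cache[p] = None
        return faults

    def opt_faults(requests, k):
        """Belady's optimal offline rule: on a fault with a full cache,
        evict the page whose next request lies furthest in the future."""
        cache, faults = set(), 0
        for i, p in enumerate(requests):
            if p in cache:
                continue
            faults += 1
            if len(cache) == k:
                def next_use(q):
                    later = (j for j in range(i + 1, len(requests)) if requests[j] == q)
                    return next(later, float('inf'))   # never requested again -> infinity
                cache.remove(max(cache, key=next_use))
            cache.add(p)
        return faults

    seq = [1, 2, 3, 1, 4, 2, 5, 1, 2, 3]
    print(lru_faults(seq, 3), opt_faults(seq, 3))   # 8 LRU faults vs. 6 optimal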
Paging and List Update under Bijective Analysis
2009.
"... It has long been known that for the paging problem in its standard form, competitive analysis cannot adequately distinguish algorithms based on their performance: there exists a vast class of algorithms which achieve the same competitive ratio, ranging from extremely naive and inefficient strategies ..."
Cited by 4 (0 self)
Abstract:
It has long been known that for the paging problem in its standard form, competitive analysis cannot adequately distinguish algorithms based on their performance: there exists a vast class of algorithms which achieve the same competitive ratio, ranging from extremely naive and inefficient strategies (such as Flush-When-Full) to strategies with excellent performance in practice (such as Least-Recently-Used and some of its variants). A similar situation arises in the list update problem: in particular, under the cost formulation studied by Martínez and Roura [TCS 2000] and Munro [ESA 2000], every list update algorithm has, asymptotically, the same competitive ratio. Several refinements of competitive analysis, as well as alternative performance measures, have been introduced in the literature, with varying degrees of success in narrowing this disconnect between theoretical analysis and empirical evaluation. In this paper we study these two fundamental online problems under the framework of bijective analysis [Angelopoulos, Dorrigiv and López-Ortiz, SODA 2007 and LATIN 2008]. This is an intuitive technique based on pairwise comparison of the costs incurred by two algorithms on sets of request sequences of the same size. Coupled with a well-established model of locality of reference due to Albers, Favrholdt and Giel [JCSS 2005], we show that Least-Recently-Used and Move-to-Front are the unique optimal algorithms for paging and list update, respectively. Prior to this work, only measures based on average-cost analysis had separated LRU and MTF from all other algorithms. Given that bijective analysis is a fairly stringent measure (and also subsumes average-cost analysis), we prove that in a strong sense LRU and MTF stand out as the best (deterministic) algorithms.
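Bijective analysis compares two algorithms via a bijection π over all request sequences of a given length, requiring cost_A(σ) ≤ cost_B(π(σ)) for every σ. Over a finite set such a bijection exists exactly when the sorted cost profiles of the two algorithms compare pointwise, which yields a brute-force test for one fixed length. A sketch under that simplification (illustrative only; it reuses lru_faults from the sketch above and adds a Flush-When-Full implementation):

    from itertools import product

    def fwf_faults(requests, k):
        """Flush-When-Full: on a fault with a full cache, flush everything."""
        cache, faults = set(), 0
        for p in requests:
            if p not in cache:
                faults += 1
                if len(cache) == k:
                    cache.clear()          # flush the whole cache
                cache.add(p)
        return faults

    def bijectively_no_worse(cost_a, cost_b, universe, n, k):
        """Test, for one fixed length n, whether a bijection pi on the
        length-n sequences exists with cost_a(s) <= cost_b(pi(s)) for all s.
        Such a pi exists iff the sorted cost profiles compare pointwise."""
        seqs = list(product(universe, repeat=n))
        a = sorted(cost_a(list(s), k) for s in seqs)
        b = sorted(cost_b(list(s), k) for s in seqs)
        return all(x <= y for x, y in zip(a, b))

    # lru_faults as defined in the previous sketch
    print(bijectively_no_worse(lru_faults, fwf_faults, [1, 2, 3], 5, 2))

The identity bijection already suffices in this example, since LRU never incurs more faults than Flush-When-Full on the same sequence; the paper's result is the much stronger statement that LRU is optimal against every algorithm once locality of reference is imposed.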
List update with locality of reference
In Proceedings of the 8th Latin American Theoretical Informatics Symposium, 2008.
"... Abstract. It is known that in practice, request sequences for the list update problem exhibit a certain degree of locality of reference. Motivated by this observation we apply the locality of reference model for the paging problem due to Albers et al. [STOC 2002/JCSS 2005] in conjunction with biject ..."
Cited by 4 (3 self)
Abstract:
It is known that in practice, request sequences for the list update problem exhibit a certain degree of locality of reference. Motivated by this observation, we apply the locality of reference model for the paging problem due to Albers et al. [STOC 2002/JCSS 2005] in conjunction with bijective analysis [SODA 2007] to list update. Using this framework, we prove that Move-to-Front (MTF) is the unique optimal algorithm for list update. This addresses the open question of defining an appropriate model for capturing locality of reference in the context of list update [Hester and Hirschberg, ACM Comp. Surv. 1985]. Our results hold both for the standard cost function of Sleator and Tarjan [CACM 1985] and for the improved cost function proposed independently by Martínez and Roura [TCS 2000] and Munro [ESA 2000]. This result resolves an open problem of Martínez and Roura, namely proposing a measure which can successfully separate MTF from all other list-update algorithms.
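Under the standard Sleator and Tarjan cost function, accessing the item at position i costs i, and the accessed item may then be moved closer to the front by free exchanges. A minimal sketch of Move-to-Front under this model (illustrative, not code from the paper):

    def mtf_cost(requests, initial):
        """Serve a request sequence with Move-to-Front under the standard
        (Sleator-Tarjan) cost model: accessing the item at position i
        (1-indexed) costs i; the accessed item is then moved to the front."""
        lst = list(initial)
        cost = 0
        for x in requests:
            i = lst.index(x)           # 0-indexed position of the requested item
            cost += i + 1              # access cost = 1-indexed position
            lst.insert(0, lst.pop(i))  # move x to the front (free exchanges)
        return cost

    print(mtf_cost([3, 3, 2, 3, 1], [1, 2, 3]))   # 3 + 1 + 3 + 2 + 3 = 12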
Parameterized Analysis of Paging and List Update Algorithms
"... It is wellestablished that input sequences for paging and list update have locality of reference. In this paper we analyze the performance of algorithms for these problems in terms of the amount of locality in the input sequence. We define a measure for locality that is based on Denning’s working ..."
Cited by 2 (0 self)
Abstract:
It is well-established that input sequences for paging and list update have locality of reference. In this paper we analyze the performance of algorithms for these problems in terms of the amount of locality in the input sequence. We define a measure for locality that is based on Denning’s working set model and express the performance of well-known algorithms in terms of this parameter. This introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning’s working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their intuitive relative strengths. It also reflects the intuition that a larger cache leads to better performance. We obtain a similar separation for list update algorithms. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results.
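Denning’s working set at time t with window size τ is the set of distinct pages among the τ most recent requests. The paper defines its own parameter on top of this model; the statistic below is only one natural working-set-style quantity, assumed here for illustration rather than taken from the paper:

    def avg_working_set_size(requests, tau):
        """Denning-style locality statistic: average number of distinct
        pages among the last tau requests, taken over all positions.
        Smaller values indicate more locality of reference."""
        sizes = []
        for t in range(len(requests)):
            window = requests[max(0, t - tau + 1): t + 1]
            sizes.append(len(set(window)))
        return sum(sizes) / len(sizes)

    seq = [1, 2, 1, 2, 3, 1, 2, 1, 3, 2]
    print(avg_working_set_size(seq, 4))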
The Cooperative Ratio of Online Algorithms
2007.
"... Online algorithms are usually analyzed using competitive analysis, in which the performance of an online algorithm on a sequence is normalized by the performance of the optimal offline algorithm on that sequence. In this paper we introduce cooperative analysis as an alternative general framework ..."
Abstract:
Online algorithms are usually analyzed using competitive analysis, in which the performance of an online algorithm on a sequence is normalized by the performance of the optimal offline algorithm on that sequence. In this paper we introduce cooperative analysis as an alternative general framework for the analysis of online algorithms. The idea is to normalize the performance of an online algorithm by a measure other than the performance of the optimal offline algorithm OPT. We show that in many instances the performance of OPT on a sequence is a coarse approximation of the difficulty or complexity of a given input. Using a finer, more natural measure, we can separate paging and list update algorithms which were otherwise indistinguishable under the classical model. This creates a performance hierarchy of algorithms which better reflects the intuitive relative strengths between them. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the cooperative case, which matches experimental results. This confirms that the ability of the cooperative online algorithm to ignore pathological worst cases can lead to algorithms that are more efficient in practice.
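The shift from competitive to cooperative analysis amounts to changing the denominator of the performance ratio: the algorithm's cost is divided by some other difficulty measure of the sequence rather than by OPT's cost. A schematic sketch (hypothetical helper, reusing lru_faults and opt_faults from the first sketch above):

    def worst_case_ratio(alg_cost, measure, sequences, k):
        """Worst-case ratio of an algorithm's cost to a difficulty measure
        over the given sequences. With measure = opt_faults this is the
        usual competitive ratio; with any other measure of input difficulty
        it is a cooperative-style ratio."""
        return max(alg_cost(s, k) / max(1, measure(s, k)) for s in sequences)

    seqs = [[1, 2, 3, 1, 4, 2, 5, 1, 2, 3], [1, 1, 2, 1, 2, 2, 1, 1, 2, 1]]
    print(worst_case_ratio(lru_faults, opt_faults, seqs, 3))   # classical normalization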