Results 1–10 of 10
BEYOND COMPETITIVE ANALYSIS, 2000
Cited by 132 (3 self)
Abstract
The competitive analysis of online algorithms has been criticized as being too crude and unrealistic. We propose refinements of competitive analysis in two directions: the first restricts the power of the adversary by allowing only certain input distributions, while the other allows for comparisons between information regimes for online decision-making. We illustrate the first with an application to the paging problem; as a byproduct we characterize completely the work functions of this important special case of the k-server problem. We use the second refinement to explore the power of lookahead in server and task systems.
On the Separation and Equivalence of Paging Strategies
Cited by 20 (7 self)
Abstract
It has been experimentally observed that LRU and variants thereof are the preferred strategies for online paging. However, under most proposed performance measures for online algorithms the performance of LRU is the same as that of many other strategies which are inferior in practice. In this paper we first show that any performance measure which does not include a partition or implied distribution of the input sequences of a given length is unlikely to distinguish between any two lazy paging algorithms as their performance is identical in a very strong sense. This provides a theoretical justification for the use of a more refined measure. Building upon the ideas of concave analysis by Albers et al. [AFG05], we prove strict separation between LRU and all other paging strategies. That is, we show that LRU is the unique optimum strategy for paging under a deterministic model. This provides full theoretical backing to the empirical observation that LRU is preferable in practice.
On-Line Paging against Adversarially Biased Random Inputs
Journal of Algorithms, 2002
Cited by 15 (1 self)
Abstract
In evaluating an algorithm, worstcase analysis can be overly pessimistic.
On adequate performance measures for paging
In Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC ’06), 2006
Cited by 13 (1 self)
Abstract
Memory management is a fundamental problem in computer architecture and operating systems. We consider a two-level memory system with a fast but small cache and a slow but large main memory. The underlying theoretical problem is known as the paging problem. A sequence of requests to pages has to be served by making each requested page available in the cache. A paging strategy replaces pages in the cache with requested ones. The aim is to minimize the number of page faults, which occur whenever a requested page is not in the cache. Experience shows that the Least-Recently-Used (LRU) paging strategy usually achieves a factor of around 2 to 3 compared to the optimum number of faults. This contrasts with the theoretical worst case, in which this factor can be as large as the cache size k. One difficulty in analyzing the paging problem was the lack of an appropriate lower bound for the minimum number of page faults. We address this issue and propose a general lower bound which provides insight into the global structure of a given request sequence. In addition, we derive a characterization of the number of faults incurred by LRU. We give a theoretical explanation of why LRU performs well in practice. We classify the set ...
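The two-level model described in this abstract is easy to simulate. As an illustrative sketch (the request sequence, cache size, and function names below are ours, not from the paper), the following counts LRU faults and compares them to Belady's offline optimum, which evicts the page whose next request is furthest in the future:

```python
from collections import OrderedDict

def lru_faults(requests, k):
    """Simulate LRU with cache size k; return the number of page faults."""
    cache = OrderedDict()
    faults = 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)           # p becomes most recently used
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)  # evict the least recently used page
            cache[p] = True
    return faults

def opt_faults(requests, k):
    """Belady's offline optimum: evict the page requested furthest in the future."""
    cache, faults = set(), 0
    for i, p in enumerate(requests):
        if p in cache:
            continue
        faults += 1
        if len(cache) == k:
            def next_use(q):               # index of q's next request, or infinity
                for j in range(i + 1, len(requests)):
                    if requests[j] == q:
                        return j
                return float('inf')
            cache.remove(max(cache, key=next_use))
        cache.add(p)
    return faults

seq = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(seq, 3), opt_faults(seq, 3))  # → 10 7
```

On this short sequence LRU incurs 10 faults against 7 for the offline optimum, a ratio well below the worst-case factor of k mentioned in the abstract.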
Paging and List Update under Bijective Analysis, 2009
Cited by 7 (0 self)
Abstract
It has long been known that for the paging problem in its standard form, competitive analysis cannot adequately distinguish algorithms based on their performance: there exists a vast class of algorithms which achieve the same competitive ratio, ranging from extremely naive and inefficient strategies (such as Flush-When-Full) to strategies of excellent performance in practice (such as Least-Recently-Used and some of its variants). A similar situation arises in the list update problem: in particular, under the cost formulation studied by Martínez and Roura [TCS 2000] and Munro [ESA 2000], every list update algorithm has, asymptotically, the same competitive ratio. Several refinements of competitive analysis, as well as alternative performance measures, have been introduced in the literature, with varying degrees of success in narrowing this disconnect between theoretical analysis and empirical evaluation. In this paper we study these two fundamental online problems under the framework of bijective analysis [Angelopoulos, Dorrigiv and López-Ortiz, SODA 2007 and LATIN 2008]. This is an intuitive technique which is based on pairwise comparison of the costs incurred by two algorithms on sets of request sequences of the same size. Coupled with a well-established model of locality of reference due to Albers, Favrholdt and Giel [JCSS 2005], we show that Least-Recently-Used and Move-to-Front are the unique optimal algorithms for paging and list update, respectively. Prior to this work, only measures based on average-cost analysis have separated LRU and MTF from all other algorithms. Given that bijective analysis is a fairly stringent measure (and also subsumes average-cost analysis), we prove that in a strong sense LRU and MTF stand out as the best (deterministic) algorithms.
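The pairwise-comparison idea behind bijective analysis can be illustrated by brute force on tiny instances. In this sketch (our own; the page universe, sequence length, and cache size are arbitrary choices), we use the fact that a cost-non-increasing bijection between the two sets of sequences exists exactly when the sorted multisets of costs dominate pointwise:

```python
from itertools import product

def lru_cost(seq, k):
    """Fault count of LRU (most recently used page kept at the end of the list)."""
    cache, faults = [], 0
    for p in seq:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)               # front of the list = least recently used
        cache.append(p)
    return faults

def fwf_cost(seq, k):
    """Fault count of Flush-When-Full: on a fault with a full cache, empty it."""
    cache, faults = set(), 0
    for p in seq:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache = set()              # flush everything
            cache.add(p)
    return faults

def bijectively_no_worse(cost_a, cost_b, pages, n, k):
    """A is bijectively no worse than B over sequences of length n iff the
    sorted cost multisets satisfy pointwise <=."""
    seqs = list(product(pages, repeat=n))
    a = sorted(cost_a(s, k) for s in seqs)
    b = sorted(cost_b(s, k) for s in seqs)
    return all(x <= y for x, y in zip(a, b))

print(bijectively_no_worse(lru_cost, fwf_cost, (1, 2, 3), 5, 2))  # → True
```

Even on this toy instance LRU bijectively dominates FWF, while the reverse comparison fails; the paper's separation results are, of course, proved for all sequence lengths under a locality-of-reference model, not by enumeration.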
Average Performance Analysis, 2006
Cited by 1 (1 self)
Abstract
The purpose of measures in algorithm theory is to distinguish between “good” and “bad” algorithms. The main drawback of classical worst-case analysis is that a single “bad” instance decides the performance of an algorithm. Moreover, worst-case instances are often quite artificial and often do not represent a “realistic” or “typical” instance of a problem. In this thesis, we are concerned with an approach that tries to address this issue: average performance analysis. Consider an optimisation problem and let Alg be an arbitrary (online) algorithm for it. An adversary Adv chooses the distribution D of the input instances out of a fixed class ∆adv of distributions. Let Opt be an optimal algorithm for the considered problem. Then, the average performance ratio apr of the algorithm Alg is defined by ...
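A numerical sketch of this notion (our construction, not from the thesis): fixing one admissible distribution D, here uniform i.i.d. page requests for the paging problem, the ratio of the expected cost of an online algorithm (LRU) to that of an optimal offline algorithm (Belady's rule) can be estimated by Monte Carlo simulation; the full average performance ratio would additionally take a worst case over all D in ∆adv:

```python
import random

def lru_faults(seq, k):
    cache, faults = [], 0
    for p in seq:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)               # evict the least recently used page
        cache.append(p)
    return faults

def opt_faults(seq, k):
    """Belady's rule: evict the page whose next request is furthest away."""
    cache, faults = set(), 0
    for i, p in enumerate(seq):
        if p in cache:
            continue
        faults += 1
        if len(cache) == k:
            nxt = lambda q: next((j for j in range(i + 1, len(seq))
                                  if seq[j] == q), float('inf'))
            cache.remove(max(cache, key=nxt))
        cache.add(p)
    return faults

def estimate_ratio(k, pages, n, trials, seed=0):
    """Estimate E_D[Alg] / E_D[Opt] for one fixed D: uniform i.i.d. requests."""
    rng = random.Random(seed)
    tot_alg = tot_opt = 0
    for _ in range(trials):
        seq = [rng.randrange(pages) for _ in range(n)]
        tot_alg += lru_faults(seq, k)
        tot_opt += opt_faults(seq, k)
    return tot_alg / tot_opt

print(round(estimate_ratio(k=3, pages=5, n=50, trials=200), 2))
```

The estimate always lies between 1 (LRU can never beat the offline optimum) and roughly the cache size k (LRU's competitive ratio), and in this random setting comes out far below the worst case.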
unknown title
Abstract
In this paper we give a finer separation of several known paging algorithms using a new technique called relative interval analysis. This technique compares the fault rate of two paging algorithms across the entire range of inputs of a given size rather than in the worst case alone. Using this technique we characterize the relative performance of LRU and LRU-2, as well as LRU and FWF, among others. We also show that lookahead is beneficial for a paging algorithm, a fact that is well known in practice but was, until recently, not verified by theory.
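On tiny instances this kind of whole-input-range comparison can be mimicked by exhaustive enumeration. In the sketch below (our own; the definition is paraphrased, with FWF standing in as the comparison algorithm), we compute the minimum and maximum of the normalized fault-count difference between LRU and FWF over all sequences of a fixed length; an interval lying entirely at or below zero indicates that LRU never faults more than FWF at that length:

```python
from itertools import product

def lru(seq, k):
    cache, faults = [], 0
    for p in seq:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)               # evict the least recently used page
        cache.append(p)
    return faults

def fwf(seq, k):
    cache, faults = set(), 0
    for p in seq:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache = set()              # flush when full
            cache.add(p)
    return faults

def relative_interval(alg_a, alg_b, pages, n, k):
    """Min and max of the normalized fault difference (A - B)/n over all
    sequences of length n."""
    diffs = [(alg_a(s, k) - alg_b(s, k)) / n for s in product(pages, repeat=n)]
    return min(diffs), max(diffs)

lo, hi = relative_interval(lru, fwf, (1, 2, 3), 6, 2)
print(lo, hi)  # → -0.3333333333333333 0.0
```

Here the interval is [−1/3, 0]: LRU can save up to a third of the requests relative to FWF and is never worse, which matches the pointwise dominance of LRU over FWF.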
Algorithmic research for the 21st century
ECONOMIC GLOBALIZATION & THE CHOICE OF ASIA: SHANGHAI FORUM 2005, 2005
Optimal Eviction Policies for Stochastic Address Traces
Abstract
The eviction problem for memory hierarchies is studied for the Hidden Markov Reference Model (HMRM) of the memory trace, showing how miss minimization can be naturally formulated in the optimal control setting. In addition to the traditional version assuming a buffer of fixed capacity, a relaxed version is also considered, in which buffer occupancy can vary and its average is constrained. Resorting to multiobjective optimization, viewing occupancy as a cost rather than as a constraint, the optimal eviction policy is obtained by composing solutions for the individual addressable items. This approach is then specialized to the Least Recently Used Stack Model (LRUSM), a type of HMRM often considered for traces, which includes V − 1 parameters, where V is the size of the virtual space. A gain-optimal policy for any target average occupancy is obtained which (i) is computable in time O(V) from the model parameters, (ii) is also optimal for the fixed-capacity case, and (iii) is characterized in terms of priorities, under the name of the Least Profit Rate (LPR) policy. An O(log C) upper bound (where C is the buffer capacity) is derived for the ratio between the expected miss rate of LPR and that of OPT, the optimal offline policy; the upper bound is tightened to O(1) under reasonable constraints on the LRUSM parameters. Using the stack-distance framework, an algorithm is developed to compute the number of misses incurred by LPR on a given input trace, simultaneously for all buffer capacities, in time O(log V) per access. Finally, some results are provided for miss minimization over a finite horizon and over an infinite horizon under bias optimality, a criterion more stringent than gain optimality.