Results 1–10 of 27
Randomized Competitive Algorithms for the List Update Problem
 Algorithmica
, 1992
Abstract

Cited by 39 (2 self)
We prove upper and lower bounds on the competitiveness of randomized algorithms for the list update problem of Sleator and Tarjan. We give a simple and elegant randomized algorithm that is more competitive than the best previous randomized algorithm due to Irani. Our algorithm uses randomness only during an initialization phase, and from then on runs completely deterministically. It is the first randomized competitive algorithm with this property to beat the deterministic lower bound. We generalize our approach to a model in which access costs are fixed but update costs are scaled by an arbitrary constant d. We prove lower bounds for deterministic list update algorithms and for randomized algorithms against oblivious and adaptive online adversaries. In particular, we show that for this problem adaptive online and adaptive offline adversaries are equally powerful.

1 Introduction

Recently much attention has been given to competitive analysis of online algorithms [7, 20, 22, 25]. Ro...
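The strategy this abstract describes, with randomness confined to initialization, matches the well-known BIT algorithm: each item carries one random bit, the bit is flipped on every access, and the item moves to the front whenever its bit becomes 1. A minimal sketch of that reading (class and method names are illustrative):

```python
import random

class BitList:
    """Sketch of a BIT-style randomized list update algorithm: one random
    bit per item is drawn at initialization; thereafter the algorithm is
    deterministic, flipping an item's bit on each access and moving the
    item to the front only when the bit becomes 1."""

    def __init__(self, items, rng=None):
        self.items = list(items)
        rng = rng or random.Random()
        # Randomness is used only here, during initialization.
        self.bit = {x: rng.randint(0, 1) for x in self.items}

    def access(self, x):
        pos = self.items.index(x)      # 0-based position; search cost is pos + 1
        self.bit[x] ^= 1               # deterministic bit flip
        if self.bit[x] == 1:           # move to front on every other access
            self.items.insert(0, self.items.pop(pos))
        return pos + 1
```

Because an oblivious adversary cannot observe the initial bits, a strategy of this kind can beat the deterministic lower bound mentioned in the abstract.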
Self-Organizing Linear Search
 ACM Computing Surveys
, 1985
Abstract

Cited by 29 (3 self)
this article. Two examples of simple permutation algorithms are move-to-front, which moves the accessed record to the front of the list, shifting all records previously ahead of it back one position; and transpose, which merely exchanges the accessed record with the one immediately ahead of it in the list. These will be described in more detail later. Knuth [1973] describes several search methods that are usually more efficient than linear search. Bentley and McGeoch [1985] justify the use of self-organizing linear search in the following three contexts:
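The two permutation heuristics described above can be sketched in a few lines of Python (function names are illustrative):

```python
def move_to_front(lst, x):
    """Access x: return its 1-based search cost, then move it to the front,
    shifting every record previously ahead of it back one position."""
    i = lst.index(x)
    lst.insert(0, lst.pop(i))
    return i + 1

def transpose(lst, x):
    """Access x: return its 1-based search cost, then exchange it with the
    record immediately ahead of it (if any)."""
    i = lst.index(x)
    if i > 0:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]
    return i + 1
```

For example, accessing record 3 in [1, 2, 3, 4] costs 3 under both rules, after which move-to-front yields [3, 1, 2, 4] while transpose yields [1, 3, 2, 4].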
Self-improving algorithms
in SODA '06: Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete Algorithms
Abstract

Cited by 26 (4 self)
We investigate ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an arbitrary, unknown input distribution. We give such self-improving algorithms for sorting and computing Delaunay triangulations. The highlights of this work: (i) an algorithm to sort a list of numbers with optimal expected limiting complexity; and (ii) an algorithm to compute the Delaunay triangulation of a set of points with optimal expected limiting complexity. In both cases, the algorithm begins with a training phase during which it adjusts itself to the input distribution, followed by a stationary regime in which the algorithm settles to its optimized incarnation.
Second step algorithms in the Burrows-Wheeler compression algorithm
 Software Practice and Experience
, 2001
Abstract

Cited by 24 (0 self)
In this paper we fix our attention on the second step algorithms of the Burrows-Wheeler compression algorithm, which in the original version is the Move-to-Front transform. We discuss many of its replacements presented so far, and compare compression results obtained using them. Then we propose a new algorithm that yields a better compression ratio than the previous ones.
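The Move-to-Front transform used as the second step can be sketched as follows; runs of identical symbols, which the Burrows-Wheeler transform tends to produce, map to runs of zeros (function names are illustrative):

```python
def mtf_encode(data, alphabet):
    """Replace each symbol by its current 0-based position in a
    self-organizing symbol table, moving the symbol to the front after
    each use; repeated symbols therefore encode as zeros."""
    table = list(alphabet)
    out = []
    for s in data:
        i = table.index(s)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

def mtf_decode(codes, alphabet):
    """Invert mtf_encode by replaying the same table updates."""
    table = list(alphabet)
    out = []
    for i in codes:
        s = table.pop(i)
        out.append(s)
        table.insert(0, s)
    return out
```

For example, `mtf_encode(list("bbba"), "abc")` yields `[1, 0, 0, 1]`; the low values and zero runs are what the final entropy-coding stage exploits.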
Asymptotic approximation of the move-to-front search cost distribution and least-recently-used caching fault probabilities
, 1999
Abstract

Cited by 23 (8 self)
Consider a finite list of items n = 1, 2, ..., N, that are requested according to an i.i.d. process. Each time an item is requested it is moved to the front of the list. The associated search cost C_N for accessing an item is equal to its position before being moved. If the request distribution converges to a proper distribution as N → ∞, then the stationary search cost C_N converges in distribution to a limiting search cost C. We show that, when the (limiting) request distribution has a heavy tail (e.g., generalized Zipf's law), P(R = n) ~ c/n^α as n → ∞, α > 1, then the limiting stationary search cost distribution P(C > n), or, equivalently, the least-recently-used (LRU) caching fault probability, satisfies lim_{n→∞} P(C > n) / P(R > n) = ...
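The move-to-front search-cost process studied in this abstract is easy to simulate for small lists; a minimal sketch, assuming i.i.d. requests drawn from a given probability vector (function name and parameters are illustrative):

```python
import random

def mtf_search_costs(probs, num_requests, seed=1):
    """Simulate a move-to-front list under i.i.d. requests with the given
    request probabilities; return the 1-based search cost of each request
    (the requested item's position before it is moved)."""
    rng = random.Random(seed)
    items = list(range(len(probs)))
    lst = list(items)
    costs = []
    for _ in range(num_requests):
        x = rng.choices(items, weights=probs, k=1)[0]
        i = lst.index(x)
        costs.append(i + 1)
        lst.insert(0, lst.pop(i))
    return costs
```

With Zipf-like weights proportional to c/n^α, the empirical tail of the observed costs can be compared against the request tail, in the spirit of the limit stated above.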
Self-Organizing Data Structures
 In
, 1998
Abstract

Cited by 18 (0 self)
We survey results on self-organizing data structures for the search problem and concentrate on two very popular structures: the unsorted linear list, and the binary search tree. For the problem of maintaining unsorted lists, also known as the list update problem, we present results on the competitiveness achieved by deterministic and randomized online algorithms. For binary search trees, we present results for both online and offline algorithms. Self-organizing data structures can be used to build very effective data compression schemes. We summarize theoretical and experimental results.

1 Introduction

This paper surveys results in the design and analysis of self-organizing data structures for the search problem. The general search problem in pointer data structures can be phrased as follows. The elements of a set are stored in a collection of nodes. Each node also contains O(1) pointers to other nodes and additional state data which can be used for navigation and self-organizati...
Offline Algorithms for The List Update Problem
, 1996
Abstract

Cited by 14 (2 self)
Optimum offline algorithms for the list update problem are investigated. The list update problem involves implementing a dictionary of items as a linear list. Several characterizations of optimum algorithms are given; these lead to an optimum algorithm which runs in time Θ(2^n (n-1)! m), where n is the length of the list and m is the number of requests. The previous best algorithm, an adaptation of a more general algorithm due to Manasse et al. [9], runs in time Θ((n!)^2 m).

1 Introduction

A dictionary is an abstract data type that stores a collection of keyed items and supports the operations access, insert, and delete. In the sequential search or list update problem, a dictionary is implemented as a simple linear list, either stored as a linked collection of items or as an array. An access is done by starting at the front of the list and examining each succeeding item until either finding the item desired or reaching the end of the list and reporting the item not present...
Least-Recently-Used Caching with Dependent Requests
 Theoretical Computer Science
, 2002
Abstract

Cited by 13 (6 self)
We investigate a widely popular Least-Recently-Used (LRU) cache replacement algorithm with semi-Markov modulated requests. Semi-Markov processes provide the flexibility for modeling strong statistical correlation, including the widely reported long-range dependence in World Wide Web page request patterns. When the frequency of requesting a page n is equal to the generalized Zipf's law c/n^α, α > 1, our main result shows that the cache fault probability is asymptotically, for large cache sizes, the same as in the corresponding LRU system with i.i.d. requests. The result is asymptotically explicit and appears to be the first computationally tractable average-case analysis of LRU caching with statistically dependent request sequences. The surprising insensitivity of LRU caching performance demonstrates its robustness to changes in document popularity. Furthermore, we show that the derived asymptotic result and simulation experiments are in excellent agreement, even for relatively small cache sizes.

Keywords: least-recently-used caching, move-to-front, Zipf's law, heavy-tailed distributions, long-range dependence, semi-Markov processes, average-case analysis
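The LRU policy analysed in this abstract can be sketched with an ordered dictionary; on a hit the page is refreshed to the most-recently-used end, mirroring the move-to-front rule on a truncated list (function name is illustrative):

```python
from collections import OrderedDict

def lru_fault_count(requests, cache_size):
    """Run an LRU cache of the given size over a request sequence and
    count faults; a hit refreshes the page to the most-recently-used end."""
    cache = OrderedDict()
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict the least-recently-used page
            cache[page] = True
    return faults
```

For example, `lru_fault_count([1, 2, 3, 1, 2, 4, 1], 3)` returns 4: the first three requests and the request for page 4 all fault.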
Two New Families of List Update Algorithms
In ISAAC'98, LNCS 1533
, 1998
Abstract

Cited by 8 (0 self)
We consider the online list accessing problem and present a new family of competitive-optimal deterministic list update algorithms, which is the largest class of such algorithms known to date. This family, called Sort-by-Rank (sbr), is parametrized with a real 0 ≤ α ≤ 1, where sbr(0) is the Move-to-Front algorithm and sbr(1) is equivalent to the Timestamp algorithm. The behaviour of sbr(α) mediates between the eager strategy of Move-to-Front and the more conservative behaviour of Timestamp. We also present a family of algorithms Sort-by-Delay (sbd), parametrized by the positive integers, where sbd(1) is Move-to-Front and sbd(2) is equivalent to Timestamp. In general, sbd(k) is k-competitive for k ≥ 2. This is the first class of algorithms that is asymptotically optimal for independent, identically distributed requests while each algorithm is constant-competitive. Empirical studies with both generated and real-world data are also included.

1 Introduction

Co...