Results 1–10 of 29
An optimal online algorithm for metrical task systems
 Journal of the ACM
, 1992
"... Abstract. In practice, almost all dynamic systems require decisions to be made online, without full knowledge of their future impact on the system. A general model for the processing of sequences of tasks is introduced, and a general online decnion algorithm is developed. It is shown that, for an ..."
Abstract

Cited by 185 (9 self)
Abstract. In practice, almost all dynamic systems require decisions to be made online, without full knowledge of their future impact on the system. A general model for the processing of sequences of tasks is introduced, and a general online decision algorithm is developed. It is shown that, for an important class of special cases, this algorithm is optimal among all online algorithms. Specifically, a task system (S, d) for processing sequences of tasks consists of a set S of states and a cost matrix d, where d(i, j) is the cost of changing from state i to state j (we assume that d satisfies the triangle inequality and all diagonal entries are 0). The cost of processing a given task depends on the state of the system. A schedule for a sequence T1, T2, ..., Tk of tasks is a sequence s1, s2, ..., sk of states, where si is the state in which Ti is processed; the cost of a schedule is the sum of all task processing costs and state transition costs incurred. An online scheduling algorithm is one that chooses si knowing only T1, T2, ..., Ti. Such an algorithm is w-competitive if, on any input task sequence, its cost is within an additive constant of w times the optimal offline schedule cost. The competitive ratio w(S, d) is the infimum of all w for which there is a w-competitive online scheduling algorithm for (S, d). It is shown that w(S, d) = 2|S| - 1 for every task system in which d is symmetric, and w(S, d) = O(|S|^2) for every task system. Finally, randomized online scheduling algorithms are introduced. It is shown that for the uniform task system (in which d(i, j) = 1 for all i != j), the expected competitive ratio w(S, d) = ...
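The schedule-cost model above is easy to make concrete. The sketch below computes the optimal offline schedule cost by dynamic programming over end states; the two-state system, the cost matrix d, and the per-state task costs are illustrative examples, not taken from the paper:

```python
def optimal_offline_cost(d, tasks, start=0):
    """Optimal offline schedule cost for a task system (S, d):
    d[i][j] is the state-transition cost, and each task is a vector
    of per-state processing costs. Dynamic program over end states."""
    n = len(d)
    cost = [float("inf")] * n     # cheapest cost so far, by current state
    cost[start] = 0.0
    for task in tasks:
        cost = [min(cost[p] + d[p][s] for p in range(n)) + task[s]
                for s in range(n)]
    return min(cost)

# A two-state uniform task system: switching states costs 1.
d = [[0, 1], [1, 0]]
tasks = [[0, 5], [0, 5], [5, 0]]   # each task is cheap in exactly one state
print(optimal_offline_cost(d, tasks))   # 1.0 -- serve two tasks in state 0, switch, serve the third
```

An online algorithm sees each task vector only when it arrives; the DP above sees the whole sequence, which is exactly the offline benchmark the competitive ratio compares against.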
Self-Organizing Linear Search
 ACM Computing Surveys
, 1985
"... this article. Two examples of simple permutation algorithms are movetofront, which moves the accessed record to the front of the list, shifting all records previously ahead of it back one position; and transpose, which merely exchanges the accessed record with the one immediately ahead of it in th ..."
Abstract

Cited by 28 (3 self)
this article. Two examples of simple permutation algorithms are move-to-front, which moves the accessed record to the front of the list, shifting all records previously ahead of it back one position; and transpose, which merely exchanges the accessed record with the one immediately ahead of it in the list. These will be described in more detail later. Knuth [1973] describes several search methods that are usually more efficient than linear search. Bentley and McGeoch [1985] justify the use of self-organizing linear search in the following three contexts:
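The two permutation rules described above can be sketched directly (the list contents and access keys below are made up for the example):

```python
def move_to_front(lst, key):
    """Move the accessed record to the front; earlier records shift back one."""
    i = lst.index(key)            # i + 1 comparisons to find the record
    lst.insert(0, lst.pop(i))
    return i + 1                  # access cost

def transpose(lst, key):
    """Exchange the accessed record with the one immediately ahead of it."""
    i = lst.index(key)
    if i > 0:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]
    return i + 1

records = ["a", "b", "c", "d"]
move_to_front(records, "c")
print(records)    # ['c', 'a', 'b', 'd']
transpose(records, "d")
print(records)    # ['c', 'a', 'd', 'b']
```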
Self-improving algorithms
 in SODA '06: Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithms
"... We investigate ways in which an algorithm can improve its expected performance by finetuning itself automatically with respect to an arbitrary, unknown input distribution. We give such selfimproving algorithms for sorting and computing Delaunay triangulations. The highlights of this work: (i) an al ..."
Abstract

Cited by 24 (4 self)
We investigate ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an arbitrary, unknown input distribution. We give such self-improving algorithms for sorting and computing Delaunay triangulations. The highlights of this work: (i) an algorithm to sort a list of numbers with optimal expected limiting complexity; and (ii) an algorithm to compute the Delaunay triangulation of a set of points with optimal expected limiting complexity. In both cases, the algorithm begins with a training phase during which it adjusts itself to the input distribution, followed by a stationary regime in which the algorithm settles to its optimized incarnation.
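One way to picture the training-phase/stationary-regime pattern for sorting is a bucket sort whose bucket boundaries are learned from training inputs. This is a deliberate simplification of the idea, not the paper's actual construction, and all names and parameters below are invented for the sketch:

```python
import random
from bisect import bisect_right

class SelfImprovingSorter:
    """Toy illustration of the training/stationary pattern: learn bucket
    boundaries from samples of the unknown input distribution, then
    bucket-sort later inputs using them (a simplification, not the
    paper's construction)."""
    def __init__(self, n_buckets=8):
        self.n_buckets = n_buckets
        self.boundaries = []

    def train(self, samples):
        """Training phase: estimate equally likely quantile boundaries."""
        pool = sorted(x for s in samples for x in s)
        step = max(1, len(pool) // self.n_buckets)
        self.boundaries = pool[step::step][: self.n_buckets - 1]

    def sort(self, xs):
        """Stationary regime: each bucket is small in expectation."""
        buckets = [[] for _ in range(self.n_buckets)]
        for x in xs:
            buckets[bisect_right(self.boundaries, x)].append(x)
        out = []
        for b in buckets:
            out.extend(sorted(b))
        return out

random.seed(0)
draw = lambda: [random.gauss(0, 1) for _ in range(100)]
sorter = SelfImprovingSorter()
sorter.train([draw() for _ in range(20)])   # training phase
xs = draw()
assert sorter.sort(xs) == sorted(xs)        # stationary regime
```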
Average Case Analyses of List Update Algorithms, with Applications to Data Compression
 Algorithmica
, 1998
"... We study the performance of the Timestamp (0) (TS(0)) algorithm for selforganizing sequential search on discrete memoryless sources. We demonstrate that TS(0) is better than Movetofront on such sources, and determine performance ratios for TS(0) against the optimal offline and static adversaries ..."
Abstract

Cited by 21 (4 self)
We study the performance of the Timestamp(0) (TS(0)) algorithm for self-organizing sequential search on discrete memoryless sources. We demonstrate that TS(0) is better than Move-to-front on such sources, and determine performance ratios for TS(0) against the optimal offline and static adversaries in this situation. Previous work on such sources compared online algorithms only with static adversaries. One practical motivation for our work is the use of the Move-to-front heuristic in various compression algorithms. Our theoretical results suggest that in many cases using TS(0) in place of Move-to-front in schemes that use the latter should improve compression. Tests using implementations on a standard corpus of test documents demonstrate that TS(0) leads to improved compression.
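The compression connection rests on the move-to-front transform: on a source with repetition or locality, recently seen symbols encode as small integers, which a downstream entropy coder compresses well. A minimal sketch (alphabet and input chosen for illustration):

```python
def mtf_encode(symbols, alphabet):
    """Move-to-front transform: each symbol is replaced by its current
    position in the table, then moved to the front, so runs and
    recently seen symbols become small integers."""
    table = list(alphabet)
    out = []
    for s in symbols:
        i = table.index(s)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

print(mtf_encode("aaabbb", "abc"))   # [0, 0, 0, 1, 0, 0]
```

Replacing the move-to-front table update with a different list update rule such as TS(0) changes which integers come out, which is exactly the lever the abstract discusses.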
Self-Organizing Data Structures
 In
, 1998
"... . We survey results on selforganizing data structures for the search problem and concentrate on two very popular structures: the unsorted linear list, and the binary search tree. For the problem of maintaining unsorted lists, also known as the list update problem, we present results on the competit ..."
Abstract

Cited by 18 (0 self)
We survey results on self-organizing data structures for the search problem and concentrate on two very popular structures: the unsorted linear list, and the binary search tree. For the problem of maintaining unsorted lists, also known as the list update problem, we present results on the competitiveness achieved by deterministic and randomized online algorithms. For binary search trees, we present results for both online and offline algorithms. Self-organizing data structures can be used to build very effective data compression schemes. We summarize theoretical and experimental results. 1 Introduction This paper surveys results in the design and analysis of self-organizing data structures for the search problem. The general search problem in pointer data structures can be phrased as follows. The elements of a set are stored in a collection of nodes. Each node also contains O(1) pointers to other nodes and additional state data which can be used for navigation and self-organizati...
Offline Algorithms for The List Update Problem
, 1996
"... Optimum offline algorithms for the list update problem are investigated. The list update problem involves implementing a dictionary of items as a linear list. Several characterizations of optimum algorithms are given; these lead to optimum algorithm which runs in time \Theta2 n (n \Gamma 1)!m, wh ..."
Abstract

Cited by 14 (2 self)
Optimum offline algorithms for the list update problem are investigated. The list update problem involves implementing a dictionary of items as a linear list. Several characterizations of optimum algorithms are given; these lead to an optimum algorithm which runs in time Θ(2^n (n - 1)! m), where n is the length of the list and m is the number of requests. The previous best algorithm, an adaptation of a more general algorithm due to Manasse et al. [9], runs in time Θ((n!)^2 m). 1 Introduction A dictionary is an abstract data type that stores a collection of keyed items and supports the operations access, insert, and delete. In the sequential search or list update problem, a dictionary is implemented as a simple linear list, either stored as a linked collection of items or as an array. An access is done by starting at the front of the list and examining each succeeding item until either finding the item desired or reaching the end of the list and reporting the item not present...
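The flavor of such an exponential-time optimum can be sketched as a dynamic program over all list orders. The sketch below assumes a paid-exchange cost model (accessing the item at position i costs i, each adjacent transposition costs 1), which may differ in details from the paper's model:

```python
from itertools import permutations

def kendall_tau(p, q):
    """Minimum number of adjacent transpositions turning order p into order q."""
    pos = {x: i for i, x in enumerate(p)}
    r = [pos[x] for x in q]
    return sum(r[i] > r[j] for i in range(len(r)) for j in range(i + 1, len(r)))

def optimal_offline(initial, requests):
    """Brute-force optimum for the list update problem: DP over every
    list order, paying position i for an access to the item at
    position i (1-indexed) and 1 per adjacent transposition.
    Exponential -- usable only for tiny lists."""
    states = list(permutations(initial))
    cost = {s: kendall_tau(initial, s) for s in states}   # optional initial reordering
    for req in requests:
        cost = {s: min(cost[t] + kendall_tau(t, s) for t in states)
                + s.index(req) + 1
                for s in states}
    return min(cost.values())

print(optimal_offline(list("abc"), "ccc"))  # 5: move c to the front (2 swaps), then three unit-cost accesses
```

Even this naive DP touches all n! orders per request, which is why the characterizations in the paper, cutting the state space down, matter.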
Two New Families of List Update Algorithms
 In ISAAC '98, LNCS 1533
, 1998
"... . We consider the online list accessing problem and present a new family of competitiveoptimal deterministic list update algorithms which is the largest class of such algorithms known todate. This family, called SortbyRank (sbr), is parametrized with a real 0 ff 1, where sbr(0) is the Move ..."
Abstract

Cited by 8 (0 self)
We consider the online list accessing problem and present a new family of competitive-optimal deterministic list update algorithms, which is the largest class of such algorithms known to date. This family, called Sort-by-Rank (sbr), is parametrized by a real 0 ≤ α ≤ 1, where sbr(0) is the Move-to-Front algorithm and sbr(1) is equivalent to the Timestamp algorithm. The behaviour of sbr(α) mediates between the eager strategy of Move-to-Front and the more conservative behaviour of Timestamp. We also present a family of algorithms Sort-by-Delay (sbd), which is parametrized by the positive integers, where sbd(1) is Move-to-Front and sbd(2) is equivalent to Timestamp. In general, sbd(k) is k-competitive for k ≥ 2. This is the first class of algorithms that is asymptotically optimal for independent, identically distributed requests while each algorithm is constant-competitive. Empirical studies with both generated and real-world data are also included. 1 Introduction Co...
Can Entropy Characterize Performance of Online Algorithms?
 in Symposium on Discrete Algorithms, 2001
, 2001
"... We focus in this work on an aspect of online computation that is not addressed by the standard competitive analysis. Namely, identifying request sequences for which nontrivial online algorithms are useful versus request sequences for which all algorithms perform equally bad. The motivation for t ..."
Abstract

Cited by 6 (1 self)
We focus in this work on an aspect of online computation that is not addressed by standard competitive analysis: namely, identifying request sequences for which nontrivial online algorithms are useful versus request sequences for which all algorithms perform equally badly. The motivation for this work is advanced system and architecture designs which allow the operating system to dynamically allocate resources to online protocols such as prefetching and caching. To utilize these features the operating system needs to identify data streams that can benefit from more resources. Our approach in this work is based on the relation between entropy, compression and gambling, extensively studied in information theory. It has been shown that in some settings entropy can either fully or at least partially characterize the expected outcome of an iterative gambling game. Viewing an online problem with stochastic input as an iterative gambling game, our goal is to study the extent to which the entropy of the input characterizes the expected performance of online algorithms for problems that arise in computer applications. We study bounds based on entropy for three online problems – list accessing, prefetching and caching. We show that entropy is a good performance characterizer for prefetching, but not so good a characterizer for online caching. Our work raises several open questions in using entropy as a predictor in online computation. Computer Science Department, Brown University, Box 1910, Providence, RI 02912-1910, USA. Email: {gopal, eli}@cs.brown.edu. Supported in part by NSF grant CCR-9731477. A preliminary version of this paper appeared in the proceedings of the 12th annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Washington D.C., 2001.
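The entropy in question is the familiar Shannon quantity; a minimal sketch of the empirical (zeroth-order) entropy of a request stream, in bits per request:

```python
from collections import Counter
from math import log2

def empirical_entropy(seq):
    """Zeroth-order empirical entropy of a request sequence, in bits
    per request: low entropy means a predictable stream, the kind an
    online algorithm can hope to exploit."""
    counts = Counter(seq)
    n = len(seq)
    return sum(c / n * log2(n / c) for c in counts.values())

print(empirical_entropy("aaaa"))   # 0.0 -- a fully predictable stream
print(empirical_entropy("abab"))   # 1.0 -- one bit of uncertainty per request
```

This zeroth-order estimate ignores dependencies between successive requests; the paper's gambling-based framework is what connects such entropy measures to the achievable performance of prefetching and caching.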
The Persistent-Access-Caching Algorithm
, 2004
"... ABSTRACT: Caching is widely recognized as an effective mechanism for improving the performance of the World Wide Web. One of the key components in engineering the Web caching systems is designing document placement/replacement algorithms for updating the collection of cached documents. The main desi ..."
Abstract

Cited by 6 (3 self)
ABSTRACT: Caching is widely recognized as an effective mechanism for improving the performance of the World Wide Web. One of the key components in engineering Web caching systems is designing document placement/replacement algorithms for updating the collection of cached documents. The main design objectives of such a policy are a high cache hit ratio, ease of implementation, low complexity and adaptability to fluctuations in access patterns. These objectives are essentially satisfied by the widely used heuristic called the least-recently-used (LRU) cache replacement rule. However, in the context of the independent reference model, the LRU policy can significantly underperform the optimal least-frequently-used (LFU) algorithm that, on the other hand, has higher implementation complexity and lower adaptability to changes in access frequencies. To alleviate this problem, we introduce a new LRU-based rule, termed persistent-access-caching (PAC), which essentially preserves all of the desirable attributes of the LRU scheme. For this new heuristic, under the independent reference model and generalized Zipf's law request probabilities, we prove that, for large cache sizes, its performance is arbitrarily close to the optimal LFU algorithm. Furthermore, this near-optimality of the PAC algorithm is achieved at the expense of a negligible additional complexity for large cache sizes when compared to the ordinary LRU policy, since the PAC ...
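For reference, the LRU rule discussed above can be sketched in a few lines (the interface and the capacity below are illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used replacement: on a miss with a full cache,
    evict the entry that was touched longest ago."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # insertion order doubles as recency order

    def access(self, key):
        """Return True on a hit, False on a miss (the key is then cached)."""
        if key in self.data:
            self.data.move_to_end(key)       # refresh recency
            return True
        if len(self.data) >= self.capacity:
            self.data.popitem(last=False)    # evict least recently used
        self.data[key] = None
        return False

cache = LRUCache(2)
print([cache.access(k) for k in "abacb"])
# [False, False, True, False, False] -- accessing 'c' evicts 'b', so the final 'b' misses
```

LFU would instead track access counts and evict the least frequently used entry; the PAC rule of the paper keeps LRU's simplicity while approaching LFU's hit ratio under the stated model.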
Adaptive Structuring Of Binary Search Trees Using Conditional Rotations
 IEEE TRANSACTIONS ON KNOWLEDGE & DATA ENGINEERING
, 1987
"... Consider a set A = {A 1 , A 2 ,...,A N } of records, where each record is identified by a unique key. The records are accessed based on a set of access probabilities S = [s 1 ,s 2 ,...,s N ] and are to be arranged lexicographically using a Binary Search Tree (BST). If S is known a priori, it ..."
Abstract

Cited by 6 (2 self)
Consider a set A = {A_1, A_2, ..., A_N} of records, where each record is identified by a unique key. The records are accessed based on a set of access probabilities S = [s_1, s_2, ..., s_N] and are to be arranged lexicographically using a Binary Search Tree (BST). If S is known a priori, it is well known [10] that an optimal BST may be constructed using A and S. We consider the case when S is not known a priori. A new restructuring heuristic is introduced that requires three extra integer memory locations per record. In this scheme the restructuring is performed only if it decreases the Weighted Path Length (WPL) of the overall resultant tree. An optimized version of the latter method, which requires only one extra integer field per record, has also been presented. Initial simulation results which compare our algorithm with various other static and dynamic schemes seem to indicate that this scheme asymptotically produces trees which are an order of magnitude closer to the optim...
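The conditional-rotation idea can be sketched with a direct weighted-path-length check: perform a single rotation and keep it only if the WPL drops. The direct recomputation below stands in for the paper's counter-based test, and all names are invented for the sketch:

```python
class Node:
    def __init__(self, key, count=0, left=None, right=None):
        self.key, self.count = key, count          # count: accesses to this record
        self.left, self.right = left, right

def wpl(node, depth=1):
    """Weighted path length: access count times depth, summed over the tree."""
    if node is None:
        return 0
    return node.count * depth + wpl(node.left, depth + 1) + wpl(node.right, depth + 1)

def rotate_right(y):
    """Standard single rotation promoting y's left child."""
    x = y.left
    y.left, x.right = x.right, y
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def conditional_rotate_right(y):
    """Promote y's left child only if doing so lowers the tree's WPL."""
    if y.left is None:
        return y
    before = wpl(y)
    x = rotate_right(y)
    return x if wpl(x) < before else rotate_left(x)   # undo if not an improvement

hot = Node("b", count=1, left=Node("a", count=10))    # 'a' is accessed far more often
print(conditional_rotate_right(hot).key)              # a  (rotation kept: WPL 21 -> 12)

cold = Node("b", count=10, left=Node("a", count=1))
print(conditional_rotate_right(cold).key)             # b  (rotation undone: WPL would rise)
```

Recomputing the WPL over the whole subtree is what the paper's per-record counters avoid; they let the same accept/reject decision be made in constant time at the accessed node.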