Results 1 - 8 of 8
An optimal online algorithm for metrical task systems
Journal of the ACM, 1992
Abstract

Cited by 186 (9 self)
Abstract. In practice, almost all dynamic systems require decisions to be made online, without full knowledge of their future impact on the system. A general model for the processing of sequences of tasks is introduced, and a general online decision algorithm is developed. It is shown that, for an important class of special cases, this algorithm is optimal among all online algorithms. Specifically, a task system (S, d) for processing sequences of tasks consists of a set S of states and a cost matrix d, where d(i, j) is the cost of changing from state i to state j (we assume that d satisfies the triangle inequality and all diagonal entries are 0). The cost of processing a given task depends on the state of the system. A schedule for a sequence T1, T2, ..., Tk of tasks is a sequence s1, s2, ..., sk of states, where si is the state in which Ti is processed; the cost of a schedule is the sum of all task processing costs and state transition costs incurred. An online scheduling algorithm is one that chooses si knowing only T1, T2, ..., Ti. Such an algorithm is w-competitive if, on any input task sequence, its cost is within an additive constant of w times the optimal offline schedule cost. The competitive ratio w(S, d) is the infimum of all w for which there is a w-competitive online scheduling algorithm for (S, d). It is shown that w(S, d) = 2|S| − 1 for every task system in which d is symmetric, and w(S, d) = O(|S|²) for every task system. Finally, randomized online scheduling algorithms are introduced. It is shown that for the uniform task system (in which d(i, j) = 1 for all i ≠ j), the expected competitive ratio w(S, d) = …
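As a concrete illustration of the model, the cost of a fixed schedule can be computed directly from the definition above. The states, cost matrix, and task costs below are invented toy values, not from the paper:

```python
def schedule_cost(d, tasks, schedule, start):
    """Total cost of processing `tasks` in the states given by `schedule`:
    state transition costs d[prev][next] plus task processing costs."""
    cost = 0
    prev = start
    for task, state in zip(tasks, schedule):
        cost += d[prev][state]   # cost of changing state
        cost += task[state]      # cost of processing this task in `state`
        prev = state
    return cost

d = [[0, 2], [2, 0]]              # two states, symmetric, zero diagonal
tasks = [[1, 5], [5, 1], [1, 5]]  # tasks[i][s]: cost of task i in state s
print(schedule_cost(d, tasks, [0, 1, 0], start=0))  # → 7
```

An offline algorithm could minimize this cost over all schedules; an online algorithm must commit to each state knowing only the tasks seen so far.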
Self-Organizing Linear Search
ACM Computing Surveys, 1985
Abstract

Cited by 29 (3 self)
… this article. Two examples of simple permutation algorithms are move-to-front, which moves the accessed record to the front of the list, shifting all records previously ahead of it back one position; and transpose, which merely exchanges the accessed record with the one immediately ahead of it in the list. These will be described in more detail later. Knuth [1973] describes several search methods that are usually more efficient than linear search. Bentley and McGeoch [1985] justify the use of self-organizing linear search in the following three contexts: …
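The two permutation rules described above can be sketched in a few lines of Python (a generic illustration; the survey itself is language-agnostic):

```python
def move_to_front(lst, key):
    """Access `key`; move it to the front, shifting all records previously
    ahead of it back one position. Returns the search cost (position found)."""
    i = lst.index(key)
    lst.insert(0, lst.pop(i))
    return i + 1

def transpose(lst, key):
    """Access `key`; exchange it with the record immediately ahead of it.
    Returns the search cost (position found)."""
    i = lst.index(key)
    if i > 0:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]
    return i + 1

lst = ['a', 'b', 'c', 'd']
move_to_front(lst, 'c')   # list becomes ['c', 'a', 'b', 'd']
transpose(lst, 'b')       # 'b' swaps with 'a': ['c', 'b', 'a', 'd']
```

Move-to-front reacts aggressively to each access, while transpose promotes records one position at a time.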
Self-Organizing Data Structures
In …, 1998
Abstract

Cited by 18 (0 self)
We survey results on self-organizing data structures for the search problem and concentrate on two very popular structures: the unsorted linear list and the binary search tree. For the problem of maintaining unsorted lists, also known as the list update problem, we present results on the competitiveness achieved by deterministic and randomized online algorithms. For binary search trees, we present results for both online and offline algorithms. Self-organizing data structures can be used to build very effective data compression schemes. We summarize theoretical and experimental results.

1 Introduction. This paper surveys results in the design and analysis of self-organizing data structures for the search problem. The general search problem in pointer data structures can be phrased as follows. The elements of a set are stored in a collection of nodes. Each node also contains O(1) pointers to other nodes and additional state data which can be used for navigation and self-organizati…
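One standard way self-organizing lists yield the compression schemes alluded to above is move-to-front encoding: recently seen symbols get small indices, which a back-end entropy coder then compresses well. A minimal sketch (the alphabet and input are illustrative):

```python
def mtf_encode(data, alphabet):
    """Move-to-front encoding: emit each symbol's current position in the
    table, then move that symbol to the front. Runs of repeated or recently
    seen symbols become runs of small integers."""
    table = list(alphabet)
    out = []
    for sym in data:
        i = table.index(sym)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

def mtf_decode(codes, alphabet):
    """Inverse transform: replay the same table updates from the indices."""
    table = list(alphabet)
    out = []
    for i in codes:
        sym = table[i]
        out.append(sym)
        table.insert(0, table.pop(i))
    return out

codes = mtf_encode("bbbaab", "abc")
print(codes)                                  # → [1, 0, 0, 1, 0, 1]
assert mtf_decode(codes, "abc") == list("bbbaab")
```

The transform itself saves nothing; its value is that the index stream is highly skewed toward small values, which a subsequent entropy coder exploits.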
The Persistent-Access-Caching Algorithm
2004
Abstract

Cited by 6 (3 self)
ABSTRACT: Caching is widely recognized as an effective mechanism for improving the performance of the World Wide Web. One of the key components in engineering Web caching systems is designing document placement/replacement algorithms for updating the collection of cached documents. The main design objectives of such a policy are a high cache hit ratio, ease of implementation, low complexity, and adaptability to fluctuations in access patterns. These objectives are essentially satisfied by the widely used heuristic called the least-recently-used (LRU) cache replacement rule. However, in the context of the independent reference model, the LRU policy can significantly underperform the optimal least-frequently-used (LFU) algorithm which, on the other hand, has higher implementation complexity and lower adaptability to changes in access frequencies. To alleviate this problem, we introduce a new LRU-based rule, termed persistent-access-caching (PAC), which essentially preserves all of the desirable attributes of the LRU scheme. For this new heuristic, under the independent reference model and generalized Zipf's law request probabilities, we prove that, for large cache sizes, its performance is arbitrarily close to the optimal LFU algorithm. Furthermore, this near-optimality of the PAC algorithm is achieved at the expense of negligible additional complexity for large cache sizes when compared to the ordinary LRU policy, since the PAC …
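The LRU baseline discussed above is easy to sketch; this toy cache (not the paper's PAC policy) just tracks recency with an ordered dictionary:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU replacement policy: on a hit, refresh the document's
    recency; on a miss with a full cache, evict the least-recently-used
    document. An illustrative sketch, not the paper's PAC rule."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, doc):
        if doc in self.store:
            self.store.move_to_end(doc)       # mark most recently used
            return True                       # hit
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)    # evict LRU document
        self.store[doc] = None
        return False                          # miss

cache = LRUCache(2)
hits = [cache.request(d) for d in ["a", "b", "a", "c", "b"]]
print(hits)   # → [False, False, True, False, False]
```

Under the independent reference model, LRU's recency bias is exactly what makes it lag LFU: a single access to a cold document always displaces a cached one.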
Self-organizing data structures with dependent accesses
ICALP'96, LNCS 1099, 1995
Abstract

Cited by 5 (1 self)
We consider self-organizing data structures in the case where the sequence of accesses can be modeled by a first-order Markov chain. For the simple-k and batched-k move-to-front schemes, explicit formulae for the expected search costs are derived and compared. We use a new approach that employs the technique of expanding a Markov chain. This approach generalizes the results of Gonnet/Munro/Suwanda. In order to analyze arbitrary memory-free move-forward heuristics for linear lists, we restrict our attention to a special access sequence, thereby reducing the state space of the chain governing the behaviour of the data structure. In the case of accesses with locality (inert transition behaviour), we find that the hierarchies of self-organizing data structures with respect to the expected search time are reversed, compared with independent accesses. Finally we look at self-organizing binary trees with the move-to-root rule and compare the expected search cost with the entropy of the Markov chain of accesses.
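Lacking the paper's explicit formulae, the expected search cost of plain move-to-front under Markov-dependent accesses can at least be estimated by simulation; the transition matrix below is an invented "inert" chain with strong locality, not taken from the paper:

```python
import random

def mtf_cost_markov(P, n_steps, seed=0):
    """Estimate the long-run average search cost of move-to-front when the
    access sequence is a first-order Markov chain with transition matrix P.
    A simulation sketch; the paper derives explicit formulae instead."""
    rng = random.Random(seed)
    m = len(P)
    lst = list(range(m))
    state = 0
    total = 0
    for _ in range(n_steps):
        state = rng.choices(range(m), weights=P[state])[0]
        i = lst.index(state)
        total += i + 1                  # search cost = position found
        lst.insert(0, lst.pop(i))       # move-to-front update
    return total / n_steps

# Inert chain: the same item tends to repeat, so move-to-front usually
# finds it at the front and the average cost stays close to 1.
P = [[0.9, 0.05, 0.05], [0.05, 0.9, 0.05], [0.05, 0.05, 0.9]]
print(mtf_cost_markov(P, 100_000))
```

With locality this strong, the expected cost is roughly 0.9·1 + 0.05·2 + 0.05·3 ≈ 1.15, far below the cost under independent uniform accesses.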
Optimality of the Move-to-Front Heuristic for Self-Organizing Data Structures
1993
Abstract

Cited by 1 (1 self)
In this paper we assume that the sequence of required keys is a Markov chain with transition kernel P, and we consider the class F* of stochastic matrices P such that move-to-front is optimal among online rules, with respect to the stationary search cost. We give properties of F* that bear out the usual explanation of the optimality of move-to-front by a locality phenomenon exhibited by the sequence of required keys. We produce explicitly a large subclass of F*. We also show that in some cases move-to-front is optimal with respect to the speed of convergence toward the stationary search cost.

1. Introduction. Let us describe a simple example of a self-organizing sequential search data structure. Let S = {1, 2, ..., M} be a set of items; assume that these items are stored in places, and that the set P of places is {1, 2, ..., M}. When an item is required, it is searched for in place 1, then, if not found, in place 2, and so on, and a cost p is incurred if the item is finally found in place p. Once the item has been found, control is exercised on the search process by replacing the item in a wisely chosen place: for instance, closer to place 1, in such a way that the most frequently accessed items spend most of their time near place 1. When doing this, we must free the new position h of the accessed item by pushing back the items remaining between the old position k and the new position h, the not-accessed items retaining their relative order, as in figure 1. Let F = (Fn)n≥1 be the s…
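The generic "replace the item in a wisely chosen place h" step described above can be sketched as follows (positions are 1-indexed as in the text; the function name is our own):

```python
def move_to_position(lst, key, h):
    """Access `key` at old position k and move it to place h (1-indexed),
    pushing back the items that were between positions h and k; all
    not-accessed items retain their relative order. h = 1 gives the
    move-to-front rule. Returns the search cost k."""
    k = lst.index(key)            # 0-indexed old position
    item = lst.pop(k)
    lst.insert(h - 1, item)
    return k + 1                  # cost = place where key was found

lst = [1, 2, 3, 4, 5]
cost = move_to_position(lst, 4, h=2)
print(lst, cost)   # → [1, 4, 2, 3, 5] 4
```

Move-to-front (h = 1) and "move up one place" (h = k − 1, i.e. transpose) are the two extreme instances of this family of rules.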
Near Optimality of the Discrete Persistent Access Caching Algorithm
2005
Abstract

Cited by 1 (1 self)
Renewed interest in caching techniques stems from their application in improving the performance of the World Wide Web, where storing popular documents in proxy caches closer to end users can significantly reduce the document download latency and overall network congestion. Rules used to update the collection of frequently accessed documents inside a cache are referred to as cache replacement algorithms (policies). Due to the many different factors that influence Web performance, the key attributes of a cache replacement rule are low complexity and high adaptability to variability in Web access patterns. These properties are primarily the reason why most practical Web caching algorithms are based on the easily implemented Least-Recently-Used (LRU) cache replacement heuristic. In our recent paper [7], we introduced a new algorithm, termed Persistent Access Caching (PAC), that, in addition to its desirable low complexity and adaptability, somewhat surprisingly achieves nearly optimal performance for the independent reference model and generalized Zipf's law request probabilities. However, the main drawbacks of the PAC algorithm are its dependence on the request arrival times and its variable storage requirements. In this paper, we resolve these problems by introducing a discrete version of the PAC policy that, after a cache miss, places the requested document in the cache only if …
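The exact discrete PAC admission condition is cut off above, so the sketch below only illustrates the general flavor of persistence-based admission: on a miss, a document is inserted only once it has been requested k times. The class name and threshold rule are our own illustration, not the paper's policy:

```python
from collections import OrderedDict, Counter

class AdmissionFilteredLRU:
    """LRU cache with a persistence-style admission filter: a document is
    admitted on a miss only after its k-th request, so one-off accesses do
    not displace cached documents. A hypothetical illustration of the
    'persistent access' idea, not the paper's discrete PAC rule."""
    def __init__(self, capacity, k):
        self.capacity, self.k = capacity, k
        self.store = OrderedDict()
        self.seen = Counter()

    def request(self, doc):
        if doc in self.store:
            self.store.move_to_end(doc)           # refresh recency
            return True                           # hit
        self.seen[doc] += 1
        if self.seen[doc] >= self.k:              # persistent enough: admit
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)    # evict LRU document
            self.store[doc] = None
        return False                              # miss

c = AdmissionFilteredLRU(capacity=2, k=2)
hits = [c.request(d) for d in ["a", "a", "a", "b", "a"]]
print(hits)   # → [False, False, True, False, True]
```

Note how the single request for "b" never enters the cache, so the persistent document "a" is still resident on its final request.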
An Asymptotic Optimality of the Transposition Rule for Linear Lists
2008
Abstract
The transposition rule is an algorithm for self-organizing linear lists. Upon a request for a given item, the item is transposed with the preceding one. The cost of a request is the distance of the requested item from the beginning of the list. An asymptotic optimality of the rule with respect to the optimal static arrangement is demonstrated for two families of request distributions. The result is established by considering an associated constrained asymmetric exclusion process.
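A quick simulation conveys the flavor of the result: under a Zipf-like request distribution, the transposition rule's average cost can be compared with the cost of the optimal static arrangement (items sorted by decreasing request probability). The distribution and parameters are illustrative, not the paper's:

```python
import random

def transpose_cost(probs, n_steps, seed=1):
    """Simulate the transposition rule under i.i.d. requests with the given
    (unnormalized) probabilities and return the average search cost."""
    rng = random.Random(seed)
    m = len(probs)
    lst = list(range(m))
    total = 0
    for _ in range(n_steps):
        x = rng.choices(range(m), weights=probs)[0]
        i = lst.index(x)
        total += i + 1                          # cost = distance from front
        if i > 0:                               # transpose with predecessor
            lst[i - 1], lst[i] = lst[i], lst[i - 1]
    return total / n_steps

probs = [1 / (r + 1) for r in range(5)]         # Zipf-like weights
# Optimal static arrangement: items in decreasing order of probability.
static_opt = sum(p * (r + 1) for r, p in enumerate(probs)) / sum(probs)
print(transpose_cost(probs, 100_000), static_opt)
```

The simulated cost sits only slightly above the static optimum, which is the kind of near-coincidence the asymptotic result makes precise.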