Results 1–10 of 28
Energy-Efficient Algorithms for ...
, 2007
Abstract

Cited by 65 (2 self)
We study scheduling problems in battery-operated computing devices, aiming at schedules with low total energy consumption. While most of the previous work has focused on finding feasible schedules in deadline-based settings, in this article we are interested in schedules that guarantee good response times. More specifically, our goal is to schedule a sequence of jobs on a variable-speed processor so as to minimize the total cost consisting of the energy consumption and the total flow time of all jobs. We first show that when the amount of work, for any job, may take an arbitrary value, then no online algorithm can achieve a constant competitive ratio. Therefore, most of the article is concerned with unit-size jobs. We devise a deterministic constant competitive online algorithm and show that
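The energy/flow-time trade-off sketched above can be made concrete under the power model P(s) = s^α that is standard in this literature (an assumption here; the article's exact model may differ, and α = 3 below is an illustrative choice): a unit-size job run at constant speed s has flow time 1/s and uses energy s^α · (1/s) = s^(α−1), so the per-job cost is f(s) = 1/s + s^(α−1).

```python
def unit_job_cost(s, alpha=3.0):
    """Flow time plus energy for one unit-size job run at constant speed s,
    under the assumed power model P(s) = s**alpha."""
    return 1.0 / s + s ** (alpha - 1.0)

def best_speed(alpha=3.0, lo=0.1, hi=10.0, steps=100000):
    """Grid search for the speed minimizing flow time + energy."""
    best_s, best_c = lo, unit_job_cost(lo, alpha)
    for i in range(1, steps + 1):
        s = lo + (hi - lo) * i / steps
        c = unit_job_cost(s, alpha)
        if c < best_c:
            best_s, best_c = s, c
    return best_s, best_c
```

Setting f'(s) = 0 gives s* = (α−1)^(−1/α); for α = 3 the grid search agrees with s* = 2^(−1/3) ≈ 0.794.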
Improved Randomized On-Line Algorithms for the List Update Problem
 PROC. 6TH ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS
, 1995
Abstract

Cited by 40 (8 self)
The best randomized online algorithms known so far for the list update problem achieve a competitiveness of √3 ≈ 1.73. In this paper we present a new family of randomized online algorithms that beat this competitive ratio. Our improved algorithms are called TIMESTAMP algorithms and achieve a competitiveness of max{2 − p, 1 + p(2 − p)}, for any real number p ∈ [0, 1]. Setting p = (3 − √5)/2, we obtain a φ-competitive algorithm, where φ = (1 + √5)/2 ≈ 1.62 is the Golden Ratio. TIMESTAMP algorithms coordinate the movements of items using some information on past requests. We can reduce the required information at the expense of increasing the competitive ratio. We present a very simple version of the TIMESTAMP algorithms that is 1.68-competitive. The family of TIMESTAMP algorithms also includes a new deterministic 2-competitive online algorithm that is different from the MOVE-TO-FRONT rule.
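The claimed ratio can be checked numerically from the expression in the abstract: at p = (3 − √5)/2 the two branches of max{2 − p, 1 + p(2 − p)} coincide and equal the Golden Ratio.

```python
import math

def timestamp_ratio(p):
    """Competitive ratio of the TIMESTAMP family as stated in the abstract."""
    return max(2 - p, 1 + p * (2 - p))

p_star = (3 - math.sqrt(5)) / 2   # the optimizing parameter from the abstract
phi = (1 + math.sqrt(5)) / 2      # Golden Ratio, ~1.618
```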
A Combined BIT and TIMESTAMP Algorithm for the List Update Problem
 INFORMATION PROCESSING LETTERS
, 1995
Abstract

Cited by 27 (11 self)
We present a randomized online algorithm for the list update problem which achieves a competitive factor of 1.6, the best known so far. The algorithm makes an initial random choice between two known algorithms that have different worst-case request sequences. The first is the BIT algorithm that, for each item in the list, alternates between moving it to the front of the list and leaving it at its place after it has been requested. The second is a TIMESTAMP algorithm that moves an item in front of less often requested items within the list.
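The BIT behavior described above can be sketched as follows (an illustrative reading, not necessarily the authors' exact formulation): each item carries a randomly initialized bit that is complemented on every request, and the item moves to the front only when its bit is set, so moves and stays alternate.

```python
import random

class BitList:
    """Sketch of the BIT list update algorithm on a linear list."""

    def __init__(self, items, seed=0):
        rng = random.Random(seed)
        self.items = list(items)
        # each item's bit is initialized uniformly at random
        self.bit = {x: rng.randrange(2) for x in self.items}

    def access(self, x):
        """Return the access cost (1-based position), then update the list."""
        pos = self.items.index(x) + 1
        self.bit[x] ^= 1          # complement the bit on every request
        if self.bit[x] == 1:      # move to front on alternate requests
            self.items.remove(x)
            self.items.insert(0, x)
        return pos
```

Whatever the initial bit, any two consecutive requests to the same item include exactly one move to the front.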
Average Case Analyses of List Update Algorithms, with Applications to Data Compression
 Algorithmica
, 1998
Abstract

Cited by 21 (4 self)
We study the performance of the Timestamp (0) (TS(0)) algorithm for self-organizing sequential search on discrete memoryless sources. We demonstrate that TS(0) is better than Move-to-front on such sources, and determine performance ratios for TS(0) against the optimal offline and static adversaries in this situation. Previous work on such sources compared online algorithms only with static adversaries. One practical motivation for our work is the use of the Move-to-front heuristic in various compression algorithms. Our theoretical results suggest that in many cases using TS(0) in place of Move-to-front in schemes that use the latter should improve compression. Tests using implementations on a standard corpus of test documents demonstrate that TS(0) leads to improved compression.
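For reference, the Move-to-front transform used as a compression stage works like this: each symbol is emitted as its current index in a self-organizing list, which is then updated. (The abstract proposes swapping in TS(0) for this list update rule; only plain MTF is sketched here.)

```python
def mtf_encode(text, alphabet):
    """Encode text as a list of indices into a move-to-front list."""
    table = list(alphabet)
    out = []
    for ch in text:
        i = table.index(ch)
        out.append(i)
        table.insert(0, table.pop(i))   # move the accessed symbol to front
    return out

def mtf_decode(codes, alphabet):
    """Invert mtf_encode given the same initial alphabet order."""
    table = list(alphabet)
    out = []
    for i in codes:
        ch = table[i]
        out.append(ch)
        table.insert(0, table.pop(i))
    return "".join(out)
```

Frequently repeated symbols produce small indices, which a subsequent entropy coder compresses well.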
Static Optimality and Dynamic Search-Optimality in Lists and Trees
, 2002
Abstract

Cited by 19 (3 self)
Adaptive data structures form a central topic of online algorithms research, beginning with the results of Sleator and Tarjan showing that splay trees achieve static optimality for search trees, and that Move-to-Front is constant competitive for the list update problem [ST85a, ST85b]. This paper is inspired by the observation that one can in fact achieve a 1 + ε ratio against the best static object in hindsight for a wide range of data structure problems via "weighted experts" techniques from Machine Learning, if computational decision-making costs are not considered.
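The "weighted experts" idea can be sketched for lists by treating each static list order as an expert whose per-request loss is the position of the requested item, and running a multiplicative-weights (Hedge) update. This is an illustrative sketch only: the learning rate and loss scaling are arbitrary choices, the expert set has n! members (so it is feasible only for tiny n), and movement costs are ignored, exactly as the abstract cautions.

```python
import itertools
import math

def hedge_expected_cost(items, requests, eta=0.5):
    """Expected access cost of Hedge run over all static list orders."""
    experts = list(itertools.permutations(items))
    w = [1.0] * len(experts)
    total = 0.0
    for r in requests:
        z = sum(w)
        # expected position paid by the randomized algorithm this step
        total += sum(wi * (e.index(r) + 1) for wi, e in zip(w, experts)) / z
        # exponential update: penalize experts that placed r deep
        w = [wi * math.exp(-eta * (e.index(r) + 1))
             for wi, e in zip(w, experts)]
    return total

def best_static_cost(items, requests):
    """Cost of the best fixed list order in hindsight."""
    return min(sum(e.index(r) + 1 for r in requests)
               for e in itertools.permutations(items))
```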
Self-Organizing Data Structures
 In
, 1998
Abstract

Cited by 18 (0 self)
We survey results on self-organizing data structures for the search problem and concentrate on two very popular structures: the unsorted linear list, and the binary search tree. For the problem of maintaining unsorted lists, also known as the list update problem, we present results on the competitiveness achieved by deterministic and randomized online algorithms. For binary search trees, we present results for both online and offline algorithms. Self-organizing data structures can be used to build very effective data compression schemes. We summarize theoretical and experimental results.

1 Introduction

This paper surveys results in the design and analysis of self-organizing data structures for the search problem. The general search problem in pointer data structures can be phrased as follows. The elements of a set are stored in a collection of nodes. Each node also contains O(1) pointers to other nodes and additional state data which can be used for navigation and self-organizati...
Offline Algorithms for The List Update Problem
, 1996
Abstract

Cited by 14 (2 self)
Optimum offline algorithms for the list update problem are investigated. The list update problem involves implementing a dictionary of items as a linear list. Several characterizations of optimum algorithms are given; these lead to an optimum algorithm which runs in time Θ(2^n (n − 1)! m), where n is the length of the list and m is the number of requests. The previous best algorithm, an adaptation of a more general algorithm due to Manasse et al. [9], runs in time Θ((n!)^2 m).

1 Introduction

A dictionary is an abstract data type that stores a collection of keyed items and supports the operations access, insert, and delete. In the sequential search or list update problem, a dictionary is implemented as a simple linear list, either stored as a linked collection of items or as an array. An access is done by starting at the front of the list and examining each succeeding item until either finding the item desired or reaching the end of the list and reporting the item not present...
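The optimum the abstract refers to can be made concrete for tiny lists with a brute-force dynamic program over all list orders. This sketch uses the common simplification that only free exchanges are allowed (the accessed item may be moved forward at no cost); it is exponential and purely illustrative, not the paper's algorithm.

```python
import itertools

def opt_offline_cost(items, requests):
    """Minimum total access cost over all offline strategies
    (free exchanges only: the accessed item may move forward for free)."""
    states = {p: float("inf") for p in itertools.permutations(items)}
    states[tuple(items)] = 0.0
    for r in requests:
        nxt = {p: float("inf") for p in states}
        for p, c in states.items():
            if c == float("inf"):
                continue
            pos = p.index(r)              # 0-based; access cost is pos + 1
            served = c + pos + 1
            rest = list(p)
            rest.pop(pos)
            for k in range(pos + 1):      # move r forward to any position, free
                q = tuple(rest[:k] + [r] + rest[k:])
                if served < nxt[q]:
                    nxt[q] = served
        states = nxt
    return min(states.values())
```

For example, starting from list (a, b), serving "bb" optimally costs 2 + 1 = 3: pay position 2 once, move b to the front, then pay 1.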
A competitive analysis of the list update problem with lookahead
 Theoret. Comput. Sci
, 1998
Abstract

Cited by 13 (0 self)
We consider the question of lookahead in the list update problem: What improvement can be achieved in terms of competitiveness if an online algorithm sees not only the present request to be served but also some future requests? We introduce two different models of lookahead and study the list update problem using these models. We develop lower bounds on the competitiveness that can be achieved by deterministic online algorithms with lookahead. Furthermore we present online algorithms with lookahead that are competitive against static offline algorithms.
Competitive Algorithms for Multilevel Caching and Relaxed List Update (Extended Abstract)
 Journal of Algorithms
, 1998
Abstract

Cited by 7 (0 self)
Marek Chrobak and John Noga. We study the Relaxed List Update Problem (RLUP), in which access requests are made to items stored in a list. The cost to access the j-th item x_j is c_j, where c_i ≤ c_{i+1} for all i. After the access, x_j can be repeatedly swapped, at no cost, with any item that precedes it in the list. This problem was introduced by Aggarwal et al. [1] as a model for the management of hierarchical memory that consists of a number of caches of increasing size and access time. They also proved that a version of LRU is C-competitive, for some C, for a restricted class of cost functions. (1) We give an efficient offline algorithm that computes the optimal strategy for RLUP. We also show an elegant characterization of work functions for RLUP. (2) We prove that Move-to-Front (MTF) is optimally competitive for RLUP with any cost function. An interesting feature of the proof is that it does not involve any estimates on the competitive ratio. (3) We give a lower boun...
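Under the RLUP cost model described above, Move-to-Front simply pays the position cost c_j and then uses the free swaps to bring the accessed item all the way to the front. A minimal sketch, assuming a nondecreasing cost vector indexed from 0:

```python
def rlup_mtf_cost(items, requests, costs):
    """Total cost of Move-to-Front under RLUP position costs
    costs[0] <= costs[1] <= ... (free swaps move the item to the front)."""
    lst = list(items)
    total = 0
    for r in requests:
        j = lst.index(r)
        total += costs[j]
        lst.insert(0, lst.pop(j))   # free exchanges: bring r to the front
    return total
```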
Dynamic Optimality for Skip Lists and B-Trees
, 2008
Abstract

Cited by 5 (1 self)
Sleator and Tarjan [39] conjectured that splay trees are dynamically optimal binary search trees (BST). In this context, we study the skip list data structure introduced by Pugh [35]. We prove that for a class of skip lists that satisfy a weak balancing property, the working-set bound is a lower bound on the time to access any sequence. Furthermore, we develop a deterministic self-adjusting skip list whose running time matches the working-set bound, thereby achieving dynamic optimality in this class. Finally, we highlight the implications our bounds for skip lists have on multiway branching search trees such as B-trees, (a,b)-trees, and other variants as well as their binary tree representations. In particular, we show a self-adjusting B-tree that is dynamically optimal both in internal and external memory.
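One common form of the working-set bound referenced above charges roughly log2(t + 1) per access, where t is the number of distinct items requested since the previous access to the same item. The constants and the handling of first accesses (charged against the total number of items n here) vary by paper and are assumptions in this sketch.

```python
import math

def working_set_bound(requests, n):
    """Sum of log2(t + 1) over accesses, where t counts distinct items
    requested since the previous access to the same item (t = n for a
    first access). An illustrative form of the working-set bound."""
    last_seen = {}
    total = 0.0
    for i, r in enumerate(requests):
        if r in last_seen:
            t = len(set(requests[last_seen[r] + 1 : i]))
        else:
            t = n
        total += math.log2(t + 1)
        last_seen[r] = i
    return total
```

Repeatedly accessing a recently used item contributes near-zero cost, which is what makes the bound a meaningful target for self-adjusting structures.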