Results 1–10 of 48
Speed is as Powerful as Clairvoyance
Journal of the ACM, 1995
"... We consider several well known nonclairvoyant scheduling problems, including the problem of minimizing the average response time, and besteffort firm realtime scheduling. It is known that there are no deterministic online algorithms for these problems with bounded (or even polylogarithmic in the n ..."
Abstract

Cited by 211 (25 self)
We consider several well-known nonclairvoyant scheduling problems, including the problem of minimizing the average response time, and best-effort firm real-time scheduling. It is known that there are no deterministic online algorithms for these problems with bounded (or even polylogarithmic in the number of jobs) competitive ratios. We show that moderately increasing the speed of the processor used by the nonclairvoyant scheduler effectively gives this scheduler the power of clairvoyance. Furthermore, we show that there exist online algorithms with bounded competitive ratios on all inputs that are not closely correlated with processor speed.
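The resource-augmentation idea in this abstract can be sketched with a toy single-machine simulation. Everything here is an assumption for illustration: the instance, the `rr_flow`/`srpt_flow` names, the discrete time step, and the choice of Round-Robin as a stand-in for a generic nonclairvoyant scheduler.

```python
def srpt_flow(jobs, speed=1.0, dt=0.01):
    """Clairvoyant benchmark: always run the job with least remaining work."""
    remaining = {i: p for i, (r, p) in enumerate(jobs)}
    done, t = {}, 0.0
    while len(done) < len(jobs):
        active = [i for i, (r, _) in enumerate(jobs) if r <= t and i not in done]
        if active:
            j = min(active, key=lambda i: remaining[i])
            remaining[j] -= speed * dt
            if remaining[j] <= 1e-9:
                done[j] = t + dt
        t += dt
    return sum(done[i] - jobs[i][0] for i in done)  # total response time

def rr_flow(jobs, speed=1.0, dt=0.01):
    """Nonclairvoyant Round-Robin: split the processor evenly among active jobs."""
    remaining = {i: p for i, (r, p) in enumerate(jobs)}
    done, t = {}, 0.0
    while len(done) < len(jobs):
        active = [i for i, (r, _) in enumerate(jobs) if r <= t and i not in done]
        for j in active:
            remaining[j] -= speed * dt / len(active)
            if remaining[j] <= 1e-9:
                done[j] = t + dt
        t += dt
    return sum(done[i] - jobs[i][0] for i in done)

jobs = [(0.0, 4.0), (0.0, 1.0), (1.0, 1.0)]  # (release time, processing time)
for s in (1.0, 1.5, 2.0):
    print(f"speed {s}: RR/OPT response-time ratio = {rr_flow(jobs, s) / srpt_flow(jobs):.2f}")
```

At speed 1 the nonclairvoyant scheduler pays a higher total response time than the clairvoyant benchmark; giving it a modestly faster processor shrinks the ratio, which is the phenomenon the paper quantifies in general.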
BEYOND COMPETITIVE ANALYSIS
2000
"... The competitive analysis of online algorithms has been criticized as being too crude and unrealistic. We propose refinements of competitive analysis in two directions: The first restricts the power of the adversary by allowingonly certain input distributions, while the other allows for comparisons ..."
Abstract

Cited by 135 (3 self)
The competitive analysis of online algorithms has been criticized as being too crude and unrealistic. We propose refinements of competitive analysis in two directions: the first restricts the power of the adversary by allowing only certain input distributions, while the other allows for comparisons between information regimes for online decision-making. We illustrate the first with an application to the paging problem; as a byproduct we characterize completely the work functions of this important special case of the k-server problem. We use the second refinement to explore the power of lookahead in server and task systems.
On the Influence of Lookahead in Competitive Paging Algorithms
ALGORITHMICA, 1997
"... We introduce a new model of lookahead for online paging algorithms and study several algorithms using this model. A paging algorithm is online with strong lookahead l if it sees the present request and a sequence of future requests that contains l pairwise distinct pages. We show that strong look ..."
Abstract

Cited by 34 (1 self)
We introduce a new model of lookahead for online paging algorithms and study several algorithms using this model. A paging algorithm is online with strong lookahead l if it sees the present request and a sequence of future requests that contains l pairwise distinct pages. We show that strong lookahead has practical as well as theoretical importance and improves the competitive factors of online paging algorithms. This is the first model of lookahead having such properties. In addition to lower bounds we present a number of deterministic and randomized online paging algorithms with strong lookahead which are optimal or nearly optimal.
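As a rough illustration of how strong lookahead can pay off, here is a hypothetical LRU variant that, on a fault, prefers to evict a page not among the next l distinct requests. The function name and the exact eviction tie-breaking are our own assumptions, not the algorithms analyzed in the paper.

```python
def paging_faults(seq, k, lookahead=0):
    """LRU with cache size k; with lookahead > 0, prefer evicting a page
    not among the next `lookahead` distinct future requests (sketch)."""
    cache = []  # LRU order: least recently used at the front
    faults = 0
    for t, p in enumerate(seq):
        if p in cache:
            cache.remove(p)
            cache.append(p)
            continue
        faults += 1
        if len(cache) == k:
            # collect the next `lookahead` pairwise distinct future pages
            window = []
            for q in seq[t + 1:]:
                if len(window) == lookahead:
                    break
                if q not in window:
                    window.append(q)
            # evict the LRU page we will not need soon; fall back to plain LRU
            victims = [q for q in cache if q not in window] or cache
            cache.remove(victims[0])
        cache.append(p)
    return faults

seq = [1, 2, 3, 1, 2, 3]
print("no lookahead:", paging_faults(seq, 2, 0))
print("strong lookahead 1:", paging_faults(seq, 2, 1))
```

On the cyclic sequence above with a cache of size 2, even one distinct page of lookahead avoids the worst-case behavior of plain LRU, mirroring the improved competitive factors the paper proves.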
The relative worst order ratio for online algorithms
In 5th Italian Conference on Algorithms and Complexity, volume 2653 of LNCS, 2003
"... We define a new measure for the quality of online algorithms, the relative worst order ratio, using ideas from the Max/Max ratio (BenDavid & Borodin 1994) and from the random order ratio (Kenyon 1996). The new ratio is used to compare online algorithms directly by taking the ratio of their pe ..."
Abstract

Cited by 28 (13 self)
We define a new measure for the quality of online algorithms, the relative worst order ratio, using ideas from the Max/Max ratio (Ben-David & Borodin 1994) and from the random order ratio (Kenyon 1996). The new ratio is used to compare online algorithms directly by taking the ratio of their performances on their respective worst permutations of a worst-case sequence. Two variants of the bin packing problem are considered: the Classical Bin Packing problem, where the goal is to fit all items in as few bins as possible, and the Dual Bin Packing problem, which is the problem of maximizing the number of items packed in a fixed number of bins. Several known algorithms are compared using this new measure, and a new, simple variant of First-Fit is proposed for Dual Bin Packing. Many of our results are consistent with those previously obtained with the competitive ratio or the competitive ratio on accommodating sequences, but new separations and easier proofs are found.
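The measure can be made concrete for Classical Bin Packing: run each algorithm on its own worst permutation of the input and compare the two costs. This brute-force sketch (our naming; First-Fit and Next-Fit chosen only as familiar examples) is exponential in the sequence length and meant purely for tiny instances.

```python
from itertools import permutations

def first_fit(items, cap=1.0):
    """Place each item into the first open bin where it fits."""
    bins = []
    for x in items:
        for b in bins:
            if sum(b) + x <= cap + 1e-9:
                b.append(x)
                break
        else:
            bins.append([x])
    return len(bins)

def next_fit(items, cap=1.0):
    """Keep only the most recently opened bin open."""
    bins, level = 0, cap + 1  # forces opening a bin on the first item
    for x in items:
        if level + x <= cap + 1e-9:
            level += x
        else:
            bins += 1
            level = x
    return bins

def worst_order_cost(alg, items):
    """Cost on the algorithm's own worst permutation of the input."""
    return max(alg(list(p)) for p in set(permutations(items)))

items = [0.6, 0.4, 0.6, 0.4]
ff, nf = worst_order_cost(first_fit, items), worst_order_cost(next_fit, items)
print(f"worst-order costs: First-Fit {ff}, Next-Fit {nf}, ratio {ff / nf:.2f}")
```

The relative worst order ratio of two algorithms is, informally, the ratio of these two worst-permutation costs as the instances grow; the toy instance here only shows the mechanics of the comparison.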
Tight Bounds for Prefetching and Buffer Management Algorithms for Parallel I/O Systems
In Foundations of Software Technology and Theoretical Computer Science, 1996
"... . The growing importance of multipledisk parallel I/O systems requires the development of appropriate prefetching and buffer management algorithms. We answer several fundamental questions on prefetching and buffer management for such parallel I/O systems. Specifically, we find and prove the opt ..."
Abstract

Cited by 28 (11 self)
The growing importance of multiple-disk parallel I/O systems requires the development of appropriate prefetching and buffer management algorithms. We answer several fundamental questions on prefetching and buffer management for such parallel I/O systems. Specifically, we find and prove the optimality of an algorithm, PMIN, that minimizes the number of parallel I/Os. Secondly, we analyze PCON, an algorithm which always matches its replacement decisions with those of the well-known demand-paged MIN algorithm. We show that PCON can become fully sequential in the worst case. Finally, we define and analyze PLRU, a semi-online version of the traditional LRU buffer-management algorithm. Unexpectedly, we find that the performance of PLRU is independent of the number of disks.
The Statistical Adversary Allows Optimal Money-Making Trading Strategies (Extended Abstract)
1993
"... Andrew Chou Jeremy Cooperstock y Ran ElYaniv z Michael Klugerman x Tom Leighton  November, 1993 Abstract The distributional approach and competitive analysis have traditionally been used for the design and analysis of online algorithms. The former assumes a specific distribution on inputs, whil ..."
Abstract

Cited by 22 (4 self)
Andrew Chou, Jeremy Cooperstock, Ran El-Yaniv, Michael Klugerman, and Tom Leighton. November 1993. The distributional approach and competitive analysis have traditionally been used for the design and analysis of online algorithms. The former assumes a specific distribution on inputs, while the latter assumes inputs are chosen by an unrestricted adversary. This paper employs the statistical adversary (recently proposed by Raghavan) to analyze and design online algorithms for two-way currency trading. The statistical adversary approach may be viewed as a hybrid of the distributional approach and competitive analysis. By statistical adversary, we mean an adversary that generates input sequences, where each sequence must satisfy certain general statistical properties. The online algorithms presented in this paper have some very attractive properties. For instance, the algorithms are money-making; they are guaranteed to be profitable when the optimal offli...
On the Separation and Equivalence of Paging Strategies
"... It has been experimentally observed that LRU and variants thereof are the preferred strategies for online paging. However, under most proposed performance measures for online algorithms the performance of LRU is the same as that of many other strategies which are inferior in practice. In this pape ..."
Abstract

Cited by 19 (6 self)
It has been experimentally observed that LRU and variants thereof are the preferred strategies for online paging. However, under most proposed performance measures for online algorithms, the performance of LRU is the same as that of many other strategies which are inferior in practice. In this paper we first show that any performance measure which does not include a partition or implied distribution of the input sequences of a given length is unlikely to distinguish between any two lazy paging algorithms, as their performance is identical in a very strong sense. This provides a theoretical justification for the use of a more refined measure. Building upon the ideas of concave analysis by Albers et al. [AFG05], we prove strict separation between LRU and all other paging strategies. That is, we show that LRU is the unique optimum strategy for paging under a deterministic model. This provides full theoretical backing to the empirical observation that LRU is preferable in practice.
The Accommodating Function  a generalization of the competitive ratio
In Sixth International Workshop on Algorithms and Data Structures, volume 1663 of Lecture Notes in Computer Science, 1998
"... A new measure, the accommodating function, for the quality of online algorithms is presented. The accommodating function, which is a generalization of both the competitive ratio and the accommodating ratio, measures the quality of an online algorithm as a function of the resources that would be su ..."
Abstract

Cited by 17 (10 self)
A new measure, the accommodating function, for the quality of online algorithms is presented. The accommodating function, which is a generalization of both the competitive ratio and the accommodating ratio, measures the quality of an online algorithm as a function of the resources that would be sufficient for an optimal algorithm to fully grant all requests. More precisely, if we have some amount of resources n, the function value at α is the usual ratio (still on some fixed amount of resources n), except that input sequences are restricted to those where all requests could have been fully granted by an optimal algorithm if it had had the amount of resources αn. The accommodating functions for three specific online problems are investigated: a variant of bin packing in which the goal is to maximize the number of objects put in n bins, the seat reservation problem, and the problem of optimizing total flow time when preemption is allowed.
The power of reordering for online minimum makespan scheduling
 In Proc. 49th FOCS
"... In the classic minimum makespan scheduling problem, we are given an input sequence of jobs with processing times. A scheduling algorithm has to assign the jobs to m parallel machines. The objective is to minimize the makespan, which is the time it takes until all jobs are processed. In this paper, w ..."
Abstract

Cited by 14 (2 self)
In the classic minimum makespan scheduling problem, we are given an input sequence of jobs with processing times. A scheduling algorithm has to assign the jobs to m parallel machines. The objective is to minimize the makespan, which is the time it takes until all jobs are processed. In this paper, we consider online scheduling algorithms without preemption. However, we do not require that each arriving job has to be assigned immediately to one of the machines. A reordering buffer with limited storage capacity can be used to reorder the input sequence in a restricted fashion so as to schedule the jobs with a smaller makespan. This is a natural extension of lookahead. We present an extensive study of the power and limits of online reordering for minimum makespan scheduling. As our main result, we give, for m identical machines, tight and, in comparison to the problem without reordering, much improved bounds on the competitive ratio for minimum makespan scheduling with reordering buffers. Depending on m, the achieved competitive ratio lies between 4/3 and 1.4659. This optimal ratio is achieved with a buffer of size Θ(m). We show that larger buffer sizes do not result in an additional advantage and that a buffer of size Ω(m) is necessary to achieve this competitive ratio. Further, we present several algorithms for different buffer sizes. Among others, we introduce, for every buffer size k ∈ [1, (m + 1)/2], a (2 − 1/(m − k + 1))-competitive algorithm, which nicely generalizes the well-known result of Graham. For m uniformly related machines, we give a scheduling algorithm that achieves a competitive ratio of 2 with a reordering buffer of size m. Considering that the best known competitive ratio for uniformly related machines without reordering is 5.828, this result emphasizes the power of online reordering even further. (Supported by DFG grant WE 2842/1.)
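A minimal sketch of the reordering idea (not the paper's optimal algorithm): hold up to k jobs in a buffer and always release the largest buffered job to the least-loaded machine, so the buffered scheduler behaves like LPT, while k = 0 degenerates to Graham's list scheduling. The function name and the instance below are illustrative assumptions.

```python
import heapq

def makespan_with_buffer(jobs, m, k):
    """Greedy sketch: buffer up to k jobs, always releasing the largest
    buffered job to the currently least-loaded of m machines.
    With k = 0 this is exactly Graham's list scheduling."""
    loads = [0.0] * m
    buf = []  # max-heap of job sizes, via negation

    def assign(p):
        i = min(range(m), key=lambda i: loads[i])
        loads[i] += p

    for p in jobs:
        heapq.heappush(buf, -p)
        if len(buf) > k:
            assign(-heapq.heappop(buf))
    while buf:  # flush the buffer once the input ends
        assign(-heapq.heappop(buf))
    return max(loads)

# A bad order for plain list scheduling on two machines: the big job arrives last.
print(makespan_with_buffer([1, 1, 2], m=2, k=0))  # → 3.0 (no reordering)
print(makespan_with_buffer([1, 1, 2], m=2, k=2))  # → 2.0 (buffer recovers the optimum)
```

Even a small buffer repairs the adversarial arrival order on this instance, which is the intuition behind the paper's improved competitive ratios.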
The relative worst order ratio applied to paging
In Proceedings of the 16th ACM-SIAM Symposium on Discrete Algorithms (SODA ’05), 2005
"... Abstract. The relative worst order ratio, a new measure for the quality of online algorithms, was recently defined and applied to two bin packing problems. Here, we apply it to the paging problem. Work in progress by various researchers shows that the measure gives interesting results and new separ ..."
Abstract

Cited by 13 (9 self)
The relative worst order ratio, a new measure for the quality of online algorithms, was recently defined and applied to two bin packing problems. Here, we apply it to the paging problem. Work in progress by various researchers shows that the measure gives interesting results and new separations for bin coloring, scheduling, and seat reservation problems as well. Using the relative worst order ratio, we obtain the following results: We devise a new deterministic paging algorithm, Retrospective-LRU, and show that it performs better than LRU. This is supported by experimental results, but contrasts with the competitive ratio. All deterministic marking algorithms have the same competitive ratio, but here we find that LRU is better than FWF. No deterministic marking algorithm can be significantly better than LRU, but the randomized algorithm MARK is better than LRU. Finally, lookahead is shown to be a significant advantage, in contrast to the competitive ratio, which does not reflect that lookahead can be helpful.
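The flavor of the LRU-versus-FWF comparison can be reproduced in a few lines: count faults for each algorithm on every permutation of a short request sequence and compare the worst cases. This is a brute-force sketch under our own naming, not the paper's proof technique.

```python
from itertools import permutations

def lru_faults(seq, k):
    """LRU with cache size k; returns the number of page faults."""
    cache = []  # least recently used at the front
    faults = 0
    for p in seq:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)  # evict the least recently used page
        cache.append(p)
    return faults

def fwf_faults(seq, k):
    """Flush-When-Full: on a fault with a full cache, evict everything."""
    cache, faults = set(), 0
    for p in seq:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache = set()
            cache.add(p)
    return faults

def worst_order_faults(alg, seq, k):
    """Fault count on the algorithm's worst permutation of the sequence."""
    return max(alg(list(p), k) for p in set(permutations(seq)))

seq = [1, 2, 1, 3, 1, 2]
print("LRU on seq:", lru_faults(seq, 2), " FWF on seq:", fwf_faults(seq, 2))
print("LRU worst order:", worst_order_faults(lru_faults, seq, 2))
print("FWF worst order:", worst_order_faults(fwf_faults, seq, 2))
```

Both algorithms have the same competitive ratio, yet on sequences with locality LRU faults strictly less than FWF, the kind of separation the relative worst order ratio is designed to surface.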