Competitive Paging With Locality of Reference
 Journal of Computer and System Sciences
, 1991
Cited by 124 (3 self)
Abstract The Sleator-Tarjan competitive analysis of paging [Comm. of the ACM, 28:202-208, 1985] gives us the ability to make strong theoretical statements about the performance of paging algorithms without making probabilistic assumptions on the input. Nevertheless, practitioners voice reservations about the model, citing its inability to discern between LRU and FIFO (algorithms whose performances differ markedly in practice), and the fact that the theoretical competitiveness of LRU is much larger than observed in practice. In addition, we would like to address the following important question: given some knowledge of a program's reference pattern, can we use it to improve paging performance on that program?
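The LRU/FIFO distinction the abstract mentions is easy to see empirically. The sketch below (illustrative code, not from the paper) counts page faults for both policies and exhibits a short request sequence on which LRU beats FIFO, even though both have the same competitive ratio k:

```python
from collections import OrderedDict, deque

def lru_faults(requests, k):
    """Count page faults for LRU with a cache of k pages."""
    cache = OrderedDict()          # keys kept in recency order
    faults = 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)   # p becomes most recently used
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)  # evict least recently used
            cache[p] = True
    return faults

def fifo_faults(requests, k):
    """Count page faults for FIFO with a cache of k pages."""
    cache = deque()
    faults = 0
    for p in requests:
        if p not in cache:         # hits leave the queue untouched
            faults += 1
            if len(cache) == k:
                cache.popleft()    # evict the oldest page
            cache.append(p)
    return faults

seq = [1, 2, 3, 1, 4, 1, 5]
print(lru_faults(seq, 3), fifo_faults(seq, 3))  # LRU: 5 faults, FIFO: 6
```

The repeated hits on page 1 refresh its recency under LRU but not its queue position under FIFO, which is exactly the locality effect the competitive ratio fails to capture.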
Strongly Competitive Algorithms for Paging with Locality of Reference
, 1995
Cited by 74 (5 self)
What is the best paging algorithm if one has partial information about the possible sequences of page requests? We give a partial answer to this question, by presenting the analysis of strongly competitive paging algorithms in the access graph model. This model restricts page requests so that they conform to a notion of locality of reference, given by an arbitrary access graph. We first consider optimal algorithms for undirected access graphs. Borodin et al. [2] define an algorithm, called FAR, and prove that it is within a logarithmic factor of the optimal online algorithm. We prove that FAR is in fact strongly competitive, i.e., within a constant factor of the optimum. For directed access graphs, we present an algorithm that is strongly competitive on structured program graphs, graphs which model a subset of the request sequences of structured programs.
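In the access graph model, consecutive requests must be the same page or adjacent pages in the graph. A minimal conformance check (an illustrative sketch; the edge-list representation is an assumption, not the paper's notation) might look like:

```python
def conforms_to_access_graph(requests, edges):
    """Check that each request equals the previous page or is one of
    its neighbours in an undirected access graph given as edge pairs."""
    adj = set()
    for u, v in edges:
        adj.add((u, v))
        adj.add((v, u))
    return all(a == b or (a, b) in adj
               for a, b in zip(requests, requests[1:]))

# Path graph 1-2-3: a walk along the path conforms, a jump does not.
print(conforms_to_access_graph([1, 2, 3, 2, 1], [(1, 2), (2, 3)]))  # True
print(conforms_to_access_graph([1, 3, 2], [(1, 2), (2, 3)]))        # False
```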
New Results on Server Problems
 SIAM Journal on Discrete Mathematics
, 1990
Cited by 74 (7 self)
In the k-server problem, we must choose how k mobile servers will serve each of a sequence of requests, making our decisions in an online manner. We exhibit an optimal deterministic online strategy when the requests fall on the real line. For the weighted-cache problem, in which the cost of moving to x from any other point is w(x), the weight of x, we also provide an optimal deterministic algorithm. We prove the nonexistence of competitive algorithms for the asymmetric two-server problem, and of memoryless algorithms for the weighted-cache problem. We give a fast algorithm for offline computation of an optimal schedule, and show that finding an optimal offline schedule is at least as hard as the assignment problem.

1 Introduction The k-server problem can be stated as follows. We are given a metric space M, and k servers which move among the points of M, each occupying one point of M. Repeatedly, a request (a point x ∈ M) appears. To serve x, each server moves some distance, possibly...
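The optimal strategy on the line moves servers toward a request from both sides, the well-known double-coverage rule. A sketch of one request step (illustrative code under that assumption, not necessarily the paper's exact formulation):

```python
import bisect

def double_coverage_step(servers, x):
    """One request of the double-coverage rule on the real line.
    `servers` is a list of positions; returns (new sorted positions,
    movement cost incurred)."""
    servers = sorted(servers)
    if x <= servers[0]:                 # request left of every server
        cost, servers[0] = servers[0] - x, x
        return servers, cost
    if x >= servers[-1]:                # request right of every server
        cost, servers[-1] = x - servers[-1], x
        return servers, cost
    i = bisect.bisect_left(servers, x)  # x lies between servers i-1 and i
    left, right = servers[i - 1], servers[i]
    d = min(x - left, right - x)        # both neighbours move d toward x
    servers[i - 1], servers[i] = left + d, right - d
    return servers, 2 * d

print(double_coverage_step([0, 10], 4))   # ([4, 6], 8): both servers move 4
print(double_coverage_step([0, 10], -2))  # ([-2, 10], 2): only the end server moves
```

Moving both flanking servers, rather than just the nearer one, is what defeats the adversary sequences that ruin greedy strategies.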
Randomized Competitive Algorithms for the List Update Problem
 Algorithmica
, 1992
Cited by 42 (2 self)
We prove upper and lower bounds on the competitiveness of randomized algorithms for the list update problem of Sleator and Tarjan. We give a simple and elegant randomized algorithm that is more competitive than the best previous randomized algorithm due to Irani. Our algorithm uses randomness only during an initialization phase, and from then on runs completely deterministically. It is the first randomized competitive algorithm with this property to beat the deterministic lower bound. We generalize our approach to a model in which access costs are fixed but update costs are scaled by an arbitrary constant d. We prove lower bounds for deterministic list update algorithms and for randomized algorithms against oblivious and adaptive online adversaries. In particular, we show that for this problem adaptive online and adaptive offline adversaries are equally powerful.

1 Introduction Recently much attention has been given to competitive analysis of online algorithms [7, 20, 22, 25]. Ro...
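An algorithm that randomizes only at initialization, as described here, matches the well-known BIT rule: each list item draws one random bit up front; an access flips the item's bit and moves it to the front exactly when the bit becomes 1. A sketch under that assumption (names are illustrative, not the paper's):

```python
import random

class BitList:
    """Sketch of a list-update rule whose only randomness is at
    start-up: each item draws one random bit; thereafter an accessed
    item flips its bit and moves to the front iff the bit becomes 1."""

    def __init__(self, items, rng=None):
        rng = rng or random.Random()
        self.items = list(items)
        self.bit = {x: rng.randrange(2) for x in self.items}

    def access(self, x):
        cost = self.items.index(x) + 1     # pay the position of x
        self.bit[x] ^= 1
        if self.bit[x] == 1:               # front move on alternate accesses
            self.items.remove(x)
            self.items.insert(0, x)
        return cost
```

After initialization every move is forced, so the adversary cannot adapt to coin flips it has not yet seen, yet the behavior stays "half move-to-front" in expectation.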
A new measure for the study of online algorithms
 Algorithmica
, 1994
Cited by 41 (0 self)
Abstract. An accepted measure for the performance of an online algorithm is the "competitive ratio" introduced by Sleator and Tarjan. This measure is well motivated and has led to the development of a mathematical theory for online algorithms. We investigate the behavior of this measure with respect to memory needs and benefits of lookahead and find some counterintuitive features. We present lower bounds on the size of memory devoted to recording the past. It is also observed that the competitive ratio reflects no improvement in the performance of an online algorithm due to any (finite) amount of lookahead. We offer an alternative measure that exhibits a different and, in some respects, more intuitive behavior. In particular, we demonstrate the use of our new measure by analyzing the tradeoff between the amortized cost of online algorithms for the paging problem and the amount of lookahead available to them. We also derive online algorithms for the K-server problem on any bounded metric space, which, relative to the new measure, are optimal among all online algorithms (up to a factor of 2) and are within a factor of 2K from the optimal offline performance.

Key Words. Online algorithms, competitive analysis. 1. Introduction. We
Regrets Only! Online Stochastic Optimization under Time Constraints
 Proceedings of the 19th AAAI
, 2004
Cited by 36 (8 self)
This paper considers online stochastic optimization problems where time constraints severely limit the number of offline optimizations which can be performed at decision time and/or in between decisions. It proposes a novel approach which combines the salient features of the earlier approaches: the evaluation of every decision on all samples (expectation) and the ability to avoid distributing the samples among decisions (consensus). The key idea underlying the novel algorithm is to approximate the regret of a decision d. The regret algorithm is evaluated on two fundamentally different applications: online packet scheduling in networks and online multiple vehicle routing with time windows. On both applications, it produces significant benefits over prior approaches.
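The regret idea can be sketched generically: solve each sampled scenario once, then charge every candidate decision a cheap regret estimate against that scenario's optimum and pick the decision with least total regret. The callables `solve` and `regret` below are problem-specific assumptions, not an API from the paper:

```python
def choose_by_regret(decisions, scenarios, solve, regret):
    """Pick the decision minimizing total approximated regret over all
    sampled scenarios, using one optimization per scenario."""
    total = {d: 0.0 for d in decisions}
    for s in scenarios:
        opt = solve(s)                     # one optimization per scenario
        for d in decisions:
            total[d] += regret(d, s, opt)  # cheap estimate, no re-solve
    return min(decisions, key=total.__getitem__)
```

The point of the estimate is the budget arithmetic: with |scenarios| solver calls it scores every decision on every sample, where a naive expectation approach would need |decisions| x |scenarios| calls.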
The relative worst order ratio for online algorithms
 In 5th Italian Conference on Algorithms and Complexity, volume 2653 of LNCS
, 2003
Cited by 21 (10 self)
We define a new measure for the quality of online algorithms, the relative worst order ratio, using ideas from the Max/Max ratio (Ben-David & Borodin 1994) and from the random order ratio (Kenyon 1996). The new ratio is used to compare online algorithms directly by taking the ratio of their performances on their respective worst permutations of a worst-case sequence. Two variants of the bin packing problem are considered: the Classical Bin Packing problem, where the goal is to fit all items in as few bins as possible, and the Dual Bin Packing problem, which is the problem of maximizing the number of items packed in a fixed number of bins. Several known algorithms are compared using this new measure, and a new, simple variant of First-Fit is proposed for Dual Bin Packing. Many of our results are consistent with those previously obtained with the competitive ratio or the competitive ratio on accommodating sequences, but new separations and easier proofs are found.
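For reference, the classical First-Fit rule that the paper's Dual Bin Packing variant builds on is a one-pass greedy (an illustrative sketch, not the paper's variant):

```python
def first_fit(items, capacity):
    """Classical First-Fit: put each item into the first open bin with
    enough room, opening a new bin when none fits."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:                      # no existing bin could take the item
            bins.append([size])
    return bins

print(first_fit([5, 7, 5, 3], capacity=10))  # [[5, 5], [7, 3]]
```

The relative worst order ratio would compare two such algorithms by letting each face the worst reordering of the same item multiset, rather than comparing each to the offline optimum.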
The value of consensus in online stochastic scheduling
 In Proceedings of the Fourteenth International Conference on Automated Planning and Scheduling (ICAPS)
, 2004
Cited by 21 (6 self)
This paper reconsiders online packet scheduling in computer networks, where the goal is to minimize weighted packet loss and where the arrival distributions of packets, or approximations thereof, are available for sampling. Earlier work proposed an expectation approach, which chooses the next packet to schedule by approximating the expected loss of each decision over a set of scenarios. The expectation approach was shown to significantly outperform traditional approaches ignoring stochastic information. This paper proposes a novel stochastic approach for online packet scheduling, whose key idea is to select the next packet as the one which is scheduled first most often in the optimal solutions of the scenarios. This consensus approach is shown to outperform the expectation approach significantly whenever time constraints and the problem features limit the number of scenarios that can be solved before making a decision. More importantly perhaps, the paper shows that the consensus and expectation approaches can be integrated to combine the benefits of both approaches. These novel online stochastic optimization algorithms are generic and problem-independent; they apply to other online applications as well, and they shed new light on why existing online stochastic algorithms behave well.
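The consensus selection rule described in the abstract can be sketched in a few lines; `solve` is assumed to return an ordered schedule (a list of decisions) for one sampled scenario, an illustrative stand-in rather than the paper's API:

```python
from collections import Counter

def choose_by_consensus(scenarios, solve):
    """Optimize every sampled scenario and pick the decision that the
    scenario optima schedule first most often."""
    votes = Counter(solve(s)[0] for s in scenarios)
    return votes.most_common(1)[0][0]
```

Unlike the expectation approach, each scenario is solved exactly once, with no branching on candidate decisions, which is why consensus fits tight time budgets.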
Heat & Dump: Competitive Distributed Paging
, 1993
Cited by 18 (4 self)
This paper gives a randomized competitive distributed paging algorithm called Heat & Dump. The competitive ratio is logarithmic in the total storage capacity of the network; this is optimal to within a constant factor. This is in contrast to the linear optimal deterministic competitive ratio [BFR92].

1 Introduction The basic paradigm: Distributed Virtual Memory. Virtual addressing has the advantage that the physical address is separate from the logical address [KELS62]. Briefly, the name of a memory item is decoupled from its physical location in memory; moreover, the physical location may change dynamically at run time. With the appearance of massively parallel machines in the 1980s, it was natural to extend the virtual memory concept from the traditional uniprocessor to the distributed shared-memory environment. In other words, the programmer can use the convenient Parallel Random Access Machine (PRAM) abstraction to write the program, which will then be compiled automatically ...
Jitter Control in QoS Networks
, 2001
Cited by 18 (0 self)
We study jitter control in networks with guaranteed quality of service (QoS) from the competitive analysis point of view: we propose online algorithms that control jitter and compare their performance to the best possible (by an offline algorithm) for any given arrival sequence. For delay jitter, where the goal is to minimize the difference between delay times of different packets, we show that a simple online algorithm using a buffer of slots guarantees the same delay jitter as the best offline algorithm using buffer space P. We prove that the guarantees made by our online algorithm hold, even for simple distributed implementations, where the total buffer space is distributed along the path of the connection, provided that the input stream satisfies a certain simple property. For rate jitter, where the goal is to minimize the difference between interarrival times, we develop an online algorithm using a buffer of size P C for any I, and compare its jitter to the jitter of an optimal offline algorithm using buffer size. We prove that our algorithm guarantees that the difference is bounded by a term proportional to B/h.
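Delay jitter, as the abstract defines it informally, is the spread of per-packet delays across a schedule. A minimal computation of that quantity (an illustrative sketch, not the paper's algorithm):

```python
def delay_jitter(arrivals, deliveries):
    """Delay jitter of a schedule: the spread between the largest and
    smallest per-packet delay (delivery time minus arrival time)."""
    delays = [d - a for a, d in zip(arrivals, deliveries)]
    return max(delays) - min(delays)

print(delay_jitter([0, 1, 2], [3, 4, 6]))  # delays 3, 3, 4 -> jitter 1
```

A buffer lets the online algorithm hold early packets and release each one after (nearly) the same delay; a zero-jitter schedule delivers every packet with an identical delay.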