Results 1-10 of 31
Competitive Paging With Locality of Reference
Journal of Computer and System Sciences, 1991
Cited by 121 (3 self)
Abstract:
The Sleator-Tarjan competitive analysis of paging [Comm. of the ACM, 28:202-208, 1985] gives us the ability to make strong theoretical statements about the performance of paging algorithms without making probabilistic assumptions on the input. Nevertheless, practitioners voice reservations about the model, citing its inability to discern between LRU and FIFO (algorithms whose performances differ markedly in practice), and the fact that the theoretical competitiveness of LRU is much larger than observed in practice. In addition, we would like to address the following important question: given some knowledge of a program's reference pattern, can we use it to improve paging performance on that program?
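The LRU/FIFO distinction mentioned in this abstract can be made concrete with a small simulation (illustrative code, not from the paper): the two policies differ only in whether a cache hit refreshes a page's position, yet that difference changes fault counts on typical traces.

```python
def count_faults(sequence, k, policy):
    """Count page faults for a cache of size k under LRU or FIFO eviction."""
    cache = []  # front = eviction victim (least-recent for LRU, oldest for FIFO)
    faults = 0
    for page in sequence:
        if page in cache:
            if policy == "lru":       # a hit refreshes recency under LRU only
                cache.remove(page)
                cache.append(page)
            continue
        faults += 1
        if len(cache) == k:
            cache.pop(0)              # evict the victim at the front
        cache.append(page)
    return faults

# A short trace on which the two policies diverge (cache size 3).
trace = [1, 2, 3, 1, 4, 1, 5]
lru = count_faults(trace, 3, "lru")    # 5 faults
fifo = count_faults(trace, 3, "fifo")  # 6 faults
```

Here the repeated references to page 1 keep it resident under LRU but not under FIFO, which evicts it despite its recent use.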
New Results on Server Problems
SIAM Journal on Discrete Mathematics, 1990
Cited by 73 (7 self)
Abstract:
In the k-server problem, we must choose how k mobile servers will serve each of a sequence of requests, making our decisions in an online manner. We exhibit an optimal deterministic online strategy when the requests fall on the real line. For the weighted-cache problem, in which the cost of moving to x from any other point is w(x), the weight of x, we also provide an optimal deterministic algorithm. We prove the nonexistence of competitive algorithms for the asymmetric two-server problem, and of memoryless algorithms for the weighted-cache problem. We give a fast algorithm for computing an optimal offline schedule, and show that finding an optimal offline schedule is at least as hard as the assignment problem.

1 Introduction. The k-server problem can be stated as follows. We are given a metric space M, and k servers which move among the points of M, each occupying one point of M. Repeatedly, a request (a point x ∈ M) appears. To serve x, each server moves some distance, possibly...
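The paper's optimal deterministic strategy for the line is commonly known as Double Coverage; a minimal sketch in our own code (server positions as floats, returning the total movement cost):

```python
import bisect

def double_coverage(servers, requests):
    """Double Coverage on the real line: a request outside the servers' hull
    is served by the nearest endpoint server; a request strictly between two
    adjacent servers moves both toward it at equal speed until one arrives."""
    servers = sorted(servers)
    cost = 0.0
    for x in requests:
        if x <= servers[0]:                      # left of every server
            cost += servers[0] - x
            servers[0] = x
        elif x >= servers[-1]:                   # right of every server
            cost += x - servers[-1]
            servers[-1] = x
        else:
            i = bisect.bisect_right(servers, x) - 1
            left, right = servers[i], servers[i + 1]
            if left == x:
                continue                         # a server already sits on x
            d = min(x - left, right - x)         # both neighbors move distance d
            cost += 2 * d
            if x - left <= right - x:
                servers[i], servers[i + 1] = x, right - d
            else:
                servers[i], servers[i + 1] = left + d, x
    return cost
```

For example, with servers at 0 and 10, a request at 5 costs 10 (both servers move 5), while a request at -3 costs 3 (only the left server moves).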
Strongly Competitive Algorithms for Paging with Locality of Reference
1995
Cited by 73 (5 self)
Abstract:
What is the best paging algorithm if one has partial information about the possible sequences of page requests? We give a partial answer to this question, by presenting the analysis of strongly competitive paging algorithms in the access graph model. This model restricts page requests so that they conform to a notion of locality of reference, given by an arbitrary access graph. We first consider optimal algorithms for undirected access graphs. Borodin et al. [2] define an algorithm, called FAR, and prove that it is within a logarithmic factor of the optimal online algorithm. We prove that FAR is in fact strongly competitive, i.e., within a constant factor of the optimum. For directed access graphs, we present an algorithm that is strongly competitive on structured program graphs, graphs which model a subset of the request sequences of structured programs.
Randomized Competitive Algorithms for the List Update Problem
Algorithmica, 1992
Cited by 39 (2 self)
Abstract:
We prove upper and lower bounds on the competitiveness of randomized algorithms for the list update problem of Sleator and Tarjan. We give a simple and elegant randomized algorithm that is more competitive than the best previous randomized algorithm due to Irani. Our algorithm uses randomness only during an initialization phase, and from then on runs completely deterministically. It is the first randomized competitive algorithm with this property to beat the deterministic lower bound. We generalize our approach to a model in which access costs are fixed but update costs are scaled by an arbitrary constant d. We prove lower bounds for deterministic list update algorithms and for randomized algorithms against oblivious and adaptive online adversaries. In particular, we show that for this problem adaptive online and adaptive offline adversaries are equally powerful.

1 Introduction. Recently much attention has been given to competitive analysis of online algorithms [7, 20, 22, 25]. Ro...
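The "randomness only at initialization" idea can be sketched as follows (our own code and naming; the paper's algorithm is commonly referred to as BIT: one random bit per item, drawn once, with the item moved to the front whenever its bit flips to 1, i.e. on every other access):

```python
import random

class BitList:
    """Sketch of a randomized list-update strategy in the spirit of BIT:
    all randomness is consumed at initialization; accesses are deterministic."""

    def __init__(self, items, rng=None):
        rng = rng or random.Random()
        self.items = list(items)
        self.bit = {x: rng.randrange(2) for x in self.items}  # one random bit each

    def access(self, x):
        pos = self.items.index(x) + 1    # access cost = 1-based position
        self.bit[x] ^= 1                 # flip the item's bit
        if self.bit[x]:                  # move to front when the bit becomes 1
            self.items.remove(x)
            self.items.insert(0, x)
        return pos
```

Regardless of the random initialization, any two consecutive accesses to the same item flip its bit to 1 exactly once, so the item is guaranteed to sit at the front after the second access.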
Regrets Only! Online Stochastic Optimization under Time Constraints
Proceedings of the 19th AAAI, 2004
Cited by 32 (7 self)
Abstract:
This paper considers online stochastic optimization problems where time constraints severely limit the number of offline optimizations which can be performed at decision time and/or in between decisions. It proposes a novel approach which combines the salient features of the earlier approaches: the evaluation of every decision on all samples (expectation) and the ability to avoid distributing the samples among decisions (consensus). The key idea underlying the novel algorithm is to approximate the regret of a decision d. The regret algorithm is evaluated on two fundamentally different applications: online packet scheduling in networks and online multiple vehicle routing with time windows. On both applications, it produces significant benefits over prior approaches.
The value of consensus in online stochastic scheduling
In Proceedings of the Fourteenth International Conference on Automated Planning and Scheduling (ICAPS), 2004
Cited by 21 (6 self)
Abstract:
This paper reconsiders online packet scheduling in computer networks, where the goal is to minimize weighted packet loss and where the arrival distributions of packets, or approximations thereof, are available for sampling. Earlier work proposed an expectation approach, which chooses the next packet to schedule by approximating the expected loss of each decision over a set of scenarios. The expectation approach was shown to significantly outperform traditional approaches ignoring stochastic information. This paper proposes a novel stochastic approach for online packet scheduling, whose key idea is to select the next packet as the one which is scheduled first most often in the optimal solutions of the scenarios. This consensus approach is shown to outperform the expectation approach significantly whenever time constraints and the problem features limit the number of scenarios that can be solved before making a decision. More importantly perhaps, the paper shows that the consensus and expectation approaches can be integrated to combine the benefits of both approaches. These novel online stochastic optimization algorithms are generic and problem-independent; they apply to other online applications as well, and they shed new light on why existing online stochastic algorithms behave well.
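The consensus idea described above, solving each sampled scenario offline once and voting on which decision comes first, can be sketched generically (the function names and the toy offline "solver" below are ours, not the paper's):

```python
from collections import Counter

def consensus_choice(scenarios, solve_offline):
    """Pick the decision that is scheduled first in the most scenarios.
    `solve_offline` maps one sampled scenario to an ordered schedule."""
    votes = Counter(solve_offline(s)[0] for s in scenarios)
    return votes.most_common(1)[0][0]

# Toy instance: scenarios are packet->weight maps, and the stand-in offline
# solver simply schedules packets in order of decreasing weight.
by_weight = lambda s: sorted(s, key=s.get, reverse=True)
scenarios = [{"a": 3, "b": 1}, {"a": 2, "b": 5}, {"a": 4, "b": 1}]
```

On this instance packet "a" is scheduled first in two of the three scenarios, so consensus selects it even though scenario two strongly prefers "b".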
The relative worst order ratio for online algorithms
In 5th Italian Conference on Algorithms and Complexity, volume 2653 of LNCS, 2003
Cited by 20 (10 self)
Abstract:
We define a new measure for the quality of online algorithms, the relative worst order ratio, using ideas from the Max/Max ratio (Ben-David & Borodin 1994) and from the random order ratio (Kenyon 1996). The new ratio is used to compare online algorithms directly by taking the ratio of their performances on their respective worst permutations of a worst-case sequence. Two variants of the bin packing problem are considered: the Classical Bin Packing problem, where the goal is to fit all items in as few bins as possible, and the Dual Bin Packing problem, which is the problem of maximizing the number of items packed in a fixed number of bins. Several known algorithms are compared using this new measure, and a new, simple variant of First-Fit is proposed for Dual Bin Packing. Many of our results are consistent with those previously obtained with the competitive ratio or the competitive ratio on accommodating sequences, but new separations and easier proofs are found.
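For intuition, the "worst permutation" performance underlying the measure can be brute-forced on tiny instances (illustrative code, ours; First-Fit here is the classical bin-packing version, and a real analysis would of course not enumerate permutations):

```python
from itertools import permutations

def first_fit(items, cap=1.0):
    """Classical bin packing by First-Fit; returns the number of bins used."""
    bins = []
    for x in items:
        for i, load in enumerate(bins):
            if load + x <= cap + 1e-9:   # place in the first bin with room
                bins[i] = load + x
                break
        else:
            bins.append(x)               # no bin fits: open a new one
    return len(bins)

def worst_order(alg, items):
    """Worst performance of `alg` over all permutations of one item set."""
    return max(alg(list(p)) for p in permutations(items))
```

On the items [0.6, 0.5, 0.4, 0.5], First-Fit uses 2 bins in the given order but 3 bins on its worst permutation, e.g. [0.4, 0.5, 0.6, 0.5]; the relative worst order ratio compares two algorithms via such worst-permutation values.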
Heat & Dump: Competitive Distributed Paging
1993
Cited by 18 (4 self)
Abstract:
This paper gives a randomized competitive distributed paging algorithm called Heat & Dump. The competitive ratio is logarithmic in the total storage capacity of the network; this is optimal to within a constant factor. This is in contrast to the linear optimal deterministic competitive ratio [BFR92].

1 Introduction. The basic paradigm: Distributed Virtual Memory. Virtual addressing has the advantage that the physical address is separate from the logical address [KELS62]. Briefly, the name of a memory item is decoupled from its physical location in memory; moreover, the physical location may change dynamically at runtime. With the appearance of massively parallel machines in the 1980s, it was natural to extend the virtual memory concept from the traditional uniprocessor to a distributed shared-memory environment. In other words, the programmer can use the convenient Parallel Random Access Machine (PRAM) abstraction to write the program, which will then be compiled automatically ...
Latency Effects of System Level Power Management Algorithms
Cited by 16 (8 self)
Abstract:
... In addition, service times and latencies have an effect on power management strategies, since they alter the length and occurrence of idle periods. We study this phenomenon experimentally, by modeling the disk drive of a laptop computer as an embedded system. The results show that if service times of arriving requests are modeled, the relative performance of algorithms can change, leading to non-adaptive algorithms performing better than adaptive ones. We compare the performance of adaptive and non-adaptive power management algorithms. In particular, our experimental results show that an "immediate" shutdown strategy, which shuts down the system whenever it encounters an idle period, performs surprisingly better than sophisticated adaptive algorithms suggested in the literature. We provide an analytical explanation for the effectiveness of power management strategies.
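The trade-off behind an "immediate" shutdown strategy can be illustrated with a toy energy model (all constants below are our assumptions for illustration, not the paper's measurements): shutting down pays a fixed wake-up cost but saves idle power for the rest of the period.

```python
def energy(idle_periods, timeout, p_idle=1.0, p_sleep=0.1, wake_cost=2.0):
    """Energy consumed over a list of idle-period lengths under a
    timeout-based shutdown policy; timeout=0 is the 'immediate' strategy."""
    total = 0.0
    for t in idle_periods:
        if t <= timeout:
            total += t * p_idle                       # never shuts down
        else:
            # idle until the timeout fires, sleep for the rest, pay wake-up
            total += timeout * p_idle + (t - timeout) * p_sleep + wake_cost
    return total
```

For a single 10-unit idle period, immediate shutdown costs 0 + 10·0.1 + 2 = 3 energy units, while an 11-unit timeout (which never fires) costs 10; short idle periods reverse the comparison, which is why modeled service times can change the algorithm ranking.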
A Ramsey-type Theorem for Metric Spaces and its Applications for Metrical Task Systems and Related Problems
In 42nd Annual IEEE Symposium on Foundations of Computer Science, 2001
Cited by 15 (5 self)
Abstract:
This paper gives a nearly logarithmic lower bound on the randomized competitive ratio for the Metrical Task Systems model [BLS92]. This implies a similar lower bound for the extensively studied k-server problem. Our proof is based on proving a Ramsey-type theorem for metric spaces. In particular, we prove that in every metric space there exists a large subspace which is approximately a "hierarchically well-separated tree" (HST) [Bar96]. This theorem may be of independent interest.