Results 1–10 of 47
The k-server problem
 Computer Science Review
"... The kserver problem is perhaps the most influential online problem: natural, crisp, with a surprising technical depth that manifests the richness of competitive analysis. The kserver conjecture, which was posed more that two decades ago when the problem was first studied within the competitive ana ..."
Abstract

Cited by 66 (5 self)
The k-server problem is perhaps the most influential online problem: natural, crisp, with a surprising technical depth that manifests the richness of competitive analysis. The k-server conjecture, which was posed more than two decades ago when the problem was first studied within the competitive analysis framework, is still open and has been a major driving force for the development of the area of online algorithms. This article surveys some major results for the k-server problem.
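To make the problem concrete, here is a small illustrative sketch (not from the survey): k servers live on the real line, each request must be answered by moving one server onto it, and the cost is the total distance moved. The naive greedy rule of moving the nearest server is known not to be competitive in general, which hints at the depth the abstract mentions.

```python
def greedy_k_server(servers, requests):
    """Serve each request with the nearest server; return total movement cost."""
    servers = list(servers)
    cost = 0.0
    for r in requests:
        # pick the server closest to the request point
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        cost += abs(servers[i] - r)
        servers[i] = r
    return cost

# Two servers at 0 and 10; alternating requests at 4 and 5 trap greedy into
# shuttling a single server back and forth while the other never moves.
print(greedy_k_server([0, 10], [4, 5, 4, 5, 4, 5]))
```

Extending the request sequence makes greedy's cost grow without bound while an optimal solution pays a constant, which is exactly the kind of adversarial behavior competitive analysis is built to measure.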
Energy-Efficient Algorithms for . . .
, 2007
"... We study scheduling problems in batteryoperated computing devices, aiming at schedules with low total energy consumption. While most of the previous work has focused on finding feasible schedules in deadlinebased settings, in this article we are interested in schedules that guarantee good respons ..."
Abstract

Cited by 62 (2 self)
We study scheduling problems in battery-operated computing devices, aiming at schedules with low total energy consumption. While most of the previous work has focused on finding feasible schedules in deadline-based settings, in this article we are interested in schedules that guarantee good response times. More specifically, our goal is to schedule a sequence of jobs on a variable-speed processor so as to minimize the total cost consisting of the energy consumption and the total flow time of all jobs. We first show that when the amount of work, for any job, may take an arbitrary value, then no online algorithm can achieve a constant competitive ratio. Therefore, most of the article is concerned with unit-size jobs. We devise a deterministic constant competitive online algorithm and show that ...
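The energy/flow-time trade-off can be seen already for one unit-work job. The toy calculation below (not from the paper) assumes the standard cube power model P(s) = s³: running at constant speed s costs energy s³ · (1/s) = s² and flow time 1/s, and scanning speeds locates the optimal balance.

```python
def total_cost(s, alpha=3.0):
    """Energy plus flow time for one unit-work job run at constant speed s."""
    energy = s ** (alpha - 1)   # power s**alpha sustained for 1/s time units
    flow = 1.0 / s              # the job completes at time 1/s
    return energy + flow

# Numerically scan speeds; analytically the minimizer for alpha = 3 is
# s* = (1/(alpha - 1))**(1/alpha) = (1/2)**(1/3) ≈ 0.794.
best = min((total_cost(s / 1000), s / 1000) for s in range(1, 5000))
print(round(best[1], 3), round(best[0], 3))
```

Running too fast wastes energy, running too slow inflates flow time; the online algorithms in the paper must strike this balance without knowing future arrivals.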
Page Replacement for General Caching Problems
, 1999
"... Caching (paging) is a wellstudied problem in online algorithms, usually studied under the assumption that all pages have a uniform size and a uniform fault cost (uni form caching). However, recent applications related to the web involve situations in which pages can be of different sizes and cost ..."
Abstract

Cited by 27 (2 self)
Caching (paging) is a well-studied problem in online algorithms, usually studied under the assumption that all pages have a uniform size and a uniform fault cost (uniform caching). However, recent applications related to the web involve situations in which pages can be of different sizes and costs. This general caching problem seems more intricate than the uniform version. In particular, the offline case itself is NP-hard. Only a few results exist for the general caching problem [8, 17]. This paper seeks to develop good offline page replacement policies for the general caching problem, with the hope that any insight gained here may lead to good online algorithms. Our first main result is that by using only a small amount of additional memory, say O(1) times the largest page size, we can obtain an O(1)-approximation to the general caching problem. Note that the largest page size is typically a very small fraction of the total cache size, say 1%. Our second result is that when no add...
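A minimal sketch (this is not the paper's algorithm) of why general caching is harder than uniform paging: each page carries both a size and a refetch cost, so an eviction rule must weigh cost against reclaimed space. The naive heuristic below evicts pages in increasing order of cost density (cost per unit of size).

```python
def fault(cache, capacity, page, size, cost):
    """Bring `page` into `cache` (dict: page -> (size, cost)); return cost paid."""
    if page in cache:
        return 0                           # hit: no fault cost
    used = sum(s for s, _ in cache.values())
    # evict cheapest-per-byte pages until the new page fits
    victims = sorted(cache, key=lambda p: cache[p][1] / cache[p][0])
    while used + size > capacity and victims:
        used -= cache.pop(victims.pop(0))[0]
    cache[page] = (size, cost)
    return cost

cache = {}
total = sum(fault(cache, 10, p, s, c)
            for p, s, c in [("a", 4, 5), ("b", 4, 1), ("c", 4, 9), ("a", 4, 5)])
print(total)
```

Here page "b" is evicted to make room for "c" because its cost density is lowest, and the second request to "a" hits. With uniform sizes and costs this collapses to classical paging, which is why the general version needs the approximation machinery the abstract describes.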
Better Algorithms For Unfair Metrical Task Systems And Applications
, 2000
"... Unfair metrical task systems are a generalization of online metrical task systems. In this paper we introduce new techniques to combine algorithms for unfair metrical task systems and apply these techniques to obtain the following results: 1. Better randomized algorithms for unfair metrical task sy ..."
Abstract

Cited by 22 (5 self)
Unfair metrical task systems are a generalization of online metrical task systems. In this paper we introduce new techniques to combine algorithms for unfair metrical task systems and apply these techniques to obtain the following results: 1. Better randomized algorithms for unfair metrical task systems on the uniform metric space. 2. A randomized algorithm for metrical task systems on general metric spaces, O((log n log log n)^2)-competitive, improving on the best previous result of O(log^5 n log log n). 3. A tight randomized competitive ratio for the k-weighted caching problem on k + 1 points, O(log k), improving on the best previous result of O(log^2 k). 4. An O(log^2 n)-competitive randomized algorithm for metrical task systems on n equally spaced points on the line. Key words: online algorithms, randomized algorithms. AMS subject classifications: 68W20, 68W25, 68W40.
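For readers new to the model, a metrical task system has a server occupying one of n states of a metric space; each task assigns a processing cost to every state, and the server may move (paying the metric distance) before processing. The offline optimum is a simple dynamic program, sketched below for illustration (not part of the paper, which concerns the online randomized setting):

```python
def mts_opt(dist, tasks, start=0):
    """Offline optimal cost of a metrical task system.

    dist  -- n x n metric (movement costs between states)
    tasks -- each task is a list of per-state processing costs
    """
    n = len(dist)
    cost = [float("inf")] * n
    cost[start] = 0.0
    for task in tasks:
        # end in state i: best previous state j, plus move and processing cost
        cost = [min(cost[j] + dist[j][i] + task[i] for j in range(n))
                for i in range(n)]
    return min(cost)

# Uniform metric on two points (distance 1) and two antagonistic tasks:
# the optimum moves once rather than paying a large processing cost.
dist = [[0, 1], [1, 0]]
tasks = [[0, 3], [3, 0]]
print(mts_opt(dist, tasks))
```

An online algorithm sees each task only after committing to a state, and the competitive ratios quoted in the abstract measure its cost against exactly this offline quantity.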
A Randomized Algorithm for Two Servers on the Line
 Information and Computation
, 1998
"... In the kserver problem we wish to minimize, in an online fashion, the movement cost of k servers in response to a sequence of requests. For 2 servers, it is known that the optimal deterministic algorithm has competitive ratio 2, and it has been a longstanding open problem whether it is possible t ..."
Abstract

Cited by 19 (5 self)
In the k-server problem we wish to minimize, in an online fashion, the movement cost of k servers in response to a sequence of requests. For 2 servers, it is known that the optimal deterministic algorithm has competitive ratio 2, and it has been a long-standing open problem whether it is possible to improve this ratio using randomization. We give a positive answer to this problem when the underlying metric space is a real line, by providing a randomized online algorithm for this case with competitive ratio at most 155/78 ≈ 1.987. This is the first algorithm for 2 servers that achieves a competitive ratio smaller than 2 in a non-uniform metric space with more than three points. We consider a more general problem called the (k, l)-server problem, in which a request is served using l out of k available servers. We show that the randomized 2-server problem can be reduced to the deterministic (2l, l)-server problem. We prove a lower bound of 2 on the competitive ratio of the (4, 2)-server...
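The deterministic baseline that the paper's randomized algorithm improves on is the classical Double Coverage rule, which is 2-competitive for two servers on the line: if a request falls between the servers, both move toward it at equal speed until one arrives; otherwise only the nearer server moves. A sketch, for illustration:

```python
def double_coverage(s1, s2, requests):
    """Total cost of the Double Coverage rule for 2 servers on the line."""
    a, b = min(s1, s2), max(s1, s2)
    cost = 0.0
    for r in requests:
        if r <= a:                       # request left of both servers
            cost += a - r
            a = r
        elif r >= b:                     # request right of both servers
            cost += r - b
            b = r
        else:                            # request in between: both move toward it
            d = min(r - a, b - r)
            cost += 2 * d
            if r - a <= b - r:
                a, b = r, b - d          # left server reaches the request
            else:
                a, b = a + d, r          # right server reaches the request
    return cost

print(double_coverage(0, 10, [4, 5, 4, 5]))
```

Moving both servers looks wasteful locally, but it hedges against the adversary continuing on either side, which is what caps the ratio at 2; the paper shows randomization can hedge even better.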
On broadcast disk paging
 SIAM Journal on Computing
, 1998
"... Abstract. Broadcast disks are an emerging paradigm for massive data dissemination. In a broadcast disk, data is divided into n equalsized pages, and pages are broadcast in a roundrobin fashion by a server. Broadcast disks are effective because many clients can simultaneously retrieve any transmitt ..."
Abstract

Cited by 18 (5 self)
Broadcast disks are an emerging paradigm for massive data dissemination. In a broadcast disk, data is divided into n equal-sized pages, and pages are broadcast in a round-robin fashion by a server. Broadcast disks are effective because many clients can simultaneously retrieve any transmitted data. Paging is used by the clients to improve performance, much as in virtual memory systems. However, paging on broadcast disks differs from virtual memory paging in at least two fundamental aspects:
• A page fault in the broadcast disk model has a variable cost that depends on the requested page as well as the current state of the broadcast.
• Prefetching is both natural and a provably essential mechanism for achieving significantly better competitive ratios in broadcast disk paging.
In this paper, we design a deterministic algorithm that uses prefetching to achieve an O(n log k) competitive ratio for the broadcast disk paging problem, where k denotes the size of the client’s cache. We also show a matching lower bound of Ω(n log k) that applies even when the adversary is not allowed to use prefetching. In contrast, we show that when prefetching is not allowed, no deterministic online algorithm can achieve a competitive ratio better than Ω(nk). Moreover, we show a lower bound of Ω(n log k) on the competitive ratio achievable by any non-prefetching randomized algorithm against an oblivious adversary. These lower bounds are trivially matched from above by known results about deterministic and randomized marking algorithms for paging. An interpretation of our results is that in broadcast disk paging, prefetching is a perfect substitute for randomization.
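The first bullet, variable fault cost, is easy to make concrete. An illustrative helper (not from the paper): with n pages broadcast round-robin, a fault on page p at broadcast slot t costs the wait until p is next transmitted.

```python
def fault_cost(p, t, n):
    """Slots to wait for page p (0..n-1), when slot t broadcasts page t % n."""
    return (p - t) % n

# Waiting for page 5 on an 8-page broadcast disk gets cheaper as the
# broadcast approaches it, then jumps back up after it passes.
n = 8
print([fault_cost(5, t, n) for t in range(n)])
```

This is exactly why prefetching helps: a page about to be broadcast is nearly free to grab, even before it is requested, whereas classical paging charges every fault the same unit cost.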
A Better Lower Bound on the Competitive Ratio of the Randomized 2-Server Problem
 Information Processing Letters
, 1997
"... We present a lower bound of 1+e \Gamma1=2 ß 1:6065 on the competitive ratio of randomized algorithms for the weighted 2cache problem, which is a special case of the 2server problem. This improves the previously best known lower bound of e=(e \Gamma 1) ß 1:582 for both problems. 1 Introduction T ..."
Abstract

Cited by 16 (5 self)
We present a lower bound of 1 + e^(−1/2) ≈ 1.6065 on the competitive ratio of randomized algorithms for the weighted 2-cache problem, which is a special case of the 2-server problem. This improves the previously best known lower bound of e/(e − 1) ≈ 1.582 for both problems.

1 Introduction

The k-server problem is defined as follows: we are given k mobile servers that reside in a metric space M. At every time step, a request r ∈ M is read, and in order to satisfy the request, we must move one server to the request point. Our cost is the total distance traveled by the servers. The choice of server at each step must be made online, i.e., without knowledge of future requests, and therefore (except for some degenerate situations) no algorithm for scheduling the server movement can achieve the optimal cost on each request sequence. An online algorithm A is said to be C-competitive if there is a constant a, dependent only on the initial configuration, such that the cost incurred by ...
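The definition of C-competitiveness stated above translates directly into a predicate (illustrative only, with made-up sample costs): A is C-competitive if cost_A(σ) ≤ C · cost_OPT(σ) + a for every request sequence σ, with the additive constant a independent of σ.

```python
def is_c_competitive(pairs, C, a):
    """pairs: (online_cost, optimal_cost) measured on sample request sequences."""
    return all(alg <= C * opt + a for alg, opt in pairs)

# Hypothetical measurements: each online cost stays within 2 * OPT + 1.
samples = [(10, 6), (21, 10), (7, 4)]
print(is_c_competitive(samples, C=2, a=1))
```

A lower bound such as the paper's 1 + e^(−1/2) is proved by exhibiting request distributions on which no randomized algorithm can satisfy this inequality for any smaller C.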
A Ramsey-type Theorem for Metric Spaces and its Applications for Metrical Task Systems and Related Problems
 In 42nd Annual IEEE Symposium on Foundations of Computer Science
, 2001
"... This paper gives a nearly logarithmic lower bound on the randomized competitive ratio for the Metrical Task Systems model [BLS92]. This implies a similar lower bound for the extensively studied Kserver problem. Our proof is based on proving a Ramseytype theorem for metric spaces. In particular we ..."
Abstract

Cited by 15 (5 self)
This paper gives a nearly logarithmic lower bound on the randomized competitive ratio for the Metrical Task Systems model [BLS92]. This implies a similar lower bound for the extensively studied k-server problem. Our proof is based on proving a Ramsey-type theorem for metric spaces. In particular we prove that in every metric space there exists a large subspace which is approximately a "hierarchically well-separated tree" (HST) [Bar96]. This theorem may be of independent interest.
Online competitive algorithms for maximizing weighted throughput of unit jobs
 In Proc. 21st Symp. on Theoretical Aspects of Computer Science (STACS
, 2004
"... Abstract. We study an online buffer management problem for networks supporting QualityofService (QoS) applications. Packets with different QoS values arrive at a network switch and are to be sent along an outgoing link. Due to overloading conditions, some packets have to be dropped. The objective ..."
Abstract

Cited by 14 (3 self)
We study an online buffer management problem for networks supporting Quality-of-Service (QoS) applications. Packets with different QoS values arrive at a network switch and are to be sent along an outgoing link. Due to overloading conditions, some packets have to be dropped. The objective is to maximize the total value of packets that are sent. We formulate this as an online scheduling problem for unit-length jobs, where each job is specified by its release time, deadline, and a nonnegative weight (QoS value). The goal is to maximize the weighted throughput, that is, the total weight of scheduled jobs. We first give a randomized algorithm RMix with competitive ratio of e/(e − 1) ≈ 1.582. This is the first algorithm for this problem with competitive ratio smaller than 2. Then we consider s-bounded instances where the span of each job (deadline minus release time) is at most s. We give a 1.25-competitive randomized algorithm for 2-bounded instances, matching the known lower bound. We give a deterministic algorithm EDFα, whose competitive ratio on s-bounded instances is at most 2 − 2/s + o(1/s). For 3-bounded instances its ratio is φ ≈ 1.618, matching the lower bound. Previously, an upper bound of φ was known for 2-bounded instances, and our work extends this result. Next, we consider 2-uniform instances, where the span of each job is exactly 2. We prove a lower bound of 4 − 2√2 ≈ 1.172 for randomized algorithms. For deterministic memoryless algorithms, we prove a lower bound of √2 ≈ 1.414, matching a known upper bound. Finally, we consider the multiprocessor case and give a 1/(1 − (M/(M+1))^M)-competitive algorithm for M processors. We also show improved lower bounds for the general and 2-uniform cases.
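The natural baseline the paper improves on is the greedy rule, known to be 2-competitive for this problem: at each unit time slot, run the pending job of maximum weight. A sketch (not the paper's RMix or EDFα algorithms), with jobs as (release, deadline, weight) triples where a job may run in any slot t with release ≤ t < deadline:

```python
def greedy_throughput(jobs, horizon):
    """Weighted throughput of the max-weight greedy rule over integer slots."""
    done = set()
    total = 0
    for t in range(horizon):
        # jobs released, not yet expired, and not yet scheduled
        avail = [(w, i) for i, (r, d, w) in enumerate(jobs)
                 if r <= t < d and i not in done]
        if avail:
            w, i = max(avail)            # run the heaviest pending job
            done.add(i)
            total += w
    return total

# Greedy runs the weight-5 job first and loses the weight-4 job whose
# deadline expires; an optimal schedule earns 4 + 5 = 9 instead.
jobs = [(0, 1, 4), (0, 2, 5), (1, 2, 1)]
print(greedy_throughput(jobs, horizon=2))
```

Instances like this, where urgency and weight pull in opposite directions, are exactly what the randomized and deadline-aware deterministic algorithms in the paper are designed to handle better.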