Results 1 – 7 of 7
Reordering buffer management for nonuniform cost models
 In Proceedings of the 32nd International Colloquium on Automata, Languages and Programming (ICALP), 2005
Abstract

Cited by 26 (9 self)
Abstract. A sequence of objects, each characterized by its color, has to be processed. The processing order influences how efficiently they can be processed: each color change between two consecutive objects incurs a nonuniform cost. A reordering buffer, which is a random access buffer with storage capacity for k objects, can be used to rearrange this sequence in such a way that the total cost is minimized. This concept is useful for many applications in computer science and economics. We show that a reordering buffer reduces the cost of each sequence by a factor of at most 2k − 1. This result even holds for cost functions modeled by arbitrary metric spaces. In addition, a matching lower bound is presented. From this bound it follows that every strategy that does not increase the cost of a sequence is at least (2k − 1)-competitive. As our main result, we present the deterministic Maximum Adjusted Penalty (MAP) strategy, which is O(log k)-competitive. Previous strategies only achieve a competitive ratio of k in the nonuniform model. For the upper bound on MAP, we introduce a basic proof technique. We believe that this technique can be interesting for other problems.
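To make the cost model concrete, here is a minimal sketch of the setting (the greedy rule and function name are our own illustrative assumptions, not the paper's MAP strategy): a buffer of size k holds incoming objects, the strategy keeps emitting the current color while it is buffered and otherwise switches to the most frequent buffered color, and each color change is charged by a cost function (uniform cost 1 by default).

```python
from itertools import islice

def buffered_cost(sequence, k, cost=lambda a, b: 1):
    """Process a color sequence through a reordering buffer of size k.

    Greedy rule (illustrative only, not the paper's MAP strategy):
    keep emitting the current color while it is buffered; otherwise
    switch to the most frequent buffered color and pay `cost` for
    the color change. Returns the total color-change cost."""
    it = iter(sequence)
    buf = list(islice(it, k))       # fill the buffer
    total, current = 0, None
    while buf:
        if current in buf:
            choice = current        # same color: no cost
        else:
            choice = max(set(buf), key=buf.count)   # most frequent color
            if current is not None:
                total += cost(current, choice)      # pay for color change
            current = choice
        buf.remove(choice)
        buf.extend(islice(it, 1))   # pull the next object, if any
    return total
```

On the sequence "ababab", a buffer of size 3 lets this greedy pay a single color change, whereas a buffer of size 1 (no useful reordering) forces five.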
The power of reordering for online minimum makespan scheduling
 In Proc. 49th FOCS
Abstract

Cited by 14 (2 self)
In the classic minimum makespan scheduling problem, we are given an input sequence of jobs with processing times. A scheduling algorithm has to assign the jobs to m parallel machines. The objective is to minimize the makespan, which is the time it takes until all jobs are processed. In this paper, we consider online scheduling algorithms without preemption. However, we do not require that each arriving job has to be assigned immediately to one of the machines. A reordering buffer with limited storage capacity can be used to reorder the input sequence in a restricted fashion so as to schedule the jobs with a smaller makespan. This is a natural extension of lookahead. We present an extensive study of the power and limits of online reordering for minimum makespan scheduling. As our main result, we give, for m identical machines, tight and, in comparison to the problem without reordering, much improved bounds on the competitive ratio for minimum makespan scheduling with reordering buffers. Depending on m, the achieved competitive ratio lies between 4/3 and 1.4659. This optimal ratio is achieved with a buffer of size Θ(m). We show that larger buffer sizes do not result in an additional advantage and that a buffer of size Ω(m) is necessary to achieve this competitive ratio. Further, we present several algorithms for different buffer sizes. Among others, we introduce, for every buffer size k ∈ [1, (m + 1)/2], a (2 − 1/(m − k + 1))-competitive algorithm, which nicely generalizes the well-known result of Graham. For m uniformly related machines, we give a scheduling algorithm that achieves a competitive ratio of 2 with a reordering buffer of size m. Considering that the best known competitive ratio for uniformly related machines without reordering is 5.828, this result emphasizes the power of online reordering even further. (Supported by DFG grant WE 2842/1.)
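As one concrete instance of such a buffer strategy, here is a hedged sketch (the function name and details are our own, not necessarily the paper's algorithm) of a natural rule in the spirit of the generalized Graham result: keep the k largest jobs seen so far in the buffer, assign every evicted job greedily to the least loaded machine (Graham's list-scheduling rule), and flush the buffered jobs largest-first at the end.

```python
import heapq

def schedule_with_buffer(jobs, m, k):
    """Online list scheduling on m identical machines with a
    reordering buffer of size k (illustrative sketch): keep the k
    largest jobs seen so far buffered, assign each evicted job to
    the least loaded machine, and flush the buffer largest-first
    at the end. Returns the resulting makespan."""
    loads = [0] * m                 # min-heap of machine loads
    heapq.heapify(loads)
    buf = []                        # min-heap holding the k largest jobs

    def assign(p):
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + p)

    for p in jobs:
        heapq.heappush(buf, p)
        if len(buf) > k:
            assign(heapq.heappop(buf))   # evict the smallest buffered job
    for p in sorted(buf, reverse=True):  # flush largest-first (LPT order)
        assign(p)
    return max(loads)
```

For example, on jobs [1, 1, 1, 1, 4] with m = 2, a buffer of size 2 yields makespan 5, while size 1 yields 6; the optimum is 4.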
New results on web caching with request reordering
 In SPAA: Annual ACM Symposium on Parallel Algorithms and Architectures, ACM
Abstract

Cited by 11 (0 self)
We study web caching with request reordering. The goal is to maintain a cache of web documents so that a sequence of requests can be served at low cost. To improve cache hit rates, a limited reordering of requests is allowed. Feder et al. [6], who recently introduced this problem, considered caches of size 1, i.e., caches that can store one document. They presented an offline algorithm based on dynamic programming as well as online algorithms that achieve constant factor competitive ratios. For arbitrary cache sizes, Feder et al. [7] gave online strategies that have nearly optimal competitive ratios in several cost models. In this paper we first present a deterministic online algorithm that achieves optimal competitiveness for the most general cost model and all cache sizes. We then investigate the offline problem, which is NP-hard in general. We develop the first polynomial time algorithms that can manage arbitrary cache sizes. Our strategies achieve small constant factor approximation ratios. The algorithms are based on a general technique that reduces web caching with request reordering to a problem of computing batched service schedules. Our approximation result for the Fault Model also improves upon the best previous approximation guarantee known for web caching without request reordering.
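The following toy sketch illustrates the setting in the Fault Model (unit cost per miss); the batching scheme, LRU eviction, and names are our own assumptions, not the paper's algorithms. Requests within a window may be reordered, so a batch can serve its cache hits before its misses:

```python
from collections import OrderedDict

def faults_with_reordering(requests, cache_size, window):
    """Count cache faults (Fault Model: unit miss cost) when requests
    may be reordered within a window of `window` consecutive requests.
    Illustrative sketch: within each window, serve hits first, then
    serve misses with LRU eviction."""
    cache = OrderedDict()           # doc -> None, kept in LRU order
    faults = 0
    for i in range(0, len(requests), window):
        batch = requests[i:i + window]
        hits = [d for d in batch if d in cache]
        misses = [d for d in batch if d not in cache]
        for d in hits:              # reordering lets hits jump ahead
            cache.move_to_end(d)    # refresh LRU position
        for d in misses:
            if d not in cache:      # later duplicates in the batch hit
                faults += 1
                if len(cache) >= cache_size:
                    cache.popitem(last=False)   # evict LRU document
                cache[d] = None
            else:
                cache.move_to_end(d)
    return faults
```

On the request sequence a, b, a, b with a cache of size 1, a reordering window of 2 saves one fault (3 instead of 4) because the second window serves its hit before evicting.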
Variations on the Theme of Caching
Abstract
This thesis is concerned with caching algorithms. We investigate three variations of the caching problem: web caching in the Torng framework, relative competitiveness, and caching with request reordering. In the first variation, we define different cost models involving page sizes and page costs. We also present the Torng cost framework introduced by Torng in [29]. We analyze the competitive ratio of deterministic online marking algorithms in the Bit cost model combined with the Torng framework. We show that, given some specific restrictions on the set of possible request sequences, any marking algorithm is 2-competitive. The second variation consists in using the relative competitive ratio on an access graph as a complexity measure. We use the concept of access graphs introduced by Borodin
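For reference, the class of marking algorithms mentioned above can be sketched as follows (a generic paging formulation with uniform costs, our own simplification of the Bit/Torng setting analyzed in the thesis): pages are marked when accessed, and a fault that finds every cached page marked clears all marks and starts a new phase. Any rule for choosing which unmarked page to evict yields a marking algorithm.

```python
def marking_faults(requests, k, evict=lambda unmarked: next(iter(unmarked))):
    """Generic marking algorithm for paging with cache size k
    (illustrative sketch). Pages are marked on access; a fault that
    finds every cached page marked starts a new phase (all marks
    cleared). `evict` may pick any unmarked page, so each choice of
    `evict` gives a different marking algorithm. Returns the number
    of faults."""
    cache, marked = set(), set()
    faults = 0
    for p in requests:
        if p in cache:
            marked.add(p)           # hit: mark the page
            continue
        faults += 1
        if len(cache) >= k:
            if marked >= cache:     # every cached page marked: new phase
                marked = set()
            cache.remove(evict(cache - marked))
        cache.add(p)
        marked.add(p)
    return faults
```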
On the Remote Server Problem or More about TCP Acknowledgments
Abstract
Abstract. We study an online problem that is motivated by service call management in a remote support center. When a customer calls the remote support center of a software company, a technician opens a service request and assigns it a severity rating. This request is then transferred to the appropriate Support Engineer (SE), who establishes a connection to the customer's site and uses remote diagnostic capabilities to resolve the problem. We assume that the SE can serve at most one customer at a time and that the service time of a request is negligible. There is a constant setup cost for creating a new connection to a customer's site and a per-request cost for delaying its service that depends on the severity of the request. The problem is to decide which customers to serve first so as to minimize the incurred cost. Even with just two customers, this problem is a natural generalization of the TCP acknowledgment problem. For the online version of the Remote Server Problem (RSP), we present algorithms for the general case and for the special case of two customers that achieve competitive ratios of exactly 4 and 3, respectively. We also show that no deterministic online algorithm can have a competitive ratio better than 3. We then study generalized versions of our model: the case of an asymmetric setup cost function and the case of multiple SEs. For the offline version of the RSP, we derive an optimal algorithm with polynomial running time for a constant number of customers.
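The single-customer special case is the TCP acknowledgment problem, for which a classical threshold rule works: open a connection as soon as the accumulated delay cost of pending requests reaches the setup cost. The sketch below (our own illustration with unit delay rates, not the paper's multi-customer algorithm) computes the connection times under this rule:

```python
def connection_times(arrivals, setup_cost):
    """TCP-acknowledgment-style threshold rule for one customer
    (illustrative sketch; the RSP generalizes this to several
    customers and severities). Each pending request accrues delay
    cost at unit rate from its arrival time; a connection is opened
    as soon as the total pending delay reaches `setup_cost`.
    Returns the times at which connections are opened."""
    pending, times = [], []
    for nxt in sorted(arrivals) + [float('inf')]:
        # Before the next arrival, check whether the threshold is hit:
        # total delay at time t is n*t - sum(pending); solve for t.
        if pending:
            t = (setup_cost + sum(pending)) / len(pending)
            if t <= nxt:
                times.append(t)     # connect, serving all pending requests
                pending = []
        if nxt != float('inf'):
            pending.append(nxt)
    return times
```

For a setup cost of 2, a lone request at time 0 is served at time 2, while requests at times 0 and 1 are batched and served together at time 1.5, when their combined delay reaches the setup cost.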