Results 1–10 of 28
Cost-Aware WWW Proxy Caching Algorithms
 In Proceedings of the 1997 USENIX Symposium on Internet Technologies and Systems
, 1997
Abstract

Cited by 468 (6 self)
Web caches can not only reduce network traffic and downloading latency, but can also affect the distribution of web traffic over the network through cost-aware caching. This paper introduces GreedyDual-Size, which incorporates locality with cost and size concerns in a simple and non-parameterized fashion for high performance. Trace-driven simulations show that with the appropriate cost definition, GreedyDual-Size outperforms existing web cache replacement algorithms in many aspects, including hit ratios, latency reduction and network cost reduction. In addition, GreedyDual-Size can potentially improve the performance of main-memory caching of Web documents.
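The replacement rule behind GreedyDual-Size is compact enough to sketch. The following is a minimal illustration of the policy as described in the abstract; the class and method names, and the interface that passes cost and size with each request, are ours, not the paper's:

```python
class GreedyDualSize:
    """Minimal sketch of GreedyDual-Size replacement.

    Each cached document p carries a value H(p) = L + cost(p) / size(p),
    where L is a global inflation value. On eviction the document with
    the smallest H is removed and L is raised to that minimum, so that
    documents age gracefully without any tunable parameter.
    """

    def __init__(self, capacity):
        self.capacity = capacity  # total size budget
        self.used = 0
        self.L = 0.0
        self.H = {}               # doc -> H value
        self.size = {}            # doc -> size

    def request(self, doc, size, cost):
        """Serve one request; return True on a hit, False on a miss."""
        if doc in self.H:                        # hit: refresh H with current L
            self.H[doc] = self.L + cost / size
            return True
        while self.used + size > self.capacity and self.H:
            victim = min(self.H, key=self.H.get)
            self.L = self.H.pop(victim)          # inflate L to the evicted H
            self.used -= self.size.pop(victim)
        if size <= self.capacity:                # admit the new document
            self.H[doc] = self.L + cost / size
            self.size[doc] = size
            self.used += size
        return False
```

With uniform costs this degenerates toward size-aware LRU; plugging in latency or network cost as `cost` yields the cost-aware variants the abstract evaluates.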
Optimal Prefetching via Data Compression
, 1995
Abstract

Cited by 236 (11 self)
Caching and prefetching are important mechanisms for speeding up access time to data on secondary storage. Recent work in competitive online algorithms has uncovered several promising new algorithms for caching. In this paper we apply a form of the competitive philosophy for the first time to the problem of prefetching to develop an optimal universal prefetcher in terms of fault ratio, with particular applications to large-scale databases and hypertext systems. Our prediction algorithms for prefetching are novel in that they are based on data compression techniques that are both theoretically optimal and good in practice. Intuitively, in order to compress data effectively, you have to be able to predict future data well, and thus good data compressors should be able to predict well for purposes of prefetching. We show for powerful models such as Markov sources and nth order Markov sources that the page fault rates incurred by our prefetching algorithms are optimal in the limit for almost all sequences of page requests.
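The compression-to-prediction intuition can be made concrete with the simplest possible model. The sketch below is a first-order Markov predictor: the successor counts it maintains are exactly what an order-1 arithmetic coder would use to compress the request stream, and the highest-probability successor is the page to prefetch. The paper's universal prefetcher uses stronger models; this class and its method names are illustrative only:

```python
from collections import defaultdict, Counter

class MarkovPrefetcher:
    """Order-1 Markov predictor in the spirit of compression-based
    prefetching: predict the most frequent successor of the current page."""

    def __init__(self):
        self.successors = defaultdict(Counter)  # page -> Counter of next pages
        self.prev = None

    def access(self, page):
        """Record an access; return the page to prefetch next (or None)."""
        if self.prev is not None:
            self.successors[self.prev][page] += 1
        self.prev = page
        counts = self.successors[page]
        return counts.most_common(1)[0][0] if counts else None
```

On the alternating stream a, b, a, b, ... the predictor locks onto the pattern after a single repetition, mirroring how a compressor's model sharpens as it sees more data.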
Efficient On-Line Call Control Algorithms
, 1993
Abstract

Cited by 77 (2 self)
We study the problem of online call control, i.e., the problem of accepting or rejecting an incoming call without knowledge of future calls. The problem is a part of the more general problem of bandwidth allocation and management. Intuition suggests that knowledge of future call arrivals can be crucial to the performance of the system. In this paper, however, we present preemptive deterministic online call control algorithms. We use competitive analysis to measure their performance, i.e., we compare our algorithms to their offline, clairvoyant counterparts, and prove optimality for some of them.
Competitive Analysis of Randomized Paging Algorithms
, 2000
Abstract

Cited by 62 (9 self)
The paging problem is defined as follows: we are given a two-level memory system, in which one level is a fast memory, called cache, capable of holding k items, and the second level is an unbounded but slow memory. At each given time step, a request to an item is issued. Given a request to an item p, a miss occurs if p is not present in the fast memory. In response to a miss, we need to choose an item q in the cache and replace it by p. The choice of q needs to be made online, without the knowledge of future requests. The objective is to design a replacement strategy with a small number of misses. In this paper we use competitive analysis to study the performance of randomized online paging algorithms. Our goal is to show how the concept of work functions, used previously mostly for the analysis of deterministic algorithms, can also be applied, in a systematic fashion, to the randomized case. We present two results: we first show that the competitive ratio of the marking algorithm is ex...
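The marking algorithm the abstract analyses can be sketched in a few lines. The function name and interface below are ours: requested items are marked, a miss evicts a uniformly random unmarked item, and when every cached item is marked a new phase begins with all marks cleared:

```python
import random

def marking_paging(requests, k, seed=0):
    """Randomized marking algorithm for paging with a cache of k items.
    Returns the number of misses on the given request sequence."""
    rng = random.Random(seed)
    cache, marked, misses = set(), set(), 0
    for p in requests:
        if p not in cache:
            misses += 1
            if len(cache) == k:
                unmarked = cache - marked
                if not unmarked:          # everything marked: new phase
                    marked.clear()
                    unmarked = set(cache)
                # sorted() only fixes an iteration order so runs are
                # reproducible under a fixed seed; the choice is uniform
                victim = rng.choice(sorted(unmarked))
                cache.discard(victim)
                marked.discard(victim)
        cache.add(p)
        marked.add(p)
    return misses
```

Randomizing the eviction among unmarked items is what lifts the algorithm past the deterministic lower bound of k; it is known to be O(log k)-competitive.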
Complexity analysis of real-time reinforcement learning applied to finding shortest paths in deterministic domains
, 1992
Abstract

Cited by 43 (4 self)
This paper analyzes the complexity of online reinforcement learning algorithms, namely asynchronous real-time versions of Q-learning and value-iteration, applied to the problem of reaching a goal state in deterministic domains. Previous work had concluded that, in many cases, tabula rasa reinforcement learning was exponential for such problems, or was tractable only if the learning algorithm was augmented. We show that, to the contrary, the algorithms are tractable with only a simple change in the task representation or initialization. We provide tight bounds on the worst-case complexity, and show how the complexity is even smaller if the reinforcement learning algorithms have initial knowledge of the topology of the state space or the domain has certain special properties. We also present a novel bidirectional Q-learning algorithm to find optimal paths from all states to a goal state and show that it is no more complex than the other algorithms.
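The "simple change in representation or initialization" can be illustrated concretely. In a deterministic domain with an action-penalty representation (cost 1 per move, value 0 at the goal) and zero-initialized Q-values, a learning rate of 1 reduces the Q-learning update to Q(s, a) = 1 + min_b Q(s', b). The sketch below applies that update in synchronous sweeps as a stand-in for the paper's asynchronous real-time updates; the function name, graph encoding, and sweep schedule are our simplifications:

```python
def q_shortest_paths(graph, goal, sweeps=10):
    """Tabular Q-learning specialised to a deterministic domain.

    graph maps each state to a list of successor states (one per action).
    Q-values start at zero (an admissible initialisation); each sweep
    applies Q(s,a) = 1 + min_b Q(s',b), with future value 0 at the goal.
    Returns the greedy value of each state, which converges to its
    shortest-path distance to the goal.
    """
    Q = {(s, a): 0.0 for s in graph for a in range(len(graph[s]))}
    for _ in range(sweeps):
        for s in graph:
            for a, s2 in enumerate(graph[s]):
                future = 0.0 if s2 == goal else min(
                    Q[(s2, b)] for b in range(len(graph[s2])))
                Q[(s, a)] = 1.0 + future
    dist = {s: min(Q[(s, a)] for a in range(len(graph[s])))
            for s in graph if graph[s]}
    dist[goal] = 0.0
    return dist
```

Because the zero initialization never overestimates the remaining cost, values only grow toward the true distances, which is the mechanism behind the tractability result.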
Vision-Based Motion Planning and Exploration Algorithms for Mobile Robots
 Workshop on the Algorithmic Foundations of Robotics
, 1999
Abstract

Cited by 38 (1 self)
This paper considers the problem of systematically exploring an unfamiliar environment in search of one or more recognizable targets. The proposed exploration algorithm is based on a novel representation of environments containing visual landmarks called the boundary place graph. This representation records the set of recognizable objects (landmarks) that are visible from the boundary of each configuration space obstacle. No metric information about the scene geometry is recorded nor are explicit prescriptions for moving between places stored. The exploration algorithm constructs the boundary place graph incrementally from sensor data. Once the robot has completely explored an environment, it can use the constructed representation to carry out further navigation tasks. In order to precisely characterize the set of environments in which this algorithm is expected to succeed, we provide a necessary and sufficient condition under which the algorithm is guaranteed to discover all landmarks. This algorithm has been implemented on our mobile robot platform RJ, and results from these experiments are presented. Importantly, this research demonstrates that it is possible to design and implement provably correct exploration and navigation algorithms that do not require global positioning systems or metric representations of the environment. Keywords: exploration, navigation, mobile robots, landmarks.
Randomized Robot Navigation Algorithms
, 1996
Abstract

Cited by 30 (0 self)
We consider the problem faced by a mobile robot that has to reach a given target by traveling through an unmapped region in the plane containing oriented rectangular obstacles. We assume the robot has no prior knowledge about the positions or sizes of the obstacles, and acquires such knowledge only when obstacles are encountered. Our goal is to minimize the distance the robot must travel, using the competitive ratio as our measure. We give a new randomized algorithm...
Optimal Prediction for Prefetching in the Worst Case
, 1998
Abstract

Cited by 27 (7 self)
Response time delays caused by I/O are a major problem in many systems and database applications. Prefetching and cache replacement methods are attracting renewed attention because of their success in avoiding costly I/Os. Prefetching can be looked upon as a type of online sequential prediction, where the predictions must be accurate as well as made in a computationally efficient way. Unlike other online problems, prefetching cannot admit a competitive analysis, since the optimal offline prefetcher incurs no cost when it knows the future page requests. Previous analytical work on prefetching [J. Assoc. Comput. Mach., 43 (1996), pp. 771–793] consisted of modeling the user as a probabilistic Markov source. In this paper, we look at the much stronger form of worst-case analysis and derive a randomized algorithm for pure prefetching. We compare our algorithm for every page request sequence with the important class of finite state prefetchers, making no assumptions as to how the sequence of page requests is generated. We prove analytically that the fault rate of our online prefetching algorithm converges almost surely for every page request sequence to the fault rate of the optimal finite state prefetcher for the sequence. This analysis model can be looked upon as a generalization of the competitive framework, in that it compares an online algorithm in a worst-case manner over all sequences with a powerful yet non-clairvoyant opponent. We simultaneously achieve the computational goal of implementing our prefetcher in optimal constant expected time per prefetched page using the optimal dynamic discrete random variate generator of Matias, Vitter, and Ni [Proc. 4th Annual SIAM/ACM
Competitiveness via Consensus
, 2002
Abstract

Cited by 21 (7 self)
We introduce Consensus Revenue Estimate (CORE) auctions. This is a class of competitive auctions that is interesting for several reasons. One auction from this class achieves a better competitive ratio than any previously known auction. Another one uses only two random bits, whereas the previously known competitive auctions on n bidders use n random bits. A parameterized CORE auction performs better than the previous auctions in the context of mass-market goods, such as digital goods. The improved performance is due to the consensus estimate technique that allows more information to be extracted from the input. This technique is very natural and may be useful in other contexts.
Preemptive online scheduling for two uniform processors
 Oper. Res. Lett.
, 1998
Abstract

Cited by 20 (0 self)
This paper considers the problem of preemptive online scheduling for two uniform processors in which one of the processors has speed 1 and the other has speed s >= 1. The objective is to minimize the makespan. A best possible algorithm with competitive ratio of (1+s)^2/(1+s+s^2) is proposed for this problem. Keywords: Online scheduling; Competitive ratio; Preemption; Uniform processors
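The stated bound is easy to sanity-check numerically; the helper below is ours, not the paper's. At s = 1 it recovers 4/3, the known best ratio for two identical machines, and it tends to 1 as s grows, since the fast processor increasingly dominates:

```python
def competitive_ratio(s):
    """Best possible competitive ratio (1+s)^2 / (1+s+s^2) from the
    abstract above, for two uniform processors with speeds 1 and s >= 1."""
    return (1 + s) ** 2 / (1 + s + s * s)
```

For example, `competitive_ratio(1)` evaluates to 4/3 and `competitive_ratio(2)` to 9/7.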