Results 1–10 of 17
Cost-Aware WWW Proxy Caching Algorithms
 In Proceedings of the 1997 USENIX Symposium on Internet Technologies and Systems
, 1997
Abstract

Cited by 475 (6 self)
Web caches can not only reduce network traffic and downloading latency, but can also affect the distribution of web traffic over the network through cost-aware caching. This paper introduces GreedyDual-Size, which incorporates locality with cost and size concerns in a simple and non-parameterized fashion for high performance. Trace-driven simulations show that with the appropriate cost definition, GreedyDual-Size outperforms existing web cache replacement algorithms in many aspects, including hit ratios, latency reduction and network cost reduction. In addition, GreedyDual-Size can potentially improve the performance of main-memory caching of Web documents.
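The priority rule behind GreedyDual-Size can be sketched in a few lines: each cached object p gets priority H(p) = L + cost(p)/size(p), where L is an "inflation" value raised to the evicted object's priority on each eviction. The sketch below is a minimal illustration of that idea; the class and method names are ours, not the paper's.

```python
class GreedyDualSizeCache:
    """Minimal GreedyDual-Size sketch: priority H = L + cost/size."""

    def __init__(self, capacity):
        self.capacity = capacity  # total size budget of the cache
        self.used = 0             # total size of cached objects
        self.L = 0.0              # inflation value ("clock")
        self.items = {}           # key -> (priority H, size, cost)

    def access(self, key, size, cost):
        """Process one request; return True on a hit, False on a miss."""
        if key in self.items:
            _, s, c = self.items[key]
            self.items[key] = (self.L + c / s, s, c)  # refresh priority on a hit
            return True
        # Miss: evict lowest-priority objects until the new object fits.
        while self.used + size > self.capacity and self.items:
            victim = min(self.items, key=lambda k: self.items[k][0])
            self.L = self.items[victim][0]  # inflate the clock to the victim's H
            self.used -= self.items[victim][1]
            del self.items[victim]
        if size <= self.capacity:
            self.items[key] = (self.L + cost / size, size, cost)
            self.used += size
        return False
```

With uniform cost this favors small objects (higher cost/size), while the inflation value L gives recently touched objects a recency advantage, which is how locality, cost, and size combine without tunable parameters.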
Online file caching
 In Proc. of the 9th Annual ACM-SIAM Symp. on Discrete Algorithms
, 1998
Abstract

Cited by 69 (2 self)
Consider the following file caching problem: in response to a sequence of requests for files, where each file has a specified size and retrieval cost, maintain a cache of files of total size at most some specified k so as to minimize the total retrieval cost. Specifically, when a requested file is not in the cache, bring it into the cache, pay the retrieval cost, and choose files to remove from the cache so that the total size of files in the cache is at most k. This problem generalizes previous paging and caching problems by allowing objects of arbitrary size and cost, both important attributes when caching files for world-wide-web browsers, servers, and proxies. We give a simple deterministic online algorithm that generalizes many well-known paging and weighted-caching strategies, including least-recently-used, first-in-first-out, ...
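One common reading of the algorithm described here is the Landlord credit scheme: each cached file holds credit, a miss charges "rent" proportional to size until enough files expire, and the hit-time credit refresh rule determines which classic policy (LRU-like, FIFO-like) falls out. The sketch below is a hedged illustration under that reading; the function name and data layout are our own.

```python
def landlord_access(cache, capacity, key, size, cost):
    """One access in a Landlord-style credit scheme (illustrative sketch).

    cache: dict key -> {'size': s, 'cost': c, 'credit': cr}, mutated in place.
    Returns True on a hit, False on a miss.
    """
    if key in cache:
        cache[key]['credit'] = cache[key]['cost']  # refresh credit to full cost
        return True
    # Miss: charge every file rent proportional to its size until enough expire.
    while sum(f['size'] for f in cache.values()) + size > capacity and cache:
        delta = min(f['credit'] / f['size'] for f in cache.values())
        for f in cache.values():
            f['credit'] -= delta * f['size']
        for k in [k for k, f in cache.items() if f['credit'] <= 1e-12]:
            del cache[k]  # evict files whose credit has run out
    if size <= capacity:
        cache[key] = {'size': size, 'cost': cost, 'credit': cost}
    return False
```

With unit sizes and unit costs this behaves like LRU; varying the refresh rule on a hit recovers other members of the family, which is the sense in which one algorithm "generalizes" the classic strategies.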
Competitive Analysis of Randomized Paging Algorithms
, 2000
Abstract

Cited by 62 (9 self)
The paging problem is defined as follows: we are given a two-level memory system, in which one level is a fast memory, called cache, capable of holding k items, and the second level is an unbounded but slow memory. At each given time step, a request to an item is issued. Given a request to an item p, a miss occurs if p is not present in the fast memory. In response to a miss, we need to choose an item q in the cache and replace it by p. The choice of q needs to be made online, without the knowledge of future requests. The objective is to design a replacement strategy with a small number of misses. In this paper we use competitive analysis to study the performance of randomized online paging algorithms. Our goal is to show how the concept of work functions, used previously mostly for the analysis of deterministic algorithms, can also be applied, in a systematic fashion, to the randomized case. We present two results: we first show that the competitive ratio of the marking algorithm is ex...
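The marking algorithm analyzed here works in phases: pages are marked as they are requested, misses evict a uniformly random unmarked page, and when every cached page is marked a new phase begins with all marks cleared. A minimal sketch (names are illustrative; `sorted()` only makes the random choice act on a stable sequence):

```python
import random

def marking_access(cache, marked, k, page):
    """One step of the randomized marking algorithm.

    cache, marked: sets of pages, mutated in place; k: cache capacity.
    Returns True on a hit, False on a miss.
    """
    if page in cache:
        marked.add(page)
        return True
    if len(cache) >= k:               # cache full: must evict
        unmarked = cache - marked
        if not unmarked:              # everything marked: a new phase begins
            marked.clear()
            unmarked = set(cache)
        victim = random.choice(sorted(unmarked))  # uniform unmarked victim
        cache.discard(victim)
    cache.add(page)
    marked.add(page)                  # requested pages are always marked
    return False
```

The randomness over unmarked pages is exactly what the competitive analysis exploits: an adversary cannot predict which unmarked page survives, which is how the algorithm beats the deterministic lower bound of k.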
A Unified Analysis of Paging and Caching
 Algorithmica
, 1998
Abstract

Cited by 20 (0 self)
Paging (caching) is the problem of managing a two-level memory hierarchy in order to minimize the time required to process a sequence of memory accesses. In order to measure this quantity, we define the system parameter miss penalty to represent the extra time required to access slow memory. In the context of paging, the miss penalty is large, so most previous studies of online paging have implicitly set miss penalty = 1 in order to simplify the model. We show that this seemingly insignificant simplification substantially alters the precision of derived results. Consequently, we reintroduce the miss penalty to the paging problem and present a more accurate analysis of online paging (and caching). We validate this more accurate model by deriving intuitively appealing results for the paging problem which cannot be derived using the simplified model.

1 Introduction

Over the past decade, competitive analysis has been extensively used to analyze the performance of paging algorithms [20...
Page replacement with multisize pages and applications to Web caching
 in 29th ACM STOC
, 1997
Abstract

Cited by 16 (0 self)
We consider the paging problem where the pages have varying size. This problem has applications to page replacement policies for caches containing World Wide Web documents. We consider two models for the cost of an algorithm on a request sequence. In the first (the Fault model), the goal is to minimize the number of page faults. In the second (the Bit model), the goal is to minimize the total number of bits that have to be read into the cache. We show offline algorithms for both cost models that obtain approximation factors of O(log k), where k is the ratio of the size of the cache to the size of the smallest page. We show randomized online algorithms for both cost models that are O(log^2 k)-competitive. In addition, if the input sequence is generated by a known distribution, we show an algorithm for the Fault model whose expected cost is within a factor of O(log k) of any other online algorithm.
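The distinction between the two cost models can be made concrete with a toy simulation. The FIFO replacement rule and the page sizes below are our own illustrative choices, not the paper's algorithms; the point is only that the same eviction schedule is charged differently under the two models.

```python
def evaluate(trace, sizes, capacity):
    """Run FIFO replacement over variable-size pages and return the cost
    under both models: (Fault model = number of misses,
                        Bit model = total size of pages read in)."""
    cache, faults, bits = [], 0, 0
    for page in trace:
        if page in cache:
            continue                  # hit: free in both models
        faults += 1                   # Fault model: one unit per miss
        bits += sizes[page]           # Bit model: pay the page's size
        while cache and sum(sizes[p] for p in cache) + sizes[page] > capacity:
            cache.pop(0)              # evict in FIFO order
        if sizes[page] <= capacity:
            cache.append(page)
    return faults, bits

sizes = {"a": 1, "b": 4, "c": 2}      # page -> size (made-up units)
trace = ["a", "b", "a", "c", "b"]     # request sequence
```

On this trace with capacity 5, the schedule incurs 4 faults but 11 bits, so a policy that is good for one model need not be good for the other, which is why the paper treats them separately.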
Competitive Paging And Dual-Guided On-Line Weighted Caching And Matching Algorithms
, 1991
Abstract

Cited by 13 (0 self)
This thesis presents research done by the author on competitive analysis of online problems. An online problem is a problem that is given and solved one piece at a time. An online strategy for solving such a problem must give the solution to each piece knowing only the current piece and preceding pieces, in ignorance of the pieces to be given in the future. We consider online strategies that are competitive (guaranteeing solutions whose costs are within a constant factor of optimal) for several combinatorial optimization problems: paging, weighted caching, the k-server problem, and weighted matching. We introduce variations on the standard model of competitive analysis for paging: allowing randomization, allowing resource-bounded lookahead, and loose competitiveness, in which performance over a range of fast memory sizes is considered and non-competitiveness is allowed provided the fault rate is insignificant. Each variation leads to substantially better competitive ratios. We prese...
Online Algorithms: Competitive Analysis and Beyond. Algorithms and Theory of Computation Handbook
, 1999
Competitive Analysis of Paging: A Survey
 In Proceedings of the Dagstuhl Seminar on Online Algorithms, Dagstuhl
, 1996
Abstract

Cited by 3 (0 self)
This paper is a survey of competitive analysis of paging. We present proofs showing tight bounds for the competitive ratio achievable by any deterministic or randomized online algorithm. We then go on to discuss variations and refinements of the competitive ratio and the insights they give into the paging problem. Finally, we discuss variations of the online paging problem to which competitive analysis has been applied.

1 Introduction

The paging problem has inspired several decades of theoretical and applied research and has now become a classical problem in computer science. This is due to the fact that managing a two-level store of memory has long been, and continues to be, a fundamentally important problem in computing systems. The paging problem has also been one of the cornerstones in the development of the area of online algorithms. Starting with the seminal work of Sleator and Tarjan, which initiated the recent interest in the competitive analysis of online algorithms, the paging probl...
Connection Caching: Model and Algorithms
 Journal of Computer and System Sciences
, 2003
Abstract

Cited by 2 (0 self)
We introduce a theoretical model for connection caching. In our model, each host maintains (caches) a limited number of open connections to other hosts. A request may utilize an open connection, in which case it is a hit, or it may require opening a new connection, in which case it is a miss. Establishment of a new connection may force termination (eviction) of another connection at each of the endpoints. The goal is to serve the request sequence with a minimum number of misses. This model differs from the standard caching model in that it involves many caches which affect each other: a decision to terminate a connection by one node affects the cache of another node, which is forced to accept the termination. Our motivation to study the problem stems from Web applications, namely the transmission of HTTP (Hyper Text Transfer Protocol) messages over persistent TCP (Transmission Control Protocol) connections.
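The model's distinguishing feature, that closing a connection is felt at both endpoints, can be sketched as a toy simulation. The data structures and the arbitrary-victim choice below are illustrative only; the paper's algorithms are more careful about which connection to close.

```python
def request(conns, k, u, v):
    """One request for a connection between hosts u and v.

    conns: dict host -> set of open connections, each a frozenset of its
    two endpoints; k: per-host cache size (max open connections per host).
    Returns True on a hit, False on a miss. conns is mutated in place.
    """
    link = frozenset((u, v))
    if link in conns.setdefault(u, set()):
        return True                        # connection already open: hit
    for host in (u, v):                    # miss: make room at both endpoints
        conns.setdefault(host, set())
        if len(conns[host]) >= k:
            victim = next(iter(conns[host]))   # arbitrary victim connection
            for end in victim:                 # eviction affects BOTH ends
                conns.setdefault(end, set()).discard(victim)
        conns[host].add(link)
    return False
```

Note how evicting a victim removes it from the other endpoint's cache too: that coupling between caches is exactly what separates connection caching from the standard single-cache model.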
Cost-Aware Caching Algorithms for Distributed Storage Servers
Abstract

Cited by 1 (0 self)
We study replacement algorithms for non-uniform access caches that are used in distributed storage systems. Considering access latencies as the major costs of data management in such a system, we show that the total cost of any replacement algorithm is bounded by the total costs of evicted blocks plus the total cost of the optimal offline algorithm (OPT). We propose two offline heuristics, MINd and MINcod, as well as an online algorithm, HDcod, which can be run efficiently and perform well at the same time. Our simulation results with Storage Performance Council (SPC) storage server traces show that: (1) for offline workloads, MINcod performs as well as OPT in some cases, and is at most three times worse in all test cases; (2) for online workloads, HDcod performs close to the best algorithms in all cases, including the optimal online algorithm (Landlord), and is the only algorithm that performs well across all test cases. Our study suggests that the essential issue to consider is the trade-off between the costs of victim blocks and the total number of evictions, in order to effectively optimize both the efficiency and performance of distributed storage cache replacement algorithms.