Results 1 - 10 of 15
Competitive Paging With Locality of Reference
- Journal of Computer and System Sciences
, 1991
"... Abstract The Sleator-Tarjan competitive analysis of paging [Comm. of the ACM; 28:202- 208, 1985] gives us the ability to make strong theoretical statements about the performance of paging algorithms without making probabilistic assumptions on the input. Nevertheless practitioners voice reservations ..."
Abstract
-
Cited by 128 (3 self)
- Add to MetaCart
Abstract The Sleator-Tarjan competitive analysis of paging [Comm. of the ACM, 28:202-208, 1985] gives us the ability to make strong theoretical statements about the performance of paging algorithms without making probabilistic assumptions on the input. Nevertheless practitioners voice reservations about the model, citing its inability to discern between LRU and FIFO (algorithms whose performances differ markedly in practice), and the fact that the theoretical competitiveness of LRU is much larger than observed in practice. In addition, we would like to address the following important question: given some knowledge of a program's reference pattern, can we use it to improve paging performance on that program?
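The LRU/FIFO gap the abstract mentions is easy to see in a small simulation. A minimal sketch (not from the paper; the trace and cache size are illustrative assumptions):

```python
from collections import OrderedDict, deque

def faults_lru(requests, k):
    """Count page faults for LRU with a cache of k pages."""
    cache = OrderedDict()
    faults = 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)           # p becomes most recently used
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)  # evict least recently used
            cache[p] = True
    return faults

def faults_fifo(requests, k):
    """Count page faults for FIFO with a cache of k pages."""
    cache, order, faults = set(), deque(), 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache.remove(order.popleft())  # evict oldest resident page
            cache.add(p)
            order.append(p)
    return faults

# A sequence with strong locality: LRU exploits it, FIFO less so.
trace = [0, 1, 2, 0, 1, 3, 0, 1, 2, 3] * 50
print(faults_lru(trace, 3), faults_fifo(trace, 3))  # → 300 400
```

On this trace LRU incurs fewer faults than FIFO, even though both are k-competitive under the worst-case analysis.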
Strongly Competitive Algorithms for Paging with Locality of Reference
, 1995
"... What is the best paging algorithm if one has partial information about the possible sequences of page requests? We give a partial answer to this question, by presenting the analysis of strongly competitive paging algorithms in the access graph model. This model restricts page requests so that they ..."
Abstract
-
Cited by 72 (5 self)
- Add to MetaCart
What is the best paging algorithm if one has partial information about the possible sequences of page requests? We give a partial answer to this question, by presenting the analysis of strongly competitive paging algorithms in the access graph model. This model restricts page requests so that they conform to a notion of locality of reference, given by an arbitrary access graph. We first consider optimal algorithms for undirected access graphs. Borodin et al. [2] define an algorithm, called FAR, and prove that it is within a logarithmic factor of the optimal on-line algorithm. We prove that FAR is in fact strongly competitive, i.e. within a constant factor of the optimum. For directed access graphs, we present an algorithm that is strongly competitive on structured program graphs-- graphs which model a subset of the request sequences of structured programs.
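The access graph restriction described above can be sketched by generating request sequences as walks on a graph, so that consecutive requests are always adjacent. The graph below is a hypothetical example, not one from the paper:

```python
import random

# A hypothetical undirected access graph: each successive page request must
# be adjacent to the previous one, modeling locality of reference.
access_graph = {
    0: [1, 2],
    1: [0, 2],
    2: [0, 1, 3],
    3: [2, 4],
    4: [3],
}

def walk(graph, start, length, seed=0):
    """Generate a request sequence as a random walk on the access graph."""
    rng = random.Random(seed)
    page = start
    seq = [page]
    for _ in range(length - 1):
        page = rng.choice(graph[page])
        seq.append(page)
    return seq

seq = walk(access_graph, 0, 20)
# Every consecutive pair of requests is an edge of the graph.
assert all(b in access_graph[a] for a, b in zip(seq, seq[1:]))
```

Algorithms such as FAR then exploit the graph's structure (distances from cached pages to the requested page) when choosing evictions.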
MARKOV PAGING
, 2000
"... This paper considers the problemof paging under the assumption that the sequence of pages accessed is generated by a Markov chain. We use this model to study the fault-rate of paging algorithms. We first draw on the theory of Markov decision processes to characterize the paging algorithmthat achieve ..."
Abstract
-
Cited by 67 (4 self)
- Add to MetaCart
This paper considers the problem of paging under the assumption that the sequence of pages accessed is generated by a Markov chain. We use this model to study the fault-rate of paging algorithms. We first draw on the theory of Markov decision processes to characterize the paging algorithm that achieves optimal fault-rate on any Markov chain. Next, we address the problem of devising a paging strategy with low fault-rate for a given Markov chain. We show that a number of intuitive approaches fail. Our main result is a polynomial-time procedure that, on any Markov chain, will give a paging algorithm with fault-rate at most a constant times optimal. Our techniques show also that some algorithms that do poorly in practice fail in the Markov setting, despite known (good) performance guarantees when the requests are generated independently from a probability distribution.
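A minimal sketch of the Markov request model (the transition matrix and cache size are illustrative assumptions, and LRU stands in for a generic paging policy whose fault-rate we measure):

```python
import random
from collections import OrderedDict

def markov_requests(P, n, start=0, seed=1):
    """Sample a request sequence from a Markov chain with transition matrix P."""
    rng = random.Random(seed)
    states = list(range(len(P)))
    s = start
    seq = [s]
    for _ in range(n - 1):
        s = rng.choices(states, weights=P[s])[0]
        seq.append(s)
    return seq

def lru_fault_rate(requests, k):
    """Fault rate of LRU on the sequence with a k-page cache."""
    cache, faults = OrderedDict(), 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)
            cache[p] = True
    return faults / len(requests)

# A hypothetical 4-page chain biased toward staying on the current page.
P = [[0.7, 0.1, 0.1, 0.1],
     [0.1, 0.7, 0.1, 0.1],
     [0.1, 0.1, 0.7, 0.1],
     [0.1, 0.1, 0.1, 0.7]]
print(lru_fault_rate(markov_requests(P, 10_000), 2))
```

The paper's question is how close an online policy can come, on such a chain, to the fault-rate of the optimal policy derived from the Markov decision process.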
Experimental Studies of Access Graph Based Heuristics: Beating the LRU standard?
- In Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms
, 1997
"... In this paper we devise new paging heuristics motivated by the access graph model of paging [2]. Unlike the access graph model [2, 9, 4] and the related Markov paging model [11] our heuristics are truly online in that we do not assume any prior knowledge of the program just about to be executed. Th ..."
Abstract
-
Cited by 30 (2 self)
- Add to MetaCart
In this paper we devise new paging heuristics motivated by the access graph model of paging [2]. Unlike the access graph model [2, 9, 4] and the related Markov paging model [11] our heuristics are truly online in that we do not assume any prior knowledge of the program just about to be executed. The Least Recently Used heuristic for paging is remarkably good, and is known experimentally to be superior to many of the suggested alternatives on real program traces [24]. Experiments we've performed suggest that our heuristics beat LRU consistently, over a wide range of cache sizes and programs. The number of page faults can be as low as 75% less than the number of page faults for LRU and is typically 5%--30% less than that of LRU. We have built a program tracer that gives the page access sequence for real program executions of 200 -- 1,500 thousand page access requests, and our simulations are based on these real program traces. While we have no real evidence to suggest that the programs...
Can Entropy Characterize Performance of Online Algorithms?
- in Symposium on Discrete Algorithms, 2001
, 2001
"... We focus in this work on an aspect of online computation that is not addressed by the standard competitive analysis. Namely, identifying request sequences for which non-trivial online algorithms are useful versus request sequences for which all algorithms perform equally bad. The motivation for t ..."
Abstract
-
Cited by 7 (1 self)
- Add to MetaCart
(Show Context)
We focus in this work on an aspect of online computation that is not addressed by the standard competitive analysis: namely, identifying request sequences for which non-trivial online algorithms are useful versus request sequences for which all algorithms perform equally badly. The motivation for this work is advanced system and architecture designs which allow the operating system to dynamically allocate resources to online protocols such as prefetching and caching. To utilize these features the operating system needs to identify data streams that can benefit from more resources. Our approach in this work is based on the relation between entropy, compression and gambling, extensively studied in information theory. It has been shown that in some settings entropy can either fully or at least partially characterize the expected outcome of an iterative gambling game. Viewing an online problem with stochastic input as an iterative gambling game, our goal is to study the extent to which the entropy of the input characterizes the expected performance of online algorithms for problems that arise in computer applications. We study bounds based on entropy for three online problems: list accessing, prefetching and caching. We show that entropy is a good performance characterizer for prefetching, but not such a good characterizer for online caching. Our work raises several open questions in using entropy as a predictor in online computation. Computer Science Department, Brown University, Box 1910, Providence, RI 02912-1910, USA. E-mail: {gopal, eli}@cs.brown.edu. Supported in part by NSF grant CCR-9731477. A preliminary version of this paper appeared in the proceedings of the 12th annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Washington D.C., 2001.
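The entropy measure underlying this line of work can be sketched as the empirical Shannon entropy of a request stream (a simplification: the paper's bounds concern the entropy of the source, not this first-order plug-in estimate):

```python
import math
from collections import Counter

def empirical_entropy(seq):
    """Shannon entropy (bits per symbol) of the empirical symbol distribution."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A low-entropy stream is highly predictable; a uniform one is not.
assert empirical_entropy("aaaa") == 0.0                 # fully predictable
assert abs(empirical_entropy("abcd") - 2.0) < 1e-9      # uniform over 4 symbols
```

Intuitively, a prefetcher should do well on low-entropy streams, while the abstract notes that for caching the relationship is weaker.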
Paging Against a Distribution and IP Networking
, 1999
"... In this paper we consider the paging problem when the page request sequence is drawn from a distribution, and give an application to computer networking. In the IP-paging problem the page inter-request times are chosen according to independent distributions. For this model we construct a very simple ..."
Abstract
-
Cited by 3 (0 self)
- Add to MetaCart
(Show Context)
In this paper we consider the paging problem when the page request sequence is drawn from a distribution, and give an application to computer networking. In the IP-paging problem the page inter-request times are chosen according to independent distributions. For this model we construct a very simple deterministic algorithm whose page fault rate is at most 5 times that of the best online algorithm (that knows the inter-request time distributions). We also show that many other natural algorithms for this problem do not have constant competitive ratio. In distributional paging the inter-request time distributions may be dependent, and hence any probabilistic model of page request sequences can be represented. We construct a simple randomized algorithm whose page fault rate is at most 4 times that of the best online algorithm. The IP-paging problem is motivated by the following application to data networks. Next generation wide area networks are very likely to use connection-oriented prot...
Entropy-Based Bounds for Online Algorithms
- ACM TRANSACTIONS ON ALGORITHMS
"... We focus in this work on an aspect of online computation that is not addressed by the standard competitive analysis. Namely, identifying request sequences for which non-trivial online algorithms are useful versus request sequences for which all algorithms perform equally bad. The motivation for this ..."
Abstract
-
Cited by 3 (0 self)
- Add to MetaCart
We focus in this work on an aspect of online computation that is not addressed by the standard competitive analysis: namely, identifying request sequences for which non-trivial online algorithms are useful versus request sequences for which all algorithms perform equally badly. The motivation for this work is advanced system and architecture designs which allow the operating system to dynamically allocate resources to online protocols such as prefetching and caching. To utilize these features the operating system needs to identify data streams that can benefit from more resources. Our approach in this work is based on the relation between entropy, compression and gambling, extensively studied in information theory. It has been shown that in some settings entropy can either fully or at least partially characterize the expected outcome of an iterative gambling game. Our goal is to study the extent to which the entropy of the input characterizes the expected performance of online algorithms for problems that arise in computer applications. We study bounds based on entropy for three classical online problems — list accessing, prefetching, and caching. Our bounds relate the performance of the best online algorithm to the entropy,
Competitive Paging with Locality of Reference
"... The Sleator-Tarjan competitive analysis of paging [19] gives us the ability to make strong theoretical state-ments about the performance of paging algorithms without making probabilistic assumptions on the in-put. Nevertheless practitioners voice reservations about the model, citing its inability to ..."
Abstract
- Add to MetaCart
The Sleator-Tarjan competitive analysis of paging [19] gives us the ability to make strong theoretical statements about the performance of paging algorithms without making probabilistic assumptions on the input. Nevertheless practitioners voice reservations about the model, citing its inability to discern between LRU and FIFO (algorithms whose performances differ markedly in practice), and the fact that the theoretical competitiveness of LRU is much larger than observed in practice. In addition, we would like to address the following important question: given some knowledge of a program's reference pattern, can we use it to improve paging performance on that program? We address these concerns by introducing an important practical element that underlies the philosophy behind paging: locality of reference. We devise a graph-theoretical model, the access graph, for studying locality of reference. We use it to prove results that address the practical concerns mentioned above. In addition, we use our model to address the following questions: How well is LRU likely to perform on a given program? Is there a universal paging algorithm that achieves (nearly) the best possible paging performance on every program? We do so without compromising the benefits of the Sleator-Tarjan model, while bringing it closer to practice.
A Universal Online Caching Algorithm Based on Pattern Matching
"... We present a universal algorithm for the classical online problem of caching or demand paging. We consider the caching problem when the page request sequence is drawn from an unknown probability distribution and the goal is to devise an efficient algorithm whose performance is close to the optimal o ..."
Abstract
- Add to MetaCart
(Show Context)
We present a universal algorithm for the classical online problem of caching or demand paging. We consider the caching problem when the page request sequence is drawn from an unknown probability distribution and the goal is to devise an efficient algorithm whose performance is close to the optimal online algorithm which has full knowledge of the underlying distribution. Most previous works have devised such algorithms for specific classes of distributions with the assumption that the algorithm has full knowledge of the source. In this paper, we present a universal and simple algorithm based on pattern matching for mixing sources (includes Markov sources). The expected performance of our algorithm is within 4 + o(1) times the optimal online algorithm (which has full knowledge of the input model and can use unbounded resources).
Buffer Replacement Using Online Optimization by Mining
"... this report, we are only interested in a technique used for discovering association rules. The problem of mining association rules was introduced in [3] using an example in the supermarket. In the supermarket, bar-code technology has made it possible to collect the so called basket data that store i ..."
Abstract
- Add to MetaCart
In this report, we are only interested in a technique used for discovering association rules. The problem of mining association rules was introduced in [3] using an example from the supermarket. In the supermarket, bar-code technology has made it possible to collect so-called basket data that store items purchased on a per-transaction basis. An example of an association rule that can be mined from these basket data would be "30% of the transactions contain bread and butter; of these transactions, 90% also purchase milk".
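The rule in the example can be sketched as a support/confidence computation over hypothetical basket data (the baskets below are made up for illustration):

```python
# Support and confidence for the rule {bread, butter} -> {milk}.
baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "eggs"},
    {"bread", "butter", "milk", "eggs"},
    {"eggs"},
]

antecedent, consequent = {"bread", "butter"}, {"milk"}

# Baskets containing the antecedent.
with_ante = [b for b in baskets if antecedent <= b]

# Support: fraction of all baskets containing the antecedent.
support = len(with_ante) / len(baskets)

# Confidence: of those, the fraction that also contain the consequent.
confidence = sum(consequent <= b for b in with_ante) / len(with_ante)

print(support, confidence)  # 3 of 5 baskets; 2 of those 3 also contain milk
```

A mining algorithm searches for all rules whose support and confidence exceed user-chosen thresholds; a buffer-replacement policy could then prefer to keep pages that such rules predict will be accessed together.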