Results 11–20 of 127
Minimizing Stall Time in Single and Parallel Disk Systems
, 1998
"... We study integrated prefetching and caching problems following the work of Cao et. al. [3] and Kimbrel and Karlin [14]. Cao et. al. and Kimbrel and Karlin gave approximation algorithms for minimizing the total elapsed time in single and parallel disk settings. The total elapsed time is the sum of ..."
Abstract

Cited by 35 (0 self)
We study integrated prefetching and caching problems following the work of Cao et al. [3] and Kimbrel and Karlin [14], who gave approximation algorithms for minimizing the total elapsed time in single and parallel disk settings. The total elapsed time is the sum of the processor stall times and the length of the request sequence to be served. We show that an optimum prefetching/caching schedule for a single-disk problem can be computed in polynomial time, thereby settling an open question of Kimbrel and Karlin. For the parallel-disk problem we give an approximation algorithm for minimizing stall time, an important measure that is harder to approximate. All of our algorithms are based on a new approach that formulates the prefetching/caching problems as integer programs.
Page Replacement for General Caching Problems
, 1999
"... Caching (paging) is a wellstudied problem in online algorithms, usually studied under the assumption that all pages have a uniform size and a uniform fault cost (uni form caching). However, recent applications related to the web involve situations in which pages can be of different sizes and cost ..."
Abstract

Cited by 33 (3 self)
Caching (paging) is a well-studied problem in online algorithms, usually studied under the assumption that all pages have a uniform size and a uniform fault cost (uniform caching). However, recent applications related to the web involve situations in which pages can be of different sizes and costs. This general caching problem seems more intricate than the uniform version. In particular, the offline case itself is NP-hard. Only a few results exist for the general caching problem [8, 17]. This paper seeks to develop good offline page replacement policies for the general caching problem, with the hope that any insight gained here may lead to good online algorithms. Our first main result is that by using only a small amount of additional memory, say O(1) times the largest page size, we can obtain an O(1)-approximation to the general caching problem. Note that the largest page size is typically a very small fraction of the total cache size, say 1%. Our second result is that when no add...
Experimental Studies of Access Graph Based Heuristics: Beating the LRU standard?
 In Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms
, 1997
"... In this paper we devise new paging heuristics motivated by the access graph model of paging [2]. Unlike the access graph model [2, 9, 4] and the related Markov paging model [11] our heuristics are truly online in that we do not assume any prior knowledge of the program just about to be executed. Th ..."
Abstract

Cited by 30 (2 self)
In this paper we devise new paging heuristics motivated by the access graph model of paging [2]. Unlike the access graph model [2, 9, 4] and the related Markov paging model [11], our heuristics are truly online in that we do not assume any prior knowledge of the program about to be executed. The Least Recently Used heuristic for paging is remarkably good, and is known experimentally to be superior to many of the suggested alternatives on real program traces [24]. Experiments we have performed suggest that our heuristics beat LRU consistently, over a wide range of cache sizes and programs. The number of page faults can be as much as 75% less than the number of page faults for LRU and is typically 5%–30% less than that of LRU. We have built a program tracer that gives the page access sequence for real program executions of 200–1,500 thousand page access requests, and our simulations are based on these real program traces. While we have no real evidence to suggest that the programs...
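The LRU heuristic these papers benchmark against is simple to simulate; the sketch below counts page faults on a request trace. The function name and sample trace are illustrative choices of mine, not from the paper.

```python
from collections import OrderedDict

def lru_faults(requests, cache_size):
    """Count page faults for LRU paging on a request trace (minimal sketch)."""
    cache = OrderedDict()  # pages ordered from least to most recently used
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                    # fault: fetch the page
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict the least recently used
            cache[page] = True
    return faults

# A short trace with locality of reference, cache of 2 pages:
print(lru_faults([1, 1, 2, 2, 1, 3, 1, 2], 2))  # -> 4
```

On the cyclic worst-case trace 1,2,3,4,1,2,3,4 with a cache of 3 pages, the same simulator faults on every request, illustrating why competitive analysis gives LRU a pessimistic ratio.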
Optimal Prediction for Prefetching in the Worst Case
, 1998
"... Response time delays caused by I/O are a major problem in many systems and database applications. Prefetching and cache replacement methods are attracting renewed attention because of their success in avoiding costly I/Os. Prefetching can be looked upon as a type of online sequential prediction, whe ..."
Abstract

Cited by 29 (5 self)
Response time delays caused by I/O are a major problem in many systems and database applications. Prefetching and cache replacement methods are attracting renewed attention because of their success in avoiding costly I/Os. Prefetching can be looked upon as a type of online sequential prediction, where the predictions must be accurate as well as made in a computationally efficient way. Unlike other online problems, prefetching cannot admit a competitive analysis, since the optimal offline prefetcher incurs no cost when it knows the future page requests. Previous analytical work on prefetching [J. Assoc. Comput. Mach., 143 (1996), pp. 771–793] consisted of modeling the user as a probabilistic Markov source. In this paper, we look at the much stronger form of worst-case analysis and derive a randomized algorithm for pure prefetching. We compare our algorithm for every page request sequence with the important class of finite state prefetchers, making no assumptions as to how the sequence of page requests is generated. We prove analytically that the fault rate of our online prefetching algorithm converges almost surely for every page request sequence to the fault rate of the optimal finite state prefetcher for the sequence. This analysis model can be looked upon as a generalization of the competitive framework, in that it compares an online algorithm in a worst-case manner over all sequences with a powerful yet non-clairvoyant opponent. We simultaneously achieve the computational goal of implementing our prefetcher in optimal constant expected time per prefetched page using the optimal dynamic discrete random variate generator of Matias, Vitter, and Ni [Proc. 4th Annual SIAM/ACM
On Online Computation
 Approximation Algorithms for NP-Hard Problems, chapter 13
, 1997
"... This chapter presents an introduction to the competitive analysis of online algorithms. In an online problem... ..."
Abstract

Cited by 27 (1 self)
This chapter presents an introduction to the competitive analysis of online algorithms. In an online problem...
Distributed Computing with Advice
 Information Sensitivity of Graph Coloring, in ICALP
"... Abstract. We consider a model for online computation in which the online algorithm receives, together with each request, some information regarding the future, referred to as advice. The advice provided to the online algorithm may allow an improvement in its performance, compared to the classical mo ..."
Abstract

Cited by 26 (3 self)
We consider a model for online computation in which the online algorithm receives, together with each request, some information regarding the future, referred to as advice. The advice provided to the online algorithm may allow an improvement in its performance, compared to the classical model of complete lack of information regarding the future. We are interested in the impact of such advice on the competitive ratio, and in particular, in the relation between the size b of the advice, measured in bits of information per request, and the (improved) competitive ratio. Since b = 0 corresponds to the classical online model, and b = ⌈log A⌉, where A is the algorithm’s action space, corresponds to the optimal (offline) one, our model spans a spectrum of settings ranging from classical online algorithms to offline ones. In this paper we propose the above model and illustrate its applicability by considering two of the most extensively studied online problems, namely, metrical task systems (MTS) and the k-server problem. For MTS we establish tight (up to constant factors) upper and lower bounds on the competitive ratio of deterministic and randomized online algorithms with advice for any choice of 1 ≤ b ≤ Θ(log n), where n is the number of states in the system: we prove that any randomized online algorithm for MTS has competitive ratio Ω(log(n)/b), and we present a deterministic online algorithm for MTS with competitive ratio O(log(n)/b). For the k-server problem we construct a deterministic online algorithm for general metric spaces with competitive ratio k^{O(1/b)} for any choice of Θ(1) ≤ b ≤ log k.
A Unified Analysis of Paging and Caching
, 1998
"... Paging (caching) is the problem of managing a twolevel memory hierarchy in order to minimize the time required to process a sequence of memory accesses. In order to measure this quantity, which we refer to as the total memory access time, we define the system parameter miss penalty to represent th ..."
Abstract

Cited by 25 (0 self)
Paging (caching) is the problem of managing a two-level memory hierarchy in order to minimize the time required to process a sequence of memory accesses. In order to measure this quantity, which we refer to as the total memory access time, we define the system parameter miss penalty to represent the extra time required to access slow memory. We also introduce the system parameter page size. In the context of paging, the miss penalty is quite large, so most previous studies of online paging have implicitly set miss penalty = 1 in order to simplify the model. We show that this seemingly insignificant simplification substantially alters the precision of derived results. For example, previous studies have essentially ignored page size. Consequently, we reintroduce the miss penalty and page size parameters to the paging problem and present a more accurate analysis of online paging (and caching). We validate this more accurate model by deriving intuitively appealing results for the paging problem which cannot be derived using the simplified model. First, we present a natural, quantifiable definition of the amount of locality of reference in any access sequence. We also point out that the amount of locality of reference in an access sequence should depend on page size, among other factors. We then show that deterministic and randomized marking algorithms, such as the popular least recently used (LRU) algorithm, achieve constant competitive ratios when processing typical access sequences which exhibit significant locality of reference; this represents the first competitive analysis
LRU is Better than FIFO
 In Proc. 9th Annual ACM-SIAM Symp. on Discrete Algorithms
, 1998
"... In the paging problem we have to manage a twolevel memory system, in which the first level has short access time but can hold only up to k pages, while the second level is very large but slow. We use competitive analysis to study the relative performance of the two best known algorithms for paging, ..."
Abstract

Cited by 24 (1 self)
In the paging problem we have to manage a two-level memory system, in which the first level has short access time but can hold only up to k pages, while the second level is very large but slow. We use competitive analysis to study the relative performance of the two best-known algorithms for paging, LRU and FIFO. Sleator and Tarjan proved that the competitive ratio of both LRU and FIFO is k. In practice, however, LRU is known to perform much better than FIFO. It is believed that the superiority of LRU can be attributed to the locality of reference exhibited in request sequences. In order to study this phenomenon, Borodin, Irani, Raghavan and Schieber [2] refined the competitive approach by introducing the concept of access graphs. They conjectured that the competitive ratio of LRU on each access graph is less than or equal to the competitive ratio of FIFO. We prove this conjecture in this paper.
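The practical gap between the two policies is easy to observe with a small simulation. The sketch below (function names and the sample trace are my own illustrative choices, not from the paper) counts faults for both policies on a short sequence in which one page stays "hot", the kind of locality the access-graph conjecture is about.

```python
from collections import OrderedDict, deque

def lru_faults(requests, k):
    """Page faults under LRU with a cache of k pages (minimal sketch)."""
    cache = OrderedDict()
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # hit: refresh recency
        else:
            faults += 1
            if len(cache) >= k:
                cache.popitem(last=False)  # evict least recently used
            cache[page] = True
    return faults

def fifo_faults(requests, k):
    """Page faults under FIFO: evict in insertion order; hits change nothing."""
    order, resident, faults = deque(), set(), 0
    for page in requests:
        if page not in resident:
            faults += 1
            if len(order) >= k:
                resident.discard(order.popleft())  # evict oldest insertion
            order.append(page)
            resident.add(page)
    return faults

trace = [1, 2, 3, 1, 4, 1, 5]  # page 1 is repeatedly re-referenced
print(lru_faults(trace, 3), fifo_faults(trace, 3))  # -> 5 6
```

FIFO evicts the hot page 1 because it was inserted first, while LRU keeps it resident, so FIFO incurs one extra fault on this trace.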
Least-Recently-Used Caching with Dependent Requests
 Theoretical Computer Science
, 2002
"... We investigate a widely popular LeastRecentlyUsed (LRU) cache replacement algorithm with semiMarkov modulated requests. SemiMarkov processes provide the flexibility for modeling strong statistical correlation, including the widely reported longrange dependence in the World Wide Web page request ..."
Abstract

Cited by 24 (6 self)
We investigate the widely popular Least-Recently-Used (LRU) cache replacement algorithm with semi-Markov modulated requests. Semi-Markov processes provide the flexibility for modeling strong statistical correlation, including the widely reported long-range dependence in World Wide Web page request patterns. When the frequency of requesting a page n is equal to the generalized Zipf's law c/n^α, α > 1, our main result shows that the cache fault probability is asymptotically, for large cache sizes, the same as in the corresponding LRU system with i.i.d. requests. The result is asymptotically explicit and appears to be the first computationally tractable average-case analysis of LRU caching with statistically dependent request sequences. The surprising insensitivity of LRU caching performance demonstrates its robustness to changes in document popularity. Furthermore, we show that the derived asymptotic result and simulation experiments are in excellent agreement, even for relatively small cache sizes. Keywords: least-recently-used caching, move-to-front, Zipf's law, heavy-tailed distributions, long-range dependence, semi-Markov processes, average-case analysis
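The generalized Zipf popularity law c/n^α is easy to instantiate. The sketch below is my own finite truncation with illustrative parameters, and it draws requests i.i.d. rather than from the paper's semi-Markov model; it estimates the LRU fault probability for the i.i.d. baseline that the paper's asymptotics are compared against.

```python
import random
from collections import OrderedDict

def zipf_probs(n_pages, alpha):
    """Request probabilities p_n = c / n**alpha, the generalized Zipf law,
    truncated to n_pages documents with c chosen so the p_n sum to 1."""
    weights = [n ** -alpha for n in range(1, n_pages + 1)]
    c = 1.0 / sum(weights)
    return [c * w for w in weights]

def lru_fault_rate(requests, cache_size):
    """Fraction of requests that fault under LRU (minimal sketch)."""
    cache, faults = OrderedDict(), 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # hit: refresh recency
        else:
            faults += 1
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict least recently used
            cache[page] = True
    return faults / len(requests)

random.seed(0)
probs = zipf_probs(1000, alpha=1.5)  # alpha > 1, as the result requires
requests = random.choices(range(1, 1001), weights=probs, k=50000)
print(lru_fault_rate(requests, cache_size=100))
```

With α > 1 most of the probability mass sits on the few most popular pages, which is why a cache holding 10% of the pages already absorbs the bulk of the requests.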
On paging with locality of reference
 JOURNAL OF COMPUTER AND SYSTEM SCIENCES
, 2005
"... Motivated by the fact that competitive analysis yields too pessimistic results when applied to the paging problem, there has been considerable research interest in refining competitive analysis and in developing alternative models for studying online paging. The goal is to devise models in which the ..."
Abstract

Cited by 23 (3 self)
Motivated by the fact that competitive analysis yields too pessimistic results when applied to the paging problem, there has been considerable research interest in refining competitive analysis and in developing alternative models for studying online paging. The goal is to devise models in which theoretical results capture phenomena observed in practice. In this paper we propose a new, simple model for studying paging with locality of reference. The model is closely related to Denning’s working set concept and directly reflects the amount of locality that request sequences exhibit. We demonstrate that our model is reasonable from a practical point of view. We use the page fault rate, the performance measure used in practice, to evaluate the quality of paging algorithms. We develop tight or nearly tight bounds on the fault rates achieved by popular paging algorithms such as LRU, FIFO, deterministic Marking strategies and LFD. Our analysis shows that LRU is an optimal online algorithm, whereas FIFO and Marking strategies are not optimal in general. We present an experimental study comparing the page fault rates proven in our analyses to the page fault rates observed in practice. This is the first such study for an alternative/refined paging model.