Results 11 - 20 of 258
Implementation and Performance of Integrated Application-Controlled Caching, Prefetching and Disk Scheduling
, 1996
"... Although file caching and prefetching are known techniques to improve the performance of file systems, little work has been done on intergrating caching and prefetching. Optimal prefetching is nontrivial because prefetching may require early cache block replacements. Moreover, the tradeoff between t ..."
Abstract
-
Cited by 114 (8 self)
- Add to MetaCart
Although file caching and prefetching are known techniques to improve the performance of file systems, little work has been done on integrating caching and prefetching. Optimal prefetching is nontrivial because prefetching may require early cache block replacements. Moreover, the tradeoff between the latency-hiding benefits of prefetching and the increase in the number of fetches required must be considered. This paper presents the design and implementation of a file system that integrates application-controlled caching, prefetching and disk scheduling. We use a two-level cache management strategy. The kernel uses the LRU-SP policy [CFL94a] to allocate blocks to processes, and each process uses the controlled-aggressive policy, an algorithm previously shown to be near-optimal in a theoretical sense, for managing its cache. Each process then improves its disk access latency by submitting its prefetches in batches and scheduling the requests in each batch to optimize disk access performance.
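The controlled-aggressive idea lends itself to a compact simulation. The sketch below illustrates the two hinted-prefetching rules this line of work builds on (evict the block referenced furthest in the future; prefetch the next missing block only when it does no harm); it is not the paper's kernel implementation, and the trace-as-hints interface and all names are assumptions.

```python
# Toy controlled-aggressive-style prefetcher: `trace` is the application's
# hinted future reference string; the cache holds at most `cache_size` blocks.

def next_use(trace, start, block):
    """Index of the next reference to `block` at or after `start` (inf if none)."""
    for i in range(start, len(trace)):
        if trace[i] == block:
            return i
    return float("inf")

def simulate(trace, cache_size):
    cache, fetches = set(), 0
    for t, ref in enumerate(trace):
        if ref not in cache:                                  # demand miss
            if len(cache) >= cache_size:
                # Evict the block whose next use lies furthest in the future.
                cache.remove(max(cache, key=lambda b: next_use(trace, t, b)))
            cache.add(ref)
            fetches += 1
        # Prefetch step: the next future block that is not already cached.
        prefetch = next((b for b in trace[t + 1:] if b not in cache), None)
        if prefetch is None:
            continue
        if len(cache) < cache_size:
            cache.add(prefetch); fetches += 1
        else:
            victim = max(cache, key=lambda b: next_use(trace, t + 1, b))
            # "Do no harm": only prefetch if it is needed before the victim.
            if next_use(trace, t + 1, prefetch) < next_use(trace, t + 1, victim):
                cache.remove(victim); cache.add(prefetch); fetches += 1
    return fetches
```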
Prefetching from a broadcast disk
- In Proceedings of ICDE'96: The 1996 International Conference on Data Engineering
, 1996
"... Broadcast Disks have been proposed as a means to efficiently deliver data to clients in “asymmetric ” environments where the available bandwidth from the server to the clients greatly exceeds the bandwidth in the opposite direction. A previous study investigated the use of cost-based caching to impr ..."
Abstract
-
Cited by 102 (9 self)
- Add to MetaCart
(Show Context)
Broadcast Disks have been proposed as a means to efficiently deliver data to clients in “asymmetric” environments where the available bandwidth from the server to the clients greatly exceeds the bandwidth in the opposite direction. A previous study investigated the use of cost-based caching to improve performance when clients access the broadcast in a demand-driven manner [AAF95]. Such demand-driven access, however, does not fully exploit the dissemination-based nature of the broadcast, which is particularly conducive to client prefetching. With a Broadcast Disk, pages continually flow past the clients so that, in contrast to traditional environments, prefetching can be performed without placing additional load on shared resources. We argue for the use of a simple prefetch heuristic called PT and show that PT balances the cache residency time of a data item with its bandwidth allocation. Because of this tradeoff, PT is very tolerant of variations in the broadcast program. We describe an implementable approximation for PT.
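Reading the abstract, the pt value can be sketched directly: access probability times the time until the page is next broadcast, with the lowest-valued cached page as the replacement victim. The snippet below is a toy approximation of that heuristic; the cyclic `schedule` list, `prob` table, and all function names are illustrative assumptions, not the paper's code.

```python
# Toy pt-style prefetching from a cyclic broadcast schedule.

def next_slot(page, now, schedule):
    """Next tick at which `page` is broadcast again (strictly after `now`)."""
    n = len(schedule)
    for d in range(1, n + 1):
        if schedule[(now + d) % n] == page:
            return now + d
    return float("inf")

def pt(page, now, prob, schedule):
    """Access probability times time until the page's next broadcast."""
    return prob[page] * (next_slot(page, now, schedule) - now)

def listen(schedule, prob, cache_size, ticks):
    cache = set()
    for now in range(ticks):
        page = schedule[now % len(schedule)]      # page flowing past the client
        if page in cache:
            continue
        if len(cache) < cache_size:
            cache.add(page)
        else:
            victim = min(cache, key=lambda q: pt(q, now, prob, schedule))
            if pt(page, now, prob, schedule) > pt(victim, now, prob, schedule):
                cache.remove(victim)              # swap in the passing page
                cache.add(page)
    return cache
```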
A Trace-Driven Comparison of Algorithms for Parallel Prefetching and Caching
- In Proc. of the 2nd Conference on Operating System Design and Implementation (OSDI)
, 1996
"... High-performance I/O systems depend on prefetching and caching in order to deliver good performance to applications. These two techniques have generally been considered in isolation, even though there are significant interactions between them; a block prefetched too early reduces the effectiveness o ..."
Abstract
-
Cited by 97 (8 self)
- Add to MetaCart
High-performance I/O systems depend on prefetching and caching in order to deliver good performance to applications. These two techniques have generally been considered in isolation, even though there are significant interactions between them; a block prefetched too early reduces the effectiveness of the cache, while a block cached too long reduces the effectiveness of prefetching. In this paper we study the effects of several combined prefetching and caching strategies for systems with multiple disks. Using disk-accurate trace-driven simulation, we explore the performance characteristics of each of the algorithms in cases in which applications provide full advance knowledge of accesses using hints. Some of the strategies have been published with theoretical performance bounds, and some are components of systems that have been built. One is a new algorithm that combines the desirable characteristics of the others. We find that when performance is limited by I/O stalls, aggressive prefetching...
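For contrast with the integrated strategies the paper studies, a minimal trace-driven replay of the demand-fetch LRU baseline looks like the following. It is only a toy: the paper's simulator is disk-accurate and models multiple disks, while the fixed `disk_latency` and single-disk cost model here are assumptions.

```python
from collections import OrderedDict

def replay(trace, cache_size, disk_latency=10, hit_cost=1):
    """Replay a block trace against demand-fetch LRU; return (stall, elapsed)."""
    cache, stall, elapsed = OrderedDict(), 0, 0
    for ref in trace:
        if ref in cache:
            cache.move_to_end(ref)            # refresh LRU position
            elapsed += hit_cost
        else:
            stall += disk_latency             # application stalls on the fetch
            elapsed += hit_cost + disk_latency
            if len(cache) >= cache_size:
                cache.popitem(last=False)     # evict least recently used block
            cache[ref] = True
    return stall, elapsed
```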
Evaluating Location Predictors with Extensive Wi-Fi Mobility Data
- In Proceedings of the 23rd Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM)
, 2004
"... Location is an important feature for many applications, and wireless networks can better serve their clients by anticipating client mobility. As a result, many location predictors have been proposed in the literature, though few have been evaluated with empirical evidence. This paper reports on the ..."
Abstract
-
Cited by 96 (7 self)
- Add to MetaCart
(Show Context)
Location is an important feature for many applications, and wireless networks can better serve their clients by anticipating client mobility. As a result, many location predictors have been proposed in the literature, though few have been evaluated with empirical evidence. This paper reports on the results of the first extensive empirical evaluation of location predictors, using a two-year trace of the mobility patterns of over 6,000 users on Dartmouth's campus-wide Wi-Fi wireless network. We implemented and compared...
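One member of the predictor family such studies compare is an order-1 Markov model over the location trace. The sketch below evaluates such a predictor online, scoring a trial only once the current location has been seen before; it is a generic illustration, not the paper's implementation.

```python
from collections import Counter, defaultdict

def markov1_accuracy(trace):
    """Online accuracy of an order-1 Markov location predictor."""
    succ = defaultdict(Counter)       # location -> counts of next locations
    prev, correct, trials = None, 0, 0
    for loc in trace:
        if prev is not None:
            if succ[prev]:            # only score once we can predict
                trials += 1
                if succ[prev].most_common(1)[0][0] == loc:
                    correct += 1
            succ[prev][loc] += 1      # learn the observed transition
        prev = loc
    return correct / trials if trials else 0.0
```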
Predicting File System Actions from Prior Events
- In Proceedings of the USENIX 1996 Annual Technical Conference
, 1996
"... We have adapted a multi-order context modeling technique used in the data compression method Prediction by Partial Match (PPM) to track sequences of file access events. From this model, we are able to determine file system accesses that have a high probability of occurring as the next event. By pref ..."
Abstract
-
Cited by 93 (5 self)
- Add to MetaCart
We have adapted a multi-order context modeling technique used in the data compression method Prediction by Partial Match (PPM) to track sequences of file access events. From this model, we are able to determine file system accesses that have a high probability of occurring as the next event. By prefetching the data for these events, we have transformed an LRU cache into a predictive cache that in our simulations averages 15% more cache hits than LRU. In fact, on average our four-megabyte predictive cache has a higher cache hit rate than a 90-megabyte LRU cache.

With the rapid increase of processor speeds, file system latency is a critical issue in computer system performance [14]. Standard Least Recently Used (LRU) based caching techniques offer some assistance, but by ignoring any relationships that exist between file system events, they fail to make full use of the available information. We will show that many of the events in a file system are closely related. For example...
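A multi-order context model of this kind can be sketched briefly: keep successor counts for the last k events at every order up to k, and predict with the longest context seen before, backing off to shorter contexts in PPM style. The class below is an illustrative reconstruction; the order, interfaces, and names are assumptions.

```python
from collections import Counter, defaultdict

class ContextPredictor:
    """Multi-order context model for predicting the next file-access event."""

    def __init__(self, max_order=2):
        self.max_order = max_order
        # models[k] maps a length-k context tuple to successor counts.
        self.models = [defaultdict(Counter) for _ in range(max_order + 1)]
        self.history = []

    def predict(self):
        # Try the longest context first, then back off PPM-style.
        for k in range(min(self.max_order, len(self.history)), 0, -1):
            ctx = tuple(self.history[-k:])
            if self.models[k][ctx]:
                return self.models[k][ctx].most_common(1)[0][0]
        return None

    def update(self, event):
        for k in range(1, min(self.max_order, len(self.history)) + 1):
            ctx = tuple(self.history[-k:])
            self.models[k][ctx][event] += 1
        self.history.append(event)
```

A predictive cache would call predict(), prefetch that file's blocks, and then record the real access with update().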
Analysis of Branch Prediction via Data Compression
- in Proceedings of the 7th International Conference on Architectural Support for Programming Languages and Operating Systems
, 1996
"... Branch prediction is an important mechanism in modem microprocessor design. The focus of research in this area has been on designing new branch prediction schemes. In contrast, very few studies address the theoretical basis behind these prediction schemes. Knowing this theoretical basis helps us to ..."
Abstract
-
Cited by 91 (2 self)
- Add to MetaCart
Branch prediction is an important mechanism in modern microprocessor design. The focus of research in this area has been on designing new branch prediction schemes. In contrast, very few studies address the theoretical basis behind these prediction schemes. Knowing this theoretical basis helps us to evaluate how good a prediction scheme is and how much we can expect to improve its accuracy.
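One concrete bridge between the two areas is that a classic two-level adaptive predictor is essentially a fixed-order context model of the branch-history string. The sketch below shows the two-level side of that correspondence; the history length and 2-bit saturating counters are conventional choices, not parameters taken from the paper.

```python
class TwoLevelPredictor:
    """Global-history two-level predictor: a fixed-order context model."""

    def __init__(self, hist_bits=4):
        self.mask = (1 << hist_bits) - 1
        self.history = 0                         # last hist_bits outcomes
        self.counters = [2] * (1 << hist_bits)   # 2-bit saturating counters

    def predict(self):
        return self.counters[self.history] >= 2  # True = predict taken

    def update(self, taken):
        c = self.counters[self.history]
        self.counters[self.history] = min(3, c + 1) if taken else max(0, c - 1)
        self.history = ((self.history << 1) | int(taken)) & self.mask
```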
Adaptive Disk Spindown via Optimal Rent-to-Buy in Probabilistic Environments
, 1999
"... In the single rent-to-buy decision problem, without a priori knowledge of the amount of time a resource will be used we need to decide when to buy the resource, given that we can rent the resource for $1 per unit time or buy it once and for all for $c. In this paper we study algorithms that make a ..."
Abstract
-
Cited by 90 (4 self)
- Add to MetaCart
In the single rent-to-buy decision problem, without a priori knowledge of the amount of time a resource will be used we need to decide when to buy the resource, given that we can rent the resource for $1 per unit time or buy it once and for all for $c. In this paper we study algorithms that make a sequence of single rent-to-buy decisions, using the assumption that the resource use times are independently drawn from an unknown probability distribution. Our study of this rent-to-buy problem is motivated by important systems applications, specifically, problems arising from deciding when to spin down disks to conserve energy in mobile computers [4], [13], [15], thread blocking decisions during lock acquisition in multiprocessor applications [7], and virtual circuit holding times in IP-over-ATM networks [11], [19]. We develop a provably optimal and computationally efficient algorithm for the rent-to-buy problem. Our algorithm uses O(√t) time and space, and its expected cost for the tth resource use converges to optimal as O(√(log t / t)), for any bounded probability distribution on the resource use times. Alternatively, using O(1) time and space, the algorithm almost converges to optimal. We describe the experimental results for the application of our algorithm to one of the motivating systems problems: the question of when to spin down a disk to save power in a mobile computer. Simulations using disk access traces obtained from an HP workstation environment suggest that our algorithm yields significantly improved power/response time performance over the nonadaptive 2-competitive algorithm which is optimal in the worst-case competitive analysis model.
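The rent-to-buy cost structure is easy to state in code: renting for the whole idle period costs its length, while spinning down after a timeout a costs a + c. An adaptive policy in the spirit of the abstract picks the timeout that minimizes average cost on past idle-time samples; the sketch below does this naively (the paper's algorithm is far more space- and time-efficient, and all names here are assumptions). With no samples at all, the classical worst-case choice timeout = c is 2-competitive.

```python
def cost(timeout, idle, c):
    """Cost of one idle period: rent until `timeout`, then buy (spin down)."""
    return idle if idle <= timeout else timeout + c

def best_timeout(samples, c):
    """Timeout minimizing average cost on past idle-time samples."""
    # Total cost is piecewise linear in the timeout with nonnegative slope
    # between samples, so an optimum lies at 0 or at a sample value.
    candidates = set(samples) | {0.0}
    return min(candidates, key=lambda a: sum(cost(a, x, c) for x in samples))
```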
Sequential prediction of individual sequences under general loss functions
- IEEE Trans. on Information Theory
, 1998
"... Abstract—We consider adaptive sequential prediction of ar-bitrary binary sequences when the performance is evaluated using a general loss function. The goal is to predict on each individual sequence nearly as well as the best prediction strategy in a given comparison class of (possibly adaptive) pre ..."
Abstract
-
Cited by 84 (9 self)
- Add to MetaCart
(Show Context)
We consider adaptive sequential prediction of arbitrary binary sequences when the performance is evaluated using a general loss function. The goal is to predict on each individual sequence nearly as well as the best prediction strategy in a given comparison class of (possibly adaptive) prediction strategies, called experts. By using a general loss function, we generalize previous work on universal prediction, forecasting, and data compression. However, here we restrict ourselves to the case when the comparison class is finite. For a given sequence, we define the regret as the total loss on the entire sequence suffered by the adaptive sequential predictor, minus the total loss suffered by the predictor in the comparison class that performs best on that particular sequence. We show that for a large class of loss functions, the minimax regret is either Θ(log N)...
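A standard predictor from this literature, against a finite class of N experts, is the exponentially weighted average forecaster; under absolute loss its regret grows like √(T ln N) for a suitable learning rate. The sketch below is the textbook construction, not the paper's specific minimax analysis; the prediction format and `eta` are assumptions.

```python
import math

def hedge(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted forecaster; absolute loss, predictions in [0,1].

    expert_preds: per-round lists of each expert's prediction.
    outcomes:     the binary sequence being predicted.
    """
    n = len(expert_preds[0])
    w = [1.0] * n                      # one weight per expert
    total = 0.0
    for preds, y in zip(expert_preds, outcomes):
        s = sum(w)
        p = sum(wi * pi for wi, pi in zip(w, preds)) / s   # master prediction
        total += abs(p - y)
        # Down-weight each expert exponentially in its own loss this round.
        w = [wi * math.exp(-eta * abs(pi - y)) for wi, pi in zip(w, preds)]
    return total
```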
Application-Controlled File Caching Policies
- In Proc. of the 1994 Summer USENIX Technical Conference
, 1994
"... We consider how to improve the performance of file caching by allowing user-level control over file cache replacement decisions. We use two-level cache management: the kernel allocates physical pages to individual applications (allocation), and each application is responsible for deciding how to use ..."
Abstract
-
Cited by 80 (5 self)
- Add to MetaCart
We consider how to improve the performance of file caching by allowing user-level control over file cache replacement decisions. We use two-level cache management: the kernel allocates physical pages to individual applications (allocation), and each application is responsible for deciding how to use its physical pages (replacement). Previous work on two-level memory management has focused on replacement, largely ignoring allocation. The main contribution of this paper is our solution to the allocation problem. Our solution allows processes to manage their own cache blocks, while at the same time maintaining dynamic allocation of cache blocks among processes. Our solution ensures that good user-level policies can improve the file cache hit ratios of the entire system over the existing replacement approach. We evaluate our scheme by trace-based simulation, demonstrating that it leads to significant improvements in hit ratios for a variety of applications.
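The allocation/replacement split can be pictured with a small sketch: the kernel's allocator picks which process must give up a page (here, the owner of the globally least-recently-used block), and that process's own policy chooses the actual victim. The interfaces below are assumptions for illustration, not the paper's kernel API; a process whose policy simply accepts the kernel's suggestion degenerates to global LRU.

```python
class Process:
    def choose_victim(self, kernel_suggestion, my_blocks):
        """User-level replacement policy; default: accept the global-LRU hint."""
        return kernel_suggestion

class Kernel:
    def __init__(self, total_pages):
        self.total = total_pages
        self.lru = []                   # (process, block) pairs, oldest first
        self.blocks = {}                # block -> owning process

    def access(self, proc, block):
        if block in self.blocks:
            self.lru.remove((self.blocks[block], block))      # cache hit
        elif len(self.lru) >= self.total:
            # Allocation: the process holding the globally LRU block pays.
            victim_proc, suggestion = self.lru[0]
            mine = [b for p, b in self.lru if p is victim_proc]
            # Replacement: that process's own policy picks the block.
            evict = victim_proc.choose_victim(suggestion, mine)
            self.lru.remove((victim_proc, evict))
            del self.blocks[evict]
        self.blocks[block] = proc
        self.lru.append((proc, block))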
Energy Efficient Prefetching and Caching
- In Proceedings of the USENIX Annual Technical Conference
, 2004
"... Traditional disk management strategies---prefetching and caching in particular---are designed to maximize performance. In mobile systems they conflict with strategies that attempt to save energy by powering down the disk when it is idle. We present new rules for prefetching and caching that maximize ..."
Abstract
-
Cited by 77 (5 self)
- Add to MetaCart
(Show Context)
Traditional disk management strategies---prefetching and caching in particular---are designed to maximize performance. In mobile systems they conflict with strategies that attempt to save energy by powering down the disk when it is idle. We present new rules for prefetching and caching that maximize power-down opportunities (without performance loss) by creating an access pattern characterized by intense bursts of activity separated by long idle times. We also describe an automatic system that monitors past application behavior in order to generate appropriate prefetching hints, and a general system of kernel enhancements that coordinate I/O activity across all running applications. We have...
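The burst-shaping rule can be caricatured in a few lines: rather than spinning the disk up on every miss, refill the cache in one aggressive prefetch burst whenever it nears empty, leaving long idle gaps the disk can spend powered down. The model below is a toy with a fixed-rate consumer; the constants and interface are assumptions, not the paper's kernel mechanism.

```python
def spinups_with_bursts(num_requests, cache_size, low_water=2):
    """Count disk spin-ups when prefetching refills the cache in bursts."""
    cached, spinups, served = 0, 0, 0
    while served < num_requests:
        if cached <= low_water:                  # cache nearly drained:
            spinups += 1                         # one spin-up...
            cached = min(cache_size, num_requests - served)  # ...one burst-fill
        cached -= 1                              # the app consumes one block
        served += 1
    return spinups   # far fewer spin-ups than one per uncached request
```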