Results 1 - 10 of 16
Web Prefetching Using Partial Match Prediction
1998
Abstract - Cited by 67 (1 self)
Web traffic is now one of the major components of Internet traffic. One of the main directions of research in this area is to reduce the time latencies users experience when navigating through Web sites. Caching is already being used in that direction; yet, the characteristics of the Web cause caching in this medium to have poor performance. Therefore, prefetching is now being studied in the Web context. This study investigates the use of partial match prediction, a technique taken from the data compression literature, for prefetching in the Web. The main concern when employing prefetching is to predict as many future requests as possible while limiting false predictions to a minimum. The simulation results suggest that a high fraction of the predictions are accurate (e.g., 18%-23% of the requests are predicted with 90%-80% accuracy), so that additional network traffic is kept low. Furthermore, the simulations show that prefetching can substantially increase cache hit rates.
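The partial-match idea above can be sketched as a small context-model predictor. The class below is an illustrative toy, not the paper's exact algorithm: it keeps frequency counts for contexts up to a fixed order and prefetches pages whose conditional probability under the longest matching context clears a confidence threshold (all names and the 0.5 threshold are hypothetical choices).

```python
from collections import defaultdict

class PPMPrefetcher:
    """Toy order-k partial-match predictor over a stream of page requests.

    For every context of length 1..max_order seen so far, it counts which
    page followed.  Prediction tries the longest matching context first,
    falling back to shorter ones (a simplified partial-match scheme).
    """

    def __init__(self, max_order=2, threshold=0.5):
        self.max_order = max_order
        self.threshold = threshold  # minimum confidence before prefetching
        # counts[context_tuple][next_page] -> number of occurrences
        self.counts = defaultdict(lambda: defaultdict(int))
        self.history = []

    def record(self, page):
        """Update context counts with a newly observed request."""
        for k in range(1, self.max_order + 1):
            if len(self.history) >= k:
                ctx = tuple(self.history[-k:])
                self.counts[ctx][page] += 1
        self.history.append(page)

    def predict(self):
        """Return pages whose conditional probability exceeds the
        threshold, using the longest context that has been seen before."""
        for k in range(self.max_order, 0, -1):
            if len(self.history) < k:
                continue
            ctx = tuple(self.history[-k:])
            followers = self.counts.get(ctx)
            if not followers:
                continue
            total = sum(followers.values())
            return [p for p, c in followers.items() if c / total >= self.threshold]
        return []
```

After training on a repeating access pattern such as A, B, C, A, B, C, ..., the predictor seeing the context (A, B) would suggest prefetching C.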
NPS: A Non-interfering Deployable Web Prefetching System
- In Proceedings of the Fourth USENIX Symposium on Internet Technologies and Systems, 2003
Abstract - Cited by 40 (9 self)
We present NPS, a novel non-intrusive web prefetching system that (1) utilizes only spare resources to avoid interference between prefetch and demand requests, at the server as well as in the network, and (2) is deployable without any modifications to servers, browsers, the network, or the HTTP protocol. NPS's self-tuning architecture eliminates the need for traditional "thresholds" or magic numbers typically used to limit interference caused by prefetching, thereby allowing applications to improve benefits and reduce the risks of aggressive prefetching.
The Cyclone Server Architecture: Streamlining Delivery of Popular Content
- In Proceedings of the 6th International Web Caching and Content Delivery Workshop (WCW), 2000
Abstract - Cited by 17 (4 self)
We propose a new webserver architecture optimized for delivery of large, popular files. Delivery of such files currently poses a scalability problem for conventional content providers, which must devote server-side resources in direct proportion to the high multiprogramming level induced by a set of these connections. While use of scalable multicast may remedy this problem some day, multicast is rarely supported in today's wide-area infrastructure. Our approach alleviates many of the most serious scalability problems by developing new server-side mechanisms capable of managing a large set of TCP connections transporting the same content. The strategy we employ relies on the use of fast forward error correcting (FEC) codes to generate encodings of popular content, of which only a small sliding window is cached in memory at any time instant. The concurrent TCP connections then access content only from this shared window, which is globally useful to all clients. Our method hinges on eliminating unscalable TCP retransmission buffers, as we can "retransmit" fresh encoding packets in lieu of the originals with no performance degradation and no modifications to client TCP stacks. Ultimately, our Cyclone server capitalizes on concurrency to maximize sharing of state across different request threads while minimizing context switching, thrashing under high load, and the cache memory footprint. In this paper, we describe the design and prototype implementation of our approach as a Linux kernel subsystem.
Keywords: TCP, FEC, webserver, popularity, concurrency, digital fountain, Tornado codes
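The "retransmit fresh encoding packets in lieu of the originals" property can be illustrated with a toy random linear fountain over GF(2). This is a deliberate simplification: Cyclone uses fast Tornado codes inside the kernel, whereas the sketch below merely XORs random subsets of source blocks and decodes by Gaussian elimination; every function name here is hypothetical.

```python
import random

def encode_packet(blocks, rng):
    """Produce one fountain-style packet: (subset mask, XOR of those blocks).

    Any fresh packet can stand in for a 'retransmission', which is the
    property that lets the server drop per-connection retransmit buffers.
    """
    n = len(blocks)
    mask = [rng.random() < 0.5 for _ in range(n)]
    if not any(mask):                       # avoid the useless empty packet
        mask[rng.randrange(n)] = True
    payload = bytes(len(blocks[0]))
    for i, bit in enumerate(mask):
        if bit:
            payload = bytes(a ^ b for a, b in zip(payload, blocks[i]))
    return mask, payload

def decode(packets, n):
    """Gaussian elimination over GF(2): recover the n source blocks once
    enough linearly independent packets have arrived (None otherwise)."""
    pivots = {}
    for m, p in packets:
        mask, payload = list(m), bytearray(p)
        for col in range(n):
            if not mask[col]:
                continue
            if col in pivots:               # reduce by the existing pivot row
                pm, pp = pivots[col]
                for c in range(n):
                    mask[c] ^= pm[c]
                payload = bytearray(a ^ b for a, b in zip(payload, pp))
            else:
                pivots[col] = (mask, payload)
                break
    if len(pivots) < n:
        return None
    # back-substitute so each pivot row yields exactly one source block
    blocks = [None] * n
    for col in sorted(pivots, reverse=True):
        mask, payload = pivots[col]
        for c in range(col + 1, n):
            if mask[c]:
                payload = bytearray(a ^ b for a, b in zip(payload, blocks[c]))
                mask[c] = 0
        blocks[col] = payload
    return [bytes(b) for b in blocks]
```

A client simply collects packets until decoding succeeds; which particular packets arrive does not matter, only how many independent ones do.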
Prefetching Techniques for Client/Server, Object-Oriented Database Systems
- University of Edinburgh, 1999
Abstract - Cited by 6 (0 self)
The performance of many object-oriented database applications suffers from page fetch latency, which is determined by the expense of disk access. In this work we suggest several prefetching techniques to avoid, or at least to reduce, page fetch latency. In practice no prediction technique is perfect and no prefetching technique can entirely eliminate the delay due to page fetch latency. Therefore we are interested in the trade-off between the level of accuracy required to obtain good results in terms of elapsed-time reduction and the processing overhead needed to achieve this level of accuracy. If prefetching accuracy is high, the total elapsed time of an application can be reduced significantly; if prefetching accuracy is low, many incorrect pages are prefetched and the extra load on the client, network, server, and disks decreases whole-system performance. Access patterns of object-oriented databases are often complex and usually hard to predict accurately. The ...
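The accuracy/overhead trade-off described above can be made concrete with a back-of-the-envelope expected-cost model. The linear cost structure and all parameter values below are illustrative assumptions, not figures from the paper.

```python
def expected_fetch_time(accuracy, coverage, fetch_latency, overhead_per_prefetch):
    """Rough expected per-request cost under prefetching.

    coverage : fraction of requests for which a prefetch was attempted
    accuracy : fraction of attempted prefetches that were actually used

    A correctly prefetched page costs ~0; a demand fetch costs
    fetch_latency; every prefetch (right or wrong) adds overhead on the
    client, network, and server.  The linear model is an assumption made
    for illustration only.
    """
    hit = coverage * accuracy            # requests served from a prefetch
    demand = 1.0 - hit                   # requests still fetched on demand
    return demand * fetch_latency + coverage * overhead_per_prefetch
```

With a 100 ms fetch latency and 5 ms overhead per prefetch, attempting prefetches for half of all requests at 90% accuracy cuts the expected cost to 57.5 ms, while the same coverage at 20% accuracy pays almost the full demand latency plus the prefetch overhead.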
Reducing Retrieval Latencies in the Web: the Past, the Present, and the Future
1999
Abstract - Cited by 2 (0 self)
One of the main directions of research in the Web is to reduce the time latencies users experience when navigating through Web sites. The work in this direction originated from relevant research in operating systems (e.g. caching), where the aim is to reduce file system latencies. Caching is already being used in the Web domain. However, recent studies indicate that the benefits from this technique are rather limited. Thus, another method that was first applied in operating systems, prefetching, is now being studied in the Web context. In addition to prefetching, there are a number of other techniques being examined in the Web domain that are specifically targeted to this environment. These techniques include compression and piggybacking of information between the servers and the clients. The main contributions of this survey paper are: a taxonomy of the caching and prefetching techniques, according to the underlying principles and the algorithms used; a critical survey of the...
Data Dissemination on the Web: Speculative and Unobtrusive
1999
Abstract - Cited by 2 (2 self)
The rapid growth of the Web results in heavier loads on servers and networks and in increased latency experienced while retrieving Web documents. Internet traffic is further aggravated by its burstiness, which complicates the design and allocation of network components. Bursty traffic alternates peak periods with lulls. This paper presents a framework that exploits idle periods to satisfy future HTTP requests speculatively and opportunistically. Our proposal differs from previous schemes in that it is explicitly aware of current HTTP traffic loads so as to be unobtrusive. This paper highlights several design trade-offs and details the problem of server arbitration among several candidate documents. We present a theoretical analysis of arbitration and validate it by extensive simulation on server logs, in which we calculate the latency experienced by clients. Perfect traffic shaping during peak periods is observed, and substantial latency improvements for non-dynamic documents are reported over pure on-dema...
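The arbitration problem sketched above — deciding which candidate documents to push speculatively during a lull — resembles a knapsack: under a fixed idle-bandwidth budget, prefer documents with the highest expected latency saving per byte. The greedy heuristic below is a hypothetical illustration, not the paper's actual arbitration policy, and all names and numbers are made up.

```python
def arbitrate(candidates, idle_budget_bytes):
    """Greedy arbitration among speculative candidates during a lull.

    candidates : list of (name, size_bytes, request_prob, demand_latency)

    Expected saving of pushing a document = request_prob * demand_latency.
    Greedily take the best saving-per-byte until the idle budget runs out.
    This is a knapsack-style heuristic, not an optimal policy.
    """
    ranked = sorted(candidates,
                    key=lambda c: c[2] * c[3] / c[1],  # saving per byte
                    reverse=True)
    chosen, used = [], 0
    for name, size, prob, latency in ranked:
        if used + size <= idle_budget_bytes:
            chosen.append(name)
            used += size
    return chosen
```

For example, a small, popular page with high demand latency wins over a large page that is rarely requested, even if the large page's absolute saving is bigger.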
Active Streaming in Transport Delay Minimization
- Workshop on Scalable Web Services, Int. Conf. on Parallel Processing, 2000
Abstract - Cited by 2 (2 self)
In this paper we present a technique for reducing response delay in web systems, based on a proactive cache scheme. It combines predictive pre-fetching and streaming to overlap read time with loading time. This graph-based model analyzes the hyperlink structure to form the prediction. It also utilizes data streaming to further minimize the pre-load without compromising responsiveness. The analysis demonstrates that such hyper-graph-based pre-fetching can reduce the lag time of a cache system by a factor ranging from 2 to 10. In this paper, we focus on pre-fetching and provide the technique, the optimization algorithms, and the simulation results.
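The hyperlink-graph prediction step can be sketched as a weighted adjacency structure: each observed traversal strengthens an edge, and the top-k outgoing links of the current page become prefetch candidates. This is a toy reduction of the paper's hyper-graph model (the streaming component, which would preload only a prefix of each candidate, is omitted); all identifiers are hypothetical.

```python
from collections import defaultdict

class LinkGraphPredictor:
    """Toy hyperlink-graph predictor: weight each link by how often it was
    actually traversed, then prefetch the top-k links of the current page."""

    def __init__(self):
        # weights[from_page][to_page] -> traversal count
        self.weights = defaultdict(lambda: defaultdict(int))

    def observe(self, from_page, to_page):
        """Record one observed click from from_page to to_page."""
        self.weights[from_page][to_page] += 1

    def prefetch_candidates(self, page, k=2):
        """Return the k most-traversed outgoing links of page."""
        out = self.weights.get(page, {})
        return sorted(out, key=out.get, reverse=True)[:k]
```

Unlike the sequence-context predictor used for PPM-style prefetching, this model is anchored to the page's actual hyperlink structure, so it never proposes a page the user cannot reach in one click.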
Caching and Multicasting in DBS Systems
1999
Abstract - Cited by 1 (0 self)
The use of Caching and Multicasting has been studied extensively in the context of terrestrial networks. However, the use of these technologies in a Direct Broadcast Satellite (DBS) system remains unclear. In this paper we discuss possible choices of caching and multicasting schemes, motivated by current applications in the terrestrial Internet, that could be considered for a DBS system. We examine their advantages and disadvantages as well as the tradeoffs involved in combinations of different approaches. We also propose some uses of these technologies and describe an architecture that enhances the performance and efficiency of a DBS system.
Enhancing Network Object Caches through Cross-Domain Cooperation
Abstract - Cited by 1 (0 self)
3.1 Model of Cross-Domain Prediction
3.1.1 Homomorphism Prediction
3.1.2 Spanning Prediction
3.2 2-Domain Predictors
3.2.1 Inferred Reuse
3.2.2 Inferred Extended Use
3.2.3 Inferred Delayed Use
3.2.4 Inferred New Use
3.3 Cross-Domain Prediction Opportunities
3.3.1 Web-DNS Cooperation
3.3.2 E-mail-Web Cooperation
3.4 Multi-Domain Predictors
3.4.1 Multiple Source Predictions in Series
3.4.2 3-Domain Prediction
3.4.3 Asymmetric Mappings
3.4.4 Hybrid Predictions
Issues in Prediction and Related Work
The Cyclone Server Architecture: Streamlining Delivery of Popular Content
Abstract
We propose a new technique for efficiently delivering popular content from information repositories with bounded file caches. Our strategy relies on the use of fast erasure codes (a.k.a. forward error correcting codes) to generate encodings of popular files, of which only a small sliding window is cached at any time instant, even to satisfy an unbounded number of asynchronous requests for the file. Our approach capitalizes on concurrency to maximize sharing of state across different request threads while minimizing cache memory utilization. An additional reduction in resource requirements arises from providing a lightweight version of the network stack. In this paper, we describe the design and implementation of our approach as a Linux kernel subsystem.