Results 1 - 10 of 226
Summary cache: A scalable wide-area web cache sharing protocol
1998
Cited by 894 (3 self)
The sharing of caches among Web proxies is an important technique to reduce Web traffic and alleviate network bottlenecks. Nevertheless, it is not widely deployed due to the overhead of existing protocols. In this paper we propose a new protocol called "Summary Cache"; each proxy keeps a summary of the URLs of cached documents of each participating proxy and checks these summaries for potential hits before sending any queries. Two factors contribute to the low overhead: the summaries are updated only periodically, and the summary representations are economical -- as low as 8 bits per entry. Using trace-driven simulations and a prototype implementation, we show that, compared to the existing Internet Cache Protocol (ICP), Summary Cache reduces the number of inter-cache messages by a factor of 25 to 60, reduces bandwidth consumption by over 50%, and eliminates between 30% and 95% of the CPU overhead, while maintaining almost the same hit ratio as ICP. Hence Summary Cache enables cache sharing among a large number of proxies.
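The economical summaries that make 8 bits per entry possible are Bloom filters. Below is a minimal sketch of that lookup path; the class name SummaryFilter and its size and hash-count parameters are illustrative placeholders, not the paper's recommended configuration:

```python
# Sketch of a Bloom-filter URL summary; sizes and hash count are
# illustrative, not the paper's recommended configuration.
import hashlib

class SummaryFilter:
    def __init__(self, num_bits=8192, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, url):
        # Derive k bit positions from slices of one SHA-256 digest.
        digest = hashlib.sha256(url.encode()).digest()
        for i in range(self.num_hashes):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.num_bits

    def add(self, url):
        for pos in self._positions(url):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, url):
        # False positives cost one wasted query; no false negatives for
        # added URLs (staleness aside, since summaries update periodically).
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(url))

# Consult a peer's summary before sending any inter-cache query:
peer = SummaryFilter()
peer.add("http://example.com/index.html")
if peer.might_contain("http://example.com/index.html"):
    print("potential remote hit; query that peer")
```

A false positive only costs one wasted inter-proxy query, while a true negative avoids the query entirely, which is what keeps the protocol's message overhead low.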
Locality-Aware Request Distribution in Cluster-based Network Servers
1998
Cited by 327 (21 self)
We consider cluster-based network servers in which a front-end directs incoming requests to one of a number of back-ends. Specifically, we consider content-based request distribution: the front-end uses the content requested, in addition to information about the load on the back-end nodes, to choose which back-end will handle this request. Content-based request distribution can improve locality in the back-ends' main memory caches, increase secondary storage scalability by partitioning the server's database, and provide the ability to employ back-end nodes that are specialized for certain types of requests. As a specific policy for content-based request distribution, we introduce a simple, practical strategy for locality-aware request distribution (LARD). With LARD, the front-end distributes incoming requests in a manner that achieves high locality in the back-ends' main memory caches as well as load balancing. Locality is increased by dynamically subdividing the server's working set o...
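A minimal sketch of the basic LARD decision described above, in Python; the threshold values T_LOW and T_HIGH and the class name LardFrontEnd are illustrative stand-ins, not the paper's tuned settings:

```python
# Sketch of the basic LARD policy: pin each target to one back-end for
# cache locality, reassigning only under load imbalance. T_LOW/T_HIGH
# are hypothetical thresholds (active requests per node).
T_LOW, T_HIGH = 25, 65

class LardFrontEnd:
    def __init__(self, num_backends):
        self.load = [0] * num_backends   # active requests per back-end
        self.assignment = {}             # target (e.g. URL) -> back-end index

    def dispatch(self, target):
        least = min(range(len(self.load)), key=self.load.__getitem__)
        node = self.assignment.get(target)
        if node is None:
            node = least                 # first request: least-loaded node
        elif self.load[node] > T_HIGH and self.load[least] < T_LOW:
            node = least                 # relieve an overloaded node
        self.assignment[target] = node
        self.load[node] += 1
        return node

    def complete(self, node):
        self.load[node] -= 1             # call when a request finishes

fe = LardFrontEnd(num_backends=4)
print(fe.dispatch("/a.html"), fe.dispatch("/a.html"))  # same node both times
```

The first request for a target pins it to the least-loaded node; later requests reuse that node, preserving its cache locality, unless it is overloaded while another node sits idle.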
An Analysis of Internet Content Delivery Systems
2002
Cited by 318 (9 self)
In the span of only a few years, the Internet has experienced an astronomical increase in the use of specialized content delivery systems, such as content delivery networks and peer-to-peer file sharing systems. Therefore, an understanding of content delivery on the Internet now requires a detailed understanding of how these systems are used in practice. This paper examines content delivery from the point of view of four content delivery systems: HTTP web traffic, the Akamai content delivery network, and Kazaa and Gnutella peer-to-peer file sharing traffic. We collected a trace of all incoming and outgoing network traffic at the University of Washington, a large university with over 60,000 students, faculty, and staff. From this trace, we isolated and characterized traffic belonging to each of these four delivery classes. Our results (1) quantify the rapidly increasing importance of new content delivery systems, particularly peer-to-peer networks, (2) characterize the behavior of these systems from the perspectives of clients, objects, and servers, and (3) derive implications for caching in these systems.
On the Scale and Performance of Cooperative Web Proxy Caching
ACM Symposium on Operating Systems Principles, 1999
Cited by 313 (15 self)
While algorithms for cooperative proxy caching have been widely studied, little is understood about cooperative-caching performance in the large-scale World Wide Web environment. This paper uses both trace-based analysis and analytic modelling to show the potential advantages and drawbacks of inter-proxy cooperation. With our traces, we evaluate quantitatively the performance-improvement potential of cooperation between 200 small-organization proxies within a university environment, and between two large-organization proxies handling 23,000 and 60,000 clients, respectively. With our model, we extend beyond these populations to project cooperative caching behavior in regions with millions of clients. Overall, we demonstrate that cooperative caching has performance benefits only within limited population bounds. We also use our model to examine the implications of future trends in Web-access behavior and traffic.
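The diminishing-returns conclusion can be reproduced qualitatively with a toy Zipf-based calculation in the spirit of the paper's analytic model; the parameters below (popularity exponent, document count, requests per client) are placeholders rather than the paper's measured values, and cache capacity, document changes, and uncacheable traffic are ignored:

```python
# Toy Zipf-based projection of shared-cache hit ratio versus population.
# All parameters are placeholders; the paper's model accounts for more
# factors (cacheability, document rate of change) than this sketch does.
def expected_hit_ratio(clients, docs=100_000, alpha=0.8, reqs_per_client=100):
    norm = sum(1.0 / i ** alpha for i in range(1, docs + 1))
    total = clients * reqs_per_client
    hits = 0.0
    for i in range(1, docs + 1):
        p = (1.0 / i ** alpha) / norm          # Zipf request probability
        # Hits on doc i = expected requests minus the one compulsory miss
        # (which occurs only if the doc is requested at all).
        hits += total * p - (1.0 - (1.0 - p) ** total)
    return hits / total

for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} clients: hit ratio ~ {expected_hit_ratio(n):.3f}")
```

Because Zipf popularity has a heavy tail, each tenfold increase in population adds only a few points of hit ratio in this toy model, consistent with the paper's finding that cooperation pays off only within limited population bounds.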
A survey of web caching schemes for the internet
ACM Computer Communication Review, 1999
Cited by 292 (2 self)
The World Wide Web can be considered as a large distributed information system that provides access to shared data objects. As one of the most popular applications currently running on the Internet, the World Wide Web has grown exponentially in size, which results in network congestion and server overloading. Web caching has been recognized as an effective scheme to alleviate service bottlenecks and reduce network traffic, thereby minimizing user access latency. In this paper, we first describe the elements of a Web caching system and its desirable properties. Then, we survey the state-of-the-art techniques which have been used in Web caching systems. Finally, we discuss the research frontier.
Rate of Change and other Metrics: a Live Study of the World Wide Web
1997
Cited by 216 (25 self)
Caching in the World Wide Web is based on two critical assumptions: that a significant fraction of requests reaccess resources that have already been retrieved; and that those resources do not change between accesses. We tested the validity of these assumptions, and their dependence on characteristics of Web resources, including access rate, age at time of reference, content type, resource size, and Internet top-level domain. We also measured the rate at which resources change, and the prevalence of duplicate copies in the Web. We quantified the potential benefit of a shared proxy-caching server in a large environment by using traces that were collected at the Internet connection points for two large corporations, representing significant numbers of references. Only 22% of the resources referenced in the traces we analyzed were accessed more than once, but about half of the references were to those multiply-referenced resources. Of this half, 13% were to a resource that had been modifi...
Mining Longest Repeating Subsequences To Predict World Wide Web Surfing
1999
Cited by 206 (5 self)
Modeling and predicting user surfing paths involves tradeoffs between model complexity and predictive accuracy. In this paper we explore predictive modeling techniques that attempt to reduce model complexity while retaining predictive accuracy. We show that compared to various Markov models, longest repeating subsequence models are able to significantly reduce model size while retaining the ability to make accurate predictions. In addition, sharp increases in the overall predictive capabilities of these models are achievable by modest increases to the number of predictions made.

1. Introduction

Users surf the World Wide Web (WWW) by navigating along the hyperlinks that connect islands of content. If we could predict where surfers were going (that is, what they were seeking) we might be able to improve surfers' interactions with the WWW. Indeed, several research and industrial thrusts attempt to generate and utilize such predictions. These technologies include those for searching thro...
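A rough sketch in the spirit of a longest repeating subsequence model: store only contexts that occur more than once in the training path, then predict from the longest stored context matching the current suffix. The maximum context length, tie-breaking, and the sample trace are illustrative choices, not the paper's:

```python
# Sketch of a longest-repeating-subsequence predictor: only contexts seen
# at least twice in the training path are stored; prediction uses the
# longest stored context that matches the current suffix.
from collections import Counter, defaultdict

def train(path, max_len=4):
    counts, nexts = Counter(), defaultdict(Counter)
    for k in range(1, max_len + 1):
        for i in range(len(path) - k):
            ctx = tuple(path[i:i + k])
            counts[ctx] += 1
            nexts[ctx][path[i + k]] += 1
    # Keep only repeating subsequences (seen at least twice).
    return {ctx: nxt for ctx, nxt in nexts.items() if counts[ctx] >= 2}

def predict(model, recent, max_len=4):
    for k in range(min(max_len, len(recent)), 0, -1):
        ctx = tuple(recent[-k:])
        if ctx in model:
            return model[ctx].most_common(1)[0][0]
    return None   # no repeating context matches; make no prediction

trace = ["home", "docs", "api", "docs", "api", "faq", "docs", "api", "faq"]
model = train(trace)
print(predict(model, ["docs", "api"]))   # -> "faq"
```

Discarding one-off contexts is what shrinks the model relative to a full Markov table, at the cost of declining to predict on paths never repeated.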
Scalable Content-aware Request Distribution in Cluster-based Network Servers
In Proceedings of the USENIX 2000 Annual Technical Conference, 2000
Cited by 157 (3 self)
We present a scalable architecture for content-aware request distribution in Web server clusters. In this architecture, a level-4 switch acts as the point of contact for the server on the Internet and distributes the incoming requests to a number of back-end nodes. The switch does not perform any content-based distribution. This function is performed by each of the back-end nodes, which may forward the incoming request to another back-end based on the requested content. In terms of scalability, this architecture compares favorably to existing approaches where a front-end node performs content-based distribution. In our architecture, the expensive operations of TCP connection establishment and handoff are distributed among the back-ends, rather than being centralized in the front-end node. Only a minimal additional latency penalty is paid for much improved scalability. We have implemented this new architecture, and we demonstrate its superior scalability by comparing it to a system tha...
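A schematic of the dispatch step this architecture distributes across the back-ends: the content-blind level-4 switch picks an arbitrary node, which serves the request locally or forwards it to the node responsible for that content. The URL-hash partition and the reply strings below are stand-ins for the paper's actual content-based policy and TCP connection handoff mechanism:

```python
# Schematic of distributed content-aware dispatch. responsible_node() is a
# stand-in policy (hash partitioning); forwarding in the real system uses
# TCP handoff rather than a reply string.
import hashlib
import random

BACKENDS = ["node0", "node1", "node2", "node3"]

def responsible_node(url):
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return BACKENDS[h % len(BACKENDS)]   # each URL maps to one back-end

def handle(initial_node, url):
    target = responsible_node(url)
    if target == initial_node:
        return f"{initial_node}: served {url} locally"
    return f"{initial_node}: handed {url} off to {target}"

# The level-4 switch is content-blind, so its choice is effectively arbitrary:
print(handle(random.choice(BACKENDS), "/images/logo.png"))
```

Because every back-end can make this decision, the per-connection work that a centralized content-aware front-end would do is spread across the cluster.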
World Wide Web caching: Trends and techniques
IEEE Communications Magazine, 2000
Cited by 135 (0 self)
Academic and corporate communities have been dedicating considerable effort to World Wide Web caching. When correctly deployed, web caching systems can lead to significant bandwidth savings, server load balancing, perceived network latency reduction, and higher content availability. In this paper, we survey the state-of-the-art in caching designs, presenting a taxonomy of architectures and describing a variety of specific trends and techniques.
Web Prefetching Between Low-Bandwidth Clients and Proxies: Potential and Performance
1999
Cited by 122 (0 self)
The majority of the Internet population access the World Wide Web via dial-up modem connections. Studies have shown that the limited modem bandwidth is the main contributor to latency perceived by users. In this paper, we investigate one approach to reduce latency: prefetching between caching proxies and browsers. The approach relies on the proxy to predict which cached documents a user might reference next, and takes advantage of the idle time between user requests to push or pull the documents to the user. Using traces of modem Web accesses, we evaluate the potential of the technique at reducing client latency, examine the design of prediction algorithms, and investigate their performance while varying parameters and implementation concerns. Our results show that prefetching combined with a large browser cache and delta-compression can reduce client latency by up to 23.4%. The reduction is achieved using the Prediction-by-Partial-Matching (PPM) algorithm, whose accuracy ranges from 40% to ...
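A minimal order-m predictor in the PPM style the abstract names: try the longest matching context first and fall back to shorter ones, predicting only when confidence clears a threshold, since a proxy should prefetch only confident predictions to avoid wasting modem bandwidth. The threshold value and class layout are illustrative, not the paper's configuration:

```python
# Sketch of an order-m PPM-style predictor with fallback to shorter
# contexts. The confidence threshold models the prefetch decision.
from collections import Counter, defaultdict

class PPMPredictor:
    def __init__(self, order=3):
        self.order = order
        self.tables = [defaultdict(Counter) for _ in range(order + 1)]

    def train(self, accesses):
        for k in range(self.order + 1):
            for i in range(k, len(accesses)):
                ctx = tuple(accesses[i - k:i])
                self.tables[k][ctx][accesses[i]] += 1

    def predict(self, recent, min_confidence=0.5):
        # Longest matching context first, then progressively shorter ones.
        for k in range(min(self.order, len(recent)), -1, -1):
            table = self.tables[k].get(tuple(recent[-k:]) if k else ())
            if table:
                doc, hits = table.most_common(1)[0]
                if hits / sum(table.values()) >= min_confidence:
                    return doc   # confident enough to prefetch
        return None              # staying idle beats wasting modem bandwidth

p = PPMPredictor(order=2)
p.train(["a", "b", "c", "a", "b", "c", "a", "b"])
print(p.predict(["a", "b"]))     # -> "c"
```

Raising min_confidence trades fewer wasted prefetches for fewer latency-saving hits, which is the accuracy-versus-traffic tradeoff the paper's evaluation explores.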