Results 1–10 of 75
Object Replication Strategies in Content Distribution Networks
Computer Communications, 2001
Abstract

Cited by 138 (0 self)
content distribution networks (CDNs). In this paper we study the problem of optimally replicating objects in CDN servers. In our model, each Internet Autonomous System (AS) is a node with finite storage capacity for replicating objects. The optimization problem is to replicate objects so that when clients fetch objects from the nearest CDN server with the requested object, the average number of ASs traversed is minimized. We formulate this problem as a combinatorial optimization problem. We show that this optimization problem is NP-complete. We develop four natural heuristics and compare them numerically using real Internet topology data. We find that the best results are obtained with heuristics that have all the CDN servers cooperating in making the replication decisions. We also develop a model for studying the benefits of cooperation between nodes, which provides insight into peer-to-peer content distribution.
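As an illustration of the cooperative style of heuristic this abstract describes, here is a minimal greedy replication sketch: repeatedly place whichever (node, object) replica most reduces the demand-weighted distance to the nearest copy, subject to per-node capacity. The function name, data layout, and origin-node convention are all illustrative assumptions, not the paper's exact formulation.

```python
# A hedged sketch of a cooperative greedy replication heuristic
# (illustrative only; not the paper's exact model or heuristics).

def greedy_replicate(dist, demand, capacity, objects):
    """Greedily add replicas, always choosing the (node, object) pair
    that most reduces the demand-weighted distance to the nearest copy.

    dist[u][v]   : hop distance between nodes u and v
    demand[u][o] : request rate of node u for object o
    capacity[u]  : number of replicas node u can hold
    objects      : object ids; every object starts at origin node 0,
                   which is assumed to have room for all of them
    """
    nodes = list(capacity)
    placement = {u: set() for u in nodes}
    placement[0].update(objects)
    used = {u: 0 for u in nodes}
    used[0] = len(objects)

    def cost():
        # demand-weighted distance from each node to its nearest replica
        return sum(
            demand[u].get(o, 0.0) *
            min(dist[u][v] for v in nodes if o in placement[v])
            for u in nodes for o in objects)

    while True:
        base, best = cost(), None
        for u in nodes:
            if used[u] >= capacity[u]:
                continue
            for o in objects:
                if o in placement[u]:
                    continue
                placement[u].add(o)          # tentatively place a replica
                gain = base - cost()
                placement[u].discard(o)
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, u, o)
        if best is None:
            return placement
        _, u, o = best
        placement[u].add(o)
        used[u] += 1
```

On a toy three-node path with demand concentrated at the edges, the greedy pushes each object toward its heaviest requesters, which is the cooperative behavior the paper finds most effective.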
The Cache Location Problem
 IEEE/ACM Transactions on Networking
Abstract

Cited by 131 (6 self)
This paper studies the problem of where to place network caches. Emphasis is given to caches that are transparent to the clients since they are easier to manage and they require no cooperation from the clients. Our goal is to minimize the overall flow or the average delay by placing a given number of caches in the network.
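A common greedy for this kind of placement problem adds caches one at a time, each chosen to minimize the resulting average client delay. A minimal sketch under that interpretation (the function name and delay model are illustrative; the paper itself develops its own exact and heuristic methods for transparent caches):

```python
def greedy_cache_placement(k, candidates, clients, delay):
    """Place k caches one at a time, each minimizing average delay.

    k          : number of caches to place
    candidates : candidate cache locations
    clients    : {client: request_rate}
    delay      : delay(client, location) -> latency if served there
    """
    chosen = []
    for _ in range(k):
        def avg_delay(extra):
            # average demand-weighted delay if `extra` joins the chosen set
            locs = chosen + [extra]
            total = sum(rate * min(delay(c, l) for l in locs)
                        for c, rate in clients.items())
            return total / sum(clients.values())
        best = min((c for c in candidates if c not in chosen), key=avg_delay)
        chosen.append(best)
    return chosen
```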
A new greedy approach for facility location problems
Abstract

Cited by 121 (9 self)
We present a simple and natural greedy algorithm for the metric uncapacitated facility location problem achieving an approximation guarantee of 1.61, whereas the best previously known was 1.73. Furthermore, we will show that our algorithm has a property which allows us to apply the technique of Lagrangian relaxation. Using this property, we can find better approximation algorithms for many variants of the facility location problem, such as the capacitated facility location problem with soft capacities and a common generalization of the k-median and facility location problem. We will also prove a lower bound on the approximability of the k-median problem.
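The star-greedy flavor of algorithm analyzed in this line of work can be sketched as follows: repeatedly pick the (facility, set of unconnected cities) "star" of minimum cost per city, charging the opening cost only once per facility. This simplified sketch conveys the mechanic only; it does not reproduce the paper's charging argument or its 1.61 guarantee.

```python
def greedy_ufl(open_cost, conn):
    """Star greedy for uncapacitated facility location.

    open_cost[i] : cost of opening facility i
    conn[i][j]   : cost of connecting city j to facility i (metric)
    Returns the set of opened facilities and a city -> facility map.
    """
    cities = set(next(iter(conn.values())))
    opened, assign = set(), {}
    while len(assign) < len(cities):
        best = None  # (cost per city, facility, cities in the star)
        for i, f in open_cost.items():
            # unconnected cities sorted by connection cost to facility i
            cand = sorted((conn[i][j], j) for j in cities if j not in assign)
            run = 0.0
            fee = 0.0 if i in opened else f  # opening cost paid once
            for k, (c, _) in enumerate(cand, 1):
                run += c
                ratio = (fee + run) / k
                if best is None or ratio < best[0]:
                    best = (ratio, i, [j for _, j in cand[:k]])
        _, i, star = best
        opened.add(i)
        for j in star:
            assign[j] = i
    return opened, assign
```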
Greedy Facility Location Algorithms analyzed using Dual Fitting with Factor-Revealing LP
Journal of the ACM, 2001
Abstract

Cited by 104 (13 self)
We present a natural greedy algorithm for the metric uncapacitated facility location problem and use the method of dual fitting to analyze its approximation ratio, which turns out to be 1.861. The running time of our algorithm is O(m log m), where m is the total number of edges in the underlying complete bipartite graph between cities and facilities. We use our algorithm to improve recent results for some variants of the problem, such as the fault tolerant and outlier versions. In addition, we introduce a new variant which can be seen as a special case of the concave cost version of this problem.
Efficient and Adaptive Web Replication using Content Clustering
IEEE Journal on Selected Areas in Communications, 2003
Abstract

Cited by 41 (3 self)
Recently there has been an increasing deployment of content distribution networks (CDNs) that offer hosting services to Web content providers. In this paper, we first compare the uncooperative pulling of Web contents used by commercial CDNs with cooperative pushing. Our results show that the latter can achieve comparable user-perceived performance with only 4–5% of the replication and update traffic of the former scheme. Therefore we explore how to efficiently push content to CDN nodes. Using trace-driven simulation, we show that replicating content in units of URLs can yield a 60–70% reduction in clients' latency, compared to replicating in units of Web sites. However, such fine-grained replication is very expensive to perform.
A Framework for Evaluating Replica Placement Algorithms
2002
Abstract

Cited by 38 (1 self)
This paper introduces a framework for evaluating replica placement algorithms (RPAs) for content delivery networks (CDNs), as well as RPAs from other fields that might be applicable to current or future CDNs. First, the framework classifies and qualitatively compares RPAs using a generic set of primitives that capture problem definitions and heuristics. Second, it provides estimates for the decision times of RPAs using an analytic model. To achieve accuracy, the model takes into account disk accesses and message sizes, in addition to the computational complexity and message counts that have traditionally been considered. Third, it uses the "goodness" of produced placements to compare RPAs even when they have different problem definitions. Based on these evaluations, we identify open issues and potential areas for future research.
On the Optimization of Storage Capacity Allocation for Content Distribution
Computer Networks, 2003
Abstract

Cited by 18 (1 self)
The addition of storage capacity in network nodes for the caching or replication of popular data objects results in reduced end-user delay, reduced network traffic, and improved scalability.
Online Algorithms for Network Design
In Proceedings of the 16th ACM Symposium on Parallelism in Algorithms and Architectures, 2003
Abstract

Cited by 16 (1 self)
We give the first polylogarithmic-competitive online algorithms for two-metric network design problems. These problems are very general, including as special cases such problems as Steiner tree, facility location, and concave-cost single-commodity flow.
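For flavor, the simplest online network-design greedy is the classic one for online Steiner tree: connect each arriving terminal to the nearest already-connected node, which is O(log n)-competitive in metric spaces. This is an illustrative special case only, not the paper's (far more general) two-metric algorithm.

```python
def online_steiner(dist, terminals):
    """Online greedy Steiner tree sketch.

    dist      : metric distance function dist(u, v)
    terminals : terminals in arrival order (the first is the root)
    Returns the tree edges chosen and their total cost.
    """
    connected = [terminals[0]]
    total = 0.0
    edges = []
    for t in terminals[1:]:
        # attach the new terminal to its nearest connected node
        nearest = min(connected, key=lambda u: dist(u, t))
        edges.append((nearest, t))
        total += dist(nearest, t)
        connected.append(t)
    return edges, total
```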
Joint Object Placement and Node Dimensioning for Internet Content
Information Processing Letters, 2004
Abstract

Cited by 11 (6 self)
This paper studies a resource allocation problem in a graph, concerning the joint optimization of capacity allocation decisions and object placement decisions, given a single capacity constraint. This problem has applications in Internet content distribution and other domains. The solution to the problem comes through a multicommodity generalization of the single-commodity k-median problem. A two-step algorithm is developed that is capable of solving the multicommodity case optimally in polynomial time for the case of tree graphs, and approximately (within a constant factor of the optimal) in polynomial time for the case of general graphs.
High-Density Model for Server Allocation and Placement
In Proc. of ACM SIGMETRICS ’02, 2002
Abstract

Cited by 10 (0 self)
It is well known that optimal server placement is NP-hard. We present an approximate model for the case when both clients and servers are dense, and propose a simple server allocation and placement algorithm based on high-rate vector quantization theory. The key idea is to regard the location of a request as a random variable with probability density proportional to the demand at that location, and the problem of server placement as source coding, i.e., to optimally map a source value (request location) to a codeword (server location) to minimize distortion (network cost). This view has led to a joint server allocation and placement algorithm whose time complexity is linear in the number of clients. Simulations are presented to illustrate its performance.
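The source-coding view maps naturally onto Lloyd-style iteration: alternate nearest-server assignment with demand-weighted centroid updates. Below is a 1-D toy sketch under that interpretation; the paper's contribution is the high-density asymptotic model and its allocation rule, not this particular iteration.

```python
import random

def lloyd_place(clients, weights, k, iters=50, seed=0):
    """Place k servers among weighted 1-D client locations by Lloyd
    iteration: codewords = server locations, distortion = the
    demand-weighted distance from clients to their nearest server.
    """
    rng = random.Random(seed)
    servers = rng.sample(clients, k)  # initialize at random client sites
    for _ in range(iters):
        # assignment step: map each client to its nearest server
        groups = {i: [] for i in range(k)}
        for x, w in zip(clients, weights):
            i = min(range(k), key=lambda i: abs(x - servers[i]))
            groups[i].append((x, w))
        # update step: move each server to its cell's weighted centroid
        for i, g in groups.items():
            if g:
                servers[i] = sum(x * w for x, w in g) / sum(w for _, w in g)
    return sorted(servers)
```

On two well-separated client clusters the iteration settles on one server per cluster, which is the "one codeword per density mode" behavior the quantization analogy predicts.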