Results 1–10 of 52
Object Replication Strategies in Content Distribution Networks
 Computer Communications
, 2001
Abstract

Cited by 120 (0 self)
content distribution networks (CDNs). In this paper we study the problem of optimally replicating objects in CDN servers. In our model, each Internet Autonomous System (AS) is a node with finite storage capacity for replicating objects. The optimization problem is to replicate objects so that when clients fetch objects from the nearest CDN server with the requested object, the average number of ASs traversed is minimized. We formulate this problem as a combinatorial optimization problem. We show that this optimization problem is NP-complete. We develop four natural heuristics and compare them numerically using real Internet topology data. We find that the best results are obtained with heuristics that have all the CDN servers cooperating in making the replication decisions. We also develop a model for studying the benefits of cooperation between nodes, which provides insight into peer-to-peer content distribution.
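The cooperative placement idea in this abstract can be sketched as a simple greedy heuristic: repeatedly place the (server, object) copy that most reduces the demand-weighted distance over all ASs, until no placement fits or helps. This is an illustrative sketch under assumed inputs (a hop-count matrix, per-AS demand rates, per-node capacities, a single origin server), not the paper's exact heuristics.

```python
def greedy_replicate(dist, demand, capacity, origin=0):
    """Greedy cooperative replication sketch (illustrative inputs).

    dist[i][k]   : hop count between AS i and AS k
    demand[i][o] : request rate of AS i for object o
    capacity[k]  : number of object copies AS k can store
    Every object is initially available only at `origin`.
    Returns a set of (node, object) placement pairs.
    """
    n = len(dist)
    objects = range(len(demand[0]))
    # nearest[o][i]: current distance from AS i to the closest copy of o
    nearest = {o: {i: dist[i][origin] for i in range(n)} for o in objects}
    placement = set()
    used = [0] * n
    while True:
        best, best_gain = None, 0
        for k in range(n):
            if used[k] >= capacity[k]:
                continue
            for o in objects:
                if (k, o) in placement:
                    continue
                # gain: total reduction in demand-weighted distance
                gain = sum(demand[i][o] * max(0, nearest[o][i] - dist[i][k])
                           for i in range(n))
                if gain > best_gain:
                    best, best_gain = (k, o), gain
        if best is None:
            break  # no remaining placement improves the objective
        k, o = best
        placement.add(best)
        used[k] += 1
        for i in range(n):
            nearest[o][i] = min(nearest[o][i], dist[i][k])
    return placement
```

Because each step re-evaluates gains against the current set of copies, the decisions are cooperative in the sense the abstract describes: every node's placement accounts for copies already held elsewhere.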
The Cache Location Problem
 IEEE/ACM Transactions on Networking
Abstract

Cited by 118 (6 self)
This paper studies the problem of where to place network caches. Emphasis is given to caches that are transparent to the clients since they are easier to manage and they require no cooperation from the clients. Our goal is to minimize the overall flow or the average delay by placing a given number of caches in the network.
A new greedy approach for facility location problems
Abstract

Cited by 116 (9 self)
We present a simple and natural greedy algorithm for the metric uncapacitated facility location problem achieving an approximation guarantee of 1.61 whereas the best previously known was 1.73. Furthermore, we will show that our algorithm has a property which allows us to apply the technique of Lagrangian relaxation. Using this property, we can find better approximation algorithms for many variants of the facility location problem, such as the capacitated facility location problem with soft capacities and a common generalization of the k-median and facility location problem. We will also prove a lower bound on the approximability of the k-median problem.
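A greedy of this general flavor can be sketched as follows: repeatedly open the (facility, client-set) pair with the smallest cost-per-client ratio, until every client is assigned. This illustrates the basic greedy idea behind uncapacitated facility location; it is not the paper's 1.61-approximation algorithm, and all names and inputs are illustrative.

```python
def greedy_ufl(open_cost, dist):
    """Ratio-greedy sketch for uncapacitated facility location.

    open_cost[f] : cost of opening facility f
    dist[f][c]   : connection cost from facility f to client c
    Returns (total_cost, assignment dict client -> facility).
    """
    n_clients = len(dist[0])
    unassigned = set(range(n_clients))
    opened, assign = set(), {}
    total = 0.0
    while unassigned:
        best = None  # (ratio, facility, client subset, incremental cost)
        for f in range(len(open_cost)):
            # an already-open facility has no further opening fee
            fee = 0.0 if f in opened else open_cost[f]
            # cheapest-first prefixes of the unassigned clients
            acc, chosen = fee, []
            for d, c in sorted((dist[f][c], c) for c in unassigned):
                acc += d
                chosen.append(c)
                ratio = acc / len(chosen)
                if best is None or ratio < best[0]:
                    best = (ratio, f, list(chosen), acc)
        _, f, chosen, cost = best
        total += cost
        opened.add(f)
        for c in chosen:
            assign[c] = f
            unassigned.discard(c)
    return total, assign
```

Each iteration assigns at least one client, so the loop terminates; only the cheapest-prefix subsets of each facility need to be examined because any other subset has a worse ratio for the same facility.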
Greedy Facility Location Algorithms analyzed using Dual Fitting with Factor-Revealing LP
 Journal of the ACM
, 2001
Abstract

Cited by 100 (13 self)
We present a natural greedy algorithm for the metric uncapacitated facility location problem and use the method of dual fitting to analyze its approximation ratio, which turns out to be 1.861. The running time of our algorithm is O(m log m), where m is the total number of edges in the underlying complete bipartite graph between cities and facilities. We use our algorithm to improve recent results for some variants of the problem, such as the fault tolerant and outlier versions. In addition, we introduce a new variant which can be seen as a special case of the concave cost version of this problem.
A Framework for Evaluating Replica Placement Algorithms
, 2002
Abstract

Cited by 37 (1 self)
This paper introduces a framework for evaluating replica placement algorithms (RPA) for content delivery networks (CDN) as well as RPAs from other fields that might be applicable to current or future CDNs. First, the framework classifies and qualitatively compares RPAs using a generic set of primitives that capture problem definitions and heuristics. Second, it provides estimates for the decision times of RPAs using an analytic model. To achieve accuracy, the model takes into account disk accesses and message sizes, in addition to computational complexity and message numbers that have been considered traditionally. Third, it uses the "goodness" of produced placements to compare RPAs even when they have different problem definitions. Based on these evaluations, we identify open issues and potential areas for future research.
Efficient and Adaptive Web Replication using Content Clustering
 IEEE Journal on Selected Areas in Communications
, 2003
Abstract

Cited by 33 (3 self)
Recently there has been an increasing deployment of content distribution networks (CDNs) that offer hosting services to Web content providers. In this paper, we first compare the uncooperative pulling of Web contents used by commercial CDNs with cooperative pushing. Our results show that the latter can achieve comparable users' perceived performance with only 4–5% of the replication and update traffic of the former scheme. Therefore we explore how to efficiently push content to CDN nodes. Using trace-driven simulation, we show that replicating content in units of URLs can yield a 60–70% reduction in clients' latency, compared to replicating in units of Web sites. However, it is very expensive to perform such fine-grained replication.
On the Optimization of Storage Capacity Allocation for Content Distribution
 Computer Networks
, 2003
Abstract

Cited by 16 (1 self)
The addition of storage capacity in network nodes for the caching or replication of popular data objects results in reduced end-user delay, reduced network traffic, and improved scalability.
Online Algorithms for Network Design
 In Proceedings of the 16th ACM Symposium on Parallelism in Algorithms and Architectures
, 2003
Abstract

Cited by 15 (1 self)
We give the first polylogarithmic-competitive online algorithms for two-metric network design problems. These problems are very general, including as special cases such problems as Steiner tree, facility location, and concave-cost single-commodity flow.
Improved Algorithms for Fault Tolerant Facility Location
 In Symposium on Discrete Algorithms
, 2001
Abstract

Cited by 9 (2 self)
We consider a generalization of the classical facility location problem, where we require the solution to be fault-tolerant. Every demand point j is served by r_j facilities instead of just one. The facilities other than the closest one are "backup" facilities for that demand, and will be used only if the closer facility (or the link to it) fails. Hence, for any demand, we assign nonincreasing weights to the routing costs to farther facilities. The cost of assignment for demand j is the weighted linear combination of the assignment costs to its r_j closest open facilities. We wish to minimize the sum of the cost of opening the facilities and the assignment cost of each demand j. We obtain a factor-4 approximation to this problem through the application of various rounding techniques to the linear relaxation of an integer program formulation. We further improve this...
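The weighted assignment cost defined in this abstract — a nonincreasing-weighted combination of the distances to a demand's r_j closest open facilities — can be computed with a small helper. The function and its inputs are hypothetical, used only to make the cost definition concrete.

```python
def ft_assignment_cost(dists_to_open, r, weights):
    """Fault-tolerant assignment cost for one demand point.

    dists_to_open : distances from the demand to each open facility
    r             : number of facilities that must serve the demand
    weights       : nonincreasing weights, len(weights) >= r; the
                    largest weight applies to the closest facility
    """
    closest = sorted(dists_to_open)[:r]
    return sum(w * d for w, d in zip(weights, closest))
```

The objective the paper minimizes is then the total facility opening cost plus the sum of this quantity over all demand points.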
Joint Object Placement and Node Dimensioning for Internet Content
 Information Processing Letters
, 2004
Abstract

Cited by 9 (5 self)
This paper studies a resource allocation problem in a graph, concerning the joint optimization of capacity allocation decisions and object placement decisions, given a single capacity constraint. This problem has applications in Internet content distribution and other domains. The solution to the problem comes through a multicommodity generalization of the single-commodity k-median problem. A two-step algorithm is developed that is capable of solving the multicommodity case optimally in polynomial time for the case of tree graphs, and approximately (within a constant factor of the optimal) in polynomial time for the case of general graphs.