Results 1–10 of 96
How bad is selfish routing?
Journal of the ACM, 2002
Abstract

Cited by 516 (27 self)
We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route traffic such that the sum of all travel times—the total latency—is minimized. In many settings, it may be expensive or impossible to regulate network traffic so as to implement an optimal assignment of routes. In the absence of regulation by some central authority, we assume that each network user routes its traffic on the minimum-latency path available to it, given the network congestion caused by the other users. In general, such a "selfishly motivated" assignment of traffic to paths will not minimize the total latency; hence, this lack of regulation carries the cost of decreased network performance. In this article, we quantify the degradation in network performance due to unregulated traffic. We prove that if the latency of each edge is a linear function of its congestion, then the total latency of the routes chosen by selfish network users is at most 4/3 times the minimum possible total latency (subject to the condition that all traffic must be routed). We also consider the more general setting in which edge latency functions are assumed only to be continuous and nondecreasing in the edge congestion. Here, the total …
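The 4/3 bound for linear latencies is tight already in the classical two-link Pigou example, which a few lines of arithmetic can verify. The instance below is the standard textbook example, used here only as an illustration (it is not reproduced from the paper's text):

```python
# Pigou's example: one unit of traffic from s to t over two parallel edges.
# Edge 1 has latency l1(x) = x (congestion-dependent); edge 2 has l2(x) = 1.

def total_latency(x1):
    """Total latency when x1 traffic uses edge 1 and 1 - x1 uses edge 2."""
    x2 = 1.0 - x1
    return x1 * x1 + x2 * 1.0  # x1 * l1(x1) + x2 * l2(x2)

# Selfish (Nash) routing: edge 1 is never slower than edge 2 (x <= 1 implies
# l1(x) <= 1), so all traffic takes edge 1.
nash_cost = total_latency(1.0)   # 1.0

# Optimal routing: minimize x^2 + (1 - x); setting 2x - 1 = 0 gives x = 1/2.
opt_cost = total_latency(0.5)    # 0.75

print(nash_cost / opt_cost)      # 4/3, matching the linear-latency bound
```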
Primal-dual approximation algorithms for metric facility location and k-median problems
Journal of the ACM, 1999
Nearest Neighbors in High-Dimensional Spaces
2004
Abstract

Cited by 76 (2 self)
In this chapter we consider the following problem: given a set P of points in a high-dimensional space, construct a data structure which, given any query point q, finds the point in P closest to q. This problem, called nearest neighbor search, is of significant importance to several areas of computer science, including pattern recognition, searching in multimedia data, vector compression [GG91], computational statistics [DW82], and data mining. Many of these applications involve data sets which are very large (e.g., a database containing Web documents could contain over one billion documents). Moreover, the dimensionality of the points is usually large as well (e.g., on the order of a few hundred). Therefore, it is crucial to design algorithms which scale well with the database size as well as with the dimension. The nearest-neighbor problem is an example of a large class of proximity problems, which, roughly speaking, are problems whose definitions involve the notion of...
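The baseline every data structure in this setting is measured against is the exact linear scan, whose per-query cost grows with both the database size n and the dimension d. A minimal sketch (not an algorithm from the chapter itself):

```python
import math

def nearest_neighbor(P, q):
    """Linear-scan exact nearest neighbor: O(n * d) per query, which is
    exactly the scaling that sublinear data structures aim to beat."""
    def dist(p):
        # Euclidean distance from candidate point p to the query q.
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    return min(P, key=dist)

P = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
print(nearest_neighbor(P, (0.9, 0.1)))  # (1.0, 0.0)
```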
The Online Median Problem
In Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science, 2000
Abstract

Cited by 75 (2 self)
We introduce a natural variant of the (metric uncapacitated) k-median problem that we call the online median problem. Whereas the k-median problem involves optimizing the simultaneous placement of k facilities, the online median problem imposes the following additional constraints: the facilities are placed one at a time; a facility cannot be moved once it is placed; and the total number of facilities to be placed, k, is not known in advance. The objective of an online median algorithm is to minimize the competitive ratio, that is, the worst-case ratio of the cost of an online placement to that of an optimal offline placement. Our main result is a linear-time constant-competitive algorithm for the online median problem. In addition, we present a related, though substantially simpler, linear-time constant-factor approximation algorithm for the (metric uncapacitated) facility location problem. The latter algorithm is similar in spirit to the recent primal-dual-based facility location algorithm of Jain and Vazirani, but our approach is more elementary and yields an improved running time.
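The online constraints are easy to make concrete with a naive greedy heuristic: place facilities one at a time, never move one, and keep every prefix usable since k is unknown. The sketch below illustrates only those constraints; it is NOT the paper's algorithm (which uses a more careful hierarchical rule to get a constant competitive ratio) and carries no guarantee:

```python
def kmedian_cost(points, facilities):
    """Sum over clients of the distance to the nearest placed facility (1-D metric)."""
    return sum(min(abs(p - f) for f in facilities) for p in points)

def online_median(points, k):
    """Naive greedy online placement: each step adds the point that most
    reduces the current cost. Facilities are never moved, and every prefix
    of the placement must serve as the answer for that value of k."""
    placed = []
    for _ in range(k):
        best = min((p for p in points if p not in placed),
                   key=lambda p: kmedian_cost(points, placed + [p]))
        placed.append(best)
        yield list(placed)  # the prefix placed so far is the answer for this k

points = [0.0, 1.0, 2.0, 10.0, 11.0]
for prefix in online_median(points, 2):
    print(prefix, kmedian_cost(points, prefix))
```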
The Prize Collecting Steiner Tree Problem
In Proceedings of the 11th Annual ACM-SIAM Symposium on Discrete Algorithms, 1998
Abstract

Cited by 70 (1 self)
This work is motivated by an application in local access network design that can be modeled using the NP-hard Prize Collecting Steiner Tree problem. We consider several variants of this problem and of the primal-dual 2-approximation algorithm devised for it by Goemans and Williamson. We develop several modifications to the algorithm which lead to theoretical as well as practical improvements in its performance on the original problem. We also demonstrate how existing algorithms can be extended to solve bicriteria variants of the problem with constant-factor approximation guarantees. Our work leads to practical heuristics applicable in network design.
A Combinatorial, Primal-Dual Approach to Semidefinite Programs
Abstract

Cited by 65 (11 self)
Semidefinite programs (SDPs) have been used in many recent approximation algorithms. We develop a general primal-dual approach to solve SDPs using a generalization of the well-known multiplicative weights update rule to symmetric matrices. For a number of problems, such as Sparsest Cut and Balanced Separator in undirected and directed weighted graphs, and the Min UnCut problem, this yields combinatorial approximation algorithms that are significantly more efficient than interior point methods. The design of our primal-dual algorithms is guided by a robust analysis of rounding algorithms used to obtain integer solutions from fractional ones.
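The rule being generalized is the classical scalar multiplicative weights update; the matrix version replaces the weight vector by a matrix exponential of the accumulated losses. A sketch of the scalar rule only (standard material, not the paper's matrix algorithm):

```python
import math

def multiplicative_weights(losses_per_round, eta=0.1):
    """Scalar multiplicative weights update over n experts.
    losses_per_round: a list of rounds, each a list of losses in [0, 1].
    Returns the final normalized weight vector (a probability distribution)."""
    n = len(losses_per_round[0])
    w = [1.0] * n
    for losses in losses_per_round:
        # Each expert's weight shrinks exponentially in its accumulated loss.
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    total = sum(w)
    return [wi / total for wi in w]

# Expert 0 always loses, expert 1 never does: the mass shifts to expert 1.
dist = multiplicative_weights([[1.0, 0.0]] * 50, eta=0.1)
print(dist)  # expert 1 ends up with almost all of the probability mass
```

The matrix generalization used in the paper keeps exp(−η · Σ loss matrices), normalized by its trace, in place of this weight vector.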
Strengthening Integrality Gaps for Capacitated Network Design and Covering Problems
Abstract

Cited by 61 (1 self)
A capacitated covering IP is an integer program of the form min{cx : Ux ≥ d, 0 ≤ x ≤ b, x ∈ Z+}, where all entries of c, U, and d are nonnegative. Given such a formulation, the ratio between the optimal integer solution and the optimal solution to the linear programming relaxation can be as bad as ‖d‖∞, even when U consists of a single row. We show that by adding additional inequalities, this ratio can be improved significantly. In the general case, we show that the improved ratio is bounded by the maximum number of nonzero coefficients in a row of U, and provide a polynomial-time approximation algorithm to achieve this bound. This improves the previous best approximation algorithm, which guaranteed a solution within the maximum row sum times optimum. We also show that for particular instances of capacitated covering problems, including the minimum knapsack problem and the capacitated network design problem, these additional inequalities yield even stronger improvements in the IP/LP ratio. For the minimum knapsack problem, we show that this improved ratio is at most 2. This is the first nontrivial IP/LP ratio for this basic problem. Capacitated network design generalizes the classical network design problem by introducing capacities on the edges, whereas previous work only considers the case when all capacities equal 1. For capacitated network design problems, we show that this improved ratio depends on a parameter of the graph, and we also provide polynomial-time approximation algorithms to match this bound. This improves on the best previous m-approximation, where m is the number of edges in the graph. We also discuss improvements for some other special capacitated covering problems, including the fixed charge network flow problem. Finally, for the capacitated network design problem, we give some stronger results and algorithms for series-parallel graphs and strengthen these further for outerplanar graphs. Most of our approximation algorithms rely on solving a single LP.
When the original LP (before adding our strengthening inequalities) has a polynomial number of constraints, we describe a combinatorial FPTAS for the LP with our (exponentially many) inequalities added. Our contribution here is to describe an appropriate …
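For the minimum knapsack special case, both the ‖d‖∞ gap of the natural LP and the way a knapsack-cover inequality repairs it can be seen on a two-item instance. The instance below is a hand-worked illustration of ours, not one taken from the paper:

```python
# Minimum knapsack: cover demand D by items with sizes and costs,
# minimizing total cost. Natural LP: min c.x s.t. u.x >= D, 0 <= x <= 1.
D = 100.0
items = [{"size": D - 1, "cost": 0.0},   # item 0: almost covers D, free
         {"size": D,     "cost": 1.0}]   # item 1: covers D alone, cost 1

# Any integer solution must take item 1 (item 0 alone covers only D - 1):
ip_opt = 1.0

# The LP takes item 0 fully, then needs only a 1/D fraction of item 1
# to cover the remaining one unit of demand:
lp_x = [1.0, 1.0 / D]
lp_cost = sum(it["cost"] * x for it, x in zip(items, lp_x))   # = 1/D
print(ip_opt / lp_cost)   # gap of D: as bad as the demand

# Knapsack-cover inequality for the set A = {item 0}: the residual demand
# is D - (D - 1) = 1, and item 1's size is truncated to min(size, 1) = 1,
# forcing 1 * x1 >= 1, i.e. x1 >= 1. With it, the LP cost rises to ip_opt.
```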
Designing networks for selfish users is hard
In Proceedings of the 42nd Annual Symposium on Foundations of Computer Science, 2001
Abstract

Cited by 59 (8 self)
We consider a directed network in which every edge possesses a latency function specifying the time needed to traverse the edge given its congestion. Selfish, noncooperative agents constitute the network traffic and wish to travel from a source s to a sink t as quickly as possible. Since the route chosen by one network user affects the congestion (and hence the latency) experienced by others, we model the problem as a noncooperative game. Assuming each agent controls only a negligible portion of the overall traffic, Nash equilibria in this game correspond to s-t flows in which all flow paths have equal latency. A natural measure for the performance of a network used by selfish agents is the common latency experienced by each user in a Nash equilibrium. It is a counterintuitive but well-known fact that removing edges from a network may improve its performance; the most famous example of this phenomenon is the so-called Braess's Paradox. This fact motivates the following network design problem: given such a network, which edges should be removed to obtain the best possible flow at Nash equilibrium? Equivalently, given a large network of candidate edges to be built, which subnetwork will exhibit the best performance when used selfishly? We give optimal inapproximability results and approximation algorithms for several network design problems of this type. For example, we prove that for networks with n vertices and continuous, nondecreasing latency functions, there is no approximation algorithm for this problem with approximation ratio less than n/2 (unless P = NP). We also prove this hardness result to be best possible by exhibiting an n/2-approximation algorithm. For networks in which the latency of each edge is a linear function of the congestion, we prove that there is no (4/3 − ε)-approximation algorithm for the problem (for any ε > 0, unless P = NP); the existence of a 4/3-approximation algorithm follows easily from existing work, proving this hardness result sharp. Moreover, we prove that an optimal approximation algorithm for these problems is what we call the trivial algorithm: given a network of candidate edges, build the entire network. A consequence of this result is that Braess's Paradox (even in its worst-possible manifestation) is impossible to detect efficiently.
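Braess's Paradox itself is easy to verify numerically on the classical four-node network with linear latencies and one unit of selfish flow. The instance below is the standard textbook example, used as an illustration:

```python
# Classical Braess network: s->v has latency x, s->w has latency 1,
# v->t has latency 1, w->t has latency x; one unit of selfish flow s to t.

# Without the shortcut v->w: by symmetry the equilibrium splits the flow
# half/half over s->v->t and s->w->t, so each path has latency 0.5 + 1.
latency_without_shortcut = 0.5 + 1   # 1.5

# With a zero-latency shortcut v->w: all flow takes s->v->w->t, with
# latency 1 + 0 + 1; no user can improve, since either alternative path
# then also costs 1 + 1 = 2. Adding the edge made everyone worse off.
latency_with_shortcut = 1 + 0 + 1    # 2.0

# Removing the shortcut improves every user's latency by a factor of 4/3.
print(latency_with_shortcut / latency_without_shortcut)
```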
Primal-dual algorithms for connected facility location problems
Algorithmica, 2002
Abstract

Cited by 58 (7 self)
We consider the Connected Facility Location problem. We are given a graph G = (V, E) with costs {c_e} on the edges, a set of facilities F ⊆ V, and a set of clients D ⊆ V. Facility i has a facility opening cost f_i and client j has d_j units of demand. We are also given a parameter M ≥ 1. A solution opens some facilities, say F′, assigns each client j to an open facility i(j), and connects the open facilities by a Steiner tree T. The total cost incurred is ∑_{i∈F′} f_i + ∑_{j∈D} d_j c_{i(j)j} + M ∑_{e∈T} c_e. We want a solution of minimum cost. A special case of this problem is when all opening costs are 0 and facilities may be opened anywhere, i.e., F = V. If we know a facility v that is open, then the problem becomes a special case of the single-sink buy-at-bulk problem with two cable types, also known as the rent-or-buy problem. We give the first primal-dual algorithms for these problems and achieve the best known approximation guarantees. We give an 8.55-approximation algorithm for the connected facility location problem and a 4.55-approximation algorithm for the rent-or-buy problem. Previously the best approximation factors for these problems were 10.66 and 9.001, respectively [8]. Further, these results were not combinatorial — they were obtained by solving an exponential-size linear programming relaxation. Our algorithm integrates the primal-dual approaches for the facility location problem [11] and the Steiner tree problem [1, 3]. We also consider the connected k-median problem and give a constant-factor approximation by using our primal-dual algorithm for connected facility location. We generalize our results to an edge-capacitated variant of these problems and give a constant-factor approximation for these variants.
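The objective combines three terms, and evaluating it for a candidate solution is mechanical. A sketch following the abstract's notation, on a made-up toy instance (facility and client names here are invented for illustration):

```python
def cfl_cost(open_facilities, assign, demand, opening_cost, dist,
             tree_edges, edge_cost, M):
    """Connected facility location objective:
    sum of opening costs f_i over open facilities,
    + sum over clients j of d_j * (distance from j to its facility i(j)),
    + M * (cost of the Steiner tree edges connecting the open facilities)."""
    facility_term = sum(opening_cost[i] for i in open_facilities)
    assignment_term = sum(demand[j] * dist[j][assign[j]] for j in demand)
    tree_term = M * sum(edge_cost[e] for e in tree_edges)
    return facility_term + assignment_term + tree_term

demand = {"j1": 2.0, "j2": 1.0}
opening_cost = {"a": 3.0, "b": 1.0}
dist = {"j1": {"a": 1.0, "b": 4.0}, "j2": {"a": 5.0, "b": 2.0}}
cost = cfl_cost({"a", "b"}, {"j1": "a", "j2": "b"}, demand, opening_cost,
                dist, [("a", "b")], {("a", "b"): 2.0}, M=3.0)
print(cost)  # (3 + 1) + (2*1 + 1*2) + 3*2 = 14.0
```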
One for the price of two: A unified approach for approximating covering problems
1998
Abstract

Cited by 57 (14 self)
We present a simple and unified approach for developing and analyzing approximation algorithms for covering problems. We illustrate this on approximation algorithms for the following problems: Vertex Cover, Set Cover, Feedback Vertex Set, Generalized Steiner Forest, and related problems. The main idea can be phrased as follows: iteratively, pay two dollars (at most) to reduce the total optimum by one dollar (at least), so the rate of payment is no more than twice the rate of the optimum reduction. This implies a total payment (i.e., approximation cost) of at most twice the optimum cost. Our main contribution is based on a formal definition for covering problems, which includes all the above fundamental problems and others. We further extend the Local-Ratio theorem of Bafna, Berman, and Fujito. This extension eventually yields a short generic r-approximation algorithm which can generate most known approximation algorithms for most covering problems. Another extension of the Local-Ratio theorem to randomized algorithms gives a simple proof of Pitt's randomized approximation for Vertex Cover. Using this approach, we develop a modified greedy algorithm which, for Vertex Cover, gives an expected performance ratio of at most 2.
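For Vertex Cover, the "pay two dollars to gain one" idea is already visible in the simplest local-ratio rule: repeatedly pick an uncovered edge and take both of its endpoints. This is the classical textbook 2-approximation, sketched here as a concrete instance of the pattern (it is not the paper's generic r-approximation algorithm):

```python
def vertex_cover_2approx(edges):
    """Pick any uncovered edge and add BOTH endpoints: we pay 2 vertices,
    while any optimal cover must pay at least 1 vertex for that same edge,
    so the cover returned is at most twice the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# A 5-cycle: the optimum cover has 3 vertices; this rule returns at most 6.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(sorted(vertex_cover_2approx(edges)))  # [0, 1, 2, 3]
```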