Results 1 - 10 of 17
Algorithmic Graph Minor Theory: Decomposition, Approximation, and Coloring
 In 46th Annual IEEE Symposium on Foundations of Computer Science
, 2005
Abstract

Cited by 47 (12 self)
At the core of the seminal Graph Minor Theory of Robertson and Seymour is a powerful structural theorem capturing the structure of graphs excluding a fixed minor. This result is used throughout graph theory and graph algorithms, but is existential. We develop a polynomial-time algorithm using topological graph theory to decompose a graph into the structure guaranteed by the theorem: a clique-sum of pieces almost-embeddable into bounded-genus surfaces. This result has many applications. In particular, we show applications to developing many approximation algorithms, including a 2-approximation to graph coloring, constant-factor approximations to treewidth and the largest grid minor, a combinatorial polylogarithmic approximation to half-integral multicommodity flow, subexponential fixed-parameter algorithms, and PTASs for many minimization and maximization problems, on graphs excluding a fixed minor.
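The clique-sum operation mentioned in the abstract (gluing graph pieces along a shared clique, possibly deleting some clique edges afterwards) can be sketched concretely. This is a minimal illustration only; the graph representation and all names are our own, not from the paper:

```python
# Illustrative sketch of a clique-sum: glue two graphs along a shared
# clique by identifying its vertices; some clique edges may optionally be
# deleted afterwards. Representation (vertex set + set of 2-element
# frozensets) is an assumption of this sketch.

def clique_sum(g1, g2, clique, drop_edges=frozenset()):
    """Return the clique-sum of g1 and g2 over a shared clique.

    g1, g2: (vertices, edges) pairs; each edge is a frozenset of two vertices.
    clique: vertices shared by both graphs, forming a clique in each.
    drop_edges: clique edges optionally removed after gluing.
    """
    v1, e1 = g1
    v2, e2 = g2
    assert clique <= v1 and clique <= v2, "clique must lie in both graphs"
    vertices = v1 | v2
    edges = (e1 | e2) - drop_edges
    return vertices, edges

# Two triangles sharing the edge {a, b} (a 2-clique-sum):
t1 = ({"a", "b", "c"}, {frozenset(p) for p in [("a", "b"), ("b", "c"), ("a", "c")]})
t2 = ({"a", "b", "d"}, {frozenset(p) for p in [("a", "b"), ("b", "d"), ("a", "d")]})
v, e = clique_sum(t1, t2, {"a", "b"})
# 4 vertices; 5 edges, since the shared edge {a, b} is counted once
```

The decomposition theorem expresses a minor-free graph as repeated clique-sums of such pieces; this sketch only shows the single gluing step.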
Embeddings of negativetype metrics and an improved approximation to generalized sparsest cut
 IN SODA ‘05: PROCEEDINGS OF THE SIXTEENTH ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS
, 2005
Abstract

Cited by 44 (0 self)
In this paper, we study metrics of negative type, which are metrics (V, d) such that √d is a Euclidean metric; these metrics are thus also known as “ℓ2-squared” metrics. We show how to embed n-point negative-type metrics into Euclidean space ℓ2 with distortion D = O(log^{3/4} n). This embedding result, in turn, implies an O(log^{3/4} k)-approximation algorithm for the Sparsest Cut problem with non-uniform demands. Another corollary we obtain is that n-point subsets of ℓ1 embed into ℓ2 with distortion O(log^{3/4} n).
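The negative-type condition can be checked numerically: d is of negative type iff Σᵢⱼ bᵢbⱼ d(i, j) ≤ 0 for every vector b with Σ bᵢ = 0. A minimal sketch, using squared Euclidean distances as the standard example of such a metric (the point set and seed are illustrative):

```python
# Checking the negative-type inequality on a toy metric. Squared Euclidean
# distance matrices are negative-type: with sum(b) = 0, the quadratic form
# collapses to -2 * |sum_i b_i p_i|^2 <= 0.
import random

def neg_type_form(D, b):
    """Quadratic form sum_{i,j} b_i * b_j * D[i][j]."""
    n = len(D)
    return sum(b[i] * b[j] * D[i][j] for i in range(n) for j in range(n))

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(6)]
# Squared Euclidean distance matrix -> a negative-type metric.
D = [[sum((p[k] - q[k]) ** 2 for k in range(2)) for q in pts] for p in pts]

for _ in range(100):
    b = [random.gauss(0, 1) for _ in range(6)]
    mean = sum(b) / 6
    b = [x - mean for x in b]            # enforce sum(b) = 0
    assert neg_type_form(D, b) <= 1e-9   # holds for negative-type metrics
```

This is only a sanity check on one example metric, not the embedding construction of the paper.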
A Lower Bound on the Distortion of Embedding Planar Metrics into Euclidean Space (Extended Abstract)
, 2002
Abstract

Cited by 35 (2 self)
We exhibit a simple infinite family of series-parallel graphs which cannot be metrically embedded into Euclidean space with distortion smaller than Ω(√log n). This matches Rao's [15] general upper bound for metric embedding of planar graphs into Euclidean space, thus resolving the question of how well planar metrics embed into Euclidean space.
On the impossibility of dimension reduction in ℓ1
 In Proceedings of the 44th Annual IEEE Conference on Foundations of Computer Science
, 2003
Abstract

Cited by 28 (1 self)
The Johnson-Lindenstrauss Lemma shows that any n points in Euclidean space (with distances measured by the ℓ2 norm) may be mapped down to O((log n)/ε²) dimensions such that no pairwise distance is distorted by more than a (1 + ε) factor. Determining whether such dimension reduction is possible in ℓ1 has been an intriguing open question. We show strong lower bounds for general dimension reduction in ℓ1. We give an explicit family of n points in ℓ1 such that any embedding with distortion δ requires n^{Ω(1/δ²)} dimensions. This proves that there is no analog of the Johnson-Lindenstrauss Lemma for ℓ1; in fact, embedding with any constant distortion requires n^{Ω(1)} dimensions. Further, embedding the points into ℓ1 with 1 + ε distortion requires n^{1…}
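The ℓ2 side of the story, the Johnson-Lindenstrauss map, can be demonstrated with a scaled random Gaussian projection. A minimal sketch; all parameters (n, the ambient dimension, the target dimension k, the seed) are illustrative choices, not taken from the paper:

```python
# Johnson-Lindenstrauss-style sketch in l2: project points through a
# random Gaussian matrix scaled by 1/sqrt(k), then measure how pairwise
# distances change. Parameter choices here are illustrative only.
import math
import random

random.seed(1)
n, dim, k = 12, 40, 24
points = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
G = [[random.gauss(0, 1) / math.sqrt(k) for _ in range(dim)] for _ in range(k)]

def project(p):
    """Map a dim-dimensional point to k dimensions via the matrix G."""
    return [sum(row[j] * p[j] for j in range(dim)) for row in G]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

images = [project(p) for p in points]
ratios = [dist(images[i], images[j]) / dist(points[i], points[j])
          for i in range(n) for j in range(i + 1, n)]
distortion = max(ratios) / min(ratios)
assert distortion >= 1.0  # distortion is at least 1 by definition
```

With these parameters the observed distortion is typically a small constant; the abstract's point is that no such ℓ1 analog exists.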
Metric embeddings with relaxed guarantees
 IN PROCEEDINGS OF THE 46TH IEEE SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE
, 2005
Abstract

Cited by 23 (3 self)
We consider the problem of embedding finite metrics with slack: we seek to produce embeddings with small dimension and distortion while allowing a (small) constant fraction of all distances to be arbitrarily distorted. This definition is motivated by recent research in the networking community, which achieved striking empirical success at embedding Internet latencies with low distortion into low-dimensional Euclidean space, provided that some small slack is allowed. Answering an open question of Kleinberg, Slivkins, and Wexler [29], we show that provable guarantees of this type can in fact be achieved in general: any finite metric can be embedded, with constant slack and constant distortion, into constant-dimensional Euclidean space. We then show that there exist stronger embeddings into ℓ1 which exhibit …
LOCAL VERSUS GLOBAL PROPERTIES OF METRIC SPACES
, 2012
Abstract

Cited by 16 (0 self)
Motivated by applications in combinatorial optimization, we study the extent to which the global properties of a metric space, and especially its embeddability into ℓ1 with low distortion, are determined by the properties of its small subspaces. We establish both upper and lower bounds on the distortion of embedding locally constrained metrics into various target spaces. Other aspects of locally constrained metrics are studied as well; in particular, how far such metrics are from general metrics.
Vertex sparsifiers: New results from old techniques
 IN 13TH INTERNATIONAL WORKSHOP ON APPROXIMATION, RANDOMIZATION, AND COMBINATORIAL OPTIMIZATION, VOLUME 6302 OF LECTURE NOTES IN COMPUTER SCIENCE
, 2010
Abstract

Cited by 10 (4 self)
Given a capacitated graph G = (V, E) and a set of terminals K ⊆ V, how should we produce a graph H only on the terminals K so that every (multicommodity) flow between the terminals in G could be supported in H with low congestion, and vice versa? (Such a graph H is called a flow-sparsifier for G.) What if we want H to be a “simple” graph? What if we allow H to be a convex combination of simple graphs? Improving on results of Moitra [FOCS 2009] and Leighton and Moitra [STOC 2010], we give efficient algorithms for constructing: (a) a flow-sparsifier H that maintains congestion up to a factor of O(log k / log log k), where k = |K|; (b) a convex combination of trees over the terminals K that maintains congestion up to a factor of O(log k); (c) for a planar graph G, a convex combination of planar graphs that maintains congestion up to a constant factor. This requires us to give a new algorithm for the 0-extension problem, the first one in which the preimages of each terminal are connected in G. Moreover, this result extends to minor-closed families of graphs. Our bounds immediately imply improved approximation guarantees for several terminal-based cut and ordering problems.
Approximation Algorithms for Network Design: A Survey
Abstract

Cited by 3 (0 self)
In a typical instance of a network design problem, we are given a directed or undirected graph G = (V, E) and nonnegative edge costs c_e for all e ∈ E, and our goal is to find a minimum-cost subgraph H of G that satisfies some design criteria. For example, we may wish to find a minimum-cost set of edges that induces a connected graph (this is the minimum-cost spanning tree problem), or we might want to find a minimum-cost set of arcs in a directed graph such that every vertex can reach every other vertex (this is the minimum-cost strongly connected subgraph problem). This abstract model for network design problems has a large number of practical applications; the design of telecommunication and traffic networks and VLSI chip design are just two examples. Many practically relevant instances of network design problems are NP-hard, and thus likely intractable. This survey focuses on approximation algorithms as one possible way of circumventing this impasse. Approximation algorithms are efficient (i.e., they run in polynomial time), and they compute solutions to a given instance of an optimization problem whose objective values are close to those of the respective optimum solutions. More concretely, most of the problems discussed in this survey are minimization problems. We then say that an algorithm is an α-approximation for a given problem if the ratio of the cost of an approximate solution computed by the algorithm to that of an optimum solution is at most α over all instances. In the …
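The abstract's first example, minimum-cost spanning tree, is one of the few network design problems solvable exactly in polynomial time. A standard Kruskal implementation as a sketch (ours, not taken from the survey):

```python
# Kruskal's algorithm: scan edges in increasing cost order and keep each
# edge that joins two different components, tracked with union-find.

def kruskal(n, edges):
    """edges: list of (cost, u, v) with vertices 0..n-1; returns (cost, tree)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, tree = 0, []
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:              # adding this edge creates no cycle
            parent[ru] = rv
            total += cost
            tree.append((u, v))
    return total, tree

cost, tree = kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)])
# cost == 7 via edges 0-1, 1-2, 2-3; the cost-3 edge 0-2 would close a cycle
```

The NP-hard problems the survey targets generalize this tractable base case, which is why α-approximation guarantees become the benchmark.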
Dimension reduction in L1: a negative result
Abstract
The Johnson-Lindenstrauss lemma shows that only d = O((1/ε²) log n) dimensions are needed to embed any set of n points in L2 into ℓ_2^d with distortion at most (1 + ε). We will show that such dimension reduction is not possible in L1. In particular, we will give a set of n points in L1 that cannot be D-embedded into ℓ_1^d unless d ≥ n^{Ω(1/D²)}. This result was originally shown by Brinkman and Charikar [1], providing a negative answer to whether a Johnson-Lindenstrauss analog exists for L1, a previously open question (see e.g. [5]). Our lecture will follow a different proof by Lee and Naor [6]. The high-level overview of both proofs is simply to show that a particular point set cannot be embedded without the stated number of dimensions. The point set is the recursive diamond graph, which can be defined on n vertices, for n that is any power of 2. Values of n that are not powers of 2 are handled by noting that there exists a power of 2 such that the associated diamond graph is O(1)-equivalent to a point set of size n.
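The recursive diamond graph can be built explicitly under one common construction: D_0 is a single edge, and D_{k+1} replaces every edge {u, v} by a 4-cycle u - a - v - b - u through two fresh vertices. The naming scheme and counting below are our own bookkeeping for this construction, not taken from the lecture notes:

```python
# Build the level-k recursive diamond graph as a list of undirected edges.

def diamond(k):
    edges = [frozenset(("s", "t"))]   # D_0: a single edge
    fresh = 0
    for _ in range(k):
        nxt = []
        for e in edges:
            u, v = tuple(e)
            a, b = f"x{fresh}", f"x{fresh + 1}"  # two fresh vertices per edge
            fresh += 2
            nxt += [frozenset((u, a)), frozenset((a, v)),
                    frozenset((u, b)), frozenset((b, v))]
        edges = nxt
    return edges

# Each level multiplies the edge count by 4, and each replaced edge adds
# two vertices, giving |E(D_k)| = 4^k and |V(D_k)| = 2 + 2*(4^k - 1)/3.
for k in range(4):
    edges = diamond(k)
    vertices = set().union(*edges)
    assert len(edges) == 4 ** k
    assert len(vertices) == 2 + 2 * (4 ** k - 1) // 3
```

The shortest-path metric of this graph is the point set whose ℓ1 dimension requirement the proofs lower-bound.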