Results 1-10 of 12
Subexponential algorithms for Unique Games and related problems
In 51st IEEE FOCS, 2010
Abstract

Cited by 82 (7 self)
We give subexponential-time approximation algorithms for the unique games and the small set expansion problems. Specifically, for some absolute constant c, we give: 1. An exp(kn^ε)-time algorithm that, given as input a k-alphabet unique game on n variables that has an assignment satisfying a 1 − ε^c fraction of its constraints, outputs an assignment satisfying a 1 − ε fraction of the constraints. 2. An exp(n^ε/δ)-time algorithm that, given as input an n-vertex regular graph that has a set S of δn vertices with edge expansion at most ε^c, outputs a set S′ of at most δn vertices with edge expansion at most ε. We also obtain a subexponential algorithm with improved approximation for the MultiCut problem, as well as subexponential algorithms with improved approximations to MaxCut, SparsestCut and Vertex Cover on some interesting subclasses of instances. Khot's Unique Games Conjecture (UGC) states that it is NP-hard to achieve approximation guarantees such as ours for unique games. While our results stop short of refuting the UGC, they do suggest that Unique Games is significantly easier than NP-hard problems such as 3-SAT, 3-LIN, Label Cover and more, which are believed not to have a subexponential algorithm achieving a nontrivial approximation ratio. The main component in our algorithms is a new result on graph decomposition that may have other applications. Namely, we show that for every δ > 0 and every regular n-vertex graph G, by changing at most a δ fraction of G's edges one can break G into disjoint parts so that the induced graph on each part has at most n^ε eigenvalues larger than 1 − η (where ε and η depend polynomially on δ). Our results are based on combining this decomposition with previous algorithms for unique games on graphs with few large eigenvalues (Kolla and Tulsiani 2007, Kolla 2010).
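The decomposition at the heart of this abstract is stated in terms of how many eigenvalues of each part exceed a threshold 1 − η. As a minimal illustration (our own, not from the paper), counting the large eigenvalues of a graph's normalized adjacency matrix is a direct numpy computation:

```python
import numpy as np

def count_large_eigenvalues(adj, eta):
    """Count eigenvalues of the normalized adjacency D^{-1/2} A D^{-1/2}
    that exceed 1 - eta. (Illustrative helper, not the paper's algorithm.)"""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    norm_adj = d_inv_sqrt @ adj @ d_inv_sqrt
    eigs = np.linalg.eigvalsh(norm_adj)
    return int(np.sum(eigs > 1.0 - eta))

# A 4-cycle: the normalized adjacency eigenvalues are {1, 0, 0, -1}.
cycle = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
print(count_large_eigenvalues(cycle, 0.5))  # only the eigenvalue 1 exceeds 0.5
```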
Combinatorial Approximation Algorithms for MAXCUT using Random Walks
Abstract

Cited by 8 (0 self)
We give the first combinatorial approximation algorithm for MAXCUT that beats the trivial 0.5 factor by a constant. The main partitioning procedure is very intuitive, natural, and easily described. It essentially performs a number of random walks and aggregates the information to provide the partition. We can control the running time to get an approximation-factor versus running-time tradeoff. We show that for any constant b > 1.5, there is an Õ(n^b) algorithm that outputs a (0.5 + δ)-approximation for MAXCUT, where δ = δ(b) is some positive constant. One of the components of our algorithm is a weak local graph partitioning procedure that may be of independent interest. Given a starting vertex i and a conductance parameter φ, unless a random walk of length ℓ = O(log n) starting from i mixes rapidly (in terms of φ and ℓ), we can find a cut of conductance at most φ close to the vertex. The work done per vertex found in the cut is sublinear in n.
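The local partitioning step above rests on a standard phenomenon: a lazy random walk started inside a low-conductance set keeps most of its probability mass inside that set. A minimal sketch of that phenomenon (illustrative only; the paper's procedure and its guarantees are more involved):

```python
import numpy as np

def lazy_walk_distribution(adj, start, steps):
    """Distribution of a lazy random walk after `steps` steps."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    walk = 0.5 * (np.eye(n) + adj / deg[:, None])  # lazy transition matrix
    p = np.zeros(n)
    p[start] = 1.0
    for _ in range(steps):
        p = p @ walk
    return p

# Two triangles joined by a single edge: mass started in one triangle
# stays mostly there after a few steps, exposing the sparse cut.
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1.0
p = lazy_walk_distribution(adj, 0, 4)
print(round(p[:3].sum(), 3))  # most probability mass remains in {0, 1, 2}
```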
Sum-of-Squares Proofs and the Quest toward Optimal Algorithms
Abstract

Cited by 6 (0 self)
Abstract. In order to obtain the best-known guarantees, algorithms are traditionally tailored to the particular problem we want to solve. Two recent developments, the Unique Games Conjecture (UGC) and the Sum-of-Squares (SOS) method, surprisingly suggest that this tailoring is not necessary and that a single efficient algorithm could achieve best possible guarantees for a wide range of different problems. The Unique Games Conjecture (UGC) is a tantalizing conjecture in computational complexity which, if true, will shed light on the complexity of a great many problems. In particular, this conjecture predicts that a single concrete algorithm provides optimal guarantees among all efficient algorithms for a large class of computational problems. The Sum-of-Squares (SOS) method is a general approach for solving systems of polynomial constraints. This approach is studied in several scientific disciplines, including real algebraic geometry, proof complexity, control theory, and mathematical programming, and has found applications in fields as diverse as quantum information theory, formal verification, game theory and many others. We survey some connections that were recently uncovered between the Unique Games Conjecture and the Sum-of-Squares method. In particular, we discuss new tools to rigorously bound the running time of the SOS method for obtaining approximate solutions to hard optimization problems, and how these tools give the sum-of-squares method the potential to provide new guarantees for many problems of interest, and possibly even to refute the UGC.
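At its lowest degree, the SOS method reduces to a semidefinite feasibility question: a quadratic form is a sum of squares exactly when it has a positive semidefinite Gram matrix. A small worked example (the polynomial and matrix are ours, chosen purely for illustration):

```python
import numpy as np

# A degree-2 sum-of-squares certificate is a PSD Gram matrix Q with
# p(x) = x^T Q x. Here p(x, y) = x^2 + 2xy + 2y^2.
Q = np.array([[1.0, 1.0],
              [1.0, 2.0]])

eigs = np.linalg.eigvalsh(Q)
assert eigs.min() > 0            # Q is PSD, so p is a sum of squares
L = np.linalg.cholesky(Q)        # p(x) = ||L^T x||^2: an explicit SOS decomposition

# Sanity check at the sample point (x, y) = (3, -1): p(3, -1) = 9 - 6 + 2 = 5.
x = np.array([3.0, -1.0])
print(x @ Q @ x, np.sum((L.T @ x) ** 2))  # both print 5.0
```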
Towards an SDP-based Approach to Spectral Methods: A Nearly-Linear-Time Algorithm for Graph Partitioning and Decomposition
Abstract

Cited by 6 (2 self)
In this paper, we consider the following graph partitioning problem: The input is an undirected graph G = (V, E), a balance parameter b ∈ (0, 1/2] and a target conductance value γ ∈ (0, 1). The output is a cut which, if non-empty, is of conductance at most O(f), for some function f(G, γ), and which is either balanced or well correlated with all cuts of conductance at most γ. In a seminal paper, Spielman and Teng [16] gave an Õ(|E|/γ²)-time algorithm for f = √(γ log³|V|) and used it to decompose graphs into a collection of near-expanders [18]. We present a new spectral algorithm for this problem which runs in time Õ(|E|/γ) for f = √γ. Our result yields the first nearly-linear-time algorithm for the classic Balanced Separator problem that achieves the asymptotically optimal approximation guarantee for spectral methods. Our method has the advantage of being conceptually simple and relies on a primal-dual semidefinite-programming (SDP) approach. We first consider a natural SDP relaxation for the Balanced Separator problem. While it is easy to obtain from this SDP a certificate of the fact that the graph has no balanced cut of conductance less than γ, somewhat surprisingly, we can obtain a certificate for the stronger correlation condition. This is achieved via a novel separation oracle for our SDP and by appealing to Arora and Kale's [3] framework to bound the running time. Our result contains technical ingredients that may be of independent interest.
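The "spectral methods" baseline this abstract refers to is the classic eigenvector sweep: order vertices by the second eigenvector of the normalized Laplacian and take the best prefix cut. A self-contained sketch of that baseline (not the paper's primal-dual SDP algorithm):

```python
import numpy as np

def sweep_cut(adj):
    """Cheeger-style sweep over the second eigenvector of the normalized
    Laplacian; returns the lowest-conductance prefix cut and its conductance."""
    n = len(adj)
    deg = adj.sum(axis=1)
    d = np.diag(1.0 / np.sqrt(deg))
    lap = np.eye(n) - d @ adj @ d
    _, vecs = np.linalg.eigh(lap)                   # eigenvalues ascending
    order = np.argsort(vecs[:, 1] / np.sqrt(deg))   # embed via D^{-1/2}
    vol_total = deg.sum()
    best, best_phi = None, np.inf
    for k in range(1, n):
        mask = np.zeros(n, dtype=bool)
        mask[order[:k]] = True
        cut = adj[mask][:, ~mask].sum()
        vol = min(deg[mask].sum(), vol_total - deg[mask].sum())
        phi = cut / vol
        if phi < best_phi:
            best, best_phi = set(int(v) for v in order[:k]), phi
    return best, best_phi

# Two triangles joined by one edge: the sweep finds the bridge cut
# (one cut edge over volume 7, conductance 1/7).
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1.0
cut, phi = sweep_cut(adj)
print(sorted(cut), round(phi, 3))
```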
Fast Approximation Algorithms for Graph Partitioning Using Spectral and Semidefinite-Programming Techniques
, 2011
Abstract

Cited by 4 (1 self)
Graph-partitioning problems are a central topic of research in the study of approximation algorithms. They are of interest to both theoreticians, for their far-reaching connections to different areas of mathematics, and to practitioners, as algorithms for graph partitioning can be used as fundamental building blocks in many applications, such as image segmentation and clustering. While many theoretical approximation algorithms exist for graph partitioning, they often rely on multicommodity-flow computations that run in quadratic time in the worst case and are too time-consuming for the massive graphs that are prevalent in today's applications. In this dissertation, we study the design of approximation algorithms that yield strong approximation guarantees while running in subquadratic time and relying on computational procedures that are often fast in practice. The results that we describe encompass two different approaches to the construction of such fast algorithms. Our first result exploits the Cut-Matching game of Khandekar, Rao and Vazirani [41], an elegant framework for designing graph-partitioning algorithms that rely on single-commodity, rather than multicommodity, maximum flow. Within this framework, we give two novel algorithms that achieve an O(log n)-approximation for the problem of finding the cut of minimum
Parallelized Solution to Semidefinite Programmings in Quantum Complexity Theory
, 2010
Abstract

Cited by 3 (2 self)
In this paper we present an equilibrium-value-based framework for solving SDPs via the multiplicative weight update method, which is different from the one in Kale's thesis [Kal07]. One of the main advantages of the new framework is that we can guarantee convertibility from approximate to exact feasibility for a much more general class of SDPs than previous results. Another advantage is that the design of the oracle, which is necessary for applying the multiplicative weight update method, is much simplified in general cases. This leads to alternative and easier solutions to the SDPs used in the previous results QIP(2) ⊆ PSPACE [JUW09] and QMAM = PSPACE [JJUW09]. Furthermore, we provide a generic form of SDPs which can be solved in a similar way. By parallelizing every step in our solution, we are able to solve a class of SDPs in NC. Although our motivation is from quantum computing, our result also applies directly to any SDP which satisfies our conditions. In addition to the new framework for solving SDPs, we also provide a novel framework which improves the range of equilibrium-value problems that can be solved via the multiplicative weight update method. Before this work, we were only able to calculate the equilibrium value when one of the two convex sets is the set of density operators. Our work demonstrates that when one set is the set of density operators with further linear constraints, we are still able to approximate the equilibrium value to high precision via the multiplicative weight update method.
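For readers unfamiliar with the multiplicative weight update method this abstract builds on, here is a minimal sketch of the technique in its simplest setting, approximating the value of a zero-sum matrix game (the paper applies the method to far more general SDPs and equilibrium-value problems; the round count and step size below are our own illustrative choices):

```python
import numpy as np

def mwu_game_value(payoff, rounds=2000, eta=0.05):
    """Approximate the row player's max-min value of a zero-sum game
    using multiplicative weights against a best-responding opponent."""
    n = payoff.shape[0]
    w = np.ones(n)
    avg_p = np.zeros(n)
    for _ in range(rounds):
        p = w / w.sum()
        avg_p += p
        col = np.argmin(p @ payoff)         # "oracle" step: best response
        w *= np.exp(eta * payoff[:, col])   # reward rows that did well
    avg_p /= rounds
    # The time-averaged strategy is near-optimal by the MWU regret bound.
    return float(np.min(avg_p @ payoff))

# Matching pennies: the equilibrium value is 0.
pennies = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(mwu_game_value(pennies))  # close to the game value 0
```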
Subsampling Mathematical Relaxations and Average-case Complexity
, 2010
Abstract

Cited by 2 (1 self)
We initiate a study of when the value of mathematical relaxations such as linear and semidefinite programs for constraint satisfaction problems (CSPs) is approximately preserved when restricting the instance to a sub-instance induced by a small random subsample of the variables. Let C be a family of CSPs such as 3-SAT, MaxCut, etc., and let Π be a mathematical program that is a relaxation for C, in the sense that for every instance P ∈ C, Π(P) is a number in [0, 1] upper bounding the maximum fraction of satisfiable constraints of P. Loosely speaking, we say that subsampling holds for C and Π if for every sufficiently dense instance P ∈ C and every ε > 0, if we let P′ be the instance obtained by restricting P to a sufficiently large constant number of variables, then Π(P′) ∈ (1 ± ε)Π(P). We say that weak subsampling holds if the above guarantee is replaced with Π(P′) = 1 − Θ(γ) whenever Π(P) = 1 − γ, where Θ hides only absolute constants. We obtain both positive and negative results, showing that: 1. Subsampling holds for the BasicLP and BasicSDP programs. BasicSDP is a variant of the semidefinite program considered by Raghavendra (2008), who showed it gives an optimal approximation factor for every constraint-satisfaction problem under the unique games conjecture. BasicLP is the linear programming analog of BasicSDP.
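To make the subsampling phenomenon concrete, here is a toy experiment of our own (using exact MaxCut values computed by brute force as a stand-in for the relaxation value Π): on a dense instance, the value of a small random induced sub-instance is already close to the value of the full instance.

```python
import itertools
import random

def maxcut_fraction(n, edges):
    """Exact maximum fraction of edges cut, by brute force (small n only)."""
    if not edges:
        return 1.0
    best = 0
    for bits in range(1 << (n - 1)):   # WLOG fix the side of vertex n-1
        side = [(bits >> v) & 1 for v in range(n)]
        cut = sum(side[u] != side[v] for u, v in edges)
        best = max(best, cut)
    return best / len(edges)

# Dense instance: the complete graph on 12 vertices (MaxCut value 36/66).
n = 12
edges = list(itertools.combinations(range(n), 2))
full = maxcut_fraction(n, edges)

# Restrict to a random subsample of 6 vertices and re-index.
random.seed(0)
sample = random.sample(range(n), 6)
idx = {v: i for i, v in enumerate(sample)}
sub_edges = [(idx[u], idx[v]) for u, v in edges if u in idx and v in idx]
sub = maxcut_fraction(6, sub_edges)

print(round(full, 3), round(sub, 3))  # the subsample's value tracks the full value
```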
Analyzing Massive Graphs in the Semi-streaming Model
Abstract

Cited by 1 (0 self)
Massive graphs arise in many scenarios, for example, traffic data analysis in large networks, large-scale scientific experiments, and clustering of large data sets. The semi-streaming model was proposed for processing massive graphs. In the semi-streaming model, we have a random-access memory which is near-linear in the number of vertices. The input graph (or equivalently, the edges in the graph) is presented as a sequential list of edges (insertion-only model) or of edge insertions and deletions (dynamic model). The list is read-only, but we may make multiple passes over it. There have been a few results in the insertion-only model, such as computing distance spanners and approximating the maximum matching. In this thesis, we present algorithms and techniques for (i) solving more complex problems in the semi-streaming model (for example, problems in the dynamic model) and (ii) obtaining better solutions for problems that have already been studied (for example, the maximum matching problem). In the course of both of these, we develop new techniques with broad applications and explore the rich tradeoffs between the complexity of models (insertion-only streams vs. dynamic streams), the number of passes, space, accuracy, and running time. 1. We initiate the study of dynamic graph streams. We start with basic problems such as the connectivity problem and computing the minimum spanning tree. These problems are
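As a point of contrast with the dynamic model studied in the thesis, connectivity in the insertion-only model has a straightforward one-pass solution in O(n) space via union-find (an illustrative sketch of ours, not the thesis's algorithm; deletions are what force the sketch-based techniques):

```python
class StreamingConnectivity:
    """Connected components over an insertion-only edge stream using
    union-find: O(n) memory, one pass over the edges."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.components = n

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def add_edge(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru != rv:
            self.parent[ru] = rv
            self.components -= 1

cc = StreamingConnectivity(5)
for u, v in [(0, 1), (1, 2), (3, 4)]:   # the edge "stream"
    cc.add_edge(u, v)
print(cc.components)  # 2 components: {0, 1, 2} and {3, 4}
```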
Correlation Clustering in Data Streams
Abstract
In this paper, we address the problem of correlation clustering in the dynamic data stream model. The stream consists of updates to the edge weights of a graph on n nodes, and the goal is to find a node-partition such that the endpoints of negative-weight edges are typically in different clusters whereas the endpoints of positive-weight edges are typically in the same cluster. We present polynomial-time, O(n · polylog n)-space approximation algorithms for natural problems that arise. We first develop data structures based on linear sketches that allow the "quality" of a given node-partition to be measured. We then combine these data structures with convex programming and sampling techniques to solve the relevant approximation problems. However, the standard LP and SDP formulations are not obviously solvable in O(n · polylog n) space. Our work presents space-efficient algorithms for the required convex programming, as well as approaches to reduce the adaptivity of the sampling. Note that the improved space and running-time bounds achieved by streaming algorithms are also useful for offline settings such as MapReduce models.
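As an illustration of the "quality" measure being sketched, here is a naive one-pass computation of the correlation-clustering disagreement cost of a fixed partition over a stream of edge-weight updates (our own toy version: it stores every edge explicitly, which is exactly the cost the paper's linear sketches are designed to avoid):

```python
from collections import defaultdict

def disagreement_cost(updates, cluster):
    """One pass over a stream of (u, v, delta) edge-weight updates; returns
    the correlation-clustering disagreement cost of a FIXED node partition.
    Toy version: O(#edges) space, unlike the paper's sketch-based approach."""
    weight = defaultdict(float)
    for u, v, delta in updates:
        weight[(min(u, v), max(u, v))] += delta
    cost = 0.0
    for (u, v), w in weight.items():
        same = cluster[u] == cluster[v]
        if w > 0 and not same:      # positive edge split across clusters
            cost += w
        elif w < 0 and same:        # negative edge inside a cluster
            cost += -w
    return cost

stream = [(0, 1, 2.0), (1, 2, -1.0), (0, 2, 3.0), (0, 1, -0.5)]
clusters = {0: "a", 1: "a", 2: "b"}
print(disagreement_cost(stream, clusters))  # 3.0: the positive {0,2} edge is cut
```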