Results 1–10 of 85
Optimal inapproximability results for MAX-CUT and other 2-variable CSPs?
, 2005
Cited by 178 (28 self)
In this paper we show a reduction from the Unique Games problem to the problem of approximating MAX-CUT to within a factor of αGW + ε, for all ε > 0; here αGW ≈ .878567 denotes the approximation ratio achieved by the Goemans-Williamson algorithm [25]. This implies that if the Unique Games ...
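The constant quoted in this abstract is the Goemans-Williamson approximation ratio, which arises as the minimum over θ ∈ (0, π) of (2/π)·θ/(1 − cos θ). A minimal numeric sketch to recover it (the grid search is illustrative, not part of the paper):

```python
import math

# alpha_GW = min over theta in (0, pi) of (2/pi) * theta / (1 - cos(theta)).
# A fine grid search is enough to recover the ~0.878567 quoted above.
def gw_ratio(theta: float) -> float:
    return (2.0 / math.pi) * theta / (1.0 - math.cos(theta))

alpha_gw = min(gw_ratio(i * math.pi / 100000) for i in range(1, 100000))
print(round(alpha_gw, 6))  # ~ 0.878567
```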
Vertex Cover Might be Hard to Approximate to within 2−ε
 In Proceedings of the 18th Annual IEEE Conference on Computational Complexity
, 2003
Cited by 117 (11 self)
We show that vertex cover is hard to approximate within any constant factor better than 2, where the hardness is based on a conjecture regarding the power of unique 2-prover-1-round games presented in [15]. We actually show a stronger result: based on the same conjecture, vertex cover on k-uniform hypergraphs is hard to approximate within any constant factor better than k.
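For context, the factor 2 that this hardness result matches is achieved by the classic maximal-matching approximation. A minimal sketch, assuming an edge-list input format of my own choosing:

```python
# Classic 2-approximation for vertex cover: greedily take both endpoints
# of any still-uncovered edge. The edges chosen this way form a matching,
# and any cover must contain at least one endpoint of each matched edge,
# so the returned cover has size at most 2 * OPT.
def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

On the path 0-1-2-3 it picks all four vertices while the optimum is {1, 2}, exactly the factor-2 worst case.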
Noise stability of functions with low influences: invariance and optimality
Cited by 85 (10 self)
In this paper we study functions with low influences on product probability spaces. The analysis of Boolean functions f: {−1, 1}^n → {−1, 1} with low influences has become a central problem in discrete Fourier analysis. It is motivated by fundamental questions arising from the construction of probabilistically checkable proofs in theoretical computer science and from problems in the theory of social choice in economics. We prove an invariance principle for multilinear polynomials with low influences and bounded degree; it shows that under mild conditions the distribution of such polynomials is essentially invariant for all product spaces. Ours is one of the very few known nonlinear invariance principles. It has the advantage that its proof is simple and that the error bounds are explicit. We also show that the assumption of bounded degree can be eliminated if the polynomials are slightly “smoothed”; this extension is essential for our applications to “noise stability”-type problems. In particular, as applications of the invariance principle we prove two conjectures: the “Majority Is Stablest” conjecture [27] from theoretical computer science, which was the original motivation for this work, and the “It Ain’t Over Till It’s Over” conjecture [25] from social choice theory. The “Majority Is Stablest” conjecture and its generalizations, proven here, in conjunction with “Unique Games” and its variants imply a number of (optimal) inapproximability results for graph problems.
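To make the noise-stability quantity concrete: Stab_ρ(f) = E[f(x)f(y)], where y is a ρ-correlated copy of a uniform x ∈ {−1, 1}^n. A brute-force evaluation for tiny n, with function names of my own (the conjecture compares low-influence functions, which excludes the dictator shown here for contrast):

```python
from itertools import product

# Exact noise stability: each bit of y equals the corresponding bit of x
# with probability (1 + rho)/2 and is flipped with probability (1 - rho)/2.
def noise_stability(f, n, rho):
    p_keep, p_flip = (1 + rho) / 2, (1 - rho) / 2
    total = 0.0
    for x in product([-1, 1], repeat=n):
        for y in product([-1, 1], repeat=n):
            w = 1.0
            for xi, yi in zip(x, y):
                w *= p_keep if xi == yi else p_flip
            total += f(x) * f(y) * w
    return total / 2 ** n  # average over the uniform choice of x

maj3 = lambda x: 1 if sum(x) > 0 else -1   # majority of 3 votes
dictator = lambda x: x[0]                  # maximal-influence contrast
```

The dictator has stability exactly ρ; majority-of-3 has Fourier expansion (x1 + x2 + x3)/2 − x1·x2·x3/2, giving Stab_ρ = 0.75ρ + 0.25ρ³.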
On problems without polynomial kernels
 Lect. Notes Comput. Sci
, 2007
Cited by 68 (9 self)
Kernelization is a strong and widely-applied technique in parameterized complexity. In a nutshell, a kernelization algorithm, or simply a kernel, is a polynomial-time transformation that transforms any given parameterized instance to an equivalent instance of the same problem, with size and parameter bounded by a function of the parameter in the input. A kernel is polynomial if the size and parameter of the output are polynomially bounded by the parameter of the input. In this paper we develop a framework which allows showing that a wide range of FPT problems do not have polynomial kernels. Our evidence relies on hypotheses made in the classical world (i.e., non-parameterized complexity), and revolves around a new type of algorithm for classical decision problems, called a distillation algorithm, which might be of independent interest. Using the notion of distillation algorithms, we develop a generic lower-bound engine which allows us to show that a variety of FPT problems, fulfilling certain criteria, cannot have polynomial kernels unless the polynomial hierarchy collapses. These problems include k-Path, k-Cycle, k-Exact Cycle, k-Short Cheap Tour, k-Graph Minor Order Test, k-Cutwidth, k-Search Number, k-Pathwidth, k-Treewidth, k-Branchwidth, and several optimization problems parameterized by treewidth or cliquewidth.
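For contrast with the problems shown to lack polynomial kernels, a textbook polynomial kernel does exist for k-Vertex Cover (Buss's rule: any vertex of degree greater than k must be in the cover). A hedged sketch with illustrative names, not taken from the paper:

```python
# Buss's kernelization for k-Vertex Cover. Returns a reduced (edges, k)
# pair, or None when the instance provably has no cover of size <= k.
def buss_kernel(edges, k):
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:
                # v must be in every cover of size <= k: take it.
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
        if k < 0:
            return None
    # Each of the <= k cover vertices now covers <= k edges, so any
    # yes-instance has at most k^2 edges: a polynomial kernel.
    if len(edges) > k * k:
        return None
    return edges, k
```

A star K(1,5) with k = 1 reduces to the empty kernel, while a triangle with k = 1 is correctly rejected.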
How to rank with few errors
, 2007
Cited by 57 (2 self)
We present a polynomial-time approximation scheme (PTAS) for the minimum feedback arc set problem on tournaments. A simple weighted generalization gives a PTAS for Kemeny-Young rank aggregation.
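Kemeny-Young aggregation, the problem this PTAS targets, asks for a ranking minimizing the total pairwise disagreement (Kendall tau distance) with the votes. A brute-force sketch for tiny inputs only, with names of my own choosing; the paper's algorithm avoids this exponential enumeration:

```python
from itertools import combinations, permutations

# Number of candidate pairs ordered differently by the two rankings.
def kendall_tau(r1, r2):
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] < pos1[b]) != (pos2[a] < pos2[b]))

# Kemeny ranking: the permutation minimizing total distance to all votes.
def kemeny(votes):
    candidates = votes[0]
    return min(permutations(candidates),
               key=lambda r: sum(kendall_tau(r, v) for v in votes))
```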
Proving Integrality Gaps Without Knowing the Linear Program
 Theory of Computing
, 2002
Cited by 57 (2 self)
Proving integrality gaps for linear relaxations of NP optimization problems is a difficult task and usually undertaken on a case-by-case basis. We initiate a more systematic approach. We prove an integrality gap of 2 − o(1) for three families of linear relaxations for vertex cover, and our methods seem relevant to other problems as well.
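The simplest instance behind such "2 − o(1)" statements: in the standard vertex-cover LP (minimize Σ x_v subject to x_u + x_v ≥ 1 on every edge, 0 ≤ x_v ≤ 1), the all-1/2 point is feasible on the complete graph K_n with value n/2, while every integral cover needs n − 1 vertices. A small sanity check (the function name is mine):

```python
# Integrality gap of the basic vertex-cover LP on the complete graph K_n.
def kn_gap(n):
    x = [0.5] * n
    # The all-1/2 point satisfies x_u + x_v >= 1 on every edge of K_n.
    assert all(x[u] + x[v] >= 1 for u in range(n) for v in range(u + 1, n))
    lp_value = sum(x)        # n / 2
    integral_opt = n - 1     # leaving out any 2 vertices misses their edge
    return integral_opt / lp_value

print(kn_gap(100))  # (n - 1) / (n / 2), tending to 2 as n grows
```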
A new multilayered PCP and the hardness of hypergraph vertex cover
 In Proceedings of the 35th Annual ACM Symposium on Theory of Computing
, 2003
Cited by 51 (10 self)
Given a k-uniform hypergraph, the Ek-Vertex-Cover problem is to find the smallest subset of vertices that intersects every hyperedge. We present a new multilayered PCP construction that extends the Raz verifier. This enables us to prove that Ek-Vertex-Cover is NP-hard to approximate within a factor of (k − 1 − ε) for arbitrary constants ε > 0 and k ≥ 3. The result is nearly tight as this problem can be easily approximated within factor k. Our construction makes use of the biased Long Code and is analyzed using combinatorial properties of s-wise t-intersecting families of subsets. We also give a different proof that shows an inapproximability factor of ⌊k/2⌋ − ε. In addition to being simpler, this proof also works for superconstant values of k up to (log N)^(1/c), where ...
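The trivial factor-k approximation the abstract alludes to: whenever a hyperedge is still uncovered, add all k of its vertices, since any cover must contain at least one of them. A minimal sketch, with an input format assumed for illustration:

```python
# Greedy factor-k approximation for vertex cover on k-uniform hypergraphs:
# the hyperedges handled in the if-branch are pairwise disjoint, and any
# cover hits each of them, so |cover| <= k * OPT.
def hypergraph_vc_kapprox(hyperedges):
    cover = set()
    for e in hyperedges:
        if not cover & set(e):
            cover |= set(e)
    return cover
```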
Testing Juntas
, 2002
Cited by 49 (9 self)
We show that a Boolean function over n Boolean variables can be tested for the property of depending on only k of them, using a number of queries that depends only on k and the approximation parameter ε. We present two tests, both non-adaptive, that require a number of queries that is polynomial in k and linear in 1/ε. The first test is stronger in that it has a one-sided error, while the second test has a more compact analysis. We also present an adaptive version and a two-sided error version of the first test that have a somewhat better query complexity than the other algorithms...
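To pin down the property being tested: a k-junta is a function depending on at most k of its n variables. The brute-force check below illustrates the definition only; the paper's point is that the property can be tested with a query count independent of n, unlike this exhaustive sketch (names are mine):

```python
from itertools import product

# A variable is relevant if flipping it changes f on some input.
def relevant_variables(f, n):
    rel = set()
    for x in product([0, 1], repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1
            if f(tuple(y)) != f(x):
                rel.add(i)
    return rel

def is_k_junta(f, n, k):
    return len(relevant_variables(f, n)) <= k
```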
Gaussian Bounds for Noise Correlation of Functions and Tight Analysis of Long Codes
 In IEEE Symposium on Foundations of Computer Science (FOCS)
, 2008
Cited by 35 (5 self)
In this paper we derive tight bounds on the expected value of products of low influence functions defined on correlated probability spaces. The proofs are based on extending Fourier theory to an arbitrary number of correlated probability spaces, on a generalization of an invariance principle recently obtained with O’Donnell and Oleszkiewicz for multilinear polynomials with low influences and bounded degree, and on properties of multidimensional Gaussian distributions. We present two applications of the new bounds to the theory of social choice. We show that Majority is asymptotically the most predictable function among all low influence functions given a random sample of the voters. Moreover, we derive an almost tight bound in the context of Condorcet aggregation and low influence voting schemes on a large number of candidates. In particular, we show that for every low influence aggregation function, the probability that Condorcet voting on k candidates will result in a unique candidate that is preferable to all others is k^(−1+o(1)). This matches the asymptotic behavior of the majority function, for which the probability is k^(−1−o(1)). A number of applications in hardness of approximation in theoretical computer science were ...