Results 1–10 of 67
Optimal inapproximability results for MAX-CUT and other 2-variable CSPs?
, 2005
Cited by 173 (24 self)
In this paper we show a reduction from the Unique Games problem to the problem of approximating MAX-CUT to within a factor of α_GW + ε, for all ε > 0; here α_GW ≈ .878567 denotes the approximation ratio achieved by the Goemans-Williamson algorithm [25]. This implies that if the Unique Games ...
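As a quick sanity check on the constant above: following Goemans-Williamson, α_GW is defined as the minimum over 0 < θ ≤ π of (2/π)·θ/(1 − cos θ). A minimal numeric sketch (grid minimization, not from the paper):

```python
import math

# alpha_GW = min over 0 < theta <= pi of (2/pi) * theta / (1 - cos(theta)).
# A coarse grid is enough to recover the constant to six decimal places.
def gw_ratio(theta):
    return (2 / math.pi) * theta / (1 - math.cos(theta))

alpha = min(gw_ratio(i * math.pi / 10**5) for i in range(1, 10**5 + 1))
print(round(alpha, 6))  # -> 0.878567
```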
Vertex Cover Might be Hard to Approximate to within 2−ε
 IN PROCEEDINGS OF THE 18TH ANNUAL IEEE CONFERENCE ON COMPUTATIONAL COMPLEXITY
, 2003
Cited by 119 (11 self)
We show that vertex cover is hard to approximate within any constant factor better than 2, where the hardness is based on a conjecture regarding the power of unique 2-prover-1-round games presented in [15]. We actually show a stronger result, namely, based on the same conjecture, vertex cover on k-uniform hypergraphs is hard to approximate within any constant factor better than k.
On problems without polynomial kernels
 Lect. Notes Comput. Sci
, 2007
Cited by 69 (8 self)
Abstract. Kernelization is a strong and widely-applied technique in parameterized complexity. In a nutshell, a kernelization algorithm, or simply a kernel, is a polynomial-time transformation that transforms any given parameterized instance to an equivalent instance of the same problem, with size and parameter bounded by a function of the parameter in the input. A kernel is polynomial if the size and parameter of the output are polynomially bounded by the parameter of the input. In this paper we develop a framework which allows showing that a wide range of FPT problems do not have polynomial kernels. Our evidence relies on hypotheses made in the classical world (i.e. non-parameterized complexity), and revolves around a new type of algorithm for classical decision problems, called a distillation algorithm, which might be of independent interest. Using the notion of distillation algorithms, we develop a generic lower-bound engine which allows us to show that a variety of FPT problems, fulfilling certain criteria, cannot have polynomial kernels unless the polynomial hierarchy collapses. These problems include k-Path, k-Cycle, k-Exact Cycle, k-Short Cheap Tour, k-Graph Minor Order Test, k-Cutwidth, k-Search Number, k-Pathwidth, k-Treewidth, k-Branchwidth, and several optimization problems parameterized by treewidth or clique-width.
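For contrast with the lower bounds above, a problem that does admit a polynomial kernel is Vertex Cover, via the classic Buss reduction (a vertex of degree > k must be in any size-≤k cover; if more than k² edges remain, the answer is no). A minimal illustrative sketch, not from the paper:

```python
def buss_kernel(edges, k):
    """Classic Buss kernelization for Vertex Cover (illustrative sketch).
    Rule: any vertex of degree > k must be in a cover of size <= k, so take
    it and decrement k.  If more than k*k edges survive, there is no cover
    of size <= k.  Returns (kernel_edges, k') or None for a NO answer."""
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:                      # v is forced into the cover
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None                        # no vertex cover of size <= k
    return edges, k

# Star K_{1,5} with k = 1: the center has degree 5 > 1, so the kernel
# collapses to the empty instance.
print(buss_kernel([(0, i) for i in range(1, 6)], 1))  # -> (set(), 0)
```

The kernel has at most k² edges, i.e. size polynomial in the parameter, which is exactly what the paper rules out for k-Path and its relatives.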
Proving Integrality Gaps Without Knowing the Linear Program
 Theory of Computing
, 2002
Cited by 56 (2 self)
Proving integrality gaps for linear relaxations of NP optimization problems is a difficult task and usually undertaken on a case-by-case basis. We initiate a more systematic approach. We prove an integrality gap of 2 − o(1) for three families of linear relaxations for vertex cover, and our methods seem relevant to other problems as well.
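For intuition on a 2 − o(1) gap: on the complete graph K_n, the standard LP relaxation of vertex cover accepts the all-1/2 assignment with value n/2, while any integral cover needs n − 1 vertices. A small sketch verifying this (function names are illustrative, not from the paper):

```python
from itertools import combinations

def is_feasible(x, edges):
    # Standard vertex cover LP: x_u + x_v >= 1 on every edge, 0 <= x <= 1.
    return (all(x[u] + x[v] >= 1 for u, v in edges)
            and all(0 <= xi <= 1 for xi in x))

n = 50
edges = list(combinations(range(n), 2))   # complete graph K_n
frac = [0.5] * n                          # fractional solution of value n/2
assert is_feasible(frac, edges)
# Any integral cover of K_n needs n - 1 vertices: two uncovered vertices
# would leave the edge between them uncovered.  So the gap on K_n is:
gap = (n - 1) / (n / 2)
print(gap)                                # -> 1.96, tending to 2 as n grows
```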
A new multilayered PCP and the hardness of hypergraph vertex cover
 In Proceedings of the 35th Annual ACM Symposium on Theory of Computing
, 2003
Cited by 53 (10 self)
Given a k-uniform hypergraph, the Ek-Vertex-Cover problem is to find the smallest subset of vertices that intersects every hyperedge. We present a new multilayered PCP construction that extends the Raz verifier. This enables us to prove that Ek-Vertex-Cover is NP-hard to approximate within a factor of (k − 1 − ε) for arbitrary constants ε > 0 and k ≥ 3. The result is nearly tight as this problem can be easily approximated within factor k. Our construction makes use of the biased Long Code and is analyzed using combinatorial properties of s-wise t-intersecting families of subsets. We also give a different proof that shows an inapproximability factor of ⌊k/2⌋ − ε. In addition to being simpler, this proof also works for superconstant values of k up to (log N)^(1/c), where ...
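The factor-k upper bound mentioned in the abstract has a one-line algorithm: whenever a hyperedge is uncovered, take all of its vertices. A hedged sketch (names are illustrative, not from the paper):

```python
def greedy_hypergraph_cover(hyperedges):
    """Factor-k approximation for vertex cover on k-uniform hypergraphs:
    while some hyperedge is uncovered, add ALL k of its vertices.  Each such
    step adds k vertices, but any cover must contain at least one of them,
    so the output is at most k times the optimum."""
    cover = set()
    for e in hyperedges:
        if not cover & set(e):
            cover |= set(e)
    return cover

# 3-uniform example: the two disjoint hyperedges force 6 vertices into the
# greedy cover, while the optimum {0, 3} has size 2 (ratio 3 = k).
cover = greedy_hypergraph_cover([(0, 1, 2), (3, 4, 5), (0, 3, 4)])
print(len(cover))  # -> 6
```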
How to rank with few errors
, 2007
Cited by 50 (2 self)
We present a polynomial time approximation scheme (PTAS) for the minimum feedback arc set problem on tournaments. A simple weighted generalization gives a PTAS for Kemeny-Young rank aggregation.
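For intuition on the objective: a ranking's cost is the number of tournament arcs it orders backwards, and the problem asks for the cheapest ranking. A brute-force (exponential) baseline on a tiny tournament, emphatically not the paper's PTAS:

```python
from itertools import permutations

def min_feedback_arcs(n, beats):
    """Exact minimum feedback arc set on an n-vertex tournament by brute
    force over all rankings.  beats[(u, v)] is True iff u beats v; a
    ranking pays one per arc u -> v with v ranked ahead of u."""
    def backward_arcs(order):
        pos = {v: i for i, v in enumerate(order)}
        return sum(1 for (u, v), w in beats.items() if w and pos[u] > pos[v])
    return min(backward_arcs(p) for p in permutations(range(n)))

# Cyclic tournament: 0 beats 1, 1 beats 2, 2 beats 0 -- every ranking
# leaves exactly one backward arc.
beats = {(0, 1): True, (1, 2): True, (2, 0): True}
print(min_feedback_arcs(3, beats))  # -> 1
```

Weighting each backward pair by how many input rankings it disagrees with gives the Kemeny-Young objective mentioned in the abstract.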
Semidefinite Programming and Integer Programming
Cited by 48 (7 self)
We survey how semidefinite programming can be used for finding good approximate solutions to hard combinatorial optimization problems.
Testing Juntas
, 2002
Cited by 46 (8 self)
We show that a Boolean function over n Boolean variables can be tested for the property of depending on only k of them, using a number of queries that depends only on k and the approximation parameter ε. We present two tests, both non-adaptive, that require a number of queries that is polynomial in k and linear in 1/ε. The first test is stronger in that it has one-sided error, while the second test has a more compact analysis. We also present an adaptive version and a two-sided error version of the first test, that have a somewhat better query complexity than the other algorithms...
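The property being tested, being a k-junta, can be stated exactly (if very inefficiently) as follows. This brute-force check enumerates all 2^n inputs and is emphatically not the paper's tester, whose query count is independent of n:

```python
import itertools

def depends_only_on(f, n, vars_subset):
    """Exhaustively decide whether Boolean f on n bits depends only on the
    variables in vars_subset: f is a junta on the set iff flipping any
    variable outside the set never changes its value."""
    for x in itertools.product([0, 1], repeat=n):
        for i in range(n):
            if i in vars_subset:
                continue
            y = list(x)
            y[i] ^= 1                      # flip a variable outside the set
            if f(tuple(y)) != f(x):
                return False               # f noticed: not a junta on the set
    return True

f = lambda x: x[0] ^ (x[1] & x[2])        # a 3-junta on variables {0, 1, 2}
print(depends_only_on(f, 5, {0, 1, 2}))   # -> True
print(depends_only_on(f, 5, {0, 1}))      # -> False
```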
Gaussian Bounds for Noise Correlation of Functions and Tight Analysis of Long Codes
 In IEEE Symposium on Foundations of Computer Science (FOCS
, 2008
Cited by 37 (5 self)
In this paper we derive tight bounds on the expected value of products of low influence functions defined on correlated probability spaces. The proofs are based on extending Fourier theory to an arbitrary number of correlated probability spaces, on a generalization of an invariance principle recently obtained with O’Donnell and Oleszkiewicz for multilinear polynomials with low influences and bounded degree, and on properties of multidimensional Gaussian distributions. We present two applications of the new bounds to the theory of social choice. We show that Majority is asymptotically the most predictable function among all low influence functions given a random sample of the voters. Moreover, we derive an almost tight bound in the context of Condorcet aggregation and low influence voting schemes on a large number of candidates. In particular, we show that for every low influence aggregation function, the probability that Condorcet voting on k candidates will result in a unique candidate that is preferable to all others is k^(−1+o(1)). This matches the asymptotic behavior of the majority function, for which the probability is k^(−1−o(1)). A number of applications in hardness of approximation in theoretical computer science were ...
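The Condorcet setup in the abstract can be simulated directly. The sketch below estimates, by Monte Carlo, the probability that pairwise majority over uniformly random rankings yields a unique candidate beating all others; for k = 3 this is known to approach roughly 0.91 as the number of voters grows. It illustrates the setting only, not the paper's low-influence analysis:

```python
import random
from itertools import permutations

def condorcet_winner_prob(k, n_voters, trials, seed=0):
    """Monte Carlo estimate of the probability that pairwise majority over
    uniformly random rankings of k candidates produces a unique candidate
    that beats every other candidate head-to-head."""
    rng = random.Random(seed)
    rankings = list(permutations(range(k)))
    wins = 0
    for _ in range(trials):
        votes = [rng.choice(rankings) for _ in range(n_voters)]
        pos = [{c: r.index(c) for c in range(k)} for r in votes]
        def beats(a, b):                   # strict majority prefers a to b
            return sum(p[a] < p[b] for p in pos) * 2 > n_voters
        if any(all(beats(a, b) for b in range(k) if b != a) for a in range(k)):
            wins += 1
    return wins / trials

p = condorcet_winner_prob(3, 101, 2000)
print(p)   # close to 0.91 for k = 3 with many voters
```

An odd number of voters avoids pairwise ties, so "unique winner" is well defined here.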