Results 1 – 10 of 86
Optimal inapproximability results for MAX-CUT and other 2-variable CSPs?
, 2005
Abstract

Cited by 178 (28 self)
In this paper we show a reduction from the Unique Games problem to the problem of approximating MAX-CUT to within a factor of α_GW + ε, for all ε > 0; here α_GW ≈ .878567 denotes the approximation ratio achieved by the Goemans–Williamson algorithm [25]. This implies that if the Unique Games …
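The constant cited here is the Goemans–Williamson approximation ratio, which arises as the minimum over θ ∈ (0, π) of (θ/π) / ((1 − cos θ)/2) ≈ .878567. It can be checked numerically; a quick sketch (the grid-search granularity is my choice, not from the paper):

```python
import math

# The Goemans-Williamson ratio: alpha_GW = min over theta in (0, pi)
# of (theta / pi) / ((1 - cos(theta)) / 2) = 2*theta / (pi*(1 - cos(theta))).
def gw_ratio(theta: float) -> float:
    return (2.0 * theta) / (math.pi * (1.0 - math.cos(theta)))

# Simple grid search over (0, pi); the minimum is smooth, so a fine
# grid recovers the constant to several decimal places.
step = math.pi / 100_000
alpha = min(gw_ratio(i * step) for i in range(1, 100_000))
print(f"alpha_GW ~ {alpha:.6f}")  # close to 0.878567
```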
Noise stability of functions with low influences: invariance and optimality
Abstract

Cited by 85 (10 self)
In this paper we study functions with low influences on product probability spaces. The analysis of boolean functions f: {−1, 1}^n → {−1, 1} with low influences has become a central problem in discrete Fourier analysis. It is motivated by fundamental questions arising from the construction of probabilistically checkable proofs in theoretical computer science and from problems in the theory of social choice in economics. We prove an invariance principle for multilinear polynomials with low influences and bounded degree; it shows that under mild conditions the distribution of such polynomials is essentially invariant for all product spaces. Ours is one of the very few known nonlinear invariance principles. It has the advantage that its proof is simple and that the error bounds are explicit. We also show that the assumption of bounded degree can be eliminated if the polynomials are slightly “smoothed”; this extension is essential for our applications to “noise stability”-type problems. In particular, as applications of the invariance principle we prove two conjectures: the “Majority Is Stablest” conjecture [27] from theoretical computer science, which was the original motivation for this work, and the “It Ain’t Over Till It’s Over” conjecture [25] from social choice theory. The “Majority Is Stablest” conjecture and its generalizations proven here, in conjunction with “Unique Games” and its variants, imply a number of (optimal) inapproximability results for graph problems.
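The noise stability of majority referred to here can be checked empirically: by Sheppard's classical formula, Pr[Maj_n(x) = Maj_n(y)] tends to 1 − arccos(ρ)/π as n → ∞ when x and y are ρ-correlated. A minimal Monte Carlo sketch (n, ρ, trial count, and seed are arbitrary choices of mine, not from the paper):

```python
import math
import random

# Empirical noise stability of Majority: draw x uniform in {-1,1}^n,
# let each y_i = x_i with probability (1 + rho)/2 (so E[x_i y_i] = rho),
# and estimate Pr[Maj(x) = Maj(y)].
def majority_stability(n=101, rho=0.5, trials=10_000, seed=0):
    rng = random.Random(seed)
    keep = (1 + rho) / 2
    agree = 0
    for _ in range(trials):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        y = [xi if rng.random() < keep else -xi for xi in x]
        if (sum(x) > 0) == (sum(y) > 0):  # n is odd, so no ties
            agree += 1
    return agree / trials

emp = majority_stability()
limit = 1 - math.acos(0.5) / math.pi  # Sheppard's formula: 2/3 at rho = 1/2
print(emp, limit)  # the estimate lands near 2/3
```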
The Importance of Being Biased
, 2002
Abstract

Cited by 84 (7 self)
The Minimum Vertex Cover problem is the problem of, given a graph, finding a smallest set of vertices that touches all edges. We show that it is NP-hard to approximate this problem to within a factor of 1.36067, improving on the previously known hardness result of a 7/6 factor.
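For contrast with this lower bound, the textbook upper bound is a factor-2 algorithm: take any maximal matching and output both endpoints of every matched edge. A minimal sketch (the edge-list representation and sample graph are mine, not the paper's):

```python
# Classical 2-approximation for Minimum Vertex Cover: scan the edges,
# and whenever an edge is uncovered, add BOTH endpoints.  The chosen
# edges form a maximal matching, and any cover must contain at least
# one endpoint of each matched edge, so the output is <= 2x optimal.
def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
c = vertex_cover_2approx(edges)
print(c)  # a valid cover; here the optimum has size 2 ({0, 2} works)
```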
Computing with Very Weak Random Sources
, 1994
Abstract

Cited by 76 (7 self)
For any fixed δ > 0, we show how to simulate RP algorithms in time n^{O(log n)} using the output of a δ-source with min-entropy R^δ. Such a weak random source is asked once for R bits; it outputs an R-bit string such that any string has probability at most 2^{−R^δ}. If δ > 1 − 1/(k + 1), our BPP simulations take time n^{O(log^{(k)} n)} (log^{(k)} is the logarithm iterated k times). We also give a polynomial-time BPP simulation using Chor–Goldreich sources of min-entropy R^{Ω(1)}, which is optimal. We present applications to time-space tradeoffs, expander constructions, and the hardness of approximation. Also of interest is our randomness-efficient Leftover Hash Lemma, found independently by Goldreich & Wigderson.
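As a small aside on the terminology, the min-entropy of a finite distribution is −log₂ of its largest point probability, and a weak source guarantees only a lower bound on this quantity. A quick sketch (the example distribution is hypothetical, purely for illustration):

```python
import math

# Min-entropy of a finite distribution: -log2(max point probability).
# Weak random sources are exactly distributions promised to have
# min-entropy above some threshold.
def min_entropy(probs):
    assert abs(sum(probs) - 1.0) < 1e-9  # sanity: probabilities sum to 1
    return -math.log2(max(probs))

# Hypothetical source over the 8 strings of R = 3 bits: one heavy
# string of mass 1/4, the remaining mass spread uniformly.
probs = [0.25] + [0.75 / 7] * 7
h = min_entropy(probs)
print(h)  # -log2(1/4) = 2.0, out of a maximum of 3 bits
```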
On the Hardness of Approximating Multicut and Sparsest-Cut
 In Proceedings of the 20th Annual IEEE Conference on Computational Complexity
, 2005
Abstract

Cited by 75 (4 self)
We show that the MULTICUT, SPARSEST-CUT, and MIN-2CNF≡DELETION problems are NP-hard to approximate within every constant factor, assuming the Unique Games Conjecture of Khot [STOC, 2002]. A quantitatively stronger version of the conjecture implies an inapproximability factor of Ω(log log n).
On Metric Ramsey-Type Phenomena
Abstract

Cited by 69 (39 self)
The main question studied in this article may be viewed as a nonlinear analog of Dvoretzky's Theorem in Banach space theory or as part of Ramsey Theory in combinatorics.
Extracting randomness from samplable distributions
 In Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science
, 2000
Abstract

Cited by 59 (7 self)
The standard notion of a randomness extractor is a procedure which converts any weak source of randomness into an almost uniform distribution. The conversion necessarily uses a small amount of pure randomness, which can be eliminated by complete enumeration in some, but not all, applications. Here, we consider the problem of deterministically converting a weak source of randomness into an almost uniform distribution. Previously, deterministic extraction procedures were known only for sources satisfying strong independence requirements. In this paper, we look at sources which are samplable, i.e. can be generated by an efficient sampling algorithm. We seek an efficient deterministic procedure that, given a sample from any samplable distribution of sufficiently large min-entropy, gives an almost uniformly distributed output. We explore the conditions under which such deterministic extractors exist. We observe that no deterministic extractor exists if the sampler is allowed to use more computational resources than the extractor. On the other hand, if the extractor is allowed (polynomially) more resources than the sampler, we show that deterministic extraction becomes possible. This is true unconditionally in the non-uniform setting (i.e., when the extractor can be computed by a small circuit), and (necessarily) relies on complexity assumptions in the uniform setting. One of our uniform constructions is as follows: assuming that there are problems in E = DTIME(2^{O(n)}) that are not solvable by subexponential-size circuits with Σ gates, there is an efficient extractor that transforms any samplable distribution of length n and min-entropy (1 − γ)n into an output distribution of length (1 − O(γ))n, where γ is any sufficiently small constant. The running time of the extractor is polynomial in n and the circuit complexity of the sampler. These extractors are based on a connection be …
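The "strong independence requirements" of earlier deterministic extractors are exemplified by von Neumann's classical trick, which turns independent coin flips of fixed unknown bias into exactly uniform bits. A minimal sketch (the bias, seed, and sample size are arbitrary choices of mine):

```python
import random

# von Neumann's deterministic extractor: given independent flips of a
# coin with fixed unknown bias p, look at disjoint pairs; output the
# first bit when the pair is "01" or "10", discard "00" and "11".
# Each emitted bit is exactly uniform because P(01) = P(10) = p*(1-p).
def von_neumann(bits):
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

rng = random.Random(1)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(10_000)]
unbiased = von_neumann(biased)
print(len(unbiased), sum(unbiased) / len(unbiased))  # empirical mean near 0.5
```

Note that this says nothing about samplable sources; it only illustrates why independence assumptions make deterministic extraction easy.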
Deterministic Extractors for Bit-Fixing Sources and Exposure-Resilient Cryptography
 In Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science
, 2003
Abstract

Cited by 57 (3 self)
We give an efficient deterministic algorithm that extracts Ω(n^{2γ}) almost-random bits from sources where n^{1/2 + γ} of the n bits are uniformly random and the rest are fixed in advance. This improves upon previous constructions, which required that at least n/2 of the bits be random in order to extract many bits. Our construction also has applications in exposure-resilient cryptography, giving explicit adaptive exposure-resilient functions and, in turn, adaptive all-or-nothing transforms. For sources where instead of bits the values are chosen from [d], for d > 2, we give an algorithm that extracts a constant fraction of the randomness. We also give bounds on extracting randomness for sources where the fixed bits can depend on the random bits.
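The simplest instance of deterministic extraction from such a source is worth keeping in mind: when the fixed positions are chosen obliviously and at least one bit is free, the parity of all n bits is a single perfectly uniform output bit, since flipping any one free bit flips the parity. A small exhaustive check (the 4-bit example is mine, not from the paper):

```python
import itertools

# Parity as a one-bit deterministic extractor for oblivious bit-fixing
# sources: XOR all n bits together.
def parity_extractor(bits):
    out = 0
    for b in bits:
        out ^= b
    return out

# Exhaustive check: fix bits 0 and 2 of a 4-bit source, let the other
# two bits range over all values, and count the parity outcomes.
fixed = {0: 1, 2: 0}
free = [i for i in range(4) if i not in fixed]
counts = [0, 0]
for assignment in itertools.product([0, 1], repeat=len(free)):
    bits = [0] * 4
    for i, v in fixed.items():
        bits[i] = v
    for i, v in zip(free, assignment):
        bits[i] = v
    counts[parity_extractor(bits)] += 1
print(counts)  # [2, 2]: parity is exactly uniform over this source
```

Extracting many bits, as the paper does, is what requires real work; the parity trick stops at one bit.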