Results 1–10 of 54
The PCP theorem by gap amplification
 In Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing
, 2006
Abstract

Cited by 128 (10 self)
The PCP theorem [3, 2] says that every language in NP has a witness format that can be checked probabilistically by reading only a constant number of bits from the proof. The celebrated equivalence of this theorem and inapproximability of certain optimization problems, due to [12], has placed the PCP theorem at the heart of the area of inapproximability. In this work we present a new proof of the PCP theorem that draws on this equivalence. We give a combinatorial proof for the NP-hardness of approximating a certain constraint satisfaction problem, which can then be reinterpreted to yield the PCP theorem. Our approach is to consider the unsat value of a constraint system, which is the smallest fraction of unsatisfied constraints, ranging over all possible assignments for the underlying variables. We describe a new combinatorial amplification transformation that doubles the unsat value of a constraint system, with only a linear blowup in the size of the system. The amplification step causes an increase in alphabet size that is corrected by a (standard) PCP composition step. Iterative application of these two steps yields a proof for the PCP theorem. The amplification lemma relies on a new notion of “graph powering” that can be applied to systems of binary constraints. This powering amplifies the unsat value of a constraint system provided that the underlying graph structure is an expander. We also extend our amplification lemma towards construction of assignment testers (alternatively, PCPs of Proximity), which are slightly stronger objects than PCPs. We then construct PCPs and locally testable codes whose length is linear up to a polylog factor, and whose correctness can be probabilistically verified by making a constant number of queries. Namely, we prove SAT ∈
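The abstract's central quantity, the unsat value, is concrete enough to sketch directly: the smallest fraction of violated constraints over all assignments. A minimal brute-force illustration (the function name and the triangle example are our own, not from the paper; the paper's amplification machinery is far more involved):

```python
# Hypothetical sketch of the "unsat value" of a binary constraint system:
# the minimum, over all assignments, of the fraction of violated constraints.
from itertools import product

def unsat_value(num_vars, alphabet, constraints):
    """constraints: list of (i, j, pred) where pred(a, b) -> bool is a
    binary constraint on the values of variables i and j."""
    best = 1.0
    for assignment in product(alphabet, repeat=num_vars):
        violated = sum(1 for i, j, pred in constraints
                       if not pred(assignment[i], assignment[j]))
        best = min(best, violated / len(constraints))
    return best

# Example: inequality constraints on a triangle (an odd cycle) over a
# 2-letter alphabet: no assignment satisfies all three constraints.
tri = [(0, 1, lambda a, b: a != b),
       (1, 2, lambda a, b: a != b),
       (2, 0, lambda a, b: a != b)]
print(unsat_value(3, [0, 1], tri))  # 1/3: one constraint must always fail
```

With a 3-letter alphabet the same system becomes satisfiable and the unsat value drops to 0, which is why the paper's amplification must control alphabet growth.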
Extractors and Pseudorandom Generators
 Journal of the ACM
, 1999
Abstract

Cited by 93 (5 self)
We introduce a new approach to constructing extractors. Extractors are algorithms that transform a "weakly random" distribution into an almost uniform distribution. Explicit constructions of extractors have a variety of important applications, and tend to be very difficult to obtain.
Extracting Randomness: A Survey and New Constructions
, 1999
Abstract

Cited by 89 (5 self)
In this paper we do two things. First, we survey extractors and dispersers: what they are, how they can be designed, and some of their applications. The work described in the survey is due to a long list of research papers by various authors, most notably by David Zuckerman. Then, we present a new tool for constructing explicit extractors and give two new constructions that greatly improve upon previous results. The new tool we devise, a “merger,” is a function that accepts d strings, one of which is uniformly distributed, and outputs a single string that is guaranteed to be uniformly distributed. We show how to build good explicit mergers, and how mergers can be used to build better extractors. Using this, we present two new constructions. The first construction succeeds in extracting all of the randomness from any somewhat random source. This improves upon previous extractors that extract only some of the randomness from somewhat random sources with “enough” randomness. The amount of truly random bits used by this extractor, however, is not optimal. The second extractor we build extracts only some of the randomness and works only for sources with enough randomness, but uses a near-optimal amount of truly random bits. Extractors and dispersers have many applications in “removing randomness” in various settings and in making randomized constructions explicit. We survey some of these applications and note whenever our new constructions yield better results, e.g., plugging our new extractors into a previous construction we achieve the first explicit N-superconcentrators of linear size and polyloglog(N) depth.
Robust PCPs of Proximity, Shorter PCPs and Applications to Coding
 in Proc. 36th ACM Symp. on Theory of Computing
, 2004
Abstract

Cited by 84 (28 self)
We continue the study of the tradeoff between the length of PCPs and their query complexity, establishing the following main results (which refer to proofs of satisfiability of circuits of size n): 1. We present PCPs of length exp(Õ(log log n)) · n that can be verified by making o(log log n) Boolean queries.
Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes
 In Proceedings of the 22nd Annual IEEE Conference on Computational Complexity
, 2007
Abstract

Cited by 80 (7 self)
We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous constructions of Ta-Shma, Umans, and Zuckerman (STOC ‘01) required at least one of these to be quasipolynomial in the optimal. Our expanders have a short and self-contained description and analysis, based on the ideas underlying the recent list-decodable error-correcting codes of Parvaresh and Vardy (FOCS ‘05). Our expanders can be interpreted as near-optimal “randomness condensers,” that reduce the task of extracting randomness from sources of arbitrary min-entropy rate to extracting randomness from sources of min-entropy rate arbitrarily close to 1, which is a much easier task. Using this connection, we obtain a new construction of randomness extractors that is optimal up to constant factors, while being much simpler than the previous construction of Lu et al. (STOC ‘03) and improving upon it when the error parameter is small (e.g. 1/poly(n)).
Computing with Very Weak Random Sources
, 1994
Abstract

Cited by 74 (7 self)
For any fixed δ > 0, we show how to simulate RP algorithms in time n^{O(log n)} using the output of a δ-source with min-entropy R^δ. Such a weak random source is asked once for R bits; it outputs an R-bit string such that any string has probability at most 2^{−R^δ}. If δ > 1 − 1/(k + 1), our BPP simulations take time n^{O(log^{(k)} n)} (log^{(k)} is the logarithm iterated k times). We also give a polynomial-time BPP simulation using Chor-Goldreich sources of min-entropy R^{Ω(1)}, which is optimal. We present applications to time-space tradeoffs, expander constructions, and the hardness of approximation. Also of interest is our randomness-efficient Leftover Hash Lemma, found independently by Goldreich & Wigderson.
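The weak-source model in this abstract is parameterized by min-entropy, i.e. a bound on the probability of the source's likeliest output string. A minimal sketch of that definition (the helper name and the toy sources are our own, not from the paper):

```python
import math

def min_entropy(dist):
    """Min-entropy of a distribution {outcome: probability}:
    H_inf = -log2 of the largest outcome probability."""
    return -math.log2(max(dist.values()))

R = 4
# A uniform R-bit source: every string has probability 2^-R, so H_inf = R.
uniform = {format(x, "04b"): 2 ** -R for x in range(2 ** R)}
# A skewed source that outputs "0000" half the time: H_inf = 1,
# even though it still emits R bits per request.
skewed = {s: (0.5 if s == "0000" else 0.5 / (2 ** R - 1)) for s in uniform}

print(min_entropy(uniform))  # 4.0
print(min_entropy(skewed))   # 1.0
```

A source asked for R bits can thus carry far fewer than R bits of usable randomness, which is exactly the gap the simulations above must bridge.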
A sample of samplers: a computational perspective on sampling (survey)
 In FOCS
, 1997
Abstract

Cited by 71 (7 self)
We consider the problem of estimating the average of a huge set of values. That is, given oracle access to an arbitrary function f : {0,1}^n → [0,1], we wish to estimate 2^{−n} · Σ_{x∈{0,1}^n} f(x) up to an additive error of ε. We are allowed to employ a randomized algorithm that may err with probability at most δ. We survey known algorithms for this problem and focus on the ideas underlying their construction. In particular, we present an algorithm that makes O(ε^{−2} · log(1/δ)) queries and uses n + O(log(1/ε)) + O(log(1/δ)) coin tosses, both complexities being very close to the corresponding lower bounds.
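The stated query bound of O(ε^{-2} · log(1/δ)) is the signature of the classic median-of-averages sampler: average within O(ε^{-2})-sized groups, then take the median of O(log(1/δ)) group averages. A sketch under our own names and constants (it spends fresh random bits per query, so it is not the survey's randomness-efficient construction):

```python
import math
import random
from statistics import median

def estimate_average(f, n, eps, delta, rng):
    """Estimate 2^-n * sum_x f(x) for f: {0,1}^n -> [0,1].
    Takes the median of O(log(1/delta)) group averages, each over
    O(eps^-2) uniform samples: O(eps^-2 * log(1/delta)) queries total."""
    groups = math.ceil(12 * math.log(1 / delta)) | 1  # force an odd count
    per_group = math.ceil(4 / eps ** 2)
    means = []
    for _ in range(groups):
        total = sum(f(rng.getrandbits(n)) for _ in range(per_group))
        means.append(total / per_group)
    return median(means)

# f(x) = lowest bit of x: the true average over {0,1}^20 is exactly 0.5.
est = estimate_average(lambda x: x & 1, 20, 0.05, 0.01, random.Random(0))
print(est)  # close to the true average 0.5
```

Chebyshev bounds each group average, and a Chernoff bound on the median drives the failure probability down to δ; the survey's contribution is doing this with only n + O(log(1/ε)) + O(log(1/δ)) coin tosses rather than fresh randomness per sample.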
Bounds For Dispersers, Extractors, And DepthTwo Superconcentrators
 SIAM Journal on Discrete Mathematics
, 2000
Exposure-Resilient Functions and All-Or-Nothing Transforms
, 2000
Abstract

Cited by 62 (12 self)
We study the problem of partial key exposure. Standard cryptographic definitions and constructions do not guarantee any security even if a tiny fraction of the secret key is compromised. We show how to build cryptographic primitives that remain secure even when an adversary is able to learn almost all of the secret key.
Extractors: Optimal up to Constant Factors
 STOC'03
, 2003
Abstract

Cited by 52 (12 self)
This paper provides the first explicit construction of extractors which are simultaneously optimal up to constant factors in both seed length and output length. More precisely, for every n, k, our extractor uses a random seed of length O(log n) to transform any random source on n bits with (min-)entropy k into a distribution on (1 − α)k bits that is ε-close to uniform. Here α and ε can be taken to be any positive constants. (In fact, ε can be almost polynomially small.) Our improvements are obtained via three new techniques, each of which may be of independent interest. The first is a general construction of mergers [22] from locally decodable error-correcting codes. The second introduces new condensers that have constant seed length (and retain a constant fraction of the min-entropy in the random source). The third is a way to augment the “win-win repeated condensing” paradigm of [17] with error reduction techniques like [15], so that our constant seed-length condensers can be used without error accumulation.