Results 1–10 of 143
The PCP theorem by gap amplification
 In Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, 2006
Abstract

Cited by 166 (8 self)
The PCP theorem [3, 2] says that every language in NP has a witness format that can be checked probabilistically by reading only a constant number of bits from the proof. The celebrated equivalence of this theorem and the inapproximability of certain optimization problems, due to [12], has placed the PCP theorem at the heart of the area of inapproximability. In this work we present a new proof of the PCP theorem that draws on this equivalence. We give a combinatorial proof for the NP-hardness of approximating a certain constraint satisfaction problem, which can then be reinterpreted to yield the PCP theorem. Our approach is to consider the unsat value of a constraint system, which is the smallest fraction of unsatisfied constraints, ranging over all possible assignments for the underlying variables. We describe a new combinatorial amplification transformation that doubles the unsat value of a constraint system, with only a linear blowup in the size of the system. The amplification step causes an increase in alphabet size that is corrected by a (standard) PCP composition step. Iterative application of these two steps yields a proof of the PCP theorem. The amplification lemma relies on a new notion of “graph powering” that can be applied to systems of binary constraints. This powering amplifies the unsat value of a constraint system provided that the underlying graph structure is an expander. We also extend our amplification lemma towards the construction of assignment testers (alternatively, PCPs of Proximity), which are slightly stronger objects than PCPs. We then construct PCPs and locally testable codes whose length is linear up to a polylog factor, and whose correctness can be probabilistically verified by making a constant number of queries. Namely, we prove SAT ∈
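The unsat value defined in this abstract is easy to state operationally; a brute-force Python sketch (illustrative only, exponential in the number of variables, all names hypothetical) makes the definition concrete for small binary constraint systems:

```python
from itertools import product

def unsat_value(num_vars, alphabet, constraints):
    """Smallest fraction of violated constraints over all assignments.

    `constraints` is a list of (i, j, allowed) triples, where `allowed`
    is the set of value pairs (a, b) that satisfy the binary constraint
    between variables i and j.
    """
    best = 1.0
    for assignment in product(alphabet, repeat=num_vars):
        violated = sum(
            1 for (i, j, allowed) in constraints
            if (assignment[i], assignment[j]) not in allowed
        )
        best = min(best, violated / len(constraints))
    return best

# A 3-cycle demanding unequal endpoints over a 2-letter alphabet cannot
# be 2-colored, so every assignment violates at least one constraint.
neq = {(0, 1), (1, 0)}
cycle = [(0, 1, neq), (1, 2, neq), (2, 0, neq)]
print(unsat_value(3, [0, 1], cycle))  # 1/3 ≈ 0.333...
```

The amplification transformation in the paper doubles this quantity (up to a cap) while keeping the system size linear, which brute force of course cannot illustrate at scale.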
Undirected ST-Connectivity in Log-Space
, 2004
Abstract

Cited by 162 (3 self)
We present a deterministic, log-space algorithm that solves st-connectivity in undirected graphs. The previous bound on the space complexity of undirected st-connectivity was log^{4/3}(·), obtained by Armoni, Ta-Shma, Wigderson and Zhou [ATSWZ00]. As undirected st-connectivity is complete for the class of problems solvable by symmetric, non-deterministic, log-space computations (the class SL), this algorithm implies that SL = L (where L is the class of problems solvable by deterministic log-space computations). Our algorithm also implies log-space constructible universal-traversal sequences for graphs with restricted labelling and log-space constructible universal-exploration sequences for general graphs.
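For contrast with the log-space bound, the decision problem itself is straightforward if space is not constrained; a plain BFS sketch (this is *not* Reingold's algorithm, which needs the zig-zag-product machinery to stay in O(log n) space):

```python
from collections import deque

def st_connected(n, edges, s, t):
    """Undirected st-connectivity by BFS over vertices 0..n-1.

    Uses Theta(n) space for the visited set; Reingold's result is that
    the same yes/no question can be decided in O(log n) space.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

print(st_connected(4, [(0, 1), (1, 2)], 0, 2))  # True
print(st_connected(4, [(0, 1), (1, 2)], 0, 3))  # False
```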
Unbalanced expanders and randomness extractors from Parvaresh–Vardy codes
 In Proceedings of the 22nd Annual IEEE Conference on Computational Complexity
, 2007
Abstract

Cited by 120 (7 self)
We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous constructions of Ta-Shma, Umans, and Zuckerman (STOC '01) required at least one of these to be quasi-polynomial in the optimal. Our expanders have a short and self-contained description and analysis, based on the ideas underlying the recent list-decodable error-correcting codes of Parvaresh and Vardy (FOCS '05). Our expanders can be interpreted as near-optimal “randomness condensers,” which reduce the task of extracting randomness from sources of arbitrary min-entropy rate to extracting randomness from sources of min-entropy rate arbitrarily close to 1, which is a much easier task. Using this connection, we obtain a new construction of randomness extractors that is optimal up to constant factors, while being much simpler than the previous construction of Lu et al. (STOC '03) and improving upon it when the error parameter is small (e.g. 1/poly(n)).
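The vertex-expansion property this abstract refers to can be checked exhaustively on toy graphs; a Python sketch (hypothetical interface, exponential time, for illustration only, not the paper's construction):

```python
from itertools import combinations

def worst_expansion(neighbors, left_size, max_set_size):
    """Worst ratio |Gamma(S)| / |S| over left-vertex sets with |S| <= max_set_size.

    A (K, A)-expander guarantees this ratio is at least A for every set
    of size at most K; the constructions in the paper achieve A
    arbitrarily close to the left degree D, the best possible.
    """
    worst = float("inf")
    for k in range(1, max_set_size + 1):
        for S in combinations(range(left_size), k):
            image = set()
            for x in S:
                image.update(neighbors(x))
            worst = min(worst, len(image) / k)
    return worst

# Toy degree-3 bipartite graph: left vertex x maps to {x, x+1, x+2} mod 7.
deg3 = lambda x: [(x + i) % 7 for i in range(3)]
print(worst_expansion(deg3, 7, 2))  # 2.0: adjacent pairs share two neighbors
```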
Non-Approximability Results for Optimization Problems on Bounded Degree Instances
, 2001
Abstract

Cited by 90 (4 self)
We prove some non-approximability results for restrictions of basic combinatorial optimization problems to instances of bounded "degree" or bounded "width." Specifically: we prove that the Max 3SAT problem on instances where each variable occurs in at most B clauses is hard to approximate to within a factor 7/8 + O(1/√B), unless RP = NP. Håstad [18] proved that the problem is approximable to within a factor 7/8 + 1/(64B) in polynomial time, and that it is hard to approximate to within a factor 7/8 + 1/(log B)^3. Our result uses a new randomized reduction from general instances of Max 3SAT to bounded-occurrences instances. The randomized reduction applies to other Max SNP problems as well.
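The 7/8 baseline that these factors are measured against comes from a uniformly random assignment, which satisfies each width-3 clause with probability 1 − (1/2)³ = 7/8; a Python sketch (hypothetical clause encoding: literals are ±v, 1-indexed) confirms this numerically:

```python
import random

def satisfied_fraction(clauses, assignment):
    """Fraction of 3-CNF clauses satisfied; literals are +v / -v (1-indexed)."""
    sat = sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )
    return sat / len(clauses)

# Each clause fails with probability (1/2)^3 under a random assignment,
# so the expected satisfied fraction is exactly 7/8 = 0.875.
random.seed(0)
clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
trials = 20000
avg = sum(
    satisfied_fraction(
        clauses,
        {v: random.random() < 0.5 for v in (1, 2, 3)},
    )
    for _ in range(trials)
) / trials
print(avg)  # close to 7/8 = 0.875
```

The hardness result says that beating this trivial baseline by more than O(1/√B) is already intractable on bounded-occurrence instances.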
Uniform expansion bound for Cayley graphs of SL2(Fp)
 Ann. Math.
Abstract

Cited by 89 (11 self)
We prove that Cayley graphs of SL2(Fp) are expanders with respect to the projection of any fixed elements in SL(2, Z) generating a non-elementary subgroup, and with respect to generators chosen at random in SL2(Fp).
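One family covered by this theorem uses the projections of the Sanov generators [[1,2],[0,1]] and [[1,0],[2,1]], which generate a non-elementary (indeed free) subgroup of SL(2, Z). A Python sketch can at least sanity-check that these projections generate all of SL2(F_p), a group of order p(p² − 1); it checks generation only, not the spectral gap that makes the graphs expanders:

```python
from collections import deque

def cayley_ball(p, generators):
    """Number of elements of SL2(F_p) reachable from the identity by
    right-multiplying with the given generators (matrices encoded as
    4-tuples (a, b, c, d) read row by row).  In a finite group this
    reachable set is exactly the subgroup the generators generate.
    """
    def mul(x, y):
        a, b, c, d = x
        e, f, g, h = y
        return ((a * e + b * g) % p, (a * f + b * h) % p,
                (c * e + d * g) % p, (c * f + d * h) % p)

    identity = (1, 0, 0, 1)
    seen, queue = {identity}, deque([identity])
    while queue:
        x = queue.popleft()
        for s in generators:
            y = mul(x, s)
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return len(seen)

sanov = [(1, 2, 0, 1), (1, 0, 2, 1)]
p = 5
print(cayley_ball(p, sanov), p * (p * p - 1))  # both 120: the whole group
```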
On Constructing Locally Computable Extractors and Cryptosystems In The Bounded Storage Model
 Journal of Cryptology
, 2002
Abstract

Cited by 81 (8 self)
We consider the problem of constructing randomness extractors which are locally computable, i.e., read only a small number of bits from their input. As recently shown by Lu (CRYPTO '02), locally computable extractors directly yield secure private-key cryptosystems in Maurer's bounded storage model (J. Cryptology, 1992).
Extracting all the Randomness and Reducing the Error in Trevisan's Extractors
 In Proceedings of the 31st Annual ACM Symposium on Theory of Computing
, 1999
Abstract

Cited by 81 (14 self)
We give explicit constructions of extractors which work for a source of any min-entropy on strings of length n. These extractors can extract any constant fraction of the min-entropy using O(log² n) additional random bits, and can extract all the min-entropy using O(log³ n) additional random bits. Both of these constructions use fewer truly random bits than any previous construction which works for all min-entropies and extracts a constant fraction of the min-entropy. We then improve our second construction and show that we can reduce the entropy loss to 2 log(1/ε) + O(1) bits, while still using O(log³ n) truly random bits (where entropy loss is defined as [(source min-entropy) + (# truly random bits used) − (# output bits)], and ε is the statistical difference from uniform achieved). This entropy loss is optimal up to a constant additive term.
Computing Separable Functions via Gossip
, 2006
Abstract

Cited by 75 (6 self)
Motivated by applications to sensor, peer-to-peer, and ad-hoc networks, we study the problem of computing functions of values at the nodes in a network in a totally distributed manner. In particular, we consider separable functions, which can be written as linear combinations of functions of individual variables. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend in general to the computation of the actual values of separable functions. The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions based on properties of exponential random variables. We bound the running time of our algorithm in terms of the running time of an information spreading algorithm used as a subroutine. Since we are interested in totally distributed algorithms, we consider a randomized gossip mechanism for information spreading as the subroutine. Combining these algorithms yields a complete and simple distributed algorithm for computing separable functions. The second contribution of this paper is an analysis of the information spreading time of the gossip algorithm. This analysis yields an upper bound on the information spreading time, and therefore a corresponding upper bound on the running time of the algorithm for computing separable functions, in terms of the conductance of an appropriate stochastic matrix. These bounds imply that, for a class of graphs with small spectral gap (such as grid graphs), the time used by our algorithm to compute averages is of a smaller order than the time required for the computation of averages by a known iterative gossip scheme [5].
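The exponential-random-variable property the abstract alludes to is that the minimum of independent Exponential(y_i) variables is Exponential(Σ y_i), and a minimum is exactly the kind of quantity a gossip protocol can spread (each node keeps the smallest value it has heard). A centralized Python simulation of the estimator (hypothetical names; the real protocol runs the repetitions as a vector of minima spread by gossip):

```python
import random

def estimate_sum(values, trials=4000, seed=1):
    """Estimate sum(values) for positive values via the exponential-minimum trick.

    Each node i draws W_i ~ Exponential(rate = y_i); the network-wide
    minimum is Exponential(rate = sum(y_i)).  The sum of `trials`
    independent minima has mean trials / sum(y_i), so inverting it
    gives an estimate with relative error about 1 / sqrt(trials).
    """
    rng = random.Random(seed)
    total_min = sum(
        min(rng.expovariate(y) for y in values)
        for _ in range(trials)
    )
    return trials / total_min

values = [3.0, 1.0, 2.0, 4.0]
print(estimate_sum(values))  # close to sum(values) = 10
```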