Results 1–10 of 42
Two-Query PCP with Sub-Constant Error
, 2008
"... We show that the N PComplete language 3SAT has a PCP verifier that makes two queries to a proof of almostlinear size and achieves subconstant probability of error o(1). The verifier performs only projection tests, meaning that the answer to the first query determines at most one accepting answer ..."
Abstract

Cited by 35 (3 self)
We show that the NP-complete language 3SAT has a PCP verifier that makes two queries to a proof of almost-linear size and achieves sub-constant probability of error o(1). The verifier performs only projection tests, meaning that the answer to the first query determines at most one accepting answer to the second query. Previously, by the parallel repetition theorem, there were PCP Theorems with two-query projection tests, but only (arbitrarily small) constant error and polynomial size [29]. There were also PCP Theorems with sub-constant error and almost-linear size, but a constant number of queries that is larger than 2 [26]. As a corollary, we obtain a host of new results. In particular, our theorem improves many of the hardness of approximation results that are proved using the parallel repetition theorem. A partial list includes the following: 1. 3SAT cannot be efficiently approximated to within a factor of 7/8 + o(1), unless P = NP. This holds even under almost-linear reductions. Previously, the best known NP-hardness ...
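The 7/8 threshold in the abstract is tight because a uniformly random assignment already satisfies a 7/8 fraction of the clauses of any 3CNF whose clauses mention three distinct variables. A small sketch (formula and helper names are illustrative, not from the paper) verifies this exactly by averaging over all assignments:

```python
from fractions import Fraction
from itertools import product

def avg_satisfied_fraction(n_vars, clauses):
    """Average fraction of satisfied clauses over all 2^n assignments.

    A clause is a tuple of literals; literal +i / -i means variable i
    (1-indexed) appears positively / negated.
    """
    total = Fraction(0)
    for bits in product([False, True], repeat=n_vars):
        sat = sum(
            1 for cl in clauses
            if any(bits[abs(l) - 1] == (l > 0) for l in cl)
        )
        total += Fraction(sat, len(clauses))
    return total / 2 ** n_vars

# Each clause over three distinct variables is falsified by exactly
# 1/8 of the assignments, so the average satisfied fraction is 7/8.
clauses = [(1, 2, 3), (-1, 2, -4), (2, -3, 4), (-1, -2, -3)]
assert avg_satisfied_fraction(4, clauses) == Fraction(7, 8)
```

Since 7/8 is achievable by this trivial algorithm, the theorem says no efficient algorithm can do noticeably better unless P = NP.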
Improving the Robustness of Private Information Retrieval
 In Proceedings of IEEE Security and Privacy Symposium
, 2007
"... Since 1995, much work has been done creating protocols for private information retrieval (PIR). Many variants of the basic PIR model have been proposed, including such modifications as computational vs. informationtheoretic privacy protection, correctness in the face of servers that fail to respond ..."
Abstract

Cited by 24 (11 self)
Since 1995, much work has been done creating protocols for private information retrieval (PIR). Many variants of the basic PIR model have been proposed, including such modifications as computational vs. information-theoretic privacy protection, correctness in the face of servers that fail to respond or that respond incorrectly, and protection of sensitive data against the database servers themselves. In this paper, we improve on the robustness of PIR in a number of ways. First, we present a Byzantine-robust PIR protocol which provides information-theoretic privacy protection against coalitions of up to all but one of the responding servers, improving the previous result by a factor of 3. In addition, our protocol allows for more of the responding servers to return incorrect information while still enabling the user to compute the correct result. We then extend our protocol so that queries have information-theoretic protection if a limited number of servers collude, as before, but still retain computational protection if they all collude. We also extend the protocol to provide information-theoretic protection to the contents of the database against collusions of limited numbers of the database servers, at no additional communication cost or increase in the number of servers. All of our protocols retrieve a block of data with communication cost only O(ℓ) times the size of the block, where ℓ is the number of servers. Finally, we discuss our implementation of these protocols, and measure their performance in order to determine their practicality.
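To fix ideas, the classic two-server information-theoretic PIR scheme for a one-bit-per-record database works by XOR-sharing the query: server 1 gets a uniformly random subset of indices, server 2 gets the same subset with the target index flipped. This is a textbook baseline, not the robust protocol of the paper above; names are illustrative.

```python
import secrets

def pir_queries(n, i):
    """Two-server information-theoretic PIR for an n-bit database.

    Server 1 receives a uniformly random selection vector; server 2
    receives the same vector with position i flipped. Each query on
    its own is uniformly random, so neither server learns i.
    """
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = list(q1)
    q2[i] ^= 1
    return q1, q2

def server_answer(db, query):
    # XOR of the database bits selected by the query vector.
    acc = 0
    for bit, sel in zip(db, query):
        acc ^= bit & sel
    return acc

db = [1, 0, 1, 1, 0, 0, 1, 0]
i = 3
q1, q2 = pir_queries(len(db), i)
# The two selection vectors differ only at i, so the XOR of the two
# answers is exactly db[i].
assert server_answer(db, q1) ^ server_answer(db, q2) == db[i]
```

The robustness improvements in the paper address what this baseline cannot: servers that answer incorrectly, and coalitions of colluding servers.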
A hypercontractive inequality for matrix-valued functions with applications to quantum computing and LDCs
"... The BonamiBeckner hypercontractive inequality is a powerful tool in Fourier analysis of realvalued functions on the Boolean cube. In this paper we present a version of this inequality for matrixvalued functions on the Boolean cube. Its proof is based on a powerful inequality by Ball, Carlen, and ..."
Abstract

Cited by 21 (4 self)
The Bonami-Beckner hypercontractive inequality is a powerful tool in Fourier analysis of real-valued functions on the Boolean cube. In this paper we present a version of this inequality for matrix-valued functions on the Boolean cube. Its proof is based on a powerful inequality by Ball, Carlen, and Lieb. We also present a number of applications. First, we analyze maps that encode n classical bits into m qubits, in such a way that each set of k bits can be recovered with some probability by an appropriate measurement on the quantum encoding; we show that if m < 0.7n, then the success probability is exponentially small in k. This result may be viewed as a direct product version of Nayak’s quantum random access code bound. It in turn implies strong direct product theorems for the one-way quantum communication complexity of Disjointness and other problems. Second, we prove that error-correcting codes that are locally decodable with 2 queries require length exponential in the length of the encoded string. This gives what is arguably the first “non-quantum” proof of a result originally derived by Kerenidis and de Wolf using quantum information theory.
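For reference, the scalar Bonami-Beckner inequality that the abstract generalizes to matrix-valued functions can be stated as follows (this is the standard textbook form, not quoted from the paper):

```latex
% Noise operator on f : \{-1,1\}^n \to \mathbb{R}, written in the
% Fourier basis f = \sum_{S} \hat f(S)\, \chi_S:
(T_\rho f)(x) \;=\; \sum_{S \subseteq [n]} \rho^{|S|}\, \hat f(S)\, \chi_S(x)

% Hypercontractivity: for 1 \le p \le q and
% 0 \le \rho \le \sqrt{(p-1)/(q-1)},
\|T_\rho f\|_q \;\le\; \|f\|_p
```

The paper's matrix-valued version replaces absolute values in the norms by Schatten norms of the matrix values, which is what makes it applicable to quantum encodings and 2-query LDC lower bounds.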
Sparse random linear codes are locally decodable and testable
 in Proc. 40th STOC
, 2007
"... We show that random sparse binary linear codes are locally testable and locally decodable (under any linear encoding) with constant queries (with probability tending to one). By sparse, we mean that the code should have only polynomially many codewords. Our results are the first to show that local d ..."
Abstract

Cited by 12 (6 self)
We show that random sparse binary linear codes are locally testable and locally decodable (under any linear encoding) with constant queries (with probability tending to one). By sparse, we mean that the code should have only polynomially many codewords. Our results are the first to show that local decodability and testability can be found in random, unstructured, codes. Previously known locally decodable or testable codes were either classical algebraic codes, or new ones constructed very carefully. We obtain our results by extending the techniques of Kaufman and Litsyn [11] who used the MacWilliams Identities to show that “almost-orthogonal” binary codes are locally testable. Their definition of almost-orthogonality expected codewords to disagree in n/2 ± O(√n) coordinates in codes of block length n. The only families of codes known to have this property were the dual-BCH codes. We extend their techniques, and simplify them in the process, to include codes of distance at least n/2 − O(n^(1−γ)) for any γ > 0, provided the number of codewords is O(n^t) for some constant t. Thus our results derive the local testability of linear codes from the classical coding theory parameters, namely the rate and the distance of the codes. More significantly, we show that this technique can also be used to prove the “self-correctability” of sparse codes of sufficiently large distance. This allows us to show that random linear codes under linear encoding functions are locally decodable. This ought to be surprising in that the definition of a code doesn’t specify the encoding function used! Our results effectively say that any linear function of the bits of the codeword can be locally decoded in this case.
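The n/2 ± O(√n) weight concentration that drives the argument is easy to observe empirically: in a random sparse linear code, every nonzero codeword is an individually uniform n-bit string, so its weight concentrates around n/2. A toy sketch (the parameters and helper names are illustrative, not from the paper):

```python
import random

def random_linear_code(n, k, rng):
    """All 2^k codewords of a random [n, k] binary linear code."""
    gen = [[rng.randrange(2) for _ in range(n)] for _ in range(k)]
    words = []
    for msg in range(2 ** k):
        cw = [0] * n
        for row in range(k):
            if (msg >> row) & 1:
                cw = [a ^ b for a, b in zip(cw, gen[row])]
        words.append(cw)
    return words

rng = random.Random(0)
n, k = 128, 7          # 2^7 = 128 = n codewords: "sparse" in the above sense
code = random_linear_code(n, k, rng)
# Every nonzero codeword is a uniform n-bit string, so its weight is
# n/2 +- O(sqrt(n)) except with vanishing probability.
for cw in code[1:]:
    assert abs(sum(cw) - n // 2) <= 5 * int(n ** 0.5)
```

The paper's contribution is showing that this distance/weight profile alone, via the MacWilliams Identities, suffices for local testability and self-correctability.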
Limits on the rate of locally testable affine-invariant codes
, 2009
"... Despite its many applications, to program checking, probabilistically checkable proofs, locally testable and locally decodable codes, and cryptography, “algebraic property testing ” is not wellunderstood. A significant obstacle to a better understanding, was a lack of a concrete definition that abst ..."
Abstract

Cited by 12 (8 self)
Despite its many applications (to program checking, probabilistically checkable proofs, locally testable and locally decodable codes, and cryptography), “algebraic property testing” is not well-understood. A significant obstacle to a better understanding was the lack of a concrete definition that abstracted known testable algebraic properties and reflected their testability. This obstacle was removed by [Kaufman and Sudan, STOC 2008] who considered (linear) “affine-invariant properties”, i.e., properties that are closed under summation, and under affine transformations of the domain. Kaufman and Sudan showed that these two features (linearity of the property and its affine-invariance) play a central role in the testability of many known algebraic properties. However their work does not give a complete characterization of the testability of affine-invariant properties, and several technical obstacles need to be overcome to obtain such a characterization. Indeed, their work left open the tantalizing possibility that locally testable codes of rate dramatically better than that of the family of Reed-Muller codes (the most popular form of locally testable codes, which also happen to be affine-invariant) could be found by systematically exploring the space of affine-invariant properties.
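Affine-invariance is concrete for the Reed-Muller example mentioned above: composing a low-degree polynomial with an affine map of the domain preserves its degree. The sketch below (hypothetical helper names, tiny parameters) checks this for RM(1, m) over F_2, i.e., the code of all affine Boolean functions:

```python
from itertools import product

m = 3
points = list(product([0, 1], repeat=m))

def affine_fn(a, b):
    """Truth table of f(x) = <a, x> + b over F_2^m."""
    return tuple((sum(ai * xi for ai, xi in zip(a, x)) + b) % 2
                 for x in points)

# The code: truth tables of all affine functions, i.e. RM(1, m).
code = {affine_fn(a, b) for a in product([0, 1], repeat=m) for b in (0, 1)}

def compose_affine(table, A, c):
    """Truth table of x -> f(Ax + c): an affine transform of the domain."""
    def T(x):
        return tuple((sum(A[i][j] * x[j] for j in range(m)) + c[i]) % 2
                     for i in range(m))
    idx = {p: pos for pos, p in enumerate(points)}
    return tuple(table[idx[T(x)]] for x in points)

A = [[1, 1, 0], [0, 1, 0], [1, 0, 1]]   # any F_2 matrix works here
c = (1, 0, 1)
# Closure under affine maps: f(Ax+c) = (A^T a)·x + (a·c + b) is affine.
for table in code:
    assert compose_affine(table, A, c) in code
```

Linearity (closure under summation of truth tables) holds for the same reason, so RM(1, m) is an affine-invariant property in the Kaufman-Sudan sense.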
A Note on Yekhanin’s Locally Decodable Codes
 ELECTRONIC COLLOQUIUM ON COMPUTATIONAL COMPLEXITY, REPORT NO. 16 (2007)
, 2007
"... Locally Decodable codes(LDC) support decoding of any particular symbol of the input message by reading constant number of symbols of the codeword, even in presence of constant fraction of errors. In a recent breakthrough [9], Yekhanin constructedquery LDCs that hugely improve over earlier construct ..."
Abstract

Cited by 11 (0 self)
Locally Decodable Codes (LDCs) support decoding of any particular symbol of the input message by reading a constant number of symbols of the codeword, even in the presence of a constant fraction of errors. In a recent breakthrough [9], Yekhanin constructed 3-query LDCs that hugely improve over earlier constructions. Specifically, for a Mersenne prime, binary LDCs of length [...] for infinitely many [...] were obtained. Using the largest known Mersenne prime, this implies LDCs of length less than [...]. Assuming the infinitude of Mersenne primes, the construction yields LDCs of length [...] for infinitely many [...]. Inspired by [9], we construct 3-query binary LDCs with the same parameters from Mersenne primes. While all the main technical tools are borrowed from [9], we give a self-contained simple construction of LDCs. Our bounds do not improve over [9], and have worse soundness of the decoder. However, the LDCs are simpler and generalize naturally to prime fields other than [...]. The LDCs presented also translate directly into three-server Private Information Retrieval (PIR) protocols with communication complexities [...] for a database of size [...], starting with a Mersenne prime.
Error-Correcting Data Structures
, 2008
"... We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This model is the comm ..."
Abstract

Cited by 8 (3 self)
We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This model is the common generalization of (static) data structures and locally decodable error-correcting codes. The main issue is the tradeoff between the space used by the data structure and the time (number of probes) needed to answer a query about the encoded object. We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of error-correcting data structures for the Membership problem (where we want to store subsets of size s from a universe of size n) is closely related to the optimal length of locally decodable codes for s-bit strings.
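A minimal working instance of this model, far from the near-optimal parameters the paper studies, is a Membership structure built from a repetition code over the characteristic vector, decoded by majority vote per block (all names and parameters below are illustrative):

```python
def encode_membership(universe_size, subset, rep=5):
    """Naive error-correcting membership structure: encode the
    characteristic vector of the subset with a rep-fold repetition
    code. Length rep*n, query time rep probes."""
    bits = [1 if i in subset else 0 for i in range(universe_size)]
    return [b for b in bits for _ in range(rep)]

def query_membership(encoding, i, rep=5):
    # Majority vote over element i's block tolerates fewer than
    # rep/2 corrupted probes inside that block.
    block = encoding[i * rep:(i + 1) * rep]
    return sum(block) * 2 > rep

enc = encode_membership(8, {1, 4, 6})
enc[4 * 5] ^= 1        # corrupt one probe in element 4's block
enc[2 * 5 + 3] ^= 1    # and one in element 2's block
assert query_membership(enc, 4) is True
assert query_membership(enc, 2) is False
assert query_membership(enc, 0) is False
```

The interesting regime in the paper is exactly where this naive scheme fails: constant probes and length close to the information-theoretic optimum, against a constant fraction of errors placed adversarially anywhere in the encoding.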
Corruption and Recovery-Efficient Locally Decodable Codes
"... Abstract. A (q, δ, ɛ)locally decodable code (LDC) C: {0, 1} n → {0, 1} m is an encoding from nbit strings to mbit strings such that each bit xk can be recovered with probability at least 1 + ɛ from C(x) by a random2 ized algorithm that queries only q positions of C(x), even if up to δm positions ..."
Abstract

Cited by 7 (2 self)
A (q, δ, ɛ)-locally decodable code (LDC) C: {0, 1}^n → {0, 1}^m is an encoding from n-bit strings to m-bit strings such that each bit x_k can be recovered with probability at least 1/2 + ɛ from C(x) by a randomized algorithm that queries only q positions of C(x), even if up to δm positions of C(x) are corrupted. If C is a linear map, then the LDC is linear. We give improved constructions of LDCs in terms of the corruption parameter δ and recovery parameter ɛ. The key property of our LDCs is that they are nonlinear, whereas all previous LDCs were linear. 1. For any δ, ɛ ∈ [Ω(n^(−1/2)), O(1)], we give a family of (2, δ, ɛ)-LDCs with length m = poly(δ^(−1), ɛ^(−1)) exp(max(δ, ɛ)δn). For linear (2, δ, ɛ)-LDCs, Obata has shown that m ≥ exp(δn). Thus, for small enough constants δ, ɛ, two-query nonlinear LDCs are shorter than two-query linear LDCs. 2. We improve the dependence on δ and ɛ of all constant-query LDCs by providing general transformations to nonlinear LDCs. Taking Yekhanin’s linear (3, δ, 1/2 − 6δ)-LDCs with m = exp(n^(1/t)) for any prime of the form 2^t − 1, we obtain nonlinear (3, δ, ɛ)-LDCs with m = poly(δ^(−1), ɛ^(−1)) exp((max(δ, ɛ)δn)^(1/t)). Now consider a (q, δ, ɛ)-LDC C with a decoder that has n matchings M_1, ..., M_n on the complete q-uniform hypergraph whose vertices are identified with the positions of C(x). On input k ∈ [n] and received word y, the decoder chooses e = {a_1, ..., a_q} ∈ M_k uniformly at random and outputs the sum ⊕_{j=1}^{q} y_{a_j}. All known LDCs and ours have such a decoder, which we call a matching sum decoder. We show that if C is a two-query LDC with such a decoder, then m ≥ exp(max(δ, ɛ)δn). Interestingly, the techniques we use here can further improve the dependence on δ of Yekhanin’s three-query LDCs. Namely, if δ ≥ 1/12 then Yekhanin’s three-query LDCs become trivial (have recovery probability less than half), whereas we obtain three-query LDCs of length exp(n^(1/t)) for any prime of the form 2^t − 1 with nontrivial recovery probability for any δ < 1/6.
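The Hadamard code is the standard concrete example of a two-query linear LDC with a matching sum decoder as defined above: for bit k, the pairs {a, a ⊕ e_k} form a perfect matching M_k on the positions, and the decoder outputs the sum of the two queried bits. A sketch (parameters and corruption pattern are illustrative):

```python
import random

def hadamard_encode(x, n):
    """Hadamard code: one bit <a, x> mod 2 for every a in {0,1}^n,
    so the codeword has length 2^n."""
    return [bin(a & x).count("1") % 2 for a in range(2 ** n)]

def decode_bit(word, n, k, rng):
    """Matching sum decoder: pick an edge {a, a ^ e_k} of the
    matching M_k uniformly at random and output the sum of the two
    answers. On an uncorrupted word this equals x_k exactly."""
    a = rng.randrange(2 ** n)
    return word[a] ^ word[a ^ (1 << k)]

n, x = 6, 0b101101
word = hadamard_encode(x, n)
for pos in range(4):
    word[13 * pos] ^= 1        # corrupt 4 of the 64 positions
rng = random.Random(1)
# Each single decode is correct with probability >= 1 - 2*(4/64);
# a majority over repeated trials recovers every bit w.h.p.
for k in range(n):
    votes = sum(decode_bit(word, n, k, rng) for _ in range(101))
    assert (2 * votes > 101) == bool((x >> k) & 1)
```

The exponential length (2^n for n message bits) is exactly what the m ≥ exp(max(δ, ɛ)δn) lower bound above says is unavoidable for two-query matching sum decoders.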
EFFICIENT AND ERROR-CORRECTING DATA STRUCTURES FOR MEMBERSHIP AND POLYNOMIAL EVALUATION
 SUBMITTED TO THE SYMPOSIUM ON THEORETICAL ASPECTS OF COMPUTER SCIENCE
"... We construct efficient data structures that are resilient against a constant fraction of adversarial noise. Our model requires that the decoder answers most queries correctly with high probability and for the remaining queries, the decoder with high probability either answers correctly or declares “ ..."
Abstract

Cited by 7 (4 self)
We construct efficient data structures that are resilient against a constant fraction of adversarial noise. Our model requires that the decoder answers most queries correctly with high probability and, for the remaining queries, the decoder with high probability either answers correctly or declares “don’t know.” Furthermore, if there is no noise on the data structure, it answers all queries correctly with high probability. Our model is the common generalization of an error-correcting data structure model proposed recently by de Wolf, and the notion of “relaxed locally decodable codes” developed in the PCP literature. We measure the efficiency of a data structure in terms of its length (the number of bits in its representation) and query-answering time, measured by the number of bit-probes to the (possibly corrupted) representation. We obtain results for the following two data structure problems: • (Membership) Store a subset S of size at most s from a universe of size n such that membership queries can be answered efficiently, i.e., decide if a given element from the universe is in S. We construct an error-correcting data structure for this problem with length nearly linear in s log n that answers membership queries with O(1) bit-probes. This nearly matches the asymptotically optimal parameters for the noiseless case: length O(s log n) and one bit-probe, due to ...
Matching Vector Codes
"... An (r, δ, ɛ)locally decodable code encodes a kbit message x to an Nbit codeword C(x), such that for every i ∈ [k], the ith message bit can be recovered with probability 1 − ɛ, by a randomized decoding procedure that queries only r bits, even if the codeword C(x) is corrupted in up to δN location ..."
Abstract

Cited by 6 (2 self)
An (r, δ, ɛ)-locally decodable code encodes a k-bit message x to an N-bit codeword C(x), such that for every i ∈ [k], the i-th message bit can be recovered with probability 1 − ɛ by a randomized decoding procedure that queries only r bits, even if the codeword C(x) is corrupted in up to δN locations. Recently a new class of locally decodable codes, based on families of vectors with restricted dot products, has been discovered. We refer to these codes as Matching Vector (MV) codes. Several families of (r, δ, Θ(rδ))-locally decodable MV codes have been obtained. While codes in those families were shorter than codes of earlier generations, they suffered from large values of ɛ = Ω(rδ), which meant that r-query MV codes could only handle error rates below 1/r. Thus larger query complexity gave shorter codes, but at the price of less error-tolerance. No MV codes of super-constant number of queries capable of tolerating a constant fraction of errors were known to exist. In this paper we present a new view of matching vector codes and uncover certain similarities between MV codes and classical Reed-Muller codes. Our view allows us to obtain deeper insights into the power and limitations of MV codes. Specifically: 1. We show that existing families of MV codes can be enhanced to tolerate a large constant fraction of errors, independent of the number of queries. Such enhancement comes at the price of a moderate increase in the number of queries; 2. Our construction yields the first families of matching vector codes of super-constant query complexity that can tolerate a constant fraction of errors. Our codes are shorter than Reed-Muller LDCs for all values of r ≤ log k/(log log k)^c, for some constant c; 3. We show that any MV code encodes messages of length k to codewords of length at least k · 2^(Ω(√log k)). Therefore MV codes do not improve upon Reed-Muller LDCs for r ≥ (log k)^(Ω(√log k)).
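The "families of vectors with restricted dot products" underlying MV codes are matching vector families: pairs (u_i, v_i) over Z_m with ⟨u_i, v_i⟩ = 0 (mod m) and ⟨u_i, v_j⟩ ≠ 0 for i ≠ j. A toy brute-force search (the real constructions, such as Grolmusz's, yield super-polynomially larger families; all names here are illustrative):

```python
from itertools import product

def matching_vector_family(m, dim):
    """Greedily collect pairs (u_i, v_i) of nonzero vectors in Z_m^dim
    with <u_i, v_i> = 0 (mod m) and <u_i, v_j> != 0 for i != j:
    the combinatorial core of a Matching Vector code."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v)) % m

    vecs = [v for v in product(range(m), repeat=dim) if any(v)]
    us, vs = [], []
    for u, v in product(vecs, repeat=2):
        if dot(u, v) != 0:
            continue
        # Keep the cross inner products nonzero in both directions.
        if all(dot(u, w) != 0 and dot(z, v) != 0 for z, w in zip(us, vs)):
            us.append(u)
            vs.append(v)
    return us, vs

us, vs = matching_vector_family(6, 2)
assert len(us) >= 3
# The defining property: inner product vanishes exactly on the diagonal.
for i, u in enumerate(us):
    for j, v in enumerate(vs):
        inner = sum(a * b for a, b in zip(u, v)) % 6
        assert (inner == 0) == (i == j)
```

The power of composite moduli such as m = 6 is that matching vector families can be far larger than anything possible modulo a prime, which is what makes MV codes shorter than Reed-Muller LDCs in the regime described above.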