Results 1 – 10 of 67
Towards 3-Query Locally Decodable Codes of Subexponential Length
, 2008
Abstract

Cited by 75 (7 self)
A q-query Locally Decodable Code (LDC) encodes an n-bit message x as an N-bit codeword C(x), such that one can probabilistically recover any bit x_i of the message by querying only q bits of the codeword C(x), even after some constant fraction of codeword bits has been corrupted. We give new constructions of three-query LDCs of vastly shorter length than that of previous constructions. Specifically, given any Mersenne prime p = 2^t − 1, we design three-query LDCs of length N = exp(O(n^(1/t))), for every n. Based on the largest known Mersenne prime, this translates to a length of less than exp(O(n^(10^-7))), compared to exp(O(n^(1/2))) in the previous constructions. It has often been conjectured that there are infinitely many Mersenne primes. Under this conjecture, our constructions yield three-query locally decodable codes of length N = exp(n^(O(1/log log n))) for infinitely many n. We also obtain analogous improvements for Private Information Retrieval (PIR) schemes. We give 3-server PIR schemes with communication complexity O(n^(10^-7)) to access an n-bit database, compared to the previous best scheme with complexity O(n^(1/5.25)). Assuming again that there are infinitely many Mersenne primes, we get 3-server PIR schemes of communication complexity n^(O(1/log log n)) for infinitely many n. Previous families of LDCs and PIR schemes were based on the properties of low-degree multivariate polynomials over finite fields. Our constructions are completely different and are obtained by constructing a large number of vectors in a small-dimensional vector space whose inner products are restricted to lie in an algebraically nice set.
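The improvement in the exponent can be made concrete with a small arithmetic sketch. It assumes t = 32582657, the exponent of the largest Mersenne prime known when the paper appeared, suppresses the constants inside the O(·), and uses an illustrative message length n:

```python
def ldc_exponent(n: int, t: int) -> float:
    """The quantity n^(1/t) appearing in the codeword length N = exp(O(n^(1/t)))."""
    return n ** (1.0 / t)

n = 10 ** 9          # illustrative message length
t_old = 2            # previous constructions: N = exp(O(n^(1/2)))
t_new = 32582657     # exponent of the largest Mersenne prime known circa 2008

old = ldc_exponent(n, t_old)   # tens of thousands
new = ldc_exponent(n, t_new)   # barely above 1
assert new < old
assert 1.0 / t_new < 1e-7      # consistent with the stated exp(n^(10^-7)) bound
```

Since 1/32582657 is below 10^-7, the codeword-length exponent n^(1/t) is indeed smaller than n^(10^-7), matching the bound quoted in the abstract.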
3-Query Locally Decodable Codes of Subexponential Length
, 2008
Abstract

Cited by 58 (2 self)
Locally Decodable Codes (LDC) allow one to decode any particular symbol of the input message by making a constant number of queries to a codeword, even if a constant fraction of the codeword is damaged. In a recent work [Yek08] Yekhanin constructs a 3-query LDC with subexponential length exp(exp(O(log n / log log n))). However, this construction requires the conjecture that there are infinitely many Mersenne primes. In this paper we give the first unconditional constant-query LDC construction with subexponential codeword length. In addition, our construction reduces the codeword length: we give a construction of a 3-query LDC with codeword length exp(exp(O(√(log n · log log n)))). Our construction can also be extended to a higher number of queries: we give a 2^r-query LDC with length exp(exp(O((log n · (log log n)^(r−1))^(1/r)))).
Locally Decodable Codes with 2 queries and Polynomial Identity Testing for depth 3 circuits
 ELECTRONIC COLLOQUIUM ON COMPUTATIONAL COMPLEXITY, REPORT NO. 44 (2005)
, 2005
Abstract

Cited by 55 (14 self)
In this work we study two, seemingly unrelated, notions. Locally Decodable Codes (LDCs) are codes that allow the recovery of each message bit from a constant number of entries of the codeword. Polynomial Identity Testing (PIT) is one of the fundamental problems of algebraic complexity: we are given a circuit computing a multivariate polynomial and we have to determine whether the polynomial is identically zero. We improve known results on locally decodable codes and on polynomial identity testing and show a relation between the two notions. In particular we obtain the following results: 1. We show that if E: F^n → F^m is a linear LDC with 2 queries then m = exp(Ω(n)). Previously this was only known for fields of size ≪ 2^n [GKST01]. 2. We show that from every depth-3 arithmetic circuit (ΣΠΣ circuit) C with bounded (constant) top fan-in that computes the zero polynomial, one can construct a locally decodable code. More formally: assume that C is minimal (no subset of the multiplication gates sums to zero) and simple (no linear function appears in all the multiplication gates). Denote by d the degree of the polynomial computed by C and by r the rank of the linear …
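The exponential length in the 2-query lower bound above is in fact achieved: the classical Hadamard code encodes n bits into 2^n parities and admits a 2-query local decoder. A minimal sketch with toy parameters (a single query pair, no majority vote over repeated trials):

```python
from itertools import product

def hadamard_encode(x):
    """Encode an n-bit message as the list of all 2^n parities <x, a> over F_2,
    indexed by a in {0,1}^n in lexicographic order."""
    return [sum(xi & ai for xi, ai in zip(x, a)) % 2
            for a in product([0, 1], repeat=len(x))]

def decode_bit(codeword, i, a):
    """Recover x_i with 2 queries, at positions a and a XOR e_i:
    <x, a> + <x, a XOR e_i> = x_i over F_2. Correct whenever both
    queried positions are uncorrupted."""
    idx = lambda v: int("".join(map(str, v)), 2)
    b = list(a)
    b[i] ^= 1
    return (codeword[idx(a)] + codeword[idx(b)]) % 2

x = [1, 0, 1, 1]
C = hadamard_encode(x)
# with no corruption, any query point a recovers each bit
assert all(decode_bit(C, i, [0, 1, 1, 0]) == x[i] for i in range(4))
```

Choosing the query point a uniformly at random makes each of the two queries individually uniform, which is why the decoder tolerates a constant fraction of corrupted positions.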
A hypercontractive inequality for matrix-valued functions with applications to quantum computing and LDCs
Abstract

Cited by 39 (3 self)
The Bonami-Beckner hypercontractive inequality is a powerful tool in the Fourier analysis of real-valued functions on the Boolean cube. In this paper we present a version of this inequality for matrix-valued functions on the Boolean cube. Its proof is based on a powerful inequality by Ball, Carlen, and Lieb. We also present a number of applications. First, we analyze maps that encode n classical bits into m qubits, in such a way that each set of k bits can be recovered with some probability by an appropriate measurement on the quantum encoding; we show that if m < 0.7n, then the success probability is exponentially small in k. This result may be viewed as a direct product version of Nayak’s quantum random access code bound. It in turn implies strong direct product theorems for the one-way quantum communication complexity of Disjointness and other problems. Second, we prove that error-correcting codes that are locally decodable with 2 queries require length exponential in the length of the encoded string. This gives what is arguably the first “non-quantum” proof of a result originally derived by Kerenidis and de Wolf using quantum information theory.
Approximate list-decoding of direct product …
Abstract

Cited by 36 (11 self)
Given a message msg ∈ {0, 1}^N, its k-wise direct product encoding is the sequence of k-tuples (msg(i1), ..., msg(ik)) over all possible k-tuples of indices (i1, ..., ik) ∈ {1, ..., N}^k. We give an efficient randomized algorithm for approximate local list-decoding of direct product codes. That is, given oracle access to a word which agrees with a k-wise direct product encoding of some message msg ∈ {0, 1}^N in at least an ε ≥ poly(1/k) fraction of positions, our algorithm outputs a list of poly(1/ε) strings that contains at least one string msg′ which is equal to msg in all but at most a k^(−Ω(1)) fraction of positions. The decoding is local in that our algorithm outputs a list of Boolean circuits so that the j-th bit of the i-th output string can be computed by running the i-th circuit on input j. The running time of the algorithm is polynomial in log N and 1/ε. In general, when ε > e^(−k^α) for a sufficiently small constant α > 0, we get a randomized approximate list-decoding algorithm that runs in time quasi-polynomial in 1/ε, i.e., (1/ε)^(poly log 1/ε). As an application of our decoding algorithm, we get uniform hardness amplification for P^(NP||), the class of languages reducible to NP through one round of parallel oracle queries: if there is a language in P^(NP||) that cannot be decided by any BPP algorithm on more than a 1 − 1/n^(Ω(1)) fraction of inputs, then there is another language in P^(NP||) that cannot be decided by any BPP algorithm on more than a 1/2 + 1/n^(ω(1)) fraction of inputs.
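The k-wise direct product encoding defined at the start of the abstract is straightforward to write down directly; a minimal sketch with 0-based indices and toy parameters:

```python
from itertools import product

def direct_product_encode(msg, k):
    """k-wise direct product encoding: one codeword symbol per k-tuple of
    indices (i1, ..., ik), namely the k-tuple (msg[i1], ..., msg[ik]).
    The codeword therefore has N^k positions for an N-bit message."""
    N = len(msg)
    return {t: tuple(msg[i] for i in t)
            for t in product(range(N), repeat=k)}

msg = [1, 0, 1]
code = direct_product_encode(msg, 2)
assert len(code) == 3 ** 2        # N^k codeword positions
assert code[(0, 2)] == (1, 1)     # the symbol at (i1, i2) = (msg[0], msg[2])
```

Note the blow-up from N bits to N^k symbols of k bits each, which is why the decoder's running time being polynomial in log N (rather than N) is the interesting locality property.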
Average-Case Complexity
 in Foundations and Trends in Theoretical Computer Science Volume 2, Issue 1
, 2006
Abstract

Cited by 25 (0 self)
We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy on average with respect to the uniform distribution, then all problems in NP are easy on average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different “degrees” of average-case complexity. We discuss some of these “hardness amplification” results.
List-Decoding Reed-Muller codes over small fields
 IN PROC. 40TH ACM SYMP. ON THEORY OF COMPUTING (STOC’08)
, 2008
Abstract

Cited by 22 (3 self)
We present the first local list-decoding algorithm for the r-th order Reed-Muller code RM(r, m) over F_2 for r ≥ 2. Given an oracle for a received word R: F_2^m → F_2, our randomized local list-decoding algorithm produces a list containing all degree-r polynomials within relative distance (2^(−r) − ε) from R, for any ε > 0, in time poly(m^r, ε^(−r)). The list size could be exponential in m at radius 2^(−r), so our bound is optimal in the local setting. Since RM(r, m) has relative distance 2^(−r), our algorithm beats the Johnson bound for r ≥ 2. In the setting where we are allowed running time polynomial in the block length, we show that list-decoding is possible up to even larger radii, beyond the minimum distance. We give a deterministic list-decoder that works at error rate below J(2^(1−r)), where J(δ) denotes the Johnson radius for minimum distance δ. This shows that RM(2, m) codes are list-decodable up to radius η for any constant η < 1/2 in time polynomial in the block length. Over small fields F_q, we present list-decoding algorithms in both the global and local settings that work up to the list-decoding radius. We conjecture that the list-decoding radius approaches the minimum distance (as over F_2), and prove that this holds true when the degree is divisible by q − 1.
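The claim that the local decoder beats the Johnson bound for r ≥ 2 can be checked numerically. Assuming the standard formula for the Johnson radius of a binary code of relative distance δ, J(δ) = (1 − √(1 − 2δ))/2, a short sketch:

```python
import math

def johnson_radius(delta: float) -> float:
    """Johnson list-decoding radius for a binary code of relative
    distance delta (standard formula, assumed to match the paper's J)."""
    return (1.0 - math.sqrt(1.0 - 2.0 * delta)) / 2.0

# RM(r, m) has relative distance 2^-r; for r >= 2 the local decoder's
# radius 2^-r - eps strictly exceeds the Johnson radius J(2^-r).
for r in range(2, 6):
    delta = 2.0 ** (-r)
    assert johnson_radius(delta) < delta
```

At r = 1 the two coincide (J(1/2) = 1/2), which is consistent with the theorem being stated only for r ≥ 2.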
A Note on Yekhanin’s Locally Decodable Codes
 ELECTRONIC COLLOQUIUM ON COMPUTATIONAL COMPLEXITY, REPORT NO. 16 (2007)
, 2007
Abstract

Cited by 21 (0 self)
Locally Decodable Codes (LDC) support decoding of any particular symbol of the input message by reading a constant number of symbols of the codeword, even in the presence of a constant fraction of errors. In a recent breakthrough [9], Yekhanin constructed 3-query LDCs that hugely improve over earlier constructions. Specifically, for a Mersenne prime p = 2^t − 1, binary LDCs of length exp(O(n^(1/t))) were obtained. Using the largest known Mersenne prime, this implies LDCs of length less than exp(O(n^(10^-7))). Assuming the infinitude of Mersenne primes, the construction yields LDCs of length exp(n^(O(1/log log n))) for infinitely many n. Inspired by [9], we construct 3-query binary LDCs with the same parameters from Mersenne primes. While all the main technical tools are borrowed from [9], we give a self-contained simple construction of LDCs. Our bounds do not improve over [9], and have worse soundness of the decoder. However, the LDCs are simpler and generalize naturally to prime fields other than F_2. The LDCs presented also translate directly into three-server Private Information Retrieval (PIR) protocols with communication complexity O(n^(10^-7)) for a database of size n, starting with the largest known Mersenne prime.
Linear-algebraic list decoding of folded Reed-Solomon codes
 In Proceedings of the 26th IEEE Conference on Computational Complexity
, 2011
Hardness amplification proofs require majority
 In Proceedings of the 40th Annual ACM Symposium on the Theory of Computing (STOC
, 2008
Abstract

Cited by 19 (4 self)
Hardness amplification is the fundamental task of converting a δ-hard function f: {0, 1}^n → {0, 1} into a (1/2 − ε)-hard function Amp(f), where f is γ-hard if small circuits fail to compute f on at least a γ fraction of the inputs. Typically, ε, δ are small (and δ = 2^(−k) captures the case where f is worst-case hard). Achieving ε = 1/n^(ω(1)) is a prerequisite for cryptography and most pseudorandom-generator constructions. In this paper we study the complexity of black-box proofs of hardness amplification. A class of circuits D proves a hardness amplification result if for any function h that agrees with Amp(f) on a 1/2 + ε fraction of the inputs there exists an oracle circuit D ∈ D such that D^h agrees with f on a 1 − δ fraction of the inputs. We focus on the case where every D ∈ D makes non-adaptive queries to h. This setting captures most hardness amplification techniques. We prove two main results: 1. The circuits in D “can be used” to compute the majority function on 1/ε bits; in particular, these circuits have large depth when ε ≤ 1/poly log n. 2. The circuits in D must make Ω(log(1/δ)/ε²) oracle queries. Both our bounds, on the depth and on the number of queries, are tight up to constant factors.
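The role of majority can be seen in a small exact computation: to recover a bit from answers that are each correct with probability only 1/2 + ε, a decoder takes a majority vote, and the exact binomial tail shows the vote becomes reliable only at roughly 1/ε² queries (the particular query counts below are illustrative, not from the paper):

```python
from math import comb

def majority_success(q: int, p: float) -> float:
    """Exact probability that a majority of q independent answers,
    each correct with probability p, recovers the bit (q odd)."""
    return sum(comb(q, j) * p**j * (1 - p)**(q - j)
               for j in range(q // 2 + 1, q + 1))

eps = 0.1
few = majority_success(5, 0.5 + eps)     # far fewer than 1/eps^2 queries: unreliable
many = majority_success(401, 0.5 + eps)  # about 4/eps^2 queries: near-certain
assert few < 0.7 < many
```

This matches the flavor of both lower bounds above: the decoder effectively computes majority on order-1/ε inputs, and needs on the order of 1/ε² queries before that majority is trustworthy.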