Results 1–10 of 32
Locally decodable codes with 2 queries and polynomial identity testing for depth 3 circuits
SIAM J. Comput., 2007
"... In this work we study two, seemingly unrelated, notions. Locally decodable codes (LDCs) are codes that allow the recovery of each message bit from a constant number of entries of the codeword. Polynomial identity testing (PIT) is one of the fundamental problems of algebraic complexity: we are given ..."
Abstract

Cited by 26 (7 self)
In this work we study two, seemingly unrelated, notions. Locally decodable codes (LDCs) are codes that allow the recovery of each message bit from a constant number of entries of the codeword. Polynomial identity testing (PIT) is one of the fundamental problems of algebraic complexity: we are given a circuit computing a multivariate polynomial and we have to determine whether the polynomial is identically zero. We improve known results on LDCs and on polynomial identity testing and show a relation between the two notions. In particular we obtain the following results: (1) We show that if E: F^n → F^m is a linear LDC with two queries, then m = exp(Ω(n)). Previously this was known only for fields of size ≪ 2^n [O. Goldreich et al., Comput. Complexity, 15 (2006), pp. 263–296]. (2) We show that from every depth-3 arithmetic circuit (ΣΠΣ circuit) C with a bounded (constant) top fanin that computes the zero polynomial, one can construct an LDC. More formally, assume that C is minimal (no subset of the multiplication gates sums to zero) and simple (no linear function appears in all the multiplication gates). Denote by d the degree of the polynomial computed by C and by r the rank of the linear functions appearing in C. Then we can construct a linear LDC with two queries that encodes messages of length r/polylog(d) by codewords of length O(d). (3) We prove a structural theorem for ΣΠΣ circuits with a bounded top fanin that compute the zero polynomial.
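A sketch, not from the paper: the classical Hadamard code is the standard example of a 2-query linear LDC over GF(2), and its length 2^n shows that the exp(Ω(n)) lower bound in result (1) is tight. A minimal Python illustration (helper names are ours):

```python
import random

def bits(a, n):
    """n-bit binary expansion of a, least-significant bit first."""
    return [(a >> j) & 1 for j in range(n)]

def hadamard_encode(x):
    """Position a of the codeword holds the inner product <x, a> over GF(2),
    so an n-bit message becomes a 2^n-bit codeword."""
    n = len(x)
    return [sum(b & c for b, c in zip(x, bits(a, n))) % 2 for a in range(2 ** n)]

def decode_bit(codeword, n, i, rng=random):
    """Recover x_i from 2 queries, using <x, a> + <x, a XOR e_i> = x_i over GF(2).
    Since a is uniform, each queried position is individually uniform, so a
    delta fraction of corruptions is hit with probability at most 2*delta."""
    a = rng.randrange(2 ** n)
    return (codeword[a] + codeword[a ^ (1 << i)]) % 2
```

On an uncorrupted codeword the decoder is always correct; with a δ fraction of errors, each invocation succeeds with probability at least 1 − 2δ.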
Approximately list-decoding direct product codes and uniform hardness amplification
In Proceedings of the Forty-Seventh Annual IEEE Symposium on Foundations of Computer Science, 2006
"... We consider the problem of approximately locally listdecoding direct product codes. For a parameter k, the kwise direct product encoding of an Nbit message msg is an N klength string over the alphabet {0, 1} k indexed by ktuples (i1,..., ik) ∈ {1,..., N} k so that the symbol at position (i1,..., ..."
Abstract

Cited by 24 (6 self)
We consider the problem of approximately locally list-decoding direct product codes. For a parameter k, the k-wise direct product encoding of an N-bit message msg is an N^k-length string over the alphabet {0, 1}^k, indexed by k-tuples (i1,..., ik) ∈ {1,..., N}^k, so that the symbol at position (i1,..., ik) of the codeword is msg(i1)...msg(ik). Such codes arise naturally in the context of hardness amplification of Boolean functions via the Direct Product Lemma (and the closely related Yao’s XOR Lemma), where typically k ≪ N (e.g., k = poly log N). We describe an efficient randomized algorithm for approximate local list-decoding of direct product codes. Given access to a word which agrees with the k-wise direct product encoding of some message msg in at least an ε fraction of positions, our algorithm outputs a list of poly(1/ε) Boolean circuits computing N-bit strings (viewed as truth tables of log N-variable Boolean functions) such that at least one of them agrees with msg in at least a 1 − δ fraction of positions, for δ = O(k^{−0.1}), provided that ε = Ω(poly(1/k)); the running time of the algorithm is polynomial in log N and 1/ε. When ε > e^{−k^α}…
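The k-wise direct product encoding defined above is simple to write down. A minimal sketch (dict-based, for tiny N and k only, since the codeword has N^k symbols; function names are ours):

```python
from itertools import product

def direct_product_encode(msg, k):
    """k-wise direct product code: the codeword is indexed by k-tuples
    (i1, ..., ik) over {0, ..., N-1}, and the symbol at that position is
    the k-bit string (msg[i1], ..., msg[ik])."""
    N = len(msg)
    return {tup: tuple(msg[i] for i in tup)
            for tup in product(range(N), repeat=k)}
```

As in the abstract, the resulting codeword has N^k positions over the alphabet {0, 1}^k.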
A hypercontractive inequality for matrix-valued functions with applications to quantum computing and LDCs
"... The BonamiBeckner hypercontractive inequality is a powerful tool in Fourier analysis of realvalued functions on the Boolean cube. In this paper we present a version of this inequality for matrixvalued functions on the Boolean cube. Its proof is based on a powerful inequality by Ball, Carlen, and ..."
Abstract

Cited by 19 (2 self)
The Bonami-Beckner hypercontractive inequality is a powerful tool in the Fourier analysis of real-valued functions on the Boolean cube. In this paper we present a version of this inequality for matrix-valued functions on the Boolean cube. Its proof is based on a powerful inequality by Ball, Carlen, and Lieb. We also present a number of applications. First, we analyze maps that encode n classical bits into m qubits, in such a way that each set of k bits can be recovered with some probability by an appropriate measurement on the quantum encoding; we show that if m < 0.7n, then the success probability is exponentially small in k. This result may be viewed as a direct product version of Nayak’s quantum random access code bound. It in turn implies strong direct product theorems for the one-way quantum communication complexity of Disjointness and other problems. Second, we prove that error-correcting codes that are locally decodable with 2 queries require length exponential in the length of the encoded string. This gives what is arguably the first “non-quantum” proof of a result originally derived by Kerenidis and de Wolf using quantum information theory.
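For reference (our addition, not from the paper), the scalar Bonami-Beckner inequality that the paper generalizes can be stated in its Fourier-analytic (2, p) form, for f : {−1, 1}^n → ℝ and 1 ≤ p ≤ 2:

```latex
% Scalar Bonami--Beckner, (2,p) form, for 1 \le p \le 2:
\Big( \sum_{S \subseteq [n]} (p-1)^{|S|} \, \hat{f}(S)^2 \Big)^{1/2}
\;\le\; \|f\|_p
\;=\; \Big( \frac{1}{2^n} \sum_{x \in \{-1,1\}^n} |f(x)|^p \Big)^{1/p}.
```

The matrix-valued version of the paper replaces the Fourier coefficients by matrices, with Schatten norms in place of absolute values.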
Average-Case Complexity
In Foundations and Trends in Theoretical Computer Science, Volume 2, Issue 1, 2006
"... We survey the averagecase complexity of problems in NP. We discuss various notions of goodonaverage algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easyonav ..."
Abstract

Cited by 16 (0 self)
We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different “degrees” of average-case complexity. We discuss some of these “hardness amplification” results.
Verifying and decoding in constant depth
In Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, 2007
"... We develop a general approach for improving the efficiency of a computationally bounded receiver interacting with a powerful and possibly malicious sender. The key idea we use is that of delegating some of the receiver’s computation to the (potentially malicious) sender. This idea was recently intro ..."
Abstract

Cited by 13 (3 self)
We develop a general approach for improving the efficiency of a computationally bounded receiver interacting with a powerful and possibly malicious sender. The key idea we use is that of delegating some of the receiver’s computation to the (potentially malicious) sender. This idea was recently introduced by Goldwasser et al. [14] in the area of program checking. A classic example of such a sender-receiver setting is interactive proof systems. By taking the sender to be a (potentially malicious) prover and the receiver to be a verifier, we show that (p-prover) interactive proofs with k rounds of interaction are equivalent to (p-prover) interactive proofs with k + O(1) rounds where the verifier is in NC^0. That is, each round of the verifier’s computation can be implemented in constant parallel time. As a corollary, we obtain interactive proof systems, with (optimally) constant soundness, for languages in AM and NEXP, where the verifier runs in constant parallel time. Another, less immediate, sender-receiver setting arises in considering error-correcting codes. By taking the sender to be a (potentially corrupted) codeword and the receiver to be a decoder, we obtain explicit families of codes that are locally (list-)decodable by constant-depth circuits of size polylogarithmic in the length of the codeword. Using the tight connection between locally list-decodable codes and average-case complexity, we obtain a new, more efficient, worst-case to average-case reduction for languages in EXP.
A Note on Yekhanin’s Locally Decodable Codes
Electronic Colloquium on Computational Complexity, Report No. 16, 2007
"... Locally Decodable codes(LDC) support decoding of any particular symbol of the input message by reading constant number of symbols of the codeword, even in presence of constant fraction of errors. In a recent breakthrough [9], Yekhanin constructedquery LDCs that hugely improve over earlier construct ..."
Abstract

Cited by 11 (0 self)
Locally Decodable Codes (LDCs) support decoding of any particular symbol of the input message by reading a constant number of symbols of the codeword, even in the presence of a constant fraction of errors. In a recent breakthrough [9], Yekhanin constructed 3-query LDCs that hugely improve over earlier constructions. Specifically, for every Mersenne prime p = 2^t − 1, binary LDCs of length exp(n^{1/t}), for infinitely many message lengths n, were obtained. Using the largest known Mersenne prime, this implies LDCs of length less than exp(n^{10^−7}); assuming the infinitude of Mersenne primes, the construction yields LDCs of subexponential length for infinitely many n. Inspired by [9], we construct 3-query binary LDCs with the same parameters from Mersenne primes. While all the main technical tools are borrowed from [9], we give a self-contained simple construction of LDCs. Our bounds do not improve over [9], and have worse soundness of the decoder. However, the LDCs are simpler and generalize naturally to other prime fields. The LDCs presented also translate directly into three-server Private Information Retrieval (PIR) protocols with the corresponding communication complexities for a database of size n, starting with a Mersenne prime.
Hardness amplification proofs require majority
In Proceedings of the 40th Annual ACM Symposium on the Theory of Computing (STOC), 2008
"... Hardness amplification is the fundamental task of converting a δhard function f: {0, 1} n → {0, 1} into a (1/2 − ɛ)hard function Amp(f), where f is γhard if small circuits fail to compute f on at least a γ fraction of the inputs. Typically, ɛ, δ are small (and δ = 2 −k captures the case where f i ..."
Abstract

Cited by 10 (4 self)
Hardness amplification is the fundamental task of converting a δ-hard function f: {0, 1}^n → {0, 1} into a (1/2 − ε)-hard function Amp(f), where f is γ-hard if small circuits fail to compute f on at least a γ fraction of the inputs. Typically, ε and δ are small (and δ = 2^{−k} captures the case where f is worst-case hard). Achieving ε = 1/n^{ω(1)} is a prerequisite for cryptography and most pseudorandom-generator constructions. In this paper we study the complexity of black-box proofs of hardness amplification. A class of circuits D proves a hardness amplification result if for any function h that agrees with Amp(f) on a 1/2 + ε fraction of the inputs there exists an oracle circuit D ∈ D such that D^h agrees with f on a 1 − δ fraction of the inputs. We focus on the case where every D ∈ D makes nonadaptive queries to h. This setting captures most hardness amplification techniques. We prove two main results: 1. The circuits in D “can be used” to compute the majority function on 1/ε bits. In particular, these circuits have large depth when ε ≤ 1/poly log n. 2. The circuits in D must make Ω(log(1/δ)/ε²) oracle queries. Both our bounds on the depth and on the number of queries are tight up to constant factors.
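The abstract treats Amp abstractly; Yao’s XOR lemma gives the canonical instantiation, Amp(f)(x₁, …, x_k) = f(x₁) ⊕ ⋯ ⊕ f(x_k). A minimal sketch of that encoding (the functional representation is ours):

```python
from functools import reduce

def xor_amplify(f, k):
    """Yao-style XOR amplification: the amplified function evaluates f on a
    k-tuple of inputs and XORs the results. Intuitively, a predictor's
    advantage over random guessing decays as per-copy advantages multiply."""
    def amp(xs):
        assert len(xs) == k
        return reduce(lambda a, b: a ^ b, (f(x) for x in xs))
    return amp
```

For example, amplifying the least-significant-bit function over 3-tuples XORs three bits together.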
Error-Correcting Data Structures
, 2008
"... We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This model is the comm ..."
Abstract

Cited by 8 (3 self)
We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This model is the common generalization of (static) data structures and locally decodable error-correcting codes. The main issue is the tradeoff between the space used by the data structure and the time (number of probes) needed to answer a query about the encoded object. We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of error-correcting data structures for the Membership problem (where we want to store subsets of size s from a universe of size n) is closely related to the optimal length of locally decodable codes for s-bit strings.
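As a toy illustration of the model (our sketch, not a construction from the paper): the naive error-correcting data structure for Membership repeats the characteristic vector several times and answers each query by majority vote over a few probes; the interesting question is how much better one can do in the space/probe tradeoff.

```python
from collections import Counter

REPS = 5  # number of copies; tolerates fewer than REPS/2 corrupted probes per query

def encode_set(S, n):
    """Store S as REPS concatenated copies of its n-bit characteristic vector."""
    chi = [1 if i in S else 0 for i in range(n)]
    return chi * REPS

def member(ds, n, i):
    """Answer 'is i in S?' with REPS probes, by majority vote over the copies."""
    votes = [ds[r * n + i] for r in range(REPS)]
    return Counter(votes).most_common(1)[0][0] == 1
```

This uses REPS·n bits and REPS probes per query; the paper studies how close one can get to n bits with a constant number of probes.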
Algorithmic results in list decoding
In Foundations and Trends in Theoretical Computer Science (FnTTCS)
"... Errorcorrecting codes are used to cope with the corruption of data by noise during communication or storage. A code uses an encoding procedure that judiciously introduces redundancy into the data to produce an associated codeword. The redundancy built into the codewords enables one to decode the or ..."
Abstract

Cited by 7 (2 self)
Error-correcting codes are used to cope with the corruption of data by noise during communication or storage. A code uses an encoding procedure that judiciously introduces redundancy into the data to produce an associated codeword. The redundancy built into the codewords enables one to decode the original data even from a somewhat distorted version of the codeword. The central tradeoff in coding theory is the one between the data rate (amount of nonredundant information per bit of codeword) and the error rate (the fraction of symbols that could be corrupted while still enabling data recovery). Traditional decoding algorithms did as badly at correcting any error pattern as they would for the worst possible error pattern. This severely limited the maximum fraction of errors those algorithms could tolerate, and in turn was the source of a big gap between the error-correction performance known for probabilistic noise models (pioneered by Shannon) and what was thought to be the limit for the more powerful worst-case noise models (suggested by Hamming). In the last decade or so, there has been much algorithmic progress in coding theory that has bridged this gap (and in fact nearly eliminated it for codes over large alphabets). These developments rely on an error-recovery model called “list decoding,” wherein for the pathological error patterns the decoder is permitted to output a small list of candidates that will include the original message. This book introduces and motivates the problem of list decoding, and discusses the central algorithmic results of the subject, culminating with the recent results on achieving “list decoding capacity.”