Results 11–20 of 64
New constructions for query-efficient locally decodable codes of subexponential length
 IEICE Transactions on Information and Systems
"... is an errorcorrecting code that encodes each message ⃗x = (x1, x2,...,xn) ∈ Fn q to a codeword C(⃗x) ∈ FNq and has the following property: For any ⃗y ∈ FN q such that d(⃗y, C(⃗x)) ≤ δN and each 1 ≤ i ≤ n, the symbol xi of ⃗x can be recovered with probability at least 1−ε by a randomized decoding ..."
Abstract

Cited by 16 (0 self)
A (k, δ, ε)-locally decodable code C: F_q^n → F_q^N is an error-correcting code that encodes each message x = (x_1, x_2, ..., x_n) ∈ F_q^n to a codeword C(x) ∈ F_q^N and has the following property: for any y ∈ F_q^N such that d(y, C(x)) ≤ δN and each 1 ≤ i ≤ n, the symbol x_i of x can be recovered with probability at least 1 − ε by a randomized decoding algorithm that looks at only k coordinates of y. The efficiency of a (k, δ, ε)-locally decodable code C: F_q^n → F_q^N is measured by the code length N and the number k of queries. For any k-query locally decodable code C: F_q^n → F_q^N, the code length N was conjectured to be exponential in n, i.e., N = exp(n^Ω(1)); however, this was disproved. Yekhanin [In Proc. of STOC, 2007] showed that there exists a 3-query locally decodable code C: F_2^n → F_2^N such that N = exp(n^(1/log log n)), assuming that the number of Mersenne primes is infinite. For a 3-query locally decodable code C: F_q^n → F_q^N, Efremenko [ECCC Report No. 69, 2008] reduced the code length further to N = exp(n^O((log log n / log n)^(1/2))), and also showed that for any integer r > 1, there exists a k-query locally decodable code C: F_q^n → F_q^N such that k ≤ 2^r and N = exp(n^O((log log n / log n)^(1−1/r))). In this paper, we present a query-efficient locally decodable code by introducing a technique of "composition of locally decodable codes," and show that for any integer r > 1, there exists a k-query locally decodable code C: F_q^n → F_q^N such that k ≤ 3 · 2^(r−2) and N = exp(n^O((log log n / log n)^(1−1/r))). Keywords: Locally Decodable Codes, S-Matching Vectors, S-Decoding Polynomials, Composition of Locally Decodable Codes, Perfectly Smooth Decoders, Private Information Retrieval.
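The local-recovery property defined in this abstract can be made concrete with the classic 2-query Hadamard code over F_2, the standard textbook example of a locally decodable code. This is an illustrative sketch, not code from the paper:

```python
import random

def hadamard_encode(x):
    """Encode an n-bit message as the list of inner products <x, a> mod 2,
    one for each a in {0,1}^n, giving code length N = 2^n."""
    n = len(x)
    return [sum(x[j] * ((a >> j) & 1) for j in range(n)) % 2
            for a in range(2 ** n)]

def local_decode(y, i, n):
    """2-query local decoder for bit i: pick a uniformly random a and
    output y[a] + y[a XOR e_i] (mod 2).  On an uncorrupted codeword this
    is always x_i, since the two inner products differ exactly by x_i."""
    a = random.randrange(2 ** n)
    return (y[a] + y[a ^ (1 << i)]) % 2
```

Each of the two queried positions is individually uniform over the codeword, so each hits a corrupted coordinate with probability at most δ; a union bound gives success probability at least 1 − 2δ, matching the 1 − ε guarantee above. The catch, and the point of the abstracts on this page, is that here N = 2^n is exponential in n.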
Verifying and decoding in constant depth
 In Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing
, 2007
"... We develop a general approach for improving the efficiency of a computationally bounded receiver interacting with a powerful and possibly malicious sender. The key idea we use is that of delegating some of the receiver’s computation to the (potentially malicious) sender. This idea was recently intro ..."
Abstract

Cited by 15 (4 self)
We develop a general approach for improving the efficiency of a computationally bounded receiver interacting with a powerful and possibly malicious sender. The key idea we use is that of delegating some of the receiver's computation to the (potentially malicious) sender. This idea was recently introduced by Goldwasser et al. [14] in the area of program checking. A classic example of such a sender-receiver setting is interactive proof systems. By taking the sender to be a (potentially malicious) prover and the receiver to be a verifier, we show that (p-prover) interactive proofs with k rounds of interaction are equivalent to (p-prover) interactive proofs with k + O(1) rounds, where the verifier is in NC^0. That is, each round of the verifier's computation can be implemented in constant parallel time. As a corollary, we obtain interactive proof systems, with (optimally) constant soundness, for languages in AM and NEXP, where the verifier runs in constant parallel time. Another, less immediate sender-receiver setting arises in considering error-correcting codes. By taking the sender to be a (potentially corrupted) codeword and the receiver to be a decoder, we obtain explicit families of codes that are locally (list-)decodable by constant-depth circuits of size polylogarithmic in the length of the codeword. Using the tight connection between locally list-decodable codes and average-case complexity, we obtain a new, more efficient, worst-case to average-case reduction for languages in EXP.
Quantum Proofs for Classical Theorems
, 2009
"... Alongside the development of quantum algorithms and quantum complexity theory in recent years, quantum techniques have also proved instrumental in obtaining results in classical (nonquantum) areas. In this paper we survey these results and the quantum toolbox they use. ..."
Abstract

Cited by 15 (4 self)
Alongside the development of quantum algorithms and quantum complexity theory in recent years, quantum techniques have also proved instrumental in obtaining results in classical (non-quantum) areas. In this paper we survey these results and the quantum toolbox they use.
Algorithmic results in list decoding
 In Foundations and Trends in Theoretical Computer Science (FnTTCS)
"... Errorcorrecting codes are used to cope with the corruption of data by noise during communication or storage. A code uses an encoding procedure that judiciously introduces redundancy into the data to produce an associated codeword. The redundancy built into the codewords enables one to decode the or ..."
Abstract

Cited by 15 (3 self)
Error-correcting codes are used to cope with the corruption of data by noise during communication or storage. A code uses an encoding procedure that judiciously introduces redundancy into the data to produce an associated codeword. The redundancy built into the codewords enables one to decode the original data even from a somewhat distorted version of the codeword. The central tradeoff in coding theory is the one between the data rate (amount of non-redundant information per bit of codeword) and the error rate (the fraction of symbols that could be corrupted while still enabling data recovery). The traditional decoding algorithms did as badly at correcting any error pattern as they would do for the worst possible error pattern. This severely limited the maximum fraction of errors those algorithms could tolerate. In turn, this was the source of a big hiatus between the error-correction performance known for probabilistic noise models (pioneered by Shannon) and what was thought to be the limit for the more powerful, worst-case noise models (suggested by Hamming). In the last decade or so, there has been much algorithmic progress in coding theory that has bridged this gap (and in fact nearly eliminated it for codes over large alphabets). These developments rely on an error-recovery model called "list decoding," wherein for the pathological error patterns, the decoder is permitted to output a small list of candidates that will include the original message. This book introduces and motivates the problem of list decoding, and discusses the central algorithmic results of the subject, culminating with the recent results on achieving "list decoding capacity."
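The list-decoding model described in this abstract can be illustrated with a brute-force decoder: given a received word and a radius, return every message whose codeword lies within that radius. This sketch is purely illustrative (exponential time; the book's algorithms are far more efficient), and the `encode` callback is a placeholder for any block code:

```python
from itertools import product

def list_decode(received, encode, n, rho):
    """Brute-force list decoder: return every n-bit message whose
    codeword is within fractional Hamming distance rho of `received`.
    Beyond the unique-decoding radius, the result may contain more
    than one candidate -- that is the point of the model."""
    N = len(received)
    out = []
    for bits in product([0, 1], repeat=n):
        cw = encode(list(bits))
        if sum(a != b for a, b in zip(cw, received)) <= rho * N:
            out.append(list(bits))
    return out
```

For example, with the 3-fold repetition code `encode = lambda x: x * 3` on 2-bit messages, a small radius yields a unique answer, while a radius past the unique-decoding bound yields a short list of candidates containing the true message.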
QUERY-EFFICIENT LOCALLY DECODABLE CODES OF SUBEXPONENTIAL LENGTH
, 2013
"... A kquery locally decodable code (LDC) C: Σn → ΓN encodes each message x into a codeword C(x) such that each symbol of x can be probabilistically recovered by querying only k coordinates of C(x), even after a constant fraction of the coordinates has been corrupted. Yekhanin (in J ACM 55:1–16, 2008 ..."
Abstract

Cited by 15 (2 self)
A k-query locally decodable code (LDC) C: Σ^n → Γ^N encodes each message x into a codeword C(x) such that each symbol of x can be probabilistically recovered by querying only k coordinates of C(x), even after a constant fraction of the coordinates has been corrupted. Yekhanin (in J ACM 55:1–16, 2008) constructed a 3-query LDC of subexponential length, N = exp(exp(O(log n / log log n))), under the assumption that there are infinitely many Mersenne primes. Efremenko (in Proceedings of the 41st annual ACM symposium on theory of computing, ACM, New York, 2009) constructed a 3-query LDC of length N_2 = exp(exp(O((log n · log log n)^(1/2)))) with no assumption, and a 2^r-query LDC of length N_r = exp(exp(O((log n · (log log n)^(r−1))^(1/r)))) for every integer r ≥ 2. Itoh and Suzuki (in IEICE Trans Inform Syst E93-D(2):263–270, 2010) gave a composition method in Efremenko's framework and
Rank bounds for design matrices with applications to combinatorial geometry and locally correctable codes
 Proc. of the 43rd annual STOC, ACM Press
, 2011
"... A (q, k, t)design matrix is an m × n matrix whose pattern of zeros/nonzeros satisfies the following designlike condition: each row has at most q nonzeros, each column has at least k nonzeros and the supports of every two columns intersect in at most t rows. We prove that for m ≥ n, the rank of ..."
Abstract

Cited by 14 (8 self)
A (q, k, t)-design matrix is an m × n matrix whose pattern of zeros/non-zeros satisfies the following design-like condition: each row has at most q non-zeros, each column has at least k non-zeros, and the supports of every two columns intersect in at most t rows. We prove that for m ≥ n, the rank of any (q, k, t)-design matrix over a field of characteristic zero (or sufficiently large finite characteristic) is at least n − (qtn/(2k))^2. Using this result we derive the following applications: Impossibility results for 2-query LCCs over large fields. A 2-query locally correctable code (LCC) is an error-correcting code in which every codeword coordinate can be recovered, probabilistically, by reading at most two other code positions. Such codes have numerous applications and constructions (with exponential encoding length) are
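The design-like condition on the zero/non-zero pattern is purely combinatorial and easy to state in code. The following checker is an illustrative sketch, not part of the paper:

```python
def is_design_matrix(M, q, k, t):
    """Check the (q, k, t)-design condition on the zero/non-zero pattern
    of a matrix M (given as a list of rows): each row has at most q
    non-zeros, each column has at least k non-zeros, and the supports of
    any two columns intersect in at most t rows."""
    m, n = len(M), len(M[0])
    supports = [{i for i in range(m) if M[i][j] != 0} for j in range(n)]
    if any(sum(v != 0 for v in row) > q for row in M):
        return False
    if any(len(s) < k for s in supports):
        return False
    return all(len(supports[j1] & supports[j2]) <= t
               for j1 in range(n) for j2 in range(j1 + 1, n))
```

For instance, the n × n identity matrix is a (1, 1, 0)-design matrix, while the all-ones matrix fails any condition with small t because every pair of column supports intersects in all m rows.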
A Quadratic Lower Bound for Three-Query Linear Locally Decodable Codes over Any Field
 In Proceedings of RANDOM 2010
"... Abstract. A linear (q, δ, ɛ, m(n))locally decodable code (LDC) C: F n → F m(n) is a linear transformation from the vector space F n to the space F m(n) for which each message symbol xi can be recovered with probability at least 1 F + ɛ from C(x) by a randomized algorithm that queries only q posit ..."
Abstract

Cited by 9 (0 self)
Abstract. A linear (q, δ, ɛ, m(n))-locally decodable code (LDC) C: F^n → F^m(n) is a linear transformation from the vector space F^n to the space F^m(n) for which each message symbol x_i can be recovered with probability at least 1/|F| + ɛ from C(x) by a randomized algorithm that queries only q positions of C(x), even if up to δm(n) positions of C(x) are corrupted. In a recent work of Dvir, the author shows that lower bounds for linear LDCs can imply lower bounds for arithmetic circuits. He suggests that proving lower bounds for LDCs over the complex or real field is a good starting point for approaching one of his conjectures. Our main result is an m(n) = Ω(n^2) lower bound for linear 3-query LDCs over any, possibly infinite, field. The constant in the Ω(·) depends only on ε and δ. This is the first lower bound better than the trivial m(n) = Ω(n) for arbitrary fields and more than two queries.
Corruption- and Recovery-Efficient Locally Decodable Codes
"... Abstract. A (q, δ, ɛ)locally decodable code (LDC) C: {0, 1} n → {0, 1} m is an encoding from nbit strings to mbit strings such that each bit xk can be recovered with probability at least 1 + ɛ from C(x) by a random2 ized algorithm that queries only q positions of C(x), even if up to δm positions ..."
Abstract

Cited by 8 (1 self)
Abstract. A (q, δ, ɛ)-locally decodable code (LDC) C: {0, 1}^n → {0, 1}^m is an encoding from n-bit strings to m-bit strings such that each bit x_k can be recovered with probability at least 1/2 + ɛ from C(x) by a randomized algorithm that queries only q positions of C(x), even if up to δm positions of C(x) are corrupted. If C is a linear map, then the LDC is linear. We give improved constructions of LDCs in terms of the corruption parameter δ and recovery parameter ɛ. The key property of our LDCs is that they are non-linear, whereas all previous LDCs were linear. 1. For any δ, ɛ ∈ [Ω(n^(−1/2)), O(1)], we give a family of (2, δ, ɛ)-LDCs with length m = poly(δ^(−1), ɛ^(−1)) exp(max(δ, ɛ)δn). For linear (2, δ, ɛ)-LDCs, Obata has shown that m ≥ exp(δn). Thus, for small enough constants δ, ɛ, two-query non-linear LDCs are shorter than two-query linear LDCs. 2. We improve the dependence on δ and ɛ of all constant-query LDCs by providing general transformations to non-linear LDCs. Taking Yekhanin's linear (3, δ, 1/2 − 6δ)-LDCs with m = exp(n^(1/t)) for any prime of the form 2^t − 1, we obtain non-linear (3, δ, ɛ)-LDCs with m = poly(δ^(−1), ɛ^(−1)) exp((max(δ, ɛ)δn)^(1/t)). Now consider a (q, δ, ɛ)-LDC C with a decoder that has n matchings M_1, ..., M_n on the complete q-uniform hypergraph, whose vertices are identified with the positions of C(x). On input k ∈ [n] and received word y, the decoder chooses e = {a_1, ..., a_q} ∈ M_k uniformly at random and outputs the sum y_{a_1} + ... + y_{a_q} (mod 2). All known LDCs and ours have such a decoder, which we call a matching sum decoder. We show that if C is a two-query LDC with such a decoder, then m ≥ exp(max(δ, ɛ)δn). Interestingly, our techniques used here can further improve the dependence on δ of Yekhanin's three-query LDCs. Namely, if δ ≥ 1/12 then Yekhanin's three-query LDCs become trivial (have recovery probability less than half), whereas we obtain three-query LDCs of length exp(n^(1/t)) for any prime of the form 2^t − 1 with non-trivial recovery probability for any δ < 1/6.
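The matching sum decoder described at the end of this abstract can be sketched generically; this is an illustrative sketch, not the authors' code. The Hadamard decoder, where the i-th matching pairs each position a with a ⊕ e_i, is the simplest instance:

```python
import random

def matching_sum_decode(y, matchings, k):
    """Matching sum decoder: to recover bit k, choose a uniformly random
    hyperedge {a_1, ..., a_q} from the k-th matching M_k and output the
    sum (mod 2) of the q queried positions of the received word y."""
    edge = random.choice(matchings[k])
    return sum(y[a] for a in edge) % 2
```

For the Hadamard code on n-bit messages, taking `matchings[i] = [(a, a ^ (1 << i)) for a in range(2 ** n)]` recovers x_i exactly on an uncorrupted codeword, since positions a and a ⊕ e_i hold inner products differing by exactly x_i.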