Results 1–10 of 20
Extractor Codes
, 2001
Abstract

Cited by 42 (6 self)
We define new error-correcting codes based on extractors. We show that for certain choices of parameters these codes have better list decoding properties than are known for other codes, and are provably better than Reed-Solomon codes. We further show that codes with strong list decoding properties are equivalent to slice extractors, a variant of extractors. We give an application of extractor codes to extracting many hard-core bits from a one-way function, using few auxiliary random bits. Finally, we show that explicit slice extractors for certain other parameters would yield optimal bipartite Ramsey graphs.
Combinatorial Bounds for List Decoding
 IEEE Transactions on Information Theory
, 2000
Abstract

Cited by 36 (22 self)
Informally, an error-correcting code has "nice" list-decodability properties if every Hamming ball of "large" radius has a "small" number of codewords in it. Here, we report linear codes with nontrivial list-decodability: i.e., codes of large rate that are nicely list-decodable, and codes of large distance that are not nicely list-decodable. Specifically, on the positive side, we show that there exist codes of rate R and block length n that have at most c codewords in every Hamming ball of radius H^{-1}(1 − R − 1/c)·n, where H(·) is the binary entropy function. This answers the main open question from the work of Elias [8]. This result also has consequences for the construction of concatenated codes of good rate that are list decodable from a large fraction of errors, improving previous results of [13] in this vein. Specifically, for every ε > 0, we present a polynomial time constructible asymptotically good family of binary codes that can be list decoded in polynomial time from up to a fraction (1/2 − ε) of errors, using lists of size O(ε^{−2}). On the negative side, we show that for every δ and c, there exist τ < δ, c₁ > 0, and an infinite family of linear codes {C_i}_i such that if n_i denotes the block length of C_i, then C_i has minimum distance at least δ·n_i and contains more than c₁·n_i^c codewords in some Hamming ball of radius τ·n_i. While this result is still far from known bounds on the list-decodability of linear codes, it is the first to bound the "radius for list-decodability by a polynomial-sized list" away from the minimum distance of the code.
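The positive-side radius bound H^{-1}(1 − R − 1/c)·n is easy to evaluate numerically. A minimal sketch, inverting the binary entropy function by bisection; the parameter values R = 0.5, c = 10, n = 1000 are illustrative choices, not from the paper:

```python
from math import log2

def h(x: float) -> float:
    """Binary entropy function H(x) = -x log2 x - (1-x) log2 (1-x)."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def h_inverse(y: float, tol: float = 1e-12) -> float:
    """Invert H on [0, 1/2] by bisection (H is strictly increasing there)."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Radius bound from the abstract: H^{-1}(1 - R - 1/c) * n.
R, c, n = 0.5, 10, 1000  # illustrative, not from the paper
radius = h_inverse(1 - R - 1 / c) * n
print(radius)
```

For these values the guaranteed radius is a bit under 8% of the block length, which shows how quickly the bound tightens as the rate grows.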
Low-degree tests at large distances
 In Proceedings of the 39th Annual ACM Symposium on Theory of Computing
, 2007
Abstract

Cited by 36 (2 self)
We define tests of boolean functions which distinguish between linear (or quadratic) polynomials, and functions which are very far, in an appropriate sense, from these polynomials. The tests have optimal or nearly optimal tradeoffs between soundness and the number of queries. In particular, we show that functions with small Gowers uniformity norms behave "randomly" with respect to hypergraph linearity tests. A central step in our analysis of quadraticity tests is the proof of an inverse theorem for the third Gowers uniformity norm of boolean functions. The last result also has a coding theory application: it is possible to efficiently estimate the distance from the second-order Reed-Muller code on inputs lying far beyond its list-decoding radius.
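For concreteness, the Gowers U³ norm mentioned above can be computed by brute force on tiny domains. ±1-valued quadratic phase functions attain the maximum value 1, which is exactly the structure the inverse theorem characterizes, while genuinely cubic phases fall below 1. A minimal exponential-time sketch (for illustration only, nothing from the paper):

```python
from itertools import product

def u3_norm(f: dict, n: int) -> float:
    """Brute-force Gowers U^3 norm of f: {0,1}^n -> {+1,-1}.

    ||f||_{U^3}^8 = E_{x,h1,h2,h3} prod over S subset of {1,2,3} of
    f(x + sum_{i in S} h_i), with addition over F_2^n.
    """
    pts = list(product((0, 1), repeat=n))
    add = lambda a, b: tuple((u + v) % 2 for u, v in zip(a, b))
    total = 0
    for x, h1, h2, h3 in product(pts, repeat=4):
        p = 1
        for s1, s2, s3 in product((0, 1), repeat=3):
            y = x
            if s1: y = add(y, h1)
            if s2: y = add(y, h2)
            if s3: y = add(y, h3)
            p *= f[y]
        total += p
    return (total / len(pts) ** 4) ** (1 / 8)

pts3 = list(product((0, 1), repeat=3))
quadratic = {x: (-1) ** (x[0] * x[1]) for x in pts3}       # degree-2 phase
cubic = {x: (-1) ** (x[0] * x[1] * x[2]) for x in pts3}    # degree-3 phase
print(u3_norm(quadratic, 3), u3_norm(cubic, 3))
```

The quadratic phase has U³ norm exactly 1 (its third finite differences vanish), whereas the cubic phase does not, illustrating the dichotomy the test exploits.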
Explicit Capacity-Achieving List-Decodable Codes
 In Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC)
, 2006
Abstract

Cited by 24 (7 self)
For every 0 < R < 1 and ε > 0, we present an explicit construction of error-correcting codes of rate R that can be list decoded in polynomial time up to a fraction (1 − R − ε) of errors. These codes achieve the "capacity" for decoding from adversarial errors, i.e., the optimal tradeoff between rate and error-correction radius. At least theoretically, this meets one of the central challenges in coding theory. Prior to this work, explicit codes achieving capacity were not known for any rate R. In fact, our codes are the first to beat, for all rates R, the error-correction radius of 1 − √R that was achieved for Reed-Solomon codes in [11]. (For rates R < 1/16, a recent breakthrough by Parvaresh and Vardy [14] improved upon the 1 − √R bound; for R → 0, their algorithm can decode a fraction 1 − O(R log(1/R)) of errors.) Our codes are simple to describe: they are certain folded Reed-Solomon codes, which are in fact exactly Reed-Solomon (RS) codes, but viewed as a code over a larger alphabet by careful bundling of codeword symbols. Given the ubiquity of RS codes, this is an appealing feature of our result, since the codes we propose are not too far from the ones in actual use. The main insight in our work is that some carefully chosen folded RS codes are "compressed" versions of a related family of Parvaresh-Vardy codes. Further, the decoding of the folded RS codes can be reduced to list decoding the related Parvaresh-Vardy codes. The alphabet size of these folded RS codes is polynomial in the block length. This can be reduced to a (large) constant using ideas concerning "list recovering" and expander-based codes from [9, 10]. Concatenating the folded RS codes with suitable inner codes also gives us polynomial-time constructible binary codes that can be efficiently list decoded up to the Zyablov bound.
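The "bundling" step behind folding is simple to sketch: an m-folded code reinterprets a length-n codeword over an alphabet Σ as a length-n/m word over the larger alphabet Σ^m. The toy code below shows only that reinterpretation, not the RS algebra or the decoder:

```python
from typing import List, Tuple

def fold(codeword: List[int], m: int) -> List[Tuple[int, ...]]:
    """View a length-n codeword as a length-n/m word over the alphabet of
    m-tuples, by bundling m consecutive symbols (assumes m divides n)."""
    assert len(codeword) % m == 0, "fold width must divide the block length"
    return [tuple(codeword[i:i + m]) for i in range(0, len(codeword), m)]

# Toy example (not an actual RS codeword): fold 8 symbols with m = 4.
cw = [3, 1, 4, 1, 5, 9, 2, 6]
print(fold(cw, 4))  # [(3, 1, 4, 1), (5, 9, 2, 6)]
```

One corrupted folded symbol can hide up to m corrupted underlying symbols, which is part of why the per-symbol error fraction a decoder must tolerate changes when the codeword is viewed over the larger alphabet.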
Explicit Codes Achieving List Decoding Capacity: Error-Correction with Optimal Redundancy
, 2008
Abstract

Cited by 20 (8 self)
We present error-correcting codes that achieve the information-theoretically best possible tradeoff between the rate and error-correction radius. Specifically, for every 0 < R < 1 and ε > 0, we present an explicit construction of error-correcting codes of rate R that can be list decoded in polynomial time up to a fraction (1 − R − ε) of worst-case errors. At least theoretically, this meets one of the central challenges in algorithmic coding theory. Our codes are simple to describe: they are folded Reed-Solomon codes, which are in fact exactly Reed-Solomon (RS) codes, but viewed as a code over a larger alphabet by careful bundling of codeword symbols. Given the ubiquity of RS codes, this is an appealing feature of our result, and in fact our methods directly yield better decoding algorithms for RS codes when errors occur in phased bursts. The alphabet size of these folded RS codes is polynomial in the block length. We are able to reduce this to a constant (depending on ε) using ideas concerning "list recovery" and expander-based codes from [11, 12]. Concatenating the folded RS codes with suitable inner codes also gives us polynomial time constructible binary codes that can be efficiently list decoded up to the Zyablov bound, i.e., up to twice the radius achieved by the standard GMD decoding of concatenated codes.
Better extractors for better codes
 In Proceedings of the 36th Annual ACM Symposium on Theory of Computing
, 2004
Abstract

Cited by 12 (1 self)
We present an explicit construction of codes that can be list decoded from a fraction (1 − ε) of errors in subexponential time and which have rate ε / log^{O(1)}(1/ε). This comes close to the optimal rate of Ω(ε), and is the first subexponential complexity construction to beat the rate of ε² achieved by Reed-Solomon or algebraic-geometric codes. Our construction is based on recent extractor constructions with very good seed length [17]. While the "standard" way of viewing extractors as codes (as in [16]) cannot beat the O(ε²) rate barrier, due to the 2 log(1/ε) lower bound on seed length for extractors, we use such extractor codes as a component in a well-known expander-based construction scheme to get our result. The O(ε²) rate barrier also arises if one argues about list decoding using the minimum distance (via the so-called Johnson bound), so this also gives the first explicit construction that "beats the Johnson bound" for list decoding from errors. The main message from our work is perhaps conceptual, namely that good strong extractors for low min-entropies will yield near-optimal list decodable codes. Given all the progress that has been made on extractors, we view this as an optimistic avenue for finding better list decodable codes, both by looking for better explicit extractor constructions and by importing nontrivial techniques from the extractor world in reasoning about and constructing codes.
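The rate improvement is easy to see numerically: ε / log^{O(1)}(1/ε) overtakes ε² once ε is small enough. A small sketch, taking exponent 2 as a hypothetical stand-in for the unspecified O(1):

```python
from math import log2

def rate_extractor(eps: float, k: int = 2) -> float:
    """Illustrative rate eps / log^k(1/eps); the exponent k = 2 is a
    placeholder for the unspecified O(1) in the abstract."""
    return eps / (log2(1 / eps) ** k)

def rate_rs(eps: float) -> float:
    """Rate eps^2, the Reed-Solomon / algebraic-geometric benchmark."""
    return eps ** 2

# The extractor-based rate loses at eps = 0.1 but wins for smaller eps:
for eps in (0.1, 0.01, 0.001):
    print(eps, rate_extractor(eps), rate_rs(eps))
```

At ε = 0.01 the sketch gives roughly 2.3 × 10⁻⁴ versus 10⁻⁴, and the gap widens as ε shrinks, matching the "comes close to Ω(ε)" claim.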
The Unified Theory of Pseudorandomness
, 2007
Abstract

Cited by 8 (2 self)
We survey the close connections between a variety of "pseudorandom objects," namely pseudorandom generators, expander graphs, list-decodable error-correcting codes, randomness extractors, averaging samplers, and hardness amplifiers.
Algorithmic results in list decoding
 In Foundations and Trends in Theoretical Computer Science (FnTTCS)
Abstract

Cited by 7 (2 self)
Error-correcting codes are used to cope with the corruption of data by noise during communication or storage. A code uses an encoding procedure that judiciously introduces redundancy into the data to produce an associated codeword. The redundancy built into the codewords enables one to decode the original data even from a somewhat distorted version of the codeword. The central tradeoff in coding theory is the one between the data rate (amount of non-redundant information per bit of codeword) and the error rate (the fraction of symbols that could be corrupted while still enabling data recovery). Traditional decoding algorithms did as badly at correcting any error pattern as they would do for the worst possible error pattern. This severely limited the maximum fraction of errors those algorithms could tolerate, and in turn it was the source of a big gap between the error-correction performance known for probabilistic noise models (pioneered by Shannon) and what was thought to be the limit for the more powerful, worst-case noise models (suggested by Hamming). In the last decade or so, there has been much algorithmic progress in coding theory that has bridged this gap (and in fact nearly eliminated it for codes over large alphabets). These developments rely on an error-recovery model called "list decoding," wherein for the pathological error patterns the decoder is permitted to output a small list of candidates that will include the original message. This book introduces and motivates the problem of list decoding, and discusses the central algorithmic results of the subject, culminating with the recent results on achieving "list decoding capacity."
The Minimum Distance Problem for Two-Way Entanglement Purification
, 2004
Abstract

Cited by 4 (0 self)
Entanglement purification takes a number of noisy EPR pairs and processes them to produce a smaller number of more reliable pairs. If this is done with only a forward classical side channel, the procedure is equivalent to using a quantum error-correcting code (QECC). We instead investigate entanglement purification protocols with two-way classical side channels (2EPPs) for finite block sizes. In particular, we consider the analog of the minimum distance problem for QECCs, and show that 2EPPs can exceed both the quantum Hamming bound and the quantum Singleton bound. We also show that 2EPPs can achieve the rate k/n = 1 − (t/n) log₂ 3 − h(t/n) − O(1/n) (asymptotically reaching the quantum Hamming bound), where the EPP produces at least k good pairs out of n total pairs with up to t arbitrary errors, and h(x) = −x log₂ x − (1 − x) log₂(1 − x) is the Hamming entropy. In contrast, the best known lower bound on the performance of QECCs is the quantum Gilbert-Varshamov bound k/n ≥ 1 − (2t/n) log₂ 3 − h(2t/n). Indeed, in some regimes, the known upper bound on the asymptotic performance of good QECCs is strictly below our lower bound on the existence of 2EPPs.
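The two rate expressions quoted above, 1 − (t/n) log₂ 3 − h(t/n) for 2EPPs and the quantum Gilbert-Varshamov bound 1 − (2t/n) log₂ 3 − h(2t/n), are easy to compare numerically. A small sketch; the error fraction t/n = 0.05 is an illustrative choice, not from the paper:

```python
from math import log2

def h(x: float) -> float:
    """Hamming (binary) entropy h(x) = -x log2 x - (1-x) log2 (1-x)."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def rate_2epp(t_over_n: float) -> float:
    """Asymptotic 2EPP rate from the abstract: 1 - (t/n) log2 3 - h(t/n)."""
    return 1 - t_over_n * log2(3) - h(t_over_n)

def rate_qgv(t_over_n: float) -> float:
    """Quantum Gilbert-Varshamov bound: 1 - (2t/n) log2 3 - h(2t/n)."""
    return 1 - 2 * t_over_n * log2(3) - h(2 * t_over_n)

print(rate_2epp(0.05), rate_qgv(0.05))
```

At t/n = 0.05 the 2EPP expression is roughly 0.63 versus roughly 0.37 for the QGV bound, illustrating the gap the abstract describes.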