Results 1–10 of 22
Restricted isometry of Fourier matrices and list decodability of random linear codes
 SIAM J. Comput.
, 2012
Hardness of reconstructing multivariate polynomials over finite fields
 In Proc. 48th IEEE Symp. on Foundations of Computer Science (FOCS'07)
, 2007
Abstract

Cited by 12 (6 self)
We study the polynomial reconstruction problem for low-degree multivariate polynomials over F[2]. In this problem, we are given a set of points x ∈ {0, 1}^n and target values f(x) ∈ {0, 1} for each of these points, with the promise that there is a polynomial over F[2] of degree at most d that agrees with f on a 1 − ε fraction of the points. Our goal is to find a degree-d polynomial that has good agreement with f. We show that it is NP-hard to find a polynomial that agrees with f on more than a 1 − 2^(−d) + δ fraction of the points, for any ε, δ > 0. This holds even under the stronger promise that the polynomial that fits the data is in fact linear, whereas the algorithm is allowed to find a polynomial of degree d. Previously, the only known hardness of approximation (or even NP-completeness) was for the case d = 1, which follows from a celebrated result of Håstad [16]. In the setting of computational learning, our result shows the hardness of (non-proper) agnostic learning of parities, where the learner is allowed a low-degree polynomial over F[2] as a hypothesis. This is the first non-proper hardness result for this central problem in computational learning. Our results extend to multivariate polynomial reconstruction over any finite field.
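To make the reconstruction problem in this abstract concrete, here is a minimal brute-force sketch (purely illustrative, not the paper's method: the paper proves hardness, and this search is exponential in the number of monomials; all function names are hypothetical):

```python
from itertools import combinations, product

def monomials(n, d):
    """All multilinear monomials of degree <= d over n variables, as index tuples."""
    return [m for k in range(d + 1) for m in combinations(range(n), k)]

def evaluate(poly, x):
    """Evaluate a polynomial over F2 (represented as a set of monomials) at x in {0,1}^n."""
    return sum(all(x[i] for i in m) for m in poly) % 2

def best_agreement(points, values, n, d):
    """Exhaustively search degree-<=d polynomials over F2 for maximum agreement.
    Exponential blow-up -- a toy illustration of the problem, not an algorithm."""
    mons = monomials(n, d)
    best, best_poly = -1, None
    for mask in product([0, 1], repeat=len(mons)):
        poly = [m for m, b in zip(mons, mask) if b]
        agree = sum(evaluate(poly, x) == v for x, v in zip(points, values))
        if agree > best:
            best, best_poly = agree, poly
    return best_poly, best / len(points)

# Data generated by the linear polynomial x0 + x1, with the last value corrupted.
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
vals = [0, 1, 1, 1]
_, frac1 = best_agreement(pts, vals, n=2, d=1)  # best linear fit: 3/4
_, frac2 = best_agreement(pts, vals, n=2, d=2)  # allowing degree 2: perfect fit
```

On this toy data the best linear fit agrees on 3/4 of the points, while allowing degree 2 recovers perfect agreement, mirroring the asymmetry in the promise (data fit by a linear polynomial, algorithm allowed degree d).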
A Fourier-analytic approach to Reed-Muller decoding
 In Proc. 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS)
, 2010
Abstract

Cited by 8 (1 self)
Abstract. We present a Fourier-analytic approach to list-decoding Reed-Muller codes over arbitrary finite fields. We use this to show that quadratic forms over any field are locally list-decodable up to their minimum distance. The analogous statement for linear polynomials was proved in the celebrated works of Goldreich-Levin [GL89] and Goldreich-Rubinfeld-Sudan [GRS00]. Previously, tight bounds for quadratic polynomials were known only for q = 2 and 3 [GKZ08]; the best bound known for other fields was the Johnson radius. Departing from previous work on Reed-Muller decoding, which relies on some form of self-corrector [GRS00, AS03, STV01, GKZ08], our work applies ideas from Fourier analysis of Boolean functions to low-degree polynomials over finite fields, in conjunction with results about their weight distribution. We believe that the techniques used here could find other applications; we present some applications to testing and learning.
List decoding tensor products and interleaved codes
Abstract

Cited by 7 (3 self)
Abstract. We design the first efficient algorithms and prove new combinatorial bounds for list decoding tensor products of codes and interleaved codes. • We show that for every code, the ratio of its list decoding radius to its minimum distance stays unchanged under the tensor product operation (rather than squaring, as one might expect). This gives the first efficient list decoders and new combinatorial bounds for some natural codes, including multivariate polynomials where the degree in each variable is bounded. • We show that for every code, its list decoding radius remains unchanged under m-wise interleaving for any integer m. This generalizes a recent result of Dinur et al. [6], who proved such a result for interleaved Hadamard codes (equivalently, linear transformations). • Using the notion of generalized Hamming weights, we give better list size bounds for both tensoring and interleaving of binary linear codes. By analyzing the weight distribution of these codes, we reduce the task of bounding the list size to bounding the number of close-by low-rank codewords. For decoding linear transformations, using rank reduction together with other ideas, we obtain list size bounds that are tight over small fields. Our results give better bounds on the list decoding radius than what is obtained from the Johnson bound, and yield rather general families of codes decodable beyond the Johnson bound.
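The tensor product operation at the heart of this abstract can be sketched with a small binary code (a hypothetical [3,2] parity code; `codewords` is an illustrative helper): the tensor code is generated by the Kronecker product of generator matrices, and its codewords, viewed as matrices, have every row and every column in the base code.

```python
import numpy as np
from itertools import product

# Hypothetical base code: the [3,2] binary parity code.
G = np.array([[1, 0, 1],
              [0, 1, 1]])

def codewords(G):
    """All codewords of the binary linear code generated by G."""
    return {tuple(np.dot(m, G) % 2) for m in product([0, 1], repeat=G.shape[0])}

# The tensor product code is generated by kron(G, G); its codewords,
# reshaped to n x n matrices, have every row AND every column in the base code.
n = G.shape[1]
base = codewords(G)
tensor = codewords(np.kron(G, G))
for cw in tensor:
    M = np.array(cw).reshape(n, n)
    assert all(tuple(r) in base for r in M)    # rows are base codewords
    assert all(tuple(c) in base for c in M.T)  # columns are base codewords
```

The dimension multiplies (here 2 × 2 = 4, so 16 codewords of length 9) and the minimum distance squares; the abstract's point is that the list decoding radius, relative to minimum distance, does not degrade correspondingly.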
Source coding with side information using list decoding
 In Proc. 2010 IEEE International Symposium on Information Theory (ISIT), 2010
Abstract

Cited by 5 (2 self)
Abstract—The problem of source coding with side information (SCSI) is closely related to channel coding. Existing literature therefore focuses on using the most successful channel codes, namely LDPC codes, turbo codes, and their variants, to solve this problem, assuming classical unique decoding of the underlying channel code. In this paper, in contrast to classical decoding, we take a list decoding approach. We show that syndrome source coding using list decoding can achieve the theoretical limit. We argue that, as opposed to channel coding, the correct sequence from the list produced by the list decoder can effectively be recovered in the case of SCSI, since we are dealing with a virtual noisy channel rather than a real noisy channel. Finally, we present a guideline for designing constructive SCSI schemes using Reed-Solomon, BCH, and Reed-Muller codes, which are known list-decodable codes.
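The syndrome source coding baseline that this abstract builds on can be sketched with a [7,4] Hamming code and classical unique decoding (names hypothetical; the paper's contribution is to replace the unique decoder below with a list decoder):

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (columns are 1..7 in binary).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(x):
    """SCSI encoder: transmit only the syndrome of the source word x."""
    return H @ x % 2

def decode(s, y):
    """Decoder with side information y, assumed to differ from x in <= 1 bit.
    (H @ y + s) mod 2 equals H @ e, which identifies the flipped position."""
    diff = (H @ y + s) % 2
    if not diff.any():
        return y
    for j in range(H.shape[1]):
        if np.array_equal(H[:, j], diff):
            e = np.zeros(H.shape[1], dtype=int)
            e[j] = 1
            return (y + e) % 2

x = np.array([1, 0, 1, 1, 0, 0, 1])  # source word
y = x.copy()
y[4] ^= 1                            # side information: one bit differs from x
x_hat = decode(encode(x), y)
```

The decoder never sees x, only its 3-bit syndrome plus the correlated side information y, yet recovers x exactly, which is the compression gain SCSI exploits.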
Quadratic Goldreich-Levin theorems
 In Proc. 52nd Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2011
Abstract

Cited by 5 (1 self)
Decomposition theorems in classical Fourier analysis enable us to express a bounded function in terms of a few linear phases with large Fourier coefficients, plus a part that is pseudorandom with respect to linear phases. The Goldreich-Levin algorithm [GL89] can be viewed as an algorithmic analogue of such a decomposition, as it gives a way to efficiently find the linear phases associated with large Fourier coefficients. In the study of "quadratic Fourier analysis", higher-degree analogues of such decompositions have been developed, in which the pseudorandomness property is stronger but the structured part correspondingly weaker. For example, it has previously been shown that it is possible to express a bounded function as a sum of a few quadratic phases plus a part that is small in the U^3 norm, defined by Gowers for the purpose of counting arithmetic progressions of length 4. We give a polynomial-time algorithm for computing such a decomposition. A key part of the algorithm is a local self-correction procedure for Reed-Muller codes of order 2 (over F_2^n) for a function at distance 1/2 − ε from a codeword. Given a function f: F_2^n → {−1, 1} at fractional Hamming distance 1/2 − ε from a quadratic phase (which is a codeword of the Reed-Muller code of order 2), we give an algorithm that runs in time polynomial in n and finds a codeword at distance at most 1/2 − η, for η = η(ε). This is an algorithmic analogue of Samorodnitsky's result [Sam07], which gave a tester for the above problem. To our knowledge, it represents the first instance of a correction procedure for any class of codes beyond the list-decoding radius. In the process, we give algorithmic versions of results from additive combinatorics used in Samorodnitsky's proof, and a refined version of the inverse theorem for the Gowers U^3 norm.
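The linear-phase decomposition this abstract starts from amounts to finding the characters with large Fourier coefficients. A brute-force sketch of that primitive (exhaustive over all 2^n points, whereas Goldreich-Levin achieves the same with poly(n) queries; names are illustrative):

```python
from itertools import product

def fourier_coefficients(f, n):
    """Exact Fourier coefficients f_hat(S) = E_x[f(x) * (-1)^<S,x>] over {0,1}^n.
    Brute force over all 2^n points; Goldreich-Levin finds the large
    coefficients with only poly(n) queries to f."""
    pts = list(product([0, 1], repeat=n))
    return {S: sum(f(x) * (-1) ** sum(s * xi for s, xi in zip(S, x))
                   for x in pts) / len(pts)
            for S in pts}

# Toy f: the linear phase chi_{(1,1,0)} with a single input flipped.
target = (1, 1, 0)
def f(x):
    val = (-1) ** sum(t * xi for t, xi in zip(target, x))
    return -val if x == (0, 0, 0) else val

coeffs = fourier_coefficients(f, 3)
large = {S for S, c in coeffs.items() if abs(c) >= 0.5}  # the "structured part"
```

Here one coefficient stands out (0.75 at the noised phase, −0.25 everywhere else), so the structured part is a single linear phase; the paper's contribution is the much harder quadratic analogue of this picture.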
List Decoding Barnes-Wall Lattices
, 2012
Abstract

Cited by 2 (0 self)
The question of list decoding error-correcting codes over finite fields (under the Hamming metric) has been widely studied in recent years. Motivated by the similar discrete linear structure of linear codes and point lattices in R^N, and their many shared applications across complexity theory, cryptography, and coding theory, we initiate the study of list decoding for lattices. Namely: for a lattice L ⊆ R^N, given a target vector r ∈ R^N and a distance parameter d, output the set of all lattice points w ∈ L that are within distance d of r. In this work we focus on combinatorial and algorithmic questions related to list decoding for the well-studied family of Barnes-Wall lattices. Our main contributions are twofold: 1. We give tight (up to polynomials) combinatorial bounds on the worst-case list size, showing it to be polynomial in the lattice dimension for any error radius bounded away from the lattice's minimum distance (in the Euclidean norm). 2. Building on the unique decoding algorithm of Micciancio and Nicolosi (ISIT '08), we give a list-decoding algorithm that runs in time polynomial in the lattice dimension and worst-case list size, for any error radius. Moreover, our algorithm is highly parallelizable, and with sufficiently many processors can run in parallel time only polylogarithmic in the lattice dimension. In particular, our results imply a polynomial-time list-decoding algorithm for any error radius bounded away from the minimum distance, thus beating a typical barrier for natural error-correcting codes posed by the Johnson radius.
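The lattice list-decoding problem defined above can be sketched by brute force in two dimensions (illustrative only; the enumeration is exponential, while the paper's algorithm for Barnes-Wall lattices is polynomial in dimension and list size):

```python
import itertools
import math

def list_decode(basis, target, d, coeff_range=range(-3, 4)):
    """Brute-force lattice list decoding in 2D: enumerate integer combinations
    c1*b1 + c2*b2 of the basis and keep points within Euclidean distance d
    of the target. coeff_range bounds the (normally unbounded) search."""
    b1, b2 = basis
    out = []
    for c1, c2 in itertools.product(coeff_range, repeat=2):
        w = (c1 * b1[0] + c2 * b2[0], c1 * b1[1] + c2 * b2[1])
        if math.dist(w, target) <= d:
            out.append(w)
    return out

# Z^2 has minimum distance 1; around a target midway between two lattice
# points, a radius just beyond half the minimum distance already forces
# a list of size > 1.
pts = list_decode(((1, 0), (0, 1)), (0.5, 0.0), d=0.6)
```

At radius 0.6 the list for this target is {(0, 0), (1, 0)}, illustrating why decoding beyond half the minimum distance must return a list rather than a unique word.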
A note on amplifying the error-tolerance of locally decodable codes
 Electron. Colloq. Comput. Complex. (ECCC)
, 2010
Abstract

Cited by 2 (0 self)
Trevisan [Tre03] suggested a transformation that amplifies the error rate a code can handle. We observe that this transformation, which was suggested in the non-local setting, also works in the local setting and thus gives a generic, simple way to amplify the error-tolerance of locally decodable codes. Specifically, this shows how to transform a locally decodable code that can tolerate a constant fraction of errors into a locally decodable code that can recover from a much higher error rate, and how to transform such locally decodable codes into locally list-decodable codes. The transformation of [Tre03] involves a simple composition with an approximately locally (list-)decodable code. Using a construction of such codes by Impagliazzo et al. [IJKW10], the transformation incurs only a negligible growth in the length of the code and in the query complexity.