Results 1–10 of 40
Improving the Robustness of Private Information Retrieval
 In Proceedings of the IEEE Security and Privacy Symposium, 2007
Cited by 44 (16 self)
Abstract:
Since 1995, much work has been done creating protocols for private information retrieval (PIR). Many variants of the basic PIR model have been proposed, including such modifications as computational vs. information-theoretic privacy protection, correctness in the face of servers that fail to respond or that respond incorrectly, and protection of sensitive data against the database servers themselves. In this paper, we improve on the robustness of PIR in a number of ways. First, we present a Byzantine-robust PIR protocol which provides information-theoretic privacy protection against coalitions of up to all but one of the responding servers, improving the previous result by a factor of 3. In addition, our protocol allows for more of the responding servers to return incorrect information while still enabling the user to compute the correct result. We then extend our protocol so that queries have information-theoretic protection if a limited number of servers collude, as before, but still retain computational protection if they all collude. We also extend the protocol to provide information-theoretic protection to the contents of the database against collusions of limited numbers of the database servers, at no additional communication cost or increase in the number of servers. All of our protocols retrieve a block of data with communication cost only O(ℓ) times the size of the block, where ℓ is the number of servers. Finally, we discuss our implementation of these protocols, and measure their performance in order to determine their practicality.
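As background for the basic multi-server PIR model this abstract builds on, the following is a minimal sketch of the classic two-server information-theoretic XOR protocol (not the paper's Byzantine-robust construction); all function names and the toy database are illustrative:

```python
import secrets

def pir_queries(n, i):
    # Client: a uniformly random subset S of positions for server 1, and
    # S with position i's membership flipped for server 2. Each query on
    # its own is a uniform random subset, so a non-colluding server
    # learns nothing about i.
    s1 = {j for j in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {i}  # symmetric difference flips membership of i
    return s1, s2

def pir_answer(db, subset):
    # Server: XOR of the requested bit positions.
    acc = 0
    for j in subset:
        acc ^= db[j]
    return acc

def pir_reconstruct(a1, a2):
    # Client: the two answers differ exactly in the contribution of
    # position i, so their XOR is db[i].
    return a1 ^ a2

db = [1, 0, 1, 1, 0, 0, 1, 0]
q1, q2 = pir_queries(len(db), 5)
bit = pir_reconstruct(pir_answer(db, q1), pir_answer(db, q2))
assert bit == db[5]
```

In this toy scheme the communication cost is linear in the database size; the point of the protocols surveyed above is to achieve far lower cost while also tolerating colluding or misbehaving servers.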
Extensions to the Method of Multiplicities, with applications to Kakeya Sets and Mergers
 2009
Cited by 39 (6 self)
Abstract:
We extend the “method of multiplicities” to get the following results, of interest in combinatorics and randomness extraction. 1. We show that every Kakeya set in F_q^n, the n-dimensional vector space over the finite field with q elements, must be of size at least q^n / 2^n. This bound is tight to within a 2 + o(1) factor for every n as q → ∞. 2. We give improved “randomness mergers”, i.e., seeded functions that take as input k (possibly correlated) random variables in {0, 1}^N and a short random seed and output a single random variable in {0, 1}^N that is statistically close to having entropy (1 − δ) · N when one of the k input variables is distributed uniformly. The seed we require is only (1/δ) · log k bits long, which significantly improves upon previous constructions of mergers. The “method of multiplicities”, as used in prior work, analyzed subsets of vector spaces over finite fields by constructing somewhat-low-degree interpolating polynomials that vanish on every point in the subset with high multiplicity. The typical use of this method involved showing that the interpolating polynomial also vanished on some points outside the subset, and then used simple
Linear-algebraic list decoding of folded Reed-Solomon codes
 In Proceedings of the 26th IEEE Conference on Computational Complexity, 2011
High-rate codes with sublinear-time decoding
 2010
Cited by 9 (0 self)
Abstract:
Locally decodable codes are error-correcting codes that admit efficient decoding algorithms; any bit of the original message can be recovered by looking at only a small number of locations of a corrupted codeword. The tradeoff between the rate of a code and the locality/efficiency of its decoding algorithms has been well studied, and it has widely been suspected that nontrivial locality must come at the price of low rate. A particular setting of potential interest in practice is codes of constant rate. For such codes, decoding algorithms with locality O(k^ε) were known only for codes of rate exp(−1/ε), where k is the length of the message. Furthermore, for codes of rate > 1/2, no nontrivial locality has been achieved. In this paper we construct a new family of locally decodable codes that have very efficient local decoding algorithms, and at the same time have rate approaching 1. We show that for every ε > 0 and α > 0, for infinitely many k, there exists a code C which encodes messages of length k with rate 1 − α, and is locally decodable from a constant fraction of errors using O(k^ε) queries and time. The high rate and local decodability are evident even in concrete settings (and not just in asymptotic behavior), giving hope that local decoding techniques may have practical implications. These codes, which we call multiplicity codes, are based on evaluating high-degree multivariate polynomials and their derivatives. Multiplicity codes extend traditional multivariate-polynomial-based codes; they inherit the local decodability of these codes, and at the same time achieve better tradeoffs and flexibility in their rate and distance.
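To make the encoding idea concrete, here is a minimal sketch of a multiplicity-code encoder in the simplest univariate, order-2 case: each codeword symbol records a polynomial's value together with its formal derivative. The field size and message below are toy parameters for illustration only; the constructions in the paper are multivariate.

```python
P = 13  # toy prime field F_13; illustrative, not the paper's parameters

def poly_eval(coeffs, x):
    # Horner evaluation of sum_j coeffs[j] * x^j over F_P.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def formal_derivative(coeffs):
    # Coefficient-wise formal derivative over F_P: d/dx of c_j x^j is
    # (j * c_j) x^(j-1).
    return [(j * c) % P for j, c in enumerate(coeffs)][1:]

def mult_encode(message):
    # Codeword symbol at each field point x is the pair (f(x), f'(x)).
    # Recording the derivative alongside the value is what lets a
    # decoder use each evaluation point "with multiplicity".
    d = formal_derivative(message)
    return [(poly_eval(message, x), poly_eval(d, x)) for x in range(P)]

msg = [3, 1, 4, 1, 5]        # coefficients of a degree-4 polynomial
codeword = mult_encode(msg)  # 13 symbols, each a pair over F_13
```

Because a degree-d polynomial and its derivative together impose two constraints per point, one can afford higher-degree (hence higher-rate) polynomials for the same decoding radius than with plain evaluation codes; the multivariate version of this idea underlies the rate-approaching-1 result above.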
Subspace Evasive Sets
 Electronic Colloquium on Computational Complexity, Report No. 139, 2011
Cited by 8 (1 self)
Abstract:
In this work we describe an explicit, simple construction of large subsets of F^n, where F is a finite field, that have small intersection with every k-dimensional affine subspace. Interest in the explicit construction of such sets, termed subspace-evasive sets, started in the work of Pudlák and Rödl [PR04], who showed how such constructions over the binary field can be used to construct explicit Ramsey graphs. More recently, Guruswami [Gur11] showed that, over large finite fields (of size polynomial in n), subspace evasive sets can be used to obtain explicit list-decodable codes with optimal rate and constant list size. In this work we construct subspace evasive sets over large fields and use them, as described in [Gur11], to reduce the list size of folded Reed-Solomon codes from poly(n) to a constant.
Better binary list-decodable codes via multilevel concatenation
 In Proceedings of the 11th International Workshop on Randomization and Computation (RANDOM), 2007
Cited by 7 (6 self)
Abstract:
We give a polynomial-time construction of binary codes with the best currently known tradeoff between rate and error-correction radius. Specifically, we obtain linear codes over fixed alphabets that can be list decoded in polynomial time up to the so-called Blokh-Zyablov bound. Our work builds upon [7], where codes list decodable up to the Zyablov bound (the standard product bound on distance of concatenated codes) were constructed. Our codes are constructed via a (known) generalization of code concatenation called multilevel code concatenation. A probabilistic argument, which is also derandomized via conditional expectations, is used to show the existence of inner codes with a certain nested list decodability property that is appropriate for use in multilevel concatenated codes. A “level-by-level” decoding algorithm, which crucially uses the list recovery algorithm for folded Reed-Solomon codes from [7], enables list decoding up to the designed distance bound, a.k.a. the Blokh-Zyablov bound, for multilevel concatenated codes.
Concatenated codes can achieve list decoding capacity
 In Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms, 2008
Cited by 6 (5 self)
Abstract:
We prove that binary linear concatenated codes with an outer algebraic code (specifically, a folded Reed-Solomon code) and independently and randomly chosen linear inner codes achieve the list-decoding capacity with high probability. In particular, for any 0 < ρ < 1/2 and ε > 0, there exist concatenated codes of rate at least 1 − H(ρ) − ε that are (combinatorially) list-decodable up to a ρ fraction of errors. (The best possible rate, a.k.a. list-decoding capacity, for such codes is 1 − H(ρ), and is achieved by random codes.) A similar result, with better list size guarantees, holds when the outer code is also randomly chosen. Our methods and results extend to the case when the alphabet size is any fixed prime power q ≥ 2. Our result shows that despite the structural restriction imposed by code concatenation, the family of concatenated codes is rich enough to include capacity-achieving list-decodable codes. This provides some encouraging news for tackling the problem of constructing explicit binary list-decodable codes with optimal rate, since code concatenation has been the preeminent method for constructing good codes over small alphabets.
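For readers unfamiliar with the concatenation operation these two abstracts rely on, the following is a minimal sketch of plain (single-level) code concatenation with toy components: a Reed-Solomon outer code over F_13 and the [7,4] Hamming code as the inner code. The parameters are illustrative only; the result above uses a folded Reed-Solomon outer code and random linear inner codes.

```python
P = 13  # toy outer field F_13; every symbol fits in 4 bits

def rs_encode(msg, n):
    # Outer Reed-Solomon code: evaluate the message polynomial
    # (coefficients msg) at n distinct field points.
    def ev(x):
        acc = 0
        for c in reversed(msg):
            acc = (acc * x + c) % P
        return acc
    return [ev(x) for x in range(n)]

# Generator matrix of the systematic [7,4] Hamming code (rows over F_2).
G_INNER = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def inner_encode(sym):
    # Inner code: the 4-bit representation of an F_13 symbol (LSB first)
    # multiplied by G_INNER over F_2.
    bits = [(sym >> k) & 1 for k in range(4)]
    return [sum(b * g for b, g in zip(bits, col)) % 2
            for col in zip(*G_INNER)]

def concat_encode(msg, n):
    # Concatenation: inner-encode every outer symbol and join the bits.
    out = []
    for sym in rs_encode(msg, n):
        out.extend(inner_encode(sym))
    return out

cw = concat_encode([3, 1, 4], 13)  # 13 outer symbols -> 91 binary digits
```

The point of the structural question in the abstract is whether binary codes built this way, despite being constrained to this two-layer form, can still reach list-decoding capacity; the paper answers yes when the inner codes are chosen at random.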
List Decoding Tensor Products and Interleaved Codes
Cited by 6 (2 self)
Abstract:
We design the first efficient algorithms and prove new combinatorial bounds for list decoding tensor products of codes and interleaved codes.
• We show that for every code, the ratio of its list decoding radius to its minimum distance stays unchanged under the tensor product operation (rather than squaring, as one might expect). This gives the first efficient list decoders and new combinatorial bounds for some natural codes, including multivariate polynomials where the degree in each variable is bounded.
• We show that for every code, its list decoding radius remains unchanged under m-wise interleaving for an integer m. This generalizes a recent result of Dinur et al. [6], who proved such a result for interleaved Hadamard codes (equivalently, linear transformations).
• Using the notion of generalized Hamming weights, we give better list size bounds for both tensoring and interleaving of binary linear codes. By analyzing the weight distribution of these codes, we reduce the task of bounding the list size to bounding the number of close-by low-rank codewords. For decoding linear transformations, using rank reduction together with other ideas, we obtain list size bounds that are tight over small fields.
Our results give better bounds on the list decoding radius than what is obtained from the Johnson bound, and yield rather general families of codes decodable beyond the Johnson bound.
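As a concrete picture of the tensor product operation analyzed above, here is a minimal sketch of tensor-squaring the simplest possible component, a single-parity-check code: codewords of the tensor code are exactly the matrices whose rows and columns all belong to the component code. The component code and message are illustrative choices, not the paper's.

```python
def parity_encode(bits):
    # [k+1, k] single-parity-check code: append the XOR of the bits.
    return bits + [sum(bits) % 2]

def tensor_encode(msg_rows):
    # Tensor square of the parity code: encode every row of the message
    # matrix, then encode every column of the result. The output is a
    # matrix in which all rows and all columns are parity-code codewords.
    rows = [parity_encode(list(r)) for r in msg_rows]
    cols = [parity_encode(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

cw = tensor_encode([[1, 0, 1], [0, 1, 1], [1, 1, 0]])  # 4 x 4 codeword
# Every row and every column of cw has even parity.
```

The tensor square of a distance-d code has distance d², so one might expect the decodable fraction of errors to degrade quadratically as well; the first bullet above says the list decoding radius, as a fraction of the minimum distance, in fact survives tensoring intact.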
Error-correction up to the information-theoretic limit
 2008
Cited by 5 (0 self)
Abstract:
Ever since the birth of coding theory almost 60 years ago, researchers have been pursuing the elusive goal of constructing the “best codes”, whose encoding introduces the minimum possible redundancy for the level of noise they can correct. In this article, we survey recent progress in list decoding that has led to efficient error-correction schemes with an optimal amount of redundancy, even against worst-case errors caused by a potentially malicious channel. To correct a proportion p (say 20%) of worst-case errors, these codes only need close to a proportion p of redundant symbols. The redundancy cannot possibly be any lower information-theoretically. This new method holds the promise of correcting a factor of two more errors compared to the conventional algorithms currently in use in diverse everyday applications.
Explicit capacity-achieving codes for worst-case additive errors, Dec. 2009 [Online]. Available: http://arxiv.org/abs/0910.1511
Cited by 5 (2 self)
Abstract:
For every p ∈ (0, 1/2), we give an explicit construction of binary codes of rate approaching “capacity” 1 − H(p) that enable reliable communication in the presence of worst-case additive errors, caused by a channel oblivious to the codeword (but not necessarily the message). Formally, we give an efficient “stochastic” encoding E(·, ·) of messages combined with a small number of auxiliary random bits, such that for every message m and every error vector e (that could depend on m) that contains at most a fraction p of ones, w.h.p. over the random bits r chosen by the encoder, m can be efficiently recovered from the corrupted codeword E(m, r) + e by a decoder without knowledge of the encoder’s randomness r. Our construction for additive errors also yields explicit deterministic codes of rate approaching 1 − H(p) for the “average error” criterion: for every error vector e with at most a p fraction of ones, most messages m can be efficiently (uniquely) decoded from the corrupted codeword C(m) + e. Note that such codes cannot be linear, as the bad error patterns for all messages are the same in a linear code. We also give a new proof of the existence of such codes based on list decoding and certain algebraic manipulation detection codes. Our proof is simpler than the previous proofs from the literature on arbitrarily varying channels.