Results 1–10 of 63
Fuzzy extractors: How to generate strong keys from biometrics and other noisy data
Yevgeniy Dodis, Leonid Reyzin, and Adam Smith. Technical Report 2003/235, Cryptology ePrint Archive, http://eprint.iacr.org, 2006. A previous version appeared at EUROCRYPT 2004.
, 2004
Abstract

Cited by 318 (34 self)
We provide formal definitions and efficient secure techniques for • turning noisy information into keys usable for any cryptographic application, and, in particular, • reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a fuzzy extractor reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A secure sketch produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of “closeness” of input data, such as Hamming distance, edit distance, and set difference.
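The secure-sketch idea described above can be illustrated with the classic code-offset construction. The sketch below is a toy, not the paper's full scheme: it uses a 3-fold repetition code as the error-correcting code, so recovery tolerates up to one flipped bit per 3-bit block; all names here are illustrative.

```python
# Toy code-offset secure sketch: SS(w) = w XOR C(r) for random r; recovery
# decodes s XOR w' back to the codeword C(r) and returns s XOR C(r) = w.
import secrets

BLOCKS = 3   # message bits of the inner code
REP = 3      # each bit repeated REP times; codeword length = BLOCKS * REP

def encode(r):
    """Repetition-code encoder: each bit of r repeated REP times."""
    return [b for b in r for _ in range(REP)]

def decode(c):
    """Majority-vote decoder for the repetition code."""
    return [1 if sum(c[i*REP:(i+1)*REP]) > REP // 2 else 0 for i in range(BLOCKS)]

def sketch(w):
    """Public helper data: w masked by a random codeword."""
    r = [secrets.randbelow(2) for _ in range(BLOCKS)]
    return [wi ^ ci for wi, ci in zip(w, encode(r))]

def recover(w_close, s):
    """Given w' close to w, decode s XOR w' to the nearest codeword and unmask."""
    noisy = [wi ^ si for wi, si in zip(w_close, s)]
    c = encode(decode(noisy))
    return [si ^ ci for si, ci in zip(s, c)]

w = [1, 0, 1, 1, 0, 0, 1, 0, 1]                        # "biometric" reading
s = sketch(w)                                          # safe to store publicly
w_noisy = list(w); w_noisy[0] ^= 1; w_noisy[4] ^= 1    # one error in two blocks
assert recover(w_noisy, s) == w
```

In the real construction the repetition code is replaced by a code matched to the relevant metric (Hamming, edit, or set difference), and the entropy loss of publishing s is quantified formally.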
Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes
 In Proceedings of the 22nd Annual IEEE Conference on Computational Complexity
, 2007
Abstract

Cited by 80 (7 self)
We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous constructions of Ta-Shma, Umans, and Zuckerman (STOC ’01) required at least one of these to be quasipolynomial in the optimal. Our expanders have a short and self-contained description and analysis, based on the ideas underlying the recent list-decodable error-correcting codes of Parvaresh and Vardy (FOCS ’05). Our expanders can be interpreted as near-optimal “randomness condensers” that reduce the task of extracting randomness from sources of arbitrary min-entropy rate to extracting randomness from sources of min-entropy rate arbitrarily close to 1, which is a much easier task. Using this connection, we obtain a new construction of randomness extractors that is optimal up to constant factors, while being much simpler than the previous construction of Lu et al. (STOC ’03) and improving upon it when the error parameter is small (e.g. 1/poly(n)).
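For readers unfamiliar with the extractor interface the abstract refers to, here is a minimal seeded extractor based on 2-universal hashing (the leftover hash lemma). This is emphatically not the paper's construction, whose point is an explicit, near-optimal extractor; the hash family and parameters below are only illustrative.

```python
# A classic seeded extractor via 2-universal hashing: h_{a,b}(x) = (a*x + b) mod P,
# truncated to m bits. If x has enough min-entropy, the output is close to uniform.
P = (1 << 61) - 1   # a Mersenne prime, making the family 2-universal

def extract(x: int, seed: tuple[int, int], m: int) -> int:
    """Extract m nearly-uniform bits from a weak source x using seed (a, b),
    with 1 <= a < P and 0 <= b < P."""
    a, b = seed
    return ((a * x + b) % P) % (1 << m)

# Usage: x stands for a weak random source (e.g. noisy measurements packed
# into an integer); the seed is a short truly random string.
x = 0xDEADBEEFCAFEBABE
out = extract(x, seed=(123456789, 987654321), m=16)
assert 0 <= out < 1 << 16
```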
Improving the Robustness of Private Information Retrieval
 In Proceedings of IEEE Security and Privacy Symposium
, 2007
Abstract

Cited by 26 (12 self)
Since 1995, much work has been done creating protocols for private information retrieval (PIR). Many variants of the basic PIR model have been proposed, including such modifications as computational vs. information-theoretic privacy protection, correctness in the face of servers that fail to respond or that respond incorrectly, and protection of sensitive data against the database servers themselves. In this paper, we improve on the robustness of PIR in a number of ways. First, we present a Byzantine-robust PIR protocol which provides information-theoretic privacy protection against coalitions of up to all but one of the responding servers, improving the previous result by a factor of 3. In addition, our protocol allows for more of the responding servers to return incorrect information while still enabling the user to compute the correct result. We then extend our protocol so that queries have information-theoretic protection if a limited number of servers collude, as before, but still retain computational protection if they all collude. We also extend the protocol to provide information-theoretic protection to the contents of the database against collusions of limited numbers of the database servers, at no additional communication cost or increase in the number of servers. All of our protocols retrieve a block of data with communication cost only O(ℓ) times the size of the block, where ℓ is the number of servers. Finally, we discuss our implementation of these protocols, and measure their performance in order to determine their practicality.
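The basic PIR model the abstract builds on can be seen in the classic two-server XOR scheme, sketched below for a database of bits. This is only the textbook baseline, not the paper's robust protocol; the function names are illustrative.

```python
# Two-server information-theoretic PIR for a bit database.
# Client sends a uniformly random subset S to server 1 and S XOR {i} to
# server 2; each query alone is uniform, so a single server learns nothing
# about the target index i. XOR of the two answers equals db[i].
import secrets

def server_answer(db: list[int], query: set[int]) -> int:
    """Each server XORs together the bits at the queried positions."""
    ans = 0
    for j in query:
        ans ^= db[j]
    return ans

def pir_retrieve(db_copy1, db_copy2, i):
    """Client-side retrieval of bit i from two non-colluding replicas."""
    n = len(db_copy1)
    s1 = {j for j in range(n) if secrets.randbelow(2)}
    s2 = s1 ^ {i}   # symmetric difference toggles membership of index i
    return server_answer(db_copy1, s1) ^ server_answer(db_copy2, s2)

db = [1, 0, 0, 1, 1, 0, 1, 0]
assert all(pir_retrieve(db, db, i) == db[i] for i in range(len(db)))
```

The paper's contribution is to harden this model: tolerating servers that answer incorrectly (Byzantine robustness) and protecting the database contents themselves against limited server collusions.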
Explicit Capacity-Achieving List-Decodable Codes
 In Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC)
, 2006
Abstract

Cited by 25 (7 self)
For every 0 < R < 1 and ε > 0, we present an explicit construction of error-correcting codes of rate R that can be list decoded in polynomial time up to a fraction (1 − R − ε) of errors. These codes achieve the “capacity” for decoding from adversarial errors, i.e., achieve the optimal tradeoff between rate and error-correction radius. At least theoretically, this meets one of the central challenges in coding theory. Prior to this work, explicit codes achieving capacity were not known for any rate R. In fact, our codes are the first to beat the error-correction radius of 1 − √R, that was achieved for Reed-Solomon codes in [11], for all rates R. (For rates R < 1/16, a recent breakthrough by Parvaresh and Vardy [14] improved upon the 1 − √R bound; for R → 0, their algorithm can decode a fraction 1 − O(R log(1/R)) of errors.) Our codes are simple to describe: they are certain folded Reed-Solomon codes, which are in fact exactly Reed-Solomon (RS) codes, but viewed as a code over a larger alphabet by careful bundling of codeword symbols. Given the ubiquity of RS codes, this is an appealing feature of our result, since the codes we propose are not too far from the ones in actual use. The main insight in our work is that some carefully chosen folded RS codes are “compressed” versions of a related family of Parvaresh-Vardy codes. Further, the decoding of the folded RS codes can be reduced to list decoding the related Parvaresh-Vardy codes. The alphabet size of these folded RS codes is polynomial in the block length. This can be reduced to a (large) constant using ideas concerning “list recovering” and expander-based codes from [9, 10]. Concatenating the folded RS codes with suitable inner codes also gives us polynomial-time constructible binary codes that can be efficiently list decoded up to the Zyablov bound.
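The "bundling of codeword symbols" mentioned above is the one genuinely simple ingredient, and it can be shown in a few lines. The sketch below illustrates only the folding step over a toy field (the list-decoding algorithm itself is far more involved and not shown); the field size and parameters are illustrative choices.

```python
# Folding a Reed-Solomon codeword: evaluations of the message polynomial at
# powers of a generator gamma are bundled m at a time, turning a length-n
# codeword over F_q into a length-(n/m) codeword over the alphabet F_q^m.
P = 17      # toy field F_17
GAMMA = 3   # a generator of the multiplicative group F_17^* (order 16)

def rs_encode(coeffs):
    """Evaluate the message polynomial at gamma^0, ..., gamma^(n-1), n = P-1."""
    points = [pow(GAMMA, i, P) for i in range(P - 1)]
    return [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in points]

def fold(codeword, m):
    """Bundle m consecutive symbols into one symbol of the folded code."""
    assert len(codeword) % m == 0
    return [tuple(codeword[i:i + m]) for i in range(0, len(codeword), m)]

cw = rs_encode([5, 1, 2])   # a degree-2 message polynomial
folded = fold(cw, 4)        # length drops 16 -> 4; alphabet grows to F_17^4
assert len(folded) == 4
assert [s for block in folded for s in block] == cw   # folding loses nothing
```

The folded word carries exactly the same information as the RS codeword; what changes is the error model, since an adversary must now corrupt whole bundles, which is what the decoding analysis exploits.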
Explicit Codes Achieving List Decoding Capacity: Error-correction with Optimal Redundancy
, 2008
Abstract

Cited by 23 (10 self)
We present error-correcting codes that achieve the information-theoretically best possible tradeoff between the rate and error-correction radius. Specifically, for every 0 < R < 1 and ε > 0, we present an explicit construction of error-correcting codes of rate R that can be list decoded in polynomial time up to a fraction (1 − R − ε) of worst-case errors. At least theoretically, this meets one of the central challenges in algorithmic coding theory. Our codes are simple to describe: they are folded Reed-Solomon codes, which are in fact exactly Reed-Solomon (RS) codes, but viewed as a code over a larger alphabet by careful bundling of codeword symbols. Given the ubiquity of RS codes, this is an appealing feature of our result, and in fact our methods directly yield better decoding algorithms for RS codes when errors occur in phased bursts. The alphabet size of these folded RS codes is polynomial in the block length. We are able to reduce this to a constant (depending on ε) using ideas concerning “list recovery” and expander-based codes from [11, 12]. Concatenating the folded RS codes with suitable inner codes also gives us polynomial time constructible binary codes that can be efficiently list decoded up to the Zyablov bound, i.e., up to twice the radius achieved by the standard GMD decoding of concatenated codes.
Efficient and Robust Compressed Sensing using Optimized Expander Graphs
Abstract

Cited by 20 (3 self)
Expander graphs have been recently proposed to construct efficient compressed sensing algorithms. In particular, it has been shown that any n-dimensional vector that is k-sparse can be fully recovered using O(k log n) measurements and only O(k log n) simple recovery iterations. In this paper we improve upon this result by considering expander graphs with expansion coefficient beyond 3/4 and show that, with the same number of measurements, only O(k) recovery iterations are required, which is a significant improvement when n is large. In fact, full recovery can be accomplished by at most 2k very simple iterations. The number of iterations can be reduced arbitrarily close to k, and the recovery algorithm can be implemented very efficiently using a simple priority queue with total recovery time O(n log(n/k)). We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to come up with explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the time complexity and the simplicity of recovery. Finally we show how our analysis extends to give a robust algorithm that finds the position and sign of the k significant elements of an almost k-sparse signal and then, using very simple optimization techniques, finds a k-sparse signal which is close to the best k-term approximation of the original signal.
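To make the "measurement via a bipartite graph" idea concrete, here is a deliberately tiny sketch for the 1-sparse case: when every left vertex has a distinct neighborhood, the support of the measurement vector pins down the nonzero coordinate. This is not the paper's k-sparse algorithm (which needs genuine expander graphs and iterative recovery); all parameters below are toy choices.

```python
# Toy bipartite measurement: n = 8 signal coordinates (left), m = 5
# measurements (right), left degree d = 2, with all neighborhoods distinct.
from itertools import combinations

M = 5
NEIGHBORS = list(combinations(range(M), 2))[:8]   # 8 distinct 2-subsets of {0..4}

def measure(x):
    """y_j = sum of x_i over left vertices i adjacent to measurement j."""
    y = [0.0] * M
    for i, nbrs in enumerate(NEIGHBORS):
        for j in nbrs:
            y[j] += x[i]
    return y

def recover_1sparse(y):
    """The support of y equals the neighborhood of the unique nonzero entry."""
    support = frozenset(j for j in range(M) if y[j] != 0)
    for i, nbrs in enumerate(NEIGHBORS):
        if frozenset(nbrs) == support:
            return i, y[next(iter(support))]
    raise ValueError("signal was not 1-sparse")

x = [0.0] * 8
x[6] = 2.5
assert recover_1sparse(measure(x)) == (6, 2.5)
```

For k > 1 the measurements of different nonzeros overlap and can cancel, which is exactly where the expansion property and the iterative recovery analysis of the paper come in.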
Extensions to the Method of Multiplicities, with applications to Kakeya Sets and Mergers
, 2009
Abstract

Cited by 14 (5 self)
We extend the “method of multiplicities” to get the following results, of interest in combinatorics and randomness extraction. 1. We show that every Kakeya set in F_q^n, the n-dimensional vector space over the finite field on q elements, must be of size at least q^n/2^n. This bound is tight to within a 2 + o(1) factor for every n as q → ∞. 2. We give improved “randomness mergers”, i.e., seeded functions that take as input k (possibly correlated) random variables in {0,1}^N and a short random seed and output a single random variable in {0,1}^N that is statistically close to having entropy (1 − δ)·N when one of the k input variables is distributed uniformly. The seed we require is only (1/δ)·log k bits long, which significantly improves upon previous constructions of mergers. The “method of multiplicities”, as used in prior work, analyzed subsets of vector spaces over finite fields by constructing somewhat low degree interpolating polynomials that vanish on every point in the subset with high multiplicity. The typical use of this method involved showing that the interpolating polynomial also vanished on some points outside the subset, and then used simple …
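The Kakeya bound in item 1 can be sanity-checked by brute force in the smallest interesting case, F_3^2, where the whole plane has only 9 points. The code below enumerates every subset containing a full line in each direction and compares the minimum size with the abstract's q^n/2^n bound (here 9/4, so any Kakeya set has at least 3 points); the direction list is specific to q = 3, n = 2.

```python
# Brute-force Kakeya sets in the toy plane F_3^2.
from itertools import product

Q = 3
POINTS = list(product(range(Q), repeat=2))
# One representative direction vector per parallel class of lines in F_3^2.
DIRECTIONS = [(1, 0), (0, 1), (1, 1), (1, 2)]

def lines(d):
    """All Q parallel lines {p + t*d : t in F_Q} in direction d."""
    seen = set()
    for p in POINTS:
        seen.add(frozenset(((p[0] + t * d[0]) % Q, (p[1] + t * d[1]) % Q)
                           for t in range(Q)))
    return seen

def is_kakeya(s):
    """A Kakeya set contains a complete line in every direction."""
    return all(any(line <= s for line in lines(d)) for d in DIRECTIONS)

best = None
for mask in range(1 << len(POINTS)):          # 512 subsets of the plane
    s = frozenset(p for i, p in enumerate(POINTS) if mask >> i & 1)
    if is_kakeya(s) and (best is None or len(s) < len(best)):
        best = s

assert is_kakeya(best)
assert len(best) >= Q**2 / 2**2   # the abstract's q^n / 2^n lower bound
```

Even in this tiny example the true minimum sits well above the bound, consistent with the bound being tight only up to the stated 2 + o(1) factor as q grows.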
Calderbank R., Efficient and Robust Compressive Sensing using High-Quality Expander Graphs. Submitted to the IEEE Transactions on Information Theory
, 2008
Abstract

Cited by 12 (1 self)
Expander graphs have been recently proposed to construct efficient compressed sensing algorithms. In particular, it has been shown that any n-dimensional vector that is k-sparse (with k ≪ n) can be fully recovered using O(k log(n/k)) measurements and only O(k log n) simple recovery iterations. In this paper we improve upon this result by considering expander graphs with expansion coefficient beyond 3/4 and show that, with the same number of measurements, only O(k) recovery iterations are required, which is a significant improvement when n is large. In fact, full recovery can be accomplished by at most 2k very simple iterations. The number of iterations can be made arbitrarily close to k, and the recovery algorithm can be implemented very efficiently using a simple binary search tree. We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to come up with explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the recovery time complexity. Finally we show how our analysis extends to give a robust algorithm that finds the position and sign of the k significant elements of an almost k-sparse signal and then, using very simple optimization techniques, finds in sublinear time a k-sparse signal which approximates the original signal with very high precision.
Algorithmic results in list decoding
 In Foundations and Trends in Theoretical Computer Science (FnTTCS)
Abstract

Cited by 8 (3 self)
Error-correcting codes are used to cope with the corruption of data by noise during communication or storage. A code uses an encoding procedure that judiciously introduces redundancy into the data to produce an associated codeword. The redundancy built into the codewords enables one to decode the original data even from a somewhat distorted version of the codeword. The central tradeoff in coding theory is the one between the data rate (amount of non-redundant information per bit of codeword) and the error rate (the fraction of symbols that could be corrupted while still enabling data recovery). The traditional decoding algorithms did as badly at correcting any error pattern as they would do for the worst possible error pattern. This severely limited the maximum fraction of errors those algorithms could tolerate. In turn, this was the source of a big hiatus between the error-correction performance known for probabilistic noise models (pioneered by Shannon) and what was thought to be the limit for the more powerful, worst-case noise models (suggested by Hamming). In the last decade or so, there has been much algorithmic progress in coding theory that has bridged this gap (and in fact nearly eliminated it for codes over large alphabets). These developments rely on an error-recovery model called “list decoding,” wherein for the pathological error patterns, the decoder is permitted to output a small list of candidates that will include the original message. This book introduces and motivates the problem of list decoding, and discusses the central algorithmic results of the subject, culminating with the recent results on achieving “list decoding capacity.”
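The list-decoding model described above fits in a few lines for a toy code: a brute-force decoder that returns every codeword within a chosen Hamming radius. With a 3-fold repetition code (minimum distance 3), unique decoding handles only 1 error, but list decoding at radius 2 still returns a short list containing the transmitted codeword; the code and parameters are illustrative only.

```python
# Brute-force list decoding over a tiny code: 3-fold repetition of 2 bits.
from itertools import product

def encode(msg):
    """Repeat each message bit three times."""
    return tuple(b for b in msg for _ in range(3))

CODE = [encode(m) for m in product((0, 1), repeat=2)]   # 4 codewords, length 6

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def list_decode(received, radius):
    """Return every codeword within the given Hamming radius of `received`."""
    return [c for c in CODE if hamming(c, received) <= radius]

sent = encode((0, 0))                  # 000000
received = (1, 1, 0, 0, 0, 0)          # 2 errors: beyond unique-decoding radius 1
candidates = list_decode(received, radius=2)
assert sent in candidates              # the list still contains the true codeword
assert len(candidates) == 2            # small list: {000000, 111000}
```

Real list-decodable codes achieve this with polynomial-time algorithms rather than exhaustive search; the brute-force version only makes the decoding model itself concrete.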