Results 1 – 10 of 118
Fuzzy extractors: How to generate strong keys from biometrics and other noisy data
Yevgeniy Dodis, Leonid Reyzin, and Adam Smith
 Technical Report 2003/235, Cryptology ePrint Archive, http://eprint.iacr.org, 2006. Previous version appeared at EUROCRYPT 2004
, 2004
"... We provide formal definitions and efficient secure techniques for • turning noisy information into keys usable for any cryptographic application, and, in particular, • reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying mater ..."
Abstract

Cited by 318 (34 self)
We provide formal definitions and efficient secure techniques for • turning noisy information into keys usable for any cryptographic application, and, in particular, • reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a fuzzy extractor reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A secure sketch produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of “closeness” of input data, such as Hamming distance, edit distance, and set difference.
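One standard way to realize a secure sketch for the Hamming metric is the code-offset construction. A toy Python sketch using a length-15 repetition code; a real instantiation would use a stronger error-correcting code (e.g. BCH), and a fuzzy extractor would additionally apply a randomness extractor to the recovered w:

```python
import secrets

N = 15                              # repetition-code length; corrects up to 7 bit flips

def sketch(w):
    """Code-offset secure sketch: publish s = w XOR c for a random codeword c."""
    bit = secrets.randbits(1)
    c = [bit] * N                   # random codeword of the length-N repetition code
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, s):
    """Recover the original w from a nearby reading and the public sketch."""
    shifted = [wi ^ si for wi, si in zip(w_noisy, s)]   # equals c XOR (w XOR w_noisy)
    bit = 1 if sum(shifted) > N // 2 else 0             # majority-vote decoding
    return [bit ^ si for si in s]                       # re-encode c, then w = c XOR s

w = [secrets.randbits(1) for _ in range(N)]
s = sketch(w)
w_noisy = list(w)
for i in (2, 8, 11):                # flip 3 of 15 bits to simulate a noisy re-reading
    w_noisy[i] ^= 1
assert recover(w_noisy, s) == w
```

The sketch s leaks nothing about w beyond the code's redundancy, because c is uniform over the codewords; recovery works exactly as long as the reading stays within the decoding radius.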
Privacy-preserving set operations
 in Advances in Cryptology – CRYPTO 2005, LNCS
, 2005
"... In many important applications, a collection of mutually distrustful parties must perform private computation over multisets. Each party’s input to the function is his private input multiset. In order to protect these private sets, the players perform privacypreserving computation; that is, no part ..."
Abstract

Cited by 104 (0 self)
In many important applications, a collection of mutually distrustful parties must perform private computation over multisets. Each party’s input to the function is his private input multiset. In order to protect these private sets, the players perform privacy-preserving computation; that is, no party learns more information about other parties’ private input sets than what can be deduced from the result. In this paper, we propose efficient techniques for privacy-preserving operations on multisets. By employing the mathematical properties of polynomials, we build a framework of efficient, secure, and composable multiset operations: the union, intersection, and element reduction operations. We apply these techniques to a wide range of practical problems, achieving more efficient results than those of previous work.
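The polynomial representation behind this framework encodes a multiset as the roots of a polynomial, so that multiset union corresponds to polynomial multiplication. A plaintext sketch of the encoding only; the actual protocol performs these operations on encrypted coefficients under homomorphic encryption:

```python
def mul_linear(p, e):
    """Multiply polynomial p (coefficients, lowest degree first) by (x - e)."""
    q = [0] * (len(p) + 1)
    for i, pi in enumerate(p):
        q[i + 1] += pi               # x * p(x)
        q[i] -= e * pi               # -e * p(x)
    return q

def encode(multiset):
    """Represent a multiset S by the polynomial prod_{e in S} (x - e)."""
    p = [1]
    for e in multiset:
        p = mul_linear(p, e)
    return p

def poly_mul(p, q):
    """Schoolbook polynomial product."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def evaluate(p, x):
    """Horner evaluation of p at x."""
    acc = 0
    for c in reversed(p):
        acc = acc * x + c
    return acc

A, B = [2, 3, 3], [3, 5]
union = poly_mul(encode(A), encode(B))      # multiset union = polynomial product
assert evaluate(union, 3) == 0 and evaluate(union, 5) == 0   # members are roots
assert evaluate(union, 7) != 0                               # non-members are not
```

Multiplicities are preserved automatically: an element appearing k times in the union appears as a root of multiplicity k.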
Efficient lattice (H)IBE in the standard model
 In EUROCRYPT 2010, LNCS
, 2010
"... Abstract. We construct an efficient identity based encryption system based on the standard learning with errors (LWE) problem. Our security proof holds in the standard model. The key step in the construction is a family of lattices for which there are two distinct trapdoors for finding short vectors ..."
Abstract

Cited by 55 (12 self)
We construct an efficient identity-based encryption system based on the standard learning with errors (LWE) problem. Our security proof holds in the standard model. The key step in the construction is a family of lattices for which there are two distinct trapdoors for finding short vectors. One trapdoor enables the real system to generate short vectors in all lattices in the family. The other trapdoor enables the simulator to generate short vectors for all lattices in the family except for one. We extend this basic technique to an adaptively secure IBE and a Hierarchical IBE.
Circular-Secure Encryption from Decision Diffie-Hellman
, 2008
"... Let E be a publickey encryption system and let (pk i, ski) be public/private key pairs for E for i = 0,..., n. A natural question is whether E remains secure once an adversary obtains an encryption cycle, which consists of the encryption of ski under pk (i mod n)+1 for all i = 1,..., n. Surprisingl ..."
Abstract

Cited by 50 (5 self)
Let E be a public-key encryption system and let (pk_i, sk_i) be public/private key pairs for E for i = 0, ..., n. A natural question is whether E remains secure once an adversary obtains an encryption cycle, which consists of the encryption of sk_i under pk_{(i mod n)+1} for all i = 1, ..., n. Surprisingly, even strong notions of security such as chosen-ciphertext security appear to be insufficient for proving security in these settings. Since encryption cycles come up naturally in several applications, it is desirable to construct systems that remain secure in the presence of such cycles. Until now, all known constructions have only been proved secure in the random oracle model. We construct an encryption system that is circular-secure under the Decision Diffie-Hellman assumption, without relying on random oracles. Our proof of security holds even if the adversary obtains an encryption clique, that is, encryptions of sk_i under pk_j for all 0 ≤ i, j ≤ n. We also construct a circular counterexample: a one-way secure encryption scheme that becomes completely insecure if an encryption cycle of length 2 is published.
On ideal lattices and learning with errors over rings
 In Proc. of EUROCRYPT, volume 6110 of LNCS
, 2010
"... The “learning with errors ” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worstcase lattice problems, and in recent years it has served as the foundation for a pleth ..."
Abstract

Cited by 43 (8 self)
The “learning with errors” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications. Unfortunately, these applications are rather inefficient due to an inherent quadratic overhead in the use of LWE. A main open question was whether LWE and its applications could be made truly efficient by exploiting extra algebraic structure, as was done for lattice-based hash functions (and related primitives). We resolve this question in the affirmative by introducing an algebraic variant of LWE called ring-LWE, and proving that it too enjoys very strong hardness guarantees. Specifically, we show that the ring-LWE distribution is pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms. Applications include the first truly practical lattice-based public-key cryptosystem with an efficient security reduction; moreover, many of the other applications of LWE can be made much more efficient through the use of ring-LWE.
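The decision problem described here can be made concrete with a toy generator for plain-LWE samples; ring-LWE replaces the random vectors a with elements of a polynomial ring. Parameters below are chosen for readability, not security:

```python
import random

n, m, q = 8, 16, 97                # toy dimensions and modulus; real instances are far larger

def lwe_samples(s, rng):
    """Return m samples (a, b = <a, s> + e mod q) with noise e drawn from {-1, 0, 1}."""
    out = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]
        e = rng.choice([-1, 0, 1])             # toy narrow error distribution
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        out.append((a, b))
    return out

rng = random.Random()
s = [rng.randrange(q) for _ in range(n)]
samples = lwe_samples(s, rng)
for a, b in samples:
    # with the secret in hand, the noise is visibly small;
    # for truly uniform b this residue would be spread over all of Z_q
    assert (b - sum(ai * si for ai, si in zip(a, s))) % q in (0, 1, q - 1)
```

The distinguisher's task is exactly to tell such (a, b) pairs from pairs with uniform b, without knowing s.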
Rateless deluge: Over-the-air programming of wireless sensor networks using random linear codes
 in Proc. of the 7th Int. Conf. on Information Processing in Sensor Networks (IPSN)
, 2008
"... Abstract — Overtheair programming (OAP) is a fundamental service in sensor networks that relies upon reliable broadcast for efficient dissemination. As such, existing OAP protocols become decidedly inefficient (with respect to energy, communication or delay) in unreliable broadcast environments, s ..."
Abstract

Cited by 27 (6 self)
Over-the-air programming (OAP) is a fundamental service in sensor networks that relies upon reliable broadcast for efficient dissemination. As such, existing OAP protocols become decidedly inefficient (with respect to energy, communication, or delay) in unreliable broadcast environments, such as those with relatively high node density or noise. In this paper, we consider OAP approaches based on rateless codes, which significantly improve OAP in such environments by drastically reducing the need for packet rebroadcasting. We thus design and implement two rateless OAP protocols, rateless Deluge and ACKless Deluge, both of which replace the data transfer mechanism of the established OAP Deluge protocol with rateless analogs. Experiments with Tmote Sky motes on single-hop networks with packet loss rates of 7% show these protocols to save significantly in communication over regular Deluge (roughly 15–30% savings in the data plane, and 50–80% in the control plane), and multi-hop experiments reveal similar trends. Simulations further show that our new protocols scale better than standard Deluge (in terms of communication and energy) to high network density. TinyOS code for our implementation can be found at
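The rateless data-transfer idea is that each transmitted packet is a random linear combination over GF(2) of the k source packets, so any set of packets of rank k decodes, regardless of which transmissions were lost. A toy byte-level model (not the authors' TinyOS implementation):

```python
import random

def encode_packet(data, rng):
    """One rateless packet: a random GF(2) combination of the k source packets."""
    k = len(data)
    coeffs = [rng.randint(0, 1) for _ in range(k)]
    if not any(coeffs):
        coeffs[rng.randrange(k)] = 1        # avoid the useless all-zero combination
    payload = 0
    for c, d in zip(coeffs, data):
        if c:
            payload ^= d                    # XOR = addition in GF(2)
    return coeffs, payload

def decode(packets, k):
    """Gauss-Jordan elimination over GF(2); returns None until rank k is reached."""
    if len(packets) < k:
        return None
    rows = [(list(c), p) for c, p in packets]
    for col in range(k):
        pivot = next((i for i in range(col, len(rows)) if rows[i][0][col]), None)
        if pivot is None:
            return None                     # not yet full rank: wait for more packets
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           rows[i][1] ^ rows[col][1])
    return [rows[i][1] for i in range(k)]

data = [0xDE, 0xAD, 0xBE, 0xEF]             # four source packets (one byte each)
rng, received, recovered = random.Random(), [], None
while recovered is None:                    # keep streaming until decodable,
    received.append(encode_packet(data, rng))   # no per-packet ACKs needed
    recovered = decode(received, len(data))
assert recovered == data
```

This is what removes the rebroadcast pressure: a receiver never needs a specific missing packet, only "enough" independent ones.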
Functional Encryption for Inner Product Predicates from Learning with Errors
, 2011
"... We propose a latticebased functional encryption scheme for inner product predicates whose security follows from the difficulty of the learning with errors (LWE) problem. This construction allows us to achieve applications such as range and subset queries, polynomial evaluation, and CNF/DNF formulas ..."
Abstract

Cited by 21 (7 self)
We propose a lattice-based functional encryption scheme for inner product predicates whose security follows from the difficulty of the learning with errors (LWE) problem. This construction allows us to achieve applications such as range and subset queries, polynomial evaluation, and CNF/DNF formulas on encrypted data. Our scheme supports inner products over small fields, in contrast to earlier works based on bilinear maps. Our construction is the first functional encryption scheme based on lattice techniques that goes beyond basic identity-based encryption. The main technique in our scheme is a novel twist to the identity-based encryption scheme of Agrawal, Boneh and Boyen (Eurocrypt 2010). Our scheme is weakly attribute hiding in the standard model.
Resource Fairness and Composability of Cryptographic Protocols
 In Cryptology ePrint Archive, http://eprint.iacr.org/2005/370
"... Abstract. We introduce the notion of resourcefair protocols. Informally, this property states that if one party learns the output of the protocol, then so can all other parties, as long as they expend roughly the same amount of resources. As opposed to similar previously proposed definitions, our d ..."
Abstract

Cited by 20 (1 self)
We introduce the notion of resource-fair protocols. Informally, this property states that if one party learns the output of the protocol, then so can all other parties, as long as they expend roughly the same amount of resources. As opposed to similar previously proposed definitions, our definition follows the standard simulation paradigm and enjoys strong composability properties. In particular, our definition is similar to the security definition in the universal composability (UC) framework, but works in a model that allows any party to request additional resources from the environment to deal with dishonest parties that may prematurely abort. In this model we specify the ideally fair functionality as allowing parties to “invest resources” in return for outputs, but in such an event offering all other parties a fair deal. (The formulation of fair dealings is kept independent of any particular functionality, by defining it using a “wrapper.”) Thus, by relaxing the notion of fairness, we avoid a well-known impossibility result for fair multiparty computation with corrupted majority; in particular, our definition admits constructions that tolerate an arbitrary number of corruptions. We also show that, as in the UC framework, protocols in our framework may be arbitrarily and concurrently composed. Turning to constructions, we define a “commit-prove-fair-open” functionality and design an efficient resource-fair protocol that securely realizes it, using a new variant of a cryptographic primitive known as “timelines.” With (the fairly wrapped version of) this functionality we show that some of the existing secure multiparty computation protocols can be easily transformed into resource-fair protocols while preserving their security.
The parity problem in the presence of noise, decoding random linear codes, and the subset sum problem
 In RANDOM
, 2005
"... Abstract. In [2], Blum et al. demonstrated the first subexponential algorithm for learning the parity function in the presence of noise. They solved the lengthn parity problem in time 2 O(n / log n) but it required the availability of 2 O(n / log n) labeled examples. As an open problem, they asked ..."
Abstract

Cited by 18 (2 self)
In [2], Blum et al. demonstrated the first subexponential algorithm for learning the parity function in the presence of noise. They solved the length-n parity problem in time 2^(O(n / log n)), but it required the availability of 2^(O(n / log n)) labeled examples. As an open problem, they asked whether there exists a 2^(o(n)) algorithm for the length-n parity problem that uses only poly(n) labeled examples. In this work, we provide a positive answer to this question. We show that there is an algorithm that solves the length-n parity problem in time 2^(O(n / log log n)) using n^(1+ε) labeled examples. This result immediately gives us a subexponential algorithm for decoding n × n^(1+ε) random binary linear codes (i.e., codes where the messages are n bits and the codewords are n^(1+ε) bits) in the presence of random noise. We are also able to extend the same techniques to provide a subexponential algorithm for dense instances of the random subset sum problem.
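The noisy-parity (LPN) instances the abstract refers to are straightforward to generate; what is hard is recovering the secret from them. A toy generator, with an illustrative noise rate not tied to the paper's asymptotics:

```python
import random

n, eta = 16, 0.125                 # toy secret length and noise rate

def parity_sample(s, rng):
    """One noisy parity example: a is uniform in {0,1}^n, b = <a, s> + e (mod 2),
    where the noise bit e is 1 with probability eta."""
    a = [rng.randint(0, 1) for _ in range(n)]
    e = 1 if rng.random() < eta else 0
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
    return a, b

rng = random.Random()
s = [rng.randint(0, 1) for _ in range(n)]
samples = [parity_sample(s, rng) for _ in range(2000)]
# the learner sees only (a, b) pairs; plain Gaussian elimination fails because
# roughly an eta fraction of the equations are wrong
flips = sum((b - sum(ai * si for ai, si in zip(a, s))) % 2 for a, b in samples)
assert flips < 1000                # far fewer than half the labels are flipped
```

Without noise (eta = 0) the secret falls to Gaussian elimination from n independent samples; the noise is what forces the subexponential techniques discussed above.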
Fast modular composition in any characteristic
, 2008
"... We give an algorithm for modular composition of degree n univariate polynomials over a finite field Fq requiring n 1+o(1) log 1+o(1) q bit operations; this had earlier been achieved in characteristic n o(1) by Umans (2008). As an application, we obtain a randomized algorithm for factoring degree n p ..."
Abstract

Cited by 17 (1 self)
We give an algorithm for modular composition of degree n univariate polynomials over a finite field F_q requiring n^(1+o(1)) log^(1+o(1)) q bit operations; this had earlier been achieved in characteristic n^(o(1)) by Umans (2008). As an application, we obtain a randomized algorithm for factoring degree n polynomials over F_q requiring (n^(1.5+o(1)) + n^(1+o(1)) log q) log^(1+o(1)) q bit operations, improving upon the methods of von zur Gathen & Shoup (1992) and Kaltofen & Shoup (1998). Our results also imply algorithms for irreducibility testing and computing minimal polynomials whose running times are best possible, up to lower order terms. As in Umans (2008), we reduce modular composition to certain instances of multipoint evaluation of multivariate polynomials. We then give an algorithm that solves this problem optimally (up to lower order terms), in arbitrary characteristic. The main idea is to lift to characteristic 0, apply a small number of rounds of multimodular reduction, and finish with a small number of multidimensional FFTs. The final evaluations are then reconstructed using the Chinese Remainder Theorem. As a bonus, we obtain a very efficient data structure supporting polynomial evaluation queries, which is of independent interest. Our algorithm uses techniques which are commonly employed in practice, so it may be competitive for real problem sizes. This contrasts with previous asymptotically fast methods relying on fast matrix multiplication. Supported by NSF DMS-0545904 (CAREER) and a Sloan Research Fellowship.
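For contrast with the paper's nearly linear-time result, the classical baseline for modular composition, Horner's rule with polynomial arithmetic mod h, is easy to state. A toy version over a small prime field (the modulus and test polynomials below are illustrative):

```python
q = 101                             # a small prime; polynomials live over F_q

def poly_mulmod(a, b, h):
    """Multiply a*b and reduce mod the monic polynomial h over F_q.
    Coefficient lists are lowest-degree first; h has degree n = len(h) - 1."""
    n = len(h) - 1
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % q
    for i in range(len(r) - 1, n - 1, -1):      # cancel coefficients of degree >= n
        c = r[i]
        if c:
            for j in range(len(h)):             # subtract c * x^(i-n) * h(x)
                r[i - n + j] = (r[i - n + j] - c * h[j]) % q
    r += [0] * max(0, n - len(r))
    return r[:n]

def compose_mod(f, g, h):
    """Naive Horner scheme for f(g(x)) mod h(x): deg(f) modular multiplications,
    i.e. the quadratic-in-n baseline that fast modular composition improves on."""
    acc = [0] * (len(h) - 1)
    for c in reversed(f):
        acc = poly_mulmod(acc, g, h)
        acc[0] = (acc[0] + c) % q
    return acc

# f = x^2 + 1, g = x + 2, h = x^3 + 2x + 1:  f(g(x)) = x^2 + 4x + 5, already of degree < 3
assert compose_mod([1, 0, 1], [2, 1], [1, 2, 0, 1]) == [5, 4, 1]
```

Each of the deg(f) steps costs a full polynomial multiplication mod h, which is the quadratic overhead the multipoint-evaluation reduction above avoids.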