Results 1–10 of 43
Fully homomorphic encryption using ideal lattices
In Proc. STOC, 2009
"... We propose a fully homomorphic encryption scheme – i.e., a scheme that allows one to evaluate circuits over encrypted data without being able to decrypt. Our solution comes in three steps. First, we provide a general result – that, to construct an encryption scheme that permits evaluation of arbitra ..."
Cited by 300 (13 self)
We propose a fully homomorphic encryption scheme – i.e., a scheme that allows one to evaluate circuits over encrypted data without being able to decrypt. Our solution comes in three steps. First, we provide a general result – that, to construct an encryption scheme that permits evaluation of arbitrary circuits, it suffices to construct an encryption scheme that can evaluate (slightly augmented versions of) its own decryption circuit; we call a scheme that can evaluate its (augmented) decryption circuit bootstrappable. Next, we describe a public key encryption scheme using ideal lattices that is almost bootstrappable. Lattice-based cryptosystems typically have decryption algorithms with low circuit complexity, often dominated by an inner product computation that is in NC1. Also, ideal lattices provide both additive and multiplicative homomorphisms (modulo a public-key ideal in a polynomial ring that is represented as a lattice), as needed to evaluate general circuits. Unfortunately, our initial scheme is not quite bootstrappable – i.e., the depth that the scheme can correctly evaluate can be logarithmic in the lattice dimension, just like the depth of the decryption circuit, but the latter is greater than the former. In the final step, we show how to modify the scheme to reduce the depth of the decryption circuit, and thereby obtain a bootstrappable encryption scheme, without reducing the depth that the scheme can evaluate. Abstractly, we accomplish this by enabling the encrypter to start the decryption process, leaving less work for the decrypter, much like the server leaves less work for the decrypter in a server-aided cryptosystem.
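The depth limitation described above (noise in the ciphertext grows with each homomorphic operation until decryption fails) can be illustrated with a deliberately insecure toy scheme over the integers, in the spirit of the later van Dijk–Gentry–Halevi–Vaikuntanathan variant rather than the ideal-lattice scheme itself; all parameter sizes below are illustrative only.

```python
import random

def keygen(eta=64):
    # Secret key: a random odd integer p of about eta bits.
    return random.getrandbits(eta) | 1 | (1 << (eta - 1))

def encrypt(p, m, rho=8, gamma=256):
    # Ciphertext c = q*p + 2*r + m: the bit m is hidden under a
    # small even noise term 2*r and a large random multiple of p.
    q = random.getrandbits(gamma)
    r = random.randint(-(2 ** rho), 2 ** rho)
    return q * p + 2 * r + m

def decrypt(p, c):
    # Centered reduction mod p recovers the noise term 2*r + m
    # (as long as it is smaller than p/2); its parity is the bit.
    z = c % p
    if z > p // 2:
        z -= p
    return z % 2

# Adding ciphertexts adds the noise terms (homomorphic XOR of bits);
# multiplying ciphertexts multiplies the noise (homomorphic AND).
# Each multiplication roughly squares the noise, so only circuits of
# limited multiplicative depth decrypt correctly -- this is the
# obstacle that bootstrapping is designed to remove.
```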
Efficient lattice (H)IBE in the standard model
In EUROCRYPT 2010, LNCS, 2010
"... Abstract. We construct an efficient identity based encryption system based on the standard learning with errors (LWE) problem. Our security proof holds in the standard model. The key step in the construction is a family of lattices for which there are two distinct trapdoors for finding short vectors ..."
Cited by 55 (12 self)
We construct an efficient identity-based encryption system based on the standard learning with errors (LWE) problem. Our security proof holds in the standard model. The key step in the construction is a family of lattices for which there are two distinct trapdoors for finding short vectors. One trapdoor enables the real system to generate short vectors in all lattices in the family. The other trapdoor enables the simulator to generate short vectors for all lattices in the family except for one. We extend this basic technique to an adaptively-secure IBE and a Hierarchical IBE.
Efficient collision-resistant hashing from worst-case assumptions on cyclic lattices
In TCC, 2006
"... Abstract The generalized knapsack function is defined as fa(x) = Pi ai * xi, where a = (a1,..., am)consists of m elements from some ring R, and x = (x1,..., xm) consists of m coefficients froma specified subset S ` R. Micciancio (FOCS 2002) proposed a specific choice of the ring R andsubset S for w ..."
Cited by 47 (14 self)
The generalized knapsack function is defined as f_a(x) = Σ_i a_i · x_i, where a = (a_1, ..., a_m) consists of m elements from some ring R, and x = (x_1, ..., x_m) consists of m coefficients from a specified subset S ⊆ R. Micciancio (FOCS 2002) proposed a specific choice of the ring R and subset S for which inverting this function (for random a, x) is at least as hard as solving certain worst-case problems on cyclic lattices. We show that for a different choice of S ⊂ R, the generalized knapsack function is in fact collision-resistant, assuming it is infeasible to approximate the shortest vector in n-dimensional cyclic lattices up to factors Õ(n). For slightly larger factors, we even get collision-resistance for any m ≥ 2. This yields very efficient collision-resistant hash functions having key size and time complexity almost linear in the security parameter n. We also show that altering S is necessary, in the sense that Micciancio's original function is not collision-resistant (nor even universal one-way). Our results exploit an intimate connection between the linear algebra of n-dimensional cyclic lattices and the ring Z[α]/(α^n − 1), and crucially depend on the factorization of α^n − 1 into irreducible cyclotomic polynomials. We also establish a new bound on the discrete Gaussian distribution over general lattices, employing techniques introduced by Micciancio and Regev (FOCS 2004) and also used by Micciancio in his study of compact knapsacks.

1 Introduction

A function family {f_a}_{a∈A} is said to be collision-resistant if, given a uniformly chosen a ∈ A, it is infeasible to find elements x_1 ≠ x_2 so that f_a(x_1) = f_a(x_2). Collision-resistant hash functions are one of the most widely employed cryptographic primitives. Their applications include integrity checking, user and message authentication, commitment protocols, and more. Many of the applications of collision-resistant hashing tend to invoke the hash function only a small number of times. Thus, the efficiency of the function has a direct effect on the efficiency of the application that uses it. This is in contrast to primitives such as one-way functions, which typically must be invoked many times in their applications (at least when used in a black-box way) [9].
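As a concrete (and cryptographically toy) illustration of the generalized knapsack function over the cyclic ring, the sketch below computes f_a(x) = Σ_i a_i · x_i in Z_p[α]/(α^n − 1), where ring multiplication is cyclic convolution; the tiny parameters are for illustration only and are far too small to be secure.

```python
def ring_mul(a, x, n, p):
    # Multiply two elements of Z_p[alpha]/(alpha^n - 1):
    # polynomial multiplication with exponents wrapped mod n,
    # i.e. cyclic convolution of the coefficient vectors.
    out = [0] * n
    for i in range(n):
        for j in range(n):
            out[(i + j) % n] = (out[(i + j) % n] + a[i] * x[j]) % p
    return out

def knapsack_hash(a_list, x_list, n, p):
    # f_a(x) = sum_i a_i * x_i in the ring; the key is a_list,
    # the input x_list should have small coefficients (e.g. {0,1}).
    acc = [0] * n
    for a, x in zip(a_list, x_list):
        prod = ring_mul(a, x, n, p)
        acc = [(s + t) % p for s, t in zip(acc, prod)]
    return acc
```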
On ideal lattices and learning with errors over rings
In Proc. of EUROCRYPT, volume 6110 of LNCS, 2010
"... The “learning with errors ” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worstcase lattice problems, and in recent years it has served as the foundation for a pleth ..."
Cited by 43 (8 self)
The “learning with errors” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications. Unfortunately, these applications are rather inefficient due to an inherent quadratic overhead in the use of LWE. A main open question was whether LWE and its applications could be made truly efficient by exploiting extra algebraic structure, as was done for lattice-based hash functions (and related primitives). We resolve this question in the affirmative by introducing an algebraic variant of LWE called ring-LWE, and proving that it too enjoys very strong hardness guarantees. Specifically, we show that the ring-LWE distribution is pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms. Applications include the first truly practical lattice-based public-key cryptosystem with an efficient security reduction; moreover, many of the other applications of LWE can be made much more efficient through the use of ring-LWE.
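A minimal sketch of the plain (pre-ring) LWE distribution this abstract starts from: each sample is a uniform vector a together with the noisy inner product ⟨a, s⟩ + e mod q. Parameter names and sizes are illustrative, not taken from the paper.

```python
import random

def lwe_samples(n, m, q, s, noise_bound):
    # Each sample: (a, <a, s> + e mod q) with a uniform in Z_q^n
    # and e a small error drawn from [-noise_bound, noise_bound].
    samples = []
    for _ in range(m):
        a = [random.randrange(q) for _ in range(n)]
        e = random.randint(-noise_bound, noise_bound)
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return samples

def centered(z, q):
    # Representative of z mod q in (-q/2, q/2].
    z %= q
    return z - q if z > q // 2 else z
```

The “quadratic overhead” mentioned above is visible here: each sample spends n elements of Z_q (the vector a) to produce a single pseudorandom value b, whereas in ring-LWE one ring element yields n such values at once.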
Lattice-based Cryptography
2008
"... In this chapter we describe some of the recent progress in latticebased cryptography. Latticebased cryptographic constructions hold a great promise for postquantum cryptography, as they enjoy very strong security proofs based on worstcase hardness, relatively efficient implementations, as well a ..."
Cited by 36 (5 self)
In this chapter we describe some of the recent progress in lattice-based cryptography. Lattice-based cryptographic constructions hold great promise for post-quantum cryptography, as they enjoy very strong security proofs based on worst-case hardness, relatively efficient implementations, as well as great simplicity. In addition, lattice-based cryptography is believed to be secure against quantum computers. Our focus here ...
Homomorphic signatures for polynomial functions
2010
"... We construct the first homomorphic signature scheme that is capable of evaluating multivariate polynomials on signed data. Given the public key and a signed data set, there is an efficient algorithm to produce a signature on the mean, standard deviation, and other statistics of the signed data. Prev ..."
Cited by 30 (4 self)
We construct the first homomorphic signature scheme that is capable of evaluating multivariate polynomials on signed data. Given the public key and a signed data set, there is an efficient algorithm to produce a signature on the mean, standard deviation, and other statistics of the signed data. Previous systems for computing on signed data could only handle linear operations. For polynomials of constant degree, the length of a derived signature only depends logarithmically on the size of the data set. Our system uses ideal lattices in a way that is a “signature analogue” of Gentry’s fully homomorphic encryption. Security is based on hard problems on ideal lattices similar to those in Gentry’s system.
SWIFFT: A Modest Proposal for FFT Hashing
"... We propose SWIFFT, a collection of compression functions that are highly parallelizable and admit very efficient implementations on modern microprocessors. The main technique underlying our functions is a novel use of the Fast Fourier Transform (FFT) to achieve “diffusion, ” together with a linear ..."
Cited by 30 (11 self)
We propose SWIFFT, a collection of compression functions that are highly parallelizable and admit very efficient implementations on modern microprocessors. The main technique underlying our functions is a novel use of the Fast Fourier Transform (FFT) to achieve “diffusion,” together with a linear combination to achieve compression and “confusion.” We provide a detailed security analysis of concrete instantiations, and give a high-performance software implementation that exploits the inherent parallelism of the FFT algorithm. The throughput of our implementation is competitive with that of SHA-256, with additional parallelism yet to be exploited. Our functions are set apart from prior proposals (having comparable efficiency) by a supporting asymptotic security proof: it can be formally proved that finding a collision in a randomly-chosen function from the family (with noticeable probability) is at least as hard as finding short vectors in cyclic/ideal lattices in the worst case.
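The “FFT for diffusion, linear combination for confusion” structure can be sketched as follows, using a naive number-theoretic transform over Z_p in place of the actual SWIFFT instantiation (which works over Z_257 with 64-coefficient blocks); the parameters here are toy values chosen only so that a primitive n-th root of unity exists.

```python
def ntt(x, omega, p):
    # Naive number-theoretic transform: the "FFT" over Z_p,
    # where omega is a primitive len(x)-th root of unity mod p.
    n = len(x)
    return [sum(x[j] * pow(omega, i * j, p) for j in range(n)) % p
            for i in range(n)]

def compress(key, blocks, omega, p):
    # Diffusion: transform each input block with the NTT.
    # Confusion/compression: key-dependent linear combination
    # of the transformed blocks, coordinate-wise mod p.
    n = len(blocks[0])
    out = [0] * n
    for a, x in zip(key, blocks):
        y = ntt(x, omega, p)
        out = [(o + ai * yi) % p for o, ai, yi in zip(out, a, y)]
    return out
```

Because the NTT is linear and invertible, a collision in the combination step translates into a short solution to a knapsack-style lattice problem, which is the source of the security proof mentioned above.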
Lattice-Based Identification Schemes Secure Under Active Attacks
2008
"... There is an inherent difficulty in building 3move ID schemes based on combinatorial problems without much algebraic structure. A consequence of this, is that most standard ID schemes today are based on the hardness of number theory problems. Not having schemes based on alternate assumptions is a c ..."
Cited by 20 (6 self)
There is an inherent difficulty in building 3-move ID schemes based on combinatorial problems without much algebraic structure. A consequence of this is that most standard ID schemes today are based on the hardness of number theory problems. Not having schemes based on alternate assumptions is a cause for concern, since improved number-theoretic algorithms or the realization of quantum computing would make the known schemes insecure. In this work, we examine the possibility of creating identification protocols based on the hardness of lattice problems. We construct a 3-move identification scheme whose security is based on the worst-case hardness of the shortest vector problem in all lattices, and also present a more efficient version based on the hardness of the same problem in ideal lattices.
Lattices that admit logarithmic worst-case to average-case connection factors
In STOC, 2007
"... Abstract We demonstrate an averagecase problem which is as hard as finding fl(n)approximateshortest vectors in certain ndimensional lattices in the worst case, where fl(n) = O(plog n).The previously best known factor for any class of lattices was fl(n) = ~O(n).To obtain our results, we focus on ..."
Cited by 19 (9 self)
We demonstrate an average-case problem which is as hard as finding γ(n)-approximate shortest vectors in certain n-dimensional lattices in the worst case, where γ(n) = O(√(log n)). The previously best known factor for any class of lattices was γ(n) = Õ(n). To obtain our results, we focus on families of lattices having special algebraic structure. Specifically, we consider lattices that correspond to ideals in the ring of integers of an algebraic number field. The worst-case assumption we rely on is that in some ℓ_p length, it is hard to find approximate shortest vectors in these lattices, under an appropriate form of preprocessing of the number field. Our results build upon prior works by Micciancio (FOCS 2002), Peikert and Rosen (TCC 2006), and Lyubashevsky and Micciancio (ICALP 2006). For the connection factors γ(n) we achieve, the corresponding decisional promise problems on ideal lattices are not known to be NP-hard; in fact, they are in P. However, the search approximation problems still appear to be very hard. Indeed, ideal lattices are well-studied objects in computational number theory, and the best known algorithms for them seem to perform no better than the best known algorithms for general lattices. To obtain the best possible connection factor, we instantiate our constructions with infinite families of number fields having constant root discriminant. Such families are known to exist and are computable, though no efficient construction is yet known. Our work motivates the search for such constructions. Even constructions of number fields having root discriminant up to O(n^(2/3−ε)) would yield connection factors better than the current best of Õ(n).