Results 1–10 of 126
Fully homomorphic encryption using ideal lattices
In Proc. STOC, 2009
Cited by 663 (17 self)
Abstract: We propose a fully homomorphic encryption scheme – i.e., a scheme that allows one to evaluate circuits over encrypted data without being able to decrypt. Our solution comes in three steps. First, we provide a general result – that, to construct an encryption scheme that permits evaluation of arbitrary circuits, it suffices to construct an encryption scheme that can evaluate (slightly augmented versions of) its own decryption circuit; we call a scheme that can evaluate its (augmented) decryption circuit bootstrappable. Next, we describe a public-key encryption scheme using ideal lattices that is almost bootstrappable. Lattice-based cryptosystems typically have decryption algorithms with low circuit complexity, often dominated by an inner product computation that is in NC1. Also, ideal lattices provide both additive and multiplicative homomorphisms (modulo a public-key ideal in a polynomial ring that is represented as a lattice), as needed to evaluate general circuits. Unfortunately, our initial scheme is not quite bootstrappable – i.e., the depth that the scheme can correctly evaluate can be logarithmic in the lattice dimension, just like the depth of the decryption circuit, but the latter is greater than the former. In the final step, we show how to modify the scheme to reduce the depth of the decryption circuit, and thereby obtain a bootstrappable encryption scheme, without reducing the depth that the scheme can evaluate. Abstractly, we accomplish this by enabling the encrypter to start the decryption process, leaving less work for the decrypter, much like the server leaves less work for the decrypter in a server-aided cryptosystem.
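The two homomorphisms and the noise growth that make bootstrapping necessary can be illustrated with a far simpler toy scheme over the integers (in the spirit of later "FHE over the integers" constructions, not the ideal-lattice scheme of this paper). All parameters below are illustrative and wildly insecure:

```python
import random

# Toy symmetric somewhat-homomorphic encryption of single bits.
# NOT the ideal-lattice scheme described above; it only illustrates the
# shared idea: ciphertexts support + and *, and each operation grows the
# embedded noise until decryption fails -- the failure bootstrapping fixes.
p = 10007                        # secret key: a large odd integer

def enc(bit, rng=random):
    q = rng.randrange(1, 10**6)  # random multiple of the key
    r = rng.randint(-10, 10)     # small noise term
    return p * q + 2 * r + bit

def dec(c):
    m = c % p
    if m > p // 2:               # centered reduction mod p
        m -= p
    return m % 2                 # correct while |2r + bit| < p/2

c0, c1 = enc(0), enc(1)
print(dec(c0 + c1))   # addition of ciphertexts = XOR of bits -> 1
print(dec(c0 * c1))   # multiplication of ciphertexts = AND of bits -> 0
```

Addition roughly doubles the noise while multiplication squares it, so only circuits of limited depth decrypt correctly, mirroring the "not quite bootstrappable" obstacle in the abstract.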
On Lattices, Learning with Errors, Random Linear Codes, and Cryptography
In STOC, 2005
Cited by 364 (6 self)
Abstract: Our main result is a reduction from worst-case lattice problems such as SVP and SIVP to a certain learning problem. This learning problem is a natural extension of the ‘learning from parity with error’ problem to higher moduli. It can also be viewed as the problem of decoding from a random linear code. This, we believe, gives a strong indication that these problems are hard. Our reduction, however, is quantum. Hence, an efficient solution to the learning problem implies a quantum algorithm for SVP and SIVP. A main open question is whether this reduction can be made classical. We also present a (classical) public-key cryptosystem whose security is based on the hardness of the learning problem. By the main result, its security is also based on the worst-case quantum hardness of SVP and SIVP. Previous lattice-based public-key cryptosystems such as the one by Ajtai and Dwork were based only on unique-SVP, a special case of SVP. The new cryptosystem is much more efficient than previous cryptosystems: the public key is of size Õ(n²) and encrypting a message increases its size by a factor of Õ(n) (in previous cryptosystems these values are Õ(n⁴) and Õ(n²), respectively). In fact, under the assumption that all parties share a random bit string of length Õ(n²), the size of the public key can be reduced to Õ(n).
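The learning problem (LWE) is easy to state concretely: given pairs (a, b) with b = ⟨a, s⟩ + e mod q for a uniform a and small noise e, recover or detect the secret s. A minimal sample generator, with toy parameters chosen for readability rather than security:

```python
import random

# Toy LWE sample generator (illustrative only; these parameters are far
# too small to be secure). Each sample is (a, b) with
#   b = <a, s> + e  (mod q),
# where a is uniform in Z_q^n, s is the secret, and e is small noise.
def lwe_samples(s, q, num, noise=1, rng=random):
    samples = []
    for _ in range(num):
        a = [rng.randrange(q) for _ in s]
        e = rng.randint(-noise, noise)          # small error term
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return samples

q = 97
s = [3, 14, 15, 92]            # hypothetical secret vector
for a, b in lwe_samples(s, q, 3):
    print(a, b)                # without s, (a, b) looks uniform
```

Without the noise e, Gaussian elimination over Z_q would recover s from n samples; the noise is exactly what makes the problem hard.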
Trapdoors for Hard Lattices and New Cryptographic Constructions
2007
Cited by 191 (26 self)
Abstract: We show how to construct a variety of “trapdoor” cryptographic tools assuming the worst-case hardness of standard lattice problems (such as approximating the shortest nonzero vector to within small factors). The applications include trapdoor functions with preimage sampling, simple and efficient “hash-and-sign” digital signature schemes, universally composable oblivious transfer, and identity-based encryption. A core technical component of our constructions is an efficient algorithm that, given a basis of an arbitrary lattice, samples lattice points from a Gaussian-like probability distribution whose standard deviation is essentially the length of the longest vector in the basis. In particular, the crucial security property is that the output distribution of the algorithm is oblivious to the particular geometry of the given basis.
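The lattice sampler described above is built from a one-dimensional primitive: sampling an integer from a discrete Gaussian. A minimal rejection sampler for that building block (this is only the 1-D integer case, not the full basis-oblivious algorithm, and the `tail` cutoff is an illustrative choice):

```python
import math
import random

# Sample an integer from a discrete Gaussian of parameter sigma centered
# at `center`, by rejection from a uniform proposal on a truncated range.
# Probability mass is proportional to rho(x) = exp(-(x - c)^2 / (2 sigma^2)).
def sample_discrete_gaussian(sigma, center=0.0, tail=12, rng=random):
    lo = math.floor(center - tail * sigma)      # truncate negligible tails
    hi = math.ceil(center + tail * sigma)
    while True:
        x = rng.randint(lo, hi)                 # uniform proposal
        rho = math.exp(-(x - center) ** 2 / (2 * sigma ** 2))
        if rng.random() < rho:                  # accept with prob. rho(x)
            return x
```

Rejection sampling is simple but leaks timing information; the point of the paper's algorithm is the stronger property that the output distribution reveals nothing about which basis was used.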
Public-key cryptosystems from the worst-case shortest vector problem
2008
Cited by 152 (22 self)
Abstract: We construct public-key cryptosystems that are secure assuming the worst-case hardness of approximating the length of a shortest nonzero vector in an n-dimensional lattice to within a small poly(n) factor. Prior cryptosystems with worst-case connections were based either on the shortest vector problem for a special class of lattices (Ajtai and Dwork, STOC 1997; Regev, J. ACM 2004), or on the conjectured hardness of lattice problems for quantum algorithms (Regev, STOC 2005). Our main technical innovation is a reduction from certain variants of the shortest vector problem to corresponding versions of the “learning with errors” (LWE) problem; previously, only a quantum reduction of this kind was known. In addition, we construct new cryptosystems based on the search version of LWE, including a very natural chosen-ciphertext-secure system that has a much simpler description and tighter underlying worst-case approximation factor than prior constructions.
On ideal lattices and learning with errors over rings
In Proc. of EUROCRYPT, volume 6110 of LNCS, 2010
Cited by 125 (18 self)
Abstract: The “learning with errors” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications. Unfortunately, these applications are rather inefficient due to an inherent quadratic overhead in the use of LWE. A main open question was whether LWE and its applications could be made truly efficient by exploiting extra algebraic structure, as was done for lattice-based hash functions (and related primitives). We resolve this question in the affirmative by introducing an algebraic variant of LWE called ring-LWE, and proving that it too enjoys very strong hardness guarantees. Specifically, we show that the ring-LWE distribution is pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms. Applications include the first truly practical lattice-based public-key cryptosystem with an efficient security reduction; moreover, many of the other applications of LWE can be made much more efficient through the use of ring-LWE.
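The "extra algebraic structure" amounts to replacing vectors over Z_q with polynomials in a ring such as R_q = Z_q[x]/(xⁿ + 1), so that one ring sample carries as much information as n scalar LWE samples. A toy sketch with insecure illustrative parameters:

```python
import random

# Toy ring-LWE sample in R_q = Z_q[x]/(x^n + 1); coefficients are lists
# of length n. Illustrative parameters only -- far too small to be secure.
def ring_mul(f, g, q):
    """Multiply two polynomials of degree < n modulo x^n + 1 and q."""
    n = len(f)
    res = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            k = i + j
            if k < n:
                res[k] = (res[k] + fi * gj) % q
            else:                       # x^n = -1, so wrap with a sign flip
                res[k - n] = (res[k - n] - fi * gj) % q
    return res

def rlwe_sample(s, q, noise=1, rng=random):
    """One sample (a, b) with b = a*s + e in R_q; e has small coefficients."""
    n = len(s)
    a = [rng.randrange(q) for _ in range(n)]
    e = [rng.randint(-noise, noise) % q for _ in range(n)]
    return a, [(x + y) % q for x, y in zip(ring_mul(a, s, q), e)]

q = 97
s = [1, 0, 96, 2]                      # hypothetical small secret (96 = -1 mod q)
a, b = rlwe_sample(s, q)
```

One sample here replaces n = 4 scalar LWE equations, which is the source of the efficiency gain the abstract describes; real deployments would also use FFT-style multiplication instead of this quadratic loop.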
Simultaneous hardcore bits and cryptography against memory attacks
In TCC, 2009
Cited by 116 (11 self)
Abstract: This paper considers two questions in cryptography. Cryptography Secure Against Memory Attacks. A particularly devastating side-channel attack against cryptosystems, termed the “memory attack”, was proposed recently. In this attack, a significant fraction of the bits of a secret key of a cryptographic algorithm can be measured by an adversary if the secret key is ever stored in a part of memory which can be accessed even after power has been turned off for a short amount of time. Such an attack has been shown to completely compromise the security of various cryptosystems in use, including the RSA cryptosystem and AES. We show that the public-key encryption scheme of Regev (STOC 2005), and the identity-based encryption scheme of Gentry, Peikert and Vaikuntanathan (STOC 2008) are remarkably robust against memory attacks where the adversary can measure a large fraction of the bits of the secret key, or, more generally, can compute an arbitrary function of the secret key of bounded output length. This is done without increasing the size of the secret key, and without introducing any …
Better key sizes (and attacks) for LWE-based encryption
In CT-RSA, 2011
Cited by 71 (7 self)
Abstract: We analyze the concrete security and key sizes of theoretically sound lattice-based encryption schemes based on the “learning with errors” (LWE) problem. Our main contributions are: (1) a new lattice attack on LWE that combines basis reduction with an enumeration algorithm admitting a time/success tradeoff, which performs better than the simple distinguishing attack considered in prior analyses; (2) concrete parameters and security estimates for an LWE-based cryptosystem that is more compact and efficient than the well-known schemes from the literature. Our new key sizes are up to 10 times smaller than prior examples, while providing even stronger concrete security levels.
Lattice-based Cryptography
2008
Cited by 66 (5 self)
Abstract: In this chapter we describe some of the recent progress in lattice-based cryptography. Lattice-based cryptographic constructions hold a great promise for post-quantum cryptography, as they enjoy very strong security proofs based on worst-case hardness, relatively efficient implementations, as well as great simplicity. In addition, lattice-based cryptography is believed to be secure against quantum computers. Our focus here …