Results 1–10 of 36
Trapdoors for Hard Lattices and New Cryptographic Constructions
, 2007
"... We show how to construct a variety of “trapdoor ” cryptographic tools assuming the worstcase hardness of standard lattice problems (such as approximating the shortest nonzero vector to within small factors). The applications include trapdoor functions with preimage sampling, simple and efficient “ha ..."
Cited by 104 (20 self)
Abstract
We show how to construct a variety of “trapdoor” cryptographic tools assuming the worst-case hardness of standard lattice problems (such as approximating the shortest nonzero vector to within small factors). The applications include trapdoor functions with preimage sampling, simple and efficient “hash-and-sign” digital signature schemes, universally composable oblivious transfer, and identity-based encryption. A core technical component of our constructions is an efficient algorithm that, given a basis of an arbitrary lattice, samples lattice points from a Gaussian-like probability distribution whose standard deviation is essentially the length of the longest vector in the basis. In particular, the crucial security property is that the output distribution of the algorithm is oblivious to the particular geometry of the given basis.
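The sampler described above works over an arbitrary lattice basis; as a minimal illustrative sketch (my own, not the paper's algorithm), the following samples a discrete Gaussian over the one-dimensional lattice Z by rejection sampling. The width s and the tail cutoff are illustrative assumptions.

    import math
    import random

    def sample_discrete_gaussian(s: float, center: float = 0.0) -> int:
        """Sample z in Z with probability proportional to exp(-pi*(z - center)^2 / s^2)."""
        tail = int(math.ceil(10 * s))  # ignore the negligible mass beyond ~10 widths
        while True:
            z = random.randint(int(center) - tail, int(center) + tail)
            if random.random() < math.exp(-math.pi * (z - center) ** 2 / s ** 2):
                return z

    # Empirical mean is near the center, as the Gaussian shape dictates.
    samples = [sample_discrete_gaussian(s=4.0) for _ in range(10000)]
    print(sum(samples) / len(samples))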
Public-key cryptosystems from the worst-case shortest vector problem
, 2008
"... We construct publickey cryptosystems that are secure assuming the worstcase hardness of approximating the length of a shortest nonzero vector in an ndimensional lattice to within a small poly(n) factor. Prior cryptosystems with worstcase connections were based either on the shortest vector probl ..."
Cited by 84 (18 self)
Abstract
We construct public-key cryptosystems that are secure assuming the worst-case hardness of approximating the length of a shortest nonzero vector in an n-dimensional lattice to within a small poly(n) factor. Prior cryptosystems with worst-case connections were based either on the shortest vector problem for a special class of lattices (Ajtai and Dwork, STOC 1997; Regev, J. ACM 2004), or on the conjectured hardness of lattice problems for quantum algorithms (Regev, STOC 2005). Our main technical innovation is a reduction from certain variants of the shortest vector problem to corresponding versions of the “learning with errors” (LWE) problem; previously, only a quantum reduction of this kind was known. In addition, we construct new cryptosystems based on the search version of LWE, including a very natural chosen-ciphertext-secure system that has a much simpler description and tighter underlying worst-case approximation factor than prior constructions.
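To make the LWE structure concrete, here is a toy bit-encryption sketch in the style of Regev-type LWE schemes; the parameters n, q, m and the noise bound are illustrative toy values, far too small for any security.

    import random

    n, q, m = 16, 3329, 64                      # toy dimension, modulus, sample count

    def noise() -> int:
        return random.randint(-2, 2)            # small error term

    # Key generation: secret s; public LWE samples (a_i, b_i = <a_i, s> + e_i mod q).
    s = [random.randrange(q) for _ in range(n)]
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    b = [(sum(a * x for a, x in zip(row, s)) + noise()) % q for row in A]

    def encrypt(bit: int):
        """Sum a random subset of the samples; shift by q/2 to encode a 1."""
        subset = [i for i in range(m) if random.random() < 0.5]
        u = [sum(A[i][j] for i in subset) % q for j in range(n)]
        v = (sum(b[i] for i in subset) + bit * (q // 2)) % q
        return u, v

    def decrypt(ct) -> int:
        u, v = ct
        d = (v - sum(a * x for a, x in zip(u, s))) % q
        return 1 if q // 4 < d < 3 * q // 4 else 0   # closer to q/2 than to 0?

    assert all(decrypt(encrypt(bit)) == bit for bit in (0, 1) for _ in range(100))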
Worst-case to average-case reductions based on Gaussian measures
 SIAM J. on Computing
, 2004
"... We show that finding small solutions to random modular linear equations is at least as hard as approximating several lattice problems in the worst case within a factor almost linear in the dimension of the lattice. The lattice problems we consider are the shortest vector problem, the shortest indepe ..."
Cited by 83 (16 self)
Abstract
We show that finding small solutions to random modular linear equations is at least as hard as approximating several lattice problems in the worst case within a factor almost linear in the dimension of the lattice. The lattice problems we consider are the shortest vector problem, the shortest independent vectors problem, the covering radius problem, and the guaranteed distance decoding problem (a variant of the well-known closest vector problem). The approximation factor we obtain is n·log^O(1) n for all four problems. This greatly improves on all previous work on the subject, starting from Ajtai's seminal paper (STOC, 1996), up to the strongest previously known results by Micciancio (SIAM J. on Computing, 2004). Our results also bring us closer to the limit where the problems are no longer known to be in NP ∩ coNP. Our main tools are Gaussian measures on lattices and the high-dimensional Fourier transform. We start by defining a new lattice parameter which determines the amount of Gaussian noise that one has to add to a lattice in order to get close to a uniform distribution. In addition to yielding quantitatively much stronger results, the use of this parameter allows us to simplify many of the complications in previous work. Our technical contributions are twofold. First, we show tight connections between this new parameter and existing lattice parameters. One such important connection is between this parameter and the length of the shortest set of linearly independent vectors. Second, we prove that the distribution that one obtains after adding Gaussian noise to the lattice has the following interesting property: the distribution of the noise vector, when conditioned on the final value, behaves in many respects like the original Gaussian noise vector. In particular, its moments remain essentially unchanged.
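The "amount of Gaussian noise needed to get close to uniform" is what later literature calls the smoothing parameter. A quick numerical illustration of the phenomenon (my own, using the one-dimensional lattice Z, so reducing modulo the lattice is just taking fractional parts):

    import random

    def noisy_point_mod_lattice(sigma: float) -> float:
        """A lattice point (0) plus continuous Gaussian noise, reduced mod Z."""
        return random.gauss(0.0, sigma) % 1.0

    for sigma in (0.1, 0.3, 1.0):
        xs = [noisy_point_mod_lattice(sigma) for _ in range(100000)]
        # Crude uniformity check: a uniform distribution on [0, 1) puts mass 0.5
        # on [0.25, 0.75), while a narrow Gaussian mod 1 concentrates near 0 and 1.
        frac = sum(0.25 <= x < 0.75 for x in xs) / len(xs)
        print(f"sigma={sigma}: mass on [0.25, 0.75) = {frac:.3f}")
    # The printed mass approaches 0.500 once sigma passes the smoothing threshold.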
Lossy Trapdoor Functions and Their Applications
 ELECTRONIC COLLOQUIUM ON COMPUTATIONAL COMPLEXITY, REPORT NO. 80 (2007)
, 2007
"... We propose a new general primitive called lossy trapdoor functions (lossy TDFs), and realize it under a variety of different number theoretic assumptions, including hardness of the decisional DiffieHellman (DDH) problem and the worstcase hardness of standard lattice problems. Using lossy TDFs, we ..."
Cited by 79 (17 self)
Abstract
We propose a new general primitive called lossy trapdoor functions (lossy TDFs), and realize it under a variety of different number-theoretic assumptions, including hardness of the decisional Diffie-Hellman (DDH) problem and the worst-case hardness of standard lattice problems. Using lossy TDFs, we develop a new approach for constructing many important cryptographic primitives, including standard trapdoor functions, CCA-secure cryptosystems, collision-resistant hash functions, and more. All of our constructions are simple, efficient, and black-box. Taken all together, these results resolve some long-standing open problems in cryptography. They give the first known (injective) trapdoor functions based on problems not directly related to integer factorization, and provide the first known CCA-secure cryptosystem based solely on worst-case lattice assumptions.
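A toy illustration of the lossiness idea (mine, not one of the paper's actual instantiations): evaluate f(x) = A·x mod q on bit-vectors x. With an invertible A the map is injective and A's inverse serves as the trapdoor; with a rank-one A, most information about x is destroyed. Real constructions make the two kinds of public key computationally indistinguishable under DDH or LWE.

    import itertools
    import random

    q, n = 17, 8                                   # toy prime modulus and dimension

    def evaluate(A, x):
        return tuple(sum(A[i][j] * x[j] for j in range(n)) % q for i in range(n))

    injective = [[int(i == j) for j in range(n)] for i in range(n)]   # identity: invertible
    r = [random.randrange(1, q) for _ in range(n)]
    lossy = [[r[i] * r[j] % q for j in range(n)] for i in range(n)]   # rank one mod q

    for name, A in (("injective", injective), ("lossy", lossy)):
        images = {evaluate(A, x) for x in itertools.product((0, 1), repeat=n)}
        print(f"{name}: {len(images)} distinct images of {2 ** n} inputs")
    # The injective matrix yields 256 distinct images; the rank-one matrix
    # yields at most q = 17, so f provably loses most of its input's information.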
Efficient collision-resistant hashing from worst-case assumptions on cyclic lattices
 In TCC
, 2006
"... Abstract The generalized knapsack function is defined as fa(x) = Pi ai * xi, where a = (a1,..., am)consists of m elements from some ring R, and x = (x1,..., xm) consists of m coefficients froma specified subset S ` R. Micciancio (FOCS 2002) proposed a specific choice of the ring R andsubset S for w ..."
Cited by 43 (12 self)
Abstract
The generalized knapsack function is defined as f_a(x) = Σ_i a_i · x_i, where a = (a_1, ..., a_m) consists of m elements from some ring R, and x = (x_1, ..., x_m) consists of m coefficients from a specified subset S ⊆ R. Micciancio (FOCS 2002) proposed a specific choice of the ring R and subset S for which inverting this function (for random a, x) is at least as hard as solving certain worst-case problems on cyclic lattices. We show that for a different choice of S ⊂ R, the generalized knapsack function is in fact collision-resistant, assuming it is infeasible to approximate the shortest vector in n-dimensional cyclic lattices up to factors Õ(n). For slightly larger factors, we even get collision-resistance for any m ≥ 2. This yields very efficient collision-resistant hash functions having key size and time complexity almost linear in the security parameter n. We also show that altering S is necessary, in the sense that Micciancio's original function is not collision-resistant (nor even universal one-way). Our results exploit an intimate connection between the linear algebra of n-dimensional cyclic lattices and the ring Z[α]/(α^n − 1), and crucially depend on the factorization of α^n − 1 into irreducible cyclotomic polynomials. We also establish a new bound on the discrete Gaussian distribution over general lattices, employing techniques introduced by Micciancio and Regev (FOCS 2004) and also used by Micciancio in his study of compact knapsacks.

1 Introduction. A function family {f_a}, indexed by a ∈ A, is said to be collision-resistant if, given a uniformly chosen a ∈ A, it is infeasible to find elements x_1 ≠ x_2 such that f_a(x_1) = f_a(x_2). Collision-resistant hash functions are one of the most widely employed cryptographic primitives. Their applications include integrity checking, user and message authentication, commitment protocols, and more. Many of the applications of collision-resistant hashing tend to invoke the hash function only a small number of times. Thus, the efficiency of the function has a direct effect on the efficiency of the application that uses it. This is in contrast to primitives such as one-way functions, which typically must be invoked many times in their applications (at least when used in a black-box way) [9].
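A minimal sketch of this function for the cyclic ring R = Z_q[α]/(α^n − 1), where ring multiplication is cyclic (wrap-around) convolution; the parameters q, n, m and the coefficient set S = {0, 1} are illustrative choices, not the paper's.

    import random

    q, n, m = 257, 8, 4                     # illustrative toy parameters

    def cyclic_mul(a, x):
        """Multiply two elements of Z_q[alpha]/(alpha^n - 1): cyclic convolution mod q."""
        out = [0] * n
        for i in range(n):
            for j in range(n):
                out[(i + j) % n] = (out[(i + j) % n] + a[i] * x[j]) % q
        return out

    def knapsack_hash(a_list, x_list):
        """f_a(x) = sum_i a_i * x_i over R: compresses m short ring elements to one."""
        acc = [0] * n
        for a, x in zip(a_list, x_list):
            acc = [(u + v) % q for u, v in zip(acc, cyclic_mul(a, x))]
        return acc

    key = [[random.randrange(q) for _ in range(n)] for _ in range(m)]  # random a_i in R
    msg = [[random.randrange(2) for _ in range(n)] for _ in range(m)]  # x_i with 0/1 coeffs
    print(knapsack_hash(key, msg))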
Lattice-based Cryptography
, 2008
"... In this chapter we describe some of the recent progress in latticebased cryptography. Latticebased cryptographic constructions hold a great promise for postquantum cryptography, as they enjoy very strong security proofs based on worstcase hardness, relatively efficient implementations, as well a ..."
Cited by 36 (5 self)
Abstract
In this chapter we describe some of the recent progress in lattice-based cryptography. Lattice-based cryptographic constructions hold a great promise for post-quantum cryptography, as they enjoy very strong security proofs based on worst-case hardness, relatively efficient implementations, as well as great simplicity. In addition, lattice-based cryptography is believed to be secure against quantum computers. Our focus here ...
Better key sizes (and attacks) for LWE-based encryption
 In CT-RSA
, 2011
"... We analyze the concrete security and key sizes of theoretically sound latticebased encryption schemes based on the “learning with errors ” (LWE) problem. Our main contributions are: (1) a new lattice attack on LWE that combines basis reduction with an enumeration algorithm admitting a time/success ..."
Cited by 21 (4 self)
Abstract
We analyze the concrete security and key sizes of theoretically sound lattice-based encryption schemes based on the “learning with errors” (LWE) problem. Our main contributions are: (1) a new lattice attack on LWE that combines basis reduction with an enumeration algorithm admitting a time/success tradeoff, which performs better than the simple distinguishing attack considered in prior analyses; (2) concrete parameters and security estimates for an LWE-based cryptosystem that is more compact and efficient than the well-known schemes from the literature. Our new key sizes are up to 10 times smaller than prior examples, while providing even stronger concrete security levels.
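For context on the "simple distinguishing attack": a commonly used heuristic estimate (the exact formula here is my assumption, stated as background rather than quoted from the paper) says that a dual lattice vector v found by basis reduction distinguishes LWE samples from uniform with advantage roughly exp(−π·(‖v‖·s/q)²), where s is the Gaussian error parameter and q the modulus.

    import math

    def distinguishing_advantage(v_norm: float, s: float, q: int) -> float:
        """Heuristic advantage of the simple distinguishing attack (assumed formula)."""
        return math.exp(-math.pi * (v_norm * s / q) ** 2)

    # Illustrative numbers only, not the paper's concrete parameters:
    # shorter dual vectors (better basis reduction) give higher advantage.
    for v_norm in (500.0, 1000.0, 2000.0):
        print(v_norm, distinguishing_advantage(v_norm, s=8.0, q=2053))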
Lattices that admit logarithmic worst-case to average-case connection factors
 In STOC
, 2007
"... Abstract We demonstrate an averagecase problem which is as hard as finding fl(n)approximateshortest vectors in certain ndimensional lattices in the worst case, where fl(n) = O(plog n).The previously best known factor for any class of lattices was fl(n) = ~O(n).To obtain our results, we focus on ..."
Cited by 19 (9 self)
Abstract
We demonstrate an average-case problem that is as hard as finding γ(n)-approximate shortest vectors in certain n-dimensional lattices in the worst case, where γ(n) = O(√log n). The previously best known factor for any class of lattices was γ(n) = Õ(n). To obtain our results, we focus on families of lattices having special algebraic structure. Specifically, we consider lattices that correspond to ideals in the ring of integers of an algebraic number field. The worst-case assumption we rely on is that in some ℓp length, it is hard to find approximate shortest vectors in these lattices, under an appropriate form of preprocessing of the number field. Our results build upon prior works by Micciancio (FOCS 2002), Peikert and Rosen (TCC 2006), and Lyubashevsky and Micciancio (ICALP 2006). For the connection factors γ(n) we achieve, the corresponding decisional promise problems on ideal lattices are not known to be NP-hard; in fact, they are in P. However, the search approximation problems still appear to be very hard. Indeed, ideal lattices are well-studied objects in computational number theory, and the best known algorithms for them seem to perform no better than the best known algorithms for general lattices. To obtain the best possible connection factor, we instantiate our constructions with infinite families of number fields having constant root discriminant. Such families are known to exist and are computable, though no efficient construction is yet known. Our work motivates the search for such constructions. Even constructions of number fields having root discriminant up to O(n^(2/3−ε)) would yield connection factors better than the current best of Õ(n).
Fully homomorphic encryption without modulus switching from classical GapSVP
 In Advances in Cryptology – Crypto 2012, volume 7417 of Lecture Notes in Computer Science
"... We present a new tensoring technique for LWEbased fully homomorphic encryption. While in all previous works, the ciphertext noise grows quadratically (B → B 2 · poly(n)) with every multiplication (before “refreshing”), our noise only grows linearly (B → B · poly(n)). We use this technique to constr ..."
Cited by 19 (2 self)
Abstract
We present a new tensoring technique for LWE-based fully homomorphic encryption. While in all previous works the ciphertext noise grows quadratically (B → B² · poly(n)) with every multiplication (before “refreshing”), our noise grows only linearly (B → B · poly(n)). We use this technique to construct a scale-invariant fully homomorphic encryption scheme, whose properties only depend on the ratio between the modulus q and the initial noise level B, and not on their absolute values. Our scheme has a number of advantages over previous candidates: it uses the same modulus throughout the evaluation process (no need for “modulus switching”), and this modulus can take arbitrary form. In addition, security can be classically reduced from the worst-case hardness of the GapSVP problem (with quasi-polynomial approximation factor), whereas previous constructions could only exhibit a quantum reduction from GapSVP. Fully homomorphic encryption has been the focus of extensive study since the first candidate scheme was introduced by Gentry [Gen09b]. In a nutshell, fully homomorphic encryption allows to ...
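The difference between the two growth rates compounds rapidly with multiplicative depth. A schematic comparison (the poly(n) factor below is a stand-in value, not either scheme's exact noise analysis):

    import math

    B0 = 4.0                 # initial noise level B
    poly_n = 2.0 ** 10       # stand-in for the poly(n) factor

    quad, lin = B0, B0
    for level in range(1, 6):
        quad = quad ** 2 * poly_n    # quadratic growth per multiplication (prior works)
        lin = lin * poly_n           # linear growth per multiplication (this work)
        print(f"depth {level}: quadratic ~2^{math.log2(quad):.0f}, "
              f"linear ~2^{math.log2(lin):.0f}")
    # Decryption needs q to dominate the noise, so quadratic growth forces a huge
    # modulus (or modulus switching) even at small depth; linear growth does not.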
Limits on the hardness of lattice problems in ℓp norms
 In IEEE Conference on Computational Complexity
, 2007
"... In recent years, several papers have established limits on the computational difficulty of lattice problems, focusing primarily on the ℓ2 (Euclidean) norm. We demonstrate close analogues of these results in ℓp norms, for every 2 < p ≤ ∞. In particular, for lattices of dimension n: • Approximating th ..."
Cited by 18 (11 self)
Abstract
In recent years, several papers have established limits on the computational difficulty of lattice problems, focusing primarily on the ℓ2 (Euclidean) norm. We demonstrate close analogues of these results in ℓp norms, for every 2 < p ≤ ∞. In particular, for lattices of dimension n:
• Approximating the closest vector problem, the shortest vector problem, and other related problems to within O(√n) factors (or O(√n log n) factors, for p = ∞) is in coNP.
• Approximating the closest vector and bounded distance decoding problems with preprocessing to within O(√n) factors can be accomplished in deterministic polynomial time.
• Approximating several problems (such as the shortest independent vectors problem) to within Õ(n) factors in the worst case reduces to solving the average-case problems defined in prior works (Ajtai, STOC 1996; Micciancio and Regev, SIAM J. on Computing 2007; Regev, STOC 2005).
Our results improve prior approximation factors for ℓp norms by up to √n factors. Taken all together, they complement recent reductions from the ℓ2 norm to ℓp norms (Regev and Rosen, STOC 2006), and provide some evidence that lattice problems in ℓp norms (for p > 2) may not be substantially harder than they are in the ℓ2 norm. One of our main technical contributions is a very general analysis of Gaussian distributions over lattices, which may be of independent interest. Our proofs employ analytical techniques of Banaszczyk that, to our knowledge, have yet to be exploited in computer science.
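The cost of moving between norms is governed by the standard comparison ‖x‖_p ≤ ‖x‖_2 ≤ n^(1/2 − 1/p)·‖x‖_p for p ≥ 2 (a textbook fact, stated here as background rather than quoted from the paper), which a quick numerical check confirms:

    import math
    import random

    n = 64
    x = [random.gauss(0.0, 1.0) for _ in range(n)]

    def lp_norm(v, p):
        if p == math.inf:
            return max(abs(c) for c in v)
        return sum(abs(c) ** p for c in v) ** (1.0 / p)

    l2 = lp_norm(x, 2)
    for p in (2, 3, 10, math.inf):
        lp = lp_norm(x, p)
        upper = n ** (0.5 - 1.0 / p) * lp          # note: 1/inf evaluates to 0.0
        assert lp <= l2 + 1e-9 and l2 <= upper + 1e-9
        print(f"p={p}: ||x||_p = {lp:.3f} <= ||x||_2 = {l2:.3f} <= {upper:.3f}")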