Results 11 - 20 of 964
Trapdoors for Hard Lattices and New Cryptographic Constructions
, 2007
"... We show how to construct a variety of “trapdoor ” cryptographic tools assuming the worstcase hardness of standard lattice problems (such as approximating the shortest nonzero vector to within small factors). The applications include trapdoor functions with preimage sampling, simple and efficient “ha ..."
Abstract
-
Cited by 191 (26 self)
- Add to MetaCart
We show how to construct a variety of “trapdoor” cryptographic tools assuming the worst-case hardness of standard lattice problems (such as approximating the shortest nonzero vector to within small factors). The applications include trapdoor functions with preimage sampling, simple and efficient “hash-and-sign” digital signature schemes, universally composable oblivious transfer, and identity-based encryption. A core technical component of our constructions is an efficient algorithm that, given a basis of an arbitrary lattice, samples lattice points from a Gaussian-like probability distribution whose standard deviation is essentially the length of the longest vector in the basis. In particular, the crucial security property is that the output distribution of the algorithm is oblivious to the particular geometry of the given basis.
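The heart of these constructions is Gaussian sampling over a lattice. In one dimension the idea reduces to drawing an integer under a discrete Gaussian; a minimal rejection-sampling sketch over Z (illustrative only, not the paper's full lattice sampler; all names are ours):

```python
import math
import random

def sample_discrete_gaussian(center: float, s: float) -> int:
    """Sample z in Z with probability proportional to exp(-pi*(z - center)^2 / s^2).

    Rejection sampling over a truncated support; a toy illustration of the
    Gaussian-like lattice distributions referenced in the abstract.
    """
    # Truncate the support to center +/- 10s; the tail mass beyond is negligible.
    lo, hi = math.floor(center - 10 * s), math.ceil(center + 10 * s)
    while True:
        z = random.randint(lo, hi)
        # Accept z with probability rho_s(z - center) = exp(-pi*(z - c)^2 / s^2).
        if random.random() < math.exp(-math.pi * (z - center) ** 2 / s ** 2):
            return z

# Example: draw a few samples centered at 0.3 with parameter s = 4.
print([sample_discrete_gaussian(0.3, 4.0) for _ in range(10)])
```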
Public-key cryptosystems from the worst-case shortest vector problem
, 2008
"... We construct public-key cryptosystems that are secure assuming the worst-case hardness of approximating the length of a shortest nonzero vector in an n-dimensional lattice to within a small poly(n) factor. Prior cryptosystems with worst-case connections were based either on the shortest vector probl ..."
Abstract
-
Cited by 152 (22 self)
- Add to MetaCart
(Show Context)
We construct public-key cryptosystems that are secure assuming the worst-case hardness of approximating the length of a shortest nonzero vector in an n-dimensional lattice to within a small poly(n) factor. Prior cryptosystems with worst-case connections were based either on the shortest vector problem for a special class of lattices (Ajtai and Dwork, STOC 1997; Regev, J. ACM 2004), or on the conjectured hardness of lattice problems for quantum algorithms (Regev, STOC 2005). Our main technical innovation is a reduction from certain variants of the shortest vector problem to corresponding versions of the “learning with errors” (LWE) problem; previously, only a quantum reduction of this kind was known. In addition, we construct new cryptosystems based on the search version of LWE, including a very natural chosen-ciphertext-secure system that has a much simpler description and tighter underlying worst-case approximation factor than prior constructions.
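To make the LWE objects concrete: a toy Regev-style public-key scheme fits in a few lines of numpy. The parameters below are far too small for any security and the helper names are ours; this is a sketch of the general shape, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (far too small for real security).
n, m, q = 16, 64, 4093

# Key generation: secret s, public (A, b = A s + e mod q) with small noise e.
s = rng.integers(0, q, size=n)
A = rng.integers(0, q, size=(m, n))
e = rng.integers(-2, 3, size=m)            # small error in {-2, ..., 2}
b = (A @ s + e) % q

def encrypt(bit: int):
    """Regev-style encryption of one bit: sum a random subset of the samples."""
    r = rng.integers(0, 2, size=m)         # random 0/1 combination
    u = (r @ A) % q
    v = (r @ b + bit * (q // 2)) % q       # message encoded in the "half" point
    return u, v

def decrypt(u, v) -> int:
    d = (v - u @ s) % q
    # Decode: close to 0 -> bit 0, close to q/2 -> bit 1.
    return int(min(d, q - d) > q // 4)

u, v = encrypt(1)
assert decrypt(u, v) == 1
```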
Public-Key Cryptosystems from Lattice Reduction Problems
, 1996
"... We present a new proposal for a trapdoor one-way function, from whichwe derive public-key encryption and digital signatures. The security of the new construction is based on the conjectured computational difficulty of lattice-reduction problems, providing a possible alternative to existing public-ke ..."
Abstract
-
Cited by 148 (4 self)
- Add to MetaCart
(Show Context)
We present a new proposal for a trapdoor one-way function, from which we derive public-key encryption and digital signatures. The security of the new construction is based on the conjectured computational difficulty of lattice-reduction problems, providing a possible alternative to existing public-key encryption algorithms and digital signatures such as RSA and DSS.
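The trapdoor shape the abstract describes can be illustrated with a toy GGH-style pair of bases: a nearly orthogonal private basis decrypts by Babai rounding, while the public basis is a scrambled version of the same lattice. A sketch under those assumptions (dimensions and names ours, nothing like the proposed parameter sizes):

```python
import numpy as np

rng = np.random.default_rng(1)

# A "good" (nearly orthogonal) private basis R for a toy 4-dimensional
# lattice, and a "bad" public basis B = R @ U spanning the same lattice
# (U is unimodular, so det U = +/-1).
R = 10 * np.eye(4, dtype=int) + rng.integers(-1, 2, size=(4, 4))
U = np.array([[1, 3, 0, 2],
              [0, 1, 4, 1],
              [0, 0, 1, 5],
              [0, 0, 0, 1]])
B = R @ U

def encrypt(msg: np.ndarray) -> np.ndarray:
    """Map the message to the lattice point B @ msg, then perturb slightly."""
    return B @ msg + rng.integers(-1, 2, size=4)

def decrypt(c: np.ndarray) -> np.ndarray:
    """Babai rounding with the good basis R recovers the lattice point."""
    v = R @ np.rint(np.linalg.solve(R, c)).astype(int)
    return np.rint(np.linalg.solve(B, v)).astype(int)

m = np.array([2, -1, 3, 0])
assert np.array_equal(decrypt(encrypt(m)), m)
```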
On the sphere-decoding algorithm I. Expected complexity
- IEEE Trans. Sig. Proc
, 2005
"... Abstract—The problem of finding the least-squares solution to a system of linear equations where the unknown vector is comprised of integers, but the matrix coefficient and given vector are comprised of real numbers, arises in many applications: communications, cryptography, GPS, to name a few. The ..."
Abstract
-
Cited by 135 (7 self)
- Add to MetaCart
(Show Context)
The problem of finding the least-squares solution to a system of linear equations, where the unknown vector is comprised of integers but the coefficient matrix and given vector are comprised of real numbers, arises in many applications: communications, cryptography, GPS, to name a few. The problem is equivalent to finding the closest lattice point to a given point and is known to be NP-hard. In communications applications, however, the given vector is not arbitrary but rather is an unknown lattice point that has been perturbed by an additive noise vector whose statistical properties are known. Therefore, in this paper, rather than dwell on the worst-case complexity of the integer least-squares problem, we study its expected complexity, averaged over the noise and over the lattice. For the “sphere decoding” algorithm of Fincke and Pohst, we find a closed-form expression for the expected complexity, for both infinite and finite lattices.
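For concreteness, a compact depth-first enumeration in the Fincke-Pohst style (names and structure ours; illustrative, not the paper's optimized decoder):

```python
import numpy as np

def sphere_decode(H: np.ndarray, y: np.ndarray, radius: float):
    """Find the integer vector x minimizing ||y - H x|| inside the given sphere.

    QR-factor H, then enumerate coordinates from the last to the first,
    pruning whenever the accumulated residual exceeds the current best.
    """
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    best = (radius ** 2, None)

    def search(level: int, x: np.ndarray, dist2: float):
        nonlocal best
        if level < 0:
            if dist2 < best[0]:
                best = (dist2, x.copy())
            return
        # Interval of admissible integers at this level, centered at c.
        c = (z[level] - R[level, level + 1:] @ x[level + 1:]) / R[level, level]
        half = np.sqrt(max(best[0] - dist2, 0.0)) / abs(R[level, level])
        for xi in range(int(np.ceil(c - half)), int(np.floor(c + half)) + 1):
            x[level] = xi
            step = (z[level] - R[level, level:] @ x[level:]) ** 2
            if dist2 + step <= best[0]:
                search(level - 1, x, dist2 + step)

    search(n - 1, np.zeros(n, dtype=int), 0.0)
    return best[1]

# Example: a noisy lattice point y = H x + noise, recovered by the decoder.
rng = np.random.default_rng(2)
H = rng.normal(size=(5, 5))
x_true = rng.integers(-3, 4, size=5)
y = H @ x_true + 0.05 * rng.normal(size=5)
assert np.array_equal(sphere_decode(H, y, radius=2.0), x_true)
```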
Worst-case to average-case reductions based on Gaussian measures
- SIAM J. on Computing
, 2004
"... We show that finding small solutions to random modular linear equations is at least as hard as approximating several lattice problems in the worst case within a factor almost linear in the dimension of the lattice. The lattice problems we consider are the shortest vector problem, the shortest indepe ..."
Abstract
-
Cited by 131 (23 self)
- Add to MetaCart
(Show Context)
We show that finding small solutions to random modular linear equations is at least as hard as approximating several lattice problems in the worst case within a factor almost linear in the dimension of the lattice. The lattice problems we consider are the shortest vector problem, the shortest independent vectors problem, the covering radius problem, and the guaranteed distance decoding problem (a variant of the well-known closest vector problem). The approximation factor we obtain is n · log^{O(1)} n for all four problems. This greatly improves on all previous work on the subject, starting from Ajtai’s seminal paper (STOC, 1996), up to the strongest previously known results by Micciancio (SIAM J. on Computing, 2004). Our results also bring us closer to the limit where the problems are no longer known to be in NP ∩ coNP. Our main tools are Gaussian measures on lattices and the high-dimensional Fourier transform. We start by defining a new lattice parameter which determines the amount of Gaussian noise that one has to add to a lattice in order to get close to a uniform distribution. In addition to yielding quantitatively much stronger results, the use of this parameter allows us to simplify many of the complications in previous work. Our technical contributions are two-fold. First, we show tight connections between this new parameter and existing lattice parameters. One such important connection is between this parameter and the length of the shortest set of linearly independent vectors. Second, we prove that the distribution that one obtains after adding Gaussian noise to the lattice has the following interesting property: the distribution of the noise vector, when conditioned on the final value, behaves in many respects like the original Gaussian noise vector. In particular, its moments remain essentially unchanged.
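The "new lattice parameter" described here is what the published version calls the smoothing parameter. For orientation, the standard definitions (a summary in the published paper's notation, not a quotation):

```latex
% Gaussian mass function at parameter s, and its extension to sets:
\rho_s(\mathbf{x}) = e^{-\pi \|\mathbf{x}\|^2 / s^2},
\qquad
\rho_s(A) = \sum_{\mathbf{x} \in A} \rho_s(\mathbf{x}).

% The smoothing parameter of a lattice \Lambda: the least s for which the
% Gaussian mass of the dual lattice (minus the origin) is at most \varepsilon.
\eta_\varepsilon(\Lambda) = \min \{\, s > 0 : \rho_{1/s}(\Lambda^* \setminus \{\mathbf{0}\}) \le \varepsilon \,\}.
```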
An Algorithmic Theory of Lattice Points in Polyhedra
, 1999
"... We discuss topics related to lattice points in rational polyhedra, including efficient enumeration of lattice points, “short” generating functions for lattice points in rational polyhedra, relations to classical and higher-dimensional Dedekind sums, complexity of the Presburger arithmetic, efficien ..."
Abstract
-
Cited by 128 (7 self)
- Add to MetaCart
We discuss topics related to lattice points in rational polyhedra, including efficient enumeration of lattice points, “short” generating functions for lattice points in rational polyhedra, relations to classical and higher-dimensional Dedekind sums, complexity of the Presburger arithmetic, efficient computations with rational functions, and others. Although the main slant is algorithmic, structural results are discussed, such as relations to the general theory of valuations on polyhedra and connections with the theory of toric varieties. The paper surveys known results and presents some new results and connections.
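As a point of contrast with the generating-function machinery the survey covers: the naive way to enumerate the lattice points of a rational polytope {x : Ax ≤ b} scans an integer bounding box, which is exponential in the dimension, whereas Barvinok-style methods run in polynomial time for each fixed dimension. A minimal brute-force sketch (names ours):

```python
import itertools
import numpy as np

def lattice_points(A: np.ndarray, b: np.ndarray, box: int):
    """Enumerate integer points of {x : A x <= b} inside [-box, box]^n.

    Brute force over the bounding box; fine in tiny dimension, hopeless in
    general, which is what motivates generating-function methods.
    """
    n = A.shape[1]
    return [x for x in itertools.product(range(-box, box + 1), repeat=n)
            if np.all(A @ np.array(x) <= b)]

# Example: lattice points of the triangle x >= 0, y >= 0, x + y <= 3.
A = np.array([[-1, 0], [0, -1], [1, 1]])
b = np.array([0, 0, 3])
print(lattice_points(A, b, box=3))   # 10 points, a triangular number
```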
Solving low-density subset sum problems
- in Proc. 24th Annu. Symp. on Foundations of Computer Science (FOCS)
, 1983
"... Abstract. The subset sum problem is to decide whether or not the O-1 integer programming problem C aixi = M, Vi,x,=O or 1, i-l has a solution, where the ai and M are given positive integers. This problem is NP-complete, and the difficulty of solving it is the basis of public-key cryptosystems of kna ..."
Abstract
-
Cited by 124 (3 self)
- Add to MetaCart
The subset sum problem is to decide whether or not the 0-1 integer programming problem ∑_{i=1}^{n} a_i x_i = M, with each x_i = 0 or 1, has a solution, where the a_i and M are given positive integers. This problem is NP-complete, and the difficulty of solving it is the basis of public-key cryptosystems of knapsack type. An algorithm is proposed that searches for a solution when given an instance of the subset sum problem. This algorithm always halts in polynomial time but does not always find a solution when one exists. It converts the problem to one of finding a particular short vector v in a lattice, and then uses a lattice basis reduction algorithm due to A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovász to attempt to find v. The performance of the proposed algorithm is analyzed. Let the density d of a subset sum problem be defined by d = n / log2(max_i a_i). Then for “almost all” problems of density d < 0.645, the vector v we search for is the shortest nonzero vector in the lattice. For “almost all” problems of density d < 1/n it is proved that the lattice basis reduction algorithm locates v. Extensive computational tests of the algorithm suggest that it works for densities d < d_c(n), where d_c(n) is a cutoff value that is substantially larger than 1/n. This method gives a polynomial-time attack on knapsack public-key cryptosystems that can be expected to break them if they transmit information at rates below d_c(n), as n → ∞.
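The lattice in question is easy to write down. A sketch of the basis construction (the reduction step is delegated, in a comment, to an external LLL implementation such as the fpylll library, which is an assumption of ours, not part of the paper):

```python
import numpy as np

def lo_basis(a, M, scale=1):
    """Build the Lagarias-Odlyzko lattice basis for subset sum instance (a, M).

    Rows: (e_i | scale * a_i) for each weight, plus (0 ... 0 | scale * M).
    If sum_i x_i a_i = M with x_i in {0, 1}, then the integer combination
    sum_i x_i * row_i - row_last = (x_1, ..., x_n, 0) is a short lattice vector.
    """
    n = len(a)
    B = np.zeros((n + 1, n + 1), dtype=int)
    B[:n, :n] = np.eye(n, dtype=int)
    B[:n, n] = [scale * ai for ai in a]
    B[n, n] = scale * M
    return B

# Example instance: the subset {3, 5, 11} of the weights sums to 19.
a = [3, 5, 7, 11, 20]
M = 19
B = lo_basis(a, M, scale=100)
# One would now run LLL on the rows of B (e.g., with fpylll:
# LLL.reduction(IntegerMatrix.from_matrix(B.tolist()))) and look for a
# reduced vector of the form (x_1, ..., x_n, 0) with all x_i in {0, 1}.
print(B)
```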
Efficient Fully Homomorphic Encryption from (Standard) LWE
- FOCS 2011, IEEE 52nd Annual Symposium on Foundations of Computer Science
, 2011
"... We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of “short vector problems ” on arbitrary lattices. Our construction improves on ..."
Abstract
-
Cited by 120 (6 self)
- Add to MetaCart
We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of “short vector problems” on arbitrary lattices. Our construction improves on previous works in two aspects:
1. We show that “somewhat homomorphic” encryption can be based on LWE, using a new relinearization technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings.
2. We deviate from the “squashing paradigm” used in all previous works. We introduce a new dimension-modulus reduction technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, without introducing additional assumptions.
Our scheme has very short ciphertexts and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is k · polylog(k) + log |DB| bits per single-bit query (here, k is a security parameter).
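Relinearization and dimension-modulus reduction do not fit in a snippet, but the additive homomorphism that any LWE-based scheme starts from does. A toy symmetric-key sketch (ours, not the paper's scheme) in which adding ciphertexts adds the plaintext bits mod 2:

```python
import numpy as np

rng = np.random.default_rng(3)
n, q = 32, 2 ** 15                             # toy parameters, not secure

s = rng.integers(0, q, size=n)                 # secret key

def enc(bit: int):
    """Symmetric LWE encryption: (a, a.s + e + bit*q/2) mod q."""
    a = rng.integers(0, q, size=n)
    e = int(rng.integers(-4, 5))               # small noise
    return a, (int(a @ s) + e + bit * (q // 2)) % q

def dec(ct) -> int:
    a, b = ct
    d = (b - int(a @ s)) % q
    return int(min(d, q - d) > q // 4)         # nearest of {0, q/2}

def add(ct1, ct2):
    """Homomorphic addition: componentwise sum; the noise terms add up."""
    (a1, b1), (a2, b2) = ct1, ct2
    return (a1 + a2) % q, (b1 + b2) % q

for x in (0, 1):
    for y in (0, 1):
        assert dec(add(enc(x), enc(y))) == (x + y) % 2
```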
Representation Theory for High-Rate Multiple-Antenna Code Design
- IEEE Trans. Inform. Theory
, 2000
"... this paper, we show how to design signal matrices satisfying these requirements. As shown in [1], the design problem for unitary space time constellations is the following: let ..."
Abstract
-
Cited by 119 (15 self)
- Add to MetaCart
(Show Context)
In this paper, we show how to design signal matrices satisfying these requirements. As shown in [1], the design problem for unitary space-time constellations is the following: …
On the Limits of Non-Approximability of Lattice Problems
, 1998
"... We show simple constant-round interactive proof systems for problems capturing the approximability, to within a factor of p n, of optimization problems in integer lattices; specifically, the closest vector problem (CVP), and the shortest vector problem (SVP). These interactive proofs are for th ..."
Abstract
-
Cited by 99 (2 self)
- Add to MetaCart
We show simple constant-round interactive proof systems for problems capturing the approximability, to within a factor of √n, of optimization problems in integer lattices; specifically, the closest vector problem (CVP) and the shortest vector problem (SVP). These interactive proofs are for the “coNP direction”; that is, we give an interactive protocol showing that a vector is “far” from the lattice (for CVP), and an interactive protocol showing that the shortest lattice vector is “long” (for SVP). Furthermore, these interactive proof systems are Honest-Verifier Perfect Zero-Knowledge. We conclude that approximating CVP (resp., SVP) within a factor of √n is in NP ∩ coAM. Thus, it seems unlikely that approximating these problems to within a √n factor is NP-hard. Previously, for the CVP (resp., SVP) problem, Lagarias et al., Håstad, and Banaszczyk showed that the gap problem corresponding to approximating CVP (resp., SVP) within n is in NP ∩ coNP. On the other hand, …
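The CVP protocol has a clean coin-guessing shape: the verifier secretly flips a bit and sends a random point near either the lattice or its shift by the target vector; an honest prover can reliably recover the bit only when the two point clouds do not overlap, i.e., when the target is far from the lattice. A toy two-dimensional simulation of one round (parameters and helper names ours):

```python
import numpy as np

rng = np.random.default_rng(4)
B = np.array([[7.0, 0.0], [0.0, 7.0]])        # toy lattice basis

def round_to_lattice(p):
    """Nearest lattice point by coordinate rounding (exact for this basis)."""
    return B @ np.rint(np.linalg.solve(B, p))

def protocol_round(t: np.ndarray, rho: float) -> bool:
    """One round: the verifier hides sigma, the prover guesses it.

    Verifier sends x = sigma*t + (random lattice point) + noise of norm <= rho.
    The prover guesses sigma by checking whether x is closer to the lattice L
    or to the shifted lattice t + L. Returns True iff the prover is right.
    """
    sigma = rng.integers(0, 2)
    v = B @ rng.integers(-5, 6, size=2)        # random lattice point
    noise = rng.normal(size=2)
    noise *= rho * rng.random() / np.linalg.norm(noise)
    x = sigma * t + v + noise
    d0 = np.linalg.norm(x - round_to_lattice(x))          # distance to L
    d1 = np.linalg.norm(x - t - round_to_lattice(x - t))  # distance to t + L
    return (d1 < d0) == bool(sigma)

far, near = np.array([3.5, 3.5]), np.array([0.4, 0.0])
for t in (far, near):
    wins = sum(protocol_round(t, rho=1.0) for _ in range(2000))
    print(t, wins / 2000)   # near 1.0 for the far target, lower for the near one
```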