Results 1 - 10 of 25
The Two Faces of Lattices in Cryptology
, 2001
Cited by 69 (16 self)
Lattices are regular arrangements of points in n-dimensional space, whose study appeared in the 19th century in both number theory and crystallography. Since the appearance of the celebrated Lenstra-Lenstra-Lovász lattice basis reduction algorithm twenty years ago, lattices have had surprising applications in cryptology. Until recently, the applications of lattices to cryptology were only negative, as lattices were used to break various cryptographic schemes. Paradoxically, several positive cryptographic applications of lattices have emerged in the past five years: there now exist public-key cryptosystems based on the hardness of lattice problems, and lattices play a crucial role in a few security proofs.
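The LLL reduction algorithm mentioned in the abstract above can be sketched in a few dozen lines. The following is a minimal, illustrative Python implementation using exact rational Gram-Schmidt with the classical parameter δ = 3/4; the function names are ours, and real implementations (e.g. fpylll) use much faster floating-point variants.

```python
# Minimal LLL lattice basis reduction (delta = 3/4) over the rationals.
# Illustrative only: recomputing Gram-Schmidt at every step is far slower
# than the floating-point variants used in practice.
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lll(basis, delta=Fraction(3, 4)):
    """Reduce an integer basis given as a list of row vectors."""
    basis = [list(map(int, row)) for row in basis]
    n = len(basis)

    def gso():
        # Exact Gram-Schmidt orthogonalization; returns b* and mu.
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = [Fraction(x) for x in basis[i]]
            for j in range(i):
                mu[i][j] = dot(basis[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [a - mu[i][j] * b for a, b in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        # Size-reduce b_k against b_{k-1}, ..., b_0.
        for j in range(k - 1, -1, -1):
            _, mu = gso()
            q = round(mu[k][j])
            if q:
                basis[k] = [a - q * b for a, b in zip(basis[k], basis[j])]
        bstar, mu = gso()
        # Lovasz condition: advance if satisfied, otherwise swap and backtrack.
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            basis[k], basis[k - 1] = basis[k - 1], basis[k]
            k = max(k - 1, 1)
    return basis
```

On the textbook example basis {(1,1,1), (−1,0,2), (3,5,6)} this returns a reduced basis of the same lattice whose first vector satisfies the LLL guarantee ‖b₁‖² ≤ 2ⁿ⁻¹·λ₁².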
The shortest vector in a lattice is hard to approximate to within some constant
 in Proc. 39th Symposium on Foundations of Computer Science
, 1998
Cited by 51 (4 self)
Abstract. We show that approximating the shortest vector problem (in any ℓp norm) to within any constant factor less than 2^(1/p) (the p-th root of 2) is hard for NP under reverse unfaithful random reductions with inverse polynomial error probability. In particular, approximating the shortest vector problem is not in RP (random polynomial time), unless NP equals RP. We also prove a proper NP-hardness result (i.e., hardness under deterministic many-one reductions) under a reasonable number-theoretic conjecture on the distribution of square-free smooth numbers. As part of our proof, we give an alternative construction of Ajtai's constructive variant of Sauer's lemma that greatly simplifies Ajtai's original proof. Key words. NP-hardness, shortest vector problem, point lattices, geometry of numbers, sphere packing
Statistical zeroknowledge proofs with efficient provers: Lattice problems and more
 In CRYPTO
, 2003
Cited by 39 (8 self)
Abstract. We construct several new statistical zero-knowledge proofs with efficient provers, i.e. ones where the prover strategy runs in probabilistic polynomial time given an NP witness for the input string. Our first proof systems are for approximate versions of the Shortest Vector Problem (SVP) and Closest Vector Problem (CVP), where the witness is simply a short vector in the lattice or a lattice vector close to the target, respectively. Our proof systems are in fact proofs of knowledge, and as a result, we immediately obtain efficient lattice-based identification schemes which can be implemented with arbitrary families of lattices in which the approximate SVP or CVP are hard. We then turn to the general question of whether all problems in SZK ∩ NP admit statistical zero-knowledge proofs with efficient provers. Towards this end, we give a statistical zero-knowledge proof system with an efficient prover for a natural restriction of Statistical Difference, a complete problem for SZK. We also suggest a plausible approach to resolving the general question in the positive.
A Deterministic Single Exponential Time Algorithm for Most Lattice Problems based on Voronoi Cell Computations (Extended Abstract)
, 2009
Cited by 31 (2 self)
We give deterministic 2^O(n)-time algorithms to solve all the most important computational problems on point lattices in NP, including the Shortest Vector Problem (SVP), Closest Vector Problem (CVP), and Shortest Independent Vectors Problem (SIVP). This improves the n^O(n) running time of the best previously known algorithms for CVP (Kannan, Math. Operations Research 12(3):415-440, 1987) and SIVP (Micciancio, Proc. of SODA, 2008), and gives a deterministic alternative to the 2^O(n)-time (and space) randomized algorithm for SVP of (Ajtai, Kumar and Sivakumar, STOC 2001). The core of our algorithm is a new method to solve the closest vector problem with preprocessing (CVPP) that uses the Voronoi cell of the lattice (described as an intersection of half-spaces) as the result of the preprocessing function. In the process, we also give algorithms for several other lattice problems, including computing the kissing number of a lattice, and computing the set of all Voronoi-relevant vectors. All our algorithms are deterministic, and have 2^O(n) time and space complexity.
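To make the CVP problem statement concrete, here is a naive exhaustive-search baseline, not the paper's Voronoi-cell algorithm: it enumerates coefficient vectors in a small box and keeps the lattice point nearest the target. The function name and the `radius` cutoff are ours.

```python
from itertools import product

def cvp_bruteforce(basis, target, radius=3):
    # Exhaustive CVP baseline: try every coefficient vector in
    # [-radius, radius]^n and keep the lattice point nearest the target.
    # Exponential in n, and only correct when the true closest vector's
    # coefficients fall inside the search box -- a toy contrast to the
    # single-exponential Voronoi-cell method described above.
    n = len(basis)
    dim = len(target)
    best, best_d2 = None, None
    for coeffs in product(range(-radius, radius + 1), repeat=n):
        v = [sum(c * basis[i][j] for i, c in enumerate(coeffs)) for j in range(dim)]
        d2 = sum((a - t) ** 2 for a, t in zip(v, target))
        if best_d2 is None or d2 < best_d2:
            best, best_d2 = v, d2
    return best, best_d2
```

For example, on the integer lattice Z² with target (0.6, 3.2) it returns the lattice point (1, 3) at squared distance 0.2.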
Almost perfect lattices, the covering radius problem, and applications to Ajtai's connection factor
, 2002
Improving Lattice Based Cryptosystems Using the Hermite Normal Form
 In Silverman [Sil01]
Cited by 28 (7 self)
We describe a simple technique that can be used to substantially reduce the key and ciphertext size of various lattice-based cryptosystems and trapdoor functions of the kind proposed by Goldreich, Goldwasser and Halevi (GGH). The improvement is significant from both the theoretical and the practical point of view, reducing the size of both key and ciphertext by a factor n equal to the dimension of the lattice (i.e., several hundred for typical values of the security parameter). The efficiency improvement is obtained without decreasing the security of the functions: we formally prove that the new functions are at least as secure as the original ones, and possibly even more secure, as the adversary gets less information in a strong information-theoretic sense. The increased efficiency of the new cryptosystems allows the use of bigger values for the security parameter, making the functions secure against the best cryptanalytic attacks, while keeping the size of the key even below the smallest key size for which lattice cryptosystems were ever conjectured to be hard to break.
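The key-size reduction above rests on publishing the Hermite Normal Form (HNF) of the lattice basis, which is computable from any basis of the lattice. A minimal, unoptimized sketch of computing a row-style HNF by unimodular row operations (function name ours; production code uses modular arithmetic to control entry growth):

```python
def hnf(m):
    # Row-style Hermite Normal Form via unimodular row operations
    # (swap, negate, add integer multiples of one row to another).
    # Entries can grow large; this is a didactic sketch only.
    m = [row[:] for row in m]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        # Euclidean elimination: drive column c to zero below row r.
        for i in range(r + 1, rows):
            while m[i][c] != 0:
                q = m[r][c] // m[i][c]
                m[r], m[i] = m[i], [a - q * b for a, b in zip(m[r], m[i])]
        if m[r][c] < 0:
            m[r] = [-a for a in m[r]]
        # Reduce entries above the pivot into [0, pivot).
        for i in range(r):
            q = m[i][c] // m[r][c]
            m[i] = [a - q * b for a, b in zip(m[i], m[r])]
        r += 1
    return m
```

Every basis of a full-rank integer lattice has the same HNF, which is why publishing it leaks no more information than any other basis of the lattice.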
Lattices that admit logarithmic worstcase to averagecase connection factors
 In STOC
, 2007
Cited by 19 (9 self)
Abstract. We demonstrate an average-case problem which is as hard as finding γ(n)-approximate shortest vectors in certain n-dimensional lattices in the worst case, where γ(n) = O(√(log n)). The previously best known factor for any class of lattices was γ(n) = Õ(n). To obtain our results, we focus on families of lattices having special algebraic structure. Specifically, we consider lattices that correspond to ideals in the ring of integers of an algebraic number field. The worst-case assumption we rely on is that in some ℓp length, it is hard to find approximate shortest vectors in these lattices, under an appropriate form of preprocessing of the number field. Our results build upon prior works by Micciancio (FOCS 2002), Peikert and Rosen (TCC 2006), and Lyubashevsky and Micciancio (ICALP 2006). For the connection factors γ(n) we achieve, the corresponding decisional promise problems on ideal lattices are not known to be NP-hard; in fact, they are in P. However, the search approximation problems still appear to be very hard. Indeed, ideal lattices are well-studied objects in computational number theory, and the best known algorithms for them seem to perform no better than the best known algorithms for general lattices. To obtain the best possible connection factor, we instantiate our constructions with infinite families of number fields having constant root discriminant. Such families are known to exist and are computable, though no efficient construction is yet known. Our work motivates the search for such constructions. Even constructions of number fields having root discriminant up to O(n^(2/3−ε)) would yield connection factors better than the current best of Õ(n).
Improved Inapproximability of Lattice and Coding Problems with Preprocessing
 IEEE Transactions on Information Theory
, 2003
Cited by 16 (2 self)
We show that the closest vector problem with preprocessing (CVPP) is NP-hard to approximate to within 3 − ε for any ε > 0. In addition, we show that the nearest codeword problem with preprocessing (NCPP) is NP-hard to approximate to within 3 − ε. These results improve the results of Feige and Micciancio in [10]. We also present the first inapproximability result for the relatively nearest codeword problem with preprocessing (RNCP). Finally, we describe an n-approximation algorithm to CVPP.
Adapting Density Attacks to LowWeight Knapsacks
Cited by 10 (0 self)
Abstract. Cryptosystems based on the knapsack problem were among the first public-key systems to be invented. Their high encryption/decryption rate attracted considerable interest until it was noticed that the underlying knapsacks often had a low density, which made them vulnerable to lattice attacks, both in theory and practice. To prevent low-density attacks, several designers found a subtle way to increase the density beyond the critical density by decreasing the weight of the knapsack, and possibly allowing non-binary coefficients. This approach is actually a bit misleading: we show that low-weight knapsacks do not prevent efficient reductions to lattice problems like the shortest vector problem; they even make reductions more likely. To measure the resistance of low-weight knapsacks, we introduce the novel notion of pseudo-density, and we apply the new notion to the Okamoto-Tanaka-Uchiyama (OTU) cryptosystem from Crypto '00. We do not claim to break OTU, and we actually believe that this system may be secure with an appropriate choice of the parameters. However, our research indicates that, in its current form, OTU cannot be supported by an argument based on density. Our results also explain why Schnorr and Hörner were able to solve at Eurocrypt '95 certain high-density knapsacks related to the Chor-Rivest cryptosystem, using lattice reduction.
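The lattice attacks the abstract refers to reduce a subset-sum (knapsack) instance to a short-vector search. A sketch of the classical Lagarias-Odlyzko-style basis construction and the density measure (function names and the scaling choice N ≈ √n are ours; the basis would then be handed to a reduction algorithm such as LLL):

```python
import math

def subset_sum_lattice(weights, target):
    # Lagarias-Odlyzko style basis (as rows) for a subset-sum instance:
    # a 0/1 solution x yields the short lattice vector (x_1, ..., x_n, 0),
    # because the scaled last coordinate cancels exactly when
    # sum(x_i * a_i) == target. N only needs to be large enough to
    # penalize non-solutions; sqrt(n) is a common textbook choice.
    n = len(weights)
    N = int(math.isqrt(n)) + 1
    rows = []
    for i, a in enumerate(weights):
        row = [0] * n + [N * a]
        row[i] = 1
        rows.append(row)
    rows.append([0] * n + [-N * target])
    return rows

def density(weights):
    # Classical knapsack density d = n / log2(max a_i); instances with
    # density well below 1 are the ones amenable to lattice attacks.
    return len(weights) / math.log2(max(weights))
```

For weights (3, 5, 8) and target 11, the solution x = (1, 0, 1) corresponds to the short vector b₀ + b₂ + b₃ = (1, 0, 1, 0).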
Efficient polynomial L∞ approximations
 In Proceedings of the 18th IEEE Symposium on Computer Arithmetic (ARITH-18). IEEE Computer
Cited by 9 (0 self)
We address the problem of computing a good floating-point-coefficient polynomial approximation to a function, with respect to the supremum norm. This is a key step in most processes of evaluation of a function. We present a fast and efficient method, based on lattice basis reduction, that often gives the best polynomial possible and most of the time returns a very good approximation.
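The baseline this method improves on is rounding each real coefficient independently to the nearest machine number, which is generally not the best machine-coefficient polynomial. A toy version of that baseline (helper name ours; the `bits` significand width is a simplification of actual IEEE formats, with no subnormal or overflow handling):

```python
import math

def round_coeffs(coeffs, bits=24):
    # Round each real coefficient independently to a 'bits'-bit significand.
    # This per-coefficient rounding is the naive baseline; the lattice-based
    # method searches for a jointly better set of machine coefficients.
    out = []
    for c in coeffs:
        if c == 0:
            out.append(0.0)
            continue
        e = math.floor(math.log2(abs(c)))       # exponent of c
        scale = 2.0 ** (bits - 1 - e)           # place value of the last kept bit
        out.append(round(c * scale) / scale)
    return out
```

For instance, with an 8-bit significand the coefficient 1/3 rounds to 171/512 = 0.333984375.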