Results 11–20 of 143

LLL on the Average, 2006
Cited by 43 (11 self)

Abstract:
Despite their popularity, lattice reduction algorithms remain mysterious in many ways. It has been widely reported that they behave much more nicely than what was expected from the worst-case proved bounds, both in terms of the running time and the output quality. In this article, we investigate this puzzling statement by trying to model the average case of lattice reduction algorithms, starting with the celebrated Lenstra–Lenstra–Lovász algorithm (L³). We discuss what is meant by lattice reduction on the average, and we present extensive experiments on the average-case behavior of L³, in order to give a clearer picture of the differences and similarities between the average and worst cases. Our work is intended to clarify the practical behavior of L³ and to raise theoretical questions about its average behavior.
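The algorithm under study is simple enough to state in full. Below is a minimal exact-arithmetic sketch of L³ with the usual Lovász parameter δ = 3/4 (function names are ours; it recomputes Gram–Schmidt data from scratch after every update, which is correct but far slower than the optimized floating-point variants whose average-case behavior the article measures):

```python
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def lll(basis, delta=Fraction(3, 4)):
    # Textbook L3 over exact rationals; input is a list of integer vectors.
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def gram_schmidt():
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = list(b[i])
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [vi - mu[i][j] * wj for vi, wj in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    bstar, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):         # size-reduce b_k against b_j
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt()
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                              # Lovasz condition holds
        else:
            b[k - 1], b[k] = b[k], b[k - 1]     # swap and step back
            bstar, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]
```

On the classic example basis (1,1,1), (−1,0,2), (3,5,6) this produces a reduced basis with squared norms 1, 2, 5 while preserving the lattice determinant.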
Power from Random Strings
In Proceedings of the 43rd IEEE Symposium on Foundations of Computer Science, 2002
Cited by 41 (17 self)

Abstract:
We show that sets consisting of strings of high Kolmogorov complexity provide examples of sets that are complete for several complexity classes under probabilistic and non-uniform reductions. These sets are provably not complete under the usual many-one reductions. Let …
Improving Lattice Based Cryptosystems Using the Hermite Normal Form
In Silverman [Sil01]
Cited by 40 (8 self)

Abstract:
We describe a simple technique that can be used to substantially reduce the key and ciphertext size of various lattice-based cryptosystems and trapdoor functions of the kind proposed by Goldreich, Goldwasser and Halevi (GGH). The improvement is significant from both the theoretical and the practical point of view, reducing the size of both key and ciphertext by a factor n equal to the dimension of the lattice (i.e., several hundred for typical values of the security parameter). The efficiency improvement is obtained without decreasing the security of the functions: we formally prove that the new functions are at least as secure as the original ones, and possibly even more so, as the adversary gets less information in a strong information-theoretic sense. The increased efficiency of the new cryptosystems allows the use of bigger values for the security parameter, making the functions secure against the best cryptanalytic attacks, while keeping the size of the key even below the smallest key size for which lattice cryptosystems were ever conjectured to be hard to break.
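The compression technique relies on the Hermite Normal Form being a canonical, publicly computable basis of the same lattice. As a rough illustration only (names are ours; this is not the paper's construction, and nothing like what one would run at cryptographic dimensions), a row-style HNF of a nonsingular integer matrix can be computed with extended-gcd row operations:

```python
def ext_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def hnf(B):
    # Row-style HNF of a square nonsingular integer matrix: upper
    # triangular, positive diagonal, entries above each pivot reduced
    # modulo it. H = U*B for a unimodular U, so H spans the same lattice,
    # which is why it can serve as a canonical (and compact) public basis.
    A = [list(row) for row in B]
    n = len(A)
    for i in range(n):
        for k in range(i + 1, n):              # zero out column i below the pivot
            if A[k][i]:
                g, x, y = ext_gcd(A[i][i], A[k][i])
                p, q = A[i][i] // g, A[k][i] // g
                A[i], A[k] = ([x * r + y * s for r, s in zip(A[i], A[k])],
                              [-q * r + p * s for r, s in zip(A[i], A[k])])
        if A[i][i] < 0:                        # make the pivot positive
            A[i] = [-x for x in A[i]]
        for k in range(i):                     # reduce entries above the pivot
            q = A[k][i] // A[i][i]
            A[k] = [r - q * s for r, s in zip(A[k], A[i])]
    return A
```

For instance, the basis (2, 0), (1, 3) has HNF (1, 3), (0, 6): same lattice, same determinant, but a canonical representative that leaks nothing beyond the lattice itself.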
New algorithms for learning in presence of errors
 ICALP
Cited by 39 (0 self)

Abstract:
We give new algorithms for a variety of randomly generated instances of computational problems using a linearization technique that reduces to solving a system of linear equations. These algorithms are derived in the context of learning with structured noise, a notion introduced in this paper. This notion is best illustrated with the learning parities with noise (LPN) problem, well-studied in learning theory and cryptography. In the standard version, we have access to an oracle that, each time we press a button, returns a random vector a ∈ GF(2)^n together with a bit b ∈ GF(2) that was computed as a · u + η, where u ∈ GF(2)^n is a secret vector, and η ∈ GF(2) is a noise bit that is 1 with some probability p. Say p = 1/3. The goal is to recover u. This task is conjectured to be intractable. In the structured noise setting we introduce a slight (?) variation of the model: upon pressing a button, we receive (say) 10 random vectors a1, a2, ..., a10 ∈ GF(2)^n, and corresponding bits b1, b2, ..., b10, of which at most 3 are noisy. The oracle may arbitrarily decide which of the 10 bits to make noisy. We exhibit a polynomial-time algorithm to recover the secret vector u given such an oracle. We think this structured noise model may be of independent interest in machine learning. We discuss generalizations of our result, including learning with more general noise patterns. We also give the first non-trivial algorithms for two problems, which we show fit in our structured noise framework. We give a slightly subexponential algorithm for the well-known learning with errors (LWE) problem over GF(q) introduced by Regev for cryptographic uses. Our algorithm works for the case when the Gaussian noise is small, which was an open problem. We also give polynomial-time algorithms for learning the MAJORITY OF PARITIES function of Applebaum et al. for certain parameter values. This function is a special case of Goldreich's pseudorandom generator.
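Both oracles described in the abstract are easy to mimic in code. A sketch under the abstract's parameters (function names are ours; batches of 10 with at most 3 flips, and here the flipped positions are chosen at random, whereas the paper allows the oracle to choose them adversarially):

```python
import random

def lpn_oracle(secret, p=1/3, rng=random):
    # One press of the button in the standard LPN game: a uniform
    # a in GF(2)^n together with b = <a, secret> + eta (mod 2),
    # where the noise bit eta is 1 with probability p.
    n = len(secret)
    a = [rng.randrange(2) for _ in range(n)]
    eta = 1 if rng.random() < p else 0
    b = (sum(x * s for x, s in zip(a, secret)) + eta) % 2
    return a, b

def structured_oracle(secret, m=10, max_noisy=3, rng=random):
    # Structured-noise variant: m samples at a time, of which at most
    # max_noisy carry a flipped bit (here exactly max_noisy random
    # positions are flipped; the paper's oracle may choose adversarially).
    noisy = set(rng.sample(range(m), max_noisy))
    samples = []
    for i in range(m):
        a = [rng.randrange(2) for _ in range(len(secret))]
        b = sum(x * s for x, s in zip(a, secret)) % 2
        samples.append((a, b ^ (1 if i in noisy else 0)))
    return samples
```

The structured promise, that each batch of 10 contains at least 7 clean equations, is exactly the extra leverage the paper's linearization technique exploits.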
Low-dimensional lattice basis reduction revisited (Extended Abstract)
In Lecture Notes in Computer Science, 3076:338–357, 2004
Cited by 38 (3 self)

Abstract:
Most of the interesting algorithmic problems in the geometry of numbers are NP-hard as the lattice dimension increases. This article deals with the low-dimensional case. We study a greedy lattice basis reduction algorithm for the Euclidean norm, which is arguably the most natural lattice basis reduction algorithm, because it is a straightforward generalization of the well-known two-dimensional Gaussian algorithm. Our results are twofold. From a mathematical point of view, we show that up to dimension four, the output of the greedy algorithm is optimal: the output basis reaches all the successive minima of the lattice. However, as soon as the lattice dimension is strictly higher than four, the output basis may not even reach the first minimum. More importantly, from a computational point of view, we show that up to dimension four, the bit-complexity of the greedy algorithm is quadratic without fast integer arithmetic: this allows one to solve various lattice problems (e.g., computing a Minkowski-reduced basis and a closest vector) in quadratic time, without fast integer arithmetic, up to dimension four, while all other algorithms known for such problems have a bit-complexity which is at least cubic. This was already proved by Semaev up to dimension three using rather technical means, but it was previously unknown whether or not the algorithm was still polynomial in dimension four. Our analysis, based on geometric properties of low-dimensional lattices and in particular Voronoï cells, arguably simplifies Semaev's analysis in dimensions two and three, unifies the cases of dimensions two, three and four, but breaks down in dimension five.
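The two-dimensional Gaussian (Lagrange) algorithm that the greedy algorithm generalizes fits in a few lines. A sketch for integer vectors (the function name is ours):

```python
def gauss_reduce(u, v):
    # Lagrange-Gauss reduction of a 2-dimensional lattice basis for the
    # Euclidean norm: repeatedly subtract the nearest integer multiple of
    # the shorter vector from the longer one, swapping when the order
    # inverts. On output, u and v reach the two successive minima.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        q = round(dot(u, v) / dot(u, u))       # closest integer multiple
        v = [x - q * y for x, y in zip(v, u)]
        if dot(v, v) >= dot(u, u):
            return u, v
        u, v = v, u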
Lattice mixing and vanishing trapdoors – a framework for fully secure short signatures and more
 In Public Key Cryptography—PKC 2010, volume 6056 of LNCS
, 2010
Cited by 38 (5 self)

Abstract:
We propose a framework for adaptive security from hard random lattices in the standard model. Our approach borrows from the recent Agrawal–Boneh–Boyen families of lattices, which can admit reliable and punctured trapdoors, respectively used in reality and in simulation. We extend this idea to make the simulation trapdoors cancel not for a specific target but on a non-negligible subset of the possible challenges. Conceptually, we build a compactly representable, large family of input-dependent mixture lattices, set up with trapdoors that vanish for a secret subset wherein we hope the attack occurs. Technically, we tweak the lattice structure to achieve naturally nice distributions for arbitrary choices of subset size. The framework is very general. Here we obtain fully secure signatures, and also IBE, that are compact, simple, and elegant.
Asymptotically efficient lattice-based digital signatures
In Fifth Theory of Cryptography Conference (TCC), 2008
Cited by 28 (9 self)

Abstract:
We give a direct construction of digital signatures based on the complexity of approximating the shortest vector in ideal (e.g., cyclic) lattices. The construction is provably secure based on the worst-case hardness of approximating the shortest vector in such lattices within a polynomial factor, and it is also asymptotically efficient: the time complexity of the signing and verification algorithms, as well as the key and signature sizes, is almost linear (up to polylogarithmic factors) in the dimension n of the underlying lattice. Since no subexponential (in n) time algorithm is known to solve lattice problems in the worst case, even when restricted to cyclic lattices, our construction gives a digital signature scheme with an essentially optimal performance/security tradeoff.
Lattice problems in NP ∩ coNP
In Journal of the ACM
Cited by 27 (1 self)

Abstract:
We show that the problems of approximating the shortest and closest vector in a lattice to within a factor of √n lie in NP ∩ coNP. The result (almost) subsumes the three mutually incomparable previous results regarding these lattice problems: Banaszczyk [7], Goldreich and Goldwasser [14], and Aharonov and Regev [2]. Our technique is based on a simple fact regarding succinct approximation of functions using their Fourier series over the lattice. This technique might be useful elsewhere – we demonstrate this by giving a simple and efficient algorithm for one other lattice problem (CVPP), improving on a previous result of Regev [26]. An interesting fact is that our result emerged from a “dequantization” of our previous quantum result in [2]. This route to proving purely classical results might be beneficial elsewhere.
Lattice Signatures and Bimodal Gaussians
Cited by 23 (4 self)

Abstract:
Our main result is a construction of a lattice-based digital signature scheme that represents an improvement, both in theory and in practice, over today's most efficient lattice schemes. The novel scheme is obtained as a result of a modification of the rejection sampling algorithm that is at the heart of Lyubashevsky's signature scheme (Eurocrypt, 2012) and several other lattice primitives. Our new rejection sampling algorithm, which samples from a bimodal Gaussian distribution, combined with a modified scheme instantiation, ends up reducing the standard deviation of the resulting signatures by a factor that is asymptotically square root in the security parameter. The implementations of our signature scheme for security levels of 128, 160, and 192 bits compare very favorably to existing schemes such as RSA and ECDSA in terms of efficiency. In addition, the new scheme has shorter signature and public key sizes than all previously proposed lattice signature schemes. As part of our implementation, we also designed several novel algorithms which could be of independent interest. Of particular note is a new algorithm for efficiently generating discrete Gaussian samples over Z^n. Current algorithms either require many high-precision floating point exponentiations or the storage of very large precomputed tables, which makes them completely inappropriate for usage in constrained devices. Our sampling algorithm reduces the hard-coded table sizes from linear to logarithmic as compared to the time-optimal implementations, at the cost of being only a small factor slower.
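For illustration, the distribution at the center of such schemes can be sampled by plain rejection sampling. This sketch targets the one-dimensional discrete Gaussian D_{Z,σ} truncated at τσ (names and the cutoff τ are ours); it is emphatically not the table-reduced sampler engineered in the paper for constrained devices:

```python
import math
import random

def sample_dgauss(sigma, tau=6, rng=random):
    # Rejection sampling from the centered discrete Gaussian over Z:
    # propose a uniform integer in [-tau*sigma, tau*sigma] and accept
    # with probability exp(-x^2 / (2*sigma^2)). Simple and exact up to
    # the tail cut, but each sample costs a floating-point exp call --
    # precisely the cost the paper's table-based algorithms avoid.
    bound = int(math.ceil(tau * sigma))
    while True:
        x = rng.randint(-bound, bound)
        if rng.random() < math.exp(-x * x / (2.0 * sigma * sigma)):
            return x
```

A vector sample over Z^n is just n independent draws; the paper's contribution is doing this with logarithmic-size precomputed tables instead of per-sample high-precision exponentiations.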
Lattices that admit logarithmic worst-case to average-case connection factors
, 2006
Cited by 23 (11 self)

Abstract:
We demonstrate an average-case problem which is as hard as finding γ(n)-approximate shortest vectors in certain n-dimensional lattices in the worst case, where γ(n) = O(√log n). The previously best known factor for any class of lattices was γ(n) = Õ(n). To obtain our results, we focus on families of lattices having special algebraic structure. Specifically, we consider lattices that correspond to ideals in the ring of integers of an algebraic number field. The worst-case assumption we rely on is that in some ℓp length, it is hard to find approximate shortest vectors in these lattices, under an appropriate form of preprocessing of the number field. Our results build upon prior works by Micciancio (FOCS 2002), Peikert and Rosen (TCC 2006), and Lyubashevsky and Micciancio (ICALP 2006). For the connection factors γ(n) we achieve, the corresponding decisional promise problems on ideal lattices are not known to be NP-hard; in fact, they are in P. However, the search approximation problems still appear to be very hard. Indeed, ideal lattices are well-studied objects in computational number theory, and the best known algorithms for them seem to perform no better than the best known algorithms for general lattices. To obtain the best possible connection factor, we instantiate our constructions with infinite families of number fields having constant root discriminant. Such families are known to exist and are computable, though no efficient construction is yet known. Our work motivates the search for such constructions. Even constructions of number fields having root discriminant up to O(n^{2/3−ε}) would yield connection factors better than the current best of Õ(n).