Results 1–10 of 50
On ideal lattices and learning with errors over rings
 In Proc. of EUROCRYPT, volume 6110 of LNCS
, 2010
Abstract

Cited by 46 (8 self)
The “learning with errors” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications. Unfortunately, these applications are rather inefficient due to an inherent quadratic overhead in the use of LWE. A main open question was whether LWE and its applications could be made truly efficient by exploiting extra algebraic structure, as was done for lattice-based hash functions (and related primitives). We resolve this question in the affirmative by introducing an algebraic variant of LWE called ring-LWE, and proving that it too enjoys very strong hardness guarantees. Specifically, we show that the ring-LWE distribution is pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms. Applications include the first truly practical lattice-based public-key cryptosystem with an efficient security reduction; moreover, many of the other applications of LWE can be made much more efficient through the use of ring-LWE.
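As a concrete illustration of the LWE problem described in this abstract, the following is a minimal Python sketch of generating one noisy linear equation. The parameters and names are illustrative only; they are far too small for any security and do not reflect the paper's ring-LWE instantiation.

```python
import random

def lwe_sample(s, q, noise_bound):
    """One LWE sample (a, b): a is uniform in Z_q^n and
    b = <a, s> + e (mod q) for a small noise term e."""
    a = [random.randrange(q) for _ in range(len(s))]
    e = random.randint(-noise_bound, noise_bound)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

# Toy parameters, chosen only for illustration.
q, n, noise_bound = 97, 8, 2
s = [random.randrange(q) for _ in range(n)]
a, b = lwe_sample(s, q, noise_bound)

# Whoever knows s can recover the small noise e; without s, the pair
# (a, b) is meant to be indistinguishable from uniform.
e = (b - sum(ai * si for ai, si in zip(a, s))) % q
assert e <= noise_bound or e >= q - noise_bound
```

The quadratic overhead mentioned in the abstract comes from the fact that each sample costs an n-dimensional inner product; ring-LWE amortizes this over polynomial arithmetic.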
Generating shorter bases for hard random lattices
, 2009
Abstract

Cited by 43 (6 self)
We revisit the problem of generating a “hard” random lattice together with a basis of relatively short vectors. This problem has gained in importance lately due to new cryptographic schemes that use such a procedure for generating public/secret key pairs. In these applications, a shorter basis directly corresponds to milder underlying complexity assumptions and smaller key sizes. The contributions of this work are twofold. First, using the Hermite normal form as an organizing principle, we simplify and generalize an approach due to Ajtai (ICALP 1999). Second, we improve the construction and its analysis in several ways, most notably by tightening the length of the output basis essentially to the optimum value.
Fast Cryptographic Primitives and Circular-Secure Encryption Based on Hard Learning Problems
Abstract

Cited by 42 (12 self)
The well-studied task of learning a linear function with errors is a seemingly hard problem and the basis for several cryptographic schemes. Here we demonstrate additional applications that enjoy strong security properties and a high level of efficiency. Namely, we construct: 1. Public-key and symmetric-key cryptosystems that provide security for key-dependent messages and enjoy circular security. Our schemes are highly efficient: in both cases the ciphertext is only a constant factor larger than the plaintext, and the cost of encryption and decryption is only n · polylog(n) bit operations per message symbol in the public-key case, and polylog(n) bit operations in the symmetric case. 2. Two efficient pseudorandom objects: a “weak randomized pseudorandom function” — a relaxation of standard PRFs — that can be computed obliviously via a simple protocol, and a length-doubling pseudorandom generator that can be computed by a circuit of n ·
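The abstract's constructions are considerably more refined, but the basic pattern of encrypting under a noisy inner product can be sketched for a single bit as follows. This is a hypothetical toy, not the paper's scheme; all parameters and names are illustrative.

```python
import random

def enc_bit(s, m, q, noise_bound):
    """Toy symmetric-key LWE encryption of a bit m: the ciphertext is
    (a, <a, s> + e + m * floor(q/2) mod q)."""
    a = [random.randrange(q) for _ in range(len(s))]
    e = random.randint(-noise_bound, noise_bound)
    b = (sum(x * y for x, y in zip(a, s)) + e + m * (q // 2)) % q
    return a, b

def dec_bit(s, ct, q):
    """Decrypt by checking whether b - <a, s> lies near 0 or near q/2."""
    a, b = ct
    d = (b - sum(x * y for x, y in zip(a, s))) % q
    return 0 if min(d, q - d) < q // 4 else 1

# Round trip with toy parameters: the noise is far below q/4,
# so decryption always lands on the correct bit.
q, n = 97, 8
s = [random.randrange(q) for _ in range(n)]
for m in (0, 1):
    assert dec_bit(s, enc_bit(s, m, q, 2), q) == m
```

The bit is pushed to the scale q/2 so that the small noise e cannot flip it; this is the standard decoding trick behind LWE encryption.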
Lattice mixing and vanishing trapdoors – a framework for fully secure short signatures and more
 In Public Key Cryptography—PKC 2010, volume 6056 of LNCS
, 2010
Abstract

Cited by 28 (5 self)
We propose a framework for adaptive security from hard random lattices in the standard model. Our approach borrows from the recent Agrawal-Boneh-Boyen families of lattices, which can admit reliable and punctured trapdoors, respectively used in reality and in simulation. We extend this idea to make the simulation trapdoors cancel not for a specific target but on a non-negligible subset of the possible challenges. Conceptually, we build a compactly representable, large family of input-dependent mixture lattices, set up with trapdoors that vanish for a secret subset wherein we hope the attack occurs. Technically, we tweak the lattice structure to achieve naturally nice distributions for arbitrary choices of subset size. The framework is very general. Here we obtain fully secure signatures, and also IBE, that are compact, simple, and elegant.
Better key sizes (and attacks) for LWE-based encryption
 In CT-RSA
, 2011
Abstract

Cited by 27 (5 self)
We analyze the concrete security and key sizes of theoretically sound lattice-based encryption schemes based on the “learning with errors” (LWE) problem. Our main contributions are: (1) a new lattice attack on LWE that combines basis reduction with an enumeration algorithm admitting a time/success tradeoff, which performs better than the simple distinguishing attack considered in prior analyses; (2) concrete parameters and security estimates for an LWE-based cryptosystem that is more compact and efficient than the well-known schemes from the literature. Our new key sizes are up to 10 times smaller than prior examples, while providing even stronger concrete security levels.
Making NTRU as secure as worst-case problems over ideal lattices
 In Proc. of EUROCRYPT, volume 6632 of LNCS
, 2011
Abstract

Cited by 15 (5 self)
NTRUEncrypt, proposed in 1996 by Hoffstein, Pipher and Silverman, is the fastest known lattice-based encryption scheme. Its moderate key sizes, excellent asymptotic performance and conjectured resistance to quantum computers could make it a desirable alternative to factorisation- and discrete-log-based encryption schemes. However, since its introduction, doubts have regularly arisen on its security. In the present work, we show how to modify NTRUEncrypt to make it provably secure in the standard model, under the assumed quantum hardness of standard worst-case lattice problems, restricted to a family of lattices related to some cyclotomic fields. Our main contribution is to show that if the secret key polynomials are selected by rejection from discrete Gaussians, then the public key, which is their ratio, is statistically indistinguishable from uniform over its domain. The security then follows from the already proven hardness of the R-LWE problem.
Algorithms for the shortest and closest lattice vector problems
 In Yeow Meng Chee, Zhenbo Guo, San Ling, Fengjing Shao, Yuansheng Tang, Huaxiong Wang, and Chaoping Xing, editors, IWCC, volume 6639 of Lecture Notes in Computer Science
Abstract

Cited by 12 (0 self)
We present the state-of-the-art solvers for the Shortest and Closest Lattice Vector Problems in the Euclidean norm. We recall the three main families of algorithms for these problems, namely the algorithm by Micciancio and Voulgaris based on the Voronoi cell [STOC’10], the Monte Carlo algorithms derived from the Ajtai, Kumar and Sivakumar algorithm [STOC’01] and the enumeration algorithms originally elaborated by Kannan [STOC’83] and Fincke and Pohst [EUROCAL’83]. We concentrate on the theoretical worst-case complexity bounds, but also consider some practical facets of these algorithms.
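In dimension two the shortest vector problem is easy: the classical Lagrange-Gauss reduction below, a toy relative of the high-dimensional solvers surveyed in this abstract, finds a shortest nonzero lattice vector exactly.

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction: return a shortest nonzero vector of
    the two-dimensional lattice spanned by integer vectors u and v."""
    def n2(w):                      # squared Euclidean norm
        return w[0] ** 2 + w[1] ** 2
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if n2(u) > n2(v):
        u, v = v, u
    while True:
        # Shorten v by the integer multiple of u nearest its projection.
        m = round(dot(u, v) / n2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if n2(v) >= n2(u):
            return u
        u, v = v, u

# The basis (5, 8), (8, 13) has determinant 1, so the lattice is Z^2
# and a shortest nonzero vector has squared norm exactly 1.
w = gauss_reduce((5, 8), (8, 13))
assert w[0] ** 2 + w[1] ** 2 == 1
```

From dimension three onward no such simple exact procedure is known, which is why the Voronoi-cell, sieving, and enumeration families discussed above all require (at least) exponential time.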
Fiat-Shamir with aborts: Applications to lattice- and factoring-based signatures
, 2009
Abstract

Cited by 11 (2 self)
We demonstrate how the framework that is used for creating efficient number-theoretic ID and signature schemes can be transferred into the setting of lattices. This results in constructions of the most efficient to-date identification and signature schemes with security based on the worst-case hardness of problems in ideal lattices. In particular, our ID scheme has communication complexity of around 65,000 bits and the length of the signatures produced by our signature scheme is about 50,000 bits. All prior lattice-based identification schemes required on the order of millions of bits to be transferred, while all previous lattice-based signature schemes were either stateful, too inefficient, or produced signatures whose lengths were also on the order of millions of bits. The security of our identification scheme is based on the hardness of finding the approximate shortest vector to within a factor of Õ(n²) in the standard model, while the security of the signature scheme is based on the same assumption in the random oracle model. Our protocols are very efficient, with all operations requiring Õ(n) time. We also show that the technique for constructing our lattice-based schemes can be used to improve certain number-theoretic schemes. In particular, we are able to shorten the length of the signatures that are produced by Girault’s factoring-based digital signature scheme ([10, 11, 31]).
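The "abort" idea at the heart of this framework can be illustrated with scalars. This is a hypothetical one-dimensional toy, not the paper's protocol, which works over polynomial rings.

```python
import random

def respond_with_abort(secret, challenge, B, M):
    """Toy sketch of the Fiat-Shamir 'abort' step: the mask y is
    uniform on [-M, M] and the response z = y + c * s is released only
    if it falls in [-(M - B), M - B], where B bounds |c * s|.
    Conditioned on release, z is uniform on that narrower interval,
    and hence statistically independent of the secret."""
    assert abs(challenge * secret) <= B
    while True:
        y = random.randint(-M, M)
        z = y + challenge * secret
        if abs(z) <= M - B:
            return z            # safe to publish
        # otherwise abort and retry with a fresh mask

# Every released response lies in the secret-independent safe range.
zs = [respond_with_abort(3, 2, B=10, M=1000) for _ in range(100)]
assert all(abs(z) <= 990 for z in zs)
```

Without the abort, the distribution of z = y + c·s would be slightly shifted by the secret, and enough signatures would leak it; restarting on out-of-range responses removes that leakage at the cost of an expected constant number of retries.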
Analyzing Blockwise Lattice Algorithms using Dynamical Systems
 In Proc. 31st Cryptology Conference (CRYPTO)
, 2011
Abstract

Cited by 7 (1 self)
Strong lattice reduction is the key element for most attacks against lattice-based cryptosystems. Between the strongest but impractical HKZ reduction and the weak but fast LLL reduction, there have been several attempts to find efficient trade-offs. Among them, the BKZ algorithm introduced by Schnorr and Euchner [FCT’91] seems to achieve the best time/quality compromise in practice. However, no reasonable complexity upper bound is known for BKZ, and Gama and Nguyen [Eurocrypt’08] observed experimentally that its practical runtime seems to grow exponentially with the lattice dimension. In this work, we show that BKZ can be terminated long before its completion, while still providing bases of excellent quality. More precisely, we show that if given as inputs a basis (b_i)_{i≤n} ∈ Q^{n×n} of a lattice L and a blocksize β, and if terminated after
Faster Gaussian lattice sampling using lazy floating-point arithmetic
 Full version of the ASIACRYPT ’12 article
, 2013
Abstract

Cited by 6 (1 self)
Many lattice cryptographic primitives require an efficient algorithm to sample lattice points according to some Gaussian distribution. All algorithms known for this task require long-integer arithmetic at some point, which may be problematic in practice. We study how much lattice sampling can be sped up using floating-point arithmetic. First, we show that a direct floating-point implementation of these algorithms does not give any asymptotic speedup: the floating-point precision needs to be greater than the security parameter, leading to an overall complexity Õ(n³) where n is the lattice dimension. However, we introduce a laziness technique that can significantly speed up these algorithms. Namely, in certain cases such as NTRUSign lattices, laziness can decrease the complexity to Õ(n²) or even Õ(n). Furthermore, our analysis is practical: for typical parameters, most of the floating-point operations only require the double-precision IEEE standard.
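For orientation, the one-dimensional analogue of the samplers discussed in this abstract is straightforward rejection sampling from a discrete Gaussian over the integers. This toy scalar sketch (with an illustrative tail cut) shows where the floating-point evaluations enter; the paper's samplers apply such steps coordinate-wise over a lattice basis, which is where the precision issues arise.

```python
import math
import random

def sample_z_gaussian(sigma, tau=6.0):
    """Rejection-sample an integer from the discrete Gaussian of
    parameter sigma, with tails cut at |x| <= tau * sigma."""
    bound = int(math.ceil(tau * sigma))
    while True:
        x = random.randint(-bound, bound)
        # Accept x with probability proportional to exp(-x^2 / (2 sigma^2)).
        if random.random() < math.exp(-x * x / (2.0 * sigma * sigma)):
            return x

# Samples stay within the tail cut and are centered near zero.
xs = [sample_z_gaussian(3.0) for _ in range(2000)]
assert all(abs(x) <= 18 for x in xs)
```

The acceptance test above is exactly the kind of transcendental-function evaluation whose required precision the paper analyzes; the "lazy" technique evaluates it at double precision first and falls back to higher precision only in the rare borderline cases.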