Results 1–10 of 58
Fully homomorphic encryption with relatively small key and ciphertext sizes
 In Public Key Cryptography — PKC ’10, Springer LNCS 6056
, 2010
"... Abstract. We present a fully homomorphic encryption scheme which has both relatively small key and ciphertext size. Our construction follows that of Gentry by producing a fully homomorphic scheme from a “somewhat ” homomorphic scheme. For the somewhat homomorphic scheme the public and private keys c ..."
Abstract

Cited by 55 (6 self)
 Add to MetaCart
We present a fully homomorphic encryption scheme which has both relatively small key and ciphertext sizes. Our construction follows that of Gentry by producing a fully homomorphic scheme from a “somewhat” homomorphic scheme. For the somewhat homomorphic scheme the public and private keys consist of two large integers (one of which is shared by both the public and private key) and the ciphertext consists of one large integer. As such, our scheme has smaller message expansion and key size than Gentry’s original scheme. In addition, our proposal allows efficient fully homomorphic encryption over any field of characteristic two.
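The abstract above describes a somewhat-homomorphic scheme over the integers whose keys and ciphertexts are single large integers. The sketch below is not that paper's construction, but a toy symmetric integer scheme in the same spirit (DGHV-style): a ciphertext is the plaintext bit masked by small even noise plus a multiple of the secret key, and addition/multiplication of ciphertexts act as XOR/AND as long as the noise stays below the key. All parameter sizes are illustrative, not secure.

```python
import random

def keygen(bits=64):
    # Secret key: a random odd integer p with its top bit set (toy size; not secure).
    return (1 << (bits - 1)) | random.getrandbits(bits - 1) | 1

def encrypt(p, m, noise_bits=8, mult_bits=128):
    # c = m + 2r + p*q; (c mod p) mod 2 recovers m as long as m + 2r < p.
    r = random.getrandbits(noise_bits)
    q = random.getrandbits(mult_bits)
    return m + 2 * r + p * q

def decrypt(p, c):
    return (c % p) % 2

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)
# Homomorphic operations: adding ciphertexts XORs the bits, multiplying ANDs them
# (the noise grows with each operation; this toy version supports only a few levels).
assert decrypt(p, c0 + c1) == 1   # 0 XOR 1
assert decrypt(p, c0 * c1) == 0   # 0 AND 1
assert decrypt(p, c1 * c1) == 1   # 1 AND 1
```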
Floating-Point LLL Revisited
, 2005
"... The LenstraLenstraLovász lattice basis reduction algorithm (LLL or L³) is a very popular tool in publickey cryptanalysis and in many other fields. Given an integer ddimensional lattice basis with vectors of norm less than B in an ndimensional space, L³ outputs a socalled L³reduced basis in po ..."
Abstract

Cited by 37 (6 self)
 Add to MetaCart
The Lenstra–Lenstra–Lovász lattice basis reduction algorithm (LLL or L³) is a very popular tool in public-key cryptanalysis and in many other fields. Given an integer d-dimensional lattice basis with vectors of norm less than B in an n-dimensional space, L³ outputs a so-called L³-reduced basis in polynomial time O(d^5 n log³ B), using arithmetic operations on integers of bit-length O(d log B). This worst-case complexity is problematic for lattices arising in cryptanalysis, where d and/or log B are often large. As a result, the original L³ is almost never used in practice. Instead, one applies floating-point variants of L³, where the long-integer arithmetic required by Gram–Schmidt orthogonalisation (central in L³) is replaced by floating-point arithmetic. Unfortunately, this is known to be unstable in the worst case: the usual floating-point L³ is not even guaranteed to terminate, and the output basis may not be L³-reduced at all. In this article, we introduce the L² algorithm, a new and natural floating-point variant of L³ which provably outputs L³-reduced bases in polynomial time O(d^4 n(d + log B) log B). This is the first L³ algorithm whose running time (without fast integer arithmetic) provably grows only quadratically with respect to log B, like the well-known Euclidean and Gaussian algorithms, which it generalizes.
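For reference, a minimal textbook L³ with exact rational arithmetic (the slow-but-stable baseline the abstract contrasts with floating-point variants) can be sketched as follows; it recomputes the Gram–Schmidt data wholesale after each update, which is far less efficient than real implementations but keeps the logic transparent.

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction with exact rational arithmetic.
    basis: list of linearly independent integer vectors; returns a delta-LLL-reduced basis."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        # Gram–Schmidt vectors b*_i and coefficients mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>.
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = list(b[i])
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    bstar, mu = gram_schmidt()
    k = 1
    while k < n:
        # Size-reduce b_k against b_{k-1}, ..., b_0.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt()  # keep mu exact after the update
        # Lovász condition; swap and step back if it fails.
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            bstar, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

reduced = lll([[4, 1], [1, 1]])
```

On the 2-dimensional example, the reduced basis starts with a lattice vector of squared norm 2 (the first minimum) and preserves the lattice determinant (here ±3).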
Lattice-based Cryptography
, 2008
"... In this chapter we describe some of the recent progress in latticebased cryptography. Latticebased cryptographic constructions hold a great promise for postquantum cryptography, as they enjoy very strong security proofs based on worstcase hardness, relatively efficient implementations, as well a ..."
Abstract

Cited by 36 (5 self)
 Add to MetaCart
In this chapter we describe some of the recent progress in lattice-based cryptography. Lattice-based cryptographic constructions hold great promise for post-quantum cryptography, as they enjoy very strong security proofs based on worst-case hardness, relatively efficient implementations, as well as great simplicity. In addition, lattice-based cryptography is believed to be secure against quantum computers. Our focus here …
LLL on the Average
, 2006
"... Despite their popularity, lattice reduction algorithms remain mysterious in many ways. It has been widely reported that they behave much more nicely than what was expected from the worstcase proved bounds, both in terms of the running time and the output quality. In this article, we investigate t ..."
Abstract

Cited by 34 (10 self)
 Add to MetaCart
Despite their popularity, lattice reduction algorithms remain mysterious in many ways. It has been widely reported that they behave much more nicely than what was expected from the worst-case proved bounds, both in terms of the running time and the output quality. In this article, we investigate this puzzling statement by trying to model the average case of lattice reduction algorithms, starting with the celebrated Lenstra–Lenstra–Lovász algorithm (L³). We discuss what is meant by lattice reduction on the average, and we present extensive experiments on the average case behavior of L³, in order to give a clearer picture of the differences/similarities between the average and worst cases. Our work is intended to clarify the practical behavior of L³ and to raise theoretical questions on its average behavior.
The Insecurity of the Elliptic Curve Digital Signature Algorithm with Partially Known Nonces
 Designs, Codes and Cryptography
, 2000
"... Nguyen and Shparlinski recently presented a polynomialtime algorithm that provably recovers the signer's secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of ..."
Abstract

Cited by 34 (10 self)
 Add to MetaCart
Nguyen and Shparlinski recently presented a polynomial-time algorithm that provably recovers the signer's secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log^{1/2} q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart, who introduced the topic. Here, we obtain similar results for the elliptic curve variant of DSA (ECDSA).
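The lattice attack summarized above needs only a few nonce bits per signature; the extreme case, a single fully known nonce, already reveals the key by pure algebra, since the (EC)DSA signing equation s = k⁻¹(h + x·r) mod q can be solved for x. A minimal sketch over a toy group order (all constants here are illustrative, and r is modelled as a given value rather than derived from a curve point):

```python
# Toy DSA-style signing over Z_q; in real (EC)DSA r comes from g^k mod p
# (resp. the x-coordinate of kG), which we do not model here.
q = 2**31 - 1  # a prime standing in for the group order (toy size)

def sign(x, k, h, r):
    # The (EC)DSA signing equation: s = k^{-1} (h + x r) mod q.
    return pow(k, -1, q) * (h + x * r) % q

x = 123456789          # signer's secret key (hypothetical)
k = 987654321          # per-signature random nonce
h = 55555              # message hash
r = 77777              # first signature half (modelled as a constant)
s = sign(x, k, h, r)

# If the nonce k ever leaks, the secret key falls straight out of the equation:
recovered = (s * k - h) * pow(r, -1, q) % q
assert recovered == x
```

The results in the paper show that far less than full knowledge of k suffices, by assembling many partially-known-nonce equations into a closest-vector instance.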
A Deterministic Single Exponential Time Algorithm for Most Lattice Problems based on Voronoi Cell Computations (Extended Abstract)
, 2009
"... We give deterministic 2O(n)time algorithms to solve all the most important computational problems on point lattices in NP, including the Shortest Vector Problem (SVP), Closest Vector Problem (CVP), and Shortest Independent Vectors Problem (SIVP). This improves the nO(n) running time of the best pre ..."
Abstract

Cited by 31 (2 self)
 Add to MetaCart
We give deterministic 2^O(n)-time algorithms to solve all the most important computational problems on point lattices in NP, including the Shortest Vector Problem (SVP), Closest Vector Problem (CVP), and Shortest Independent Vectors Problem (SIVP). This improves the n^O(n) running time of the best previously known algorithms for CVP (Kannan, Math. Operations Research 12(3):415–440, 1987) and SIVP (Micciancio, Proc. of SODA, 2008), and gives a deterministic alternative to the 2^O(n)-time (and space) randomized algorithm for SVP of Ajtai, Kumar and Sivakumar (STOC 2001). The core of our algorithm is a new method to solve the closest vector problem with preprocessing (CVPP) that uses the Voronoi cell of the lattice (described as an intersection of half-spaces) as the result of the preprocessing function. In the process, we also give algorithms for several other lattice problems, including computing the kissing number of a lattice and computing the set of all Voronoi-relevant vectors. All our algorithms are deterministic, and have 2^O(n) time and space complexity.
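To make concrete what these exponential-time algorithms improve on, the naive baseline for SVP simply enumerates small integer coefficient vectors, which is only feasible in tiny dimension; the sketch below (with a hypothetical coefficient bound) is that baseline, not the Voronoi-cell method of the paper.

```python
from itertools import product

def shortest_vector(basis, bound=5):
    """Naive SVP by enumerating coefficient vectors in [-bound, bound]^n.
    Correct only if the shortest vector has coefficients within the bound;
    real algorithms (Kannan's n^O(n) enumeration, the Voronoi-cell 2^O(n)
    method above) avoid this blowup."""
    n, m = len(basis), len(basis[0])
    best, best_norm = None, None
    for coeffs in product(range(-bound, bound + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue  # skip the zero vector
        v = [sum(c * basis[i][j] for i, c in enumerate(coeffs)) for j in range(m)]
        norm = sum(x * x for x in v)
        if best_norm is None or norm < best_norm:
            best, best_norm = v, norm
    return best, best_norm

v, norm2 = shortest_vector([[4, 1], [1, 1]])  # shortest vector has squared norm 2
```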
Paillier's Cryptosystem Revisited
 In ACM Conference on Computer and Communications Security, 2001
, 2001
"... We reexamine Paillier's cryptosystem, and show that by choosing a particular discrete log base g, and by introducing an alternative decryption procedure, we can extend the scheme to allow an arbitrary exponent e instead of N. The use of low exponents substantially increases the eciency of the schem ..."
Abstract

Cited by 29 (4 self)
 Add to MetaCart
We re-examine Paillier's cryptosystem, and show that by choosing a particular discrete log base g, and by introducing an alternative decryption procedure, we can extend the scheme to allow an arbitrary exponent e instead of N. The use of low exponents substantially increases the efficiency of the scheme. The semantic security is now based on a new decisional assumption, namely the hardness of deciding whether an element is a "small" e-th residue modulo N². We also …
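As background for the variant discussed above, the original Paillier scheme with the standard base g = n + 1 can be sketched as follows (toy primes for illustration; real keys use primes of roughly 1024 bits). Its additive homomorphism, ciphertext multiplication decrypting to plaintext addition, is what makes it attractive in the first place.

```python
import math
import random

def L(u, n):
    # The "L function": L(u) = (u - 1) / n for u = 1 mod n.
    return (u - 1) // n

def keygen():
    p, q = 1009, 1013                       # toy primes; insecure sizes
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lambda = lcm(p-1, q-1)
    g = n + 1                               # the standard base choice
    mu = pow(L(pow(g, lam, n * n), n), -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)              # must be coprime to n
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return L(pow(c, lam, n * n), n) * mu % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 41), encrypt(pk, 1)
# Additive homomorphism: E(m1) * E(m2) mod n^2 decrypts to m1 + m2.
assert decrypt(pk, sk, c1 * c2 % (pk[0] ** 2)) == 42
```

The paper's variant replaces the exponent N in encryption with a small e, trading the decisional composite residuosity assumption for a "small e-th residue" assumption to gain efficiency.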
Low-dimensional lattice basis reduction revisited (Extended Abstract)
 Lecture Notes in Computer Science, 3076: 338–357, 2004
, 2004
"... Most of the interesting algorithmic problems in the geometry of numbers are NPhard as the lattice dimension increases. This article deals with the lowdimensional case. We study a greedy lattice basis reduction algorithm for the Euclidean norm, which is arguably the most natural lattice basis red ..."
Abstract

Cited by 25 (3 self)
 Add to MetaCart
Most of the interesting algorithmic problems in the geometry of numbers are NP-hard as the lattice dimension increases. This article deals with the low-dimensional case. We study a greedy lattice basis reduction algorithm for the Euclidean norm, which is arguably the most natural lattice basis reduction algorithm, because it is a straightforward generalization of the well-known two-dimensional Gaussian algorithm. Our results are twofold. From a mathematical point of view, we show that up to dimension four, the output of the greedy algorithm is optimal: the output basis reaches all the successive minima of the lattice. However, as soon as the lattice dimension is strictly higher than four, the output basis may not even reach the first minimum. More importantly, from a computational point of view, we show that up to dimension four, the bit-complexity of the greedy algorithm is quadratic without fast integer arithmetic: this allows one to compute various lattice problems (e.g. computing a Minkowski-reduced basis and a closest vector) in quadratic time, without fast integer arithmetic, up to dimension four, while all other algorithms known for such problems have a bit-complexity which is at least cubic. This was already proved by Semaev up to dimension three using rather technical means, but it was previously unknown whether or not the algorithm was still polynomial in dimension four. Our analysis, based on geometric properties of low-dimensional lattices and in particular Voronoï cells, arguably simplifies Semaev’s analysis in dimensions two and three, unifies the cases of dimensions two, three and four, but breaks down in dimension five.
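The two-dimensional Gaussian (Lagrange) algorithm that the greedy method generalizes is short enough to state in full: repeatedly subtract from the longer vector the nearest integer multiple of the shorter one, swapping when the roles reverse. A minimal integer-only sketch:

```python
def gauss_reduce(u, v):
    """Two-dimensional Gaussian (Lagrange) reduction.
    Returns a basis (b1, b2) of the same lattice with b1 a shortest
    nonzero vector and b2 reaching the second minimum."""
    norm = lambda w: w[0] * w[0] + w[1] * w[1]
    if norm(u) < norm(v):
        u, v = v, u  # maintain norm(u) >= norm(v)
    while True:
        # q = nearest integer to <u, v> / <v, v>, via floor((2t + |v|^2) / (2|v|^2)).
        t = u[0] * v[0] + u[1] * v[1]
        q = (2 * t + norm(v)) // (2 * norm(v))
        u = (u[0] - q * v[0], u[1] - q * v[1])
        if norm(u) >= norm(v):
            return v, u  # u can no longer be shortened: done
        u, v = v, u      # otherwise swap and continue

b1, b2 = gauss_reduce((4, 1), (1, 1))  # b1 has squared norm 2, b2 squared norm 5
```

The norms strictly decrease at every swap, which is what gives the algorithm its (quadratic, in the sense discussed above) running time.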
Faster exponential time algorithms for the shortest vector problem
 Electronic Colloquium on Computational Complexity
, 2009
"... We present new faster algorithms for the exact solution of the shortest vector problem in arbitrary lattices. Our main result shows that the shortest vector in any ndimensional lattice can be found in time 2 3.199n and space 2 1.325n. This improves the best previously known algorithm by Ajtai, Kuma ..."
Abstract

Cited by 17 (2 self)
 Add to MetaCart
We present new faster algorithms for the exact solution of the shortest vector problem in arbitrary lattices. Our main result shows that the shortest vector in any n-dimensional lattice can be found in time 2^(3.199n) and space 2^(1.325n). This improves the best previously known algorithm by Ajtai, Kumar and Sivakumar [Proceedings of STOC 2001], which was shown by Nguyen and Vidick [J. Math. Crypto. 2(2):181–207] to run in time 2^(5.9n) and space 2^(2.95n). We also present a practical variant of our algorithm which provably uses an amount of space proportional to τ_n, the “kissing” constant in dimension n. Based on the best currently known upper and lower bounds on the kissing constant, the space complexity of our second algorithm is provably bounded by 2^(0.41n), and it is likely to be at most 2^(0.21n) in practice. No upper bound on the running time of our second algorithm is currently known, but experimentally the algorithm seems to perform fairly well in practice, with running time 2^(0.48n) and space complexity 2^(0.18n).
Learning a Parallelepiped: Cryptanalysis of GGH and NTRU Signatures
 Proceedings of Eurocrypt ’06
, 2006
"... Latticebased signature schemes following the GoldreichGoldwasserHalevi (GGH) design have the unusual property that each signature leaks information on the signer’s secret key, but this does not necessarily imply that such schemes are insecure. At Eurocrypt ’03, Szydlo proposed a potential attack ..."
Abstract

Cited by 17 (5 self)
 Add to MetaCart
Lattice-based signature schemes following the Goldreich–Goldwasser–Halevi (GGH) design have the unusual property that each signature leaks information on the signer’s secret key, but this does not necessarily imply that such schemes are insecure. At Eurocrypt ’03, Szydlo proposed a potential attack by showing that the leakage reduces the key-recovery problem to that of distinguishing integral quadratic forms. He proposed a heuristic method to solve the latter problem, but it was unclear whether his method could attack real-life parameters of GGH and NTRUSign. Here, we propose an alternative method to attack signature schemes à la GGH, by studying the following learning problem: given many random points uniformly distributed over an unknown n-dimensional parallelepiped, recover the parallelepiped or an approximation thereof. We transform this problem into a multivariate optimization problem that can provably be solved by a gradient descent. Our approach is very effective in practice: we present the first successful key-recovery experiments on NTRUSign-251 without perturbation, as proposed in half of the parameter choices in NTRU standards under consideration by IEEE P1363.1. Experimentally, 400 signatures are sufficient to recover the NTRUSign-251 secret key, thanks to symmetries in NTRU lattices. We are also able to recover the secret key in the signature analogue of all the GGH encryption challenges.
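A first step in attacks of this kind is purely statistical: if each leaked sample is x = B·u with u uniform in [-1, 1]^n and B the hidden secret basis, then E[x xᵀ] = (1/3) B Bᵀ, so the samples' second moment already reveals the Gram matrix of the secret basis (the attack then deforms the parallelepiped into a hypercube and runs a gradient descent on a fourth moment, which is not shown here). A hypothetical 2-dimensional illustration of that second-moment step:

```python
import random

random.seed(1)
# Hidden secret basis B (hypothetical 2x2 example); the attacker sees only samples.
B = [[3.0, 1.0], [1.0, 2.0]]

def sample():
    # Each leaked "signature" is a uniform point of the parallelepiped: x = B u.
    u = [random.uniform(-1, 1) for _ in range(2)]
    return [B[0][0] * u[0] + B[0][1] * u[1],
            B[1][0] * u[0] + B[1][1] * u[1]]

# Estimate the second moment E[x x^T] from many samples.
N = 200_000
acc = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(N):
    x = sample()
    for i in range(2):
        for j in range(2):
            acc[i][j] += x[i] * x[j]
cov = [[acc[i][j] / N for j in range(2)] for i in range(2)]
# Expected value is B B^T / 3 = [[10/3, 5/3], [5/3, 5/3]], up to sampling error.
```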