Results 1–5 of 5
GGHLite: More Efficient Multilinear Maps from Ideal Lattices
Cited by 26 (5 self)
Abstract. The GGH Graded Encoding Scheme [10], based on ideal lattices, is the first plausible approximation to a cryptographic multilinear map. Unfortunately, using the security analysis in [10], the scheme requires very large parameters to provide security for its underlying "encoding rerandomization" process. Our main contributions are to formalize, simplify and improve the efficiency and the security analysis of the rerandomization process in the GGH construction. This results in a new construction that we call GGHLite. In particular, we first lower the size of a standard deviation parameter of the rerandomization process of [10] from exponential to polynomial in the security parameter. This first improvement is obtained via a finer security analysis of the "drowning" step of rerandomization, in which we apply the Rényi divergence instead of the conventional statistical distance as a measure of distance between distributions. Our second improvement is to reduce the number of randomizers needed from Ω(n log n) to 2, where n is the dimension of the underlying ideal lattices. These two contributions allow us to decrease the bit size of the public parameters from O(λ^5 log λ) for the GGH scheme to O(λ log^2 λ) in GGHLite, with respect to the security parameter λ (for a constant multilinearity parameter κ).
A note on discrete Gaussian combinations of lattice vectors, 2013. Draft. Available at http://arxiv.org/pdf/1308.2405v1.pdf
Tighter security for efficient lattice cryptography via the Rényi divergence of optimized orders
Miyaji (Eds.), ProvSec 2014, Vol. 9451 of LNCS, 2015
Cited by 1 (0 self)
Abstract. In security proofs of lattice-based cryptography, bounding the closeness of two probability distributions is an important procedure. To measure the closeness, the Rényi divergence has been used instead of the classical statistical distance. Recent results have shown that the Rényi divergence offers security reductions with better parameters, e.g. smaller deviations for discrete Gaussian distributions. However, since previous analyses used a fixed-order Rényi divergence, i.e., order two, they lost tightness of reductions. To overcome this deficiency, we adaptively optimize the orders based on the advantages of the adversary for several lattice-based schemes. The optimizations enable us to prove security with both improved efficiency and tighter reductions. Indeed, our analysis offers security reductions with smaller parameters than the statistical-distance-based analysis, and the reductions are tighter than those of previous Rényi-divergence-based analyses. As applications, we show tighter security reductions for sampling discrete Gaussian distributions with smaller precomputed tables for the Bimodal Lattice Signature Scheme (BLISS), and for the variants of the learning with errors (LWE) problem and the small integer solution (SIS) problem called k-LWE and k-SIS, respectively.
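For reference (this is not part of the listing itself), both abstracts above rely on the Rényi divergence of order a as a replacement for the statistical distance. One common convention in the lattice-cryptography literature, for discrete distributions P and Q with Supp(P) ⊆ Supp(Q) and order a > 1, is the exponentiated form:

```latex
R_a(P \,\|\, Q) \;=\; \left( \sum_{x \in \mathrm{Supp}(P)} \frac{P(x)^a}{Q(x)^{a-1}} \right)^{\frac{1}{a-1}}
```

Conventions differ across papers (some take the logarithm of this quantity). The limiting cases are R_∞(P‖Q) = max_x P(x)/Q(x), and as a → 1 one recovers the exponential of the Kullback–Leibler divergence; choosing a between these extremes, as a function of the adversary's advantage, is exactly the order optimization the second abstract describes.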
Des. Codes Cryptogr., DOI 10.1007/s10623-013-9864-x
On the complexity of the BKW algorithm on LWE
Abstract. This work presents a study of the complexity of the Blum–Kalai–Wasserman (BKW) algorithm when applied to the Learning with Errors (LWE) problem, by providing refined estimates for the data and computational effort requirements for solving concrete instances of the LWE problem. We apply this refined analysis to suggested parameters for various LWE-based cryptographic schemes from the literature and compare with alternative approaches based on lattice reduction. As a result, we provide new upper bounds for the concrete hardness of these LWE-based schemes. Rather surprisingly, it appears that the BKW algorithm outperforms known estimates for lattice reduction algorithms starting in dimension n ≈ 250 when LWE is reduced to SIS. However, this assumes access to an unbounded number of LWE samples. Communicated by R. Steinwandt.
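To make the object of the abstract's complexity analysis concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of a single BKW reduction step on LWE samples (a, b): samples agreeing on their first block of coordinates are subtracted pairwise, zeroing that block at the cost of consuming samples and doubling the noise.

```python
def bkw_reduce(samples, q, block_size):
    """One BKW reduction step (illustrative sketch).

    Each sample is (a, b) with a a list of Z_q entries and b in Z_q.
    Samples that agree on their first `block_size` coordinates are
    collided: subtracting one from the other zeroes that block, which
    is then dropped. Each step consumes roughly half the samples and
    doubles the noise -- the trade-off the paper's estimates quantify.
    """
    table = {}       # first-block value -> stored sample to collide with
    reduced = []
    for a, b in samples:
        key = tuple(a[:block_size])
        if key in table:
            a2, b2 = table[key]
            diff = [(x - y) % q for x, y in zip(a, a2)]
            # First block of diff is all zeros by construction; drop it.
            reduced.append((diff[block_size:], (b - b2) % q))
        else:
            table[key] = (a, b)
    return reduced
```

Applying this step about ⌈n/block_size⌉ times leaves samples whose a-part is (near) zero, so b reduces to a noisy value of the zero combination that can be majority-decoded; the abstract's data and effort estimates count exactly the samples and operations these steps require.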