Results 1–10 of 11
Floating-Point LLL Revisited
, 2005
Cited by 37 (6 self)
The Lenstra-Lenstra-Lovász lattice basis reduction algorithm (LLL or L³) is a very popular tool in public-key cryptanalysis and in many other fields. Given an integer d-dimensional lattice basis with vectors of norm less than B in an n-dimensional space, L³ outputs a so-called L³-reduced basis in polynomial time O(d⁵ n log³ B), using arithmetic operations on integers of bit-length O(d log B). This worst-case complexity is problematic for lattices arising in cryptanalysis, where d and/or log B are often large. As a result, the original L³ is almost never used in practice. Instead, one applies floating-point variants of L³, in which the long-integer arithmetic required by Gram-Schmidt orthogonalisation (central in L³) is replaced by floating-point arithmetic. Unfortunately, this is known to be unstable in the worst case: the usual floating-point L³ is not even guaranteed to terminate, and the output basis may not be L³-reduced at all. In this article, we introduce the L² algorithm, a new and natural floating-point variant of L³ which provably outputs L³-reduced bases in polynomial time O(d⁴ n (d + log B) log B). This is the first L³ algorithm whose running time (without fast integer arithmetic) provably grows only quadratically with respect to log B, like the well-known Euclidean and Gaussian algorithms, which it generalizes.
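The reduction notion discussed in this abstract can be made concrete with a toy implementation. The sketch below is textbook LLL with exact rational arithmetic, so it deliberately sidesteps the floating-point stability issues the paper addresses; it is not the L² algorithm itself, and all function names are ours.

```python
from fractions import Fraction

def gram_schmidt(basis):
    """Gram-Schmidt orthogonalisation over the rationals.

    Returns the orthogonal vectors b*_i and the coefficients
    mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>.
    """
    n = len(basis)
    ortho = [[Fraction(x) for x in v] for v in basis]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            num = sum(Fraction(basis[i][k]) * ortho[j][k] for k in range(len(basis[i])))
            den = sum(c * c for c in ortho[j])
            mu[i][j] = num / den
            ortho[i] = [a - mu[i][j] * b for a, b in zip(ortho[i], ortho[j])]
    return ortho, mu

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction (exact but slow: the long-integer growth in
    the Gram-Schmidt data is precisely what floating-point variants avoid)."""
    b = [list(v) for v in basis]
    norm2 = lambda v: sum(c * c for c in v)
    k = 1
    while k < len(b):
        ortho, mu = gram_schmidt(b)
        # Size-reduce b_k against b_{k-1}, ..., b_0.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        ortho, mu = gram_schmidt(b)
        # Lovász condition decides whether to advance or swap.
        if norm2(ortho[k]) >= (delta - mu[k][k - 1] ** 2) * norm2(ortho[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return b
```

The quadratic-time claim of the paper concerns a very different implementation strategy; this sketch only pins down what "L³-reduced" means operationally.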
Factoring Polynomials and the Knapsack Problem.
Cited by 37 (12 self)
Although a polynomial-time algorithm exists, the most commonly used algorithm for factoring a univariate polynomial f with integer coefficients is the Berlekamp-Zassenhaus algorithm, whose complexity depends exponentially on n, where n is the number of modular factors of f. This exponential time complexity is due to a combinatorial problem: the problem of choosing the right subsets of these n factors. In this paper we reduce this combinatorial problem to a knapsack problem of a kind that can be solved with polynomial-time algorithms such as LLL or PSLQ. The result is a practical algorithm that can factor large polynomials even when n is large as well.
1 Introduction. Let f be a polynomial of degree N with integer coefficients, f = ∑_{i=0}^{N} a_i x^i with a_i ∈ ℤ. Assume that f is monic (i.e. a_N = 1) and that f is squarefree (no multiple roots), so the gcd of f and f′ equals 1. Let p be a prime number and let F_p = ℤ/(p) be the field with p elements. Let ℤ_p ...
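The combinatorial step described here, choosing the right subset of modular factors, can be illustrated with a toy brute-force recombination. The sketch below assumes p is already large enough that a true factor's coefficients lie in (-p/2, p/2] (real Berlekamp-Zassenhaus first Hensel-lifts to a large prime power); all names are illustrative, and the exponential subset loop is exactly the cost the paper's knapsack reduction removes.

```python
from itertools import combinations

def poly_mul(a, b, p):
    """Multiply coefficient lists (lowest degree first) modulo p."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return res

def center(a, p):
    """Lift residues mod p to the symmetric range (-p/2, p/2]."""
    return [c - p if c > p // 2 else c for c in a]

def divides(g, f):
    """Does the monic polynomial g divide f exactly over the integers?"""
    r = list(f)
    while len(r) >= len(g):
        c = r[-1]                    # g is monic, so c is the quotient coefficient
        shift = len(r) - len(g)
        for i, gi in enumerate(g):
            r[shift + i] -= c * gi
        r.pop()                      # leading coefficient is now zero
        while r and r[-1] == 0:
            r.pop()
    return not r

def recombine(mod_factors, f, p):
    """Brute-force the combinatorial step: try subsets of the modular
    factors until one multiplies out (after centering mod p) to a true
    integer factor of f. Exponential in the number of modular factors."""
    for size in range(1, len(mod_factors)):
        for subset in combinations(mod_factors, size):
            g = [1]
            for h in subset:
                g = poly_mul(g, h, p)
            g = center(g, p)
            if divides(g, f):
                return g
    return None
```

For example, x⁴ + 3x² + 2 factors mod 5 into x+2, x+3 and x²+2, and the subset {x²+2} already yields a true integer factor.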
An LLL-reduction algorithm with quasi-linear time complexity
, 2010
Cited by 10 (4 self)
We devise an algorithm, L̃¹, with the following specifications: it takes as input an arbitrary basis B = (b_i)_i ∈ ℤ^(d×d) of a Euclidean lattice L; it computes a basis of L which is reduced for a mild modification of the Lenstra-Lenstra-Lovász reduction; it terminates in time O(d^(5+ε) β + d^(ω+1+ε) β^(1+ε)), where β = log max_i ‖b_i‖ (for any ε > 0, with ω a valid exponent for matrix multiplication). This is the first LLL-reducing algorithm with a time complexity that is quasi-linear in β and polynomial in d. The backbone structure of L̃¹ is able to mimic the Knuth-Schönhage fast gcd algorithm thanks to a combination of cutting-edge ingredients. First, the bit-size of our lattice bases can be decreased via truncations whose validity is backed by recent numerical stability results on the QR matrix factorization. Also, we establish a new framework for analyzing unimodular transformation matrices which reduce shifts of reduced bases; this includes bit-size control and new perturbation tools. We illustrate the power of this framework by generating a family of reduction algorithms.
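One ingredient mentioned in this abstract, decreasing the bit-size of lattice bases via truncation, can be illustrated in a few lines. This naive sketch just drops shared low-order bits; the paper's algorithm pairs such truncations with certified perturbation bounds, and the helper name is ours.

```python
def truncate_basis(basis, keep_bits):
    """Naive bit-size reduction: shift every entry right so that roughly
    the top `keep_bits` bits of the largest entry survive. (Right shift
    floors, so negative entries round toward minus infinity.)"""
    top = max(abs(x) for row in basis for x in row).bit_length()
    shift = max(top - keep_bits, 0)
    return [[x >> shift for x in row] for row in basis], shift
```

The returned shift lets a caller relate lengths in the truncated basis back to lengths in the original one.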
Certification of the QR Factor R, and of Lattice Basis Reducedness
Cited by 5 (1 self)
Given a lattice basis of n vectors in ℤⁿ, we propose an algorithm using 12n³ + O(n²) floating-point operations for checking whether the basis is LLL-reduced. If the basis is reduced, then the algorithm will hopefully answer "yes". If the basis is not reduced, or if the precision used is not sufficient with respect to n and to the numerical properties of the basis, the algorithm will answer "failed". Hence a positive answer is a rigorous certificate. For implementing the certificate itself, we propose a floating-point algorithm for computing (certified) error bounds for the R factor of the QR factorization. This algorithm takes into account all possible approximation and rounding errors. The certificate may be implemented using matrix library routines only. We report experiments that show that for a reduced basis of adequate dimension and quality the certificate succeeds, and establish the effectiveness of the certificate. This effectiveness is applied to certifying the output of the fastest existing floating-point heuristics for LLL reduction, without slowing down the whole process.
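The conditions being certified here are the standard LLL conditions on the Gram-Schmidt data. As a point of reference, the sketch below checks them exactly with rational arithmetic, the slow route that a floating-point certificate with error bounds is designed to avoid; the function name is ours.

```python
from fractions import Fraction

def is_lll_reduced(basis, delta=Fraction(3, 4)):
    """Exact check of the LLL conditions: size reduction |mu_{i,j}| <= 1/2
    and the Lovász condition with parameter delta. Rational arithmetic makes
    the answer rigorous but expensive compared with a certified
    floating-point test."""
    n = len(basis)
    ortho = [[Fraction(x) for x in v] for v in basis]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            num = sum(Fraction(basis[i][k]) * ortho[j][k] for k in range(len(basis[i])))
            den = sum(c * c for c in ortho[j])
            mu[i][j] = num / den
            ortho[i] = [a - mu[i][j] * b for a, b in zip(ortho[i], ortho[j])]
    norm2 = lambda v: sum(c * c for c in v)
    for i in range(n):
        for j in range(i):
            if abs(mu[i][j]) > Fraction(1, 2):
                return False
        if i and norm2(ortho[i]) < (delta - mu[i][i - 1] ** 2) * norm2(ortho[i - 1]):
            return False
    return True
```

A "yes" from such an exact check is trivially a certificate; the paper's contribution is obtaining the same rigor from roughly 12n³ floating-point operations.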
HLLL: Using Householder Inside LLL
Cited by 5 (3 self)
We describe a new LLL-type algorithm, HLLL, that relies on Householder transformations to approximate the underlying Gram-Schmidt orthogonalizations. The latter computations are performed with floating-point arithmetic. We prove that a precision essentially equal to the dimension suffices to ensure that the output basis is reduced. HLLL resembles the L² algorithm of Nguyen and Stehlé, which relies on a floating-point Cholesky algorithm. However, replacing Cholesky's algorithm by Householder's is not benign, as their numerical behaviors differ significantly. Broadly speaking, our correctness proof is more involved, whereas our complexity analysis is more direct. Thanks to the new orthogonalization strategy, HLLL is the first LLL-type algorithm that admits a natural vectorial description, which leads to a complexity upper bound that is proportional to the progress performed on the basis (for fixed dimensions).
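A minimal Householder computation of the R factor, in plain floating point, looks as follows. This is only a sketch of the kind of orthogonalisation HLLL builds on, not the paper's algorithm; the function name is ours, and no rounding-error analysis is attempted.

```python
import math

def householder_r(basis):
    """Compute the R factor of the QR factorisation of the matrix whose
    columns are the given (row-listed) basis vectors, using Householder
    reflections. The magnitudes of the diagonal entries of R equal the
    Gram-Schmidt norms ||b*_i||."""
    m, n = len(basis[0]), len(basis)
    # A is m x n with the basis vectors as columns.
    A = [[float(basis[j][i]) for j in range(n)] for i in range(m)]
    for k in range(min(m, n)):
        # Householder vector zeroing A[k+1:, k].
        x = [A[i][k] for i in range(k, m)]
        alpha = -math.copysign(math.hypot(*x), x[0] if x[0] else 1.0)
        v = x[:]
        v[0] -= alpha
        vnorm2 = sum(c * c for c in v)
        if vnorm2 == 0:
            continue
        # Apply the reflection I - 2 v v^T / <v, v> to the trailing columns.
        for j in range(k, n):
            dot = sum(v[i] * A[k + i][j] for i in range(len(v)))
            for i in range(len(v)):
                A[k + i][j] -= 2 * dot * v[i] / vnorm2
    # Return the upper-triangular n x n block.
    return [[A[i][j] if j >= i else 0.0 for j in range(n)] for i in range(n)]
```

For the basis (3,4), (0,5) the diagonal of R has magnitudes 5 and 3, matching ||b₀|| and ||b₁*||.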
Progress on LLL and lattice reduction
 Proceedings LLL+25
Cited by 4 (3 self)
We survey variants and extensions of the LLL algorithm of Lenstra, Lenstra and Lovász, extensions to quadratic indefinite forms, and faster and stronger reduction algorithms. The LLL algorithm with Householder orthogonalisation in floating-point arithmetic is very efficient and highly accurate. We survey approximations of the shortest lattice vector by feasible lattice reduction, in particular by block reduction, primal-dual reduction and random sampling reduction. Segment reduction performs LLL reduction in high dimension while mostly working with a few local coordinates.
Keywords. LLL reduction, Householder orthogonalisation, floating-point arithmetic, block reduction, segment reduction, primal-dual reduction, sampling reduction, reduction of indefinite quadratic forms.
Floating-point LLL revisited
 In Advances in cryptology – Eurocrypt 2005, LNCS 3494, p. 215–233
, 2005
Cited by 1 (0 self)
Before the jury composed of
Chargé de recherche at CNRS. Habilitation thesis (mémoire d'habilitation à diriger des recherches), presented on 14 October 2011, following the referees' reports.
Gradual sublattice reduction and a new complexity for factoring polynomials
 In LATIN 2010, Oaxaca, Mexico
, 2010
Author manuscript.