Lattice Basis Reduction: Improved Practical Algorithms and Solving Subset Sum Problems.
 Math. Programming
, 1993
Abstract
Cited by 327 (6 self)
We report on improved practical algorithms for lattice basis reduction. We propose a practical floating-point version of the L3 algorithm of Lenstra, Lenstra, Lovász (1982). We present a variant of the L3 algorithm with "deep insertions" and a practical algorithm for block Korkin-Zolotarev reduction, a concept introduced by Schnorr (1987). Empirical tests show that the strongest of these algorithms solves almost all subset sum problems with up to 66 random weights of arbitrary bit length within at most a few hours on a UNISYS 6000/70 or within a couple of minutes on a SPARC 1+ computer.
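The L3 reduction the abstract builds on can be illustrated with a short exact-arithmetic sketch. This is the textbook rational-arithmetic version with the classic δ = 3/4 parameter, chosen here for clarity; the paper's actual contributions (the floating-point variant, deep insertions, block Korkin-Zolotarev) are not shown.

```python
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction over exact rationals (a sketch, not the paper's variant)."""
    b = [[Fraction(x) for x in row] for row in basis]
    n = len(b)

    def gram_schmidt():
        # Recompute the orthogonal basis b* and coefficients mu from scratch.
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = list(b[i])
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [vi - mu[i][j] * wj for vi, wj in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    bstar, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):      # size-reduce b_k against b_j
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt()
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                           # Lovász condition holds: advance
        else:
            b[k - 1], b[k] = b[k], b[k - 1]  # swap the offending pair and step back
            bstar, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in b]
```

For an integer input basis the output is again integral, and the LLL guarantee ‖b1‖² ≤ 2^(n−1)·λ1² bounds how short the first output vector must be.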
The NP-completeness column: an ongoing guide
 JOURNAL OF ALGORITHMS
, 1987
Abstract
Cited by 239 (0 self)
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., New York, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.) or open problems they would like publicized, should ...
The Hardness of Approximate Optima in Lattices, Codes, and Systems of Linear Equations
, 1993
Abstract
Cited by 170 (7 self)
We prove the following about the Nearest Lattice Vector Problem (in any ℓ_p norm), the Nearest Codeword Problem for binary codes, the problem of learning a halfspace in the presence of errors, and some other problems. 1. Approximating the optimum within any constant factor is NP-hard. 2. If for some ε > 0 there exists a polynomial-time algorithm that approximates the optimum within a factor of 2^(log^(0.5−ε) n), then every NP language can be decided in quasi-polynomial deterministic time, i.e., NP ⊆ DTIME(n^poly(log n)). Moreover, we show that result 2 also holds for the Shortest Lattice Vector Problem in the ℓ_∞ norm. Also, for some of these problems we can prove the same result as above, but for a larger factor such as 2^(log^(1−ε) n) or n^ε. Improving the factor 2^(log^(0.5−ε) n) to √dimension for either of the lattice problems would imply the hardness of the Shortest Vector Problem in the ℓ_2 norm, an old open problem. Our proofs use reductions from few-pr...
On Memory-Bound Functions for Fighting Spam
 In Crypto
, 2002
Abstract
Cited by 103 (2 self)
In 1992, Dwork and Naor proposed that email messages be accompanied by easy-to-check proofs of computational effort in order to discourage junk email, now known as spam. They proposed specific CPU-bound functions for this purpose. Burrows suggested that, since memory access speeds vary across machines much less than do CPU speeds, memory-bound functions may behave more equitably than CPU-bound functions; this approach was first explored by Abadi, Burrows, Manasse, and Wobber [8].
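The "proof of computational effort" idea can be made concrete with a hashcash-style partial hash inversion: minting a stamp costs many hash evaluations, verifying costs one. This is a generic CPU-bound stand-in for illustration only, not the specific Dwork-Naor functions nor the memory-bound functions the paper proposes, and the 12-bit difficulty is an arbitrary demo choice.

```python
import hashlib

DIFFICULTY_BITS = 12  # arbitrary demo difficulty (~4096 expected hashes)

def mint(message: str, bits: int = DIFFICULTY_BITS) -> int:
    """The sender's expensive step: search for a counter whose SHA-256
    digest, paired with the message, falls below a difficulty target."""
    target = 1 << (256 - bits)
    counter = 0
    while True:
        digest = hashlib.sha256(f"{message}:{counter}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return counter
        counter += 1

def verify(message: str, counter: int, bits: int = DIFFICULTY_BITS) -> bool:
    """The receiver's cheap step: checking a stamp costs a single hash."""
    digest = hashlib.sha256(f"{message}:{counter}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))
```

The asymmetry (thousands of hashes to mint, one to check) is exactly the property a spam deterrent needs; the paper's point is that hash speed, unlike memory latency, varies widely across machines.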
Predicting lattice reduction
 In Advances in Cryptology - EUROCRYPT 2008, 27th Annual International Conference on the Theory and Applications of Cryptographic Techniques
, 2008
Abstract
Cited by 97 (1 self)
Despite their popularity, lattice reduction algorithms remain mysterious cryptanalytical tools. Though it has been widely reported that they behave better than their proved worst-case theoretical bounds, no precise assessment has ever been given. Such an assessment would be very helpful to predict the behaviour of lattice-based attacks, as well as to select key sizes for lattice-based cryptosystems. The goal of this paper is to provide such an assessment, based on extensive experiments performed with the NTL library. The experiments suggest several conjectures on the worst case and the actual behaviour of lattice reduction algorithms. We believe the assessment might also help to design new reduction algorithms overcoming the limitations of current algorithms.
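A typical use of such an assessment is to predict the length of the first reduced basis vector from a per-dimension "root Hermite factor" c, via ‖b1‖ ≈ c^dim · vol(L)^(1/dim). The constant 1.0219 below is the value commonly quoted from these experiments for LLL; it is used here as an assumption for illustration, not a figure taken from this listing.

```python
def predicted_first_vector_norm(dim: int, log2_volume: float,
                                root_hermite: float = 1.0219) -> float:
    """Predict ||b1|| ~ c^dim * vol(L)^(1/dim) for a reduced basis.

    root_hermite c ~ 1.0219 is the experimentally reported LLL value
    (an assumption here); stronger reduction (e.g. BKZ) gives smaller c.
    """
    return root_hermite ** dim * 2.0 ** (log2_volume / dim)
```

For a 100-dimensional lattice of volume 1 this predicts a first vector of length about 1.0219^100 ≈ 8.7, noticeably better than what LLL's worst-case bound would allow one to conclude.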
Hardness of Approximating the Shortest Vector Problem in High Lp Norms
, 2003
Abstract
Cited by 90 (3 self)
We show that for every ε > 0, there is a constant p(ε) such that for all integers p ≥ p(ε), it is NP-hard to approximate the Shortest Vector Problem in the L_p norm within factor p^(1−ε) under randomized reductions. For large values of p, this improves the factor 2^(1/p) − δ hardness shown by Micciancio [27].
The two faces of lattices in cryptology
 In Proceedings of CaLC ’01
, 2001
A New Identification Scheme Based on Syndrome Decoding
, 1994
Abstract
Cited by 82 (8 self)
Zero-knowledge proofs were introduced in 1985, in a paper by Goldwasser, Micali and Rackoff ([6]). Their practical significance was soon demonstrated in the work of Fiat and Shamir ([4]), who turned zero-knowledge proofs of quadratic residuosity into efficient means of establishing user identities. Still, as is almost always the case in public-key cryptography, the Fiat-Shamir scheme relied on arithmetic operations on large numbers. In 1989, there were two attempts to build identification protocols that only use simple operations (see [11, 10]). One appeared in the EUROCRYPT proceedings and relies on the intractability of some coding problems; the other was presented at the CRYPTO rump session and depends on the so-called Permuted Kernel problem (PKP). Unfortunately, the first of these schemes was not really practical. In the present paper, we propose a new identification scheme, based on error-correcting codes, which is zero-knowledge and of practical value. Furthermore, we describe several variants, including one which has an identity-based character. The security of our scheme depends on the hardness of decoding a word of given syndrome w.r.t. some binary linear error-correcting code.
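The underlying hard problem can be stated in a few lines of code: given a binary parity-check matrix H and a syndrome s, find a low-weight error vector e with H·e = s over GF(2). The tiny matrix below is an illustrative toy instance, not a parameter set from the paper.

```python
def syndrome(H, e):
    """Compute H·e over GF(2): one parity bit per row of H."""
    return [sum(h * x for h, x in zip(row, e)) % 2 for row in H]

# Toy instance: the prover's secret is a low-weight e; the public key is s = H·e.
H = [
    [1, 0, 1, 1, 0, 1],
    [0, 1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1, 0],
]
secret_e = [1, 0, 0, 0, 0, 1]      # weight-2 error vector (kept secret)
public_s = syndrome(H, secret_e)    # published syndrome
```

Computing a syndrome is cheap, but recovering a low-weight preimage e from (H, s) is the hard decoding problem on which the scheme's zero-knowledge identification protocol rests.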
Lattice Reduction: a Toolbox for the Cryptanalyst
 Journal of Cryptology
, 1994
Abstract
Cited by 72 (9 self)
In recent years, methods based on lattice reduction have been used repeatedly for the cryptanalytic attack of various systems. Even if they do not rest on highly sophisticated theories, these methods may look a bit intricate to practically oriented cryptographers, from both the mathematical and the algorithmic point of view. The aim of the present paper is to explain what can be achieved by lattice reduction algorithms, even without an understanding of the actual mechanisms involved. Two examples are given, one of them being the attack devised by the second-named author against Knuth's truncated linear congruential generator, which was announced a few years ago and appears here for the first time in journal version.
Attacking the Chor-Rivest Cryptosystem by Improved Lattice Reduction
, 1995
Abstract
Cited by 72 (6 self)
We introduce algorithms for lattice basis reduction that are improvements of the famous L3 algorithm. If a random L3-reduced lattice basis b_1, ..., b_n is given such that the vector of reduced Gram-Schmidt coefficients ({μ_{i,j}}, 1 ≤ j < i ≤ n) is uniformly distributed in [0, 1)^(n choose 2), then pruned enumeration finds a shortest lattice vector with positive probability. We demonstrate the power of these algorithms by solving random subset sum problems of arbitrary density with 74 and 82 weights, by breaking the Chor-Rivest cryptoscheme in dimensions 103 and 151, and by breaking Damgård's hash function.
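The bridge from subset sum to lattice reduction that such attacks rely on can be sketched as the classic Lagarias-Odlyzko basis construction: a 0/1 solution x of the subset sum instance corresponds to a short vector in the lattice spanned by the rows below. The scale factor N and the demo weights are illustrative choices, not parameters from the paper.

```python
def subset_sum_basis(weights, target, N=None):
    """Build an (n+1)x(n+1) Lagarias-Odlyzko-style basis for a subset sum instance.

    Row i embeds weight a_i as (e_i, N*a_i); the last row carries the target.
    A 0/1 solution x gives the short lattice vector sum_i x_i*b_i - b_last = (x, 0).
    """
    n = len(weights)
    if N is None:
        N = n * max(weights)  # large scale forces the last coordinate of short vectors to 0
    rows = [[1 if j == i else 0 for j in range(n)] + [N * a]
            for i, a in enumerate(weights)]
    rows.append([0] * n + [N * target])
    return rows

# Demo: x = (1, 0, 1, 1, 0) solves 3 + 7 + 11 = 21.
weights, x, target = [3, 5, 7, 11, 13], [1, 0, 1, 1, 0], 21
B = subset_sum_basis(weights, target)
short = [sum(x[i] * B[i][j] for i in range(len(x))) - B[-1][j]
         for j in range(len(weights) + 1)]
# `short` is (x, 0): 0/1 entries and a zero last coordinate, hence very short.
```

Running a reduction algorithm on B and looking for a 0/1 vector with zero last coordinate is, in essence, how lattice attacks on subset sum (and on Chor-Rivest) recover the solution.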