Results 1 to 10 of 109
Lattice Basis Reduction: Improved Practical Algorithms and Solving Subset Sum Problems.
 Math. Programming
, 1993
"... We report on improved practical algorithms for lattice basis reduction. We propose a practical floating point version of the L3algorithm of Lenstra, Lenstra, Lov'asz (1982). We present a variant of the L3 algorithm with "deep insertions" and a practical algorithm for block KorkinZ ..."
Abstract

Cited by 213 (6 self)
 Add to MetaCart
We report on improved practical algorithms for lattice basis reduction. We propose a practical floating point version of the L3 algorithm of Lenstra, Lenstra, Lovász (1982). We present a variant of the L3 algorithm with "deep insertions" and a practical algorithm for block Korkin-Zolotarev reduction, a concept introduced by Schnorr (1987). Empirical tests show that the strongest of these algorithms solves almost all subset sum problems with up to 66 random weights of arbitrary bit length within at most a few hours on a UNISYS 6000/70 or within a couple of minutes on a SPARC 1+ computer.
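The L3 algorithm the authors build on can be stated compactly. The following is a minimal, unoptimized Python sketch of textbook LLL with delta = 3/4 over exact rational arithmetic; it is an illustration of the basic algorithm, not the paper's floating-point or deep-insertion variants, and the function names are mine.

```python
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(b):
    # Gram-Schmidt orthogonalization: returns b* and the coefficients mu[i][j]
    n = len(b)
    bstar = []
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in b[i]]
        for j in range(i):
            mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [vi - mu[i][j] * wj for vi, wj in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

def lll(basis, delta=Fraction(3, 4)):
    # Textbook LLL reduction of an integer basis (rows). Exact arithmetic,
    # so slow but correct; practical versions use floating point instead.
    b = [list(row) for row in basis]
    n = len(b)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size-reduce b[k] against b[j]
            _, mu = gram_schmidt(b)
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt(b)
        # Lovasz condition: keep b[k] in place, or swap it down and backtrack
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return b
```

Recomputing the Gram-Schmidt data from scratch on every step keeps the sketch short; real implementations update mu incrementally.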
The NP-completeness column: an ongoing guide
 Journal of Algorithms
, 1985
"... This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NPcompleteness. The presentation is modeled on that used by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NPCompleteness,’ ’ W. H. Freeman & ..."
Abstract

Cited by 190 (0 self)
 Add to MetaCart
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., New York, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.) or open problems they would like publicized, should ...
The Hardness of Approximate Optima in Lattices, Codes, and Systems of Linear Equations
, 1993
"... We prove the following about the Nearest Lattice Vector Problem (in any `p norm), the Nearest Codeword Problem for binary codes, the problem of learning a halfspace in the presence of errors, and some other problems. 1. Approximating the optimum within any constant factor is NPhard. 2. If for some ..."
Abstract

Cited by 154 (7 self)
 Add to MetaCart
We prove the following about the Nearest Lattice Vector Problem (in any ℓ_p norm), the Nearest Codeword Problem for binary codes, the problem of learning a halfspace in the presence of errors, and some other problems. 1. Approximating the optimum within any constant factor is NP-hard. 2. If for some ε > 0 there exists a polynomial-time algorithm that approximates the optimum within a factor of 2^(log^(0.5−ε) n), then every NP language can be decided in quasi-polynomial deterministic time, i.e., NP ⊆ DTIME(n^poly(log n)). Moreover, we show that result 2 also holds for the Shortest Lattice Vector Problem in the ℓ_∞ norm. Also, for some of these problems we can prove the same result as above, but for a larger factor such as 2^(log^(1−ε) n) or n^ε. Improving the factor 2^(log^(0.5−ε) n) to √(dimension) for either of the lattice problems would imply the hardness of the Shortest Vector Problem in the ℓ_2 norm, an old open problem. Our proofs use reductions from few-pr...
An improved low-density subset sum algorithm
 in Advances in Cryptology: Proceedings of Eurocrypt '91
"... Abstract. The general subset sum problem is NPcomplete. However, there are two algorithms, one due to Brickell and the other to Lagarias and Odlyzko, which in polynomial time solve almost all subset sum problems of sufficiently low density. Both methods rely on basis reduction algorithms to find sh ..."
Abstract

Cited by 85 (14 self)
 Add to MetaCart
Abstract. The general subset sum problem is NP-complete. However, there are two algorithms, one due to Brickell and the other to Lagarias and Odlyzko, which in polynomial time solve almost all subset sum problems of sufficiently low density. Both methods rely on basis reduction algorithms to find short nonzero vectors in special lattices. The Lagarias-Odlyzko algorithm would solve almost all subset sum problems of density < 0.6463... in polynomial time if it could invoke a polynomial-time algorithm for finding the shortest nonzero vector in a lattice. This paper presents two modifications of that algorithm, either one of which would solve almost all problems of density < 0.9408... if it could find shortest nonzero vectors in lattices. These modifications also yield dramatic improvements in practice when they are combined with known lattice basis reduction algorithms. Key words: subset sum problems; knapsack cryptosystems; lattices; lattice basis reduction. Subject classifications: 11Y16.
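The lattice that such attacks hand to a basis reduction algorithm is easy to write down. Below is a Python sketch of the basis used by density-0.9408-style improvements, with every entry doubled so that the (1/2, ..., 1/2) row stays integral; the helper names and toy parameters are mine, not the paper's. A subset-sum solution corresponds to a lattice vector whose first n coordinates are all ±1 and whose last coordinate is 0, which a reduction algorithm may find as a short vector.

```python
def subset_sum_basis(weights, target, N=None):
    # (n+1) x (n+1) basis (rows) for the improved low-density attack,
    # scaled by 2 to keep all entries integral. N penalizes vectors
    # whose weighted sum misses the target; it needs to be large enough
    # relative to sqrt(n), and n itself is a safe toy choice.
    n = len(weights)
    if N is None:
        N = n
    basis = []
    for i, a in enumerate(weights):
        row = [0] * (n + 1)
        row[i] = 2            # doubled identity part
        row[n] = 2 * N * a    # weight column
        basis.append(row)
    basis.append([1] * n + [2 * N * target])  # the (1/2,...,1/2, N*s) row, doubled
    return basis

def solution_vector(basis, bits):
    # The short vector sum_i e_i*b_i - b_{n+1} that encodes the
    # subset-sum solution bits e_1..e_n: entries +-1, last entry 0.
    n = len(bits)
    coeffs = list(bits) + [-1]
    return [sum(c * basis[i][j] for i, c in enumerate(coeffs))
            for j in range(n + 1)]
```

For instance, with weights (3, 5, 9, 14) and target 17, the selection (1, 0, 0, 1) yields the lattice vector (1, -1, -1, 1, 0).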
On Memory-Bound Functions for Fighting Spam
 In Crypto
, 2002
"... In 1992, Dwork and Naor proposed that email messages be accompanied by easytocheck proofs of computational effort in order to discourage junk email, now known as spam. They proposed specific CPUbound functions for this purpose. Burrows suggested that, since memory access speeds vary across ma ..."
Abstract

Cited by 82 (2 self)
 Add to MetaCart
In 1992, Dwork and Naor proposed that email messages be accompanied by easy-to-check proofs of computational effort in order to discourage junk email, now known as spam. They proposed specific CPU-bound functions for this purpose. Burrows suggested that, since memory access speeds vary across machines much less than do CPU speeds, memory-bound functions may behave more equitably than CPU-bound functions; this approach was first explored by Abadi, Burrows, Manasse, and Wobber [8].
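The CPU-bound pricing idea can be illustrated with a small hashcash-style sketch: the sender must find a nonce whose hash has many leading zero bits, while the receiver verifies with a single hash. This is only an illustration of the proof-of-effort principle, not the memory-bound construction the paper itself develops; the names and parameters are mine.

```python
import hashlib

def mint(message: bytes, bits: int = 12) -> int:
    # Sender's side: search for a nonce such that SHA-256(message || nonce)
    # has at least `bits` leading zero bits. Expected cost: 2**bits hashes.
    target = 1 << (256 - bits)
    nonce = 0
    while True:
        h = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def verify(message: bytes, nonce: int, bits: int = 12) -> bool:
    # Receiver's side: one hash evaluation.
    h = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - bits))
```

The asymmetry (2^bits hashes to mint, one to verify) is exactly what makes bulk mailing expensive; the paper's point is that hash rates, unlike memory latency, vary wildly across machines.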
The Two Faces of Lattices in Cryptology
 In Cryptography and Lattices, International Conference (CaLC 2001), volume 2146 of LNCS
, 2001
"... ..."
Attacking the Chor-Rivest Cryptosystem by Improved Lattice Reduction
, 1995
"... We introduce algorithms for lattice basis reduction that are improvements of the famous L 3 algorithm. If a random L 3 reduced lattice basis b1 ; : : : ; bn is given such that the vector of reduced Gram Schmidt coefficients (f¯ i;j g 1 j ! i n) is uniformly distributed in [0; 1) ( n 2 ) ..."
Abstract

Cited by 67 (5 self)
 Add to MetaCart
We introduce algorithms for lattice basis reduction that are improvements of the famous L3 algorithm. If a random L3-reduced lattice basis b_1, ..., b_n is given such that the vector of reduced Gram-Schmidt coefficients ({μ_{i,j}} for 1 ≤ j < i ≤ n) is uniformly distributed in [0, 1)^(n choose 2), then pruned enumeration finds with positive probability a shortest lattice vector. We demonstrate the power of these algorithms by solving random subset sum problems of arbitrary density with 74 and 82 weights, by breaking the Chor-Rivest cryptoscheme in dimensions 103 and 151, and by breaking Damgård's hash function.
A New Identification Scheme Based on Syndrome Decoding
, 1994
"... Zeroknowledge proofs were introduced in 1985, in a paper by Goldwasser, Micali and Rackoff ([6]). Their practical significance was soon demonstrated in the work of Fiat and Shamir ([4]), who turned zeroknowledge proofs of quadratic residuosity into efficient means of establishing user identities. ..."
Abstract

Cited by 65 (8 self)
 Add to MetaCart
Zero-knowledge proofs were introduced in 1985, in a paper by Goldwasser, Micali and Rackoff ([6]). Their practical significance was soon demonstrated in the work of Fiat and Shamir ([4]), who turned zero-knowledge proofs of quadratic residuosity into efficient means of establishing user identities. Still, as is almost always the case in public-key cryptography, the Fiat-Shamir scheme relied on arithmetic operations on large numbers. In 1989, there were two attempts to build identification protocols that only use simple operations (see [11, 10]). One appeared in the EUROCRYPT proceedings and relies on the intractability of some coding problems, the other was presented at the CRYPTO rump session and depends on the so-called Permuted Kernel problem (PKP). Unfortunately, the first of these schemes was not really practical. In the present paper, we propose a new identification scheme, based on error-correcting codes, which is zero-knowledge and of practical value. Furthermore, we describe several variants, including one which has an identity-based character. The security of our scheme depends on the hardness of decoding a word of given syndrome w.r.t. some binary linear error-correcting code.
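The one-way function underlying such a scheme is plain syndrome computation, s = He over GF(2): the prover holds a low-weight word e, everyone knows the parity-check matrix H and the syndrome s, and recovering e from (H, s) is the hard decoding problem. A toy Python sketch of just this key setup (parameter sizes are illustrative only, real instances are much larger, and the zero-knowledge protocol itself is omitted):

```python
import secrets

def syndrome(H, e):
    # s = H*e over GF(2). Each row of H and the word e are packed into
    # Python ints, one bit per code position; a GF(2) inner product is
    # the parity of the AND of the two bit vectors.
    return [bin(row & e).count("1") & 1 for row in H]

def keygen(n=16, r=8, w=4):
    # Public key: a random r x n parity-check matrix H and the syndrome s.
    # Secret key: a word e of Hamming weight exactly w.
    H = [secrets.randbits(n) for _ in range(r)]
    support = set()
    while len(support) < w:
        support.add(secrets.randbelow(n))
    e = sum(1 << p for p in support)
    s = syndrome(H, e)
    return H, e, s
```

Finding a weight-w word with a given syndrome for a random code is exactly the decoding problem the abstract refers to.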
Hardness of approximating the shortest vector problem in high Lp norms
 In Proceedings of the 44th IEEE Symposium on Foundations of Computer Science. IEEE Computer
"... Abstract. Let p> 1beany fixed real. We show that assuming NP ⊆ RP, there is no polynomial time algorithm that approximates the Shortest Vector Problem (SVP) in ℓp norm within a constant factor. Under the stronger assumption NP ⊆ RTIME(2poly(log n)), we show that there is no polynomialtime (log ..."
Abstract

Cited by 64 (2 self)
 Add to MetaCart
Abstract. Let p > 1 be any fixed real. We show that assuming NP ⊄ RP, there is no polynomial-time algorithm that approximates the Shortest Vector Problem (SVP) in the ℓ_p norm within a constant factor. Under the stronger assumption NP ⊄ RTIME(2^poly(log n)), we show that there is no polynomial-time algorithm with approximation ratio 2^((log n)^(1/2−ε)), where n is the dimension of the lattice and ε > 0 is an arbitrarily small constant. We first give a new (randomized) reduction from the Closest Vector Problem (CVP) to SVP that achieves some constant factor hardness. The reduction is based on BCH codes. Its advantage is that the SVP instances produced by the reduction behave well under the augmented tensor product, a new variant of the tensor product that we introduce. This enables us to boost the hardness factor to 2^((log n)^(1/2−ε)).
Lattice Reduction: a Toolbox for the Cryptanalyst
 Journal of Cryptology
, 1994
"... In recent years, methods based on lattice reduction have been used repeatedly for the cryptanalytic attack of various systems. Even if they do not rest on highly sophisticated theories, these methods may look a bit intricate to the practically oriented cryptographers, both from the mathematical ..."
Abstract

Cited by 56 (7 self)
 Add to MetaCart
In recent years, methods based on lattice reduction have been used repeatedly for the cryptanalytic attack of various systems. Even if they do not rest on highly sophisticated theories, these methods may look a bit intricate to practically oriented cryptographers, both from the mathematical and the algorithmic point of view. The aim of the present paper is to explain what can be achieved by lattice reduction algorithms, even without understanding of the actual mechanisms involved. Two examples are given, one of them being the attack devised by the second named author against Knuth's truncated linear congruential generator, which was announced a few years ago and appears here for the first time in journal version.
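As a warm-up to what lattice reduction buys here: an untruncated linear congruential generator falls to elementary modular algebra, and it is only when the outputs are truncated, as in Knuth's generator, that lattice techniques become necessary. A sketch of the easy case, under the assumption that the modulus is known (function name and parameters are mine):

```python
def recover_lcg(x0, x1, x2, m):
    # Given three consecutive full outputs of x_{i+1} = a*x_i + c (mod m)
    # with known modulus m, recover the multiplier a and increment c.
    # Uses x2 - x1 = a*(x1 - x0) (mod m); requires x1 - x0 to be
    # invertible mod m (always true for distinct outputs when m is prime).
    a = (x2 - x1) * pow(x1 - x0, -1, m) % m
    c = (x1 - a * x0) % m
    return a, c
```

When only the high-order bits of each x_i are published, this linear solve no longer applies, and the attack described in the paper recovers the hidden low-order bits via lattice reduction instead.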