Lattice Basis Reduction: Improved Practical Algorithms and Solving Subset Sum Problems.
 Math. Programming
, 1993
Abstract

Cited by 212 (6 self)
We report on improved practical algorithms for lattice basis reduction. We propose a practical floating-point version of the L³ algorithm of Lenstra, Lenstra, Lovász (1982). We present a variant of the L³ algorithm with "deep insertions" and a practical algorithm for block Korkin-Zolotarev reduction, a concept introduced by Schnorr (1987). Empirical tests show that the strongest of these algorithms solves almost all subset sum problems with up to 66 random weights of arbitrary bit length within at most a few hours on a UNISYS 6000/70 or within a couple of minutes on a SPARC 1+ computer.
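The lattice attacks benchmarked in this paper encode a subset sum instance as a short-vector search. As a minimal sketch (toy instance and scale factor chosen here for illustration; the basis is the standard Lagarias-Odlyzko-style construction that this line of work refines), a 0/1 solution becomes a short lattice vector:

```python
def subset_sum_basis(weights, target, scale=1000):
    """Lattice basis whose short vectors encode subset-sum solutions:
    row i is the i-th unit vector with scale*weights[i] appended, and
    the last row carries -scale*target. The large scale factor forces
    the final coordinate of any short vector to zero."""
    n = len(weights)
    basis = []
    for i, a in enumerate(weights):
        row = [0] * (n + 1)
        row[i] = 1
        row[n] = scale * a
        basis.append(row)
    basis.append([0] * n + [-scale * target])
    return basis

def combine(basis, coeffs):
    """Integer combination of the basis rows."""
    return [sum(c * row[j] for c, row in zip(coeffs, basis))
            for j in range(len(basis[0]))]

# toy instance (illustrative): subset {3, 8} of [3, 5, 8] sums to 11
B = subset_sum_basis([3, 5, 8], 11)
v = combine(B, [1, 0, 1, 1])  # solution bits (1, 0, 1) plus the target row
print(v)  # -> [1, 0, 1, 0]: last coordinate vanishes exactly on solutions
```

A reduction algorithm such as L³ is what actually finds this short vector without knowing the solution bits in advance.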
Efficient Cryptographic Schemes Provably as Secure as Subset Sum
 Journal of Cryptology
, 1993
Abstract

Cited by 78 (8 self)
We show very efficient constructions for a pseudorandom generator and for a universal one-way hash function based on the intractability of the subset sum problem for certain dimensions. (Pseudorandom generators can be used for private-key encryption and universal one-way hash functions for signature schemes.) The increase in efficiency in our construction is due to the fact that many bits can be generated/hashed with one application of the assumed one-way function. All our constructions can be implemented in NC using an optimal number of processors. Part of this work was done while both authors were at UC Berkeley and part while the second author was at the IBM Almaden Research Center. Research supported by NSF grant CCR 88-13632. A preliminary version of this paper appeared in Proc. of the 30th Symp. on Foundations of Computer Science, 1989.
1 Introduction
Many cryptosystems are based on the intractability of number theoretic problems such as factoring and discrete logarit...
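As a rough illustration of the expansion idea only (not the authors' exact construction, and with toy parameters): a subset-sum generator maps an n-bit seed to the m-bit sum of the seed-selected public weights, and stretches the seed whenever m > n:

```python
import random

def subset_sum_prg(seed_bits, weights, m):
    """Subset-sum generator sketch: the seed selects a subset of the
    public weights, and the output is their sum mod 2**m as m bits.
    With m > len(seed_bits) the map expands its input."""
    total = sum(a for s, a in zip(seed_bits, weights) if s) % (2 ** m)
    return [(total >> i) & 1 for i in range(m)]

rng = random.Random(0)                               # fixed toy randomness
n, m = 8, 12                                         # 8 seed bits -> 12 output bits
weights = [rng.randrange(2 ** m) for _ in range(n)]  # public random weights
seed = [rng.randrange(2) for _ in range(n)]
out = subset_sum_prg(seed, weights, m)
print(len(seed), "->", len(out))  # -> 8 -> 12
```

Note how all m output bits come from a single subset-sum evaluation, which is the efficiency point the abstract makes.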
Analysis of PSLQ, An Integer Relation Finding Algorithm
 Mathematics of Computation
, 1999
Abstract

Cited by 71 (29 self)
Let K be either the real, complex, or quaternion number system and let O(K) be the corresponding integers. Let x = (x_1, ..., x_n) be a vector in K^n. The vector x has an integer relation if there exists a vector m = (m_1, ..., m_n) ∈ O(K)^n, m ≠ 0, such that m_1 x_1 + m_2 x_2 + ... + m_n x_n = 0. In this paper we define the parameterized integer relation construction algorithm PSLQ(τ), where the parameter τ can be freely chosen in a certain interval. Beginning with an arbitrary vector x = (x_1, ..., x_n) ∈ K^n, iterations of PSLQ(τ) will produce lower bounds on the norm of any possible relation for x. Thus PSLQ(τ) can be used to prove that there are no relations for x of norm less than a given size. Let M_x be the smallest norm of any relation for x. For the real and complex case and each fixed parameter τ in a certain interval, we prove that PSLQ(τ) constructs a relation in less than O(n^3 + n^2 log M_x) iterations.
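PSLQ itself maintains a matrix factorization and is beyond a short sketch; the snippet below only illustrates the object it searches for, an integer relation, using the golden ratio (which satisfies φ² = φ + 1, hence the relation (1, 1, -1)):

```python
def is_integer_relation(x, m, tol=1e-9):
    """True if m is a nonzero integer vector with
    m_1*x_1 + ... + m_n*x_n = 0, up to floating-point tolerance."""
    return m != [0] * len(m) and abs(sum(mi * xi for mi, xi in zip(m, x))) < tol

phi = (1 + 5 ** 0.5) / 2          # golden ratio: phi**2 = phi + 1
x = [1.0, phi, phi ** 2]
print(is_integer_relation(x, [1, 1, -1]))   # -> True  (1 + phi - phi**2 = 0)
print(is_integer_relation(x, [1, 0, -1]))   # -> False
```

The point of the paper's bound is that PSLQ either finds such an m or certifies that none exists below a given norm.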
The Two Faces of Lattices in Cryptology
, 2001
Abstract

Cited by 69 (16 self)
Lattices are regular arrangements of points in n-dimensional space, whose study appeared in the 19th century in both number theory and crystallography. Since the appearance of the celebrated Lenstra-Lenstra-Lovász lattice basis reduction algorithm twenty years ago, lattices have had surprising applications in cryptology. Until recently, the applications of lattices to cryptology were only negative, as lattices were used to break various cryptographic schemes. Paradoxically, several positive cryptographic applications of lattices have emerged in the past five years: there now exist public-key cryptosystems based on the hardness of lattice problems, and lattices play a crucial role in a few security proofs.
Attacking the Chor-Rivest Cryptosystem by Improved Lattice Reduction
, 1995
Abstract

Cited by 66 (5 self)
We introduce algorithms for lattice basis reduction that are improvements of the famous L³ algorithm. If a random L³-reduced lattice basis b_1, ..., b_n is given such that the vector of reduced Gram-Schmidt coefficients ({μ_{i,j}}, 1 ≤ j < i ≤ n) is uniformly distributed in [0, 1)^(n choose 2), then the pruned enumeration finds with positive probability a shortest lattice vector. We demonstrate the power of these algorithms by solving random subset sum problems of arbitrary density with 74 and 82 weights, by breaking the Chor-Rivest cryptoscheme in dimensions 103 and 151, and by breaking Damgård's hash function.
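Enumeration searches integer combinations of the basis vectors for a shortest one; pruning, the paper's contribution, discards subtrees unlikely to contain it. A naive unpruned version over a small coefficient box (toy 2-D basis, illustrative bound):

```python
from itertools import product

def shortest_vector_bruteforce(basis, bound=3):
    """Exhaustively scan integer coefficient vectors in a small box and
    return the shortest nonzero lattice vector found. Real enumeration
    walks the same tree but prunes it using Gram-Schmidt information."""
    dim = len(basis[0])
    best, best_norm2 = None, None
    for coeffs in product(range(-bound, bound + 1), repeat=len(basis)):
        if all(c == 0 for c in coeffs):
            continue                      # skip the zero vector
        v = [sum(c * row[j] for c, row in zip(coeffs, basis))
             for j in range(dim)]
        n2 = sum(t * t for t in v)
        if best_norm2 is None or n2 < best_norm2:
            best, best_norm2 = v, n2
    return best, best_norm2

B = [[2, 1], [5, 2]]                      # skewed toy basis
v, n2 = shortest_vector_bruteforce(B)
print(v, n2)                              # squared norm 1: (±1, 0) is in the lattice
```

The cost of this brute force grows as (2·bound + 1)^n, which is why the probabilistic pruning analyzed in the paper matters in higher dimensions.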
A New Identification Scheme Based on Syndrome Decoding
, 1994
Abstract

Cited by 63 (8 self)
Zero-knowledge proofs were introduced in 1985, in a paper by Goldwasser, Micali and Rackoff ([6]). Their practical significance was soon demonstrated in the work of Fiat and Shamir ([4]), who turned zero-knowledge proofs of quadratic residuosity into efficient means of establishing user identities. Still, as is almost always the case in public-key cryptography, the Fiat-Shamir scheme relied on arithmetic operations on large numbers. In 1989, there were two attempts to build identification protocols that only use simple operations (see [11, 10]). One appeared in the EUROCRYPT proceedings and relies on the intractability of some coding problems; the other was presented at the CRYPTO rump session and depends on the so-called Permuted Kernel problem (PKP). Unfortunately, the first of these schemes was not really practical. In the present paper, we propose a new identification scheme, based on error-correcting codes, which is zero-knowledge and of practical value. Furthermore, we describe several variants, including one which has an identity-based character. The security of our scheme depends on the hardness of decoding a word of given syndrome w.r.t. some binary linear error-correcting code.
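In syndrome-based identification, the public identity is the syndrome of a secret low-weight word, and the simple operations the abstract alludes to are GF(2) matrix-vector products. A minimal syndrome computation (parity-check matrix and secret chosen here purely for illustration):

```python
def syndrome(H, e):
    """Syndrome s = H*e over GF(2); H is given as a list of rows."""
    return [sum(h * b for h, b in zip(row, e)) % 2 for row in H]

# illustrative parity-check matrix of a small binary code
H = [
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [0, 1, 1, 0, 0, 1],
]
secret = [1, 0, 0, 1, 0, 1]   # prover's secret low-weight word
public = syndrome(H, secret)  # published as the prover's identity
print(public)  # -> [0, 1, 1]
```

Recovering a low-weight `secret` from `public` and `H` is the syndrome decoding problem whose hardness underlies the scheme.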
The shortest vector in a lattice is hard to approximate to within some constant
 in Proc. 39th Symposium on Foundations of Computer Science
, 1998
Abstract

Cited by 51 (4 self)
Abstract. We show that approximating the shortest vector problem (in any ℓp norm) to within any constant factor less than the p-th root of 2 is hard for NP under reverse unfaithful random reductions with inverse polynomial error probability. In particular, approximating the shortest vector problem is not in RP (random polynomial time), unless NP equals RP. We also prove a proper NP-hardness result (i.e., hardness under deterministic many-one reductions) under a reasonable number theoretic conjecture on the distribution of square-free smooth numbers. As part of our proof, we give an alternative construction of Ajtai's constructive variant of Sauer's lemma that greatly simplifies Ajtai's original proof. Key words. NP-hardness, shortest vector problem, point lattices, geometry of numbers, sphere packing
Noisy Polynomial Interpolation and Noisy Chinese Remaindering
, 2000
Abstract

Cited by 41 (2 self)
Abstract. The noisy polynomial interpolation problem is a new intractability assumption introduced last year in oblivious polynomial evaluation. It also appeared independently in password identification schemes, due to its connection with secret sharing schemes based on Lagrange's polynomial interpolation. This paper presents new algorithms to solve the noisy polynomial interpolation problem. In particular, we prove a reduction from noisy polynomial interpolation to the lattice shortest vector problem, when the parameters satisfy a certain condition that we make explicit. Standard lattice reduction techniques appear to solve many instances of the problem. It follows that noisy polynomial interpolation is much easier than expected. We therefore suggest simple modifications to several cryptographic schemes recently proposed, in order to change the intractability assumption. We also discuss analogous methods for the related noisy Chinese remaindering problem arising from the well-known analogy between polynomials and integers.
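The noiseless core of the problem is ordinary Lagrange interpolation: k+1 correct points determine a degree-k polynomial. A sketch in exact rational arithmetic (the hard part of the noisy problem, selecting the correct point from each candidate set, is omitted):

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate at x the unique polynomial through the given (xi, yi)
    points, using exact rational arithmetic."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

pts = [(0, 1), (1, 2), (3, 10)]   # three samples of p(x) = x**2 + 1
print(lagrange_eval(pts, 5))      # -> 26, i.e. p(5)
```

In the noisy variant each abscissa comes with several candidate ordinates, only one of which lies on the hidden polynomial; the paper's reduction turns that selection problem into a lattice shortest-vector instance.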
Lattice Reduction in Cryptology: An Update
 Lect. Notes in Comp. Sci
, 2000
Abstract

Cited by 36 (7 self)
Lattices are regular arrangements of points in space, whose study appeared in the 19th century in both number theory and crystallography.
Random knapsack in expected polynomial time
 in Proc. 35th Annual ACM Symposium on Theory of Computing (STOC 2003)
, 2003
Abstract

Cited by 32 (11 self)
We present the first average-case analysis proving a polynomial upper bound on the expected running time of an exact algorithm for the 0/1 knapsack problem. In particular, we prove, for various input distributions, that the number of Pareto-optimal knapsack fillings is polynomially bounded in the number of available items. An algorithm by Nemhauser and Ullmann can enumerate these solutions very efficiently, so a polynomial upper bound on the number of Pareto-optimal solutions implies an algorithm with expected polynomial running time. The random input model underlying our analysis is quite general and not restricted to a particular input distribution. We assume adversarial weights and randomly drawn profits (or vice versa). Our analysis covers general probability distributions with finite mean and, in its most general form, can even handle different probability distributions for the profits of different items. This feature enables us to study the effects of correlations between profits and weights. Our analysis confirms and explains practical studies showing that so-called strongly correlated instances are harder to solve than weakly correlated ones.
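The Nemhauser-Ullmann idea referenced above can be sketched directly: sweep the items one by one and, after each, keep only the (weight, profit) pairs not dominated by a lighter-or-equal, at-least-as-profitable pair (toy item list for illustration):

```python
def pareto_knapsack(items):
    """Nemhauser-Ullmann enumeration: maintain the Pareto-optimal
    (weight, profit) pairs over all subsets of the items seen so far.
    items: list of (weight, profit) pairs."""
    pareto = [(0, 0)]  # the empty subset
    for w, p in items:
        # candidates: old pairs plus old pairs with this item added
        merged = sorted(pareto + [(pw + w, pp + p) for pw, pp in pareto],
                        key=lambda t: (t[0], -t[1]))
        pareto, best = [], -1
        for pw, pp in merged:  # keep only strictly profit-improving pairs
            if pp > best:
                pareto.append((pw, pp))
                best = pp
    return pareto

items = [(3, 4), (2, 3), (4, 5)]  # illustrative (weight, profit) pairs
front = pareto_knapsack(items)
print(front)  # every subset is dominated by some pair on this front
```

The running time per sweep is linear in the current front size, which is exactly why the paper's polynomial bound on the number of Pareto-optimal fillings yields expected polynomial total time.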