Results 1–10 of 14
The Two Faces of Lattices in Cryptology
, 2001
"... Lattices are regular arrangements of points in n-dimensional space, whose study appeared in the 19th century in both number theory and crystallography. Since the appearance of the celebrated Lenstra–Lenstra–Lovász lattice basis reduction algorithm twenty years ago, lattices have had surprising ..."
Abstract | Cited by 69 (16 self)
Lattices are regular arrangements of points in n-dimensional space, whose study appeared in the 19th century in both number theory and crystallography. Since the appearance of the celebrated Lenstra–Lenstra–Lovász lattice basis reduction algorithm twenty years ago, lattices have had surprising applications in cryptology. Until recently, the applications of lattices to cryptology were only negative, as lattices were used to break various cryptographic schemes. Paradoxically, several positive cryptographic applications of lattices have emerged in the past five years: there now exist public-key cryptosystems based on the hardness of lattice problems, and lattices play a crucial role in a few security proofs.
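The basis reduction idea behind LLL is easiest to see in dimension two, where it specializes to the classical Lagrange–Gauss algorithm. The sketch below is illustrative only (function names are not from the paper): it repeatedly subtracts the best integer multiple of the shorter basis vector from the longer one until the basis is reduced.

```python
def gauss_reduce(u, v):
    """Lagrange–Gauss reduction of a 2-D integer lattice basis (the
    two-dimensional ancestor of LLL): returns a basis of the same
    lattice whose first vector is a shortest nonzero lattice vector."""
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]

    def nearest(a, b):
        # round a/b to the nearest integer using exact arithmetic (b > 0)
        return (2 * a + b) // (2 * b)

    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # size-reduce v against u
        m = nearest(u[0] * v[0] + u[1] * v[1], norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u  # v became shorter: swap and continue
```

For example, the basis (1, 1), (2, 1) spans all of Z², and the algorithm returns two unit-length vectors.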
Parallel Algorithms for Integer Factorisation
"... The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance, because the security of some of these cryptosystems, such as the Rivest–Shamir–Adleman (RSA) system, depends o ..."
Abstract | Cited by 41 (17 self)
The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance, because the security of some of these cryptosystems, such as the Rivest–Shamir–Adleman (RSA) system, depends on the difficulty of factoring the public keys. In recent years the best known integer factorisation algorithms have improved greatly, to the point where it is now easy to factor a 60-decimal-digit number, and possible to factor numbers larger than 120 decimal digits, given the availability of enough computing power. We describe several algorithms, including the elliptic curve method (ECM) and the multiple-polynomial quadratic sieve (MPQS) algorithm, and discuss their parallel implementation. It turns out that some of the algorithms are very well suited to parallel implementation. Doubling the degree of parallelism (i.e. the amount of hardware devoted to the problem) roughly increases the size of a number which can be factored in a fixed time by 3 decimal digits. Some recent computational results are mentioned – for example, the complete factorisation of the 617-decimal-digit Fermat number F11 = 2^(2^11) + 1, which was accomplished using ECM.
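The methods the paper describes (ECM and MPQS) are too involved for a short listing, but the flavour of randomized factoring can be conveyed with the much simpler Pollard rho method. This is a hedged sketch, not one of the paper's algorithms:

```python
import math
import random

def pollard_rho(n):
    """Return a nontrivial factor of the composite number n using
    Pollard's rho method with Floyd cycle detection."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)          # random polynomial x^2 + c
        f = lambda x: (x * x + c) % n
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = f(x)                        # tortoise: one step
            y = f(f(y))                     # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:                          # d == n means a bad cycle: retry
            return d
```

On small semiprimes such as 8051 = 83 · 97 this finds a factor almost instantly; the serious methods in the paper scale to numbers trial methods cannot touch.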
Lattice Reduction in Cryptology: An Update
 Lect. Notes in Comp. Sci
, 2000
"... Lattices are regular arrangements of points in space, whose study appeared in the 19th century in both number theory and crystallography. ..."
Abstract | Cited by 36 (7 self)
Lattices are regular arrangements of points in space, whose study appeared in the 19th century in both number theory and crystallography.
NFS with Four Large Primes: An Explosive Experiment
, 1995
"... The purpose of this paper is to report the unexpected results that we obtained while experimenting with the multi-large-prime variation of the general number field sieve integer factoring algorithm (NFS, cf. [8]). For traditional factoring algorithms that make use of at most two large primes, the ..."
Abstract | Cited by 23 (2 self)
The purpose of this paper is to report the unexpected results that we obtained while experimenting with the multi-large-prime variation of the general number field sieve integer factoring algorithm (NFS, cf. [8]). For traditional factoring algorithms that make use of at most two large primes, the completion time can quite accurately be predicted by extrapolating an almost quartic and entirely ‘smooth’ function that counts the number of useful combinations among the large primes [1]. For NFS such extrapolations seem to be impossible: the number of useful combinations suddenly ‘explodes’ in an as yet unpredictable way that we have not yet been able to understand completely. The consequence of this explosion is that NFS is substantially faster than expected, which implies that factoring is somewhat easier than we thought.
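The "useful combinations" being counted are cycles in the graph whose vertices are large primes and whose edges are partial relations: each cycle lets the large primes cancel, yielding a full relation. Assuming each relation carries at most two large primes (the traditional case), the count can be maintained online with a union-find structure; this toy sketch is not the paper's code:

```python
def count_cycles(relations):
    """Count independent cycles among partial relations. Each relation is
    an edge joining its two large primes; by convention a relation with a
    single large prime pairs it with the dummy vertex 1. An edge whose
    endpoints are already connected closes a cycle: one free combination."""
    parent = {}

    def find(p):
        parent.setdefault(p, p)
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    cycles = 0
    for p, q in relations:
        rp, rq = find(p), find(q)
        if rp == rq:
            cycles += 1
        else:
            parent[rp] = rq
    return cycles
```

Three relations on the primes 7, 11, 13 forming a triangle give exactly one cycle; it is the sudden growth of this count that the paper calls an explosion.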
Recent progress and prospects for integer factorisation algorithms
 In Proc. of COCOON 2000
, 2000
"... Abstract. The integer factorisation and discrete logarithm problems are of practical importance because of the widespread use of public key cryptosystems whose security depends on the presumed difficulty of solving these problems. This paper considers primarily the integer factorisation problem. In ..."
Abstract | Cited by 20 (1 self)
Abstract. The integer factorisation and discrete logarithm problems are of practical importance because of the widespread use of public key cryptosystems whose security depends on the presumed difficulty of solving these problems. This paper considers primarily the integer factorisation problem. In recent years the limits of the best integer factorisation algorithms have been extended greatly, due in part to Moore’s law and in part to algorithmic improvements. It is now routine to factor 100-decimal-digit numbers, and feasible to factor numbers of 155 decimal digits (512 bits). We outline several integer factorisation algorithms, consider their suitability for implementation on parallel machines, and give examples of their current capabilities. In particular, we consider the problem of parallel solution of the large, sparse linear systems which arise with the MPQS and NFS methods.
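The linear systems mentioned are over GF(2): each relation is a vector of prime-exponent parities, and a dependency among the rows yields a congruence of squares. Record factorizations use Block Lanczos, but the core task can be sketched with dense Gaussian elimination on bit-packed rows (an illustration only, nothing like production code):

```python
def gf2_dependency(rows):
    """Find a nonempty subset of rows whose XOR is zero over GF(2).
    Rows are integer bitmasks; returns a bitmask selecting the rows in
    the dependency, or None if the rows are linearly independent."""
    pivots = {}  # leading-bit position -> (reduced row, row-selection tag)
    for i, r in enumerate(rows):
        t = 1 << i                      # tag: which original rows were XORed in
        while r:
            col = r.bit_length() - 1    # current leading bit
            if col not in pivots:
                pivots[col] = (r, t)    # new pivot row
                break
            pr, pt = pivots[col]
            r ^= pr                     # cancel the leading bit
            t ^= pt
        if r == 0:
            return t                    # row reduced to zero: dependency found
    return None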
Merkle–Hellman Revisited: A Cryptanalysis of the Qu–Vanstone Cryptosystem Based on Group Factorizations
 In Proc. of Crypto '97, volume 1294 of LNCS
, 1997
"... . Cryptosystems based on the knapsack problem were among the first public key systems to be invented and for a while were considered quite promising. Basically all knapsack cryptosystems that have been proposed so far have been broken, mainly by means of lattice reduction techniques. However, a few ..."
Abstract | Cited by 18 (11 self)
Cryptosystems based on the knapsack problem were among the first public key systems to be invented and for a while were considered quite promising. Basically all knapsack cryptosystems that have been proposed so far have been broken, mainly by means of lattice reduction techniques. However, a few knapsack-like cryptosystems have withstood cryptanalysis, among which the Chor–Rivest scheme [2], even if this is debatable (see [16]), and the Qu–Vanstone scheme proposed at the Dagstuhl'93 workshop [13] and published in [14]. The Qu–Vanstone scheme is a public key scheme based on group factorizations in the additive group of integers modulo n that generalizes Merkle–Hellman cryptosystems. In this paper, we present a novel use of lattice reduction, which is of independent interest, exploiting in a systematic manner the notion of an orthogonal lattice. Using the new technique, we successfully attack the Qu–Vanstone cryptosystem. Namely, we show how to recover the private key from the public k...
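For background, here is a toy version of the Merkle–Hellman construction that the Qu–Vanstone scheme generalizes. This sketch omits the permutation step of the real scheme and is of course insecure, which is exactly the point of the knapsack cryptanalysis literature; the function names are illustrative:

```python
import math

def mh_public_key(private, q, r):
    """Toy Merkle–Hellman key generation: disguise a superincreasing
    sequence by modular multiplication (permutation step omitted)."""
    assert q > sum(private) and math.gcd(r, q) == 1
    return [r * w % q for w in private]

def mh_encrypt(bits, public):
    """Ciphertext is the subset sum of the public weights selected by bits."""
    return sum(b * p for b, p in zip(bits, public))

def mh_decrypt(c, private, q, r):
    """Undo the modular disguise, then solve the easy superincreasing
    knapsack greedily from the largest weight down."""
    s = c * pow(r, -1, q) % q
    bits = []
    for w in reversed(private):
        if s >= w:
            bits.append(1)
            s -= w
        else:
            bits.append(0)
    return bits[::-1]
```

With the superincreasing sequence (2, 3, 7, 14, 30), modulus 61 and multiplier 17, a 5-bit message round-trips correctly, yet lattice reduction recovers the private key from the public weights alone.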
Discrete Logarithms and Smooth Polynomials
 Contemporary Mathematics, AMS
, 1993
"... This paper is a survey of recent advances in discrete logarithm algorithms. Improved estimates for smooth integers and smooth polynomials are also discussed. 1. Introduction If G denotes a group (written multiplicatively), and ⟨g⟩ the cyclic subgroup generated by g ∈ G, then the discrete logarith ..."
Abstract | Cited by 15 (1 self)
This paper is a survey of recent advances in discrete logarithm algorithms. Improved estimates for smooth integers and smooth polynomials are also discussed. 1. Introduction. If G denotes a group (written multiplicatively), and ⟨g⟩ the cyclic subgroup generated by g ∈ G, then the discrete logarithm problem for G is to find, given g ∈ G and y ∈ ⟨g⟩, the smallest nonnegative integer x such that y = g^x. This integer x is called the discrete logarithm of y to the base g, and is written x = log_g y. The discrete log problem has been studied by number theorists for a long time. The main reason for the intense current interest in it, though, is that many public key cryptosystems depend for their security on the assumption that it is hard, at least for suitably chosen groups. With the proposed adoption of the NIST digital signature algorithm [28] (based on the ElGamal [10] and Schnorr [35] proposals), even more attention is likely to be drawn to this area. There are already several su...
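The standard generic-group algorithm for the problem just defined is Shanks's baby-step giant-step, which trades O(√n) memory for O(√n) time. A minimal sketch for the multiplicative group mod a prime p (not code from the surveyed paper):

```python
import math

def bsgs(g, y, n, p):
    """Solve g^x ≡ y (mod p) for x in [0, n), where n bounds the order
    of g, using baby-step giant-step. Returns x, or None if no solution."""
    m = math.isqrt(n) + 1
    # baby steps: table of g^j for j < m
    baby = {pow(g, j, p): j for j in range(m)}
    giant = pow(g, -m, p)               # g^(-m) mod p
    gamma = y
    for i in range(m):
        # giant steps: test whether y * g^(-i*m) is a baby step
        if gamma in baby:
            return i * m + baby[gamma]  # x = i*m + j
        gamma = gamma * giant % p
    return None
```

For example, with g = 2 in the group mod 101 (where 2 has order 100), the logarithm of 2^37 is recovered as 37.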
A Survey of Modern Integer Factorization Algorithms
 CWI Quarterly
, 1994
"... Introduction An integer n > 1 is said to be a prime number (or simply prime) if the only divisors of n are ±1 and ±n. There are infinitely many prime numbers, the first four being 2, 3, 5, and 7. If n > 1 and n is not prime, then n is said to be composite. The integer 1 is neither prime ..."
Abstract | Cited by 13 (3 self)
Introduction. An integer n > 1 is said to be a prime number (or simply prime) if the only divisors of n are ±1 and ±n. There are infinitely many prime numbers, the first four being 2, 3, 5, and 7. If n > 1 and n is not prime, then n is said to be composite. The integer 1 is neither prime nor composite. The Fundamental Theorem of Arithmetic states that every positive integer can be expressed as a finite (perhaps empty) product of prime numbers, and that this factorization is unique except for the ordering of the factors. Table 1.1 has some sample factorizations.

1990 = 2 · 5 · 199
1995 = 3 · 5 · 7 · 19
2000 = 2^4 · 5^3
2005 = 5 · 401
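The factorizations in Table 1.1 can be reproduced with trial division, the simplest (and, for large numbers, slowest) of the factoring methods the survey builds on:

```python
def factorize(n):
    """Factor n > 1 by trial division; returns a list of (prime, exponent)
    pairs in increasing prime order."""
    factors = []
    p = 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            factors.append((p, e))
        p += 1
    if n > 1:
        factors.append((n, 1))   # the remaining cofactor is prime
    return factors
```

Running it on the four table entries gives exactly the factorizations shown above.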
Strategies in Filtering in the Number Field Sieve
 In preparation
, 2000
"... A critical step when factoring large integers by the Number Field Sieve [8] consists of finding dependencies in a huge sparse matrix over the field F2 , using a Block Lanczos algorithm. Both size and weight (the number of nonzero elements) of the matrix critically affect the running time of Block ..."
Abstract | Cited by 13 (2 self)
A critical step when factoring large integers by the Number Field Sieve [8] consists of finding dependencies in a huge sparse matrix over the field F_2, using a Block Lanczos algorithm. Both size and weight (the number of nonzero elements) of the matrix critically affect the running time of Block Lanczos. In order to keep size and weight small, the relations coming out of the siever do not flow directly into the matrix, but are filtered first in order to reduce the matrix size. This paper discusses several possible filter strategies and their use in the recent record factorizations of RSA-140, R211 and RSA-155. 2000 Mathematics Subject Classification: Primary 11Y05. Secondary 11A51. 1999 ACM Computing Classification System: F.2.1. Keywords and Phrases: Number Field Sieve, factoring, filtering, Structured Gaussian elimination, Block Lanczos, RSA. Note: Work carried out under project MAS2.2 "Computational number theory and data security". This report will appear in the proceed...
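One standard filter step is singleton removal: a relation containing a prime that occurs in no other relation can never take part in a dependency, so it can be discarded, and discarding it may create new singletons. The paper's actual strategies are more elaborate (merging, clique removal); this is only a toy sketch of the basic pass:

```python
from collections import Counter

def remove_singletons(relations):
    """Repeatedly drop relations containing a 'singleton' prime, i.e. a
    prime occurring in exactly one relation, until none remain. Each
    relation is given as a set of primes; returns the surviving relations."""
    relations = [frozenset(r) for r in relations]
    while True:
        counts = Counter(p for r in relations for p in r)
        kept = [r for r in relations if all(counts[p] > 1 for p in r)]
        if len(kept) == len(relations):
            return kept              # fixed point: no singletons left
        relations = kept             # removal may create new singletons
```

Note the cascade: in {2,3}, {3,5} the prime 2 is a singleton, and removing its relation turns 5 (and then 3) into singletons, so the filter empties the whole set.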
A Montgomery-like Square Root for the Number Field Sieve
, 1998
"... The Number Field Sieve (NFS) is the asymptotically fastest factoring algorithm known. It had spectacular successes in factoring numbers of a special form. Then the method was adapted for general numbers, and recently applied to the RSA-130 number [6], setting a new world record in factorization. Th ..."
Abstract | Cited by 12 (3 self)
The Number Field Sieve (NFS) is the asymptotically fastest factoring algorithm known. It had spectacular successes in factoring numbers of a special form. Then the method was adapted for general numbers, and recently applied to the RSA-130 number [6], setting a new world record in factorization. The NFS has undergone several modifications since its appearance. One of these modifications concerns the last stage: the computation of the square root of a huge algebraic number given as a product of hundreds of thousands of small ones. This problem was not satisfactorily solved until the appearance of an algorithm by Peter Montgomery. Unfortunately, Montgomery only published a preliminary version of his algorithm [15], while a description of his own implementation can be found in [7]. In this paper, we present a variant of the algorithm, compare it with the original algorithm, and discuss its complexity.
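On the rational side the analogous problem is easy, which helps explain why the algebraic side needs Montgomery's machinery: since the small numbers are already factored, one can sum their prime exponents and halve them instead of ever multiplying the huge product out. The sketch below shows only this rational-side idea (the function name is illustrative; the algebraic-side algorithm of the paper is far subtler):

```python
from collections import Counter

def sqrt_of_factored_product(factorizations, n):
    """Square root mod n of a product of many small factored numbers,
    computed by halving summed prime exponents rather than multiplying
    the (astronomically large) product out. Each element of
    factorizations is a list of (prime, exponent) pairs."""
    exponents = Counter()
    for factors in factorizations:
        for p, e in factors:
            exponents[p] += e
    root = 1
    for p, e in exponents.items():
        assert e % 2 == 0, "the full product must be a perfect square"
        root = root * pow(p, e // 2, n) % n
    return root
```

For instance, 12 · 75 = 900 = 30², and feeding in the factorizations 12 = 2²·3 and 75 = 3·5² yields 30 mod n without forming 900.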