Results 1–10 of 24
Some integer factorization algorithms using elliptic curves
 Australian Computer Science Communications
, 1986
Abstract

Cited by 47 (13 self)
Lenstra’s integer factorization algorithm is asymptotically one of the fastest known algorithms, and is also ideally suited for parallel computation. We suggest a way in which the algorithm can be speeded up by the addition of a second phase. Under some plausible assumptions, the speedup is of order log(p), where p is the factor which is found. In practice the speedup is significant. We mention some refinements which give greater speedup, an alternative way of implementing a second phase, and the connection with Pollard’s “p − 1” factorization algorithm.
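The two-phase idea is most easily illustrated on Pollard's p − 1 method, whose connection to this algorithm the abstract mentions. A minimal Python sketch (not Brent's elliptic-curve algorithm; the base 2 and the bounds b1, b2 are illustrative choices): phase 1 succeeds when p − 1 is b1-smooth, and the second phase extends this to p − 1 = s·q with one extra prime q ≤ b2.

```python
from math import gcd

def pollard_p_minus_1(n, b1, b2):
    # Phase 1: a <- 2^(2*3*...*b1) mod n.  If p - 1 is b1-smooth for a
    # prime p dividing n, then a == 1 (mod p), so gcd(a - 1, n) reveals p.
    a = 2
    for k in range(2, b1 + 1):
        a = pow(a, k, n)
    g = gcd(a - 1, n)
    if 1 < g < n:
        return g
    if g == n:
        return None          # every factor found at once; retry with smaller b1
    # Phase 2: succeeds when p - 1 = s * q with s b1-smooth and a single
    # further prime q in (b1, b2]; then a^q == 1 (mod p).
    acc = 1
    for q in range(b1 + 1, b2 + 1):   # crude: all integers, not just primes
        acc = acc * (pow(a, q, n) - 1) % n
    g = gcd(acc, n)
    return g if 1 < g < n else None
```

For example, 8122189 = 2027 · 4007 with 2027 − 1 = 2 · 1013, so phase 1 with b1 = 100 fails but the second phase with b2 = 2000 catches the extra prime 1013.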
Parallel Algorithms for Integer Factorisation
Abstract

Cited by 41 (17 self)
The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance, because the security of some of these cryptosystems, such as the Rivest-Shamir-Adleman (RSA) system, depends on the difficulty of factoring the public keys. In recent years the best known integer factorisation algorithms have improved greatly, to the point where it is now easy to factor a 60-decimal-digit number, and possible to factor numbers larger than 120 decimal digits, given the availability of enough computing power. We describe several algorithms, including the elliptic curve method (ECM) and the multiple-polynomial quadratic sieve (MPQS) algorithm, and discuss their parallel implementation. It turns out that some of the algorithms are very well suited to parallel implementation. Doubling the degree of parallelism (i.e. the amount of hardware devoted to the problem) increases the size of a number which can be factored in a fixed time by roughly 3 decimal digits. Some recent computational results are mentioned – for example, the complete factorisation of the 617-decimal-digit Fermat number F11 = 2^(2^11) + 1, which was accomplished using ECM.
Efficient Rational Number Reconstruction
 Journal of Symbolic Computation
, 1994
Abstract

Cited by 19 (0 self)
In this paper we describe how a variant of the algorithm in Jebelean [6] can be so adapted. In Section 2 we review the problem of rational reconstruction and the solution proposed by Wang, while fixing some notation and terminology along the way. We also discuss certain errors that have appeared in the literature. Section 3 describes a multiprecision Euclidean algorithm for computing gcds that will be the basis of our algorithm. In Section 4 we discuss our algorithm and various details that are essential for an efficient implementation.
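Wang's method, reviewed in Section 2 of the paper, amounts to running the extended Euclidean algorithm on (m, a) and stopping early. A sketch under the standard symmetric bounds |n|, d ≤ √(m/2) (the function name and bound choice are mine, not the paper's):

```python
from math import gcd, isqrt

def rational_reconstruct(a, m):
    """Find n/d with n == a*d (mod m), |n| <= sqrt(m/2), 0 < d <= sqrt(m/2).

    Runs the extended Euclidean algorithm on (m, a) and stops at the
    first remainder below the bound.  Returns (n, d), or None on failure.
    """
    bound = isqrt(m // 2)
    r0, r1 = m, a % m
    t0, t1 = 0, 1                 # invariant: r_i == t_i * a (mod m)
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    n, d = r1, t1
    if d < 0:                     # normalize so the denominator is positive
        n, d = -n, -d
    if d == 0 or d > bound or gcd(n, d) != 1:
        return None
    return n, d
```

Because the invariant r_i ≡ t_i·a (mod m) is maintained throughout, any surviving pair automatically satisfies the required congruence; the bound √(m/2) is what guarantees uniqueness of the recovered fraction.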
Variations by complexity theorists on three themes of
 Computational Complexity
, 2005
Abstract

Cited by 12 (4 self)
This paper surveys some connections between geometry and complexity. A main role is played by some quantities — degree, Euler characteristic, Betti numbers — associated to algebraic or semi-algebraic sets. This role is twofold. On the one hand, lower bounds on the deterministic time (sequential and parallel) necessary to decide a set S are established as functions of these quantities associated to S. The optimality of some algorithms is obtained as a consequence. On the other hand, the computation of these quantities gives rise to problems which turn out to be hard (or complete) in different complexity classes. These two kinds of results thus turn the quantities above into measures of complexity in two quite different ways.
An LLL-reduction algorithm with quasi-linear time complexity
, 2010
Abstract

Cited by 10 (4 self)
We devise an algorithm, L̃¹, with the following specifications: it takes as input an arbitrary basis B = (b_i)_i ∈ Z^(d×d) of a Euclidean lattice L; it computes a basis of L which is reduced for a mild modification of the Lenstra-Lenstra-Lovász reduction; it terminates in time O(d^(5+ε) β + d^(ω+1+ε) β^(1+ε)), where β = log max ‖b_i‖ (for any ε > 0, and where ω is a valid exponent for matrix multiplication). This is the first LLL-reducing algorithm with a time complexity that is quasi-linear in β and polynomial in d. The backbone structure of L̃¹ is able to mimic the Knuth-Schönhage fast gcd algorithm thanks to a combination of cutting-edge ingredients. First, the bit-size of our lattice bases can be decreased via truncations whose validity is backed by recent numerical stability results on the QR matrix factorization. Also, we establish a new framework for analysing unimodular transformation matrices which reduce shifts of reduced bases; this includes bit-size control and new perturbation tools. We illustrate the power of this framework by generating a family of reduction algorithms.
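For contrast with the quasi-linear-time algorithm above, the classical LLL procedure it modifies can be stated in a few lines. This sketch uses exact rational arithmetic and recomputes the Gram-Schmidt data from scratch at every step, so it is far slower than L̃¹ (and than any careful floating-point implementation); δ = 3/4 is the traditional reduction parameter.

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction of an integer basis (rows), delta in (1/4, 1)."""
    b = [[Fraction(x) for x in row] for row in basis]
    d = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gso():
        # Gram-Schmidt orthogonalization of the current basis b.
        bstar, mu = [], [[Fraction(0)] * d for _ in range(d)]
        for i in range(d):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < d:
        bstar, mu = gso()
        for j in range(k - 1, -1, -1):       # size-reduce b_k against earlier b_j
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gso()            # naive: recompute everything
        lhs = dot(bstar[k], bstar[k])
        if lhs >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                           # Lovász condition holds
        else:
            b[k - 1], b[k] = b[k], b[k - 1]  # swap and step back
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in b]
```

Roughly speaking, L̃¹'s contribution is doing the analogous work on truncated (low-precision) copies of the basis, the way Knuth-Schönhage GCD works on leading digits.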
An Analysis of Lehmer's Euclidean GCD Algorithm
 Proceedings Of The 1995 International Symposium On Symbolic And Algebraic Computation
, 1995
Abstract

Cited by 7 (3 self)
Let u and v be positive integers. We show that a slightly modified version of D. H. Lehmer's greatest common divisor algorithm will compute gcd(u, v) (with u ≥ v) using at most O((log u log v)/k + k log v + log u + k²) bit operations and O(log u + k² 2^k) space, where k is the number of bits in the multiprecision base of the algorithm. This is faster than Euclid's algorithm by a factor that is roughly proportional to k. Letting n be the number of bits in u and v, and setting k = ⌊(log n)/4⌋, we obtain a subquadratic running time of O(n²/log n) in linear space.

1 Introduction

Let u and v be positive integers. The greatest common divisor (GCD) of u and v is the largest integer d such that d divides both u and v. The most well-known algorithm for computing GCDs is the Euclidean algorithm. Much is known about this algorithm: the number of iterations required is Θ(log v), and the worst-case running time is Θ(log u log v), where time is measured in bit operations...
On a parallel Lehmer–Euclid GCD algorithm
 in: Proceedings of the International Symposium on Symbolic and Algebraic Computation ISSAC’2001
Abstract

Cited by 7 (1 self)
A new version of Euclid’s GCD algorithm is proposed. It matches the best existing parallel integer GCD algorithms, since it can be achieved in O_ε(n/log n) time using at most n^(1+ε) processors on a CRCW PRAM.
A Comparative Study of Algorithms for Computing Continued Fractions of Algebraic Numbers
 Pages 35–47 in Algorithmic number theory (Talence, 1996), Lecture Notes in Computer Science
, 1996
Abstract

Cited by 6 (0 self)
The obvious way to compute the continued fraction of a real number α > 1 is to compute a very accurate numerical approximation of α, and then to iterate the well-known truncate-and-invert step, which computes the next partial quotient a = ⌊α⌋ and the next complete quotient α′ = 1/(α − a). This method is called the basic method. In the course ...
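For a rational α the truncate-and-invert step can be run exactly, which makes the basic method easy to state (for an algebraic number one needs the guaranteed-precision numerical approximation discussed in the paper; this exact sketch only fixes the iteration itself):

```python
from fractions import Fraction

def continued_fraction(x, max_terms):
    """Partial quotients of a rational x by repeated truncate-and-invert."""
    cf = []
    for _ in range(max_terms):
        a = x.numerator // x.denominator   # a = floor(x), the partial quotient
        cf.append(a)
        x = x - a                          # fractional part
        if x == 0:
            break                          # rational: expansion terminates
        x = 1 / x                          # next complete quotient
    return cf
```

For example, 649/200 expands as [3; 4, 12, 4].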
Computer algebra of polynomials and rational functions
 American Mathematical Monthly
, 1973
An analysis of the generalized binary GCD algorithm
 High Primes and Misdemeanours: Lectures in Honour of Hugh Cowie Williams
, 2007
Abstract

Cited by 5 (2 self)
In this paper we analyze a slight modification of Jebelean’s version of the k-ary GCD algorithm. Jebelean had shown that on n-bit inputs, the algorithm runs in O(n²) time. In this paper, we show that the average running time of our modified algorithm is O(n²/log n). This analysis involves exploring the behavior of spurious factors introduced during the main loop of the algorithm. We also introduce a Jebelean-style left-shift k-ary GCD algorithm with a similar complexity that performs well in practice.