Results 1–10 of 11
Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer
 SIAM J. on Computing
, 1997
Abstract

Cited by 882 (2 self)
A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
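The reduction at the heart of this result, from factoring to order finding, is purely classical and easy to sketch. The snippet below is an illustrative Python sketch, with a brute-force classical loop standing in for the quantum order-finding subroutine (the only part that needs a quantum computer): given an even order r of a random a modulo n, a factor falls out of gcd(a^(r/2) - 1, n).

```python
import math
import random

def order(a, n):
    # Classical stand-in for the quantum order-finding subroutine:
    # the smallest r > 0 with a^r = 1 (mod n). This loop takes
    # exponential time; the quantum algorithm finds r in polynomial time.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, max_tries=20):
    """Reduce factoring n (odd, composite, not a prime power)
    to order finding, as in Shor's algorithm."""
    for _ in range(max_tries):
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g               # lucky: a already shares a factor with n
        r = order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)
            if y != n - 1:         # avoid the trivial case a^(r/2) = -1 (mod n)
                f = math.gcd(y - 1, n)
                if 1 < f < n:
                    return f
    return None

print(shor_classical_part(15))     # a nontrivial factor of 15 (3 or 5)
```

Each random a succeeds with probability at least 1/2 for such n, so a handful of tries suffices in practice.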
Simulating Physics with Computers
 International Journal of Theoretical Physics
, 1982
Abstract

Cited by 393 (1 self)
On the Rapid Computation of Various Polylogarithmic Constants, manuscript
, 1996
Abstract

Cited by 104 (31 self)
We give algorithms for the computation of the dth digit of certain transcendental numbers in various bases. These algorithms can be easily implemented (multiple precision arithmetic is not needed), require virtually no memory, and feature run times that scale nearly linearly with the order of the digit desired. They make it feasible to compute, for example, the billionth binary digit of log(2) or π on a modest workstation in a few hours run time. We demonstrate this technique by computing the ten billionth hexadecimal digit of π, the billionth hexadecimal digits of π^2, log(2) and log^2(2), and the ten billionth decimal digit of log(9/10). These calculations rest on the observation that very special types of identities exist for certain numbers like π, π^2, log(2) and log^2(2). These are essentially polylogarithmic ladders in an integer base. A number of these identities that we derive in this work appear to be new, for example the critical identity for π:
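The digit-extraction idea can be illustrated with the well-known BBP formula, π = Σ_{k≥0} 16^(-k) (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)). The Python sketch below is illustrative only: it uses modular exponentiation to skip ahead to position d, but ordinary floating point for the fractional parts, so it is reliable only for modest d.

```python
def bbp_series(j, d, tail_terms=25):
    # Fractional part of 16^d * sum_{k>=0} 1/(16^k * (8k+j)).
    # For k <= d the power of 16 is computed modulo (8k+j), which is
    # what makes jumping straight to digit d cheap.
    s = 0.0
    for k in range(d + 1):
        s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
    for k in range(d + 1, d + 1 + tail_terms):   # small tail correction
        s += 16.0 ** (d - k) / (8 * k + j)
    return s % 1.0

def pi_hex_digit(d):
    """Hexadecimal digit of pi at (0-indexed) position d after the
    point, via the Bailey-Borwein-Plouffe formula."""
    x = (4 * bbp_series(1, d) - 2 * bbp_series(4, d)
         - bbp_series(5, d) - bbp_series(6, d)) % 1.0
    return "0123456789abcdef"[int(16 * x)]

# pi = 3.243f6a88... in hexadecimal
print("".join(pi_hex_digit(i) for i in range(8)))   # prints 243f6a88
```

Note that no digit before position d is ever computed, which is exactly the "virtually no memory" property the abstract claims.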
Faster Integer Multiplication
 STOC'07
, 2007
Abstract

Cited by 41 (0 self)
For more than 35 years, the fastest known method for integer multiplication has been the Schönhage–Strassen algorithm running in time O(n log n log log n). Under certain restrictive conditions there is a corresponding Ω(n log n) lower bound. The prevailing conjecture has always been that the complexity of an optimal algorithm is Θ(n log n). We present a major step towards closing the gap from above by presenting an algorithm running in time n log n · 2^(O(log* n)). The main result holds for Boolean circuits as well as for multitape Turing machines, but it has consequences for other models of computation as well.
Univariate polynomials: nearly optimal algorithms for factorization and rootfinding
 In Proceedings of the International Symposium on Symbolic and Algebraic Computation
, 2001
Abstract

Cited by 38 (11 self)
To approximate all roots (zeros) of a univariate polynomial, we develop two effective algorithms and combine them in a single recursive process. One algorithm computes a basic well-isolated zero-free annulus on the complex plane, whereas another algorithm numerically splits the input polynomial of the nth degree into two factors balanced in the degrees and with the zero sets separated by the basic annulus. Recursive combination of the two algorithms leads to computation of the complete numerical factorization of a polynomial into the product of linear factors and further to the approximation of the roots. The new rootfinder incorporates the earlier techniques of Schönhage, Neff/Reif, and Kirrinnis and our old and new techniques, and yields nearly optimal (up to polylogarithmic factors) arithmetic and Boolean cost estimates for the computational complexity of both complete factorization and rootfinding. The improvement over our previous record Boolean complexity estimates is by roughly a factor of n for complete factorization and also for the approximation of well-conditioned (well-isolated) roots, whereas the same algorithm is also optimal (under both arithmetic and Boolean models of computing) for the worst-case input polynomial, whose roots can be ill-conditioned, forming ...
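For contrast with the nearly optimal divide-and-conquer scheme described above, a much simpler (and asymptotically far slower) way to approximate all roots simultaneously is the classical Weierstrass/Durand–Kerner iteration. The Python sketch below is illustrative only and is not the paper's algorithm; the starting points and fixed iteration count are ad hoc choices.

```python
def prod_others(roots, i, z):
    # Product of (z - w) over all current approximations w except roots[i].
    v = 1 + 0j
    for j, w in enumerate(roots):
        if j != i:
            v *= z - w
    return v

def durand_kerner(coeffs, iters=100):
    """Approximate all complex roots of the monic polynomial
    x^n + coeffs[1]*x^(n-1) + ... + coeffs[n] (so coeffs[0] == 1)."""
    n = len(coeffs) - 1
    def p(x):
        v = 0j
        for c in coeffs:
            v = v * x + c          # Horner evaluation
        return v
    # Distinct non-real starting points help avoid symmetric stalls.
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        # Update every approximation from the previous generation at once.
        roots = [z - p(z) / prod_others(roots, i, z)
                 for i, z in enumerate(roots)]
    return roots

# Roots of x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3).
approx = sorted(durand_kerner([1, -6, 11, -6]), key=lambda z: z.real)
```

Each step costs O(n^2) arithmetic operations, which is exactly the kind of bound the recursive splitting approach of the paper improves on.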
Multidigit Multiplication For Mathematicians
Abstract

Cited by 27 (9 self)
This paper surveys techniques for multiplying elements of various commutative rings. It covers Karatsuba multiplication, dual Karatsuba multiplication, Toom multiplication, dual Toom multiplication, the FFT trick, the twisted FFT trick, the split-radix FFT trick, Good's trick, the Schönhage–Strassen trick, Schönhage's trick, Nussbaumer's trick, the cyclic Schönhage–Strassen trick, and the Cantor–Kaltofen theorem. It emphasizes the underlying ring homomorphisms.
Faster polynomial multiplication via multipoint Kronecker substitution
, 2007
Abstract

Cited by 4 (1 self)
We give several new algorithms for dense polynomial multiplication based on the Kronecker substitution method. For moderately sized input polynomials, the new algorithms improve on the performance of the standard Kronecker substitution by a sizeable constant factor, both in theory and in empirical tests.
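Standard (single-point) Kronecker substitution, the baseline these algorithms improve on, packs each polynomial into one integer by evaluating at a large power of two and then performs a single big integer multiplication. The Python sketch below is illustrative and handles nonnegative coefficients only.

```python
def poly_mul_kronecker(f, g):
    """Multiply polynomials (coefficient lists, lowest degree first,
    nonnegative integer coefficients) via Kronecker substitution:
    evaluate at x = 2^b with b large enough that the product's
    coefficients cannot overlap in the packed integer."""
    if not f or not g:
        return []
    # Each product coefficient is at most max(f)*max(g)*min(len(f), len(g)).
    bound = max(f) * max(g) * min(len(f), len(g))
    b = bound.bit_length() + 1
    pack = lambda p: sum(c << (b * i) for i, c in enumerate(p))
    prod = pack(f) * pack(g)             # one large integer multiplication
    mask = (1 << b) - 1
    out = []
    for _ in range(len(f) + len(g) - 1):
        out.append(prod & mask)          # unpack one coefficient
        prod >>= b
    return out

# (1 + 2x + 3x^2) * (4 + 5x) = 4 + 13x + 22x^2 + 15x^3
print(poly_mul_kronecker([1, 2, 3], [4, 5]))   # [4, 13, 22, 15]
```

The paper's multipoint variants split this one big multiplication into several smaller ones to cut the constant factor.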
The Similarities (and Differences) between Polynomials and Integers
, 1994
Abstract

Cited by 2 (0 self)
The purpose of this paper is to examine the two domains of the integers and the polynomials, in an attempt to understand the nature of complexity in these very basic situations. Can we formalize the integer algorithms which shed light on the polynomial domain, and vice versa? When will the casting of one in the other speed up an existing algorithm? Why do some problems not lend themselves to this kind of speedup? We give several simple and natural theorems that show how problems in one domain can be embedded in the other, and we examine the complexity-theoretic consequences of these embeddings. We also prove several results on the impossibility of solving integer problems by mimicking their polynomial counterparts. 1 Introduction It is a fact frequently remarked upon that polynomials and integers share a number of characteristics. Usually the Fast Fourier Transform is then giv ...
Towards an Implementation of a Computer Algebra System in a Functional Language
 In Intelligent Computer Mathematics, AISC
, 2008
Abstract

Cited by 2 (2 self)
This paper discusses the pros and cons of using a functional language for implementing a computer algebra system. The contributions of the paper are twofold. Firstly, we discuss some language-centered design aspects of a computer algebra system — the “language unity” concept. Secondly, we provide an implementation of a fast polynomial multiplication algorithm, which is one of the core elements of a computer algebra system. The goal of the paper is to test the feasibility of an implementation of (some elements of) a computer algebra system in a modern functional language.
An implementation of Schönhage's multiplication algorithm (or how to compute the square of a number with one million digits on your workstation in less than one minute)
Abstract

Cited by 2 (0 self)
This report describes an implementation of a fast multiplication algorithm proposed by A. Schönhage [5]. The algorithm performs the multiplication of two integers modulo a number of the form 2^N + 1 in O(N log N log log N) operations. Using the BigNum package [2], we wrote a C program of less than 350 lines that performs both the modular and integer multiplication. We give detailed timings and comparisons with the naive method and Karatsuba's algorithm on two particular machines, a DECstation 5000 and an IBM RS 6000. 1 Introduction The Fast Fourier Transform (FFT) is a well-known tool to reduce the theoretical complexity of algorithms, typically to transform an O(n^2) algorithm into an O(n log n) one. But so far, only a few people have published an implementation of an algorithm using the FFT, and studied values of n for which the “fast” algorithm was really better. In the field of integer multiplication, two papers describe such implementations. In 1986, A. Schönhage encoded his algori...
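For flavor, here is an illustrative Python sketch of FFT-style integer multiplication using a number-theoretic transform over the prime 998244353, rather than the ring Z/(2^N + 1)Z that Schönhage's algorithm works in; it is a toy, not the report's implementation, and the base-10 digit packing is an arbitrary illustrative choice.

```python
def ntt(a, root, mod):
    # Recursive radix-2 number-theoretic transform; len(a) is a power of 2
    # and root is a primitive len(a)-th root of unity modulo mod.
    n = len(a)
    if n == 1:
        return list(a)
    even = ntt(a[0::2], root * root % mod, mod)
    odd = ntt(a[1::2], root * root % mod, mod)
    out = [0] * n
    w = 1
    for i in range(n // 2):
        t = w * odd[i] % mod
        out[i] = (even[i] + t) % mod
        out[i + n // 2] = (even[i] - t) % mod
        w = w * root % mod
    return out

def fft_multiply(x, y):
    """Multiply nonnegative integers by convolving their decimal digit
    lists with an NTT modulo the FFT-friendly prime 998244353."""
    mod, g = 998244353, 3       # mod - 1 = 2^23 * 7 * 17; 3 generates Z/mod*
    dx = [int(d) for d in str(x)[::-1]]
    dy = [int(d) for d in str(y)[::-1]]
    n = 1
    while n < len(dx) + len(dy):
        n *= 2
    root = pow(g, (mod - 1) // n, mod)        # primitive n-th root of unity
    fx = ntt(dx + [0] * (n - len(dx)), root, mod)
    fy = ntt(dy + [0] * (n - len(dy)), root, mod)
    inv = ntt([a * b % mod for a, b in zip(fx, fy)],
              pow(root, mod - 2, mod), mod)   # inverse transform, unscaled
    ninv = pow(n, mod - 2, mod)
    digits = [v * ninv % mod for v in inv]    # exact digit convolution
    return sum(d * 10 ** i for i, d in enumerate(digits))

assert fft_multiply(123456789, 987654321) == 123456789 * 987654321
```

The pointwise products in the transform domain are the whole trick: one O(n log n) transform each way replaces the O(n^2) schoolbook convolution, which is the same structural idea behind the mod 2^N + 1 scheme timed in the report.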