Results 1–10 of 37
Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer
 SIAM J. on Computing
, 1997
Abstract

Cited by 1278 (4 self)
A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
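The factoring algorithm reduces factoring to order finding, and only the order-finding step requires the quantum computer. The sketch below is a toy illustration of the classical post-processing, with a brute-force classical stand-in for the quantum subroutine; it is not the paper's quantum algorithm.

```python
from math import gcd

def find_order(a, n):
    # Brute-force stand-in for the quantum subroutine: the
    # multiplicative order r of a modulo n (smallest r with a^r = 1 mod n).
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n, a):
    # Classical post-processing: if the order r of a mod n is even and
    # a^(r/2) is not congruent to -1 mod n, then gcd(a^(r/2) - 1, n)
    # is a nontrivial factor of n.
    g = gcd(a, n)
    if g > 1:
        return g          # a already shares a factor with n
    r = find_order(a, n)
    if r % 2 == 1:
        return None       # odd order: retry with another base a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None       # trivial square root: retry with another a
    return gcd(y - 1, n)

print(factor_via_order(15, 7))  # order of 7 mod 15 is 4 -> factor 3
```

With a = 7 and n = 15 the order is 4, giving gcd(7² − 1, 15) = 3; a random base a succeeds with probability at least 1/2 for odd composite n that is not a prime power.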
On the Time-Space Complexity of Geometric Elimination Procedures
, 1999
Abstract

Cited by 29 (19 self)
In [25] and [22] a new algorithmic concept was introduced for the symbolic solution of a zero-dimensional complete intersection polynomial equation system satisfying a certain generic smoothness condition. The main innovative point of this algorithmic concept consists in the introduction of a new geometric invariant, called the degree of the input system, and the proof that the most common elimination problems have time complexity which is polynomial in this degree and the length of the input.
Faster Fourier Transforms via Automatic Program Specialization
, 1998
Abstract

Cited by 19 (4 self)
In this paper, we investigate whether partial evaluation is a suitable tool for generating an efficient FFT implementation. One measure of success is how many expressions are eliminated or simplified by partial evaluation. In this paper, we take a lower-level approach, and analyze the performance obtained on a particular architecture (the Sun UltraSPARC). We find that partial evaluation improves the unoptimized implementation over 9 times when the input contains 16 elements and over 3 times when the input contains 512 elements. In an expanded version of this paper [16], we demonstrate that these results are competitive with the performance of hand optimization techniques, as illustrated by a variety of existing, publicly available implementations.

The rest of this paper is organized as follows: Section 2 presents an overview of partial evaluation. Section 3 assesses the opportunities for specialization presented by the FFT algorithm, and estimates the speedup that can be obtained. Section 4 carries out the specialization of a simple implementation of the FFT. In Sections 5 and 6, we slightly rewrite the source program to get better results from specialization at compile time and run time, respectively. Finally, Section 7 describes other related work and Section 8 concludes.

2 Overview of partial evaluation

Partial evaluation is an automatic program transformation that specializes a program with respect to part of its input. Expressions that depend only on the known input and on program constants are said to be static. These expressions can be evaluated during specialization. Other expressions are said to be dynamic.
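The kind of residual program that specialization yields can be shown on a toy scale: once the input size is static, the recursion tests and twiddle factors of a generic FFT collapse into straight-line code. The size-4 example below is a hand-written sketch of that idea, not the paper's implementation or its UltraSPARC measurements.

```python
import cmath

def fft(x):
    # Generic radix-2 FFT: size tests, recursion, and twiddle factors
    # exp(-2*pi*i*k/n) are all recomputed at run time.
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

def fft4(x):
    # Residual program for the static size n = 4: recursion unrolled,
    # twiddle factors exp(-2*pi*i*k/4) in {1, -i} folded into constants.
    a, b = x[0] + x[2], x[0] - x[2]   # size-2 sub-FFT of the even terms
    c, d = x[1] + x[3], x[1] - x[3]   # size-2 sub-FFT of the odd terms
    return [a + c, b - 1j * d, a - c, b + 1j * d]
```

Both functions return the same transform; the specialized version performs no size tests, no recursion, and no trigonometric calls.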
Fast algorithms for Taylor shifts and certain difference equations (Extended Abstract)
, 1997
Abstract

Cited by 11 (1 self)
We analyze six algorithms for computing integral Taylor shifts for polynomials with integral coefficients. We present and analyze a new algorithm for solving the "key equation" which occurs in many rational and hypergeometric summation algorithms. In a special case, our algorithm is asymptotically faster than previously known methods. We give experimental results for our algorithms.
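The classical baseline for this problem is the O(n²) Taylor shift by repeated synthetic division. The sketch below shows that textbook method, not any of the asymptotically faster algorithms analyzed in the paper.

```python
def taylor_shift(coeffs, a):
    # Compute the coefficients of p(x + a) from those of p(x),
    # where coeffs[i] is the coefficient of x^i, by n rounds of
    # synthetic division (classical O(n^2) method).
    c = list(coeffs)
    n = len(c)
    for i in range(n - 1):
        for j in range(n - 2, i - 1, -1):
            c[j] += a * c[j + 1]
    return c

print(taylor_shift([0, 0, 1], 1))  # x^2 shifted by 1 -> [1, 2, 1] = (x+1)^2
```

For integer coefficients the cost is dominated by the growth of the intermediate integers, which is exactly where the faster algorithms win.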
Time- and Space-Efficient Evaluation of Some Hypergeometric Constants
, 2007
Abstract

Cited by 10 (2 self)
The currently best known algorithms for the numerical evaluation of hypergeometric constants such as ζ(3) to d decimal digits have time complexity O(M(d) log^2 d) and space complexity of O(d log d) or O(d). Following work of Cheng, Gergel, Kim and Zima, we present a new algorithm with the same asymptotic complexity, but which is more efficient in practice. Our implementation of this algorithm improves over existing programs for the computation of π, and we announce a new record of 2 billion digits for ζ(3).
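The engine behind such evaluations is binary splitting: partial sums are kept as exact integer pairs (P, Q) and combined tree-fashion, so fast integer multiplication does nearly all the work. The sketch below applies the technique to the much simpler series e = 1 + Σ 1/k! purely to illustrate the principle; the hypergeometric series for ζ(3) used in the paper has rational polynomial terms rather than factorials.

```python
from fractions import Fraction

def bsplit(a, b):
    # Returns integers (P, Q) with
    #   P/Q = sum_{k=a}^{b-1} 1/(a*(a+1)*...*k),
    # combining the two half-intervals with two big-integer products:
    #   sum over [a,b) = P1/Q1 + (1/Q1) * (P2/Q2).
    if b - a == 1:
        return 1, a
    m = (a + b) // 2
    p1, q1 = bsplit(a, m)
    p2, q2 = bsplit(m, b)
    return p1 * q2 + p2, q1 * q2

p, q = bsplit(1, 30)
e_approx = 1 + Fraction(p, q)   # e to roughly 30 decimal digits
```

No division occurs until the very end, and the large multiplications cluster near the root of the tree, which is what makes the method compatible with fast multiplication M(d).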
Partial Fraction Decomposition in C(z) and Simultaneous Newton Iteration for Factorization in C[z]
, 1998
Abstract

Cited by 7 (0 self)
The subject of this paper is fast numerical algorithms for factoring univariate polynomials with complex coefficients and for computing partial fraction decompositions (PFDs) of rational functions in C(z). Numerically stable and computationally feasible versions of PFD are specified first for the special case of rational functions with all singularities in the unit disk (the ``bounded case'') and then for rational functions with arbitrarily distributed singularities. Two major algorithms for computing PFDs are presented: The first one is an extension of the ``splitting circle method'' of A. Schönhage (``The Fundamental Theorem of Algebra in Terms of Computational Complexity,'' Technical Report, Univ. Tübingen, 1982) for factoring polynomials in C[z] to an algorithm for PFD. The second algorithm is a Newton iteration for simultaneously improving the accuracy of all factors in an approximate factorization of a polynomial or, respectively, of all partial fractions of an approximate PFD of a rational function. Algorithmically useful starting value conditions for the Newton algorithm are provided. Three subalgorithms are of independent interest. They compute the product of a sequence of polynomials, the sum ...
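In its simplest form, for linear factors, the simultaneous-improvement idea is the classical Weierstrass (Durand-Kerner) iteration, in which all root approximations are corrected at once by a Newton-type step. The sketch below shows only that special case; it is not the paper's algorithm for higher-degree factors or partial fractions.

```python
def weierstrass(coeffs, guesses, steps=30):
    # Simultaneously refine approximations to all roots of a monic
    # polynomial; coeffs[i] is the coefficient of z^i (leading
    # coefficient 1) and guesses are distinct complex starting points.
    def p(z):
        v = 0j
        for c in reversed(coeffs):   # Horner evaluation
            v = v * z + c
        return v
    z = list(guesses)
    for _ in range(steps):
        for i in range(len(z)):
            d = 1 + 0j
            for j in range(len(z)):
                if j != i:
                    d *= z[i] - z[j]   # product over the other iterates
            z[i] -= p(z[i]) / d        # simultaneous Newton-type correction
    return z

# roots of z^3 - 1 from the standard starting points (0.4 + 0.9i)^k
roots = weierstrass([-1, 0, 0, 1], [(0.4 + 0.9j) ** k for k in range(3)])
```

The correction for each root uses the current approximations of all the others, which is what couples the iterates and yields quadratic convergence for simple roots.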
Modular Algorithms for Polynomial Basis Conversion and Greatest Factorial Factorization
, 2000
Abstract

Cited by 7 (0 self)
We give new algorithms for converting between representations of polynomials with respect to certain kinds of bases, comprising the usual monomial basis and the falling factorial basis, for fast multiplication and Taylor shift in the falling factorial basis, and for computing the greatest factorial factorization. We analyze both the classical and the new algorithms in terms of arithmetic coefficient operations. For the special case of polynomials with integer coefficients, we present modular variants of these methods and give cost estimates in terms of word operations.
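A quadratic-time baseline for one direction of the conversion illustrates the setting: dividing repeatedly by (x - 0), (x - 1), ... and collecting the remainders yields the falling factorial coefficients, since p(x) = b0 + x(b1 + (x - 1)(b2 + ...)). This sketch is the classical method that faster algorithms improve on, not the paper's fast conversion.

```python
def to_falling_factorial(coeffs):
    # Convert from the monomial basis (coeffs[i] multiplies x^i) to the
    # falling factorial basis (result[i] multiplies x(x-1)...(x-i+1))
    # by repeated synthetic division by (x - k); classical O(n^2).
    c = list(coeffs)
    out = []
    for k in range(len(coeffs) - 1):
        n = len(c) - 1
        q = [0] * n
        q[n - 1] = c[n]
        for i in range(n - 1, 0, -1):
            q[i - 1] = c[i] + k * q[i]
        out.append(c[0] + k * q[0])   # remainder = value of c at x = k
        c = q                         # continue with the quotient
    out.append(c[0])
    return out

print(to_falling_factorial([0, 0, 1]))  # x^2 = 0 + 1*x + 1*x(x-1) -> [0, 1, 1]
```

The coefficients produced are connected to the monomial ones by Stirling numbers of the second kind, e.g. x^3 = x + 3 x(x-1) + x(x-1)(x-2).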
Bidirectional Exact Integer Division
 J. SYMBOLIC COMPUTATION
, 1994
Abstract

Cited by 6 (2 self)
Division of integers is called exact if the remainder is zero. We show that the high-order part and the low-order part of the exact quotient can be computed independently of each other. A sequential implementation of this algorithm is up to twice as fast as ordinary exact division and four times as fast as the general classical division algorithm if the dividend is twice as long as the divisor. A shared-memory parallel implementation on two processors gains another factor of two in speed.
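The low-order half is the less familiar direction: because an odd divisor b is invertible modulo 2^k, the bottom k bits of an exact quotient follow from the bottom k bits of the dividend and divisor alone, with no borrows propagating from above. The sketch below illustrates that half in the spirit of the idea (the high-order half would come from ordinary division of the leading words); it is not the paper's exact word-level algorithm.

```python
def low_quotient_bits(a, b, k):
    # Low-order k bits of the exact quotient a // b, assuming b is odd
    # and divides a exactly: multiply the low bits of a by the inverse
    # of b modulo 2^k. No high-order information is needed.
    m = 1 << k
    return ((a % m) * pow(b, -1, m)) % m

q = 123456789
b = 987654321            # odd divisor
a = q * b                # exact division: remainder is zero
print(low_quotient_bits(a, b, 8) == q % 256)
```

Since the low and high halves use disjoint parts of the operands, the two computations can run on separate processors, which is the source of the parallel speedup claimed above.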
A search for Wilson primes
 Mathematics of Computation, preprint http://arxiv.org/abs/1209.3436
, 2012