Results 1–10 of 12
Improved dense multivariate polynomial factorization algorithms
 J. Symbolic Comput., 2005
Cited by 15 (3 self)
We present new deterministic and probabilistic algorithms that reduce the factorization of dense polynomials from several to one variable. The deterministic algorithm runs in subquadratic time in the dense size of the input polynomial, and the probabilistic algorithm is softly optimal when the number of variables is at least three. We also investigate the reduction from several to two variables and improve the quantitative version of Bertini’s irreducibility theorem. Key words: Polynomial factorization, Hensel lifting, Bertini’s irreducibility theorem.
Faster Multiplication in GF(2)[x]
Cited by 11 (2 self)
In this paper, we discuss an implementation of various algorithms for multiplying polynomials in GF(2)[x]: variants of the window methods, Karatsuba’s, Toom–Cook’s, Schönhage’s and Cantor’s algorithms. For most of them, we propose improvements that lead to practical speedups.
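The window methods mentioned in this abstract are simple to sketch. A minimal illustration, assuming GF(2)[x] polynomials are packed into Python integers (bit i holds the coefficient of x^i); this is a toy version, not the paper's optimized implementation:

```python
def clmul(a: int, b: int) -> int:
    """Carry-less schoolbook product in GF(2)[x]: addition is XOR."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def clmul_window(a: int, b: int, w: int = 4) -> int:
    """Window method: precompute a*p for every w-bit polynomial p,
    then combine one w-bit chunk of b at a time."""
    table = [0] * (1 << w)
    for i in range(w):                      # build the table incrementally
        for p in range(1 << i, 1 << (i + 1)):
            table[p] = table[p ^ (1 << i)] ^ (a << i)
    r, shift = 0, 0
    while b:
        r ^= table[b & ((1 << w) - 1)] << shift
        b >>= w
        shift += w
    return r
```

For example, (x+1)^2 = x^2 + 1 over GF(2), so `clmul(0b11, 0b11)` returns `0b101`.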
A GMP-based implementation of Schönhage–Strassen’s large integer multiplication algorithm
 In Proceedings of ISSAC’07, 2007
Cited by 11 (3 self)
Schönhage–Strassen’s algorithm is one of the best-known algorithms for multiplying large integers. Implementing it efficiently is of utmost importance, since many other algorithms rely on it as a subroutine. We present here an improved implementation, based on the one distributed within the GMP library. The following ideas and techniques were used or tried: faster arithmetic modulo 2^n + 1, improved cache locality, Mersenne transforms, Chinese Remainder Reconstruction, the √2 trick, Harley’s and Granlund’s tricks, and improved tuning. We also discuss some ideas we plan to try in the future.
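The "faster arithmetic modulo 2^n + 1" ingredient rests on the congruence 2^n ≡ −1 (mod 2^n + 1), which replaces division by an alternating sum of n-bit limbs. A hedged sketch of that reduction (the function name and limb strategy are illustrative, not GMP's actual code):

```python
def mod_fermat(x: int, n: int) -> int:
    """Reduce x modulo 2^n + 1 without division: split x into n-bit
    limbs and alternately add and subtract them, since 2^n == -1."""
    m = (1 << n) + 1
    r, sign = 0, 1
    while x:
        r += sign * (x & ((1 << n) - 1))
        x >>= n
        sign = -sign
    return r % m
```

The final `% m` only fixes up a small remainder, so the whole reduction touches each limb once instead of performing a long division.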
Automatic differentiation tools in computational dynamical systems
, 2008
Cited by 7 (0 self)
In this paper we describe a unified framework for the computation of power series expansions of invariant manifolds and normal forms of vector fields, and estimate the computational cost when applied to simple models. By simple we mean that the model can be written using a finite sequence of compositions of arithmetic operations and elementary functions. In this case, the tools of Automatic Differentiation are the key to producing efficient algorithms. By efficient we mean that the cost of computing the coefficients up to order k of the expansion of a d-dimensional invariant manifold attached to a fixed point of an n-dimensional vector field (d = n for normal forms) is proportional to the cost of computing the truncated product of two d-variate power series up to order k. We present actual implementations of some of the algorithms, with special emphasis on the computation of the 4D center manifold of a Lagrangian point of the Restricted Three-Body Problem. Mathematics Subject Classification: 34C20, 34C30, 65Pxx, 68W30
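The Automatic Differentiation machinery referred to here turns each elementary function into a coefficient-by-coefficient recurrence. A minimal univariate sketch (the paper handles d-variate series and manifold parametrizations, which this does not attempt): for g = exp(f) with f(0) = 0, differentiating g' = f'·g gives n·g_n = Σ_{j=1..n} j·f_j·g_{n−j}.

```python
def series_mul(a, b, k):
    """Truncated product of power series given as coefficient lists,
    keeping only terms of degree < k (the unit of cost in the abstract)."""
    c = [0.0] * k
    for i, ai in enumerate(a[:k]):
        for j, bj in enumerate(b[:k - i]):
            c[i + j] += ai * bj
    return c

def series_exp(f, k):
    """exp of a truncated series with f[0] = 0, via the AD recurrence
    n*g_n = sum_{j=1..n} j * f_j * g_{n-j}."""
    g = [1.0] + [0.0] * (k - 1)
    for n in range(1, k):
        g[n] = sum(j * f[j] * g[n - j] for j in range(1, n + 1)) / n
    return g
```

With f = x the recurrence reproduces the Taylor coefficients of exp(x): 1, 1, 1/2, 1/6, 1/24, …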
On the complexity of multivariate blockwise polynomial multiplication
In this article, we study the problem of multiplying two multivariate polynomials which are somewhat, but not too, sparse, typically polynomials with convex supports. We design and analyze an algorithm based on a blockwise decomposition of the input polynomials, which performs the actual multiplication in an FFT model or some other, more general, so-called "evaluated model". If the input polynomials have total degrees at most d, then, under mild assumptions on the coefficient ring, we show that their product can be computed with O(s^1.5337) ring operations, where s denotes the number of monomials of total degree at most 2d.
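The blockwise decomposition can be illustrated with a toy version. A sketch assuming sparse polynomials stored as {exponent-tuple: coefficient} dictionaries; the per-block products are done naively here, where the algorithm in the paper would switch to an FFT or other evaluated model:

```python
from collections import defaultdict

def to_blocks(poly, b):
    """Group monomials into blocks indexed by exponent // b; each block
    keeps the local exponents exponent % b."""
    blocks = defaultdict(dict)
    for e, c in poly.items():
        q = tuple(x // b for x in e)
        r = tuple(x % b for x in e)
        blocks[q][r] = c
    return blocks

def blockwise_mul(p, q, b=4):
    """Multiply all pairs of blocks and recombine global exponents."""
    res = defaultdict(int)
    for qa, ba in to_blocks(p, b).items():
        for qb, bb in to_blocks(q, b).items():
            for ea, ca in ba.items():           # naive per-block product
                for eb, cb in bb.items():
                    e = tuple(b * (qa[i] + qb[i]) + ea[i] + eb[i]
                              for i in range(len(ea)))
                    res[e] += ca * cb
    return dict(res)
```

The payoff in the paper comes from each block being dense, so the inner double loop can be replaced by a fast dense (evaluated-model) product.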
Space- and Time-Efficient Polynomial Multiplication
, 2009
Countless algorithms have been developed for the multiplication of univariate polynomials and multiprecision integers, but all those with subquadratic time complexity currently require at least Ω(n) extra space for the computation. A new routine based on the Karatsuba/Ofman algorithm is presented with the same time complexity of O(n^1.59) but only O(log n) extra space. A second routine based on the method of Schönhage/Strassen achieves the same pseudolinear time and O(1) extra space, but only under certain conditions. A preliminary implementation over Fp[x], where p fits into a single machine word, is presented and compared with existing software.
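For contrast with the paper's O(log n)-extra-space routine, the textbook Karatsuba/Ofman recursion, which allocates Θ(n) temporaries per level, can be sketched over plain coefficient lists (this is the standard algorithm, not the paper's in-place formulation):

```python
def schoolbook_mul(a, b):
    """Quadratic baseline: plain convolution of coefficient lists."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def karatsuba(a, b):
    """Three half-size products instead of four: O(n^1.59) time, but
    Theta(n) extra space in this naive form. The result may carry
    trailing zero coefficients from padding."""
    n = max(len(a), len(b))
    if n <= 8:
        return schoolbook_mul(a, b)
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    h, m = n // 2, n - n // 2
    a0, a1 = a[:h] + [0] * (m - h), a[h:]
    b0, b1 = b[:h] + [0] * (m - h), b[h:]
    lo = karatsuba(a0, b0)                            # a0*b0
    hi = karatsuba(a1, b1)                            # a1*b1
    mid = karatsuba([x + y for x, y in zip(a0, a1)],
                    [x + y for x, y in zip(b0, b1)])  # (a0+a1)(b0+b1)
    res = [0] * (2 * n - 1)
    for i, v in enumerate(lo):
        res[i] += v
        res[i + h] -= v          # subtract lo from the middle term
    for i, v in enumerate(hi):
        res[i + 2 * h] += v
        res[i + h] -= v          # subtract hi from the middle term
    for i, v in enumerate(mid):
        res[i + h] += v
    return res
```

The three temporaries `lo`, `hi`, `mid` at each recursion level are exactly the Θ(n) workspace the paper's routine eliminates.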
Mathematical Sciences
, 2008
This manuscript describes a number of algorithms that can be used to quickly evaluate a polynomial over a collection of points and interpolate these evaluations back into a polynomial. Engineers define the “Fast Fourier Transform” as a method of solving the interpolation problem where the coefficient ring used to construct the polynomials has a special multiplicative structure. Mathematicians define the “Fast Fourier Transform” as a method of solving the multipoint evaluation problem. One purpose of the document is to provide a mathematical treatment of the topic of the “Fast Fourier Transform” that can also be understood by someone who has an understanding of the topic from the engineering perspective. The manuscript will also introduce several new algorithms that efficiently solve the multipoint evaluation problem over certain finite fields and require fewer finite field operations than existing techniques. The document will also demonstrate that these new algorithms can be used to multiply polynomials with finite field coefficients with fewer operations than Schönhage’s algorithm in most circumstances. A third objective of this document is to provide a mathematical perspective ...
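The mathematician's reading of the FFT as fast multipoint evaluation is easy to exhibit over the complex numbers. A sketch assuming a power-of-two length and floating-point roots of unity (the manuscript's new algorithms instead target certain finite fields):

```python
import cmath

def fft(a):
    """Radix-2 decimation-in-time FFT: evaluates the polynomial with
    coefficient list a (len(a) a power of two) at all len(a)-th roots
    of unity, i.e. fast multipoint evaluation at those points."""
    n = len(a)
    if n == 1:
        return a[:]
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

The engineer's FFT, interpolation, is the same butterfly run with conjugated roots of unity followed by a division by n.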
The Truncated Fourier Transform
, 2009
I summarize (and correct some mistakes from) Joris van der Hoeven’s papers [2] and [3]. These papers introduce the Truncated Fourier Transform (TFT), a variation of the Fast Fourier Transform (FFT) that allows one to work with input vectors whose length is not a power of two. I also expand upon the development of the inverse TFT in order to impose some clarity on van der Hoeven’s descriptions.