New Algorithms for Relaxed Multiplication (2003)

by Joris van der Hoeven
Results 1 - 3 of 3 citing documents

The Truncated Fourier Transform and Applications

by Joris van der Hoeven, 2004
"... In this paper, we present a truncated version of the classical Fast Fourier Transform. When applied to polynomial multiplication, this algorithm has the nice property of eliminating the “jumps ” in the complexity at powers of two. When applied to the multiplication of multivariate polynomials or tru ..."
Abstract - Cited by 12 (1 self) - Add to MetaCart
In this paper, we present a truncated version of the classical Fast Fourier Transform. When applied to polynomial multiplication, this algorithm has the nice property of eliminating the “jumps” in the complexity at powers of two. When applied to the multiplication of multivariate polynomials or truncated multivariate power series, we gain a logarithmic factor with respect to the best previously known algorithms.
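The “jumps” mentioned above come from zero-padding: classical FFT multiplication pads to the next power of two, so the transform length (and hence the running time) doubles as soon as the size of the product crosses a power of two, whereas the truncated transform works with exactly as many points as there are unknown coefficients. A minimal sketch of this padding effect, with illustrative function names of my own (not from the paper):

```python
# Classical FFT multiplication pads to the next power of two, so the
# transform length used for a product of n coefficients jumps at powers
# of two; a truncated Fourier transform evaluates at exactly n points.

def padded_fft_length(n):
    """Smallest power of two >= n, as used by a zero-padded FFT."""
    length = 1
    while length < n:
        length *= 2
    return length

def truncated_fft_length(n):
    """A truncated Fourier transform needs exactly n evaluation points."""
    return n

if __name__ == "__main__":
    for n in (255, 256, 257, 511, 512, 513):
        print(n, padded_fft_length(n), truncated_fft_length(n))
```

Crossing n = 256 or n = 512 doubles the padded length, which is the jump in cost that the truncated transform eliminates.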

Newton’s method and FFT trading

by Joris van der Hoeven, 2006
"... Let C[[z]] be the ring of power series over an effective ring C. In Brent and Kung (1978), it was first shown that differential equations over C[[z]] may be solved in an asymptotically efficient way using Newton’s method. More precisely, if M(n) denotes the complexity for multiplying two polynomials ..."
Abstract - Cited by 4 (3 self) - Add to MetaCart
Let C[[z]] be the ring of power series over an effective ring C. In Brent and Kung (1978), it was first shown that differential equations over C[[z]] may be solved in an asymptotically efficient way using Newton’s method. More precisely, if M(n) denotes the complexity of multiplying two polynomials of degree < n over C, then the first n coefficients of the solution can be computed in time O(M(n)). However, this complexity does not take into account the dependency on the order r of the equation, which is exponential for the original method (van der Hoeven, 2002) and quadratic for a recent improvement (Bostan et al., 2007). In this paper, we present a technique to further improve the dependency on r, by applying Newton’s method up to a lower order, such as n/r, and trading the remaining Newton steps against a lazy or relaxed algorithm in a suitable FFT model. The technique leads to improved asymptotic complexities for several basic operations on formal power series, such as division, exponentiation and the resolution of more general linear and non-linear systems of equations.
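For orientation, here is a minimal sketch of the kind of Newton iteration the abstract starts from: inverting a power series by doubling the working precision at every step, so that with a fast truncated product the total cost is O(M(n)). This is a textbook illustration, not the paper's algorithm; the quadratic mul_trunc helper and all names are mine.

```python
# Sketch: Newton iteration for power series inversion.
# If g = 1/f mod z^k, then g*(2 - f*g) = 1/f mod z^(2k),
# so the precision doubles at every step.

def mul_trunc(a, b, n):
    """Naive truncated product of coefficient lists a and b modulo z^n."""
    c = [0] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[:n - i]):
            c[i + j] += ai * bj
    return c

def inverse(f, n):
    """First n coefficients of 1/f; this sketch assumes f[0] == 1."""
    assert f and f[0] == 1
    g = [1]                      # 1/f mod z^1
    k = 1
    while k < n:
        k = min(2 * k, n)
        fg = mul_trunc(f, g, k)
        correction = [2 - fg[0]] + [-c for c in fg[1:]]
        g = mul_trunc(g, correction, k)   # g <- g*(2 - f*g) mod z^k
    return g

if __name__ == "__main__":
    # 1/(1 - z) = 1 + z + z^2 + ...
    print(inverse([1, -1], 8))   # [1, 1, 1, 1, 1, 1, 1, 1]
```

The paper's "FFT trading" keeps this doubling only up to a lower order and replaces the remaining steps by a lazy or relaxed computation, as described in the abstract.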

From implicit to recursive equations

by Joris van der Hoeven
"... The technique of relaxed power series expansion provides an efficient way to solve so called recursive equations of the form F = Φ(F ), where the unknown F is a vector of power series, and where the solution can be obtained as the limit of the sequence 0, Φ(0), Φ(Φ(0)), . With respect to other tech ..."
Abstract - Add to MetaCart
The technique of relaxed power series expansion provides an efficient way to solve so-called recursive equations of the form F = Φ(F), where the unknown F is a vector of power series, and where the solution can be obtained as the limit of the sequence 0, Φ(0), Φ(Φ(0)), … With respect to other techniques, such as Newton's method, two major advantages are its generality and the fact that it takes advantage of possible sparseness of Φ. In this paper, we consider more general implicit equations of the form Φ(F) = 0. Under mild assumptions on such an equation, we will show that it can be rewritten as a recursive equation. If we are actually computing with analytic functions, then recursive equations also provide a systematic device for the computation of verified error bounds. We will show how to apply our results in this context.
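A minimal sketch of the fixed-point iteration described above, for the illustrative recursive equation f = Φ(f) with Φ(f) = 1 + z·f² (my example, not the paper's); its solution is the Catalan generating function. Since the coefficient of z^n in Φ(f) only involves f_0, …, f_{n−1}, every application of Φ produces at least one more correct coefficient, so the sequence 0, Φ(0), Φ(Φ(0)), … converges coefficient-wise.

```python
# Sketch: solving a recursive equation f = Phi(f) as the limit of
# 0, Phi(0), Phi(Phi(0)), ...
# Example: Phi(f) = 1 + z*f^2, solved by the Catalan generating function.

def mul_trunc(a, b, n):
    """Naive truncated product of coefficient lists modulo z^n."""
    c = [0] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[:n - i]):
            c[i + j] += ai * bj
    return c

def phi(f, n):
    """Phi(f) = 1 + z*f^2, truncated modulo z^n (illustrative choice)."""
    sq = mul_trunc(f, f, n - 1)
    return ([1] + sq + [0] * n)[:n]

def solve_recursive(n):
    """Iterate f <- Phi(f) starting from 0; each step gains a coefficient."""
    f = [0] * n
    for _ in range(n):
        f = phi(f, n)
    return f

if __name__ == "__main__":
    print(solve_recursive(6))    # Catalan numbers: [1, 1, 2, 5, 14, 42]
```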

Citation Context

…interested in the efficient computation of this solution up to a given order n. In the most favourable case, the equation Φ(f) = 0 is of the form

f − Ψ(f) = 0,    (2)

where the coefficient Ψ(f)_n of z^n in Ψ(f) only depends on the earlier coefficients f_0, …, f_{n−1} of f, for each n ∈ N. In that case, f_n = Ψ(f)_n actually provides us with a recurrence relation for the computation of the solution. Using the technique of relaxed power series expansions [vdH02, vdH07a], which will briefly be recalled in section 2.4, it is then possible to compute the expansion f_{;n} = f_0 + … + f_{n−1} z^{n−1} up till order n in time

T(n) = s R(n) + O(t n),    (3)

where s is the number of multiplications occurring in Ψ, t is the total size of Ψ as an expression, and R(n) denotes the complexity of relaxed multiplication of two power series up till order n. Here we assume that Ψ is represented by a directed acyclic graph, with possible common subexpressions. For large n, we have R(n) = O(M(n) log n), where M(n) = O(n log n log log n) denotes the complexity [CT65, SS71, CK91] of multiplying two polynomials…
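The recurrence f_n = Ψ(f)_n can already be run with a naive "online" product, in which the n-th coefficient of a product is formed from coefficients 0, …, n of its factors; relaxed multiplication provides exactly this access pattern in R(n) = O(M(n) log n) operations instead of the quadratic cost below. A minimal sketch under these assumptions, for the illustrative equation f = z + f² (my example, not taken from the cited papers):

```python
# Sketch: coefficient-by-coefficient solution of f = Psi(f) when the n-th
# coefficient of Psi(f) depends only on f_0, ..., f_{n-1}.
# Example: Psi(f) = z + f^2, so f = z + z^2 + 2z^3 + 5z^4 + ...
# The inner sum is a naive online product (O(n^2) overall); relaxed
# multiplication achieves the same online behaviour in O(M(n) log n).

def solve_online(n):
    """First n coefficients of the solution of f = z + f^2 (with f_0 = 0)."""
    f = []
    for k in range(n):
        # (f^2)_k only uses f_1, ..., f_{k-1}, all already computed,
        # because f_0 = 0 in this example.
        square_k = sum(f[i] * f[k - i] for i in range(1, k))
        f.append((1 if k == 1 else 0) + square_k)
    return f

if __name__ == "__main__":
    print(solve_online(7))   # [0, 1, 1, 2, 5, 14, 42]
```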
