## Fast multiple-precision evaluation of elementary functions (1976)

Venue: Journal of the ACM

Citations: 90 (6 self)

### BibTeX

```bibtex
@article{Brent76fastmultiple-precision,
  author  = {Richard P. Brent},
  title   = {Fast multiple-precision evaluation of elementary functions},
  journal = {Journal of the ACM},
  year    = {1976},
  volume  = {23},
  pages   = {242--251}
}
```

### Abstract

ABSTRACT. Let f(x) be one of the usual elementary functions (exp, log, arctan, sin, cosh, etc.), and let M(n) be the number of single-precision operations required to multiply n-bit integers. It is shown that f(x) can be evaluated, with relative error O(2^-n), in O(M(n) log(n)) operations as n → ∞, for any floating-point number x (with an n-bit fraction) in a suitable finite interval. From the Schönhage–Strassen bound on M(n), it follows that an n-bit approximation to f(x) may be evaluated in O(n log^2(n) log log(n)) operations. Special cases include the evaluation of constants such as π, e, and e^π. The algorithms depend on the theory of elliptic integrals, using the arithmetic-geometric mean iteration and ascending Landen transformations.

Key words and phrases: multiple-precision arithmetic, analytic complexity, arithmetic-geometric mean, computational complexity, elementary function, elliptic integral, evaluation of π, exponential, Landen transformation, logarithm, trigonometric function

CR categories: 5.12, 5.15, 5.25
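For concreteness, the arithmetic-geometric mean approach the abstract describes can be sketched with the closely related Gauss–Legendre (Brent–Salamin) iteration for π. This is a double-precision illustration of the AGM idea, not the paper's exact algorithm; each pass roughly doubles the number of correct digits:

```python
import math

# Gauss-Legendre / Brent-Salamin AGM iteration for pi (double-precision sketch).
a, b, t, p = 1.0, 1.0 / math.sqrt(2.0), 0.25, 1.0
for _ in range(4):                     # quadratic convergence: digits ~double per pass
    a_next = (a + b) / 2.0             # arithmetic mean
    b = math.sqrt(a * b)               # geometric mean
    t -= p * (a - a_next) ** 2
    p *= 2.0
    a = a_next
pi_approx = (a + b) ** 2 / (4.0 * t)
print(pi_approx)                       # close to math.pi
```

In exact arithmetic three passes already give about eighteen correct digits, so double precision is the limiting factor here, not the iteration count.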

### Citations

737 | Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables - Abramowitz, Stegun - 1964 |

62 | Multiple-precision zero-finding methods and the complexity of elementary function evaluation - Brent - 1975 |

60 | A Fortran multiple-precision arithmetic package - Brent - 1978

Citation Context: ... γ = 0.5772 ... can be evaluated with O(M(n) log^2 n) operations, using Sweeney's method [22] combined with binary splitting [4]. Similarly for Γ(a), where a is rational (or even algebraic): see Brent [7]. Related results are given by Gosper [13] and Schroeppel [20]. It is not known whether any of these upper bounds are asymptotically the best possible. 2. Reciprocals and Square Roots. In this section ... |
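The binary-splitting trick mentioned in this context can be illustrated on the series e = Σ 1/k!. The sketch below is my own illustration (not code from [4]): it keeps each block of terms as an exact integer pair so that two halves combine with a constant number of big multiplications, which is what makes the method fast with multiple-precision arithmetic:

```python
from fractions import Fraction
import math

def bs(a, b):
    """Binary splitting for sum_{k=a}^{b-1} 1/k!.
    Returns integers (P, R) with R = a*(a+1)*...*(b-1) and
    sum_{k=a}^{b-1} 1/k! = P / ((a-1)! * R)."""
    if b - a == 1:
        return 1, a
    m = (a + b) // 2
    p_left, r_left = bs(a, m)
    p_right, r_right = bs(m, b)
    # combine the halves with one big multiplication each
    return p_left * r_right + p_right, r_left * r_right

n = 20
P, R = bs(1, n + 1)
e_approx = 1 + Fraction(P, R)          # add the k = 0 term of the series
print(float(e_approx))                 # close to math.e
```

With n = 20 the truncation error is about 1/21!, far below double-precision resolution.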

14 | Algorithms involving arithmetic and geometric means - Carlson - 1971

Citation Context: ... A fast algorithm for evaluating log(x) was also found independently by Salamin (see [2 or 5]). Apparently similar algorithms for evaluating elementary functions are given by Borchardt [3], Carlson [8, 9], and Thacher [23]. However, these algorithms require O(M(n)n) or O(M(n)n^2) operations, so our algorithms are asymptotically faster. We know how to evaluate certain other constants and functions almo... |

2 | Exercices de Calcul Intégral - Legendre

Citation Context: ...me that α and φ are in [0, π/2]. The complete elliptic integrals, F(π/2, α) and E(π/2, α), are simply written as F(α) and E(α), respectively. Legendre's Relation. We need the identity of Legendre [17]: E(α)F(π/2 − α) + E(π/2 − α)F(α) − F(α)F(π/2 − α) = π/2, (4.3) and, in particular, the special case 2E(π/4)F(π/4) − (F(π/4))^2 = π/2. (4.4) Small Angle Approximation. From (4.1) it is clear that... |
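Legendre's relation (4.3) is easy to check numerically. The sketch below computes the complete elliptic integrals via the classical AGM identities (in the paper's notation, F(α) and E(α) correspond to K(k) and E(k) with modulus k = sin α); it is an independent illustration, not code from the paper:

```python
import math

def elliptic_KE(k):
    """Complete elliptic integrals K(k), E(k) via the AGM.
    Uses K(k) = pi / (2 * AGM(1, k')) and E/K = 1 - sum 2^(n-1) c_n^2,
    with c_0 = k and c_{n+1} = (a_n - b_n)/2 (classical identities)."""
    a, b, c = 1.0, math.sqrt(1.0 - k * k), k
    s, pow2 = c * c / 2.0, 1.0          # running sum of 2^(n-1) * c_n^2
    while c > 1e-17:
        a, b, c = (a + b) / 2.0, math.sqrt(a * b), (a - b) / 2.0
        pow2 *= 2.0
        s += pow2 * c * c / 2.0
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - s)

# Legendre's relation with sin(alpha) = 0.6, hence cos(alpha) = 0.8
K1, E1 = elliptic_KE(0.6)
K2, E2 = elliptic_KE(0.8)
legendre = E1 * K2 + E2 * K1 - K1 * K2
print(legendre)                         # should equal pi/2
```

Setting α = π/4 (so both moduli equal 1/√2) reproduces the special case (4.4).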

1 | Gesammelte Werke - Borchardt

Citation Context: ...r is described. A fast algorithm for evaluating log(x) was also found independently by Salamin (see [2 or 5]). Apparently similar algorithms for evaluating elementary functions are given by Borchardt [3], Carlson [8, 9], and Thacher [23]. However, these algorithms require O(M(n)n) or O(M(n)n^2) operations, so our algorithms are asymptotically faster. We know how to evaluate certain other constants an... |

1 | The complexity of multiple-precision arithmetic. Proc. Seminar on Complexity of Computational Problem Solving (held at Australian National University) - Brent - 1975

Citation Context: ... n, in O(M(n) log(n)) operations. Note that O(M(n)n) operations are required if the Taylor series for log(1 + x) is summed in the obvious way. Our result improves the bound O(M(n) log^2(n)) given in [4], although the algorithms described there may be faster for small n. Preliminary results are given in Sections 2 to 5. In Section 2 we give, for completeness, the known result that division and extrac... |

1 | Computer Solution of Nonlinear Equations - Brent

Citation Context: ... is easy to show that it is sufficient to use precision n at the last iteration (i = k − 1), precision slightly greater than n/2 for i = k − 2, etc. (Details, and more efficient methods, are given in [4, 6].) Thus the result follows from Lemma 2.1. Since x/y = x(1/y), it is clear that floating-point division may also be done in O(M(n)) operations. LEMMA 2.3. If c ≥ 0 is a floating-point number, then c ... |
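The precision-doubling argument in this context rests on the division-free Newton iteration x ← x(2 − yx) for 1/y, whose relative error is squared at each step. A small double-precision sketch (the starting guess from the binary exponent is my own choice, not the paper's):

```python
import math

def reciprocal(y, iters=6):
    """Approximate 1/y (y > 0) by Newton's iteration x <- x*(2 - y*x).
    No division is used; the relative error is squared each step, which
    is why the working precision can roughly double per iteration."""
    mantissa, exponent = math.frexp(y)     # y = mantissa * 2**exponent, mantissa in [0.5, 1)
    x = 2.0 ** (-exponent)                 # crude guess with relative error below 1/2
    for _ in range(iters):
        x = x * (2.0 - y * x)              # self-correcting Newton step
    return x

print(reciprocal(3.7))                     # close to 1/3.7 = 0.27027...
```

Because the step is self-correcting, a multiple-precision implementation can run early iterations at low precision and only the last at full precision, giving the O(M(n)) bound of Lemma 2.1.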

1 | Manuscript in preparation - Finkel, Guibas, and Simonyi - 1974

Citation Context: ...) operations. This is asymptotically faster than the usual O(n^2) methods [14, 21] if a fast multiplication algorithm is used. A high-precision computation of π by a similar algorithm is described in [10]. Note that, because the arithmetic-geometric mean iteration is not self-correcting, we cannot obtain a bound O(M(n)) in the same way as for the evaluation of reciprocals and square roots by Newton'... |

1 | Carl Friedrich Gauss Werke - Gauss

Citation Context: ... (4.15) Arithmetic-Geometric Mean Iteration. From the ascending Landen transformation it is possible to derive the arithmetic-geometric mean iteration of Gauss [12] and Lagrange [16]: if a_0 = 1, b_0 = cos α > 0, and a_{i+1} = (a_i + b_i)/2, (4.16) b_{i+1} = (a_i b_i)^{1/2}, (4.17) then lim a_i = π/[2F(α)]. (4.18) Also, if c_0 = sin α and c_{i+1} = a_i − a_{i+1}, (4.19) then... |
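The limit (4.18) can be sanity-checked by comparing the AGM of (4.16)–(4.17) against a direct numerical quadrature of the elliptic integral. This is an illustrative check only; the midpoint rule and its step count are my own choices:

```python
import math

def agm(a, b):
    """Arithmetic-geometric mean of Gauss: iterate the two means to convergence."""
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def F(alpha, n=20000):
    """F(pi/2, alpha) = integral_0^{pi/2} dt / sqrt(1 - sin(alpha)^2 sin(t)^2),
    approximated by the midpoint rule (plenty accurate for a sanity check)."""
    h = (math.pi / 2.0) / n
    m2 = math.sin(alpha) ** 2
    return sum(h / math.sqrt(1.0 - m2 * math.sin((i + 0.5) * h) ** 2)
               for i in range(n))

alpha = math.pi / 6
lhs = agm(1.0, math.cos(alpha))          # lim a_i of (4.16)-(4.17)
rhs = math.pi / (2.0 * F(alpha))         # pi / [2 F(alpha)], as in (4.18)
print(lhs, rhs)                          # the two values should agree
```

The AGM converges quadratically, so only a handful of iterations are needed even at high precision; the quadrature is the slow, independent reference here.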

1 | Acceleration of series. Memo No. 304 - Gosper - 1974

Citation Context: ... O(M(n) log^2 n) operations, using Sweeney's method [22] combined with binary splitting [4]. Similarly for Γ(a), where a is rational (or even algebraic): see Brent [7]. Related results are given by Gosper [13] and Schroeppel [20]. It is not known whether any of these upper bounds are asymptotically the best possible. 2. Reciprocals and Square Roots. In this section we show that reciprocals and square roots ... |

1 | The Art of Computer Programming - Knuth

Citation Context: ... weak regularity condition M(αn) ≤ βM(n), (1.1) for some α and β in (0, 1), and all sufficiently large n. Similar, but stronger, conditions are usually assumed, either explicitly [11] or implicitly [15]. Our assumptions are certainly valid if the Schönhage–Strassen method [15, 19] is used to multiply n-bit integers (in the usual binary representation) in O(n log(n) log log(n)) operations. The... |

1 | Oeuvres de Lagrange, Tome - Lagrange

Citation Context: ... Arithmetic-Geometric Mean Iteration. From the ascending Landen transformation it is possible to derive the arithmetic-geometric mean iteration of Gauss [12] and Lagrange [16]: if a_0 = 1, b_0 = cos α > 0, and a_{i+1} = (a_i + b_i)/2, (4.16) b_{i+1} = (a_i b_i)^{1/2}, (4.17) then lim a_i = π/[2F(α)]. (4.18) Also, if c_0 = sin α and c_{i+1} = a_i − a_{i+1}, (4.19) then... |

1 | Computation of π using the arithmetic-geometric mean - Salamin - 1971

Citation Context: ...ssible to estimate how large n needs to be before our algorithms are faster than the conventional ones. After this paper was submitted for publication, Bill Gosper drew my attention to Salamin's paper [18], where an algorithm very similar to our algorithm for evaluating π is described. A fast algorithm for evaluating log(x) was also found independently by Salamin (see [2 or 5]). Apparently similar alg... |

1 | Unpublished manuscript dated - Schroeppel - 1975

Citation Context: ...ns, using Sweeney's method [22] combined with binary splitting [4]. Similarly for Γ(a), where a is rational (or even algebraic): see Brent [7]. Related results are given by Gosper [13] and Schroeppel [20]. It is not known whether any of these upper bounds are asymptotically the best possible. 2. Reciprocals and Square Roots. In this section we show that reciprocals and square roots of floating-point nu... |