Results 1–8 of 8
Numerical Evaluation of Special Functions
In W. Gautschi (Ed.), AMS Proceedings of Symposia in Applied Mathematics 48, 1994
Abstract
Cited by 21 (0 self)
This document is an excerpt from the current hypertext version of an article that appeared in Walter Gautschi (ed.), Mathematics of Computation 1943–1993: A Half-Century of Computational Mathematics, Proceedings of Symposia in Applied Mathematics 48, American Mathematical Society, Providence, RI 02940, 1994. The symposium was held at the University of British Columbia, August 9–13, 1993, in honor of the fiftieth anniversary of the journal Mathematics of Computation. The original abstract follows. Higher transcendental functions continue to play varied and important roles in investigations by engineers, mathematicians, scientists, and statisticians. The purpose of this paper is to assist in locating useful approximations and software for the numerical generation of these functions, and to offer some suggestions for future developments in this field. 5.9. Mathieu, Lamé, and Spheroidal Wave Functions. 5.9.1. Characteristic Values of Mathieu's Equation. Software Packages: ...
A FORTRAN package for floating-point multiple-precision arithmetic
ACM Transactions on Mathematical Software, 1991
Abstract
Cited by 11 (3 self)
FM is a collection of Fortran-77 routines that perform floating-point multiple-precision arithmetic and evaluate elementary functions. Results are almost always correctly rounded, and improved algorithms for the elementary functions yield reasonable efficiency. Categories and Subject Descriptors: G.1.0 [Numerical Analysis]: General – computer arithmetic; G.1.2 [Numerical Analysis]: Approximation – elementary function approximation
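FM's key user-facing idea, a caller-selected working precision with (near-)correct rounding of every operation, can be illustrated outside Fortran with Python's standard decimal module. This is only an analogy, not FM's interface:

```python
from decimal import Decimal, getcontext

# Illustration only: FM itself is a Fortran-77 library. Python's stdlib
# `decimal` module shows the same user-facing idea: the caller selects a
# working precision, and each operation is correctly rounded to it.
getcontext().prec = 40                  # 40 significant decimal digits
root2 = Decimal(2).sqrt()               # correctly rounded sqrt(2)
print(root2)
# The residual |root2^2 - 2| sits at the level of the 40th digit.
print((root2 * root2 - Decimal(2)).copy_abs())
```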
Algorithm: Fortran 90 Software for Floating-Point Multiple Precision Arithmetic, Gamma and Related Functions
Abstract
INTRODUCTION The FMLIB package of Fortran subroutines for floating-point multiple-precision computation now includes routines to evaluate the Gamma function and related functions. These routines use the basic FM operations and derived types [Smith 1991; 1998] for multiple-precision arithmetic, constants, and elementary functions. The new functions available are essentially those in the chapter on the Gamma function in a reference such as Abramowitz and Stegun [1965]. The FM routines almost always return correctly rounded results. Extensive testing has found no cases where the error before rounding the final value was more than 0.001 unit in the last place of the returned result. This means that in rare cases the returned result differed from the correctly rounded value by a maximum of one unit in the last place for a given precision. Most of these routines gain speed by storing constants such as Bernoulli numbers and Euler's constant so they do not have to be computed again on s ...
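The caching strategy mentioned at the end of the abstract, computing constants such as Bernoulli numbers once and reusing them, can be sketched briefly. The recurrence below is the classical one; the function name and the memoization mechanism are illustrative choices, not FM's actual code:

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)          # cache: each B_n is computed only once
def bernoulli(n):
    """Bernoulli number B_n (convention B_1 = -1/2), exact as a Fraction.

    Uses the classical recurrence sum_{k=0}^{n} C(n+1, k) B_k = 0 for n >= 1.
    Asymptotic expansions of ln Gamma consume B_2, B_4, ... repeatedly, so
    storing them after the first computation, as FM does, saves this work.
    """
    if n == 0:
        return Fraction(1)
    return -Fraction(1, n + 1) * sum(comb(n + 1, k) * bernoulli(k) for k in range(n))

print(bernoulli(2), bernoulli(4), bernoulli(12))
```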
High Radix BKM algorithm with Selection by Rounding, 2002
Abstract
We present in this paper a high radix implementation of the BKM algorithm. This is a shift-and-add, CORDIC-like algorithm that allows fast computation of the complex exponential and logarithm. The improvement lies in fewer iterations for a given precision and in a reduction of the size of the lookup tables for high radices.
The Yacas Book of Algorithms
Abstract
September 27, 2007. This book is a detailed description of the algorithms used in the Yacas system for exact symbolic and arbitrary-precision numerical computations. Very few of these algorithms are new; most are well known. The goal of this book is to become a compendium of all relevant issues of design and implementation of these algorithms.
HIGH-PRECISION ARITHMETIC IS USEFUL IN MANY DIFFERENT COMPUTATIONAL PROBLEMS. THE MOST COMMON IS A NUMERICALLY UNSTABLE ALGORITHM, FOR WHICH, SAY,
Abstract
53-bit (ANSI/IEEE 754-1985 Standard) double precision would not yield a sufficiently accurate result. For most current machines, 53-bit double precision is the highest provided in hardware, giving about 16 significant digits. (By "significant digits," I mean the number of equivalent decimal digits of precision, rather than the number with base ≠ 10.) Calculating with 30 or 40 significant digits can often overcome the algorithm's instability and provide adequate accuracy. In this article, I will give some examples of calculations in which multiple precision can be useful. What Is Multiple Precision? Although the term "multiple precision" brings to mind applications such as determining π to billions of digits, most applications of high-precision arithmetic require only a few tens of digits, rather than hundreds or thousands. As an example, consider the Bessel function J1(x) for large |x| (up to 200 to 300). Of course, many subroutine libraries include J1(x), but for many functions represented by similar formulas, no such libraries are available. For small values of x, we can use the convergent series J1(x) = Σ_{k=0}^∞ (−1)^k (x/2)^(2k+1) / (k! (k+1)!) ...
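The article's central claim, that 30 or 40 guard digits can rescue a numerically unstable computation, is easy to reproduce with Python's decimal module. The example below is mine, not the author's: it sums the alternating Taylor series of exp(−30), which loses roughly 25 digits to cancellation because the largest intermediate terms are near 10^12 while the result is near 10^−13:

```python
import math
from decimal import Decimal, getcontext

def exp_series(x, terms=250):
    """Naive Taylor series for exp(x); works for float or Decimal x."""
    one = type(x)(1)
    s = term = one
    for n in range(1, terms):
        term = term * x / n      # term = x^n / n!
        s = s + term
    return s

bad = exp_series(-30.0)          # 53-bit double: ~25 digits lost, garbage
getcontext().prec = 50           # 50 digits: ~25 survive the cancellation
good = exp_series(Decimal(-30))
print(bad, float(good), math.exp(-30))
```

The higher-precision sum agrees with math.exp(-30) to double precision, while the double-precision sum has no correct significant digits at all.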
Multiple-precision evaluation of the Airy Ai function with reduced cancellation
Abstract
Abstract—The series expansion at the origin of the Airy function Ai(x) is alternating and hence problematic to evaluate for x > 0 due to cancellation. Based on a method recently proposed by Gawronski, Müller, and Reinhard, we exhibit two functions F and G, both with nonnegative Taylor expansions at the origin, such that Ai(x) = G(x)/F(x). The sums are now well-conditioned, but the Taylor coefficients of G turn out to obey an ill-conditioned three-term recurrence. We use the classical Miller algorithm to overcome this issue. We bound all errors, and our implementation allows an arbitrary and certified accuracy that can be used, e.g., for providing correct rounding in arbitrary precision. Keywords—Special functions; algorithm; numerical evaluation; arbitrary precision; Miller method; asymptotics; correct rounding; error bounds. Many mathematical functions (e.g., trigonometric functions, erf, Bessel functions) have a Taylor series of the form y(x) = x^s Σ_{n=0}^∞ y_n x^(dn), with y_n ∼ (−1)^n λ α^n / (n!)^κ (1), where d, s ∈ Z and α, κ > 0. For large x > 0, the computation of such a sum in finite-precision arithmetic is notoriously prone to catastrophic cancellation. Indeed, the terms |y_n x^(dn)| first grow before the series "starts to converge" once n^κ ≥ α x^d. In particular, when n^κ ≈ α x^d, the terms |y_n x^(dn)| usually get much larger than |y(x)|. Eventually, their leading bits cancel out, while lower-order bits that actually contribute to the first significant digits of the result get lost in roundoff errors. This cancellation phenomenon makes direct computation by Taylor series impractical for large values of x. Often, the function y(x) admits an asymptotic expansion as x → +∞ that can be used very effectively to obtain numerical approximations when x is large, but might not provide enough accuracy (at least without resorting to sophisticated resummation methods) for intermediate values of x.
In the case of the error function erf(x), a classical trick going back at least to Stegun and Zucker [18] is to compute erf(x) as G(x)/F(x), where F(x) = e^(x²) and [1, Eq. 7.6.2] G(x) = e^(x²) erf(x) = (2x/√π) Σ_{n=0}^∞ (2x²)^n / (1·3⋯(2n+1)).
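The Stegun–Zucker trick is easy to try numerically. The sketch below (an illustration based on the cited series, not code from the paper) evaluates erf(6) two ways in double precision: via the alternating Maclaurin series, which loses essentially all its accuracy to cancellation, and via the quotient G(x)/F(x), whose series for G has only nonnegative terms:

```python
import math

def erf_naive(x, terms=150):
    """erf via the alternating series (2/sqrt(pi)) * sum (-1)^n x^(2n+1) / (n! (2n+1)).

    For x = 6 the largest intermediate terms are ~1e13 while erf(6) ~ 1,
    so double precision loses all significant digits to cancellation.
    """
    s, term = 0.0, x                   # term = (-1)^n x^(2n+1) / n!
    for n in range(terms):
        s += term / (2 * n + 1)
        term *= -x * x / (n + 1)
    return 2.0 / math.sqrt(math.pi) * s

def erf_quotient(x, terms=400):
    """erf(x) = G(x)/F(x) with F(x) = exp(x^2) and G(x) = exp(x^2) erf(x).

    G(x) = (2x/sqrt(pi)) * sum (2x^2)^n / (1*3*...*(2n+1))  [DLMF 7.6.2]
    has nonnegative terms only, so the sum is well conditioned.
    """
    g, term = 0.0, x                   # term = 2^n x^(2n+1) / (1*3*...*(2n+1))
    for n in range(terms):
        g += term
        term *= 2.0 * x * x / (2 * n + 3)
    return 2.0 / math.sqrt(math.pi) * g / math.exp(x * x)

print(erf_naive(6.0), erf_quotient(6.0), math.erf(6.0))
```

The quotient form recovers nearly full double precision at x = 6, while the naive alternating sum is typically wrong already in the low decimal places.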
Author manuscript, published in "21st IEEE Symposium on Computer Arithmetic (2013)": Multiple-precision evaluation of the Airy Ai function with reduced cancellation, 2013