Results 1–10 of 11
High-Precision Computation and Mathematical Physics
Cited by 11 (3 self)
At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
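The precision ceiling the survey starts from is easy to see directly: IEEE 754 doubles carry a 53-bit significand (about 16 decimal digits), while arbitrary-precision packages of the kind surveyed let the working precision be chosen freely. A minimal illustration (not taken from the paper) using Python's standard decimal module:

```python
from decimal import Decimal, getcontext

# IEEE 754 double precision: 53-bit significand, ~16 decimal digits.
# At magnitude 1e16 the spacing between adjacent doubles is 2,
# so adding 1 is lost to rounding.
x = 1e16
print((x + 1.0) - x)        # 0.0: the increment vanished

# Arbitrary-precision decimal arithmetic keeps the digit.
getcontext().prec = 30      # 30 significant decimal digits
d = Decimal(10) ** 16
print((d + 1) - d)          # 1
```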
A Note on the Space Complexity of Fast D-Finite Function Evaluation
Cited by 1 (0 self)
We state and analyze a generalization of the “truncation trick” suggested by Gourdon and Sebah to improve the performance of power series evaluation by binary splitting. It follows from our analysis that the values of D-finite functions (i.e., functions described as solutions of linear differential equations with polynomial coefficients) may be computed with error bounded by 2^−p in time O(p (lg p)^{3+o(1)}) and space O(p). The standard fast algorithm for this task, due to Chudnovsky and Chudnovsky, achieves the same time complexity bound but requires Θ(p lg p) bits of memory.
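For a concrete reference point, here is plain binary splitting (without the truncation-trick refinement the note generalizes) applied to the exponential series Σ_{n≥0} 1/n!; the interval scheme and the names P, Q are illustrative, not the paper's notation:

```python
from fractions import Fraction
import math

def binsplit(a, b):
    """Return integers (P, Q) with sum_{n=a}^{b-1} a!/n! = P/Q,
    where Q = (b-1)!/a!.  Splitting the summation range in half
    keeps operand sizes balanced, which is what makes the method fast."""
    if b - a == 1:
        return 1, 1
    m = (a + b) // 2
    p1, q1 = binsplit(a, m)   # covers terms a .. m-1
    p2, q2 = binsplit(m, b)   # covers terms m .. b-1
    # T(a,b) = T(a,m) + T(m,b) / (m * Q(a,m)), over the common denominator
    return p1 * m * q2 + p2, q1 * m * q2

# e = sum_{n>=0} 1/n!, truncated after N terms, as one exact fraction
N = 30
P, Q = binsplit(0, N)
approx = Fraction(P, Q)
print(float(approx))        # ~2.718281828459045
```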
Florent de Dinechin
Generating certified and efficient numerical codes requires information ranging from the mathematical level to the representation of numbers. Even though the mathematical semantics can be expressed using the content part of MathML, this language does not encompass the implementation on computers. Indeed, various arithmetics may be involved, like floating-point or fixed-point, in fixed precision or arbitrary precision, and current tools do not handle all of these. Therefore we propose in this paper LEMA (Langage pour les Expressions Mathématiques Annotées, a language for annotated mathematical expressions), a descriptive language based on MathML with additional expressiveness. LEMA will be used during the automatic generation of certified numerical codes. Such a generation process typically involves several steps, and LEMA would thus act as a glue ...
Project-Team CACAO: Curves, Algebra, Computer Arithmetic, and so On
"... Activity report 2009. Table of contents ..."
New modular multiplication and division algorithms based on continued fraction expansion
In this paper, we apply results on number systems based on continued fraction expansions to modular arithmetic. We provide two new algorithms to compute modular multiplication and modular division. The presented algorithms are based on the Euclidean algorithm and are of quadratic complexity.
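For orientation, the classical quadratic-time baseline in this setting is modular division via the extended Euclidean algorithm. The sketch below shows that standard method, not the paper's continued-fraction algorithms:

```python
def ext_gcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y)
    with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_div(a, b, m):
    """Compute a / b (mod m); requires gcd(b, m) == 1."""
    g, inv, _ = ext_gcd(b % m, m)
    if g != 1:
        raise ValueError("b is not invertible modulo m")
    return (a * inv) % m

print(mod_div(10, 3, 17))   # 9, since 9 * 3 = 27 ≡ 10 (mod 17)
```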
RIGOROUS UNIFORM APPROXIMATION OF D-FINITE FUNCTIONS USING CHEBYSHEV EXPANSIONS — DRAFT ⋆ —
A wide range of numerical methods exists for computing polynomial approximations of solutions of ordinary differential equations based on Chebyshev series expansions or Chebyshev interpolation polynomials. We consider the application of such methods in the context of rigorous computing (where we need guarantees on the accuracy of the result), and from the complexity point of view. It is well-known that the order ...
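To give a flavor of the underlying machinery (in plain double precision, without the rigorous error guarantees the paper is about), the sketch below builds a Chebyshev interpolant at Chebyshev nodes and evaluates it with the Clenshaw recurrence; exp and degree 15 are arbitrary illustrative choices:

```python
import math

def cheb_coeffs(f, N):
    """Coefficients c_0..c_{N-1} of the degree-(N-1) Chebyshev
    interpolant of f on [-1, 1], sampled at the N Chebyshev nodes."""
    nodes = [math.cos(math.pi * (k + 0.5) / N) for k in range(N)]
    fv = [f(t) for t in nodes]
    return [(2.0 / N) * sum(fv[k] * math.cos(math.pi * j * (k + 0.5) / N)
                            for k in range(N))
            for j in range(N)]

def cheb_eval(c, x):
    """Clenshaw recurrence for c_0/2 + sum_{j>=1} c_j T_j(x)."""
    b1 = b2 = 0.0
    for cj in reversed(c[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + cj, b1
    return x * b1 - b2 + 0.5 * c[0]

c = cheb_coeffs(math.exp, 16)
err = max(abs(cheb_eval(c, 0.01 * i - 1.0) - math.exp(0.01 * i - 1.0))
          for i in range(201))
print(err)   # near machine precision for exp on [-1, 1]
```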
Multiple-precision evaluation of the Airy Ai function with reduced cancellation
The series expansion at the origin of the Airy function Ai(x) is alternating and hence problematic to evaluate for x > 0 due to cancellation. Based on a method recently proposed by Gawronski, Müller, and Reinhard, we exhibit two functions F and G, both with nonnegative Taylor expansions at the origin, such that Ai(x) = G(x)/F(x). The sums are now well-conditioned, but the Taylor coefficients of G turn out to obey an ill-conditioned three-term recurrence. We use the classical Miller algorithm to overcome this issue. We bound all errors, and our implementation allows arbitrary, certified accuracy that can be used, e.g., to provide correct rounding in arbitrary precision.
Keywords—Special functions; algorithm; numerical evaluation; arbitrary precision; Miller method; asymptotics; correct rounding; error bounds.
Many mathematical functions (e.g., trigonometric functions, erf, Bessel functions) have a Taylor series of the form

    y(x) = x^s · Σ_{n=0}^{∞} y_n x^{dn},   y_n ∼ (−1)^n λ α^n / (n!)^κ,   (1)

with d, s ∈ Z and α, κ > 0. For large x > 0, the computation in finite precision arithmetic of such a sum is notoriously prone to catastrophic cancellation. Indeed, the terms |y_n x^{dn}| first grow before the series “starts to converge” when n^κ ≥ α x^d. In particular, when n^κ ≈ α x^d, the terms |y_n x^{dn}| usually get much larger than |y(x)|. Eventually, their leading bits cancel out, while lower-order bits that actually contribute to the first significant digits of the result are lost in roundoff errors. This cancellation phenomenon makes direct computation by Taylor series impractical for large values of x. Often, the function y(x) admits an asymptotic expansion as x → +∞ that can be used very effectively to obtain numerical approximations when x is large, but it might not provide enough accuracy (at least without resorting to sophisticated resummation methods) for intermediate values of x.
In the case of the error function erf(x), a classical trick going back at least to Stegun and Zucker [18] is to compute erf(x) as G(x)/F(x), where F(x) = e^{x^2} and [1, Eq. 7.6.2]

    G(x) = e^{x^2} erf(x) = 2x ...
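The same cancellation mechanism, and the G/F remedy, can be demonstrated on the exponential series itself in ordinary double precision (an illustration of the phenomenon, not the paper's Airy algorithm):

```python
import math

def exp_taylor(x, terms=200):
    """Naive Taylor sum of exp(x) = sum x^n / n! in double precision."""
    s, t = 1.0, 1.0
    for n in range(1, terms):
        t *= x / n
        s += t
    return s

x = 30.0
exact = math.exp(-x)              # ~9.36e-14
naive = exp_taylor(-x)            # alternating terms grow to ~8e11:
                                  # catastrophic cancellation, result is noise
via_ratio = 1.0 / exp_taylor(x)   # all terms positive: well-conditioned,
                                  # same idea as writing Ai(x) = G(x)/F(x)
```

At x = 30 the naive alternating sum loses roughly 25 decimal digits to cancellation, far more than a double holds, while the ratio form stays accurate to near machine precision.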
Multiple-precision evaluation of the Airy Ai function with reduced cancellation (author manuscript, published in "21st IEEE Symposium on Computer Arithmetic (2013)")