Results 11–20 of 93
NumGfun: a Package for Numerical and Analytic Computation with D-finite Functions
Abstract

Cited by 4 (2 self)
This article describes the implementation in the software package NumGfun of classical algorithms that operate on solutions of linear differential equations or recurrence relations with polynomial coefficients, including what seems to be the first general implementation of the fast high-precision numerical evaluation algorithms of Chudnovsky & Chudnovsky. In some cases, our descriptions contain improvements over existing algorithms. We also provide references to relevant ideas not currently used in NumGfun.
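The fast evaluation algorithms referred to here rest on binary splitting of the defining recurrence. As a rough illustration (a generic sketch, not NumGfun's actual code), here is binary splitting for the D-finite series e = Σ 1/n!: all arithmetic is done on large integers combined in a balanced tree, which is what makes quasi-linear-time high-precision evaluation possible.

```python
def bsplit(a, b):
    """Return integers (p, q) with p/q = sum_{n=a+1}^{b} 1/((a+1)(a+2)...n),
    i.e. the series for e - 1 rewritten over the index range (a, b]."""
    if b - a == 1:
        return 1, b
    m = (a + b) // 2
    p1, q1 = bsplit(a, m)   # left half: denominator q1 = (a+1)...m
    p2, q2 = bsplit(m, b)   # right half: denominator q2 = (m+1)...b
    # p1/q1 + (1/q1) * (p2/q2) = (p1*q2 + p2) / (q1*q2)
    return p1 * q2 + p2, q1 * q2

terms = 40                  # truncation error ~ 1/41!, far below 1e-40
p, q = bsplit(0, terms)
digits = 40
# e = 1 + p/q up to the truncation error; scale to read off the digits
e_digits = str((10**digits * (p + q)) // q)
```

The same combine step works for any linear recurrence with polynomial coefficients; only the base case changes.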
Why and how to use arbitrary precision
 Computing in Science and Engineering
Abstract

Cited by 3 (1 self)
Most floating-point computations nowadays are done in double precision, i.e., with a significand (or mantissa; see the “Glossary” sidebar) of 53 bits. However, some applications require more precision: double-extended (64 bits or more), quadruple precision (113 bits), or even more. In an article published in The Astronomical Journal in 2001, Toshio Fukushima says: “In the days of powerful computers, the errors of numerical integration are the main limitation in the research of complex dynamical systems, such as the long-term stability of our solar system and of some exoplanets [...]” and gives an example where using double precision leads to an accumulated round-off error of more than 1 radian for the solar system! Another example where arbitrary precision is useful is static analysis of floating-point programs running in electronic control units of aircraft or in nuclear reactors. Assume we want to determine 10 decimal digits of the constant 173746a + 94228b − 78487c, where a = sin(1022), b = log(17.1), and c = exp(0.42). We will consider this as our running example throughout the paper. In this simple example, there are no input errors, since all values are known exactly, i.e., with infinite precision. Our first program — in the C language — is:
alphaCertified: certifying solutions to polynomial systems
 ACM Trans. Math. Softw
Abstract

Cited by 3 (1 self)
Abstract. Smale’s α-theory uses estimates related to the convergence of Newton’s method to certify that Newton iterations will converge quadratically to solutions of a square polynomial system. The program alphaCertified implements algorithms based on α-theory to certify solutions of polynomial systems using both exact rational arithmetic and arbitrary-precision floating-point arithmetic. It also implements algorithms that certify whether a given point corresponds to a real solution, and algorithms to heuristically validate solutions to overdetermined systems. Examples are presented to demonstrate the algorithms.
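In the univariate case the α-test is short enough to state in full. The sketch below (plain floating point, so unlike alphaCertified it does not control its own rounding errors) computes β(f,x) = |f(x)/f′(x)| and γ(f,x) = max_{k≥2} |f^(k)(x)/(k! f′(x))|^{1/(k−1)}, and certifies the point when α = βγ is below Smale's constant (13 − 3√17)/4 ≈ 0.1577.

```python
import math

def poly_eval(coeffs, x):
    """Horner evaluation; coeffs[i] is the coefficient of x**i."""
    r = 0.0
    for c in reversed(coeffs):
        r = r * x + c
    return r

def derivative(coeffs):
    return [i * c for i, c in enumerate(coeffs)][1:]

def alpha_test(coeffs, x):
    """Univariate alpha-test: returns (alpha, certified?).  For a
    polynomial the sup in gamma is a max over finitely many k."""
    d = len(coeffs) - 1
    derivs = [coeffs]
    for _ in range(d):
        derivs.append(derivative(derivs[-1]))
    f = poly_eval(derivs[0], x)
    f1 = poly_eval(derivs[1], x)
    beta = abs(f / f1)
    gamma = 0.0
    fact = 1
    for k in range(2, d + 1):
        fact *= k            # fact == k!
        term = abs(poly_eval(derivs[k], x) / (fact * f1)) ** (1.0 / (k - 1))
        gamma = max(gamma, term)
    alpha = beta * gamma
    return alpha, alpha < (13 - 3 * math.sqrt(17)) / 4   # ~0.157671

# x = 1.5 as an approximate root of x^2 - 2:
alpha_val, certified = alpha_test([-2.0, 0.0, 1.0], 1.5)
```

Here β = 0.25/3, γ = 1/3, so α ≈ 0.028 < 0.1577 and Newton from 1.5 is certified to converge quadratically to √2.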
Worst Cases for the Exponential Function in the IEEE 754r decimal64 Format
Abstract

Cited by 2 (0 self)
We searched for the worst cases for correct rounding of the exponential function in the IEEE 754r decimal64 format, and computed all the bad cases whose distance from a breakpoint (for all rounding modes) is less than 10^-15 ulp, and we give the worst ones. In particular, the worst case for |x| ≥ 3 × 10^-11 is exp(9.407822313572878 × 10^-2) = 1.098645682066338 5 0000000000000000 278.... This work can be extended to other elementary functions in the decimal64 format and allows the design of reasonably fast routines that will evaluate these functions with correct rounding, at least in some domains.
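The quoted bad case is easy to check with a correctly rounded multiple-precision library. Python's decimal module documents exp as correctly rounded, so the following sketch reproduces the long run of zeros after the halfway digit that makes this input hard to round:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50          # well beyond decimal64's 16 digits
y = Decimal('9.407822313572878E-2').exp()
# The 16-digit significand 1.098645682066338 is followed by the halfway
# digit 5 and then sixteen 0s: the round-to-nearest decision is only
# settled 33 digits beyond the target precision.
s = str(y)
```

Any evaluation routine with fewer than roughly 33 guard digits cannot round this value correctly, which is why such worst-case searches matter for the design of correctly rounded libraries.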
Guaranteed Precision for Transcendental and Algebraic Computation made Easy
, 2006
On the Computation of Correctly Rounded Sums
Abstract

Cited by 2 (0 self)
Abstract—This paper presents a study of some basic blocks needed in the design of floating-point summation algorithms. In particular, in radix-2 floating-point arithmetic, we show that among the set of the algorithms with no comparisons performing only floating-point additions/subtractions, the 2Sum algorithm introduced by Knuth is minimal, both in terms of number of operations and depth of the dependency graph. We investigate the possible use of another algorithm, Dekker’s Fast2Sum algorithm, in radix-10 arithmetic. We give methods for computing, in radix 10, the floating-point number nearest the average value of two floating-point numbers. We also prove that under reasonable conditions, an algorithm performing only round-to-nearest additions/subtractions cannot compute the round-to-nearest sum of at least three floating-point numbers. Starting from an algorithm due to Boldo and Melquiond, we also present new results about the computation of the correctly-rounded sum of three floating-point numbers. For a few of our algorithms, we assume new operations defined by the recent IEEE 754-2008 Standard are available. Index Terms—Floating-point arithmetic, summation algorithms, correct rounding, 2Sum and Fast2Sum algorithms.
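Both algorithms discussed here are a few lines long; the following is a standard rendition (a sketch for radix-2 IEEE doubles, which is what Python floats are):

```python
def two_sum(a, b):
    """Knuth's branch-free 2Sum: returns (s, t) with s = fl(a + b) and
    s + t == a + b exactly (t is the rounding error), in radix-2 FP."""
    s = a + b
    a1 = s - b
    b1 = s - a1
    da = a - a1
    db = b - b1
    return s, da + db          # 6 operations, no comparisons

def fast_two_sum(a, b):
    """Dekker's Fast2Sum: only 3 operations, but requires |a| >= |b|
    (more precisely, a's exponent no smaller than b's)."""
    s = a + b
    return s, b - (s - a)

s, t = two_sum(1.0, 2.0**-60)  # s is 1.0; t recovers the lost 2**-60
```

Fast2Sum saves three operations but needs the magnitude precondition, which is exactly the assumption whose radix-10 validity the paper examines.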
The Design of Core 2: A Library for Exact Numeric Computation in Geometry and Algebra
 In Third International Congress on Mathematical Software, volume 6327, Kobe, Japan
, 2010
Abstract

Cited by 1 (0 self)
Abstract. There is a growing interest in numeric-algebraic techniques in the computer algebra community as such techniques can speed up many applications. This paper is concerned with one such approach called Exact Numeric Computation (ENC). The ENC approach to algebraic number computation is based on iterative verified approximations, combined with constructive zero bounds. This paper describes Core 2, the latest version of the Core Library, a package designed for applications such as nonlinear computational geometry. The adaptive complexity of ENC combined with filters makes such libraries practical. Core 2 smoothly integrates our algebraic ENC subsystem with transcendental functions with ε-accurate comparisons. This paper describes how the design of Core 2 addresses key software issues such as modularity, extensibility, and efficiency in a setting that combines algebraic and transcendental elements. Our redesign preserves the original goals of the Core Library, namely, to provide a simple and natural interface for ENC computation to support rapid prototyping and exploration. We present examples, experimental results, and timings for our new system, released as Core Library 2.0.
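The ENC idea of iterative verified approximations plus constructive zero bounds can be caricatured in a few lines. The sketch below is not Core 2's API: the crude "few ulps" error bound and the 100-digit "zero bound" are made-up stand-ins for the library's certified intervals and root bounds. It decides the sign of √2 + √3 − √(5 + 2√6), which is exactly zero since (√2 + √3)² = 5 + 2√6.

```python
from decimal import Decimal, getcontext

def approx(prec):
    """Approximate sqrt(2) + sqrt(3) - sqrt(5 + 2*sqrt(6)) with prec
    digits; each Decimal sqrt is correctly rounded, so the total error
    is a few ulps of the ~3.1 intermediate values."""
    getcontext().prec = prec
    return (Decimal(2).sqrt() + Decimal(3).sqrt()
            - (5 + 2 * Decimal(6).sqrt()).sqrt())

def sign(zero_bound_digits=100):
    """ENC-style sign test: refine until the error bound separates the
    value from 0, or give up and declare it zero once the precision
    exceeds the (hypothetical) constructive zero bound."""
    prec = 10
    while prec <= zero_bound_digits:
        v = approx(prec)
        err = Decimal(10) ** (-(prec - 2))   # crude bound: a few ulps
        if abs(v) > err:
            return 1 if v > 0 else -1
        prec *= 2                            # adaptive refinement
    return 0   # still indistinguishable from 0 at the zero bound
```

For nonzero inputs the loop typically exits at low precision; the zero bound is only reached in the degenerate (exactly zero) case, which is what makes the adaptive complexity practical.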
Foundations of exact rounding
 Proc. WALCOM 2009
Abstract

Cited by 1 (0 self)
Abstract. Exact rounding of numbers and functions is a fundamental computational problem. This paper introduces the mathematical and computational foundations for exact rounding. We show that all the elementary functions in the ISO standard for Language Independent Arithmetic (ISO/IEC 10967) can be exactly rounded, in any format, and to any precision. Moreover, a priori complexity bounds can be given for these rounding problems. Our conclusions are derived from results in transcendental number theory.
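Exact rounding is usually realized with Ziv's strategy: evaluate with guard digits and accept the answer only once the error interval cannot straddle a rounding breakpoint. The sketch below is a didactic caricature using Python's decimal module (whose exp is itself already correctly rounded, so for easy inputs the loop exits on the first try); the one-working-ulp error bound is an assumption, not a proved bound.

```python
from decimal import Decimal, Context, getcontext

def exp_rounded(x, target):
    """Round exp(x) to `target` significant digits by Ziv's onion
    peeling: retry with more guard digits until the error interval
    [y - err, y + err] rounds to a single target-precision value."""
    guard = 10
    while True:
        getcontext().prec = target + guard
        y = Decimal(x).exp()
        # One ulp at working precision (stand-in for a rigorous bound):
        err = Decimal(10) ** (y.adjusted() - (target + guard) + 1)
        ctx = Context(prec=target)
        lo, hi = ctx.plus(y - err), ctx.plus(y + err)
        if lo == hi:           # both ends round the same way: done
            return lo
        guard *= 2             # too close to a breakpoint: peel a layer
```

The paper's contribution is precisely what this sketch takes for granted: a proof, via transcendental number theory, that the loop terminates with an a priori bound on the guard digits needed.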
A mixed precision Monte Carlo methodology for reconfigurable accelerator systems
 In Proc. FPGA
, 2012
Abstract

Cited by 1 (1 self)
This paper introduces a novel mixed precision methodology applicable to any Monte Carlo (MC) simulation. It involves the use of datapaths with reduced precision, and the resulting errors are corrected by auxiliary sampling. An analytical model is developed for a reconfigurable accelerator system with a field-programmable gate array (FPGA) and a general purpose processor (GPP). Optimisation based on mixed integer geometric programming is employed for determining the optimal reduced precision and optimal resource allocation among the MC datapaths and correction datapaths. Experiments show that the proposed mixed precision methodology requires up to 11% additional evaluations while less than 4% of all the evaluations are computed in the reference precision; the resulting designs are up to 7.1 times faster and 3.1 times more energy efficient than baseline double precision FPGA designs, and up to 163 times faster and 170 times more energy efficient than quad-core software designs optimised with the Intel compiler and Math Kernel Library. Our methodology also produces designs for pricing Asian options which are 4.6 times faster and 5.5 times more energy efficient than NVIDIA Tesla C2070 GPU implementations.
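The auxiliary-sampling idea is essentially a two-level estimator: run many samples through the cheap reduced-precision datapath and correct the bias with a few paired evaluations of the high/low difference. The CPU sketch below is a caricature of the FPGA setting — the payoff is invented, and float32 rounding via struct stands in for the tunable datapath precision. It estimates E[max(e^Z − 1, 0)] for standard normal Z, whose exact value is e^{1/2}Φ(1) − 1/2 ≈ 0.8871.

```python
import math
import random
import struct

def to_f32(x):
    """Round a double to the nearest IEEE binary32 value (a stand-in
    for the reduced-precision FPGA datapath)."""
    return struct.unpack('f', struct.pack('f', x))[0]

def payoff_hi(z):                     # reference-precision evaluation
    return max(math.exp(z) - 1.0, 0.0)

def payoff_lo(z):                     # same payoff, rounded step by step
    return to_f32(max(to_f32(math.exp(to_f32(z))) - 1.0, 0.0))

random.seed(12345)
n_cheap, n_corr = 200_000, 2_000      # few correction samples, as here
zs = [random.gauss(0.0, 1.0) for _ in range(n_cheap)]
cheap = sum(payoff_lo(z) for z in zs) / n_cheap
correction = sum(payoff_hi(z) - payoff_lo(z) for z in zs[:n_corr]) / n_corr
estimate = cheap + correction         # E[lo] + E[hi - lo] = E[hi]
```

Because the high/low difference has tiny variance, very few reference-precision evaluations suffice to remove the reduced-precision bias, which is the source of the paper's speed and energy gains.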