Results 11–20 of 61
Why and how to use arbitrary precision
 Computing in Science and Engineering
Cited by 3 (1 self)
Abstract
Most of today's floating-point computations are done in double precision, i.e., with a significand (or mantissa; see the “Glossary” sidebar) of 53 bits. However, some applications require more precision: double-extended (64 bits or more), quadruple precision (113 bits), or even more. In an article published in The Astronomical Journal in 2001, Toshio Fukushima says: “In the days of powerful computers, the errors of numerical integration are the main limitation in the research of complex dynamical systems, such as the long-term stability of our solar system and of some exoplanets [...]” and gives an example where using double precision leads to an accumulated roundoff error of more than 1 radian for the solar system! Another example where arbitrary precision is useful is the static analysis of floating-point programs running in electronic control units of aircraft or in nuclear reactors. Assume we want to determine 10 decimal digits of the constant 173746a + 94228b − 78487c, where a = sin(10²²), b = log(17.1), and c = exp(0.42). We will consider this as our running example throughout the paper. In this simple example, there are no input errors, since all values are known exactly, i.e., with infinite precision. Our first program, in the C language, is:
alphaCertified: certifying solutions to polynomial systems
 ACM Trans. Math. Softw
Cited by 2 (1 self)
Abstract
Smale’s α-theory uses estimates related to the convergence of Newton’s method to certify that Newton iterations will converge quadratically to solutions of a square polynomial system. The program alphaCertified implements algorithms based on α-theory to certify solutions of polynomial systems using both exact rational arithmetic and arbitrary-precision floating-point arithmetic. It also implements algorithms that certify whether a given point corresponds to a real solution, and algorithms to heuristically validate solutions to overdetermined systems. Examples are presented to demonstrate the algorithms.
Guaranteed Precision for Transcendental and Algebraic Computation made Easy
, 2006
Cited by 2 (2 self)
Abstract
Dedicated to the friends and families who blessed and supported me
Worst Cases for the Exponential Function in the IEEE 754r decimal64 Format
Cited by 2 (0 self)
Abstract
We searched for the worst cases for correct rounding of the exponential function in the IEEE 754r decimal64 format, and computed all the bad cases whose distance from a breakpoint (for all rounding modes) is less than 10⁻¹⁵ ulp, and we give the worst ones. In particular, the worst case for |x| ≥ 3 × 10⁻¹¹ is exp(9.407822313572878 × 10⁻²) = 1.098645682066338 5 0000000000000000 278.... This work can be extended to other elementary functions in the decimal64 format and allows the design of reasonably fast routines that will evaluate these functions with correct rounding, at least in some domains.
Fast and robust generation of city-scale seamless 3D urban models
 In: SIAM Conference on Geometric and Physical Modeling (GD/SPM). SIAM/ACM
, 2011
Cited by 1 (1 self)
Abstract
Since the introduction of the concept of “Digital Earth”, almost every major international city has been reconstructed in the virtual world. A large volume of geometric models describing urban objects has become freely available in the public domain via software like ArcGlobe and Google Earth. Although mostly created for visualization, these urban models can benefit many applications beyond visualization, including city-scale evacuation planning and earth-phenomenon simulations. However, these models are mostly loosely structured and implicitly defined, and they require tedious manual preparation that usually takes weeks, if not months, before they can be used. Designing algorithms that can robustly and efficiently handle unstructured urban models at the city scale thus becomes a main technical challenge. In this paper, we present a framework that generates seamless 3D architectural models from 2D ground plans with elevation and height information. Such overlapping ground plans are commonly used in current GIS software, such as ESRI ArcGIS, and in urban model synthesis methods to depict the various components of buildings. Due to measurement and manual errors, these ground plans usually contain small, sharp, and (nearly) degenerate artifacts. In this paper, we show both theoretically and empirically that our framework is efficient and numerically stable. Based on our review of the related work, we believe this is the first work that attempts to automatically create 3D architectural meshes for simulation at the city level. With the goal of providing greater benefit beyond visualization from this large volume of urban models, our initial results are encouraging.
A mixed precision Monte Carlo methodology for reconfigurable accelerator systems
 In Proc. FPGA
, 2012
Cited by 1 (1 self)
Abstract
This paper introduces a novel mixed precision methodology applicable to any Monte Carlo (MC) simulation. It involves the use of datapaths with reduced precision, and the resulting errors are corrected by auxiliary sampling. An analytical model is developed for a reconfigurable accelerator system with a field-programmable gate array (FPGA) and a general-purpose processor (GPP). Optimisation based on mixed-integer geometric programming is employed to determine the optimal reduced precision and the optimal resource allocation among the MC datapaths and correction datapaths. Experiments show that the proposed mixed precision methodology requires up to 11% additional evaluations while fewer than 4% of all the evaluations are computed in the reference precision; the resulting designs are up to 7.1 times faster and 3.1 times more energy efficient than baseline double-precision FPGA designs, and up to 163 times faster and 170 times more energy efficient than quad-core software designs optimised with the Intel compiler and Math Kernel Library. Our methodology also produces designs for pricing Asian options which are 4.6 times faster and 5.5 times more energy efficient than NVIDIA Tesla C2070 GPU implementations.
On the Computation of Correctly Rounded Sums
Cited by 1 (0 self)
Abstract
This paper presents a study of some basic blocks needed in the design of floating-point summation algorithms. In particular, in radix-2 floating-point arithmetic, we show that, among the set of algorithms with no comparisons performing only floating-point additions/subtractions, the 2Sum algorithm introduced by Knuth is minimal, both in terms of number of operations and depth of the dependency graph. We investigate the possible use of another algorithm, Dekker’s Fast2Sum algorithm, in radix-10 arithmetic. We give methods for computing, in radix 10, the floating-point number nearest the average value of two floating-point numbers. We also prove that, under reasonable conditions, an algorithm performing only round-to-nearest additions/subtractions cannot compute the round-to-nearest sum of at least three floating-point numbers. Starting from an algorithm due to Boldo and Melquiond, we also present new results about the computation of the correctly rounded sum of three floating-point numbers. For a few of our algorithms, we assume that the new operations defined by the recent IEEE 754-2008 Standard are available. Index Terms: floating-point arithmetic, summation algorithms, correct rounding, 2Sum and Fast2Sum algorithms.
A MixedPrecision Fused Multiply and Add
Cited by 1 (0 self)
Abstract
The floating-point fused multiply and add, computing R = AB + C with a single rounding, is now an IEEE 754 standard operator. This article investigates variants in which the addend C and the result R are of a larger format, for instance binary64 (double precision), while the multiplier inputs A and B are of a smaller format, for instance binary32 (single precision). Like the standard FMA operator, the proposed mixed-precision operator computes AB + C with a single rounding, and it fully supports subnormals. With minor modifications, it is also able to perform the standard FMA in the smaller format, and the standard addition in the larger format. For sum-of-product applications, the proposed mixed-precision FMA provides the accumulation accuracy of the larger format at a cost that is shown to be only one third more than that of a classical FMA in the smaller format. Besides, we show that such a mixed-precision FMA, although not mentioned in existing standards (IEEE 754, C, and Fortran), is perfectly compliant with these standards. For DSP and embedded applications, a mixed binary32/binary64 FMA will enable binary64 computing where it is most needed, at a small cost overhead with respect to current binary32 FMAs, and with fewer data transfers, hence lower power, than a pure binary64 approach. In high-end processors, a mixed binary64/binary128 FMA could provide an adequate solution to the binary128 requirements of very-large-scale computing applications. Keywords: floating-point; fused multiply-add; dot product; mixed precision.
Implementing Taylor models arithmetic with floating-point arithmetic
 Proceedings in Applied Mathematics and Mechanics, 11/4/2008
Abstract
The implementation of Taylor models arithmetic may use floating-point arithmetic to benefit from the speed of the floating-point implementation. The issue is then to take the roundoff errors into account. Here, we assume that the floating-point arithmetic is compliant with the IEEE 754 standard. We show how to get tight bounds on the roundoff errors, and more generally how to get high accuracy for the coefficients as well as for the bounds on the roundoff errors.
1 Taylor models: definition and implementation issues
A Taylor model of order n is a pair (p, I) composed of a polynomial p of degree n and an interval I. It represents the class of functions in m variables f: ℝᵐ → ℝ that satisfy: ∀x ∈ [−1, 1]ᵐ, ∃r ∈ I: f(x) = p(x) + r. By convention, we choose [−1, 1]ᵐ as the input domain for every Taylor model and apply a linear change of variables if needed. Arithmetic and algebraic operations, composition, and elementary functions (sine, exponential, arctangent, ...) can be applied.