Results 1 – 10 of 36
Algorithms for Arbitrary Precision Floating Point Arithmetic
Proceedings of the 10th Symposium on Computer Arithmetic, 1991
Cited by 71 (1 self)
We present techniques which may be used to perform computations of very high accuracy using only straightforward floating point arithmetic operations of limited precision, and we prove the validity of these techniques under very general hypotheses satisfied by most implementations of floating point arithmetic. To illustrate the application of these techniques, we present an algorithm which computes the intersection of a line and a line segment. The algorithm is guaranteed to correctly decide whether an intersection exists and, if so, to produce the coordinates of the intersection point accurate to full precision. Moreover, the algorithm is usually quite efficient; only in a few cases does guaranteed accuracy necessitate an expensive computation.

1. Introduction. "How accurate is a computed result if each intermediate quantity is computed using floating point arithmetic of a given precision?" The casual reader of Wilkinson's famous treatise [21] and similar roundoff error analyses might...
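The flavor of such techniques can be seen in an error-free addition, a standard building block of high-accuracy computation with limited-precision operations (Knuth's TwoSum; this sketch assumes IEEE-754 double precision with round-to-nearest, a special case of the paper's more general hypotheses):

```python
def two_sum(a, b):
    """Error-free addition: return (s, e) with s = fl(a + b) and
    a + b = s + e exactly, under round-to-nearest binary arithmetic."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e
```

The rounding error e, normally discarded, can be carried along and reintroduced later; that is how straightforward limited-precision operations can yield results accurate to full precision.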
On Properties of Floating Point Arithmetics: Numerical Stability and the Cost of Accurate Computations
1992
Cited by 32 (0 self)
Floating point arithmetics generally possess many regularity properties in addition to those that are typically used in roundoff error analyses; these properties can be exploited to produce computations that are more accurate and cost effective than many programmers might think possible. Furthermore, many of these properties are quite simple to state and to comprehend, but few programmers seem to be aware of them (or at least willing to rely on them). This dissertation presents some of these properties and explores their consequences for computability, accuracy, cost, and portability. For example, we consider several algorithms for summing a sequence of numbers and show that under very general hypotheses, we can compute a sum to full working precision at only somewhat greater cost than a simple accumulation, which can often produce a sum with no significant figures at all. This example, as well as others we present, can be generalized further by substituting still more complex algorith...
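One member of the family of summation algorithms the abstract compares can be sketched as compensated (Kahan) summation; the function name is illustrative:

```python
def kahan_sum(xs):
    """Sum a sequence while carrying the rounding error of each
    addition in a compensation term c, fed back into the next step."""
    s = 0.0
    c = 0.0
    for x in xs:
        y = x - c          # apply the correction from the previous step
        t = s + y
        c = (t - s) - y    # rounding error of s + y, with sign flipped
        s = t
    return s
```

Adding many tiny terms to a large one shows the difference: a naive left-to-right accumulation absorbs each tiny term entirely, while the compensated sum recovers them at only somewhat greater cost.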
Formal Verification of Floating Point Trigonometric Functions
Formal Methods in Computer-Aided Design: Third International Conference FMCAD 2000, volume 1954 of Lecture Notes in Computer Science, 2000
Cited by 32 (5 self)
We have formally verified a number of algorithms for evaluating transcendental functions in double-extended precision floating point arithmetic in the Intel® IA-64 architecture. These algorithms are used in the Itanium™ processor to provide compatibility with IA-32 (x86) hardware transcendentals, and similar ones are used in mathematical software libraries. In this paper we describe in some depth the formal verification of the sin and cos functions, including the initial range reduction step. This illustrates the different facets of verification in this field, covering both pure mathematics and the detailed analysis of floating point rounding.
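The range reduction step can be illustrated with a Cody-Waite style sketch: π/2 is split into a high part whose trailing bits are zero (so k·PI_HI is exact for moderate k) and a low part carrying the remaining bits. The constants below are the standard fdlibm pair; the verified Itanium code uses a more elaborate scheme, especially for huge arguments.

```python
import math

PI_HI = 1.5707963267341256      # high bits of pi/2 (trailing bits zero)
PI_LO = 6.077100506506192e-11   # pi/2 - PI_HI, to double precision

def reduce_arg(x):
    """Return (k, r) with x ~= k*(pi/2) + r and |r| roughly <= pi/4."""
    k = round(x / (math.pi / 2))
    r = (x - k * PI_HI) - k * PI_LO   # two-step subtraction limits rounding
    return k, r
```

sin(x) is then computed from a polynomial in the small remainder r together with the quadrant k mod 4, which is why the accuracy of this step dominates the accuracy of the whole function.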
Handling Floating-Point Exceptions in Numeric Programs
ACM Transactions on Programming Languages and Systems, 1996
Cited by 24 (0 self)
Language Constructs. Termination exception mechanisms such as those in Ada and C++ are supposed to terminate an unsuccessful computation as soon as possible after an exception occurs. However, none of the examples of numeric exception handling presented earlier depends on the immediate termination of a calculation signaling an exception. The IEEE exception flags scheme actually takes advantage of the fact that an immediate jump is not necessary; by raising a flag, making a substitution, and continuing, the IEEE Standard supports both an attempted/alternate form and a default substitution with a single, simple response to exceptions. A drawback of the IEEE flag solution, though, is its obvious lack of structure. Instead of being forced to set and reset flags, one would ideally have available a language construct that more directly reflected the attempted/alternate algorit...
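The flag-and-substitute behavior described above can be modeled in miniature. This is a toy sketch of the idea, not the real IEEE-754 machinery; all names are illustrative:

```python
import math

# Sticky status flags, as in the IEEE scheme: once raised, a flag stays
# raised until the program inspects and clears it.
flags = {"divide_by_zero": False, "invalid": False}

def safe_div(x, y):
    """Divide, but on an exception raise a flag, substitute a default
    result (infinity with the sign of x, or NaN), and keep going."""
    if y == 0.0:
        if x == 0.0:
            flags["invalid"] = True      # 0/0: substitute NaN
            return math.nan
        flags["divide_by_zero"] = True   # x/0: substitute infinity
        return math.copysign(math.inf, x)
    return x / y
```

A caller runs the whole computation and inspects the flags once at the end, rather than handling a jump at the point of the exception — exactly the property the abstract's examples rely on.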
Toward Efficient Static Analysis of Finite-Precision Effects in DSP Applications via Affine Arithmetic Modeling
Design Automation Conference (DAC 2003), 2003
Cited by 19 (0 self)
We introduce a static error analysis technique, based on smart interval methods from affine arithmetic, to help designers translate DSP codes from full-precision floating-point to smaller finite-precision formats. The technique gives results for numerical error estimation comparable to detailed simulation, but achieves speedups of three orders of magnitude by avoiding actual bit-level simulation. We show results for experiments mapping common DSP transform algorithms to implementations using small custom floating point formats.
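The core idea of affine arithmetic can be sketched in a few lines: each quantity is kept as x₀ + Σ xᵢεᵢ with every noise symbol εᵢ in [−1, 1], so correlated error terms cancel instead of compounding as they do in plain interval arithmetic. The minimal class below (names assumed) handles only addition and subtraction:

```python
class Affine:
    """An affine form: center + sum(terms[i] * eps_i), eps_i in [-1, 1]."""
    _next_id = 0

    def __init__(self, center, terms=None):
        self.center = center
        self.terms = dict(terms or {})   # noise symbol id -> deviation

    @classmethod
    def from_interval(cls, lo, hi):
        cls._next_id += 1
        return cls((lo + hi) / 2, {cls._next_id: (hi - lo) / 2})

    def __add__(self, other):
        terms = dict(self.terms)
        for i, t in other.terms.items():
            terms[i] = terms.get(i, 0.0) + t
        return Affine(self.center + other.center, terms)

    def __sub__(self, other):
        terms = dict(self.terms)
        for i, t in other.terms.items():
            terms[i] = terms.get(i, 0.0) - t
        return Affine(self.center - other.center, terms)

    def bounds(self):
        r = sum(abs(t) for t in self.terms.values())
        return self.center - r, self.center + r
```

Note that x − x collapses to exactly zero width, because both operands share the same noise symbol; this correlation tracking is what makes affine bounds tighter than interval bounds.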
Taylor models and floating-point arithmetic: proof that arithmetic operations are validated in COSY
2003
Cited by 13 (3 self)
The goal of this paper is to prove that the implementation of Taylor models in COSY, based on floating-point arithmetic, computes results satisfying the containment property, i.e. guaranteed results. First, ...
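The containment property hinges on outward rounding: every computed bound is widened so that the true real-number result is guaranteed to lie inside the floating-point enclosure. A minimal sketch for interval addition, using math.nextafter (Python 3.9+) to step one ulp outward on each side:

```python
import math

def add_enclose(a_lo, a_hi, b_lo, b_hi):
    """Add intervals [a_lo, a_hi] + [b_lo, b_hi] with outward rounding,
    so the exact real sum is contained in the returned interval."""
    lo = math.nextafter(a_lo + b_lo, -math.inf)
    hi = math.nextafter(a_hi + b_hi, math.inf)
    return lo, hi
```

Widening by one ulp is a crude but sufficient stand-in for directed rounding modes; the paper's proofs establish the analogous property for the actual COSY operations.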
Scalar fused multiply-add instructions produce floating-point matrix arithmetic provably accurate to the penultimate digit
 ACM Transactions on Mathematical Software
Cited by 13 (0 self)
Combined with doubly compensated summation, scalar fused multiply-add instructions redefine the concept of floating-point arithmetic, because they allow for the computation of sums of real or complex matrix products accurate to the penultimate digit. Particular cases include complex arithmetic, dot products, cross products, residuals of linear systems, determinants of small matrices, discriminants of quadratic, cubic, or quartic equations, and polynomials.
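Python exposes no guaranteed scalar fused multiply-add here, so this sketch substitutes Dekker's splitting to recover the exact product error — a software stand-in for the fma instruction the paper relies on — and accumulates products and their errors with an error-free addition:

```python
def split(a):
    """Split a double into hi + lo, each with at most 26 significant bits."""
    c = 134217729.0 * a          # 2**27 + 1
    hi = c - (c - a)
    return hi, a - hi

def two_product(a, b):
    """Return (p, e) with p = fl(a*b) and a*b = p + e exactly."""
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e

def comp_dot(xs, ys):
    """Compensated dot product: sum the products and their exact errors."""
    s = 0.0
    c = 0.0
    for x, y in zip(xs, ys):
        p, ep = two_product(x, y)
        t = s + p                     # TwoSum of s and p
        z = t - s
        es = (s - (t - z)) + (p - z)
        s = t
        c += ep + es                  # accumulate the rounding errors
    return s + c
```

With hardware fma, two_product collapses to p = a*b and e = fma(a, b, -p), which is the efficiency point the paper makes.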
Polynomial Real Root Finding in Bernstein Form
1994
Cited by 9 (0 self)
This dissertation addresses the problem of approximating, in floating-point arithmetic, all real roots (simple, clustered, and multiple) over the unit interval of polynomials in Bernstein form...
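Root finders for polynomials in Bernstein form typically build on de Casteljau's algorithm, which evaluates (and subdivides) such polynomials in a numerically stable way; a minimal evaluation sketch:

```python
def de_casteljau(coeffs, t):
    """Evaluate sum b_i * B_{i,n}(t) on [0, 1], given the Bernstein
    coefficients b_i, by repeated convex combination."""
    b = list(coeffs)
    n = len(b) - 1
    for r in range(1, n + 1):
        for i in range(n - r + 1):
            b[i] = (1.0 - t) * b[i] + t * b[i + 1]
    return b[0]
```

The same triangle of intermediate values yields the Bernstein coefficients of the polynomial on [0, t] and [t, 1], which is what drives the subdivision-based search for roots over the unit interval.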
Floating-Point Error Analysis Based on Affine Arithmetic
Proc. IEEE Int. Conf. on Acoust., Speech, and Signal Processing, 2003
Cited by 9 (0 self)
During the development of floating-point signal processing systems, an efficient error analysis method is needed to guarantee the output quality. We present a novel approach to floating-point error bound analysis based on affine arithmetic. The proposed method not only provides a tighter bound than the conventional approach, but is also applicable to any arithmetic operation. The error estimation accuracy is evaluated across several different applications covering linear operations, nonlinear operations, and feedback systems. The accuracy decreases with the depth of the computation path and is also affected by the linearity of the floating-point operations.
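For contrast with the conventional approach, a crude forward error model charges one unit roundoff u = 2⁻⁵³ per operation against a single worst-case relative bound — the standard (1 + δ) rounding model (class and names here are illustrative). Affine-arithmetic analysis tightens this by keeping individual error terms and their correlations instead of one lumped bound:

```python
U = 2.0 ** -53   # unit roundoff for IEEE-754 double precision

class Tracked:
    """A value paired with a first-order bound on its relative error."""

    def __init__(self, value, relerr=0.0):
        self.value = value
        self.relerr = relerr

    def __mul__(self, other):
        # To first order, relative errors add; the multiply itself
        # contributes one more rounding of at most U.
        return Tracked(self.value * other.value,
                       self.relerr + other.relerr + U)

    def __add__(self, other):
        # Valid for same-sign operands; cancellation would require
        # tracking absolute errors instead.
        return Tracked(self.value + other.value,
                       max(self.relerr, other.relerr) + U)
```

Because every operation only ever grows the single bound, deep computation paths accumulate pessimism — the effect the abstract notes, and the slack that correlation-aware affine analysis recovers.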