Results 1-10 of 18
Accurate Sum and Dot Product
 SIAM J. Sci. Comput.
, 2005
"... Algorithms for summation and dot product of floating point numbers are presented which are fast in terms of measured computing time. We show that the computed results are as accurate as if computed in twice or Kfold working precision, K 3. For twice the working precision our algorithms for summa ..."
Abstract

Cited by 64 (5 self)
Algorithms for summation and dot product of floating-point numbers are presented which are fast in terms of measured computing time. We show that the computed results are as accurate as if computed in twice or K-fold working precision, K ≥ 3. For twice the working precision our algorithms for summation and dot product are some 40% faster than the corresponding XBLAS routines while sharing similar error estimates. Our algorithms are widely applicable because they require only addition, subtraction and multiplication of floating-point numbers in the same working precision as the given data. Higher precision is unnecessary, the algorithms are straight loops without branches, and no access to mantissa or exponent is necessary.
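The error-free transformation underlying such algorithms can be sketched in a few lines of Python (a minimal illustration of Knuth's TwoSum and a compensated summation loop in the spirit of this paper; the names `twosum` and `sum2` are ours, not the paper's code):

```python
def twosum(a, b):
    """Error-free transformation (Knuth): returns (s, e) with
    s = fl(a + b) and a + b = s + e exactly, using only
    working-precision additions/subtractions, no branches."""
    s = a + b
    a_virtual = s - b
    b_virtual = s - a_virtual
    e = (a - a_virtual) + (b - b_virtual)
    return s, e

def sum2(p):
    """Compensated summation: result is as accurate as if the sum
    were computed in twice the working precision, then rounded."""
    s = 0.0
    err = 0.0
    for x in p:
        s, e = twosum(s, x)
        err += e  # accumulate the exact per-step rounding errors
    return s + err

# An ill-conditioned example: naive left-to-right summation
# cancels the small terms entirely, the compensated sum does not.
data = [1.0, 1e100, 1.0, -1e100]
```

On this example `sum(data)` returns 0.0 while `sum2(data)` returns the exact sum 2.0, illustrating the "twice the working precision" claim.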
Propagation of roundoff errors in finite precision computations: a semantics approach
 In ESOP’02, number 2305 in LNCS
, 2002
"... Abstract. We introduce a concrete semantics for floatingpoint operations which describes the propagation of roundoff errors throughout a interpretation which can be straightforwardly derived from it. In our model, every elementary operation introduces a new first order error term, which is later co ..."
Abstract

Cited by 13 (3 self)
Abstract. We introduce a concrete semantics for floating-point operations which describes the propagation of roundoff errors throughout a calculation. This semantics is used to assert the correctness of a static analysis which can be straightforwardly derived from it. In our model, every elementary operation introduces a new first-order error term, which is later combined with other error terms, yielding higher-order error terms. The semantics is parameterized by the maximal order of error to be examined and verifies whether higher-order errors actually are negligible. We also consider coarser semantics computing the contribution, to the final error, of the errors due to some intermediate computations.
An Aspect Language for Robust Programming
, 1999
"... this paper because robustness is particularly important in this #eld. The robustness aspect should be extended to be able to express directives on the other data types. If booleans are a trivial extension, a proper treatmentofpointerbased data structures implies the integration of an alias #or poin ..."
Abstract

Cited by 13 (2 self)
this paper because robustness is particularly important in this field. The robustness aspect should be extended to be able to express directives on the other data types. If booleans are a trivial extension, a proper treatment of pointer-based data structures implies the integration of an alias (or points-to) analysis. Second, we are investigating the application of the generic framework to other aspects, in particular a debugging and a security aspect. The former should allow the definition of debugging properties such as "trace the value of variable x in procedure p as soon as y becomes null". A first approach to the latter is the integration of resource-based security schemes such as "deny access to I/O port 123 from processes belonging to process group pg".
Semantics of roundoff error propagation in finite precision computations
 Higher-Order and Symbolic Computation
, 2006
"... Abstract. We introduce a concrete semantics for floatingpoint operations which describes the propagation of roundoff errors throughout a calculation. This semantics is used to assert the correctness of a static analysis which can be straightforwardly derived from it. In our model, every elementary ..."
Abstract

Cited by 11 (6 self)
Abstract. We introduce a concrete semantics for floating-point operations which describes the propagation of roundoff errors throughout a calculation. This semantics is used to assert the correctness of a static analysis which can be straightforwardly derived from it. In our model, every elementary operation introduces a new first-order error term, which is later propagated and combined with other error terms, yielding higher-order error terms. The semantics is parameterized by the maximal order of error to be examined and verifies whether higher-order errors actually are negligible. We also consider coarser semantics computing the contribution, to the final error, of the errors due to some intermediate computations. As a result, we obtain a family of semantics and we show that the less precise ones are abstractions of the more precise ones.
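The flavor of this semantics can be conveyed with a tiny Python sketch that attaches a first-order error bound to each value, with every elementary operation contributing a fresh error term (an illustration of the idea only, not the paper's formalism; `U` is the unit roundoff of IEEE double precision, and higher-order terms are simply dropped):

```python
import sys

U = sys.float_info.epsilon / 2  # unit roundoff u for IEEE binary64

class FErr:
    """A computed value paired with a bound on its accumulated
    first-order roundoff error: each operation propagates the
    incoming error terms and introduces a new one <= u * |result|."""
    def __init__(self, value, err=0.0):
        self.value, self.err = value, err

    def __add__(self, other):
        v = self.value + other.value
        # propagate both operands' errors, add the fresh error term
        return FErr(v, self.err + other.err + U * abs(v))

    def __mul__(self, other):
        v = self.value * other.value
        # first-order propagation: |x|*err_y + |y|*err_x (the
        # second-order product err_x*err_y is neglected)
        propagated = abs(self.value) * other.err + abs(other.value) * self.err
        return FErr(v, propagated + U * abs(v))

# inputs treated as exact for the sketch
x, y = FErr(0.1), FErr(0.2)
z = (x + y) * FErr(3.0)
```

Here `z.err` is a (first-order) bound on how far `z.value` may lie from the exact real result, mirroring how the semantics tracks per-operation error terms through a calculation.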
Accurate floating-point summation
, 2005
"... Given a vector of floatingpoint numbers with exact sum s, we present an algorithm for calculating a faithful rounding of s into the set of floatingpoint numbers, i.e. one of the immediate floatingpoint neighbors of s. If the s is a floatingpoint number, we prove that this is the result of our a ..."
Abstract

Cited by 8 (0 self)
Given a vector of floating-point numbers with exact sum s, we present an algorithm for calculating a faithful rounding of s into the set of floating-point numbers, i.e. one of the immediate floating-point neighbors of s. If s is a floating-point number, we prove that this is the result of our algorithm. The algorithm adapts to the condition number of the sum, i.e. it is very fast for mildly conditioned sums, with computing time increasing slowly in proportion to the condition number. All statements are also true in the presence of underflow. Furthermore, algorithms with K-fold accuracy are derived; in that case the result is stored in a vector of K floating-point numbers. We also present an algorithm for rounding the sum s to the nearest floating-point number. Our algorithms are fast in terms of measured computing time because they require no special operations such as access to mantissa or exponent, contain no branches in the inner loop, and need no extra precision: the only operations used are standard floating-point addition, subtraction and multiplication in one working precision, for example double precision. Moreover, in contrast to other approaches, the algorithms are ideally suited for parallelization. We also sketch dot product algorithms with similar properties.
Applications of fast and accurate summation in computational geometry
, 2005
"... In this paper, we present a recent algorithm given by Ogita, Rump and Oishi [39] for accurately computing the sum of n floating point numbers. They also give a computational
error bound for the computed result. We apply this algorithm in computing determinant and more particularly in computing robus ..."
Abstract

Cited by 2 (0 self)
In this paper, we present a recent algorithm given by Ogita, Rump and Oishi [39] for accurately computing the sum of n floating-point numbers. They also give a computational error bound for the computed result. We apply this algorithm to computing determinants and, more particularly, to computing the robust geometric predicates used in computational geometry. We improve existing results that use either multiprecision libraries or large extended accumulators.
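The kind of predicate in question can be illustrated with the classic 2D orientation test: the naive floating-point determinant may report the wrong sign for nearly collinear points, whereas an exact evaluation cannot (a simple rational-arithmetic reference for illustration, not the paper's summation-based algorithm):

```python
from fractions import Fraction

def orient2d_naive(a, b, c):
    """Sign of det[b-a, c-a]: > 0 if a, b, c make a left turn,
    < 0 for a right turn, 0 if collinear -- but evaluated in
    floating point, so the sign can be wrong near collinearity."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def orient2d_exact(a, b, c):
    """The same predicate over exact rationals: the sign is
    always correct, at the cost of much slower arithmetic."""
    ax, ay = Fraction(a[0]), Fraction(a[1])
    bx, by = Fraction(b[0]), Fraction(b[1])
    cx, cy = Fraction(c[0]), Fraction(c[1])
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
```

Accurate-summation approaches such as the one applied in this paper aim for the reliability of the exact version at a speed much closer to the naive one, by summing the four products with a guaranteed error bound.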
Borneo 1.0.2: Adding IEEE 754 floating point support to Java
, 1998
"... 1 2. INTRODUCTION 1 2.1. Portability and Purity 2 2.2. Goals of Borneo 3 2.3. Brief Description of an IEEE 754 Machine 3 2.4. Language Features for Floating Point Computation 6 3. FUTURE WORK 9 3.1. Incorporating Java 1.1 Features 9 3.2. Unicode Support 10 3.3. Flush to Zero 10 3.4. Variable Trappin ..."
Abstract

Cited by 1 (0 self)
2. INTRODUCTION
   2.1. Portability and Purity
   2.2. Goals of Borneo
   2.3. Brief Description of an IEEE 754 Machine
   2.4. Language Features for Floating Point Computation
3. FUTURE WORK
   3.1. Incorporating Java 1.1 Features
   3.2. Unicode Support
   3.3. Flush to Zero
   3.4. Variable Trapping Status
   3.5. Parametric Polymorphism
4. CONCLUSION
5. ACKNOWLEDGMENTS
6. BORNEO LANGUAGE SPECIFICATION
   6.1. indigenous
   6.2. Floating Point Literals
   6.3. Float, Double, and Indigenous classes
   6.4. New Numeric Types
   6.5. Floating Point System Properties
   6.6. Fused mac
   6.7. Rounding Modes
   6.8. Floating Point Exception Handling
   6.9. Operator Overloading
   6.10. ...
This material is based upon work supported under a National Science Foundation Graduate Fellowship. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
On the Computation of Correctly Rounded Sums
"... Abstract—This paper presents a study of some basic blocks needed in the design of floatingpoint summation algorithms. In particular, in radix2 floatingpoint arithmetic, we show that among the set of the algorithms with no comparisons performing only floatingpoint additions/subtractions, the 2Sum ..."
Abstract

Cited by 1 (0 self)
Abstract. This paper presents a study of some basic blocks needed in the design of floating-point summation algorithms. In particular, in radix-2 floating-point arithmetic, we show that among the set of algorithms with no comparisons performing only floating-point additions/subtractions, the 2Sum algorithm introduced by Knuth is minimal, both in terms of number of operations and depth of the dependency graph. We investigate the possible use of another algorithm, Dekker's Fast2Sum algorithm, in radix-10 arithmetic. We give methods for computing, in radix 10, the floating-point number nearest the average value of two floating-point numbers. We also prove that under reasonable conditions, an algorithm performing only round-to-nearest additions/subtractions cannot compute the round-to-nearest sum of at least three floating-point numbers. Starting from an algorithm due to Boldo and Melquiond, we also present new results about the computation of the correctly-rounded sum of three floating-point numbers. For a few of our algorithms, we assume new operations defined by the recent IEEE 754-2008 Standard are available. Index Terms: Floating-point arithmetic, summation algorithms, correct rounding, 2Sum and Fast2Sum algorithms.
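The two building blocks compared in this abstract can be sketched in Python with IEEE doubles (radix 2); Fast2Sum (Dekker) requires |a| >= |b| and uses 3 operations, while 2Sum (Knuth) drops that precondition at the cost of 6 operations, which the paper shows is minimal for comparison-free addition/subtraction-only algorithms (an illustrative sketch, not the paper's code):

```python
def fast2sum(a, b):
    """Dekker's Fast2Sum: assuming |a| >= |b| in radix 2, returns
    (s, e) with s = fl(a + b) and a + b = s + e exactly,
    in 3 floating-point operations."""
    s = a + b
    z = s - a
    e = b - z
    return s, e

def two_sum(a, b):
    """Knuth's 2Sum: the same error-free result with no ordering
    precondition on |a| and |b|, in 6 floating-point operations."""
    s = a + b
    a_virtual = s - b
    b_virtual = s - a_virtual
    return s, (a - a_virtual) + (b - b_virtual)
```

For instance, adding 1.0 and 2**-60 loses the small term in the rounded sum s, but both algorithms recover it exactly in the error component e, so the pair (s, e) represents the sum without any loss.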
Automatic Detection of Floating-Point Exceptions
, 1996
"... It is wellknown that floatingpoint exceptions can be disastrous and writing exceptionfree numerical programs is very difficult. Thus, it is important to automatically detect such errors. In this paper, we present Ariadne, a practical symbolic execution system specifically designed and implemented ..."
Abstract

Cited by 1 (0 self)
It is well-known that floating-point exceptions can be disastrous and writing exception-free numerical programs is very difficult. Thus, it is important to automatically detect such errors. In this paper, we present Ariadne, a practical symbolic execution system specifically designed and implemented for detecting floating-point exceptions. Ariadne systematically transforms a numerical program to explicitly check each exception-triggering condition. Ariadne symbolically executes the transformed program using real arithmetic to find candidate real-valued inputs that can reach and trigger an exception. Ariadne converts each candidate input into a floating-point number, then tests it against the original program. In general, approximating floating-point arithmetic with real arithmetic can change paths from feasible to infeasible and vice versa. The key insight of this work is that, for the problem of detecting floating-point exceptions, this approximation works well in practice because, if one input reaches an exception, many are likely to, and at least one of them will do so over both floating-point and real arithmetic. To realize Ariadne, we also devised a novel, practical linearization technique to solve nonlinear constraints. We extensively evaluated Ariadne over 467 scalar functions in the widely used GNU Scientific Library (GSL). Our results show that Ariadne is practical and identifies a large number of real runtime exceptions in GSL. The GSL developers confirmed our preliminary findings and look forward to Ariadne's public release, which we plan to do in the near future.