Results 1–10 of 19
Accurate Sum and Dot Product
SIAM J. Sci. Comput., 2005
Abstract

Cited by 64 (5 self)
Algorithms for summation and dot product of floating point numbers are presented which are fast in terms of measured computing time. We show that the computed results are as accurate as if computed in twice or K-fold working precision, K ≥ 3. For twice the working precision our algorithms for summation and dot product are some 40% faster than the corresponding XBLAS routines while sharing similar error estimates. Our algorithms are widely applicable because they require only addition, subtraction and multiplication of floating point numbers in the same working precision as the given data. Higher precision is unnecessary, the algorithms are straight loops without branches, and no access to mantissa or exponent is necessary.
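The core device behind such algorithms is an error-free transformation: a handful of ordinary floating-point adds and subtracts recover the exact rounding error of a sum. A minimal Python sketch of the idea (Knuth's TwoSum plus a cascaded summation in the style the abstract describes; the names `two_sum` and `sum2` are ours, not necessarily the paper's):

```python
def two_sum(a, b):
    """Error-free transformation (Knuth): a + b == s + e exactly."""
    s = a + b
    bv = s - a            # the part of b that actually entered s
    av = s - bv           # the part of a that actually entered s
    return s, (a - av) + (b - bv)

def sum2(xs):
    """Cascaded summation: result as accurate as if computed in
    roughly twice the working precision."""
    s = 0.0
    err = 0.0
    for x in xs:
        s, e = two_sum(s, x)
        err += e          # accumulate the exact per-step errors
    return s + err
```

For example, `sum2([1e16, 1.0, -1e16])` returns 1.0, while naive left-to-right summation of the same vector returns 0.0 because the 1.0 is absorbed by the large term.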
The Accuracy of Floating Point Summation
1993
Abstract

Cited by 39 (0 self)
The usual recursive summation technique is just one of several ways of computing the sum of n floating point numbers. Five summation methods and their variations are analyzed here. The accuracy of the methods is compared using rounding error analysis and numerical experiments. Four of the methods are shown to be special cases of a general class of methods, and an error analysis is given for this class. No one method is uniformly more accurate than the others, but some guidelines are given on the choice of method in particular cases.
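One classic alternative to plain recursive summation, analyzed in surveys of this kind, is compensated (Kahan) summation; a short sketch for contrast (a generic textbook version, not necessarily the paper's formulation):

```python
def kahan_sum(xs):
    """Compensated (Kahan) summation: a running correction term
    absorbs the rounding error of each addition."""
    s = 0.0
    c = 0.0               # compensation for lost low-order bits
    for x in xs:
        y = x - c         # apply the pending correction first
        t = s + y
        c = (t - s) - y   # (t - s) recovers the part of y rounded in
        s = t
    return s
```

The correction term `c` is what distinguishes this from recursive summation: each pass cancels most of the rounding error introduced by the previous addition.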
A distillation algorithm for floating-point summation
SIAM J. Sci. Comput., 1999
Abstract

Cited by 10 (0 self)
Abstract. The addition of two or more floating-point numbers is fundamental to numerical computations. This paper describes an efficient "distillation" style algorithm which produces a precise sum by exploiting the natural accuracy of compensated cancellation. The algorithm is applicable to all sets of data but is particularly appropriate for ill-conditioned data, where standard methods fail due to the accumulation of rounding error and its subsequent exposure by cancellation. The method uses only standard floating-point arithmetic and does not rely on the radix used by the arithmetic model, the architecture of specific machines, or the use of accumulators.
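A hedged sketch of the distillation idea (not the paper's exact algorithm): sweep the vector with an error-free pairwise transformation, so that each pass preserves the exact sum while concentrating it into fewer and fewer significant components:

```python
def two_sum(a, b):
    """Error-free transformation: a + b == s + e exactly."""
    s = a + b
    bv = s - a
    return s, (a - (s - bv)) + (b - bv)

def distill_sum(xs):
    """Each sweep replaces adjacent pairs by (error, sum); the exact
    sum of the vector never changes, and repeated sweeps drive it
    into the trailing components, where a plain sum recovers it."""
    v = list(xs)
    for _ in range(len(v)):            # enough sweeps to distill fully
        for i in range(len(v) - 1):
            s, e = two_sum(v[i], v[i + 1])
            v[i], v[i + 1] = e, s      # error stays behind, sum moves on
    return sum(v)
```

On an ill-conditioned input like `[1e16, 1.0, -1e16]`, where naive summation loses the 1.0 to cancellation, `distill_sum` returns 1.0.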
LIA InC++: A Local Interval Arithmetic Library for Discontinuous Intervals
1995
Abstract

Cited by 8 (5 self)
This paper documents the LIA InC++ library for local interval arithmetic in C++. The main innovation of the library is the idea of extending traditional interval arithmetic with "complement" intervals and discontinuous intervals. By these extensions it is possible to evaluate not only ranges of possible values (i.e., intervals) but ranges of impossible values as well. LIA InC++ contains classes for interval types and overloaded definitions for primitive interval arithmetic operators and functions. Open-ended intervals can be used in addition to the traditional closed ones. Intervals of infinite width (e.g., (−∞, 2], (−∞, ∞), ...) are accepted as inputs and are used for managing problems of overflowing values during function evaluation. The library uses double precision machine arithmetic with optional outward rounding, makes use of interval properties such as scalarity and symmetry, and uses some bit-level manipulations for efficient computation. The LIA library is the most fundamental one in our In...
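To illustrate the underlying model (ordinary closed intervals with outward rounding; this is a generic sketch, not the LIA InC++ API), each operation must enclose every real result between a downward-rounded lower endpoint and an upward-rounded upper endpoint. Python exposes no rounding-mode control, so this sketch widens each endpoint by one ulp with `math.nextafter` as a conservative stand-in for directed rounding:

```python
import math

def iadd(a, b):
    """[a] + [b]: add endpoints, then round outward by one ulp."""
    return (math.nextafter(a[0] + b[0], -math.inf),
            math.nextafter(a[1] + b[1], math.inf))

def imul(a, b):
    """[a] * [b]: the true range is bounded by the four endpoint
    products; round the minimum down and the maximum up."""
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (math.nextafter(min(p), -math.inf),
            math.nextafter(max(p), math.inf))
```

The key invariant is containment: for any x in [a] and y in [b], x + y and x * y lie inside the returned interval, even in the presence of rounding.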
InC++ Library Family for Interval Computations
International Journal of Reliable Computing, Supplement to the International Workshop on Applications of Interval Computations, 1995
Abstract

Cited by 8 (4 self)
This paper presents a series of C++ libraries for interval function evaluation and constraint satisfaction. Classical interval arithmetic (IA) (Moore, 1966) is extended by open-ended intervals, the notion of infinity, and by "complement" and discontinuous intervals. Both algebraic and numerical IA techniques are combined to obtain the actual range of interval functions efficiently and to determine better-than-local solutions for interval constraint satisfaction problems. Our practical goal is a set of portable C++ libraries that can be used in applications without a deep understanding of interval analysis.
Accurate floating-point summation
2005
Abstract

Cited by 8 (0 self)
Given a vector of floating-point numbers with exact sum s, we present an algorithm for calculating a faithful rounding of s into the set of floating-point numbers, i.e. one of the immediate floating-point neighbors of s. If s is a floating-point number, we prove that it is the result of our algorithm. The algorithm adapts to the condition number of the sum, i.e. it is very fast for mildly conditioned sums, with computing time increasing slowly in proportion to the condition number. All statements are also true in the presence of underflow. Furthermore, algorithms with K-fold accuracy are derived, where the result is stored in a vector of K floating-point numbers. We also present an algorithm for rounding the sum s to the nearest floating-point number. Our algorithms are fast in terms of measured computing time because they require no special operations such as access to mantissa or exponent, contain no branch in the inner loop, and need no extra precision: the only operations used are standard floating-point addition, subtraction and multiplication in one working precision, for example double precision. Moreover, in contrast to other approaches, the algorithms are ideally suited for parallelization. We also sketch dot product algorithms with similar properties.
PASCAL-XSC: New Concepts for Scientific Computation and Numerical Data Processing
Scientific Computing with Automatic Result Verification, 1993
Abstract

Cited by 7 (0 self)
1 Introduction

These days, the elementary arithmetic operations on electronic computers are usually approximated by floating-point operations of highest accuracy. In particular, for any choice of operands this means that the computed result coincides with the rounded exact result of the operation; see the IEEE Arithmetic Standard [3] as an example. This arithmetic standard also requires the four basic arithmetic operations +, −, ×, and / with directed roundings. A large number of processors already on the market provide these operations. So far, however, no common programming language allows access to them. On the other hand, there has been a noticeable shift in scientific computation from general purpose computers to vector and parallel computers. These so-called supercomputers provide additional arithmetic operations such as "multiply and
Interval Computations On The Spreadsheet
Applications of Interval Computations, 1996
Abstract

Cited by 6 (1 self)
This paper reviews work on using interval arithmetic as the basis for next-generation spreadsheet programs capable of dealing with rounding errors, imprecise data, and numerical constraints. A series of ever more versatile computational models for spreadsheets is presented, beginning from classical interval arithmetic and ending up with interval constraint satisfaction. In order to demonstrate the ideas, an actual implementation of each model as a class library is presented and its integration with a commercial spreadsheet program is explained.

1 LIMITATIONS OF SPREADSHEET COMPUTING

Spreadsheet programs, such as MS Excel, Quattro Pro, Lotus 1-2-3, etc., are among the most widely used applications of computer science. Since the pioneering days of VisiCalc and others, spreadsheet programs have been enhanced immensely with new features. However, the underlying computational paradigm of evaluating arithmetical functions by using ordinary machine arithmetic has remained the same. The wor...
The IAX Architecture: Interval Arithmetic Extension
1999
Abstract

Cited by 5 (0 self)
In this paper we discuss a processor architecture for interval arithmetic. Firstly, it is shown that double precision FPUs can cheaply be split to support single precision interval addition/subtraction or multiplication; secondly, we propose hardware support for double precision interval arithmetic and compare the effort and performance with software implementations on current architectures.

1 Introduction

In this paper we present an architecture extension of a double precision floating point unit (FPU) in order to support single precision interval arithmetic, much like the extensions which have been proposed for multimedia [6] or 3D graphics support [3, 2]. And indeed, the most promising application of single precision interval arithmetic is in the same area. For rendering images, ray tracing and other problems in computer graphics or constructive solid geometry, interesting algorithms using interval arithmetic have been proposed [10, 4]. The interval algorithms are superior to the...