Results 1–10 of 85
The Tera computer system
 In International Conference on Supercomputing
, 1990
Abstract

Cited by 371 (2 self)
The Tera architecture was designed with several major goals in mind. First, it needed to be suitable for very high speed implementations, i.e., admit a short clock period and be scalable to many processors. This ...
Adaptive Precision Floating-Point Arithmetic and Fast Robust Geometric Predicates
 Discrete & Computational Geometry
, 1996
Abstract

Cited by 134 (5 self)
Exact computer arithmetic has a variety of uses including, but not limited to, the robust implementation of geometric algorithms. This report has three purposes. The first is to offer fast software-level algorithms for exact addition and multiplication of arbitrary precision floating-point values. The second is to propose a technique for adaptive-precision arithmetic that can often speed these algorithms when one wishes to perform multiprecision calculations that do not always require exact arithmetic, but must satisfy some error bound. The third is to provide a practical demonstration of these techniques, in the form of implementations of several common geometric calculations whose required degree of accuracy depends on their inputs. These robust geometric predicates are adaptive; their running time depends on the degree of uncertainty of the result, and is usually small. These algorithms work on computers whose floating-point arithmetic uses radix two and exact rounding, including machines complying with the IEEE 754 standard. The inputs to the predicates may be arbitrary single or double precision floating-point numbers. C code is publicly available for the 2D and 3D orientation and incircle tests, and robust Delaunay triangulation using these tests. Timings of the implementations demonstrate their effectiveness. Supported in part by the Natural Sciences and Engineering Research Council of Canada under a 1967 Science and Engineering Scholarship and by the National Science Foundation under Grant CMS-9318163. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either express or implied, of NSERC, NSF, or the U.S. Government. Keywords: arbitrary precision floating-point arit...
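The exact-addition building block this abstract refers to is, in the paper, constructed from error-free transformations such as Knuth's TwoSum. A minimal sketch in Python (whose floats are IEEE 754 doubles); the function name here is ours:

```python
def two_sum(a, b):
    """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and
    a + b = s + e exactly, for any two IEEE doubles (barring overflow)."""
    s = a + b
    b_virtual = s - a          # the part of b that made it into s
    a_virtual = s - b_virtual  # the part of a that made it into s
    e = (a - a_virtual) + (b - b_virtual)  # what rounding discarded
    return s, e
```

Arbitrary-precision "expansions" are then represented as sums of such nonoverlapping components, with the rounding error e never silently lost.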
Algorithms for Arbitrary Precision Floating Point Arithmetic
 Proceedings of the 10th Symposium on Computer Arithmetic
, 1991
Abstract

Cited by 65 (1 self)
We present techniques which may be used to perform computations of very high accuracy using only straightforward floating point arithmetic operations of limited precision, and we prove the validity of these techniques under very general hypotheses satisfied by most implementations of floating point arithmetic. To illustrate the application of these techniques, we present an algorithm which computes the intersection of a line and a line segment. The algorithm is guaranteed to correctly decide whether an intersection exists and, if so, to produce the coordinates of the intersection point accurate to full precision. Moreover, the algorithm is usually quite efficient; only in a few cases does guaranteed accuracy necessitate an expensive computation. 1. Introduction "How accurate is a computed result if each intermediate quantity is computed using floating point arithmetic of a given precision?" The casual reader of Wilkinson's famous treatise [21] and similar roundoff error analyses might...
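The limited-precision operations the abstract describes also admit an exact product, classically obtained by Veltkamp splitting plus Dekker's TwoProduct. A sketch under the assumption of 53-bit IEEE doubles (the splitting constant 2**27 + 1 is specific to that format; function names are ours):

```python
def veltkamp_split(a):
    """Split a double into hi + lo, each fitting in at most 26
    significand bits, so that products of halves are exact."""
    c = 134217729.0 * a        # 2**27 + 1
    hi = c - (c - a)
    lo = a - hi
    return hi, lo

def two_product(a, b):
    """Dekker-style TwoProduct: returns (p, e) with p = fl(a * b) and
    a * b = p + e exactly (barring overflow/underflow)."""
    p = a * b
    ah, al = veltkamp_split(a)
    bh, bl = veltkamp_split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e
```

Together with an exact sum, this is enough to compute quantities such as the line-segment intersection in the paper to full accuracy.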
Accurate Sum and Dot Product
 SIAM J. Sci. Comput
, 2005
Abstract

Cited by 64 (5 self)
Algorithms for summation and dot product of floating point numbers are presented which are fast in terms of measured computing time. We show that the computed results are as accurate as if computed in twice or K-fold working precision, K ≥ 3. For twice the working precision our algorithms for summation and dot product are some 40% faster than the corresponding XBLAS routines while sharing similar error estimates. Our algorithms are widely applicable because they require only addition, subtraction and multiplication of floating point numbers in the same working precision as the given data. Higher precision is unnecessary, the algorithms are straight loops without branches, and no access to mantissa or exponent is necessary.
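The twice-working-precision summation this abstract describes (the paper's Sum2) cascades an error-free transformation through the vector and adds the accumulated errors at the end. A minimal sketch, assuming IEEE doubles:

```python
def two_sum(a, b):
    # error-free transformation: a + b = s + e exactly
    s = a + b
    bv = s - a
    e = (a - (s - bv)) + (b - bv)
    return s, e

def sum2(p):
    """Sum of p, as accurate as if computed in twice the working
    precision (Ogita-Rump-Oishi Sum2): a straight loop, no branches,
    no access to mantissa or exponent."""
    s, sigma = p[0], 0.0
    for x in p[1:]:
        s, e = two_sum(s, x)
        sigma += e          # rounding errors accumulated separately
    return s + sigma
```

Note the body matches the abstract's claim: only ordinary additions and subtractions in working precision.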
Robust Adaptive Floating-Point Geometric Predicates
 in Proc. 12th Annu. ACM Sympos. Comput. Geom
, 1996
Abstract

Cited by 48 (1 self)
Fast C implementations of four geometric predicates, the 2D and 3D orientation and incircle tests, are publicly available. Their inputs are ordinary single or double precision floating-point numbers. They owe their speed to two features. First, they employ new fast algorithms for arbitrary precision arithmetic that have a strong advantage over other software techniques in computations that manipulate values of extended but small precision. Second, they are adaptive; their running time depends on the degree of uncertainty of the result, and is usually small. These algorithms work on computers whose floating-point arithmetic uses radix two and exact rounding, including machines that comply with the IEEE 754 floating-point standard. Timings of the predicates, in isolation and embedded in 2D and 3D Delaunay triangulation programs, verify their effectiveness. 1 Introduction Algorithms that make decisions based on geometric tests, such as determining which side of a line a point falls on, ...
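The 2D orientation test these predicates decide is the sign of a 2×2 determinant; the naive floating-point version below can return the wrong sign near degeneracy, which is exactly what the adaptive exact versions guard against. An illustrative sketch (an exact rational-arithmetic reference stands in here for the paper's much faster expansion arithmetic; function names are ours):

```python
from fractions import Fraction

def orient2d_naive(ax, ay, bx, by, cx, cy):
    """Sign of the cross product (b - a) x (c - a): positive when c lies
    to the left of the directed line a->b. Rounding in the subtractions
    and products can flip the sign when the true value is tiny."""
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

def orient2d_exact(ax, ay, bx, by, cx, cy):
    """Exact reference via rationals (doubles convert to Fraction
    exactly); returns -1, 0, or +1."""
    d = (Fraction(bx) - Fraction(ax)) * (Fraction(cy) - Fraction(ay)) \
      - (Fraction(by) - Fraction(ay)) * (Fraction(cx) - Fraction(ax))
    return (d > 0) - (d < 0)
```

An adaptive predicate first runs the cheap version with an error bound, and falls back to exact arithmetic only when the computed value is smaller than the bound.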
The Accuracy of Floating Point Summation
, 1993
Abstract

Cited by 40 (0 self)
The usual recursive summation technique is just one of several ways of computing the sum of n floating point numbers. Five summation methods and their variations are analyzed here. The accuracy of the methods is compared using rounding error analysis and numerical experiments. Four of the methods are shown to be special cases of a general class of methods, and an error analysis is given for this class. No one method is uniformly more accurate than the others, but some guidelines are given on the choice of method in particular cases.
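Two of the methods compared in analyses of this kind can be sketched as follows: plain recursive summation discards each rounding error, while Kahan's compensated summation feeds an estimate of it back into the next addition (a sketch, assuming IEEE doubles):

```python
def recursive_sum(xs):
    """The usual left-to-right summation; the worst-case error bound
    grows linearly with the number of terms."""
    s = 0.0
    for x in xs:
        s += x
    return s

def kahan_sum(xs):
    """Kahan's compensated summation: c estimates the rounding error of
    the previous addition and is subtracted from the next term."""
    s, c = 0.0, 0.0
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y
        s = t
    return s
```

As the abstract warns, neither is uniformly best: compensated summation has a much smaller error bound, but costs more operations per term.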
Algorithms for Quad-Double Precision Floating Point Arithmetic
 Proceedings of the 15th Symposium on Computer Arithmetic
, 2001
Abstract

Cited by 36 (9 self)
A quad-double number is an unevaluated sum of four IEEE double precision numbers, capable of representing at least 212 bits of significand. We present the algorithms for various arithmetic operations (including the four basic operations and various algebraic and transcendental operations) on quad-double numbers. The performance of the algorithms, implemented in C++, is also presented.
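The "unevaluated sum" representation is easiest to see in the two-component (double-double) case, which the quad-double algorithms extend to four components. A simplified sketch of double-double addition (this is the cheap "sloppy" variant for illustration; the paper's algorithms handle more cases and carry four limbs):

```python
def two_sum(a, b):
    # error-free transformation: a + b = s + e exactly
    s = a + b
    bv = s - a
    e = (a - (s - bv)) + (b - bv)
    return s, e

def dd_add(x, y):
    """Add two double-double values given as (hi, lo) pairs, each an
    unevaluated sum of two IEEE doubles; returns a renormalized pair."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    hi = s + e               # renormalize with Fast2Sum: |s| >= |e|
    lo = e - (hi - s)
    return hi, lo
```

Each extra component roughly doubles the available precision, which is how four doubles reach the 212 significand bits quoted above.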