Results 1–10 of 29
Static analysis yields efficient exact integer arithmetic for computational geometry
In ACM Conference on Computational Geometry, 1996
Complexity of Bézout's theorem V: Polynomial time
Theoretical Computer Science, 1994
Cited by 52 (5 self)
Abstract: "... this paper is to show that the problem of finding approximately a zero of a polynomial system of equations can be solved in polynomial time, on the average. The number of arithmetic operations is bounded by cN ..."
Algorithms for Quad-Double Precision Floating-Point Arithmetic
Proceedings of the 15th Symposium on Computer Arithmetic, 2001
Cited by 37 (9 self)
Abstract: A quad-double number is an unevaluated sum of four IEEE double-precision numbers, capable of representing at least 212 bits of significand. We present the algorithms for various arithmetic operations (including the four basic operations and various algebraic and transcendental operations) on quad-double numbers. The performance of the algorithms, implemented in C++, is also presented.
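The quad-double representation described above is built from error-free transformations of ordinary double-precision operations. As an illustrative sketch (not the paper's code), Knuth's two-sum computes both the rounded sum of two doubles and its exact rounding error using only standard IEEE arithmetic:

```python
def two_sum(a: float, b: float):
    """Error-free transformation (Knuth): return (s, e) such that
    s == fl(a + b) and a + b == s + e exactly, for IEEE doubles."""
    s = a + b
    bv = s - a               # recovered contribution of b
    av = s - bv              # recovered contribution of a
    e = (a - av) + (b - bv)  # exact rounding error of the addition
    return s, e

# The 1.0 falls below the last bit of the rounded sum 1e16,
# yet two_sum recovers it exactly in the error term.
s, e = two_sum(1e16, 1.0)
```

Chaining such (s, e) pairs through a length-four expansion is the basic mechanism behind double-double and quad-double arithmetic.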
Robust Plane Sweep for Intersecting Segments
1997
Cited by 24 (2 self)
Abstract: In this paper, we re-examine in the framework of robust computation the Bentley–Ottmann algorithm for reporting intersecting pairs of segments in the plane. This algorithm has been reported as being very sensitive to numerical errors. Indeed, a simple analysis reveals that it involves predicates of degree 5, presumably never evaluated exactly in most implementations. Within the exact-computation paradigm we introduce two models of computation aimed at replacing the conventional model of real-number arithmetic. The first model (predicate arithmetic) assumes the exact evaluation of the signs of algebraic expressions of some degree, and the second model (exact arithmetic) assumes the exact computation of the value of ...
Efficient Algorithms for Line and Curve Segment Intersection Using Restricted Predicates
1999
Cited by 16 (3 self)
Abstract: We consider whether restricted sets of geometric predicates support efficient algorithms to solve line and curve segment intersection problems in the plane. Our restrictions are based on the notion of algebraic degree, proposed by Preparata and others as a way to guide the search for efficient algorithms that can be implemented in more realistic computational models than the Real RAM.
A distillation algorithm for floating-point summation
SIAM J. Sci. Comput., 1999
Cited by 10 (0 self)
Abstract: The addition of two or more floating-point numbers is fundamental to numerical computations. This paper describes an efficient "distillation"-style algorithm which produces a precise sum by exploiting the natural accuracy of compensated cancellation. The algorithm is applicable to all sets of data but is particularly appropriate for ill-conditioned data, where standard methods fail due to the accumulation of rounding error and its subsequent exposure by cancellation. The method uses only standard floating-point arithmetic and does not rely on the radix used by the arithmetic model, the architecture of specific machines, or the use of accumulators.
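The compensated-cancellation idea can be sketched with a toy distillation built on the two-sum error-free transformation; this is an assumption-laden illustration of the general technique, not the algorithm from the paper:

```python
def two_sum(a, b):
    # Error-free transformation: a + b == s + e exactly (IEEE doubles).
    s = a + b
    bv = s - a
    av = s - bv
    return s, (a - av) + (b - bv)

def distill(xs, passes=4):
    """Toy distillation: each pass rewrites xs to a list with the same
    exact sum, pushing the large mass toward the last element, so the
    naive sum of the distilled list becomes far more accurate."""
    xs = list(xs)
    for _ in range(passes):
        for i in range(len(xs) - 1):
            s, e = two_sum(xs[i], xs[i + 1])
            xs[i], xs[i + 1] = e, s
    return sum(xs)
```

On the ill-conditioned input `[1e100, 1.0, -1e100, -1.0]` naive left-to-right summation returns -1.0, while the distilled sum recovers the exact result 0.0.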
Quad-Double Arithmetic: Algorithms, Implementation, and Application
2000
Cited by 8 (1 self)
Abstract: A quad-double number is an unevaluated sum of four IEEE double-precision numbers, capable of representing at least 212 bits of significand. Algorithms for various arithmetic operations (including the four basic operations and various algebraic and transcendental operations) are presented. A C++ implementation of these algorithms is also described, as well as an application of this quad-double library.
Automatic Generation of Staged Geometric Predicates
2002
Cited by 8 (0 self)
Abstract: Algorithms in Computational Geometry and Computer-Aided Design are often developed for the Real RAM model of computation, which assumes exactness of all the input arguments and operations. In practice, however, the exactness imposes tremendous limitations on the algorithms – even the basic operations become uncomputable, or prohibitively slow. In some important cases, however, the computations of interest are limited to determining the sign of polynomial expressions. In such circumstances, a faster approach is available: one can evaluate the polynomial in floating point first, together with some estimate of the rounding error, and fall back to exact arithmetic only if this error is too big to determine the sign reliably. A particularly efficient variation on this approach has been used by Shewchuk in his robust implementations of the Orient and InSphere geometric predicates. We extend Shewchuk's method to arbitrary polynomial expressions. The expressions are given as programs in a suitable source language featuring the basic arithmetic operations of addition, subtraction, multiplication, and squaring, which are to be perceived by the programmer as exact. The source language also allows for anonymous ...
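A minimal sketch of this staged evaluation for the 2D orientation predicate, assuming a deliberately coarse error bound (not Shewchuk's carefully derived constants) and using Python's exact rationals as the fallback stage:

```python
from fractions import Fraction

def orient2d(ax, ay, bx, by, cx, cy):
    """Sign of the orientation determinant of points A, B, C.
    Stage 1: floating point, certified by a coarse error filter.
    Stage 2: exact rational arithmetic, entered only when the
    filter cannot certify the sign. The bound 4 * 2**-52 is a
    loose, illustrative constant, chosen larger than the tight
    bound so the fast path never reports a wrong sign."""
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    detsum = abs((bx - ax) * (cy - ay)) + abs((by - ay) * (cx - ax))
    errbound = 4 * 2.0**-52 * detsum
    if det > errbound:
        return 1
    if -det > errbound:
        return -1
    # Fallback: Fraction converts each double exactly, so this stage
    # evaluates the determinant with no rounding at all.
    exact = ((Fraction(bx) - Fraction(ax)) * (Fraction(cy) - Fraction(ay))
             - (Fraction(by) - Fraction(ay)) * (Fraction(cx) - Fraction(ax)))
    return (exact > 0) - (exact < 0)
```

Well-separated inputs take only the fast stage; collinear or nearly collinear inputs fall through to the exact stage.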
Ultimately Fast Accurate Summation
2009
Cited by 6 (0 self)
Abstract: We present two new algorithms, FastAccSum and FastPrecSum, one to compute a faithful rounding of the sum of floating-point numbers and the other for a result "as if" computed in K-fold precision. Faithful rounding means the computed result either is one of the immediate floating-point neighbors of the exact result or is equal to the exact sum if this is a floating-point number. The algorithms are based on our previous algorithms AccSum and PrecSum and improve them by up to 25%. The first algorithm adapts to the condition number of the sum; i.e., the computing time is proportional to the difficulty of the problem. The second algorithm does not need extra memory, and the computing time depends only on the number of summands and K. Both algorithms are the fastest known in terms of flops. They allow good instruction-level parallelism so that they are also fast in terms of measured computing time. The algorithms require only standard floating-point addition, subtraction, and multiplication in one working precision, for example, double precision.
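The specification targeted here (a faithful, indeed correctly rounded, sum) can be observed with Python's standard-library `math.fsum`; this illustrates what the algorithms compute, not how FastAccSum computes it:

```python
import math

data = [1e16, 1.0, 1.0, 1.0, -1e16]  # exact sum is 3.0

naive = sum(data)        # each 1.0 is absorbed below the last bit of 1e16
exact = math.fsum(data)  # correctly rounded sum of the exact real value
```

Naive left-to-right summation loses all three small summands and returns 0.0, while `math.fsum` returns the exact sum 3.0.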