Results 1-10 of 11
The Exact Computation Paradigm
, 1994
Abstract

Cited by 95 (10 self)
We describe a paradigm for numerical computing, based on exact computation. This emerging paradigm has many advantages compared to the standard paradigm, which is based on fixed precision. We first survey the literature on multiprecision number packages, a prerequisite for exact computation. Next we survey some recent applications of this paradigm. Finally, we outline some basic theory and techniques in this paradigm. (This paper will appear as a chapter in the 2nd edition of Computing in Euclidean Geometry, edited by D.Z. Du and F.K. Hwang, published by World Scientific Press, 1994.) 1 Two Numerical Computing Paradigms. Computation has always been intimately associated with numbers: computability theory was formulated early on as a theory of computable numbers, the first computers were number crunchers, and the original mass-produced computers were pocket calculators. Although one's first exposure to computers today is likely to be some non-numerical application, numeri...
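The contrast between the two paradigms is easy to demonstrate. The following Python sketch (not from the paper) shows a fixed-precision identity failing while the same computation in exact rational arithmetic succeeds:

```python
from fractions import Fraction

# Fixed precision: 0.1, 0.2 and 0.3 have no exact binary floating-point
# representation, so an algebraically true identity can fail.
print(0.1 + 0.2 == 0.3)                                   # False

# Exact computation: rational arithmetic introduces no rounding error,
# so the same identity holds.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
```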
Motivations for an arbitrary precision interval arithmetic and the MPFI library
 Reliable Computing
, 2002
Abstract

Cited by 29 (7 self)
Nowadays, computations involve more and more operations, and consequently more errors. The limits of applicability of some numerical algorithms are now being reached: for instance, the theoretical stability of a dense matrix factorization (LU or QR) is ensured under the assumption that n³u < 1, where n is the dimension of the matrix and u = (1+) − 1, with 1+ the smallest floating-point number larger than 1; this means that n must be less than 200,000, which is almost reached by modern simulations. The numerical quality of solvers is now an issue, and not only their mathematical quality. Let us cite studies performed by the CEA (French Nuclear Agency) on the simulation of nuclear plant accidents, and also software that controls and possibly corrects numerical programs, such as Cadna [10] or Cena [20]. Another approach consists in computing with certified enclosures, namely interval arithmetic [21, 2, 18]. The fundamental principle of this arithmetic consists in replacing every number by an interval enclosing it. For instance, π cannot be exactly represented using a binary or decimal arithmetic, but it ...
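The fundamental principle described in the abstract, replacing every number by an enclosing interval whose lower bound is rounded down and whose upper bound is rounded up, can be sketched in a few lines of Python. This is a toy model for illustration only, not MPFI's API:

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Outward rounding: step each bound to the adjacent float so the
        # true (real-number) sum is guaranteed to stay enclosed.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

# pi has no exact binary representation, but it lies strictly between
# the double just below math.pi and the double just above it.
PI = Interval(math.nextafter(math.pi, -math.inf),
              math.nextafter(math.pi, math.inf))
two_pi = PI + PI   # an interval guaranteed to contain 2*pi
```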
ESOLID: a system for exact boundary evaluation
 Computer-Aided Design
, 2002
Abstract

Cited by 23 (2 self)
We present a system, ESOLID, that performs exact boundary evaluation of low-degree curved solids in reasonable amounts of time. ESOLID performs accurate Boolean operations using exact representations and exact computations throughout. The demands of exact computation require a different set of algorithms and efficiency improvements than those found in a traditional inexact floating-point-based modeler. We describe the system architecture, representations, and issues in implementing the algorithms. We also describe a number of techniques that increase the efficiency of the system, based on lazy evaluation, use of floating-point filters, arbitrary-precision floating-point arithmetic with error bounds, and lower-dimensional formulation of subproblems. ESOLID has been used for boundary evaluation of many complex solids. These include both synthetic datasets and parts of a Bradley Fighting Vehicle designed using the BRL-CAD solid modeling system. It is shown that ESOLID can correctly evaluate the boundary of solids that are very hard to compute using a fixed-precision floating-point modeler. In terms of performance, it is about an order of magnitude slower than a floating-point boundary evaluation system in most cases.
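One of the efficiency techniques the abstract mentions, floating-point filters, admits a compact illustration. The sketch below is a simplified, hypothetical filter, not ESOLID code: it evaluates the sign of a 2x2 determinant in doubles, trusts that sign only when the magnitude clearly exceeds a rounding-error bound, and otherwise falls back to exact rational arithmetic:

```python
from fractions import Fraction

def det2_sign_filtered(ax, ay, bx, by):
    """Sign of ax*by - ay*bx: +1, -1, or 0.
    Fast path: floating point with a crude (deliberately generous)
    forward error bound. Slow path: exact rationals."""
    d = ax * by - ay * bx
    # Each product carries relative error <= u and the subtraction one
    # more, so 4u * (|ax*by| + |ay*bx|) safely overestimates the error.
    u = 2.0 ** -53
    bound = 4 * u * (abs(ax * by) + abs(ay * bx))
    if abs(d) > bound:
        return (d > 0) - (d < 0)          # float sign is provably correct
    e = Fraction(ax) * Fraction(by) - Fraction(ay) * Fraction(bx)
    return (e > 0) - (e < 0)              # exact fallback
```

On well-separated inputs the exact fallback is never taken, which is the source of the speedup the abstract alludes to.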
PRECISE: Efficient multiprecision evaluation of algebraic roots and predicates for reliable geometric computation
, 2000
Abstract

Cited by 12 (3 self)
Many geometric problems, like generalized Voronoi diagrams, medial axis computations, and boundary evaluation, involve computation and manipulation of nonlinear algebraic primitives like curves and surfaces. The algorithms designed for these problems make decisions based on the signs of geometric predicates or on the roots of polynomials characterizing the problem. The reliability of the algorithm depends on the accurate evaluation of these signs and roots. In this paper, we present a naive precision-driven computational model to perform these computations reliably and demonstrate its effectiveness on a certain class of problems, such as the sign of determinants with rational entries, boundary evaluation, and curve arrangements. We also present a novel algorithm to compute all the roots of a univariate polynomial to any desired accuracy. The computational model, along with the underlying number representation, precision-driven arithmetic, and all the algorithms, is implemented as part of a standalone software library, PRECISE.
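The idea of refining a root to any desired accuracy can be illustrated with exact bisection over rationals. This is a deliberately naive sketch, not the paper's algorithm, and it assumes a sign change has already been isolated:

```python
from fractions import Fraction

def refine_root(p, lo, hi, eps):
    """Shrink [lo, hi], on which p changes sign, to width <= eps.
    All arithmetic is exact, so the enclosure is rigorous."""
    assert p(lo) * p(hi) < 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        v = p(mid)
        if v == 0:
            return mid, mid
        if p(lo) * v < 0:
            hi = mid
        else:
            lo = mid
    return lo, hi

p = lambda x: x * x - 2                    # the positive root is sqrt(2)
lo, hi = refine_root(p, Fraction(1), Fraction(2), Fraction(1, 10**12))
```

Each iteration halves the interval, so reaching accuracy eps costs about log2((hi - lo) / eps) exact polynomial evaluations.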
Multiple Precision Interval Packages: Comparing Different Approaches
, 2003
Abstract

Cited by 8 (0 self)
We give a survey of packages for multiple precision interval arithmetic, with the main focus on three specific packages. One, intpakX, runs within a Maple environment, and two are C/C++ libraries, GMP-XSC and MPFI. We discuss their different features, present timing results, and show several applications from various fields where high precision intervals are fundamental.
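None of the three surveyed packages is needed to see the underlying mechanism. Python's standard decimal module already combines arbitrary precision with directed rounding, which is enough to sketch a multiple precision interval addition (illustrative only, unrelated to the surveyed libraries):

```python
from decimal import Decimal, getcontext, ROUND_FLOOR, ROUND_CEILING

def interval_add(a, b, prec):
    """Add intervals a = (lo, hi) and b = (lo, hi) at `prec` significant
    digits, rounding the lower bound down and the upper bound up."""
    ctx = getcontext().copy()
    ctx.prec = prec
    ctx.rounding = ROUND_FLOOR
    lo = ctx.add(a[0], b[0])
    ctx.rounding = ROUND_CEILING
    hi = ctx.add(a[1], b[1])
    return lo, hi

third = (Decimal("0.333"), Decimal("0.334"))   # crude enclosure of 1/3
lo, hi = interval_add(third, third, prec=4)    # encloses 2/3
```

Raising `prec` tightens the enclosure; the directed rounding modes guarantee it is never violated.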
Rigorous and portable standard functions
 BIT
Abstract

Cited by 3 (0 self)
Abstract. Today's floating-point implementations of elementary transcendental functions are usually very accurate. However, with few exceptions, the actual accuracy is not known. In the present paper we describe a rigorous, accurate, fast, and portable implementation of the elementary standard functions, based on some existing approximate standard functions. The scheme is outlined for IEEE 754, but is not difficult to adapt to other floating-point formats. A Matlab implementation is available on the net. The accuracy of the proposed algorithms can be rigorously estimated. As an example, we prove that the relative accuracy of the exponential function is better than 2.07 eps in a slightly reduced argument range (eps denoting the relative rounding error unit). Otherwise, extensive computational tests suggest, for all elementary functions and all suitable arguments, an accuracy better than about 3 eps. 1. A general approach for rigorous standard functions. Today's libraries for the approximation of elementary transcendental functions are very fast, and the results are mostly of very high accuracy. For a good introduction and summary of state-of-the-art methods cf. [19]. The achieved accuracy does not exceed one or two ulp for almost all input arguments; however, there is no proof of that. Today, computers are more and more used for so-called computer-assisted proofs, where assumptions of mathematical theorems are verified on the computer in order to draw anticipated conclusions. Famous examples are the celebrated Kepler conjecture [9], the enclosure of the Feigenbaum constant [8], bounds for ...
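The paper's approach, rigorous bounds layered over fast approximate functions, is far more refined than the following, but the basic idea of an enclosure backed by a proven remainder bound can be sketched for exp on [-1, 1]. The bound e <= 3 and the Lagrange remainder are the only mathematical facts assumed:

```python
from fractions import Fraction

def exp_enclosure(x, n):
    """Enclose exp(x) for |x| <= 1 by the degree-n Taylor sum plus a
    rigorous Lagrange remainder bound |R| <= 3 * |x|**(n+1) / (n+1)!."""
    assert abs(x) <= 1
    s, term = Fraction(0), Fraction(1)
    for k in range(n + 1):
        s += term
        term = term * x / (k + 1)   # term is now x**(k+1) / (k+1)!
    rem = 3 * abs(term)             # e**xi <= e <= 3 on [-1, 1]
    return s - rem, s + rem

lo, hi = exp_enclosure(Fraction(1), 10)   # a rigorous enclosure of e
```

Unlike a proof of a fixed relative error such as the paper's 2.07 eps bound, this enclosure simply widens with the remainder term, but it is rigorous at every n.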
Using Commutativity Properties for Controlling Coercions
Abstract

Cited by 2 (1 self)
This paper investigates some soundness conditions which have to be fulfilled in systems with coercions and generic operators. A result of Reynolds on unrestricted generic operators is extended to generic operators which obey certain constraints. We get natural conditions for such operators, which are expressed within the theoretic framework of category theory. However, in the context of computer algebra, there arise examples of coercions and generic operators which do not fulfil these conditions. We describe a framework, relaxing the above conditions, that allows distinguishing between cases of ambiguities which can be resolved in a quite natural sense and those which cannot. An algorithm is presented that detects such unresolvable ambiguities in expressions. 1 Introduction. Reynolds [10] uses category theory to investigate the problems of the interaction of coercions (implicit conversions) and generic operators (also called overloaded operators). He concludes with ...
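The ambiguity question can be made concrete with a toy model (entirely hypothetical; the paper works in category-theoretic generality): coercions as composable functions on a small type graph, where a diagram "commutes" when every coercion path from one type to another yields the same result:

```python
from fractions import Fraction

# Hypothetical coercion graph: each edge is a conversion function.
coercions = {
    ("int", "Fraction"): lambda n: Fraction(n),
    ("int", "float"): float,
    ("Fraction", "float"): float,
}

def paths(src, dst, seen=()):
    """Enumerate every coercion path (a list of functions) src -> dst."""
    if src == dst:
        yield []
        return
    for (a, b), f in coercions.items():
        if a == src and b not in seen:
            for rest in paths(b, dst, seen + (b,)):
                yield [f] + rest

def commutes(src, dst, samples):
    """True iff all coercion paths agree on the samples, i.e. the
    diagram commutes and no unresolvable ambiguity arises."""
    outcomes = []
    for path in paths(src, dst):
        values = []
        for s in samples:
            for f in path:
                s = f(s)
            values.append(s)
        outcomes.append(values)
    return all(v == outcomes[0] for v in outcomes) if outcomes else True
```

Here the two paths int -> float and int -> Fraction -> float agree, so the diagram commutes; a conversion that lost information would make `commutes` return False, flagging the ambiguity.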
Composite Arithmetic - A: Storage Form
Abstract
This paper is in the form of a Draft Standard, patterned somewhat after the IEEE Standard for Binary Floating-Point Arithmetic [17]. The proposals of this paper are based on a previously published paper [15], which canvasses the need for, and the possibilities of, a standard for general-purpose arithmetic. The proposals of this paper relate to one of the representational forms to be used in the proposed arithmetic, the storage form. The other two representational forms, the display form and the register form, are proposed in companion papers, which should be read in conjunction with this paper.
Reliable Geometric Computations With Algebraic Primitives And Predicates
Abstract
... this paper. Exact Computation as a Practical Approach. There is no question that EGC is slower than computation relying solely on machine-precision arithmetic. The question is whether the slowdown is worth the gain in precision. Indeed, in many scientific or engineering applications the input data is inexact, and the question arises whether an exact result is even meaningful. But the main reason for using EGC is not exactness in itself, but rather reliability. A common cause of program failure is that rounding errors lead to inconsistent combinatorial decisions, e.g. about where a point lies with regard to a surface. By making a single interpretation of the data and performing calculations that are consistent with that interpretation, we can avoid this source of failure. Solving the problems of accuracy and consistency is the first step towards a general solution to the robustness problem, which also involves handling degeneracies and special cases.
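The kind of inconsistent combinatorial decision described here typically arises in sign-of-determinant predicates such as 2D orientation. A minimal exact version in Python (illustrative, not from the paper) makes a single consistent decision by evaluating the determinant in rational arithmetic:

```python
from fractions import Fraction

def orient(p, q, r):
    """Orientation of the point triple (p, q, r), coordinates given as
    exact decimal strings: +1 left turn, -1 right turn, 0 collinear.
    The sign of the determinant of (q - p, r - p) is computed exactly,
    so repeated queries can never contradict one another."""
    (px, py), (qx, qy), (rx, ry) = (tuple(map(Fraction, t)) for t in (p, q, r))
    d = (qx - px) * (ry - py) - (qy - py) * (rx - px)
    return (d > 0) - (d < 0)
```

A fixed-precision version of the same formula can return different signs for geometrically equivalent queries near collinearity, which is exactly the inconsistency that breaks combinatorial algorithms.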