Results 1–10 of 14
The Exact Computation Paradigm
, 1994
"... We describe a paradigm for numerical computing, based on exact computation. This emerging paradigm has many advantages compared to the standard paradigm which is based on fixedprecision. We first survey the literature on multiprecision number packages, a prerequisite for exact computation. Next ..."
Abstract

Cited by 93 (10 self)
We describe a paradigm for numerical computing, based on exact computation. This emerging paradigm has many advantages compared to the standard paradigm, which is based on fixed precision. We first survey the literature on multiprecision number packages, a prerequisite for exact computation. Next we survey some recent applications of this paradigm. Finally, we outline some basic theory and techniques in this paradigm. This paper will appear as a chapter in the 2nd edition of Computing in Euclidean Geometry, edited by D.Z. Du and F.K. Hwang, published by World Scientific Press, 1994. 1 Two Numerical Computing Paradigms Computation has always been intimately associated with numbers: computability theory was early on formulated as a theory of computable numbers, the first computers were number crunchers, and the original mass-produced computers were pocket calculators. Although one's first exposure to computers today is likely to be some nonnumerical application, numeri...
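The contrast between the two paradigms can be shown in a tiny sketch (an illustration, not from the chapter): a sign test that fixed-precision floating point decides incorrectly is error free with exact rational arithmetic, here with Python's `fractions` standing in for a multiprecision number package.

```python
from fractions import Fraction

# Fixed precision: 0.1 has no exact binary representation, so the
# sign of 3*0.1 - 0.3 comes out nonzero.
fixed = 3 * 0.1 - 0.3
print(fixed == 0)            # False

# Exact computation: rationals carry no rounding error, so the
# same test is decided correctly.
exact = 3 * Fraction(1, 10) - Fraction(3, 10)
print(exact == 0)            # True
```

Exact geometric predicates (orientation tests, in-circle tests) rely on exactly this kind of error-free sign evaluation.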
Numerical Evaluation of Special Functions
 In W. Gautschi (Ed.), AMS Proceedings of Symposia in Applied Mathematics 48
, 1994
"... . This document is an excerpt from the current hypertext version of an article that appeared in Walter Gautschi (ed.), Mathematics of Computation 19431993: A HalfCentury of Computational Mathematics, Proceedings of Symposia in Applied Mathematics 48, American Mathematical Society, Providence, ..."
Abstract

Cited by 21 (0 self)
This document is an excerpt from the current hypertext version of an article that appeared in Walter Gautschi (ed.), Mathematics of Computation 1943–1993: A Half-Century of Computational Mathematics, Proceedings of Symposia in Applied Mathematics 48, American Mathematical Society, Providence, RI 02940, 1994. The symposium was held at the University of British Columbia, August 9–13, 1993, in honor of the fiftieth anniversary of the journal Mathematics of Computation. The original abstract follows. Higher transcendental functions continue to play varied and important roles in investigations by engineers, mathematicians, scientists and statisticians. The purpose of this paper is to assist in locating useful approximations and software for the numerical generation of these functions, and to offer some suggestions for future developments in this field. 5.9. Mathieu, Lamé, and Spheroidal Wave Functions. 5.9.1. Characteristic Values of Mathieu's Equation. Software Packages:...
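As a toy illustration of the "numerical generation" of a higher transcendental function (not taken from the article), the error function can be generated from its Taylor series; the hypothetical `erf_series` below can be checked against a library value:

```python
import math

def erf_series(x, tol=1e-17):
    """erf(x) = 2/sqrt(pi) * sum_{n>=0} (-1)^n x^(2n+1) / (n! (2n+1)),
    summed until the next term is negligible (adequate for moderate |x|)."""
    term = x           # (-1)^n x^(2n+1) / n!  for n = 0
    total = x          # running sum of term / (2n+1)
    n = 0
    while abs(term) > tol:
        n += 1
        term *= -x * x / n
        total += term / (2 * n + 1)
    return 2.0 / math.sqrt(math.pi) * total

print(erf_series(0.5), math.erf(0.5))
```

Real special-function software replaces such naive series by carefully chosen approximations per argument range, which is exactly the kind of resource the article helps locate.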
Variable-Precision, Interval Arithmetic Processors
"... This chapter presents the design and analysis of variableprecision, interval arithmetic processors. The processors give the user the ability to specify the precision of the computation, determine the accuracy of the results, and recompute inaccurate results with higher precision. The processors sup ..."
Abstract

Cited by 12 (1 self)
This chapter presents the design and analysis of variable-precision, interval arithmetic processors. The processors give the user the ability to specify the precision of the computation, determine the accuracy of the results, and recompute inaccurate results with higher precision. The processors support a wide variety of arithmetic operations on variable-precision floating point numbers and intervals. Efficient hardware algorithms and specially designed functional units increase the speed, accuracy, and reliability of numerical computations. Area and delay estimates indicate that the processors can be implemented with areas and cycle times that are comparable to conventional IEEE double-precision floating point coprocessors. Execution time estimates indicate that the processors are two to three orders of magnitude faster than a conventional software package for variable-precision, interval arithmetic. 1.1 INTRODUCTION Floating point arithmetic provides a high-speed method for perform...
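The specify-precision / check-accuracy / recompute loop is easy to mimic in software (a sketch only; the chapter describes hardware). Python's `decimal` module offers user-selectable precision and directed rounding, so an enclosure of 1/3 can be recomputed with more digits whenever its width is unacceptable:

```python
from decimal import Decimal, localcontext, ROUND_FLOOR, ROUND_CEILING

def one_third_interval(prec):
    """Enclose 1/3 in an interval at the requested precision,
    rounding the lower bound down and the upper bound up."""
    with localcontext() as ctx:
        ctx.prec = prec
        ctx.rounding = ROUND_FLOOR
        lo = Decimal(1) / Decimal(3)
        ctx.rounding = ROUND_CEILING
        hi = Decimal(1) / Decimal(3)
    return lo, hi

lo, hi = one_third_interval(10)
lo2, hi2 = one_third_interval(30)
# Recomputing with more digits yields a strictly narrower enclosure.
print(hi - lo, hi2 - lo2)
```

The interval width plays the role of the accuracy check: if it is too wide for the application, the same expression is simply re-evaluated at higher precision.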
Software Needs in Special Functions
 J. Comput. Appl. Math
, 1996
"... . Currently available software for special functions exhibits gaps and defects in comparison to the needs of modern highperformance scientific computing and also, surprisingly, in comparison to what could be constructed from current algorithms. In this paper we expose some of these deficiencies and ..."
Abstract

Cited by 7 (1 self)
Currently available software for special functions exhibits gaps and defects in comparison to the needs of modern high-performance scientific computing and also, surprisingly, in comparison to what could be constructed from current algorithms. In this paper we expose some of these deficiencies and identify the related need for user-oriented testing software. 1. Introduction A recent article by Lozier and Olver [21] provides a survey of algorithms and software for the numerical evaluation of special functions. Its emphasis is on the generation of function values, although selected resources for zeros and integrals are included also. Journals, books, conference proceedings, and software documents were examined and a bibliography of nearly 500 references was constructed. Based on this investigation, the functions were classified and cross-referenced to bibliographic entries and to specific software libraries and systems. The bibliography was prepared using the authors' professional...
Multiplications of Floating Point Expansions
 IN PROCEEDINGS OF THE 14TH SYMPOSIUM ON COMPUTER ARITHMETIC, I. KOREN AND P. KORNERUP (EDS
, 1999
"... In modern computers, the floating point unit is the part of the processor delivering the highest computing power and getting most attention from the design team. Performance of any multiple precision application will be dramatically enhanced by adequate use of floating point expansions. We present i ..."
Abstract

Cited by 5 (1 self)
In modern computers, the floating point unit is the part of the processor delivering the highest computing power and getting the most attention from the design team. The performance of any multiple precision application will be dramatically enhanced by adequate use of floating point expansions. In this work we present three multiplication algorithms that are faster and more integrated than the stepwise algorithm proposed earlier. We have tested these new algorithms on an application that computes the determinant of a matrix. In the absence of overflow or underflow, the process is error free and possibly more efficient than its integer-based counterpart.
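The building block behind floating point expansions is the error-free transformation of a product: `two_prod` returns the rounded product plus an exact error term, so a quantity like a 2×2 determinant becomes a small expansion whose correctly rounded sum (here via `math.fsum`) gives the error-free sign. A sketch assuming IEEE double arithmetic, using Dekker's splitting rather than the paper's algorithms:

```python
import math

def two_prod(a, b):
    """Return (p, e) with p = fl(a*b) and p + e == a*b exactly
    (Dekker/Veltkamp splitting; an FMA would deliver e in one operation)."""
    p = a * b
    factor = 134217729.0             # 2**27 + 1 splits a double into halves
    c = factor * a
    ahi = c - (c - a); alo = a - ahi
    c = factor * b
    bhi = c - (c - b); blo = b - bhi
    e = ((ahi * bhi - p) + ahi * blo + alo * bhi) + alo * blo
    return p, e

def det2x2_sign_exact(a, b, c, d):
    """Exact sign of a*d - b*c from the four-term expansion."""
    p1, e1 = two_prod(a, d)
    p2, e2 = two_prod(b, c)
    s = math.fsum([p1, e1, -p2, -e2])   # fsum is correctly rounded
    return (s > 0) - (s < 0)

# Naive evaluation loses the sign; the expansion keeps it.
a, d, b, c = 2.0**27 + 1, 2.0**27 - 1, 2.0**27, 2.0**27
print(a * d - b * c)                  # 0.0 -- wrong
print(det2x2_sign_exact(a, b, c, d))  # -1 -- a*d - b*c is exactly -1
```

Chaining such transformations is what turns a sequence of doubles into an expansion; the paper's contribution is faster, more integrated ways to multiply those expansions.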
Solving Triangular Systems More Accurately and Efficiently
, 2005
"... We present a new algorithm that solves linear triangular systems accurately and efficiently. By accurately, we mean that this algorithm should yield a solution as accurate as the one computed in twice the working precision. By efficiently, we mean that its implementation should run faster than the c ..."
Abstract

Cited by 4 (0 self)
We present a new algorithm that solves linear triangular systems accurately and efficiently. By accurately, we mean that this algorithm should yield a solution as accurate as the one computed in twice the working precision. By efficiently, we mean that its implementation should run faster than the corresponding XBLAS routine with the same output accuracy.
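One classical route to "as accurate as twice the working precision" is a single step of iterative refinement with the residual formed in higher precision. The sketch below only illustrates that idea, it is not the paper's algorithm: exact rationals (`fractions`) stand in for the extended-precision residual.

```python
from fractions import Fraction

def back_substitute(U, b):
    """Plain floating-point back substitution for upper-triangular U x = b."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i]
        for j in range(i + 1, n):
            s -= U[i][j] * x[j]
        x[i] = s / U[i][i]
    return x

def refined_solve(U, b):
    """Solve, form the residual r = b - U x exactly, solve U d = r, correct."""
    n = len(b)
    x = back_substitute(U, b)
    r = [float(Fraction(b[i])
               - sum(Fraction(U[i][j]) * Fraction(x[j]) for j in range(i, n)))
         for i in range(n)]
    d = back_substitute(U, r)
    return [xi + di for xi, di in zip(x, d)]

U = [[2.0, 1.0], [0.0, 4.0]]
b = [8.0, 8.0]
print(refined_solve(U, b))    # [3.0, 2.0]
```

The paper's point is to obtain this doubled-precision accuracy while running faster than the corresponding XBLAS routine.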
Improving the Compensated Horner Scheme with a Fused Multiply and Add
, 2006
"... Several different techniques and softwares intend to improve the accuracy of results computed in a fixed finite precision. Here we focus on a method to improve the accuracy of the polynomial evaluation. It is well known that the use of the Fused Multiply and Add operation available on some microproc ..."
Abstract

Cited by 3 (1 self)
Several techniques and software packages aim to improve the accuracy of results computed in a fixed finite precision. Here we focus on a method to improve the accuracy of polynomial evaluation. It is well known that use of the Fused Multiply and Add operation available on some microprocessors, like the Intel Itanium, slightly improves the accuracy of the Horner scheme. In this paper, we propose an accurate compensated Horner scheme specially designed to take advantage of the Fused Multiply and Add. We prove that the computed result is as accurate as if computed in twice the working precision. The algorithm we present is fast since it only requires well-optimizable floating point operations, performed in the same working precision as the given data.
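A compensated Horner scheme evaluates the polynomial and its local rounding errors side by side. A portable sketch assuming IEEE doubles: since standard Python exposes no FMA, `two_prod` uses Dekker's splitting, whereas an FMA would obtain the product error as fma(a, b, -p) in a single operation, which is the saving the paper exploits.

```python
def two_sum(a, b):
    """s = fl(a+b) and its exact error e, so s + e == a + b (Knuth)."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def two_prod(a, b):
    """p = fl(a*b) and its exact error e (Dekker splitting)."""
    p = a * b
    f = 134217729.0                   # 2**27 + 1
    c = f * a; ahi = c - (c - a); alo = a - ahi
    c = f * b; bhi = c - (c - b); blo = b - bhi
    e = ((ahi * bhi - p) + ahi * blo + alo * bhi) + alo * blo
    return p, e

def comp_horner(coeffs, x):
    """Horner's rule plus a second Horner recursion on the local errors;
    the result is as accurate as if computed in twice the precision."""
    s = coeffs[0]
    c = 0.0
    for a in coeffs[1:]:
        p, pi = two_prod(s, x)
        s, sigma = two_sum(p, a)
        c = c * x + (pi + sigma)      # error polynomial, plain Horner
    return s + c

# (x - 1)^3 near x = 1: plain Horner cancels catastrophically.
coeffs = [1.0, -3.0, 3.0, -1.0]
x = 1.0 + 2.0**-26
print(comp_horner(coeffs, x))         # 2**-78, the exact value here
```

With hardware FMA, each `two_prod` collapses to one multiply and one fma, so the compensated scheme costs only a small constant factor over plain Horner.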
A multiple-precision division algorithm
 Math. Comp
, 1996
"... Abstract. The classical algorithm for multipleprecision division normalizes digits during each step and sometimes makes correction steps when the initial guess for the quotient digit turns out to be wrong. A method is presented that runs faster by skipping most of the intermediate normalization and ..."
Abstract

Cited by 3 (1 self)
Abstract. The classical algorithm for multiple-precision division normalizes digits during each step and sometimes makes correction steps when the initial guess for the quotient digit turns out to be wrong. A method is presented that runs faster by skipping most of the intermediate normalization and recovers from wrong guesses without separate correction steps.
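For reference, the classical algorithm the paper improves on (the shape of Knuth's Algorithm D) looks roughly like this sketch: normalize so the top divisor digit is large, guess each quotient digit from the two leading digits, and fall back to a correction (add-back) step when the guess is one too high. The base and digit handling here are illustrative choices, not the paper's.

```python
def digits(x, B):
    """Little-endian base-B digit list of a nonnegative int."""
    d = []
    while x:
        x, r = divmod(x, B)
        d.append(r)
    return d or [0]

def value(d, B):
    return sum(di * B**i for i, di in enumerate(d))

def mp_divmod(u_int, v_int, B=2**16):
    """Classical multiple-precision division with normalization and
    explicit correction steps (the scheme the paper streamlines)."""
    v_plain = digits(v_int, B)
    d = B // (v_plain[-1] + 1)         # D1: scale so v[-1] >= B // 2
    u = digits(u_int * d, B)
    v = digits(v_int * d, B)
    n = len(v)
    m = len(u) - n
    u.append(0)                         # room for a top borrow digit
    q = [0] * max(m + 1, 0)
    for j in range(m, -1, -1):
        # D3: two-digit guess of the quotient digit, then refine it.
        qhat, rhat = divmod(u[j + n] * B + u[j + n - 1], v[-1])
        while qhat >= B or (n > 1 and qhat * v[-2] > rhat * B + u[j + n - 2]):
            qhat -= 1
            rhat += v[-1]
            if rhat >= B:
                break
        qhat = min(qhat, B - 1)
        # D4: multiply and subtract qhat * v from the current window of u.
        borrow = 0
        for i in range(n):
            t = u[i + j] - qhat * v[i] + borrow
            borrow, u[i + j] = divmod(t, B)   # floor divmod keeps borrow <= 0
        t = u[j + n] + borrow
        if t < 0:
            # D6: guess was one too large -- correction step (add v back).
            qhat -= 1
            carry = 0
            for i in range(n):
                carry, u[i + j] = divmod(u[i + j] + v[i] + carry, B)
            t += carry
        u[j + n] = t
        q[j] = qhat
    return value(q, B), value(u[:n], B) // d

print(mp_divmod(123456789, 12345))   # same as divmod(123456789, 12345)
```

The per-step normalization and the occasional add-back in D6 are precisely the overheads the paper's method avoids.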
Operator dependant compensated algorithms
 In Proceedings of the 12th GAMM  IMACS  SCAN
, 2007
"... Compensated algorithms improve the accuracy of a result evaluating a correcting term that compensates the finite precision of the computation. The implementation core of compensated algorithms is the computation of the rounding errors generated by the floating point operators. We focus this operator ..."
Abstract

Cited by 2 (1 self)
Compensated algorithms improve the accuracy of a result by evaluating a correcting term that compensates for the finite precision of the computation. The implementation core of compensated algorithms is the computation of the rounding errors generated by the floating point operators. We focus on this operator dependency, discussing how to manage and benefit from floating point arithmetic implemented through a fused multiply and add operator. We consider the compensation of the dot product and of polynomial evaluation with the Horner iteration. In each case we provide theoretical a priori error bounds and numerical experiments to exhibit the best algorithmic choices with respect to accuracy or performance issues.
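As a concrete instance of a compensated algorithm built from these error-free transformations, here is a sketch of a compensated dot product (in the style of Ogita, Rump and Oishi's Dot2); Dekker's splitting again stands in for the product error that a fused multiply and add operator would deliver as fma(a, b, -p):

```python
def two_sum(a, b):
    """s = fl(a+b) and its exact error e (s + e == a + b)."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def two_prod(a, b):
    """p = fl(a*b) and its exact error e (Dekker splitting)."""
    p = a * b
    f = 134217729.0                   # 2**27 + 1
    c = f * a; ahi = c - (c - a); alo = a - ahi
    c = f * b; bhi = c - (c - b); blo = b - bhi
    e = ((ahi * bhi - p) + ahi * blo + alo * bhi) + alo * blo
    return p, e

def dot2(x, y):
    """Compensated dot product: the floating point result s plus the
    sum c of all local rounding errors, folded together at the end."""
    s, c = two_prod(x[0], y[0])
    for xi, yi in zip(x[1:], y[1:]):
        p, pi = two_prod(xi, yi)
        s, sigma = two_sum(s, p)
        c += pi + sigma
    return s + c

x = [2.0**27 + 1, 2.0**27]
y = [2.0**27 - 1, -2.0**27]
print(sum(a * b for a, b in zip(x, y)))   # 0.0 -- naive result
print(dot2(x, y))                          # -1.0 -- exact value here
```

Swapping Dekker's six-operation `two_prod` for a single fma is exactly the operator-dependent choice the paper analyzes.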
Applications of fast and accurate summation in computational geometry
, 2005
"... In this paper, we present a recent algorithm given by Ogita, Rump and Oishi [39] for accurately computing the sum of n floating point numbers. They also give a computational
error bound for the computed result. We apply this algorithm in computing determinant and more particularly in computing robus ..."
Abstract

Cited by 2 (0 self)
In this paper, we present a recent algorithm given by Ogita, Rump and Oishi [39] for accurately computing the sum of n floating point numbers. They also give a computational error bound for the computed result. We apply this algorithm to computing determinants and, more particularly, to computing the robust geometric predicates used in computational geometry. We improve on existing results that use either multiprecision libraries or extended large accumulators.
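The summation algorithm referred to (Ogita, Rump and Oishi's Sum2) cascades the two_sum error-free transformation and adds the accumulated errors back at the end; a minimal sketch, assuming IEEE doubles:

```python
def two_sum(a, b):
    """s = fl(a+b) together with the exact rounding error e (s + e == a + b)."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def sum2(p):
    """Compensated summation: as accurate as summing in twice the precision."""
    s = p[0]
    c = 0.0
    for x in p[1:]:
        s, e = two_sum(s, x)
        c += e
    return s + c

data = [1e16, 1.0, -1e16]
print(sum(data))    # 0.0  -- the 1.0 is lost in working precision
print(sum2(data))   # 1.0  -- recovered from the error term
```

Because it uses only ordinary floating point operations, such a summation competes well with the multiprecision libraries and long accumulators mentioned above.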