Results 1 – 4 of 4
On Properties of Floating Point Arithmetics: Numerical Stability and the Cost of Accurate Computations
, 1992
Abstract

Cited by 26 (0 self)
Floating point arithmetics generally possess many regularity properties in addition to those that are typically used in roundoff error analyses; these properties can be exploited to produce computations that are more accurate and cost effective than many programmers might think possible. Furthermore, many of these properties are quite simple to state and to comprehend, but few programmers seem to be aware of them (or at least willing to rely on them). This dissertation presents some of these properties and explores their consequences for computability, accuracy, cost, and portability. For example, we consider several algorithms for summing a sequence of numbers and show that under very general hypotheses, we can compute a sum to full working precision at only somewhat greater cost than a simple accumulation, which can often produce a sum with no significant figures at all. This example, as well as others we present, can be generalized further by substituting still more complex algorith...
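The summation claim above can be illustrated with compensated (Kahan) summation, one well-known technique of the kind this abstract alludes to. The dissertation's own algorithms are not reproduced here; the following Python sketch is an illustrative example only:

```python
def kahan_sum(xs):
    """Compensated (Kahan) summation: carry a running correction
    term so that low-order bits lost by each addition are fed
    back into the next one."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in xs:
        y = x - c
        t = total + y
        c = (t - total) - y  # recovers the part of y that was rounded away
        total = t
    return total

# One large term followed by small terms that naive accumulation drops:
xs = [1.0] + [1e-16] * 10
print(sum(xs))        # naive left-to-right sum: the small terms vanish -> 1.0
print(kahan_sum(xs))  # recovers the small terms (close to 1 + 1e-15)
```

Here the naive sum returns exactly 1.0 because each 1e-16 falls below half the spacing between adjacent doubles near 1.0, while the compensated sum accumulates the lost bits in `c` and folds them back in.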
Computational Complexity and Numerical Stability
 SIAM J. Comput.
, 1975
Abstract

Cited by 9 (0 self)
Limiting consideration to algorithms satisfying various numerical stability requirements may change lower bounds for computational complexity and/or make lower bounds easier to prove. We will show that, under a sufficiently strong restriction upon numerical stability, any algorithm for multiplying two n × n matrices using only +, −, and × requires at least n³ multiplications. We conclude with a survey of results concerning the numerical stability of several algorithms which have been considered by complexity theorists.
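For concreteness, the n³ multiplication count discussed in this abstract is exactly what the classical inner-product algorithm performs; the Python sketch below is an illustration of that baseline, not code from the paper:

```python
def matmul(A, B):
    """Classical matrix product of two n x n matrices. The triple
    loop performs exactly n**3 scalar multiplications, the count
    that the abstract's stability-restricted lower bound matches."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]  # one multiplication per (i, j, k)
            C[i][j] = s
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
print(matmul(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

Algorithms such as Strassen's use fewer multiplications but weaker normwise stability, which is why restricting to strong stability requirements pushes the bound back up to n³.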
Fast Floating-Point Processing in Common Lisp
 ACM Trans. on Math. Software
, 1995
Abstract

Cited by 5 (1 self)
In this paper we explore an approach which enables all of the problems listed above to be solved at a single stroke: use Lisp as the source language for the numeric and graphical code! This is not a new idea; it was tried at MIT and UCB in the 1970s. While these experiments were modestly successful, the particular systems are obsolete. Fortunately, some of the ideas used in Maclisp [37], NIL [38] and Franz Lisp [20] were incorporated in the subsequent standardization of Common Lisp (CL) [35]. In this new setting it is appropriate to reexamine the theoretical and practical implications of writing numeric code in Lisp. The popular conceptions of Lisp's inefficiency for numerics have been based on rumor, supposition, and experience with early and (in fact) inefficient implementations. It is certainly possible to continue to write inefficient programs: as one example of the results of de-emphasizing numerics in the design, consider the situation of the basic arithmetic operators. The definitions of these functions require that they are generic (e.g. "+" must be able to add any combination of several precisions of floats, arbitrary-precision integers, rational numbers, and complexes). The very simple way of implementing this arithmetic, by subroutine calls, is also very inefficient. Even with appropriate declarations to enable more specific treatment of numeric types, compilers are free to ignore declarations, and such implementations naturally do not accommodate the needs of intensive number-crunching. (See the appendix for further discussion of declarations.) Be this as it may, the situation with respect to Lisp has changed for the better in recent years. With the advent of ANSI standard Common Lisp, several active vendors of implementations and one active universi...
Symbolic Computation of Divided Differences
, 1999
Abstract

Cited by 3 (1 self)
Divided differences are enormously useful in developing stable and accurate numerical formulas. For example, programs to compute f(x) − f(y), as might occur in integration, can be notoriously inaccurate. Such problems can be cured by approaching these computations through divided difference formulations. This paper provides a guide to divided difference theory and practice, with a special eye toward the needs of computer algebra systems that should be programmed to deal with these often-messy formulas.
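The cancellation this abstract describes can be seen in a small example (an assumed illustration using exp, not one drawn from the paper): the first-order divided difference (exp(x) − exp(y))/(x − y) loses digits when computed directly for x ≈ y, but stays accurate when rewritten in terms of expm1:

```python
import math

def dd_exp_naive(x, y):
    # Direct divided difference of exp: the subtraction
    # exp(x) - exp(y) cancels catastrophically when x is near y.
    return (math.exp(x) - math.exp(y)) / (x - y)

def dd_exp_stable(x, y):
    # Algebraically equivalent form exp(y) * (exp(x - y) - 1) / (x - y),
    # where expm1 evaluates exp(h) - 1 accurately for small h.
    h = x - y
    return math.exp(y) * math.expm1(h) / h

x, y = 1.0, 1.0 + 1e-12
# The true value is exp(xi) for some xi between x and y, i.e. e to
# about 12 significant digits.
print(dd_exp_naive(x, y))   # several low-order digits lost to cancellation
print(dd_exp_stable(x, y))  # accurate to roughly full working precision
```

With x − y ≈ 1e-12, the naive subtraction leaves only about 4 digits of cancellation headroom above rounding noise, while the expm1 form avoids the subtraction of nearly equal quantities entirely.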