Results 1 - 6 of 6
ARCHITECTURE-AWARE CLASSICAL TAYLOR SHIFT BY 1
, 2005
Abstract

Cited by 17 (2 self)
We present algorithms that outperform straightforward implementations of classical Taylor shift by 1. For input polynomials of low degrees a method of the SACLIB library is faster than straightforward implementations by a factor of at least 2; for higher degrees we develop a method that is faster than straightforward implementations by a factor of up to 7. Our Taylor shift algorithm requires more word additions than straightforward implementations but it reduces the number of cycles per word addition by reducing memory traffic and the number of carry computations. The introduction of signed digits, suspended normalization, radix reduction, and delayed carry propagation enables our algorithm to take advantage of the technique of register tiling which is commonly used by optimizing compilers. While our algorithm is written in a high-level language, it depends on several parameters that can be tuned to the underlying architecture.
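The classical scheme that these optimized variants build on is the Horner/Ruffini (Pascal-triangle) shift, which costs n(n+1)/2 coefficient additions for degree n. A minimal Python sketch of that straightforward baseline (the function name is ours; this is not the paper's register-tiled algorithm):

```python
def taylor_shift_1(coeffs):
    """Classical Taylor shift by 1: given the coefficients of p(x)
    (lowest degree first), return the coefficients of p(x + 1).
    Repeated synthetic division accumulates the Taylor coefficients
    of p about 1, using n(n+1)/2 big-integer additions for degree n."""
    a = list(coeffs)
    n = len(a) - 1
    for i in range(n):
        # Fold each coefficient into the one below it, once per pass.
        for j in range(n - 1, i - 1, -1):
            a[j] += a[j + 1]
    return a
```

For example, `taylor_shift_1([0, 0, 1])` returns `[1, 2, 1]`, i.e. (x + 1)^2 = x^2 + 2x + 1.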
Efficient Multiprecision Floating Point Multiplication with Exact Rounding
, 1993
Abstract

Cited by 13 (2 self)
An algorithm is described for multiplying multiprecision floating point numbers. The returned result is equal to the floating point number obtained by rounding the exact product. Software implementations of multiprecision floating point multiplication can reduce the computing time by a factor of two if they do not compute the low-order digits of the product of the two mantissas. However, these algorithms do not necessarily provide exactly rounded results. The algorithm described in this paper is guaranteed to produce exactly rounded results and typically obtains the same savings. A rounding operation which satisfies this requirement is called exact rounding. Exact rounding provides a well-defined, implementation-independent semantics for floating point arithmetic. For this reason, floating point ...
A Hybrid Method for High Precision Calculation of Polynomial Real Roots
 In Bronstein, M. (Ed.): Proceedings of the 1993 International Symposium on Symbolic and Algebraic Computation, ACM
, 1993
Abstract

Cited by 7 (2 self)
A straightforward implementation of Newton's method for polynomial real root calculation using exact arithmetic is inefficient. In each step the length of the iterate multiplies by the degree of the polynomial while its accuracy merely doubles. We present an exact algorithm which keeps the length of each iterate proportional to its accuracy. The resulting speedup is dramatic. The average computing time can be further reduced by trying floating point computations. Several floating point Newton steps are executed; interval arithmetic is used to check whether the result is sufficiently close to the root; if this condition cannot be verified the exact algorithm is invoked.

Real roots of a univariate integral polynomial A can be calculated in two steps, called "root isolation" and "root refinement". Root isolation computes isolating intervals for the roots of A; those are intervals which contain exactly one polynomial root each. Root refinement refines an isolating interva...
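The length growth that makes the straightforward exact iteration expensive is easy to reproduce with rational arithmetic. A small sketch using Python's `Fraction` (names are ours; the paper's algorithm additionally truncates each iterate to keep its length proportional to its accuracy):

```python
from fractions import Fraction

def newton_step(coeffs, x):
    """One exact Newton step x - p(x)/p'(x) for a polynomial with
    integer coefficients given lowest degree first."""
    p = sum(Fraction(c) * x**k for k, c in enumerate(coeffs))
    dp = sum(Fraction(k * c) * x**(k - 1) for k, c in enumerate(coeffs) if k)
    return x - p / dp

# Iterating exactly on p(x) = x^2 - 2 from 3/2: each step roughly
# squares the iterate's denominator (degree 2), while only about
# twice as many digits of sqrt(2) become correct.
x = Fraction(3, 2)
sizes = [x.denominator.bit_length()]
for _ in range(3):
    x = newton_step([-2, 0, 1], x)
    sizes.append(x.denominator.bit_length())
print(sizes)  # [2, 4, 9, 19]
```

The denominators 2, 12, 408, 470832 illustrate the growth: the iterate's length multiplies by the degree per step, as the abstract states.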
Compiler-enforced memory semantics in the SACLIB computer algebra library
 International Workshop on Computer Algebra in Scientific Computing
, 2005
"... ..."
A Data Structure for Approximation
 DEPARTMENT OF MATHEMATICS AND STATISTICS, THE UNIVERSITY OF EDINBURGH
, 1996
Abstract

Cited by 1 (1 self)
To approximate a real number α one might compute an interval I containing α and refine it to the desired width. As mathematical objects the endpoints of I are rational numbers. In a computer they might be floating point numbers or pairs of integers: numerator and denominator. We identify a data structure for I that supports interval refinement and provides an interface between exact arithmetic and floating point arithmetic. We exploit the data structure in an algorithm that finds optimal floating point approximations for polynomial real roots.
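The refinement loop such a data structure must support can be sketched with exact rational endpoints (a generic bisection sketch under our own naming, not the paper's specific representation):

```python
from fractions import Fraction

def refine_root(coeffs, lo, hi, width):
    """Shrink an isolating interval (lo, hi) for a root of the integer
    polynomial `coeffs` (lowest degree first) by bisection, keeping
    exact rational endpoints, until it is no wider than `width`."""
    def p(x):
        return sum(c * x**k for k, c in enumerate(coeffs))
    lo, hi = Fraction(lo), Fraction(hi)
    while hi - lo > width:
        mid = (lo + hi) / 2
        # Keep the half-interval on which p changes sign.
        if p(lo) * p(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return lo, hi

# Refine the isolating interval (1, 2) for sqrt(2), the positive
# root of x^2 - 2, down to width 2**-10.
lo, hi = refine_root([-2, 0, 1], 1, 2, Fraction(1, 1024))
```

Because the midpoints are binary rationals, the endpoints remain cheap to convert to floating point, which is the kind of exact/floating-point interface the abstract describes.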