Results 1 – 7 of 7
ULTIMATELY FAST ACCURATE SUMMATION

, 2009

Cited by 5 (0 self)
We present two new algorithms, FastAccSum and FastPrecSum, one to compute a faithful rounding of the sum of floating-point numbers and the other for a result “as if” computed in K-fold precision. Faithful rounding means the computed result either is one of the immediate floating-point neighbors of the exact result or is equal to the exact sum if this is a floating-point number. The algorithms are based on our previous algorithms AccSum and PrecSum and improve them by up to 25%. The first algorithm adapts to the condition number of the sum; i.e., the computing time is proportional to the difficulty of the problem. The second algorithm does not need extra memory, and the computing time depends only on the number of summands and K. Both algorithms are the fastest known in terms of flops. They allow good instruction-level parallelism so that they are also fast in terms of measured computing time. The algorithms require only standard floating-point addition, subtraction, and multiplication in one working precision, for example, double precision.
Reliable computing with GNU MPFR
Abstract. This article presents a few applications where reliable computations are obtained using the GNU MPFR library.
GRENOBLE – RHÔNE-ALPES
, 2011
Abstract: SIPE (Small Integer Plus Exponent) is a mini-library in the form of a C header file, to perform computations in very low precisions with correct rounding to nearest. The goal of such a tool is to do proofs of algorithms/properties or computations of error bounds in these precisions, in order to generalize them to higher precisions. The supported operations are the addition, the subtraction, the multiplication, the FMA, and miscellaneous comparisons and conversions.
Keywords: low precision, arithmetic operations, correct rounding, C language
SIPE: a Mini-Library for Very Low Precision Computations with Correct Rounding
, 2013
Abstract—SIPE is a mini-library in the form of a C header file, to perform radix-2 floating-point computations in very low precisions with correct rounding, either to nearest or toward zero. The goal of such a tool is to do proofs of algorithms/properties or computations of tight error bounds in these precisions by exhaustive tests, in order to try to generalize them to higher precisions. The currently supported operations are addition, subtraction, multiplication (possibly with the error term), fused multiply-add/subtract (FMA/FMS), and miscellaneous comparisons and conversions. SIPE provides two implementations of these operations, with the same API and the same behavior: one based on integer arithmetic, and a new one based on floating-point arithmetic. Timing comparisons have been done with hardware IEEE 754 floating point and with GNU MPFR.
Index Terms—low precision; arithmetic operations; correct rounding
Algorithms, Certification, and Cryptography: Table of Contents
6.2.1. Mixed-precision fused multiply-and-add
6.2.2. Multiplication by rational constants versus division by a constant
6.2.3. Floating-point exponentiation on FPGA
6.2.4. Arithmetic around the bit heap
6.2.5. Improving computing architectures
On the Computation of Correctly-Rounded Sums
, 2010
Abstract—This paper presents a study of some basic blocks needed in the design of floating-point summation algorithms. In particular, in radix-2 floating-point arithmetic, we show that among the set of the algorithms with no comparisons performing only floating-point additions/subtractions, the 2Sum algorithm introduced by Knuth is minimal, both in terms of number of operations and depth of the dependency graph. We investigate the possible use of another algorithm, Dekker’s Fast2Sum algorithm, in radix-10 arithmetic. We give methods for computing, in radix 10, the floating-point number nearest the average value of two floating-point numbers. We also prove that under reasonable conditions, an algorithm performing only round-to-nearest additions/subtractions cannot compute the round-to-nearest sum of at least three floating-point numbers. Starting from an algorithm due to Boldo and Melquiond, we also present new results about the computation of the correctly-rounded sum of three floating-point numbers. For a few of our algorithms, we assume new operations defined by the recent IEEE 754-2008 Standard are available.