Results 1–10 of 61
Estimation-based local search for stochastic combinatorial optimization
 IRIDIA, Université Libre de Bruxelles
, 2007
"... informs doi 10.1287/ijoc.1080.0276 ..."
Time and space-efficient evaluation of some hypergeometric constants
, 2007
"... Research report ISSN 0249-6399 ISRN INRIA/RR--6105--FR+ENG ..."
Cited by 8 (1 self)
Kahan’s algorithm for a correct discriminant computation at last formally proven
, no. 2, February 2009
Abstract

Cited by 7 (3 self)
Abstract—This article tackles Kahan’s algorithm for computing the discriminant accurately. This is a known difficult problem, and the algorithm leads to an error bounded by 2 ulps of the floating-point result. The proofs involved are long and tricky, and even trickier than expected, as the test involved may give a result different from the result of the same test without rounding. We give here the complete proof of the validity of this algorithm, and we provide sufficient conditions to guarantee that neither overflow nor underflow will jeopardize the result. The IEEE-754 double-precision program is annotated using the Why platform and the proof obligations are discharged using the Coq proof assistant. Index Terms—Floating point, discriminant, formal proof, Why platform, Coq.
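The compensated scheme described in this abstract computes b·b − a·c by recovering the exact rounding error of each product with a fused multiply-add. A minimal sketch, following the published description rather than the paper's annotated program: Python has no portable `fma`, so the residual is simulated with exact rational arithmetic, and the function names and cancellation test are illustrative.

```python
from fractions import Fraction

def fma_residual(x, y, p):
    # Exact rounding error of p = fl(x*y), as a stand-in for fma(x, y, -p).
    # The error of a rounded product is itself representable (barring
    # underflow), so converting the exact rational back to float is exact.
    return float(Fraction(x) * Fraction(y) - Fraction(p))

def kahan_discriminant(a, b, c):
    """Sketch of Kahan's compensated computation of b*b - a*c."""
    p = b * b
    q = a * c
    if p + q <= 3.0 * abs(p - q):   # no harmful cancellation: naive is fine
        return p - q
    dp = fma_residual(b, b, p)      # rounding error of b*b
    dq = fma_residual(a, c, q)      # rounding error of a*c
    return (p - q) + (dp - dq)      # compensated difference
```

For inputs with heavy cancellation, e.g. a = c = 1 and b = 1 + 2⁻²⁷, the naive difference b·b − a·c loses the low-order term 2⁻⁵⁴, while the compensated version recovers it.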
Ultimately fast accurate summation

, 2009
Abstract

Cited by 5 (0 self)
We present two new algorithms, FastAccSum and FastPrecSum, one to compute a faithful rounding of the sum of floating-point numbers and the other for a result “as if” computed in K-fold precision. Faithful rounding means the computed result is either one of the immediate floating-point neighbors of the exact result or equal to the exact sum if this is a floating-point number. The algorithms are based on our previous algorithms AccSum and PrecSum and improve them by up to 25%. The first algorithm adapts to the condition number of the sum; i.e., the computing time is proportional to the difficulty of the problem. The second algorithm does not need extra memory, and the computing time depends only on the number of summands and K. Both algorithms are the fastest known in terms of flops. They allow good instruction-level parallelism so that they are also fast in terms of measured computing time. The algorithms require only standard floating-point addition, subtraction, and multiplication in one working precision, for example, double precision.
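Algorithms in this family are built from error-free transformations: a floating-point addition plus a correction term that together represent the sum exactly. A minimal illustration of the idea, not Rump's FastAccSum/FastPrecSum code (the function names here are mine), is Knuth's classic branch-free TwoSum and a cascaded summation built on it:

```python
def two_sum(a, b):
    # Knuth's error-free transformation: s + t == a + b exactly,
    # where s = fl(a + b) and t is the exact rounding error.
    s = a + b
    bb = s - a
    t = (a - (s - bb)) + (b - bb)
    return s, t

def compensated_sum(xs):
    # Cascaded summation: accumulate the exact per-step errors and
    # add them back at the end (a simple relative of K-fold precision).
    s = 0.0
    err = 0.0
    for x in xs:
        s, t = two_sum(s, x)
        err += t
    return s + err
```

On an ill-conditioned input such as [1e16, 1.0, -1e16], naive left-to-right summation returns 0.0 because the 1.0 is absorbed, while the compensated sum recovers it.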
Floating-point arithmetic in the Coq system
Abstract

Cited by 5 (0 self)
The process of proving some mathematical theorems can be greatly reduced by relying on numerically intensive computations with a certified arithmetic. This article presents a formalization of floating-point arithmetic that makes it possible to compute efficiently inside the proofs of the Coq system. This certified library is a multi-radix and multi-precision implementation free from underflow and overflow. It provides the basic arithmetic operators and a few elementary functions.
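The representation such a library works with, a number as m · β^e with unbounded integer mantissa and exponent, so neither overflow nor underflow can occur, can be sketched in a few lines. This toy model is illustrative only (the names and the rounding policy are assumptions, not the library's Coq definitions): multiplication is exact, and rounding to a working precision happens only on request.

```python
def ndigits(m, beta):
    # Number of radix-beta digits of a nonnegative integer mantissa.
    n = 0
    while m:
        m //= beta
        n += 1
    return n

def mul(x, y):
    # Exact product on (mantissa, exponent) pairs:
    # (mx * beta**ex) * (my * beta**ey) = (mx*my) * beta**(ex+ey).
    (mx, ex), (my, ey) = x, y
    return (mx * my, ex + ey)

def round_to_precision(x, beta, prec):
    # Round m * beta**e to at most `prec` radix-beta digits,
    # half away from zero on the dropped part (nonnegative m only).
    m, e = x
    drop = ndigits(m, beta) - prec
    if drop <= 0:
        return (m, e)
    scale = beta ** drop
    q, r = divmod(m, scale)
    if 2 * r >= scale:
        q += 1
    return (q, e + drop)
```

For example, squaring 1.2345 = (12345, −4) in radix 10 is exact, and rounding the result to three digits gives (152, −2), i.e. 1.52.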
Towards optimal use of multiprecision arithmetic: a remark
 Reliable Computing
, 2006
Abstract

Cited by 4 (0 self)
If standard-precision computations do not lead to the desired accuracy, then it is reasonable to increase precision until we reach this accuracy. What is the optimal way of increasing precision? One possibility is to choose a constant q > 1, so that if the precision which requires time t did not lead to a success, we select the next precision that requires time q · t. It was shown that among such strategies, the optimal (worst-case) overhead is attained when q = 2. In this paper, we show that this “time-doubling” strategy is optimal among all possible strategies, not only among the ones in which we always increase time by a constant factor q > 1.

Formulation of the problem. In multiprecision arithmetic, it is possible to pick a precision and make all computations with this precision; see, e.g., [1, 2]. If we use validated computations, then after the corresponding computations we learn the accuracy of the results. Usually, we want to compute the result of an algorithm with a given accuracy. We can start with a certain precision. If this precision leads to the desired accuracy of the results, we are done; if not, we repeat the computations with increased precision, and so on. The question is: what is the best approach to increasing precision?
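The doubling strategy is easy to exercise with Python's `decimal` module: double the working precision until two consecutive results agree to the requested accuracy. This is only a sketch of the idea — agreement of successive results is a heuristic stand-in for the validated error bounds assumed in the paper, and doubling the precision only roughly models doubling the running time.

```python
from decimal import Decimal, getcontext

def with_doubling(compute, target_digits, start_prec=16):
    # Repeatedly double the working precision (the provably optimal
    # growth factor q = 2) until two consecutive results agree to
    # target_digits digits. Mutates the global decimal context.
    prec = start_prec
    getcontext().prec = prec
    prev = compute()
    while True:
        prec *= 2
        getcontext().prec = prec
        cur = compute()
        if abs(cur - prev) < Decimal(10) ** (-target_digits):
            return +cur            # unary + rounds into the current context
        prev = cur

# Example: sqrt(2) to roughly 40 digits.
r = with_doubling(lambda: Decimal(2).sqrt(), 40)
```

Starting from 16 digits, the run above stops after the 64- and 128-digit results agree, so the answer comes back with far more correct digits than requested; the geometric growth keeps the total work within a constant factor of the final step.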
NumGfun: a Package for Numerical and Analytic Computation with D-finite Functions
Abstract

Cited by 4 (2 self)
This article describes the implementation in the software package NumGfun of classical algorithms that operate on solutions of linear differential equations or recurrence relations with polynomial coefficients, including what seems to be the first general implementation of the fast high-precision numerical evaluation algorithms of Chudnovsky & Chudnovsky. In some cases, our descriptions contain improvements over existing algorithms. We also provide references to relevant ideas not currently used in NumGfun.
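The fast evaluation algorithms mentioned here rest on binary splitting: partial sums of a recurrence are carried as exact integer pairs, so large blocks combine with fast integer multiplication instead of many small divisions. A minimal sketch for the simplest D-finite example, e = Σ 1/n! (illustrative only; NumGfun's routines handle general polynomial-coefficient recurrences):

```python
from fractions import Fraction

def bsplit(a, b):
    # Binary splitting over the half-open range [a, b) for sum of 1/n!:
    # returns integers (T, Q); for a = 0, T/Q == sum_{n=0}^{b-1} 1/n!.
    if b - a == 1:
        return (1, a if a > 0 else 1)   # single term: numerator 1, q(n) = max(n, 1)
    m = (a + b) // 2
    t1, q1 = bsplit(a, m)
    t2, q2 = bsplit(m, b)
    # T/Q = T1/Q1 + T2/(Q1*Q2): the right block inherits the left denominators.
    return (t1 * q2 + t2, q1 * q2)

t, q = bsplit(0, 30)       # first 30 terms of sum 1/n!
e_approx = Fraction(t, q)  # rational approximation of e, good to ~30 digits
```

The recursion does O(log N) levels of big-integer multiplications instead of N separate divisions, which is what makes quasi-linear-time high-precision evaluation possible.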