Results 1–10 of 13
A proven correctly rounded logarithm in double-precision
In Real Numbers and Computers, Schloss Dagstuhl, 2004
Cited by 19 (9 self)
Abstract. This article is a case study in the implementation of a portable, proven and efficient correctly rounded elementary function in double-precision. We describe the methodology used to achieve these goals in the crlibm library. There are two novel aspects to this approach. The first is the proof framework, and in general the techniques used to balance performance and provability. The second is the introduction of processor-specific optimization to get performance equivalent to the best current mathematical libraries, while trying to minimize the proof work. The implementation of the natural logarithm is detailed to illustrate these questions. Mathematics Subject Classification: 26-04, 65D15, 65Y99.
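Correctly rounded libraries like crlibm build their extended-precision phases on error-free transformations; a minimal sketch of one standard building block, Fast2Sum (illustrative only, not crlibm's actual code):

```python
def fast_two_sum(a, b):
    """Fast2Sum (Dekker): for |a| >= |b|, returns (s, e) such that
    s = fl(a + b) and a + b = s + e *exactly* in IEEE-754 arithmetic.
    The pair (s, e) is the double-double representation of a + b."""
    s = a + b
    e = b - (s - a)  # recovers exactly what rounding discarded
    return s, e

# The low word captures the part of b lost when rounding the sum:
s, e = fast_two_sum(1.0, 2.0**-60)
print(s, e)  # 1.0 and 2**-60: together they represent 1 + 2**-60 exactly
```

Chaining such pairs through an evaluation is what lets the accurate phase deliver enough correct bits to decide the rounding of the final result.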
Combining Coq and Gappa for Certifying Floating-Point Programs, 2009
Cited by 10 (1 self)
Formal verification of numerical programs is notoriously difficult. On the one hand, there exist automatic tools specialized in floating-point arithmetic, such as Gappa, but they target very restrictive logics. On the other hand, there are interactive theorem provers based on the LCF approach, such as Coq, that handle a general-purpose logic but that lack proof automation for floating-point properties. To alleviate these issues, we have implemented a mechanism for calling Gappa from a Coq interactive proof. This paper presents this combination and shows on several examples how this approach offers a significant speedup in the process of verifying floating-point programs.
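The floating-point facts such tools discharge are often simple but tedious to prove by hand, e.g. Sterbenz's lemma: if b/2 <= a <= 2b then a - b is computed exactly. A quick numerical illustration of the lemma (checked here with exact rationals; Gappa and Coq provide the actual proof, this is not how they work internally):

```python
from fractions import Fraction

a, b = 1.2, 0.7           # b/2 <= a <= 2b, so Sterbenz's lemma applies
exact = Fraction(a) - Fraction(b)
assert Fraction(a - b) == exact   # the computed difference is exact

a2, b2 = 1.0, 1e-20       # lemma's hypothesis violated (b2 << a2/2)
assert Fraction(a2 - b2) != Fraction(a2) - Fraction(b2)  # rounding occurred
print("Sterbenz case exact; out-of-range case rounded")
```

`Fraction` converts each IEEE-754 double to its exact rational value, so the comparison decides exactness rigorously.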
Computing Correctly Rounded Integer Powers in Floating-Point Arithmetic
Cited by 4 (0 self)
We introduce several algorithms for accurately evaluating powers to a positive integer in floating-point arithmetic, assuming a fused multiply-add (fma) instruction is available. For bounded, yet very large values of the exponent, we aim at obtaining correctly rounded results in round-to-nearest mode, that is, our algorithms return the floating-point number that is nearest the exact value.
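The exponentiation skeleton underlying such algorithms is plain square-and-multiply; a minimal sketch (the paper's algorithms additionally track the rounding error of every product using the fma, which this illustration omits):

```python
def fp_power(x, n):
    """Square-and-multiply: computes x**n for a positive integer n
    with O(log n) multiplications. Each '*' rounds, so an accurate
    version would carry a compensation term per product."""
    r = 1.0
    while n > 0:
        if n & 1:       # current exponent bit set: fold x into result
            r *= x
        x *= x          # square for the next bit
        n >>= 1
    return r

print(fp_power(3.0, 5))   # 243.0 (exact: all intermediates fit in a double)
```

For small arguments every intermediate is exactly representable, which is why the result above is exact; for very large exponents the accumulated rounding is what the paper's compensated variants control.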
A Scalable Approach for Automated Precision Analysis
Cited by 1 (1 self)
The freedom over the choice of numerical precision is one of the key factors that can only be exploited throughout the datapath of an FPGA accelerator, providing the ability to trade the accuracy of the final computational result against silicon area, power, operating frequency, and latency. However, in order to tune the precision used throughout hardware accelerators automatically, a tool is required to verify that the hardware will meet an error or range specification for a given precision. Existing tools to perform this task typically either suffer from a lack of tightness of bounds or require a large execution time when applied to large-scale algorithms; in this work, we propose an approach that can both scale to larger examples and obtain tighter bounds, within a smaller execution time, than the existing methods. The approach we describe also provides a user with the ability to trade the quality of bounds against the execution time of the procedure, making it suitable within a word-length optimization framework for both small and large-scale algorithms. We demonstrate the use of our approach on instances of iterative algorithms to solve a system of linear equations. We show that because our approach can track how the relative error decreases with increasing precision, unlike the existing methods, we can use it to create smaller hardware with guaranteed numerical properties. This results in a saving of 25% of the area in comparison to optimizing the precision using competing analytical techniques, whilst requiring a smaller execution time than these methods, and a saving of almost 80% of the area in comparison to adopting IEEE double-precision arithmetic.
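The range-analysis step such tools perform can be sketched with plain interval arithmetic (a generic illustration of sound-but-loose range bounding, not the paper's analytical method, which obtains tighter bounds):

```python
def iadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def isub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def imul(x, y):
    p = [a * b for a in x for b in y]
    return (min(p), max(p))

# Range of f(x) = x*x - x for x in [0, 2], computed operation by operation
# (a real tool would also round each endpoint outward):
x = (0.0, 2.0)
print(isub(imul(x, x), x))   # (-2.0, 4.0): sound, but loose
# The true range of x*x - x on [0, 2] is [-0.25, 2].
```

The gap between the enclosure (-2, 4) and the true range [-0.25, 2] is exactly the kind of looseness that motivates scalable, tighter analyses.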
A Scalable Precision Analysis Framework
Cited by 1 (1 self)
Abstract—In embedded computing, typically some form of silicon area or power budget restricts the potential performance achievable. For algorithms with limited dynamic range, custom hardware accelerators manage to extract significant additional performance within such a budget by mapping operations in the algorithm to fixed-point. However, for complex applications requiring floating-point computation, the potential performance improvement over software is reduced. Nonetheless, custom hardware can still customise the precision of floating-point operators, unlike software, which is restricted to IEEE standard single or double precision, to increase the overall performance at the cost of increasing the error observed in the final computational result. Unfortunately, because it is difficult to determine whether this error increase is tolerable, this task is rarely performed. We present a new analytical technique to calculate bounds on the range or relative error of output variables, enabling custom hardware accelerators to be tolerant of floating-point errors by design. In contrast to existing tools that perform this task, our approach scales to larger examples and obtains tighter bounds, within a smaller execution time. Furthermore, it allows a user to trade the quality of bounds against the execution time of the procedure, making it suitable for both small and large-scale algorithms.
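A custom-precision floating-point operator can be modelled in software by re-rounding each double-precision result to p significant bits (an illustrative model for experimenting with precision/error trade-offs, not the paper's analytical technique):

```python
import math

def round_to_precision(x, p):
    """Round x to the nearest value with a p-bit significand
    (round-half-to-even), modelling a reduced-precision operator."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
    scaled = m * (1 << p)
    # Python's round() implements round-half-to-even, like IEEE-754
    return math.ldexp(round(scaled), e - p)

print(round_to_precision(1.0 + 2.0**-12, 8))  # 1.0: the 2**-12 term is lost
print(round_to_precision(0.3, 8))             # 0.30078125 = 154/512
```

Re-running a computation at several values of p and observing the output error is the brute-force version of what the paper's analysis bounds without simulation.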
Florent de Dinechin
Generating certified and efficient numerical codes requires information ranging from the mathematical level down to the representation of numbers. Even though the mathematical semantics can be expressed using the content part of MathML, this language does not encompass the implementation on computers. Indeed, various arithmetics may be involved, such as floating-point or fixed-point, in fixed precision or arbitrary precision, and current tools do not handle all of these. Therefore we propose in this paper LEMA (Langage pour les Expressions Mathématiques Annotées), a descriptive language based on MathML with additional expressiveness. LEMA will be used during the automatic generation of certified numerical codes. Such a generation process typically involves several steps, and LEMA would thus act as a glue between them.
Optimizing polynomials for floating-point implementation
The floating-point implementation of a function often reduces to a polynomial approximation on an interval. The Remez algorithm provides the polynomial closest to the function, but the evaluation of this polynomial in floating-point may lead to catastrophic cancellations when the approximation interval contains zero and some of the polynomial coefficients are very small in magnitude with respect to the others. To obtain cancellation-free polynomials while reducing the operation count, an algorithm is presented that forces the smaller coefficients to zero thanks to a modified Remez algorithm targeting an incomplete monomial basis. This algorithm generalizes well-known techniques used for odd or even functions to a wider class of functions, and in a purely numerical way, the function being used as a numerical black box. This algorithm is demonstrated, within a larger polynomial implementation tool, on a range of examples, resulting in polynomials with fewer coefficients than those obtained the usual way.
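For an odd function such as sin, forcing the even coefficients to zero yields exactly the kind of sparse polynomial described above; a sketch using the degree-5 Taylor coefficients (for illustration only, a Remez fit would use slightly different coefficient values):

```python
import math

def sin_poly(x):
    """Odd polynomial c1*x + c3*x**3 + c5*x**5, evaluated in Horner
    form on z = x*x: only 4 multiplications and 2 additions, and no
    even-degree terms to cause cancellation near zero."""
    c1, c3, c5 = 1.0, -1.0 / 6.0, 1.0 / 120.0   # Taylor coefficients
    z = x * x
    return x * (c1 + z * (c3 + z * c5))

print(abs(sin_poly(0.1) - math.sin(0.1)))   # ~2e-11 for this small argument
```

Evaluating on z = x*x is the standard way to exploit the missing monomials: the sparse basis halves the Horner chain length compared with a dense degree-5 polynomial.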
Refining Abstract Interpretation-based Approximations with Constraint Solvers, 2011
Abstract. Programs with floating-point computations are tricky to develop because floating-point arithmetic differs from real arithmetic and has many counter-intuitive properties. A classical approach to verify such programs consists in estimating the precision of floating-point computations with respect to the same sequence of operations in an idealized semantics of real numbers. Tools based on abstract interpretation, such as Fluctuat, have been designed to address this problem. However, such tools compute an over-approximation of the domains of the variables, both in the semantics of the floating-point numbers and in the semantics of the real numbers. This over-approximation can be very coarse on some programs. In this paper, we show that constraint solvers over floating-point numbers and real numbers can significantly refine the approximations computed by Fluctuat. We managed to reduce drastically the domains of variables of C programs that are difficult to handle for the abstract interpretation techniques implemented in Fluctuat. Key words: Program verification; Floating-point computation; C programs; Abstract interpretation-based approximation; Interval-based constraint solvers over real and floating-point numbers.
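The coarseness being refined is easy to reproduce: interval-based abstract domains lose the dependency between occurrences of the same variable, so even x - x is assigned a non-trivial range (a minimal illustration of the dependency problem, unrelated to Fluctuat's actual implementation):

```python
def isub(x, y):
    """Interval subtraction: sound for independent x and y, but unaware
    that both arguments may denote the *same* program variable."""
    return (x[0] - y[1], x[1] - y[0])

x = (1.0, 2.0)
print(isub(x, x))   # (-1.0, 1.0), although x - x is always exactly 0
```

A constraint solver over the same variables can recover the dependency (x - x = 0) and shrink the domain, which is precisely the refinement the paper exploits.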