Classroom examples of robustness problems in geometric computations
In Proc. 12th European Symposium on Algorithms, volume 3221 of Lecture Notes in Computer Science
, 2004
Static analysis of finite precision computations
In: VMCAI'11, LNCS
, 2011
Cited by 14 (3 self)
We define several abstract semantics for the static analysis of finite precision computations that bound not only the ranges of values taken by numerical variables of a program, but also the difference from the result of the same sequence of operations in an idealized real-number semantics. These domains point out, in more or less detail (control point, block, or function, for instance), the sources of numerical errors in the program and the way they are propagated by further computations, thus making it possible to evaluate not only the rounding error, but also the sensitivity to inputs or parameters of the program. We describe two classes of abstractions: a non-relational one based on intervals, and a weakly relational one based on parametrized zonotopic abstract domains called affine sets, which are especially well suited for sensitivity analysis and test generation. These abstract domains are implemented in the Fluctuat static analyzer, and we finally present some experiments.
Some functions computable with a fused-mac
 in Proceedings of the 17th Symposium on Computer Arithmetic, P. Montuschi and E. Schwarz, Eds., Cape Cod
, 2005
Cited by 10 (3 self)
The fused multiply-accumulate instruction (fused-mac), available on some current processors such as the PowerPC or the Itanium, eases some calculations. We give examples of some floating-point functions (such as ulp(x) or Nextafter(x, y)), and some useful tests, that are easily computable using a fused-mac. Then, we show that, with rounding to the nearest, the error of a fused-mac instruction is exactly representable as the sum of two floating-point numbers. We give an algorithm that computes that error.
Certifying the floating-point implementation of an elementary function using Gappa
IEEE Transactions on Computers
, 2011
Cited by 8 (3 self)
High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. This certification may require a time-consuming proof for each line of code, and it is usually broken by the smallest change to the code, e.g., for maintenance or optimization purposes. Certifying floating-point programs by hand is, therefore, very tedious and error-prone. The Gappa proof assistant is designed to make this task both easier and more secure, thanks to the following novel features: it automates the evaluation and propagation of rounding errors using interval arithmetic; its input format is very close to the actual code to validate; it can be used incrementally to prove complex mathematical properties pertaining to the code; and it generates a formal proof of the results, which can be checked independently by a lower-level proof assistant such as Coq. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. This paper demonstrates the practical use of this tool for a widely used class of floating-point programs: implementations of elementary functions in a mathematical library.
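To give a flavor of that input format, here is a small script in the style of the example from Gappa's documentation (the exact syntax may differ between versions): it asks Gappa to bound a single-precision computation of x*(1-x) and its deviation from the ideal real-number value.

```
@rnd = float<ieee_32, ne>;      # round to nearest, IEEE-754 binary32

x = rnd(x_);                    # x is a rounded input
y rnd= x * (1 - x);             # y: every operation rounded to binary32
z = x * (1 - x);                # z: the ideal real-number value

{ x in [0, 1] -> y in ? /\ y - z in ? }   # ask Gappa for enclosures
```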
Correctly Rounded Exponential Function in Double Precision Arithmetic
, 2001
Cited by 6 (2 self)
We present an algorithm for implementing correctly rounded exponentials in double-precision floating-point arithmetic. This algorithm is based on floating-point operations in the widespread IEEE-754 standard, and is therefore more efficient than those using multiprecision arithmetic, while being fully portable. It requires a table of reasonable size and IEEE-754 double-precision multiplications and additions. In a preliminary implementation, the overhead due to correct rounding is a 2.3-times slowdown when compared to the standard library function.
Computing Correctly Rounded Integer Powers in Floating-Point Arithmetic
Cited by 4 (0 self)
We introduce several algorithms for accurately evaluating powers with a positive integer exponent in floating-point arithmetic, assuming a fused multiply-add (fma) instruction is available. For bounded, yet very large, values of the exponent, we aim at obtaining correctly rounded results in round-to-nearest mode; that is, our algorithms return the floating-point number that is nearest to the exact value.
Precise numerical computation
Journal of Logic and Algebraic Programming, special issue on Practical Development of Exact Real Number Computation
, 2005
A Formally-Verified C Compiler Supporting Floating-Point Arithmetic
, 2012
Cited by 3 (0 self)
Floating-point arithmetic is known to be tricky: roundings, formats, exceptional values. The IEEE-754 standard was a push towards straightening the field and made formal reasoning about floating-point computations possible. Unfortunately, this is not sufficient to guarantee the final result of a program, as several other actors are involved: programming language, compiler, architecture. The CompCert formally-verified compiler provides a solution to this problem: this compiler comes with a mathematical specification of the semantics of its source language (ISO C90) and target platforms (ARM, PowerPC, x86-SSE2), and with a proof that compilation preserves semantics. In this paper, we report on our recent success in formally specifying and proving correct CompCert's compilation of floating-point arithmetic. Since CompCert is verified using the Coq proof assistant, this effort required a suitable Coq formalization of the IEEE-754 standard; we extended the Flocq library for this purpose. As a result, we obtain the first formally verified compiler that provably preserves the semantics of floating-point programs. Index Terms: floating-point arithmetic; verified compilation; formal proof; floating-point semantics preservation.
Provably faithful evaluation of polynomials
 In Proceedings of the 21st Annual ACM Symposium on Applied Computing
, 2006
Cited by 3 (1 self)
We provide sufficient conditions that formally guarantee that the floating-point computation of a polynomial evaluation is faithful. To this end, we develop a formalization of floating-point numbers and rounding modes in the Prototype Verification System (PVS). Our work is based on a well-known formalization of floating-point arithmetic in the proof assistant Coq, where polynomial evaluation has already been studied. However, thanks to the powerful proof automation provided by PVS, the sufficient conditions proposed in our work are more general than the original ones.
A correctly rounded implementation of the exponential function . . .
, 2003
Cited by 3 (2 self)
This article presents an efficient implementation of a correctly rounded exponential function in double precision on the Intel Itanium processor family. This work combines advanced processor features (like the double-extended-precision fused multiply-and-add units of the Itanium processors) with recent research results giving the worst-case precision needed for correctly rounding the exponential function. We give and prove an algorithm which returns a correctly rounded result (in any of the four IEEE-754 rounding modes) within 172 machine cycles on the Intel Itanium 2 processor. This is about four times slower than the less accurate function present in the standard Intel mathematical library. The evaluation is performed in one phase only and is therefore fast even in the worst case, contrary to other implementations which use a multi-level strategy [18, 6]: we show that the worst-case required precision of 157 bits can always be stored in the sum of two double-extended floating-point numbers. Another algorithm is given with a 92-cycle execution time, but its proof remains to be formally completed.