Results 1–10 of 10
A Machine-Checked Theory of Floating Point Arithmetic
, 1999
Abstract

Cited by 31 (5 self)
Intel is applying formal verification to various pieces of mathematical software used in Merced, the first implementation of the new IA-64 architecture. This paper discusses the development of a generic floating point library giving definitions of the fundamental terms and containing formal proofs of important lemmas. We also briefly describe how this has been used in the verification effort so far.

1 Introduction

IA-64 is a new 64-bit computer architecture jointly developed by Hewlett-Packard and Intel, and the forthcoming Merced chip from Intel will be its first silicon implementation. To avoid some of the limitations of traditional architectures, IA-64 incorporates a unique combination of features, including an instruction format encoding parallelism explicitly, instruction predication, and speculative/advanced loads [4]. Nevertheless, it also offers full upwards compatibility with IA-32 (x86) code. IA-64 incorporates a number of floating point operations, the centerpi...
Formal Verification of Floating Point Trigonometric Functions
 Formal Methods in Computer-Aided Design: Third International Conference, FMCAD 2000, volume 1954 of Lecture Notes in Computer Science
, 2000
Abstract

Cited by 25 (4 self)
Abstract. We have formally verified a number of algorithms for evaluating transcendental functions in double-extended precision floating point arithmetic in the Intel® IA-64 architecture. These algorithms are used in the Itanium™ processor to provide compatibility with IA-32 (x86) hardware transcendentals, and similar ones are used in mathematical software libraries. In this paper we describe in some depth the formal verification of the sin and cos functions, including the initial range reduction step. This illustrates the different facets of verification in this field, covering both pure mathematics and the detailed analysis of floating point rounding.
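The range reduction step mentioned in this abstract can be illustrated with a Cody–Waite style additive reduction. This is a standard textbook technique, not necessarily the verified Itanium algorithm; the split constants are the classic fdlibm values for pi/2:

```python
import math

# Cody-Waite additive range reduction for sin/cos: a hedged sketch of the
# standard technique, not the verified Itanium code. pi/2 is split into
# three doubles (classic fdlibm constants) so each product k*P_j is nearly
# exact and the chained subtractions lose little accuracy for moderate x.
P1 = 1.57079632673412561417e+00  # leading bits of pi/2
P2 = 6.07710050650619224932e-11  # next bits
P3 = 2.02226624879595063154e-21  # remaining bits

def reduce_arg(x):
    """Return (k mod 4, r) with x = k*(pi/2) + r and |r| <= pi/4."""
    k = round(x / (math.pi / 2))
    r = ((x - k * P1) - k * P2) - k * P3
    return k % 4, r

def sin_reduced(x):
    # sin(k*pi/2 + r) selected by the quadrant index k mod 4
    k, r = reduce_arg(x)
    return (math.sin(r), math.cos(r), -math.sin(r), -math.cos(r))[k]
```

For large arguments real libraries switch to a Payne–Hanek style reduction; the simple three-constant split above only covers moderate x.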
Computing machine-efficient polynomial approximations
 TRANSACTIONS ON MATHEMATICAL SOFTWARE
, 2006
Abstract

Cited by 20 (6 self)
Polynomial approximations are almost always used when implementing functions on a computing system. In most cases, the polynomial that best approximates (for a given distance and in a given interval) a function has coefficients that are not exactly representable with a finite number of bits. And yet, the polynomial approximations that are actually implemented do have coefficients that are represented with a finite—and sometimes small—number of bits. This is due to the finiteness of the floating-point representations (for software implementations), and to the need to have small, hence fast and/or inexpensive, multipliers (for hardware implementations). We then have to consider polynomial approximations for which the degree-i coefficient has at most m_i fractional bits; in other words, it is a rational number with denominator 2^{m_i}. We provide a general and efficient method for finding the best polynomial approximation under this constraint. Moreover, our method also applies if some other constraints (such as requiring some coefficients to be equal to some predefined constants or minimizing relative error instead of absolute error) are required.
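The constraint described here can be made concrete with a small sketch (our own illustrative code and coefficients, not the authors' method): round each coefficient of an unconstrained degree-2 approximation to exp on [0, 1] onto the grid of rationals with denominator 2^{m_i}, and compare sup-norm errors. Naive per-coefficient rounding of this kind is exactly the baseline the paper improves upon.

```python
from math import exp

# Round the degree-i coefficient to at most m_i fractional bits, i.e.
# onto the grid of rationals with denominator 2**m_i.
def round_coeffs(coeffs, bits):
    return [round(c * 2**m) / 2**m for c, m in zip(coeffs, bits)]

# Crude sup-norm error estimate on [a, b] by dense sampling.
def sup_err(coeffs, f, a, b, n=1000):
    pts = [a + (b - a) * i / n for i in range(n + 1)]
    horner = lambda x: sum(c * x**i for i, c in enumerate(coeffs))
    return max(abs(horner(x) - f(x)) for x in pts)

# Illustrative near-minimax degree-2 coefficients for exp on [0, 1]
# (our own values, not taken from the paper).
p = [1.008756, 0.854807, 0.846023]
q = round_coeffs(p, [8, 8, 8])   # at most 8 fractional bits each
```

The paper's point is that the best polynomial under the m_i constraint can be noticeably better than `q`; the sketch only shows what the constraint means.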
The Computation of Transcendental Functions on the IA-64 Architecture
 Intel Technology Journal
, 1999
Abstract

Cited by 20 (2 self)
The fast and accurate evaluation of transcendental functions (e.g. exp, log, sin, and atan) is vitally important in many fields of scientific computing. Intel provides a software library of these functions that can be called from both the C and FORTRAN programming languages. By exploiting some of the key features of the IA-64 floating-point architecture, we have been able to provide double-precision transcendental functions that are highly accurate yet can typically be evaluated in 50 to 70 clock cycles. In this paper, we discuss some of the design principles and implementation details of these functions.
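The general shape of such a library routine can be sketched in miniature (a generic textbook reduction of our own, not Intel's implementation): reduce exp(x) to 2^k * exp(r) with a small r, then evaluate exp(r) with a short polynomial.

```python
import math

# Minimal sketch of a reduction-plus-polynomial exp: exp(x) = 2**k * exp(r)
# with k = round(x / ln 2), so |r| <= ln(2)/2 ~ 0.347. Production code
# refines the reduction with a table of 2**(j/N) values and a carefully
# chosen minimax polynomial; this toy uses a degree-5 Taylor polynomial.
LN2 = math.log(2.0)

def exp_approx(x):
    k = round(x / LN2)
    r = x - k * LN2
    poly = 1.0 + r * (1.0 + r * (0.5 + r * (1/6 + r * (1/24 + r / 120))))
    return math.ldexp(poly, k)   # exact scaling by 2**k
```

The truncation error of the degree-5 polynomial on |r| <= 0.347 is a few units in 10^-6 relative, which is why real libraries use longer minimax polynomials plus a table to hit double precision.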
Efficient polynomial L∞ approximations
 In Proceedings of the 18th IEEE Symposium on Computer Arithmetic (ARITH-18). IEEE Computer
Abstract

Cited by 9 (0 self)
We address the problem of computing a good floating-point-coefficient polynomial approximation to a function, with respect to the supremum norm. This is a key step in most function evaluation schemes. We present a fast and efficient method, based on lattice basis reduction, that often gives the best polynomial possible and most of the time returns a very good approximation.
Certifying the floating-point implementation of an elementary function using Gappa
 IEEE TRANSACTIONS ON COMPUTERS
, 2011
Abstract

Cited by 8 (3 self)
High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. This certification may require a time-consuming proof for each line of code, and it is usually broken by the smallest change to the code, e.g., for maintenance or optimization purposes. Certifying floating-point programs by hand is, therefore, very tedious and error-prone. The Gappa proof assistant is designed to make this task both easier and more secure, due to the following novel features: It automates the evaluation and propagation of rounding errors using interval arithmetic. Its input format is very close to the actual code to validate. It can be used incrementally to prove complex mathematical properties pertaining to the code. It generates a formal proof of the results, which can be checked independently by a lower-level proof assistant like Coq. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. This paper demonstrates the practical use of this tool for a widely used class of floating-point programs: implementations of elementary functions in a mathematical library.
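The core mechanism Gappa automates, propagating worst-case rounding errors with interval arithmetic, can be caricatured in a few lines. This is a deliberately crude toy model of our own (names and structure included), nothing like Gappa's actual engine:

```python
# Toy interval model of IEEE-754 double rounding: every operation's exact
# result interval is widened by the worst-case relative error u = 2**-53.
# Gappa does this far more precisely (and produces a checkable proof);
# this sketch only shows the propagation idea.
U = 2.0 ** -53

def widen(lo, hi):
    """Account for one rounding of a result known to lie in [lo, hi]."""
    return lo - abs(lo) * U, hi + abs(hi) * U

def iadd(a, b):
    return widen(a[0] + b[0], a[1] + b[1])

def imul(a, b):
    ps = [x * y for x in a for y in b]
    return widen(min(ps), max(ps))

# For x in [1, 2], bound the computed value of x*x + x, rounding included.
x = (1.0, 2.0)
lo, hi = iadd(imul(x, x), x)
```

The exact range of x*x + x on [1, 2] is [2, 6]; the computed interval is that range widened by a few ulps, which is exactly the kind of enclosure a Gappa script asserts and proves.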
Correctly Rounded Exponential Function in Double Precision Arithmetic
, 2001
Abstract

Cited by 6 (2 self)
We present an algorithm for implementing correctly rounded exponentials in double-precision floating-point arithmetic. This algorithm is based on floating-point operations in the widespread IEEE-754 standard, and is therefore more efficient than those using multiprecision arithmetic, while being fully portable. It requires a table of reasonable size and IEEE-754 double precision multiplications and additions. In a preliminary implementation, the overhead due to correct rounding is a 2.3 times slowdown when compared to the standard library function.
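What "correctly rounded" demands can be checked directly: the double returned must be at least as close to the exact value as both of its neighboring doubles. The sketch below (our own code, using exact rational arithmetic as the reference, which is precisely the multiprecision route the paper's algorithm avoids) illustrates the criterion:

```python
import math
from fractions import Fraction

# Exact-rational Taylor series for exp(x); exact because Fraction
# arithmetic never rounds. 60 terms is ample for |x| <= 1.
def exp_reference(x, terms=60):
    xf, s, t = Fraction(x), Fraction(0), Fraction(1)
    for n in range(terms):
        s += t
        t = t * xf / (n + 1)
    return s

# A double y correctly rounds exp(x) iff no double is strictly closer
# to the exact value than y; it suffices to check y's two neighbors.
def correctly_rounded(x):
    ref = exp_reference(x)
    lib = math.exp(x)
    below = math.nextafter(lib, -math.inf)
    above = math.nextafter(lib, math.inf)
    d = abs(Fraction(lib) - ref)
    return d <= abs(Fraction(below) - ref) and d <= abs(Fraction(above) - ref)
```

`math.nextafter` requires Python 3.9+. Note that platform `math.exp` is not guaranteed to pass this check at every argument; guaranteeing it everywhere is exactly the problem the paper solves.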
Software techniques for perfect elementary functions in floating-point interval arithmetic
 IN REAL NUMBERS AND COMPUTERS
, 2006
Certifying floating-point implementations using Gappa
, 2008
Abstract

Cited by 1 (0 self)
High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. Such work may require several lines of proof for each line of code, and will usually be broken by the smallest change to the code (e.g. for maintenance or optimization purposes). Certifying these programs by hand is therefore very tedious and error-prone. This article discusses the use of the Gappa proof assistant in this context. Gappa has two main advantages over previous approaches: Its input format is very close to the actual C code to validate, and it automates error evaluation and propagation using interval arithmetic. Besides, it can be used to incrementally prove complex mathematical properties pertaining to the C code. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. Moreover, Gappa may generate a formal proof of the results that can be checked independently by a lower-level proof assistant like Coq, hence providing an even higher confidence in the certification of the numerical code. The article demonstrates the use of this tool on a real-size example, an elementary function with correctly rounded output.
Powering by Table Look-Up using a second-degree minimax approximation with fused accumulation tree
, 2000
Abstract
A new algorithm for the calculation of single-precision floating-point powering (X^p) is proposed in this report. This algorithm employs table look-up and polynomial approximation, a second-degree minimax approximation. The use of this polynomial approximation allows the employment of small tables to store the coefficients. Both unfolded and pipelined architectures are presented, and the results of a pre-layout synthesis performed using CMOS 0.35 µm technology are shown, achieving a 50% area reduction from linear approximation methods, and with improved speed over other second-degree approximation based algorithms. The unfolded architecture presented has a cycle time of about 11.2 ns. For the pipelined architecture, an operation frequency above 200 MHz has been achieved, with a latency of three cycles and a throughput of one result per cycle.

1 INTRODUCTION

The powering function (X^p) is a very interesting function for applications such as computer 3D graphics and digital i...
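The table-plus-second-degree structure can be sketched in software. This is a hedged illustration with our own names, using simple three-point interpolation per segment rather than true minimax coefficients, and applied to 1/sqrt(x) on [1, 2) rather than general X^p; hardware versions store fixed-point coefficients and sum the three products in a fused accumulation tree:

```python
import math

SEGS, H = 32, 1.0 / 32   # 32 segments of [1, 2), width 1/32

# Per segment, store three coefficients of a quadratic in the offset t,
# fitted through the segment's two endpoints and midpoint (Newton form).
def build_table(segs=SEGS):
    table, h = [], 1.0 / segs
    for i in range(segs):
        a = 1.0 + i * h
        f0, f1, f2 = (1.0 / math.sqrt(a + t) for t in (0.0, h / 2, h))
        d1 = (f1 - f0) / (h / 2)
        dd = ((f2 - f1) / (h / 2) - d1) / h
        table.append((f0, d1, dd))
    return table

TABLE = build_table()

def rsqrt_approx(x):
    """1/sqrt(x) for x in [1, 2): table look-up plus a quadratic in t."""
    i = min(int((x - 1.0) * SEGS), SEGS - 1)
    t = x - (1.0 + i * H)
    f0, d1, dd = TABLE[i]
    return f0 + t * (d1 + dd * (t - H / 2))
```

With 32 segments the worst-case error is well under 10^-5, i.e. enough bits for a single-precision-grade result, which is the regime the report targets; minimax coefficients would tighten this further at the same table size.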