Results 1–10 of 14
A Machine-Checked Theory of Floating Point Arithmetic
, 1999
Abstract

Cited by 31 (5 self)
. Intel is applying formal verification to various pieces of mathematical software used in Merced, the first implementation of the new IA-64 architecture. This paper discusses the development of a generic floating point library giving definitions of the fundamental terms and containing formal proofs of important lemmas. We also briefly describe how this has been used in the verification effort so far. 1 Introduction IA-64 is a new 64-bit computer architecture jointly developed by Hewlett-Packard and Intel, and the forthcoming Merced chip from Intel will be its first silicon implementation. To avoid some of the limitations of traditional architectures, IA-64 incorporates a unique combination of features, including an instruction format encoding parallelism explicitly, instruction predication, and speculative/advanced loads [4]. Nevertheless, it also offers full upwards compatibility with IA-32 (x86) code. IA-64 incorporates a number of floating point operations, the centerpi...
Formal verification of IA-64 division algorithms
 Proceedings, Theorem Proving in Higher Order Logics (TPHOLs), LNCS 1869
, 2000
Abstract

Cited by 18 (4 self)
Abstract. The IA-64 architecture defers floating point and integer division to software. To ensure correctness and maximum efficiency, Intel provides a number of recommended algorithms which can be called as subroutines or inlined by compilers and assembly language programmers. All these algorithms have been subjected to formal verification using the HOL Light theorem prover. As well as improving our level of confidence in the algorithms, the formal verification process has led to a better understanding of the underlying theory, allowing some significant efficiency improvements.
Correctness Proofs Outline for Newton-Raphson Based Floating-Point Divide and Square Root Algorithms
Abstract

Cited by 16 (0 self)
This paper describes a study of a class of algorithms for the floating-point divide and square root operations, based on the Newton-Raphson iterative method. The two main goals were: (1) Proving the IEEE correctness of these iterative floating-point algorithms, i.e. compliance with the IEEE-754 standard for binary floating-point operations [1]. The focus was on software-driven iterative algorithms, instead of the hardware-based implementations that dominated until now. (2) Identifying the special cases of operands that require software assistance due to possible overflow, underflow, or loss of precision of intermediate results. This study was initiated in an attempt to prove the IEEE correctness for a class of divide and square root algorithms based on the Newton-Raphson iterative method. As more insight into the inner workings of these algorithms was gained, it became obvious that a formal study and proof were necessary in order to achieve the desired objectives. The result is a ...
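The Newton-Raphson reciprocal iteration this abstract refers to can be sketched as follows. The seed value and iteration count here are illustrative choices, not taken from the paper; real implementations start from a small hardware-supplied approximation.

```python
def recip_newton(b, y0, iters=5):
    # Newton-Raphson iteration for 1/b:
    #   y_{k+1} = y_k * (2 - b * y_k)
    # The relative error is squared at each step, so every
    # iteration roughly doubles the number of correct bits.
    y = y0
    for _ in range(iters):
        y = y * (2.0 - b * y)
    return y

# Divide a/b via a refined reciprocal approximation; a crude seed
# of 1/128 suffices for b = 113 (illustrative values only).
a, b = 355.0, 113.0
q = a * recip_newton(b, 1.0 / 128.0)
```

Note that the final multiplication still rounds, which is exactly why guaranteeing an IEEE-correctly-rounded quotient requires the careful last-step analysis (and formal proof) that the papers in this listing address.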
Formal Verification of the VAMP Floating Point Unit
 In CHARME 2001, volume 2144 of LNCS
, 2001
Abstract

Cited by 11 (6 self)
We report on the formal verification of the floating point unit used in the VAMP processor. The FPU is fully IEEE compliant, and supports denormals and exceptions in hardware. The supported operations are addition, subtraction, multiplication, division, comparison, and conversions. The hardware is verified on the gate level against a formal description of the IEEE standard by means of the theorem prover PVS.
Formal verification of square root algorithms
 Formal Methods in Systems Design
, 2003
Abstract

Cited by 9 (1 self)
Abstract. We discuss the formal verification of some low-level mathematical software for the Intel® Itanium® architecture. A number of important algorithms have been proven correct using the HOL Light theorem prover. After briefly surveying some of our formal verification work, we discuss in more detail the verification of a square root algorithm, which helps to illustrate why some features of HOL Light, in particular programmability, make it especially suitable for these applications. 1. Overview The Intel® Itanium® architecture is a new 64-bit architecture jointly developed by Intel and Hewlett-Packard, implemented in the Itanium® processor family (IPF). Among the software supplied by Intel to support IPF processors are some optimized mathematical functions to supplement or replace less efficient generic libraries. Naturally, the correctness of the algorithms used in such software is always a major concern. This is particularly so for division, square root and certain transcendental function kernels, which are intimately tied to the basic architecture. First, in IA-32 compatibility mode, these algorithms are used by hardware instructions like fptan and fdiv. And while in “native” mode, division and square root are implemented in software, typical users are likely to see them as part of the basic architecture. The formal verification of some of the division algorithms is described by Harrison (2000b), and a representative verification of a transcendental function by Harrison (2000a). In this paper we complete the picture by considering a square root algorithm. Division, transcendental functions and square roots all have quite distinctive features and their formal verifications differ widely from each other. The present proofs have a number of interesting features, and show how important some theorem prover features, in particular programmability, are. The formal verifications are conducted using the freely available HOL Light prover (Harrison, 1996).
HOL Light is a version of HOL (Gordon and Melham, 1993), itself a descendant of Edinburgh LCF.
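Software square root algorithms of the kind verified here are typically built around a Newton-Raphson iteration for the reciprocal square root. A minimal sketch, with an illustrative seed rather than the actual hardware table lookup:

```python
def rsqrt_newton(b, y0, iters=5):
    # Newton-Raphson iteration for 1/sqrt(b):
    #   y_{k+1} = y_k * (3 - b * y_k^2) / 2
    # which converges quadratically from a reasonable seed.
    y = y0
    for _ in range(iters):
        y = y * (3.0 - b * y * y) / 2.0
    return y

# sqrt(b) is then recovered as b * (1/sqrt(b)); illustrative values.
b = 2.0
s = b * rsqrt_newton(b, 0.7)
```

As with division, the interesting (and formally verified) part of the real algorithms is the final rounding analysis, not the iteration itself.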
Floating-point verification using theorem proving
 Formal Methods for Hardware Verification, 6th International School on Formal Methods for the Design of Computer, Communication, and Software Systems, SFM 2006, volume 3965 of Lecture Notes in Computer Science
, 2006
Abstract

Cited by 7 (1 self)
Abstract. This chapter describes our work on formal verification of floating-point algorithms using the HOL Light theorem prover.
IA-64 Floating-Point Operations and the IEEE Standard for Binary Floating-Point Arithmetic
Abstract

Cited by 4 (0 self)
This paper examines the implementation of floating-point operations in the IA-64 architecture from the perspective of the IEEE Standard for Binary Floating-Point Arithmetic [1]. The floating-point data formats, operations, and special values are compared with the mandatory or recommended ones from the IEEE Standard, showing the potential gains in performance that result from specific choices. Two subsections are dedicated to the floating-point divide, remainder, and square root operations, which are implemented in software. It is shown how IEEE compliance was achieved using new IA-64 features such as fused multiply-add operations, predication, and multiple status fields for IEEE status flags. Derived integer operations (the integer divide and remainder) are also illustrated. IA-64 floating-point exceptions and traps are described, including the Software Assistance faults and traps that can lead to further IEEE-defined exceptions. The software extensions to the hardware needed to comply with the IEEE Standard's recommendations in handling floating-point exceptions are specified. The special case of the Single Instruction Multiple Data (SIMD) instructions is described. Finally, a subsection is dedicated to speculation, a new feature in IA processors.
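The role of the fused multiply-add in achieving IEEE-correct division can be illustrated with a Markstein-style final correction step. Since plain Python lacks an fma primitive in older versions, the sketch below emulates one exactly with rational arithmetic; the emulation and the test values are illustrative assumptions, not code from the paper.

```python
from fractions import Fraction

def fma(x, y, z):
    # Emulated fused multiply-add: the product and sum are computed
    # exactly, then rounded once (CPython rounds a Fraction to the
    # nearest float on conversion).
    return float(Fraction(x) * Fraction(y) + Fraction(z))

def div_correct(a, b):
    # Markstein-style final step: given a correctly rounded
    # reciprocal y and a quotient estimate q, the fma computes the
    # residual a - b*q with a single rounding, and q + r*y then
    # recovers the correctly rounded quotient.
    y = 1.0 / b          # correctly rounded reciprocal
    q = a * y            # quotient estimate, possibly off by an ulp
    r = fma(-b, q, a)    # residual with no intermediate rounding
    return fma(r, y, q)
```

On IA-64 the reciprocal seed comes from the frcpa instruction and the whole sequence is a handful of fma operations; proving that such sequences always round correctly is precisely what the verification papers in this listing address.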
Floating-point verification
 International Journal of Man-Machine Studies
, 1995
Abstract

Cited by 3 (0 self)
Abstract: This paper overviews the application of formal verification techniques to hardware in general, and to floating-point hardware in particular. A specific challenge is to connect the usual mathematical view of continuous arithmetic operations with the discrete world, in a credible and verifiable way.
A precision and range independent tool for testing floating-point arithmetic I: basic operations, square root and remainder
, 1999
Abstract

Cited by 2 (0 self)
This paper introduces a precision and range independent tool for testing the compliance of hardware or software implementations of (multiprecision) floating-point arithmetic with the principles of the IEEE standards 754 and 854. The tool consists of a driver program, offering many options to test only specific aspects of the IEEE standards, and a large set of test vectors, encoded in a precision-independent syntax to allow the testing of basic and extended hardware formats as well as multiprecision floating-point implementations.
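In the same spirit, a single test vector for a basic operation can be checked against a correctly rounded reference computed in exact rational arithmetic. This tiny oracle is a sketch, not the tool's actual driver or vector syntax; it relies on CPython rounding a Fraction to the nearest float on conversion.

```python
from fractions import Fraction

def add_is_correctly_rounded(a, b):
    # Reference result: exact rational sum, rounded once to the
    # nearest double. Comparing against the machine's a + b checks
    # IEEE-754 round-to-nearest compliance for this vector.
    exact = Fraction(a) + Fraction(b)
    return (a + b) == float(exact)

# A few finite example vectors: an exactly representable sum, a sum
# that must be rounded, and a tie exercising round-to-nearest-even.
vectors = [(1.0, 2.0 ** -52), (0.1, 0.2), (1.0, 2.0 ** -53)]
ok = all(add_is_correctly_rounded(a, b) for a, b in vectors)
```

A real test suite would additionally cover directed rounding modes, exception flags, and boundary cases near overflow and underflow, as the paper's vector set does.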
Isolating critical cases for reciprocals using integer factorization
Abstract

Cited by 2 (0 self)
One approach to testing and/or proving correctness of a floating-point algorithm computing a function f is based on finding input floating-point numbers a such that the exact result f(a) is very close to a “rounding boundary”, i.e. a floating-point number or a midpoint between them. In the present paper we show how to do this for the reciprocal function by utilizing prime factorizations. We present the method and show examples, as well as making a fairly detailed study of its expected and worst-case behavior. We point out how this analysis of reciprocals can be useful in analyzing certain reciprocal algorithms, and also show how the approach can be trivially adapted to the reciprocal square root function.
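For intuition, the rounding-boundary distance the paper minimizes can be computed exhaustively at a toy precision. This brute-force search is an illustrative stand-in for the paper's prime-factorization method, which avoids enumeration entirely.

```python
from fractions import Fraction

def hardest_reciprocals(p, top=3):
    # For each p-bit significand a = m / 2^(p-1) in (1, 2), measure
    # how far the exact 1/a (which lies in (1/2, 1)) is from the
    # nearest rounding boundary, i.e. the nearest multiple of half
    # an ulp (the ulp in that binade is 2^-p). The smallest
    # distances mark the "critical cases" for a correctly rounding
    # reciprocal implementation.
    half_ulp = Fraction(1, 2 ** (p + 1))
    scored = []
    for m in range(2 ** (p - 1) + 1, 2 ** p):
        r = Fraction(2 ** (p - 1), m)            # exact 1/a
        k = round(r / half_ulp)                  # nearest boundary
        dist = abs(r - k * half_ulp) / half_ulp  # in half-ulps
        scored.append((dist, m))
    scored.sort()
    return scored[:top]
```

An exhaustive scan is feasible only for toy precisions; for double precision the search space makes the paper's number-theoretic approach essential.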