A Machine-Checked Theory of Floating Point Arithmetic
, 1999
Intel is applying formal verification to various pieces of mathematical software used in Merced, the first implementation of the new IA-64 architecture. This paper discusses the development of a generic floating point library giving definitions of the fundamental terms and containing formal proofs of important lemmas. We also briefly describe how this has been used in the verification effort so far.

1 Introduction

IA-64 is a new 64-bit computer architecture jointly developed by Hewlett-Packard and Intel, and the forthcoming Merced chip from Intel will be its first silicon implementation. To avoid some of the limitations of traditional architectures, IA-64 incorporates a unique combination of features, including an instruction format encoding parallelism explicitly, instruction predication, and speculative/advanced loads [4]. Nevertheless, it also offers full upwards compatibility with IA-32 (x86) code. IA-64 incorporates a number of floating point operations, the centerpi...
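The "fundamental terms" such a library must define start with the value denoted by a floating point triple of sign, exponent, and integer significand. As a hedged illustration only (the toy 4-bit format and the function name `fp_value` are my own, not from the paper), one way to state that definition exactly, using rationals to avoid any rounding:

```python
from fractions import Fraction

def fp_value(sign: int, exponent: int, fraction: int, precision: int) -> Fraction:
    """Exact real value of a floating point triple:
    (-1)^sign * fraction * 2^(exponent - precision + 1),
    where `fraction` is an integer significand of `precision` bits.
    Rationals keep the definition exact, as a formal theory would."""
    magnitude = Fraction(fraction) * Fraction(2) ** (exponent - precision + 1)
    return -magnitude if sign else magnitude

# In a toy 4-bit format, significand 0b1100 with exponent 0 denotes 3/2.
print(fp_value(0, 0, 0b1100, 4))
```

Lemmas about rounding and exactness are then proved against this kind of exact-valued definition rather than against any particular bit encoding.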
Formal verification of IA-64 division algorithms
 Proceedings, Theorem Proving in Higher Order Logics (TPHOLs), LNCS 1869
, 2000
Abstract. The IA-64 architecture defers floating point and integer division to software. To ensure correctness and maximum efficiency, Intel provides a number of recommended algorithms which can be called as subroutines or inlined by compilers and assembly language programmers. All these algorithms have been subjected to formal verification using the HOL Light theorem prover. As well as improving our level of confidence in the algorithms, the formal verification process has led to a better understanding of the underlying theory, allowing some significant efficiency improvements.
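The recommended division algorithms are more involved than this (they use fused multiply-add and a table-lookup seed instruction, and prove a correctly rounded final result), but their mathematical core is Newton-Raphson reciprocal refinement. A minimal sketch, with my own function names and an explicitly supplied seed standing in for the hardware table lookup:

```python
def newton_reciprocal(a: float, y0: float, iterations: int = 4) -> float:
    """Refine a reciprocal estimate y0 ~ 1/a with y <- y*(2 - a*y).
    Each step roughly squares the relative error (quadratic convergence),
    so a few iterations reach working precision from a crude seed."""
    y = y0
    for _ in range(iterations):
        y = y * (2.0 - a * y)
    return y

def soft_divide(b: float, a: float, y0: float) -> float:
    """b/a built from the refined reciprocal, the shape of a software
    division scheme; y0 would come from a hardware seed instruction."""
    return b * newton_reciprocal(a, y0)

# Example: 1/3 from the crude seed 0.3 converges to working precision.
print(soft_divide(1.0, 3.0, 0.3))
```

The verification burden in the real algorithms is precisely the part this sketch omits: showing that the final fused operations deliver the correctly rounded quotient for every input.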
Formal verification of square root algorithms
 Formal Methods in Systems Design
, 2003
Abstract. We discuss the formal verification of some low-level mathematical software for the Intel® Itanium® architecture. A number of important algorithms have been proven correct using the HOL Light theorem prover. After briefly surveying some of our formal verification work, we discuss in more detail the verification of a square root algorithm, which helps to illustrate why some features of HOL Light, in particular programmability, make it especially suitable for these applications.

1. Overview

The Intel® Itanium® architecture is a new 64-bit architecture jointly developed by Intel and Hewlett-Packard, implemented in the Itanium® processor family (IPF). Among the software supplied by Intel to support IPF processors are some optimized mathematical functions to supplement or replace less efficient generic libraries. Naturally, the correctness of the algorithms used in such software is always a major concern. This is particularly so for division, square root and certain transcendental function kernels, which are intimately tied to the basic architecture. First, in IA-32 compatibility mode, these algorithms are used by hardware instructions like fptan and fdiv. And while in “native” mode, division and square root are implemented in software, typical users are likely to see them as part of the basic architecture. The formal verification of some of the division algorithms is described by Harrison (2000b), and a representative verification of a transcendental function by Harrison (2000a). In this paper we complete the picture by considering a square root algorithm. Division, transcendental functions and square roots all have quite distinctive features and their formal verifications differ widely from each other. The present proofs have a number of interesting features, and show how important some theorem prover features — in particular programmability — are. The formal verifications are conducted using the freely available HOL Light prover (Harrison, 1996). HOL Light is a version of HOL (Gordon and Melham, 1993), itself a descendant of Edinburgh LCF.
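Software square root on such architectures is typically built on a division-free Newton-Raphson iteration for the reciprocal square root. As a hedged sketch of that mathematical core only (the verified Itanium algorithms add fused multiply-add steps and a proof of correct rounding; the function name and seed are mine):

```python
def newton_sqrt(a: float, y0: float, iterations: int = 5) -> float:
    """Approximate sqrt(a) via the reciprocal-square-root iteration
    y <- y * (3 - a*y*y) / 2, finishing with sqrt(a) ~ a * y.
    The recurrence needs no division (the halving is exact in binary),
    which is why variants of it suit software square root schemes."""
    y = y0
    for _ in range(iterations):
        y = y * (3.0 - a * y * y) / 2.0
    return a * y

# sqrt(2) from the crude seed 0.7 (true 1/sqrt(2) is about 0.7071).
print(newton_sqrt(2.0, 0.7))
```

The delicate part a formal proof must handle is not this convergence argument but the final rounding step: showing the last fused operations yield the correctly rounded square root for every representable input.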
Choosing Starting Values for Newton-Raphson Computation of Reciprocals, Square-Roots and Square-Root Reciprocals
, 2003
We aim at finding the best possible seed values when computing reciprocals, square roots and square-root reciprocals in a given interval using Newton-Raphson iterations. A natural choice of the seed value would be the one that best approximates the expected result. It turns out that in most cases, the best seed value can be quite far from this natural choice. When we evaluate a monotone function f(a) in the interval [a_min, a_max] by building the sequence x_n defined by the Newton-Raphson iteration, the natural choice consists in choosing x_0 equal to the arithmetic mean of the endpoint values. This minimizes the maximum possible distance between x_0 and f(a). And yet, if we perform n iterations, what matters is to minimize the maximum possible distance between x_n and f(a).
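The effect is easy to reproduce numerically. The sketch below (my own illustration, not code from the paper) takes f(a) = 1/a on [1, 2]: the natural seed is the midpoint of the endpoint values 1 and 1/2, i.e. 0.75, yet a coarse scan shows the seed minimizing the worst-case error after one iteration lands elsewhere, near 1/sqrt(2):

```python
def newton_recip(a: float, x0: float, n: int) -> float:
    # n Newton-Raphson steps for 1/a starting from seed x0.
    x = x0
    for _ in range(n):
        x = x * (2.0 - a * x)
    return x

def max_err(x0: float, n: int, lo: float = 1.0, hi: float = 2.0,
            samples: int = 801) -> float:
    # Worst-case |x_n - 1/a| over a sampled grid of [lo, hi].
    worst = 0.0
    for i in range(samples):
        a = lo + (hi - lo) * i / (samples - 1)
        worst = max(worst, abs(newton_recip(a, x0, n) - 1.0 / a))
    return worst

# "Natural" seed: arithmetic mean of the endpoint values 1 and 1/2.
natural = 0.75
# Seed minimizing the error after one iteration, by a coarse scan;
# it lands near 1/sqrt(2) ~ 0.707, noticeably away from 0.75.
best = min((k / 1000.0 for k in range(500, 1001)),
           key=lambda s: max_err(s, 1))
print(natural, best, max_err(natural, 1), max_err(best, 1))
```

The scan confirms the paper's point in miniature: the seed that best approximates f(a) at the start is not the one that minimizes the error of x_n after iterating.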