Results 1 - 8 of 8
Formal Verification of the VAMP Floating Point Unit
In CHARME 2001, volume 2144 of LNCS, 2001
Abstract

Cited by 13 (6 self)
We report on the formal verification of the floating point unit used in the VAMP processor. The FPU is fully IEEE compliant, and supports denormals and exceptions in hardware. The supported operations are addition, subtraction, multiplication, division, comparison, and conversions. The hardware is verified at the gate level against a formal description of the IEEE standard by means of the theorem prover PVS.
A Comparison of Three Rounding Algorithms for IEEE Floating-Point Multiplication
IEEE Transactions on Computers, 2000
Emulation of a FMA and Correctly-Rounded Sums: Proved Algorithms Using Rounding to Odd
IEEE Trans. Computers, 2008
Abstract

Cited by 8 (0 self)
Rounding to odd is a nonstandard rounding on floating-point numbers. By using it for some intermediate values instead of rounding to nearest, correctly rounded results can be obtained at the end of computations. We present an algorithm to emulate the fused multiply-and-add operator. We also present an iterative algorithm for computing the correctly rounded sum of a set of floating-point numbers under mild assumptions. A variation on both previous algorithms computes the correctly rounded sum of any three floating-point numbers. This leads to efficient implementations, even when this rounding is not available. In order to guarantee the correctness of these properties and algorithms, we formally proved them using the Coq proof checker.
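As a rough illustration of the rounding-to-odd idea (a sketch only, not the authors' Coq-verified algorithms; the helper names `two_sum` and `add_round_to_odd` are ours), round-to-odd addition of two binary64 numbers can be emulated with Knuth's TwoSum error-free transformation:

```python
import math
import struct

def two_sum(a, b):
    # Knuth's error-free transformation: s + e == a + b exactly
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def last_bit_even(x):
    # parity of the last significand bit of the IEEE-754 binary64 encoding
    return struct.unpack("<Q", struct.pack("<d", x))[0] & 1 == 0

def add_round_to_odd(a, b):
    # Round-to-odd addition: an exact sum is returned unchanged; an inexact
    # sum is forced onto the neighbouring float whose last bit is 1.
    s, e = two_sum(a, b)
    if e != 0.0 and last_bit_even(s):
        s = math.nextafter(s, math.inf if e > 0.0 else -math.inf)
    return s
```

The TwoSum step recovers the exact error of the round-to-nearest addition; when the sum is inexact and landed on an even float, the result is nudged to its odd neighbour on the side of the true value.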
Performing Arithmetic Operations on Round-to-Nearest Representations
2008
Abstract

Cited by 1 (1 self)
During any composite computation there is a constant need for rounding intermediate results before they can participate in further processing. Recently a class of number representations denoted RN-Codings was introduced, allowing rounding to take place by a simple truncation, with the additional property that problems with double-roundings are avoided. In this paper we first investigate a particular encoding of the binary representation. This encoding is generalized to any radix and digit set; however, radix-complement representations for even values of the radix turn out to be particularly feasible. The encoding is essentially an ordinary radix-complement representation with an appended round-bit, but still allows rounding by truncation and double-rounding without errors. Conversions from radix complement to these round-to-nearest representations can be performed in constant time, whereas conversion in the other direction in general takes at least logarithmic time. Addition and multiplication on such fixed-point representations are analyzed and defined in such a way that rounding information can be carried along in a meaningful way.
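The double-rounding hazard these representations are designed to avoid can be demonstrated with exact rational arithmetic (a self-contained illustration; `round_nearest` is our own helper, not from the paper):

```python
from fractions import Fraction
import math

def round_nearest(x, step):
    # round x to the nearest multiple of `step`, ties away from zero
    q = x / step
    if q >= 0:
        n = math.floor(q + Fraction(1, 2))
    else:
        n = -math.floor(-q + Fraction(1, 2))
    return n * step

x = Fraction(2451, 1000)                # 2.451 exactly
once = round_nearest(x, Fraction(1))    # single rounding: 2.451 -> 2
twice = round_nearest(round_nearest(x, Fraction(1, 10)), Fraction(1))
# double rounding: 2.451 -> 2.5 -> 3, differing from the single rounding
```

Rounding the intermediate result to one decimal place moves 2.451 up to the tie point 2.5, and the second rounding then lands on 3 instead of the correctly rounded 2.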
unknown title
Abstract
comparison of three rounding algorithms for IEEE floating-point multiplication
Performing Arithmetic Operations on Round-to-Nearest Representations
Abstract
During any composite computation there is a constant need for rounding intermediate results before they can participate in further processing. Recently a class of number representations denoted RN-Codings was introduced, allowing an unbiased rounding-to-nearest to take place by a simple truncation, with the property that problems with double-roundings are avoided. In this paper we first investigate a particular encoding of the binary representation. This encoding is generalized to any radix and digit set; however, radix-complement representations for even values of the radix turn out to be particularly feasible. The encoding is essentially an ordinary radix-complement representation with an appended round-bit, but still allows rounding to nearest by truncation, thus avoiding problems with double-roundings. Conversions from radix complement to these round-to-nearest representations can be performed in constant time, whereas conversion in the other direction in general takes at least logarithmic time. Not only is rounding-to-nearest a constant-time operation, but so is sign inversion; both are at best log-time operations on ordinary 2's-complement representations. Addition and multiplication on such fixed-point representations are first analyzed and defined in such a way that rounding information can be carried along in a meaningful way, at minimal cost. The analysis is carried through for a compact (canonical) encoding using 2's-complement representation, supplied with a round-bit. Based on the fixed-point encoding it is shown possible to define floating-point representations, and a sketch of the implementation of an FPU is presented.
Type System Support for Floating-Point Computation
2001
Abstract
Floating-point arithmetic is often seen as untrustworthy. We show how manipulating precisions according to the following rules of thumb enhances the reliability of and removes surprises from calculations: store data narrowly, compute intermediates widely, and derive properties widely. Further, we describe a typing system for floating point that both supports and is supported by these rules. A single type is established for all intermediate computations. The type describes a precision at least as wide as all inputs to and results from the computation. Picking a single type provides benefits to users, compilers, and interpreters. The type system also extends cleanly to encompass intervals and higher precisions.
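A minimal sketch of the "store data narrowly, compute intermediates widely" rule, simulating binary32 storage with a 4-byte round-trip (the helper `to_f32` is our own; the paper's type system is not reproduced here):

```python
import struct

def to_f32(x):
    # simulate narrow (binary32) storage via a round-trip through 4 bytes
    return struct.unpack("<f", struct.pack("<f", x))[0]

data = [to_f32(0.1)] * 10_000        # inputs stored narrowly

# Narrow accumulation: every partial sum is rounded back to binary32,
# so rounding error can grow with the number of terms.
narrow = 0.0
for v in data:
    narrow = to_f32(narrow + v)

# Wide accumulation: Python floats are binary64, so all intermediates
# are wide; round back to the narrow storage format only once at the end.
wide = to_f32(sum(data))
```

Accumulating in the wide format and rounding once keeps the final error within about one binary32 ulp of the exact sum, illustrating why intermediates should be computed more widely than the stored data.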