Results 1–7 of 7
Formal Verification of the VAMP Floating Point Unit
 In CHARME 2001, volume 2144 of LNCS
, 2001
Abstract

Cited by 11 (6 self)
We report on the formal verification of the floating point unit used in the VAMP processor. The FPU is fully IEEE compliant, and supports denormals and exceptions in hardware. The supported operations are addition, subtraction, multiplication, division, comparison, and conversions. The hardware is verified on the gate level against a formal description of the IEEE standard by means of the theorem prover PVS.
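The hardware here is checked against a formal description of the IEEE standard. As a rough illustration of what such a rounding specification looks like, here is a toy Python reference model using exact rationals (function names are assumptions; this ignores exponent range, subnormals, and special values, and is in no way the PVS formalization itself):

```python
from fractions import Fraction

def ilog2(x: Fraction) -> int:
    """Largest e with 2**e <= x, for x > 0."""
    e = x.numerator.bit_length() - x.denominator.bit_length()
    if Fraction(2) ** e > x:
        e -= 1
    return e

def rne_spec(x: Fraction, p: int) -> Fraction:
    """Round positive x to p significand bits: nearest, ties to even."""
    if x == 0:
        return x
    ulp = Fraction(2) ** (ilog2(x) - p + 1)   # weight of the last kept bit
    q, r = divmod(x, ulp)                     # q ulps plus remainder r
    if 2 * r > ulp or (2 * r == ulp and q % 2 == 1):
        q += 1                                # round up / break tie to even
    return q * ulp
```

For example, `rne_spec(Fraction(10, 3), 3)` yields 7/2: with three significand bits the grid step around 10/3 is 1/2, and 3.5 is nearer than 3.0.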
A comparison of three rounding algorithms for IEEE floating-point multiplication
, 1998
Abstract

Cited by 10 (2 self)
A new IEEE-compliant floating-point rounding algorithm for computing the rounded product from a carry-save representation of the product is presented. The new rounding algorithm is compared with the rounding algorithms of Yu and Zyner [23] and of Quach et al. [18]. For each rounding algorithm, a logical description and a block diagram are given and the latency is analyzed. We conclude that the new rounding algorithm is the fastest, provided that an injection (which depends only on the rounding mode and the sign) can be added in during the reduction of the partial products into a carry-save encoded digit string. In double precision the latency of the new rounding algorithm is 12 logic levels, compared to 14 logic levels in the algorithm of Quach et al. and 16 logic levels in the algorithm of Yu and Zyner.
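The injection technique can be sketched in software: instead of inspecting guard, round, and sticky bits after the fact, a constant depending only on the rounding mode (and sign) is added to the exact product, after which rounding is plain truncation plus a one-bit tie fix-up for round-to-nearest-even. A toy Python model on an integer significand with guard bits (function and mode names are assumptions; real hardware folds the injection into the partial-product reduction rather than adding it afterwards):

```python
def inject_round(prod: int, extra: int, mode: str) -> int:
    """Round a non-negative significand with `extra` low guard bits by
    adding a mode-dependent injection and then truncating."""
    half = 1 << (extra - 1)          # one half of the last kept ulp
    ones = (1 << extra) - 1          # just under one full ulp
    inj = {"nearest": half, "away": ones, "zero": 0}[mode]
    result = (prod + inj) >> extra   # add injection, then truncate
    if mode == "nearest" and (prod & ones) == half:
        result &= ~1                 # exact tie: force last bit even
    return result
```

With three guard bits, `0b1010101` (10.625 in eighths) rounds to 11 under `"nearest"`, while the tie `0b1010100` (10.5) lands on the even value 10.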
Emulation of FMA and Correctly-Rounded Sums: Proved Algorithms Using Rounding to Odd
 IEEE Trans. Computers
, 2008
Abstract

Cited by 8 (0 self)
Rounding to odd is a nonstandard rounding mode on floating-point numbers. By using it for some intermediate values instead of rounding to nearest, correctly rounded results can be obtained at the end of computations. We present an algorithm to emulate the fused multiply-and-add operator. We also present an iterative algorithm for computing the correctly rounded sum of a set of floating-point numbers under mild assumptions. A variation on both previous algorithms computes the correctly rounded sum of any three floating-point numbers. This leads to efficient implementations, even when this rounding mode is not available. In order to guarantee the correctness of these properties and algorithms, we formally proved them using the Coq proof checker.
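The double-rounding hazard that rounding to odd removes can be seen with exact rationals. A small Python sketch (names assumed: `rne` is round-to-nearest-even to a given grid, `rto` is round-to-odd):

```python
from fractions import Fraction

def rne(x: Fraction, ulp: Fraction) -> Fraction:
    """Round x to a multiple of ulp: nearest, ties to even."""
    q, r = divmod(x, ulp)
    if 2 * r > ulp or (2 * r == ulp and q % 2 == 1):
        q += 1
    return q * ulp

def rto(x: Fraction, ulp: Fraction) -> Fraction:
    """Round to odd: truncate, then force the last bit odd if inexact."""
    q, r = divmod(x, ulp)
    if r != 0 and q % 2 == 0:
        q += 1
    return q * ulp

x = Fraction(21, 8)                      # 2.625
wide, final = Fraction(1, 4), Fraction(1)

# Naive double rounding: 2.625 -> 2.5 (a created tie, broken to even) -> 2,
# but direct rounding of 2.625 to integers gives 3.
# Round-to-odd intermediate: 2.625 -> 2.75 -> 3, the correct result.
```

The odd intermediate can never sit exactly on a tie of the final grid, so the second rounding cannot be misled.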
Performing Arithmetic Operations on Round-to-Nearest Representations
, 2008
Abstract

Cited by 1 (1 self)
During any composite computation there is a constant need for rounding intermediate results before they can participate in further processing. Recently a class of number representations denoted RN-Codings was introduced, allowing rounding to take place by simple truncation, with the additional property that problems with double roundings are avoided. In this paper we first investigate a particular encoding of the binary representation. This encoding is generalized to any radix and digit set; however, radix-complement representations for even values of the radix turn out to be particularly feasible. The encoding is essentially an ordinary radix-complement representation with an appended round bit, but it still allows rounding by truncation, and double rounding without error. Conversions from radix complement to these round-to-nearest representations can be performed in constant time, whereas conversion the other way in general takes at least logarithmic time. Addition and multiplication on such fixed-point representations are analyzed and defined in such a way that rounding information can be carried along in a meaningful way.
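The round-bit idea can be modeled in a few lines. This is a simplified Python sketch under assumed semantics (the pair (X, r) stands for (X + r/2) ulps, with ties resolved upward), not the paper's canonical RN-Coding:

```python
from fractions import Fraction

def encode(x: Fraction, u: Fraction):
    """Encode exact x at ulp u as (X, r): X = floor(x/u), round bit r set
    when the discarded part is at least half an ulp."""
    X, rem = divmod(x, u)
    return X, 1 if 2 * rem >= u else 0

def value(X: int, r: int, u: Fraction) -> Fraction:
    """Assumed semantics: the pair stands for (X + r/2) * u."""
    return (X + Fraction(r, 2)) * u

def truncate(pair):
    """Drop the low bit of X; it becomes the new round bit (the old round
    bit, half an ulp below the cut, is simply discarded).  No carries."""
    X, _ = pair
    return X >> 1, X & 1

def final_round(pair) -> int:
    """Finish rounding to whole ulps: add the round bit (ties go upward)."""
    X, r = pair
    return X + r

# 2.625 encoded at ulp 1/4, then truncated twice down to ulp 1:
# (10, 1) -> (5, 0) -> (2, 1); final_round gives 3, the correctly
# rounded integer result, where naive step-by-step rounding would give 2.
```

Each truncation is itself a rounding step, yet on these examples the chain agrees with rounding the original value directly, which is the double-rounding-free behavior the abstract describes.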
Type System Support for Floating-Point Computation
, 2001
Abstract
Floating-point arithmetic is often seen as untrustworthy. We show how manipulating precisions according to the following rules of thumb enhances the reliability of calculations and removes surprises from them: store data narrowly, compute intermediates widely, and derive properties widely. Further, we describe a typing system for floating point that both supports and is supported by these rules. A single type is established for all intermediate computations. The type describes a precision at least as wide as all inputs to and results from the computation. Picking a single type provides benefits to users, compilers, and interpreters. The type system also extends cleanly to encompass intervals and higher precisions.
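The first two rules of thumb can be demonstrated directly. A Python sketch using binary32 as the narrow storage type and Python's binary64 floats as the wide intermediate type (`to_f32` is a helper defined here, not a library call):

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (binary64) to binary32, the narrow storage type."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Sixteen tiny addends that individually vanish in float32 next to 1.0,
# because 2**-25 is below half an ulp of 1.0 in binary32 (ulp = 2**-23).
data = [to_f32(1.0)] + [to_f32(2.0 ** -25)] * 16

# Compute intermediates widely: accumulate in binary64, round once to store.
wide = to_f32(sum(data))            # 1 + 16*2**-25 = 1 + 2**-21 survives

# Rounding every intermediate to the narrow type loses all sixteen addends.
narrow = data[0]
for d in data[1:]:
    narrow = to_f32(narrow + d)     # each step rounds back to exactly 1.0
```

The widely computed sum keeps the contribution of the small terms; the narrowly computed one returns exactly 1.0.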
Performing Arithmetic Operations on Round-to-Nearest Representations
Abstract
During any composite computation there is a constant need for rounding intermediate results before they can participate in further processing. Recently a class of number representations denoted RN-Codings was introduced, allowing an unbiased round-to-nearest to take place by simple truncation, with the property that problems with double roundings are avoided. In this paper we first investigate a particular encoding of the binary representation. This encoding is generalized to any radix and digit set; however, radix-complement representations for even values of the radix turn out to be particularly feasible. The encoding is essentially an ordinary radix-complement representation with an appended round bit, but it still allows rounding to nearest by truncation, thus avoiding problems with double roundings. Conversions from radix complement to these round-to-nearest representations can be performed in constant time, whereas conversion the other way in general takes at least logarithmic time. Not only is round-to-nearest a constant-time operation, but so is sign inversion; both are at best log-time operations on ordinary 2's-complement representations. Addition and multiplication on such fixed-point representations are first analyzed and defined in such a way that rounding information can be carried along in a meaningful way, at minimal cost. The analysis is carried through for a compact (canonical) encoding using 2's-complement representation, supplied with a round bit. Based on the fixed-point encoding it is shown to be possible to define floating-point representations, and a sketch of the implementation of an FPU is presented.
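The constant-time sign inversion can be illustrated with a toy encoding. Under the assumed semantics value(X, r) = X + (2r + 1)/4 ulps, a self-consistent choice made here for illustration and not necessarily the paper's canonical encoding, negation is a pure bitwise complement:

```python
from fractions import Fraction

def value(X: int, r: int) -> Fraction:
    """Interpret the pair (X, r) -- X in ulps plus a round bit r -- under
    the assumed semantics value = X + (2*r + 1)/4 ulps."""
    return X + Fraction(2 * r + 1, 4)

def negate(X: int, r: int):
    """Sign inversion by complementing every bit: ~X in 2's complement,
    and flipping the round bit.  Constant time: no carry propagation."""
    return ~X, 1 - r

def round_nearest(X: int, r: int) -> int:
    """Rounding to the nearest whole ulp is just adding the round bit;
    the quarter-ulp offset means a tie can never occur."""
    return X + r
```

Compare ordinary 2's complement, where negation (complement, then add one) can propagate a carry across the whole word.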