Results 1–6 of 6
A Semantics for Imprecise Exceptions
In SIGPLAN Conference on Programming Language Design and Implementation, 1999
Cited by 52 (6 self)

Abstract
Some modern superscalar microprocessors provide only imprecise exceptions. That is, they do not guarantee to report the same exception that would be encountered by a straightforward sequential execution of the program. In exchange, they offer increased performance or decreased chip area (which amount to much the same thing). This performance/precision tradeoff has not so far been much explored at the programming language level. In this paper we propose a design for imprecise exceptions in the lazy functional programming language Haskell. We discuss several designs, and conclude that imprecision is essential if the language is still to enjoy its current rich algebra of transformations. We sketch a precise semantics for the language extended with exceptions. The paper shows how to extend Haskell with exceptions without crippling the language or its compilers. We do not yet have enough experience of using the new mechanism to know whether it strikes an appropriate balance between expressiveness and performance.
Multiplications of Floating Point Expansions
In Proceedings of the 14th Symposium on Computer Arithmetic, I. Koren and P. Kornerup (Eds.), 1999
Cited by 5 (1 self)

Abstract
In modern computers, the floating-point unit is the part of the processor that delivers the highest computing power and receives the most attention from the design team. The performance of any multiple-precision application will be dramatically enhanced by adequate use of floating-point expansions. In this work we present three multiplication algorithms that are faster and more integrated than the stepwise algorithm proposed earlier. We have tested these new algorithms on an application that computes the determinant of a matrix. In the absence of overflow or underflow, the process is error-free and possibly more efficient than its integer-based counterpart.
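The "error-free" property mentioned in the abstract rests on classic error-free transformations such as Knuth's two-sum, in which a rounded floating-point sum and its exact rounding error together form a two-term expansion. A minimal Python sketch of that building block (an illustration of the underlying idea, not the paper's expansion-multiplication algorithms):

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    """Knuth's error-free transformation: return (s, e) such that
    s = fl(a + b) and s + e == a + b exactly (barring overflow)."""
    s = a + b
    b_virtual = s - a                     # the part of b absorbed into s
    a_virtual = s - b_virtual             # the part of a absorbed into s
    e = (b - b_virtual) + (a - a_virtual) # exact rounding error of a + b
    return s, e

# The rounding error lost by the naive sum is recovered exactly:
s, e = two_sum(1.0, 2.0**-60)
assert s == 1.0 and e == 2.0**-60
```

Chaining such transformations is what lets an expansion represent a value as an exact unevaluated sum of doubles.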
Provably faithful evaluation of polynomials
In Proceedings of the 21st Annual ACM Symposium on Applied Computing, 2006
Cited by 3 (1 self)

Abstract
We provide sufficient conditions that formally guarantee that the floating-point computation of a polynomial evaluation is faithful. To this end, we develop a formalization of floating-point numbers and rounding modes in the Prototype Verification System (PVS). Our work is based on a well-known formalization of floating-point arithmetic in the proof assistant Coq, where polynomial evaluation has already been studied. However, thanks to the powerful proof automation provided by PVS, the sufficient conditions proposed in our work are more general than the original ones.
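As a concrete picture of the object being verified (not of the PVS development itself): Horner's rule evaluates a polynomial with one multiply and one add per coefficient, and the cited conditions bound when its floating-point result is faithful, i.e. one of the two floats adjacent to the exact value. A short Python sketch:

```python
def horner(coeffs: list[float], x: float) -> float:
    """Evaluate coeffs[0] + coeffs[1]*x + ... + coeffs[n]*x**n
    by Horner's rule, innermost coefficient first."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# p(x) = 1 + 2x + 3x^2 at x = 0.5: every intermediate is exact in
# binary64 here, so the result 2.75 is not merely faithful but exact.
assert horner([1.0, 2.0, 3.0], 0.5) == 2.75
```

In general each step incurs a rounding error; the paper's conditions relate those accumulated errors to the spacing of floats around the exact result.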
A program for testing IEEE decimal-binary conversion
1991
Cited by 2 (0 self)

Abstract
Regardless of how accurately a computer performs floating-point operations, if the data to operate on must first be converted from the decimal-based representation used by humans into the internal representation used by the machine, then errors in that conversion will irrevocably pollute the results of subsequent ...
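The kind of property such a test program exercises can be sketched in modern Python, whose repr of a float is the shortest decimal string that converts back to the same binary64 value under correct rounding (the helper name here is illustrative, not taken from the cited program):

```python
def round_trips(x: float) -> bool:
    """A correctly rounded binary -> decimal -> binary conversion
    must reproduce the original double bit-for-bit."""
    return float(repr(x)) == x

# Hard cases sit near halfway points between representable doubles
# and at the extremes of the exponent range.
for v in (0.1, 1e22, 2.0**-1074, 2.0**1023):
    assert round_trips(v)
```

Testing conversions means hunting for decimal inputs whose exact value falls almost exactly halfway between two adjacent doubles, where a sloppy conversion rounds the wrong way.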
Division and Modulus for Computer Scientists
Abstract
Introduction. There exist many definitions of the div and mod functions in the computer science literature and in programming languages. Boute (Boute, 1992) describes most of these and discusses their mathematical properties in depth. We shall therefore only briefly review the most common definitions and the rare, but mathematically elegant, Euclidean division. We also give an algorithm for the Euclidean div and mod functions and prove it correct with respect to Euclid's theorem. 1.1 Common definitions. Most common definitions are based on the following mathematical definition. For any two real numbers D (dividend) and d (divisor) with d ≠ 0, there exists a pair of numbers q (quotient) and r (remainder) that satisfy the following basic conditions of division: (1) q ∈ Z (the quot ...
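The Euclidean definition singled out above requires a remainder satisfying 0 ≤ r < |d| for every sign combination of D and d. A Python sketch of that definition (an illustration, not the paper's own algorithm or proof): Python's built-in divmod implements floored division, whose remainder takes the sign of the divisor, so only a negative divisor needs adjusting.

```python
def euclid_divmod(D: int, d: int) -> tuple[int, int]:
    """Euclidean division: D == d*q + r with 0 <= r < abs(d)."""
    if d == 0:
        raise ZeroDivisionError("division by zero")
    q, r = divmod(D, d)   # Python's floored division: r has the sign of d
    if r < 0:             # only possible when d < 0 and d does not divide D
        q, r = q + 1, r - d
    return q, r

# All four sign combinations yield a non-negative remainder:
assert euclid_divmod(7, 2) == (3, 1)
assert euclid_divmod(-7, 2) == (-4, 1)
assert euclid_divmod(7, -2) == (-3, 1)
assert euclid_divmod(-7, -2) == (4, 1)
```

By contrast, floored division gives (-7) mod (-2) = -1 and truncated division gives (-7) mod 2 = -1, which is why the Euclidean variant is considered the mathematically cleanest of the three.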
Decimal Floating-Point: Algorism for Computers
In Proceedings of the 16th IEEE Symposium on Computer Arithmetic, 2003
Abstract
Decimal arithmetic is the norm in human calculations, and human-centric applications must use a decimal floating-point arithmetic to achieve the same results.
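Python's standard decimal module provides a decimal floating-point arithmetic in this spirit; the snippet below illustrates the binary/decimal discrepancy the abstract alludes to, not the cited paper's own design:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1, 0.2, or 0.3 exactly,
# so the familiar schoolbook identity fails:
assert 0.1 + 0.2 != 0.3

# Decimal floating point reproduces the human calculation exactly:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```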