Results 1-10 of 18
ACV: An Arithmetic Circuit Verifier
In Int'l Conf. on CAD, 1996
Cited by 26 (5 self)
Abstract:
Based on a hierarchical verification methodology, we present an arithmetic circuit verifier ACV, in which circuits expressed in a hardware description language, also called ACV, are symbolically verified using Binary Decision Diagrams for Boolean functions and multiplicative Binary Moment Diagrams (*BMDs) for word-level functions. A circuit is described in ACV as a hierarchy of modules. Each module has a structural definition as an interconnection of logic gates and other modules. Modules may also have functional descriptions, declaring the numeric encodings of the inputs and outputs, as well as specifying their functionality in terms of arithmetic expressions. Verification then proceeds recursively, proving that each module in the hierarchy having a functional description, including the top-level one, realizes its specification. The language and the verifier contain additional enhancements for overcoming some of the difficulties in applying *BMD-based verification to circuits computing...
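The module-vs-specification check that ACV performs symbolically with BDDs and *BMDs can be illustrated at a small word size by exhaustive simulation instead (a sketch; the gate network and bit encodings below are illustrative, not ACV syntax):

```python
# A gate-level 4-bit ripple-carry adder checked against its word-level
# specification (output encodes a + b). ACV does this symbolically; here we
# simply enumerate all inputs, which is feasible only at tiny word sizes.

def full_adder(a, b, cin):
    s = a ^ b ^ cin                      # sum bit
    cout = (a & b) | (cin & (a ^ b))     # carry out
    return s, cout

def ripple_add(a_bits, b_bits):
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):     # LSB first
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

def encode(x, n=4):
    return [(x >> i) & 1 for i in range(n)]

def decode(bits):
    return sum(b << i for i, b in enumerate(bits))

# Word-level spec: the 5-bit result (carry as MSB) must equal a + b.
for a in range(16):
    for b in range(16):
        s_bits, cout = ripple_add(encode(a), encode(b))
        assert decode(s_bits + [cout]) == a + b
```

The brute-force loop stands in for the symbolic check: BDDs/*BMDs prove the same equality for all inputs without enumerating them.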
Bit-Level Analysis of an SRT Divider Circuit
In Proceedings of the 33rd Design Automation Conference, pages 661-665, Las Vegas, NV, 1995
Cited by 24 (0 self)
Abstract:
It is impractical to verify multiplier or divider circuits entirely at the bit level using ordered Binary Decision Diagrams (BDDs), because the BDD representations for these functions grow exponentially with the word size. It is possible, however, to analyze individual stages of these circuits using BDDs. Such analysis can be helpful when implementing complex arithmetic algorithms. As a demonstration, we show that Intel could have used BDDs to detect erroneous lookup table entries in the Pentium(TM) floating point divider. Going beyond verification, we show that bit-level analysis can be used to generate a correct version of the table.
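The kind of stage-level table check the abstract describes can be sketched in exact arithmetic: for radix-4 SRT with digit set {-2,...,2}, verify that a nearest-digit selection rule keeps every next partial remainder within the legal bound. This sketch uses exact operands on a grid, omitting the truncated-estimate analysis a real PD lookup table requires:

```python
# Check a radix-4 SRT digit-selection rule: for every legal (P, D) pair on a
# grid, the chosen digit q must leave |4P - qD| within the redundancy bound.
from fractions import Fraction as F

RHO = F(2, 3)        # redundancy factor for digit set {-2,...,2} at radix 4

def select(p, d):
    """Nearest-digit selection, clamped to the digit set (illustrative rule)."""
    q = round(4 * p / d)
    return max(-2, min(2, q))

# Sweep divisors in [1, 2) and all legal partial remainders on a 1/32 grid.
for i in range(16):
    d = 1 + F(i, 16)
    bound = RHO * d                       # legal remainder range: |P| <= (2/3)D
    k = 0
    while F(k, 32) <= bound:
        for p in (F(k, 32), -F(k, 32)):
            q = select(p, d)
            assert abs(4 * p - q * d) <= bound   # next remainder still legal
        k += 1
```

A faulty table entry (like the Pentium's omitted ones) would fail exactly this kind of bound check for some reachable (P, D) region.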
Verification of All Circuits in a Floating-Point Unit Using Word-Level Model Checking
In Proceedings of Formal Methods in Computer-Aided Design, 1996
Cited by 23 (7 self)
Abstract:
This paper presents the formal verification of all subcircuits in a floating-point arithmetic unit (FPU) from an Intel microprocessor using a word-level model checker. This work represents the first large-scale application of word-level model checking techniques. The FPU can perform addition, subtraction, multiplication, square root, division, remainder, and rounding operations; verifying such a broad range of functionality required coupling the model checker with a number of other techniques, such as property decomposition, property-specific model extraction, and latch removal. We will illustrate our verification techniques using the Weitek WTL3170/3171 Sparc floating point coprocessor as an example. The principal contribution of this paper is a practical verification methodology explaining what techniques to apply (and where to apply them) when verifying floating-point arithmetic circuits. We have applied our methods to the floating-point unit of a state-of-the-art Intel microprocesso...
Anatomy of the Pentium Bug
In TAPSOFT’95: Theory and Practice of Software Development, 1995
Cited by 22 (0 self)
Abstract:
The Pentium computer chip’s division algorithm relies on a table from which five entries were inadvertently omitted, with the result that 1738 single precision dividend-divisor pairs yield relative errors whose most significant bit is uniformly distributed from the 14th to the 23rd (least significant) bit. This corresponds to a rate of one error every 40 billion random single precision divisions. The same general pattern appears at double precision, with an error rate of one in every 9 billion divisions or 75 minutes of division time. These rates assume randomly distributed data. The distribution of the faulty pairs themselves however is far from random, with the effect that if the data is so non-random as to be just the constant 1, then random calculations started from that constant produce a division error once every few minutes, and these errors will sometimes propagate many more steps. A much higher rate yet is obtained when dividing small (< 100) integers “bruised” by subtracting one millionth, where every 400 divisions will see a relative error of at least one in a million. The software engineering implications of the bug include the observations that the method of exercising reachable components cannot detect reachable components mistakenly believed unreachable, and that hand-checked proofs build false confidence.
Design Issues In High Performance Floating Point Arithmetic Units
1996
Cited by 21 (3 self)
Abstract:
In recent years computer applications have increased in their computational complexity. The industry-wide usage of performance benchmarks, such as SPECmarks, forces processor designers to pay particular attention to implementation of the floating point unit, or FPU. Special purpose applications, such as high performance graphics rendering systems, have placed further demands on processors. High speed floating point hardware is a requirement to meet these increasing demands. This work examines the state-of-the-art in FPU design and proposes techniques for improving the performance and the performance/area ratio of future FPUs. In recent FPUs, emphasis has been placed on designing ever-faster adders and multipliers, with division receiving less attention. The design space of FP dividers is large, comprising five different classes of division algorithms: digit recurrence, functional iteration, very high radix, table lookup, and variable latency. While division is an infrequent operation...
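Of the five divider classes named, functional iteration is the easiest to sketch: Newton-Raphson refinement of a reciprocal estimate, whose error squares on each step. The seed below is a simple linear guess for illustration; real FPUs use lookup-table seeds and fused multiply-add hardware:

```python
# Functional-iteration division sketch: compute 1/d by Newton-Raphson, then
# multiply by the dividend. For d in [1, 2), the seed r0 = 2 - d has error
# e0 = 1 - d*r0 = (d - 1)^2 < 1, and each iteration squares the error.
def reciprocal(d, iters=5):
    r = 2.0 - d                  # illustrative seed; converges slowly as d -> 2
    for _ in range(iters):
        r = r * (2.0 - d * r)    # Newton step: quadratic convergence
    return r

def fdiv(x, d):
    return x * reciprocal(d)

assert abs(reciprocal(1.5) - 2 / 3) < 1e-12
assert abs(fdiv(1.0, 1.25) - 0.8) < 1e-12
```

This is why functional iteration retires many result bits per step (the number of correct bits doubles), at the cost of a harder rounding/remainder story than digit recurrence.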
SRT Division Architectures and Implementations
In Proc. 13th IEEE Symp. Computer Arithmetic, 1997
Cited by 10 (2 self)
Abstract:
SRT dividers are common in modern floating point units. Higher division performance is achieved by retiring more quotient bits in each cycle. Previous research has shown that realistic stages are limited to radix-2 and radix-4. Higher radix dividers are therefore formed by a combination of low-radix stages. In this paper, we present an analysis of the effects of radix-2 and radix-4 SRT divider architectures and circuit families on divider area and performance. We show the performance and area results for a wide variety of divider architectures and implementations. We conclude that divider performance is only weakly sensitive to reasonable choices of architecture but significantly improved by aggressive circuit techniques.
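A radix-2 SRT stage of the kind compared here can be sketched in exact arithmetic: each cycle retires one signed quotient digit from {-1, 0, 1} while keeping the partial remainder bounded by the divisor (a behavioral sketch, not any of the paper's circuit implementations):

```python
from fractions import Fraction as F

def srt2_divide(x, d, n):
    """Radix-2 SRT division sketch: digit set {-1, 0, 1}, divisor d in [1, 2)."""
    assert F(1) <= d < F(2) and abs(x) <= d
    P, Q = F(x), F(0)
    for j in range(1, n + 1):
        two_p = 2 * P
        # In hardware this selection needs only a few high bits of 2P.
        q = 1 if two_p >= F(1, 2) else (-1 if two_p <= F(-1, 2) else 0)
        P = two_p - q * d                # next partial remainder
        assert abs(P) <= d              # SRT invariant: remainder stays legal
        Q += F(q, 2 ** j)               # retire one signed quotient bit
    return Q, P

# After n stages the quotient is within 2^-n of x/d.
Q, P = srt2_divide(F(7, 10), F(3, 2), 30)
assert abs(F(7, 10) / F(3, 2) - Q) <= F(1, 2 ** 30)
```

The paper's radix-4 stages retire two bits per cycle by the same recurrence with digit set {-2,...,2}; higher radices chain such stages.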
A Comparative Analysis of Hardware and Software Fault Tolerance: Impact on Software Reliability Engineering
1999
Cited by 10 (2 self)
Abstract:
In this paper, we focus on methods of fault tolerance and investigate the differences between hardware fault tolerance and software fault tolerance.
Computer Assisted Analysis Of Multiprocessor Memory Systems
1996
Cited by 7 (1 self)
Abstract:
Parallel architecture becomes more and more attractive as the demand for performance increases. One of the most important classes of parallel machines is that of shared memory architectures, which are perceived as easier to program than other parallel architectures. In a shared memory multiprocessor architecture, a memory model describes the behavior of the memory system as observed at the user level. A cache coherence protocol aims to conform to a memory model by maintaining consistency among the multiple copies of cached data and the data in main memory. Memory models and cache coherence protocols can be quite complex and subtle, creating a real possibility of misunderstandings and actual design errors. In this thesis, we will present solutions to the problems of specifying memory models and verifying the correctness of cache coherence protocols. Weaker memory models for multiprocessor systems allow higher-performance implementation techniques for memory systems. However, weak memor...
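The kind of memory-model question at stake can be made concrete with the classic store-buffering litmus test: enumerating every sequentially consistent interleaving shows that the outcome r0 = r1 = 0 is unreachable under SC, which is precisely what weaker models (e.g. TSO, with store buffers) permit. A sketch with an illustrative op encoding:

```python
from itertools import combinations

def interleavings(a, b):
    """All merges of two op sequences that preserve each thread's order."""
    n = len(a) + len(b)
    for idx in combinations(range(n), len(a)):
        out, ai, bi = [], 0, 0
        for i in range(n):
            if i in idx:
                out.append(a[ai]); ai += 1
            else:
                out.append(b[bi]); bi += 1
        yield out

# Store-buffering litmus test:
#   T0: x = 1; r0 = y          T1: y = 1; r1 = x
T0 = [("store", "x", 1), ("load", "y", "r0")]
T1 = [("store", "y", 1), ("load", "x", "r1")]

outcomes = set()
for trace in interleavings(T0, T1):
    mem, regs = {"x": 0, "y": 0}, {}
    for op, loc, arg in trace:
        if op == "store":
            mem[loc] = arg
        else:
            regs[arg] = mem[loc]
    outcomes.add((regs["r0"], regs["r1"]))

# Under sequential consistency, (0, 0) never occurs.
assert outcomes == {(0, 1), (1, 0), (1, 1)}
```

Real model checkers explore protocol state spaces the same way, but with symbolic or explicit-state techniques that scale far beyond brute-force interleaving.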
Ordered Binary Decision Diagrams and Their Significance in Computer-Aided Design of VLSI Circuits: A Survey
1998
Cited by 4 (0 self)
Abstract:
Many problems in computer-aided design of highly integrated circuits (CAD for VLSI) can be transformed into the task of manipulating objects over finite domains. The efficiency of these operations depends substantially on the chosen data structures. In recent years, ordered binary decision diagrams (OBDDs) have proven to be a very efficient data structure in this context. Here, we give a survey of these developments and stress the deep interactions between basic research and practically relevant applied research, with its immediate impact on the performance improvement of modern CAD design and verification tools.
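A minimal version of the data structure the survey covers: a reduced OBDD built by Shannon expansion with a unique table. Its canonicity, i.e. same function implies same node for a fixed variable order, is the property CAD tools exploit for equivalence checking. A sketch; real packages build diagrams with an apply operation rather than enumerating all 2^n assignments as this does:

```python
class BDD:
    """Minimal reduced OBDD: the unique table enforces the two reduction rules."""
    def __init__(self, nvars):
        self.nvars = nvars
        self.unique = {}                  # (var, lo, hi) -> id; 0/1 are terminals

    def mk(self, var, lo, hi):
        if lo == hi:                      # rule 1: eliminate redundant tests
            return lo
        key = (var, lo, hi)
        if key not in self.unique:        # rule 2: share isomorphic subgraphs
            self.unique[key] = len(self.unique) + 2
        return self.unique[key]

    def build(self, f, var=0, env=()):
        """Shannon expansion of f (a function of a bit tuple) in fixed order."""
        if var == self.nvars:
            return 1 if f(env) else 0
        lo = self.build(f, var + 1, env + (0,))
        hi = self.build(f, var + 1, env + (1,))
        return self.mk(var, lo, hi)

bdd = BDD(3)
parity1 = bdd.build(lambda e: e[0] ^ e[1] ^ e[2])
parity2 = bdd.build(lambda e: (e[0] + e[1] + e[2]) % 2)
assert parity1 == parity2                 # canonicity: same function, same node
assert bdd.build(lambda e: e[0] & e[1]) != parity1
```

Equivalence of two circuits thus reduces to comparing two root ids, which is what makes OBDDs so effective for the verification tasks surveyed here.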