DIVA: A Reliable Substrate for Deep Submicron Microarchitecture Design
In Proc. 32nd Annual Intl. Symp. on Microarchitecture, 1999
Abstract

Cited by 300 (14 self)
Building a high-performance microprocessor presents many reliability challenges. Designers must verify the correctness of large complex systems and construct implementations that work reliably in varied (and occasionally adverse) operating conditions. To further complicate this task, deep submicron fabrication technologies present new reliability challenges in the form of degraded signal quality and logic failures caused by natural radiation interference. In this paper, we introduce dynamic verification, a novel microarchitectural technique that can significantly reduce the burden of correctness in microprocessor designs. The approach works by augmenting the commit phase of the processor pipeline with a functional checker unit. The functional checker verifies the correctness of the core processor's computation, only permitting correct results to commit. Overall design cost can be dramatically reduced because designers need only verify the correctness of the checker unit. We detail the DIVA checker architecture, a design optimized for simplicity and low cost. Using detailed timing simulation, we show that even resource-frugal DIVA checkers have little impact on core processor performance. To make the case for reduced verification costs, we argue that the DIVA checker should lend itself to functional and electrical verification better than a complex core processor. Finally, future applications that leverage dynamic verification to increase processor performance and availability are suggested.
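The commit-stage checking the abstract describes can be sketched in a few lines. This is a hypothetical toy model, not the paper's hardware design: a trusted functional model re-executes each instruction at commit, and only results that agree with it update architectural state; on a mismatch the checker's result is committed instead, as DIVA's recovery would do.

```python
# Toy model of dynamic verification: the checker re-executes each
# instruction at commit and repairs any result the core got wrong.

def functional_check(op, a, b):
    """Reference semantics used by the checker unit (assumed tiny ISA)."""
    if op == "add":
        return a + b
    if op == "sub":
        return a - b
    raise ValueError(f"unknown op {op}")

def commit(core_results, regs):
    """Commit only checker-approved values; flag core errors."""
    for op, a, b, dest, core_value in core_results:
        checked = functional_check(op, a, b)
        regs[dest] = checked  # checker result is always architecturally correct
        if core_value != checked:
            print(f"core error on {op} {a},{b}: got {core_value}, fixed to {checked}")
    return regs

# A transient fault corrupts the second result; the checker repairs it.
results = [("add", 1, 2, "r1", 3), ("sub", 5, 2, "r2", 9)]
print(commit(results, {}))
```

The core's faulty `9` never reaches architectural state; only the checker, not the complex core, has to be verified for the machine to compute correctly.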
ACL2 Theorems about Commercial Microprocessors
1996
Abstract

Cited by 68 (14 self)
ACL2 is a mechanized mathematical logic intended for use in specifying and proving properties of computing machines. In two independent projects, industrial engineers have collaborated with researchers at Computational Logic, Inc. (CLI), to use ACL2 to model and prove properties of state-of-the-art commercial microprocessors prior to fabrication. In the first project, Motorola, Inc., and CLI collaborated to specify Motorola's complex arithmetic processor (CAP), a single-chip digital signal processor (DSP) optimized for communications signal processing. Using the specification, we proved the correctness of several CAP microcode programs. The second industrial collaboration involving ACL2 was between Advanced Micro Devices, Inc. (AMD) and CLI. In this work we proved the correctness of the kernel of the floating-point division operation on AMD's first Pentium-class microprocessor, the AMD5K86. In this paper, we discuss ACL2 and these industrial applications, with particular attention ...
Effective Theorem Proving for Hardware Verification
1994
Abstract

Cited by 37 (6 self)
The attractiveness of using theorem provers for system design verification lies in their generality. The major practical challenge confronting theorem proving technology is in combining this generality with an acceptable degree of automation. We describe an approach for enhancing the effectiveness of theorem provers for hardware verification through the use of efficient automatic procedures for rewriting, arithmetic and equality reasoning, and an off-the-shelf BDD-based propositional simplifier. These automatic procedures can be combined into general-purpose proof strategies that can efficiently automate a number of proofs, including those of hardware correctness. The inference procedures and proof strategies have been implemented in the PVS verification system. They are applied to several examples including an N-bit adder, the Saxe pipelined processor, and the benchmark Tamarack microprocessor design. These examples illustrate the basic design philosophy underlying PVS where powerful...
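The N-bit adder example mentioned above amounts to the claim that a ripple-carry composition of full adders computes integer addition. PVS discharges that claim deductively for all N; the sketch below is only a bounded sanity check illustrating what the theorem asserts, under an assumed LSB-first bit encoding.

```python
# Exhaustive small-n check of the adder correctness statement:
# ripple_carry(a, b) equals a + b, with the carry-out as the top bit.

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry(xs, ys):
    """Add two equal-length LSB-first bit vectors; return (sum bits, carry-out)."""
    carry, out = 0, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

def to_bits(v, n):
    return [(v >> i) & 1 for i in range(n)]

n = 4
for a in range(2 ** n):
    for b in range(2 ** n):
        bits, cout = ripple_carry(to_bits(a, n), to_bits(b, n))
        value = sum(s << i for i, s in enumerate(bits)) + (cout << n)
        assert value == a + b
print("4-bit ripple-carry adder matches integer addition")
```

A theorem prover replaces this enumeration with an induction over n, which is exactly where the automated rewriting and arithmetic procedures described in the abstract earn their keep.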
The Boyer-Moore Theorem Prover and Its Interactive Enhancement
1995
Abstract

Cited by 31 (0 self)
The so-called "Boyer-Moore Theorem Prover" (otherwise known as "Nqthm") has been used to perform a variety of verification tasks for two decades. We give an overview of both this system and an interactive enhancement of it, "Pc-Nqthm," from a number of perspectives. First we introduce the logic in which theorems are proved. Then we briefly describe the two mechanized theorem proving systems. Next, we present a simple but illustrative example in some detail in order to give an impression of how these systems may be used successfully. Finally, we give extremely short descriptions of a large number of applications of these systems, in order to give an idea of the breadth of their uses. This paper is intended as an informal introduction to systems that have been described in detail and similarly summarized in many other books and papers; no new results are reported here. Our intention here is merely to present Nqthm to a new audience. This research was supported in part by ONR Contract N...
A Provably Correct Compiler Generator
1992
Abstract

Cited by 26 (2 self)
We have designed, implemented, and proved the correctness of a compiler generator that accepts action semantic descriptions of imperative programming languages. The generated compilers emit absolute code for an abstract RISC machine language that currently is assembled into code for the SPARC and the HP Precision Architecture. Our machine language needs no run-time type checking and is thus more realistic than those considered in previous compiler proofs. We use solely algebraic specifications; proofs are given in the initial model.

1 Introduction

The previous approaches to proving correctness of compilers for nontrivial languages all use target code with run-time type checking. The following semantic rule is typical for these target languages:

(FIRST : C, ⟨v1, v2⟩ : S) → (C, v1 : S)

The rule describes the semantics of an instruction that extracts the first component of the top element of the stack, provided that the top element is a pair. If not, then it is implicit that the...
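The transition rule quoted above reads off directly as a step function for a small stack machine. The toy model below (an illustration, not the paper's machine) shows the checked variant typical of the earlier compiler proofs the abstract criticizes: FIRST must test at run time that the top of the stack is a pair before taking its first component.

```python
# Stack-machine step for the FIRST instruction, with the run-time type
# check that the paper's own target language is designed to avoid.

def step(code, stack):
    """One transition: (instr : C, S) -> (C, S')."""
    instr, rest = code[0], code[1:]
    if instr == "FIRST":
        top = stack[0]
        if not (isinstance(top, tuple) and len(top) == 2):  # run-time type check
            raise TypeError("FIRST applied to a non-pair")
        return rest, [top[0]] + stack[1:]  # (FIRST : C, <v1,v2> : S) -> (C, v1 : S)
    raise ValueError(f"unknown instruction {instr}")

code, stack = ["FIRST"], [(7, 8), 42]
print(step(code, stack))  # ([], [7, 42])
```

Eliminating the `isinstance` branch is only sound if the compiler can guarantee statically that FIRST is never applied to a non-pair, which is the property the correctness proof has to establish.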
Hardware Verification using Monadic Second-Order Logic
In Computer Aided Verification: 7th International Conference, CAV '95, LNCS 939, 1995
Abstract

Cited by 25 (10 self)
We show how the second-order monadic theory of strings can be used to specify hardware components and their behavior. This logic admits a decision procedure and countermodel generator based on canonical automata for formulas. We have used a system implementing these concepts to verify, or find errors in, a number of circuits proposed in the literature. The techniques we use make it easier to identify regularity in circuits, including those that are parameterized or have parameterized behavioral specifications. Our proofs are semantic and do not require lemmas or induction, as would be needed when employing a conventional theory of strings as a recursive data type.
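The style of specification involved can be illustrated with a hypothetical example: a formula over strings asserting that a circuit's output at each position is the parity of the inputs seen so far. The real decision procedure compiles such formulas to canonical automata; the sketch below merely enumerates all input strings up to a bound and checks the same property against a candidate implementation.

```python
# Bounded stand-in for the automata-based decision procedure: check a
# candidate sequential circuit against a string-indexed specification.
from itertools import product

def circuit(inputs):
    """Candidate sequential parity circuit: one XOR flip-flop."""
    state, outputs = 0, []
    for bit in inputs:
        state ^= bit
        outputs.append(state)
    return outputs

def spec(inputs, outputs):
    """out[i] holds iff an odd number of in[0..i] hold."""
    return all(out == sum(inputs[: i + 1]) % 2
               for i, out in enumerate(outputs))

for n in range(1, 8):
    for inputs in product([0, 1], repeat=n):
        assert spec(list(inputs), circuit(inputs))
print("parity circuit meets its specification up to length 7")
```

The automata construction makes the same check exhaustive over strings of every length, and a failed check yields a countermodel, i.e. a concrete input string exhibiting the error.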
A Theorem Prover for a Computational Logic
1990
Abstract

Cited by 24 (0 self)
We briefly review a mechanical theorem prover for a logic of recursive functions over finitely generated objects including the integers, ordered pairs, and symbols. The prover, known both as NQTHM and as the Boyer-Moore prover, contains a mechanized principle of induction and implementations of linear resolution, rewriting, and arithmetic decision procedures. We describe some applications of the prover, including a proof of the correct implementation of a higher-level language on a microprocessor defined at the gate level. We also describe the ongoing project of recoding the entire prover as an applicative function within its own logic.
DIVA: A Dynamic Approach to Microprocessor Verification
Journal of Instruction-Level Parallelism, 2000
Abstract

Cited by 23 (3 self)
Building a highperformance microprocessor presents many reliability challenges. Designers must verify the correctness of large complex systems and construct implementations that work reliably in varied (and occasionally adverse) operating conditions. To further complicate this task, deep submicron fabrication technologies present new reliability challenges in the form of degraded signal quality and logic failures caused by natural radiation interference.
A Practical Method for Rigorously Controllable Hardware Design
In ZUM’97: The Z Formal Specification Notation, volume 1212 of LNCS, 1996
Abstract

Cited by 21 (3 self)
We describe a method for rigorously specifying and verifying the control of pipelined microprocessors, which can be used by the hardware designer for a precise documentation and justification of the correctness of his design techniques. We proceed by successively refining a one-instruction-at-a-time view of a RISC processor to a description of its pipelined implementation; the structure of the refinement hierarchy is determined by standard instruction pipelining principles (grouped following the kind of conflict they are designed to avoid: structural hazards, data hazards, and control hazards). We illustrate our approach through a formal specification with correctness proof of Hennessy and Patterson's RISC processor DLX, but the method can be extended to complex commercial microprocessor design where traditional or purely automatic methods do not scale up. The specification method supports incremental design techniques; the modular proof method allows proofs to be reused and supports the designer's intuitive reasoning.
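The refinement relation between the two views can be illustrated with a toy model (all details assumed for illustration): a two-stage pipeline with a forwarding path, once drained, must leave the register file in the same state as the one-instruction-at-a-time view. The forwarding path is exactly the mechanism that resolves the data hazards the abstract classifies.

```python
# Toy refinement check: pipelined implementation vs. sequential ISA view.

def spec_run(prog, regs):
    """One-instruction-at-a-time (ISA-level) view; each instruction is
    (dst, src1, src2) meaning dst := src1 + src2."""
    for dst, src1, src2 in prog:
        regs[dst] = regs[src1] + regs[src2]
    return regs

def pipelined_run(prog, regs):
    """Two stages: decode/execute then writeback, with a bypass from the
    writeback stage so a dependent instruction reads the fresh value."""
    in_flight = None  # (dst, value) sitting in the writeback stage
    for dst, src1, src2 in prog:
        def read(r):
            if in_flight and in_flight[0] == r:
                return in_flight[1]  # forwarding resolves the data hazard
            return regs[r]
        value = read(src1) + read(src2)
        if in_flight:
            regs[in_flight[0]] = in_flight[1]
        in_flight = (dst, value)
    if in_flight:  # drain the pipeline
        regs[in_flight[0]] = in_flight[1]
    return regs

prog = [("r2", "r0", "r1"), ("r3", "r2", "r1")]  # second reads the first's result
init = {"r0": 1, "r1": 2, "r2": 0, "r3": 0}
assert spec_run(prog, dict(init)) == pipelined_run(prog, dict(init))
print("pipelined implementation agrees with the sequential view on this program")
```

The method in the paper proves this agreement once and for all by refinement steps, one per hazard class, rather than by testing individual programs.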
Toward Formalizing a Validation Methodology Using Simulation Coverage
In Proceedings of the 34th Design Automation Conference, 1997
Abstract

Cited by 21 (0 self)
The biggest obstacle in the formal verification of large designs is their very large state spaces, which cannot be handled even by techniques such as implicit state space traversal. The only viable solution in most cases is validation by functional simulation. Unfortunately, this has the drawbacks of high computational requirements, due to the large number of test vectors needed, and the lack of adequate coverage measures to characterize the quality of a given test set. To overcome these limitations, there has been recent interest in hybrid techniques which combine the strengths of formal verification and simulation. Formal verification-based techniques are used on a test model (usually much smaller than the design) to derive a set of functional test vectors, which are then used for design validation through simulation. The test set generated typically satisfies some coverage measure on the test model. Recent research has proposed the use of state or transition coverage. However, no effor...
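The state and transition coverage measures mentioned above can be made concrete on a small FSM (a hypothetical example, not the paper's test model): run each test-vector sequence through the model and report which states and input-labeled transitions were exercised.

```python
# State and transition coverage of a test set over a toy FSM:
# a 2-bit counter with an enable input, state' = (state + en) % 4.

def fsm_step(state, en):
    return (state + en) % 4

def coverage(tests, n_states=4):
    """Return (state coverage, transition coverage) for a list of
    enable-bit sequences, each starting from the reset state 0."""
    states, transitions = set(), set()
    for vectors in tests:
        state = 0
        states.add(state)
        for en in vectors:
            nxt = fsm_step(state, en)
            transitions.add((state, en, nxt))
            states.add(nxt)
            state = nxt
    total_transitions = n_states * 2  # two input values per state
    return len(states) / n_states, len(transitions) / total_transitions

tests = [[1, 1, 1, 1], [0, 1, 0]]
state_cov, trans_cov = coverage(tests)
print(f"state coverage {state_cov:.0%}, transition coverage {trans_cov:.0%}")
```

Here the first sequence alone reaches every state, yet two of the eight transitions remain unexercised, which is exactly the gap between state and transition coverage that motivates choosing the stronger measure.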