Results 1–10 of 74
Bounded model checking
, 2009
Abstract

Cited by 99 (2 self)
Besides Equivalence Checking [KK97, KPKG02], the most important industrial application of SAT is currently Bounded Model Checking (BMC) [BCCZ99]. Both techniques are used for formal hardware verification in the context of electronic design automation (EDA), but have been applied successfully to many other domains as well. In this chapter, we focus on BMC. In practice, BMC is mainly used for falsification and testing, which are concerned with violations of temporal properties. However, the original paper on BMC [BCCZ99] already discussed extensions that can prove properties. A considerable part of this chapter discusses these complete extensions, which are often called “unbounded” model checking techniques, even though they are built upon the same principles as plain BMC. Two further related applications, in which BMC is becoming more and more important, are automatic test case generation for closing coverage holes, and disproving redundancy in designs. Most of the techniques discussed in this chapter transfer to this more general setting as well, even though our focus is on property checking.
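To make the BMC idea concrete, here is a minimal sketch that searches for a bounded counterexample by explicit path unrolling rather than by the SAT encoding the chapter describes; the modulo-8 counter and the bad-state predicate are purely hypothetical examples.

```python
def bmc(init_states, step, bad, bound):
    """Bounded model checking by explicit unrolling: look for a path of
    length <= bound from an initial state to a state violating the property."""
    paths = [(s,) for s in init_states]
    for _ in range(bound + 1):
        for path in paths:
            if bad(path[-1]):
                return path                      # counterexample trace
        # extend every path by one transition (step returns successor states)
        paths = [p + (t,) for p in paths for t in step(p[-1])]
    return None                                  # no violation within the bound

# Hypothetical system: a modulo-8 counter; property "counter never reaches 5".
step = lambda s: [(s + 1) % 8]
print(bmc([0], step, lambda s: s == 5, bound=6))  # (0, 1, 2, 3, 4, 5)
```

A SAT-based BMC tool encodes the same search as a formula I(s0) ∧ T(s0,s1) ∧ … ∧ T(sk−1,sk) ∧ ¬P(sk) instead of enumerating paths.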
Dynamic Transition Relation Simplification for Bounded Property Checking
, 2004
Abstract

Cited by 36 (2 self)
Bounded Model Checking (BMC) is an incomplete property checking method based on a finite unfolding of the transition relation, used to disprove the correctness of a set of properties or to prove them for a limited execution length from the initial states. Current BMC techniques repeatedly concatenate the original transition relation to unfold the circuit with increasing depth. In this paper we present a new method based on a dual unfolding scheme. The first unfolding is non-initialized and progressively simplifies concatenated frames of the transition relation. The tail of the simplified frames is then applied in the second unfolding, which starts from the initial state and checks the properties. We use a circuit graph representation for all functions and perform simplification by merging vertices that are functionally equivalent under given input constraints. In the non-initialized unfolding, previous time frames progressively tighten these constraints, thus leading to an asymptotic simplification of the transition relation. As a side benefit, our method can find inductive invariants constructively by detecting when vertices are functionally equivalent across time frames. This information is then used to further simplify the transition relation and, in some cases, prove unbounded correctness of properties. Our experiments using industrial property checking problems demonstrate that the presented method significantly improves the efficiency of BMC.
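The paper's core simplification step is merging vertices that are functionally equivalent. A minimal sketch of equivalence detection by exhaustive simulation (feasible only for tiny input counts; the NAND netlist below is a hypothetical example, not from the paper):

```python
from itertools import product

def equivalence_classes(nodes, n_inputs):
    """Group circuit nodes that are functionally equivalent: nodes whose
    output agrees on every input assignment can be merged into one vertex."""
    sigs = {}
    for name, fn in nodes.items():
        # the full truth table serves as the node's signature
        sig = tuple(fn(*bits) for bits in product([0, 1], repeat=n_inputs))
        sigs.setdefault(sig, []).append(name)
    return [cls for cls in sigs.values() if len(cls) > 1]

# Hypothetical 2-input netlist: g2 is a redundant restatement of g1 (De Morgan).
nodes = {
    "g1": lambda a, b: 1 - (a & b),        # NAND
    "g2": lambda a, b: (1 - a) | (1 - b),  # ~a OR ~b, same function
    "g3": lambda a, b: a ^ b,
}
print(equivalence_classes(nodes, 2))  # [['g1', 'g2']]
```

The paper performs this merging under input constraints that the non-initialized prefix frames progressively tighten, rather than over all input assignments.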
Scalable automated verification via expert-system guided transformations
 in FMCAD
, 2004
Abstract

Cited by 30 (14 self)
Abstract. Transformation-based verification has been proposed to synergistically leverage various transformations to successively simplify and decompose large problems into ones which may be formally discharged. While powerful, such systems require a fair amount of user sophistication and experimentation to yield the greatest benefits – every verification problem is different, hence the most efficient transformation flow differs widely from problem to problem. Finding an efficient proof strategy not only enables exponential reductions in computational resources, it often makes the difference between obtaining a conclusive result or not. In this paper, we propose the use of an expert system to automate this proof strategy development process. We discuss the types of rules used by the expert system, and the type of feedback necessary between the algorithms and the expert system, all oriented towards yielding a conclusive result with minimal resources. Experimental results are provided to demonstrate that such a system is able to automatically discover efficient proof strategies, even on large and complex problems with more than 100,000 state elements in their respective cones of influence. These results also demonstrate numerous types of algorithmic synergies that are critical to the automation of such complex proofs.
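The rule-and-feedback loop described in the abstract can be sketched as a toy rule base; every threshold and engine name below is invented for illustration and is not the paper's actual rule set:

```python
def choose_strategy(stats):
    """Toy rule base in the spirit of expert-system guided verification:
    inspect problem features and pick the next transformation or engine.
    All thresholds and engine names here are hypothetical."""
    if stats["state_elements"] > 50_000:
        return "localization"                  # shrink the cone of influence first
    if stats["suspected_redundancy"]:
        return "sequential_redundancy_removal" # exploit redundancy before proving
    if stats["state_elements"] > 1_000:
        return "retiming"                      # reduce state before a proof engine
    return "interpolation"                     # small enough for a complete engine

print(choose_strategy({"state_elements": 120_000, "suspected_redundancy": True}))
# localization
```

A real system of this kind closes the loop: after each transformation it re-measures the problem and consults the rules again, so strategies emerge incrementally rather than being fixed up front.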
Exploiting suspected redundancy without proving it
Abstract

Cited by 27 (10 self)
We present several improvements to general-purpose sequential redundancy removal. (1) We propose using a robust variety of synergistic transformation and verification algorithms to process the individual proof obligations. This enables greater speed and scalability, and identifies a significantly greater degree of redundancy, than prior approaches. (2) We generalize upon traditional redundancy removal and utilize the speculatively-reduced model to enhance bounded search, without needing to complete any proofs.
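The two ingredients, guessing equivalences and merging them before any proof, can be sketched as follows; the netlist, the simulation-based guessing, and the merge policy are illustrative stand-ins for the paper's machinery:

```python
import random

def suspect_classes(nodes, n_inputs, rounds=64, seed=0):
    """Guess equivalences by random simulation: nodes agreeing on every
    simulated input vector become *suspected* equivalent. No proof involved."""
    classes = {}
    for name, fn in nodes.items():
        rng = random.Random(seed)  # identical vector stream for every node
        sig = tuple(fn(*[rng.randint(0, 1) for _ in range(n_inputs)])
                    for _ in range(rounds))
        classes.setdefault(sig, []).append(name)
    return [c for c in classes.values() if len(c) > 1]

def speculative_merge(classes, nodes):
    """Build the speculatively-reduced model: replace each suspect by its
    class representative and record (rep, other) proof obligations, which
    can drive bounded search WITHOUT being discharged first."""
    rep, obligations = {}, []
    for cls in classes:
        for other in cls[1:]:
            rep[other] = cls[0]
            obligations.append((cls[0], other))
    reduced = {n: f for n, f in nodes.items() if n not in rep}
    return reduced, obligations

nodes = {"a": lambda x, y: x & y,
         "b": lambda x, y: y & x,   # redundant copy of "a"
         "c": lambda x, y: x | y}
classes = suspect_classes(nodes, 2)
reduced, obligations = speculative_merge(classes, nodes)
print(sorted(reduced), obligations)  # ['a', 'c'] [('a', 'b')]
```

Classic redundancy removal would prove each obligation before merging; the paper's point is that the reduced model is useful even while the obligations remain open.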
DAG-Aware Circuit Compression For Formal Verification
Abstract

Cited by 27 (0 self)
The choice of representation for circuits and boolean formulae in a formal verification tool is important for two reasons. First, representation compactness is necessary in order to keep memory consumption low. This is witnessed by the importance of maximum processable design size for equivalence checkers. Second, many formal verification algorithms are sensitive to redundancies in the design being processed. To address these concerns, three different auto-compressing representations for boolean circuit networks and formulas have been suggested in the literature. In this paper, we attempt to find a blend of features from these alternatives that allows us to remove as much redundancy as possible while not sacrificing runtime. By studying how the network representation size varies when we change parameters, we show that the use of only one operator node is suboptimal, and demonstrate that the most powerful of the proposed reduction rules, two-level minimization, can actually be harmful. We correct the bad behavior of two-level optimization by devising a simple linear simplification algorithm that can remove tens of thousands of nodes on examples where all obvious redundancies have already been removed. The combination of our compactor with the simplest representation outperforms all of the alternatives we have studied, with a theoretical runtime bound that is at least as good as that of the three studied representations.
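A representative one-operator, auto-compressing representation of the kind the paper studies is the And-Inverter Graph. This is a minimal sketch, not any of the three compared implementations: one AND node type, inversion on edges (odd literals), structural hashing, and the constant-time local reduction rules.

```python
class AIG:
    """Minimal And-Inverter Graph. Literals: even = node output, odd =
    complemented; literals 0/1 are the boolean constants false/true."""
    def __init__(self):
        self.table = {}    # (lit_a, lit_b) -> AND-node literal (structural hashing)
        self.n_nodes = 0

    def new_input(self):
        self.n_nodes += 1
        return 2 * self.n_nodes        # fresh primary-input literal

    def AND(self, a, b):
        if a > b:
            a, b = b, a                # canonical operand order for hashing
        if a == 0: return 0            # 0 & x = 0
        if a == 1: return b            # 1 & x = x
        if a == b: return a            # x & x = x
        if a ^ 1 == b: return 0        # x & ~x = 0
        key = (a, b)
        if key not in self.table:      # hash-cons: structurally equal nodes merge
            self.n_nodes += 1
            self.table[key] = 2 * self.n_nodes
        return self.table[key]

g = AIG()
x, y = g.new_input(), g.new_input()
a1 = g.AND(x, y)
a2 = g.AND(y, x)                       # hashed to the same node
print(a1 == a2, g.AND(x, x ^ 1))       # True 0
```

The paper's two-level minimization looks one level deeper than these rules; its observation is that doing so can hurt sharing, which is why the simpler rules plus a separate linear compaction pass can win.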
Scalable sequential equivalence checking across arbitrary design transformations
 Proc. ICCD’06
, 2006
Abstract

Cited by 26 (3 self)
High-end hardware design flows mandate a variety of sequential transformations to address needs such as performance, power, post-silicon debug and test. Industrial demand for robust sequential equivalence checking (SEC) solutions is thus becoming increasingly prevalent. In this paper, we discuss the role of SEC within IBM. We motivate the need for a highly automated, scalable solution which is robust against a variety of design transformations – including those that alter initialization sequences. This motivation has caused us to embrace the paradigm of SEC with respect to designated initial states. We furthermore describe the diverse set of algorithms comprised within our SEC framework, which we have found necessary for the automated solution of the most complex SEC problems. Finally, we provide several experiments illustrating the necessity of our diverse algorithm flow to efficiently solve difficult SEC problems involving a variety of design transformations.
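"SEC with respect to designated initial states" can be sketched as a miter driven in lockstep; this toy uses explicit simulation over one input sequence, whereas an industrial checker reasons over all input sequences symbolically. The counter pair below is hypothetical.

```python
def sec_miter_bounded(step1, step2, out1, out2, init1, init2, inputs_seq):
    """Sequential equivalence sketched as a miter: run both machines in
    lockstep from their designated initial states and report the first
    time step at which the outputs diverge (None if none within the bound)."""
    s1, s2 = init1, init2
    for t, x in enumerate(inputs_seq):
        if out1(s1, x) != out2(s2, x):
            return t                            # counterexample time step
        s1, s2 = step1(s1, x), step2(s2, x)
    return None

# Hypothetical pair: the same mod-4 counter written two different ways.
stepA = lambda s, x: (s + x) % 4
outA  = lambda s, x: s % 2
stepB = lambda s, x: (s + x) & 3
outB  = lambda s, x: s & 1
print(sec_miter_bounded(stepA, stepB, outA, outB, 0, 0, [1, 1, 1, 0, 1]))  # None
```

Designating the initial states is what makes the problem well-posed when a transformation has changed the reset or initialization logic.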
Experimental Analysis of Different Techniques for Bounded Model Checking
 Proc. of the 9th TACAS, volume 2619 of LNCS
, 2003
Abstract

Cited by 21 (1 self)
Abstract. Bounded model checking (BMC) is a procedure that searches for counterexamples to a given property through bounded executions of a non-terminating system. This paper compares the performance of SAT-based, BDD-based, and explicit-state BMC on benchmarks drawn from commercial designs. Our experimental framework provides a uniform and comprehensive basis to evaluate each of these approaches. The experimental results in this paper suggest that for designs with deep counterexamples, BDD-based BMC is much faster. For designs with shallow counterexamples, we observe that SAT-based BMC is indeed more effective than BDD-based BMC, but we also observe that explicit-state BMC is comparably effective, which is a new observation.
Scalable and scalably-verifiable sequential synthesis
 Proc. ICCAD'08, http://www.eecs.berkeley.edu/~alanmi/publications/2008/iccad08_seq.pdf, A. Mishchenko
Abstract

Cited by 18 (14 self)
This paper describes an efficient implementation of sequential synthesis that uses induction to detect and merge sequentially-equivalent nodes. State encoding, scan chains, and test vectors are essentially preserved. Moreover, the sequential synthesis results are sequentially verifiable using an independent inductive prover similar to that used for synthesis, with guaranteed completeness. Experiments demonstrate the effectiveness of this sequential synthesis. When applied to a set of 20 industrial benchmarks ranging up to 26K registers and up to 53K 6-LUTs, the average reductions in registers and area are 12.9% and 13.1% respectively, while delay is reduced by 1.4%. When applied to the largest academic benchmarks, the average reduction in both registers and area exceeds 30%. The associated sequential verification is also scalable and runs about 2x slower than synthesis. The implementation is available in the synthesis and verification system ABC.
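The inductive detection of sequentially-equivalent nodes can be sketched on a finite toy model, with explicit state enumeration standing in for the paper's SAT-based induction; the mod-4 counter and the three observed signals are hypothetical.

```python
def inductive_equivalences(nodes, step, states, init):
    """Fixed-point induction sketch: conjecture that nodes equal in the
    initial state are equivalent, then repeatedly weaken the conjecture
    until it is closed under the transition relation (i.e., inductive)."""
    byval = {}
    for name, f in nodes.items():
        byval.setdefault(f(init), []).append(name)
    cand = [set(c) for c in byval.values() if len(c) > 1]

    def holds(s):  # do all conjectured classes agree in state s?
        return all(len({nodes[n](s) for n in c}) == 1 for c in cand)

    changed = True
    while changed:
        changed = False
        for s in states:
            t = step(s)
            if holds(s) and not holds(t):      # induction step fails at s -> t
                new = []
                for c in cand:                 # split each class by value at t
                    groups = {}
                    for n in c:
                        groups.setdefault(nodes[n](t), set()).add(n)
                    new += [g for g in groups.values() if len(g) > 1]
                cand, changed = new, True
                break
    return cand

# Hypothetical toy: a mod-4 counter with three observed signals.
nodes = {"p": lambda s: s % 2, "q": lambda s: s % 2, "r": lambda s: s // 2}
result = inductive_equivalences(nodes, lambda s: (s + 1) % 4, range(4), 0)
print([sorted(c) for c in result])  # [['p', 'q']]
```

Signal r is equal to p and q in the initial state only, so the refinement drops it; the surviving classes are exactly the ones a merge may use while remaining verifiable by the same style of induction.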
Automatic Generalized Phase Abstraction for Formal Verification
, 2005
Abstract

Cited by 18 (0 self)
A standard approach to improving circuit performance is to use an N-phase design style, where combinational logic is interspersed freely between level-sensitive latches controlled by separate clocks. Unfortunately, the use of an N-phase design style increases the number of state variables by a factor of N, making formal verification many orders of magnitude harder. Previous approaches to solving this problem severely restrict the kinds of designs that can be handled, and construct an abstracted netlist with fewer state variables by a syntactic analysis that requires the user to identify clocks. We extend the current state of the art by introducing a phase abstraction algorithm that (1) poses no restrictions on the design style that can be used, (2) avoids an error-prone syntactic analysis, (3) requires no input from users, and (4) can be integrated into any model checker without requiring HDL code analysis.
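The essence of phase abstraction can be sketched at the transition-function level: compose the N per-phase updates into a single step, so the model checker sees one latch bank per cycle instead of N. The 2-phase accumulator below is a hypothetical example, not the paper's algorithm, which works on netlists.

```python
def phase_abstract(phase_steps):
    """Compose N per-phase state updates into one abstract cycle step.
    Each phase_steps[i] maps (state, phase_input) -> state."""
    def abstract_step(state, inputs):
        for step, x in zip(phase_steps, inputs):
            state = step(state, x)
        return state
    return abstract_step

# Hypothetical 2-phase design: phase 1 latches the input, phase 2 accumulates it.
p1 = lambda s, x: (x, s[1])            # state = (held_input, accumulator)
p2 = lambda s, x: (s[0], s[1] + s[0])  # phase 2 takes no external input
step = phase_abstract([p1, p2])
print(step((0, 0), [3, None]))  # (3, 3)
```

The paper's contribution is doing this soundly on arbitrary netlists without the user identifying clocks; the composition above shows why the abstraction divides the state-variable count by N.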
A SAT characterization of boolean-program correctness
, 2003
Abstract

Cited by 8 (1 self)
Boolean programs, imperative programs in which all variables have type boolean, have been used effectively as abstractions of device drivers (in Ball and Rajamani's SLAM project). To find errors in these boolean programs, SLAM uses a model checker based on binary decision diagrams (BDDs). As an alternative checking method, this paper defines the semantics of boolean programs by weakest solutions of recursive weakest-precondition equations. These equations are then translated into a satisfiability (SAT) problem. The method uses both BDDs and SAT solving, and it allows an on-the-fly trade-off between symbolic and explicit-state representation of the program's initial state.
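The weakest-precondition semantics can be sketched for a tiny loop-free boolean-program fragment, with exhaustive enumeration standing in for the SAT solver; the AST encoding below is an invented illustration, not SLAM's representation.

```python
from itertools import product

def wp(stmt, post):
    """Weakest precondition over a tiny boolean-program fragment.
    Statements: ('assign', var, expr), ('seq', s1, s2), ('assert', expr).
    Predicates and expressions are functions from a state dict to bool."""
    kind = stmt[0]
    if kind == 'assign':
        _, var, expr = stmt
        return lambda st: post({**st, var: expr(st)})   # substitute expr for var
    if kind == 'seq':
        return wp(stmt[1], wp(stmt[2], post))           # wp(s1; s2, Q) = wp(s1, wp(s2, Q))
    if kind == 'assert':
        _, cond = stmt
        return lambda st: cond(st) and post(st)         # must hold here and after
    raise ValueError(kind)

def valid(pred, varnames):
    """Validity by exhaustive enumeration: the role the SAT solver plays
    (on the negated formula) in the paper, feasible here for tiny programs."""
    return all(pred(dict(zip(varnames, bits)))
               for bits in product([False, True], repeat=len(varnames)))

# b := not a; assert(a or b)  -- safe from every initial state.
prog = ('seq', ('assign', 'b', lambda st: not st['a']),
               ('assert', lambda st: st['a'] or st['b']))
print(valid(wp(prog, lambda st: True), ['a', 'b']))  # True
```

The program is correct exactly when wp(program, true) is valid; a SAT solver checks this by searching for a satisfying assignment of its negation, i.e., an initial state leading to an assertion failure.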