Results 1–10 of 41
Chaff: Engineering an Efficient SAT Solver
, 2001
"... Boolean Satisfiability is probably the most studied of combinatorial optimization/search problems. Significant effort has been devoted to trying to provide practical solutions to this problem for problem instances encountered in a range of applications in Electronic Design Automation (EDA), as well ..."
Abstract

Cited by 1119 (13 self)
 Add to MetaCart
Boolean Satisfiability is probably the most studied of combinatorial optimization/search problems. Significant effort has been devoted to trying to provide practical solutions to this problem for problem instances encountered in a range of applications in Electronic Design Automation (EDA), as well as in Artificial Intelligence (AI). This study has culminated in the development of several SAT packages, both proprietary and in the public domain (e.g. GRASP, SATO) which find significant use in both research and industry. Most existing complete solvers are variants of the Davis-Putnam (DP) search algorithm. In this paper we describe the development of a new complete solver, Chaff, which achieves significant performance gains through careful engineering of all aspects of the search – especially a particularly efficient implementation of Boolean constraint propagation (BCP) and a novel low overhead decision strategy. Chaff has been able to obtain one to two orders of magnitude performance improvement on difficult SAT benchmarks in comparison with other solvers (DP or otherwise), including GRASP and SATO.
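Chaff's BCP gains are widely attributed to its two-watched-literal scheme, under which a clause is revisited only when one of its two watched literals becomes false. A minimal Python sketch of that idea (illustrative only, not Chaff's actual implementation or data layout):

```python
from collections import defaultdict, deque

def bcp(clauses, asserted):
    """Unit propagation with two watched literals per clause. Literals are
    nonzero ints, -v negating variable v. Returns {var: bool} on success,
    or None on conflict."""
    assign = {}
    watched = [list(c[:2]) for c in clauses]    # the literals we watch
    watchers = defaultdict(set)                 # literal -> clause indices
    for i, w in enumerate(watched):
        for lit in w:
            watchers[lit].add(i)

    def val(lit):
        b = assign.get(abs(lit))
        return None if b is None else (b == (lit > 0))

    queue = deque()

    def enqueue(lit):
        if val(lit) is False:
            return False                        # conflicting assignment
        if val(lit) is None:
            assign[abs(lit)] = lit > 0
            queue.append(lit)
        return True

    for lit in asserted:                        # external assumptions
        if not enqueue(lit):
            return None
    for c in clauses:                           # initial unit clauses
        if len(c) == 1 and not enqueue(c[0]):
            return None

    while queue:
        lit = queue.popleft()
        for i in list(watchers[-lit]):          # watch -lit just became false
            c, w = clauses[i], watched[i]
            if len(w) == 1:
                return None                     # falsified unit clause
            other = w[0] if w[1] == -lit else w[1]
            if val(other) is True:
                continue                        # clause already satisfied
            repl = next((l for l in c if l not in w and val(l) is not False), None)
            if repl is not None:                # move the watch, no implication
                w[w.index(-lit)] = repl
                watchers[-lit].discard(i)
                watchers[repl].add(i)
            elif not enqueue(other):            # clause is unit: imply `other`
                return None
    return assign
```

The payoff is that assigning a variable touches only the clauses watching the falsified literal, rather than every clause containing the variable.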
Efficient Conflict Driven Learning in a Boolean Satisfiability Solver
 In ICCAD
, 2001
"... One of the most important features of current stateoftheart SAT solvers is the use of conflict based backtracking and learning techniques. In this paper, we generalize various conflict driven learning strategies in terms of different partitioning schemes of the implication graph. We reexamine th ..."
Abstract

Cited by 292 (8 self)
 Add to MetaCart
One of the most important features of current state-of-the-art SAT solvers is the use of conflict-based backtracking and learning techniques. In this paper, we generalize various conflict-driven learning strategies in terms of different partitioning schemes of the implication graph. We reexamine the learning techniques used in various SAT solvers and propose an array of new learning schemes. Extensive experiments with real-world examples show that the best-performing new learning scheme has at least a 2X speedup compared with learning schemes employed in state-of-the-art SAT solvers.
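The cut-based view the paper generalizes can be stated compactly: pick a partition of the implication graph with the conflict on one side, and the learned clause negates every reason-side literal feeding the cut. A toy sketch (our own representation, using explicit edge lists rather than a solver's internal graph):

```python
def learned_clause(edges, conflict_side):
    """Given implication-graph edges (u, v) meaning 'assignment u helped
    imply v', and a chosen conflict-side node set, return the learned
    clause: the negation of every reason-side literal with an edge
    crossing into the conflict side."""
    frontier = {u for (u, v) in edges if u not in conflict_side and v in conflict_side}
    return sorted(-lit for lit in frontier)
```

Different partitioning schemes of the same graph (for instance, cutting just after the first unique implication point) yield the different learning strategies the paper compares.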
Automating First-Order Relational Logic
, 2000
"... An analysis is described that can automatically find models of firstorder formulas with relational operators and scalar quantifiers. The formula is translated to a quantifierfree boolean formula that has a model exactly when the original formula has a model within a given scope (that is, involving ..."
Abstract

Cited by 116 (19 self)
 Add to MetaCart
An analysis is described that can automatically find models of first-order formulas with relational operators and scalar quantifiers. The formula is translated to a quantifier-free Boolean formula that has a model exactly when the original formula has a model within a given scope (that is, involving no more than some finite number of atoms). The paper presents a simple logic and gives a compositional translation scheme. It reports on the use of Alcoa, a tool based on the scheme, to analyze a variety of specifications expressed in Alloy, an object modelling notation based on the logic.
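As a toy illustration of such a scope-bounded translation (our own example, not Alloy's actual scheme), the formula `all x | some y | R(x, y)` over a scope of n atoms expands into one clause per atom x, over Boolean variables standing for the tuples of R:

```python
def forall_exists_to_cnf(n):
    """Within scope n, 'all x | some y | R(x, y)' becomes, for each atom x,
    the clause r(x,0) or r(x,1) or ... or r(x,n-1). Boolean variable
    r(x, y) is numbered 1 + x*n + y."""
    var = lambda x, y: 1 + x * n + y
    return [[var(x, y) for y in range(n)] for x in range(n)]
```

A satisfying assignment of the propositional formula then gives back a relation R witnessing the original formula within the scope, which is exactly the guarantee the translation provides.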
Validating SAT solvers using an independent resolution-based checker: practical implementations and other applications
 In Proceedings of Design, Automation and Test in Europe (DATE 2003)
, 2003
"... As the use of SAT solvers as core engines in EDA applications grows, it becomes increasingly important to validate their correctness. In this paper, we describe the implementation of an independent resolutionbased checking procedure that can check the validity of unsatisfiable claims produced by th ..."
Abstract

Cited by 99 (6 self)
 Add to MetaCart
As the use of SAT solvers as core engines in EDA applications grows, it becomes increasingly important to validate their correctness. In this paper, we describe the implementation of an independent resolution-based checking procedure that can check the validity of unsatisfiable claims produced by the SAT solver zchaff. We examine the practical implementation issues of such a checker and describe two implementations with different pros and cons. Experimental results show low overhead for the checking process. Our checker can work with many other modern SAT solvers with minor modifications, and it can provide information for debugging when checking fails. Finally we describe additional results that can be obtained by the validation process and briefly discuss their applications.
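The core rule such a checker verifies is a single resolution step; a minimal sketch (ours, independent of zchaff's actual trace format):

```python
def check_resolution_step(c1, c2, resolvent):
    """Return True iff `resolvent` follows from clauses c1 and c2 by one
    resolution step: some pivot variable occurs positively in c1 and
    negatively in c2, and the resolvent is the union of the remaining
    literals. A full checker replays such steps down to the empty clause."""
    for pivot in c1:
        if -pivot in c2:
            if set(resolvent) == (set(c1) - {pivot}) | (set(c2) - {-pivot}):
                return True
    return False
```

An unsatisfiability claim is validated when every learned clause in the trace checks out and the final derived clause is empty.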
Scalable automated verification via expert-system guided transformations
 in FMCAD
, 2004
"... Abstract. Transformationbased verification has been proposed to synergistically leverage various transformations to successively simplify and decompose large problems to ones which may be formally discharged. While powerful, such systems require a fair amount of user sophistication and experimentat ..."
Abstract

Cited by 28 (13 self)
 Add to MetaCart
Transformation-based verification has been proposed to synergistically leverage various transformations to successively simplify and decompose large problems to ones which may be formally discharged. While powerful, such systems require a fair amount of user sophistication and experimentation to yield greatest benefits – every verification problem is different, hence the most efficient transformation flow differs widely from problem to problem. Finding an efficient proof strategy not only enables exponential reductions in computational resources; it often makes the difference between obtaining a conclusive result or not. In this paper, we propose the use of an expert system to automate this proof strategy development process. We discuss the types of rules used by the expert system, and the type of feedback necessary between the algorithms and expert system, all oriented towards yielding a conclusive result with minimal resources. Experimental results are provided to demonstrate that such a system is able to automatically discover efficient proof strategies, even on large and complex problems with more than 100,000 state elements in their respective cones of influence. These results also demonstrate numerous types of algorithmic synergies that are critical to the automation of such complex proofs.
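The flavor of such rules can be conveyed with a toy dispatcher (the rule names and thresholds here are invented for illustration; the paper's actual rule base is far richer and feedback-driven):

```python
def pick_next_transformation(stats):
    """Choose the next transformation engine from feedback statistics
    about the current problem, expert-system style: each rule fires on a
    measurable property of the design."""
    if stats.get("state_elements", 0) > 50_000:
        return "localization"      # shrink the cone of influence first
    if stats.get("equivalent_latches", 0) > 0:
        return "redundancy_removal"
    if stats.get("bound_reached", 0) < 10:
        return "bmc"               # cheap bug hunting before proof attempts
    return "induction"
```

The feedback loop the paper emphasizes corresponds to re-running such rules after each engine reports its statistics, so the strategy adapts as the problem shrinks.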
Bounded model checking with QBF
 in Int’l Conf. on Theory and Applications of Satisfiability Testing
, 2005
"... Abstract. Current algorithms for bounded model checking (BMC) use SAT methods for checking satisfiability of Boolean formulas. These BMC methods suffer from a potential memory explosion problem. Methods based on the validity of Quantified Boolean Formulas (QBF) allow an exponentially more succinct r ..."
Abstract

Cited by 23 (1 self)
 Add to MetaCart
Current algorithms for bounded model checking (BMC) use SAT methods for checking satisfiability of Boolean formulas. These BMC methods suffer from a potential memory explosion problem. Methods based on the validity of Quantified Boolean Formulas (QBF) allow an exponentially more succinct representation of the checked formulas, but have not been widely used, because of the lack of an efficient decision procedure for QBF. We evaluate the usage of QBF in BMC, using general-purpose SAT and QBF solvers. We also present a special-purpose decision procedure for QBF used in BMC, and compare our technique with the methods using general-purpose SAT and QBF solvers on real-life industrial benchmarks. Our procedure performs much better for BMC than the general-purpose QBF solvers, without incurring the space overhead of propositional SAT.
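The memory blow-up the paper targets comes from SAT-based BMC copying the transition relation once per step. A minimal sketch of that unrolling (our own encoding conventions):

```python
def bmc_unroll(num_vars, init, trans, bad, k):
    """Build the SAT-style BMC formula I(s0) & T(s0,s1) & ... &
    T(s_{k-1},s_k) & Bad(s_k) as a clause list. `trans` relates variables
    1..num_vars to primed copies num_vars+1..2*num_vars; each step shifts
    indices by num_vars, so the formula grows linearly in k -- the
    blow-up QBF encodings avoid by quantifying over a single copy of T."""
    shift = lambda clause, off: [l + off if l > 0 else l - off for l in clause]
    cnf = [shift(c, 0) for c in init]
    for step in range(k):
        cnf += [shift(c, step * num_vars) for c in trans]
    cnf += [shift(c, k * num_vars) for c in bad]
    return cnf
```

In a QBF encoding the k copies of `trans` collapse into one universally reused copy, at the price of needing a QBF decision procedure.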
BDD-Based Decision Procedures for the Modal Logic K
 Journal of Applied Non-Classical Logics
, 2005
"... We describe BDDbased decision procedures for the modal logic K. Our approach is inspired by the automatatheoretic approach, but we avoid explicit automata construction. Instead, we compute certain fixpoints of a set of typeswhich can be viewed as an onthefly emptiness of the automaton. We use ..."
Abstract

Cited by 17 (1 self)
 Add to MetaCart
We describe BDD-based decision procedures for the modal logic K. Our approach is inspired by the automata-theoretic approach, but we avoid explicit automata construction. Instead, we compute certain fixpoints of a set of types, which can be viewed as an on-the-fly emptiness test of the automaton. We use BDDs to represent and manipulate such type sets, and investigate different kinds of representations as well as a "level-based" representation scheme. The latter turns out to speed up construction and reduce memory consumption considerably. We also study the effect of formula simplification on our decision procedures. To prove the viability of our approach, we compare our approach with a representative selection of other approaches, including a translation of K to QBF. Our results indicate that the BDD-based approach dominates for modally heavy formulae, while search-based approaches dominate for propositionally heavy formulae.
Is your Model Checker on Time? On the Complexity of Model Checking for Timed Modal Logics
, 2001
"... This paper studies the structural complexity of model checking for several timed modal logics presented in the literature. More precisely, we consider (variations on) the specification formalisms used in the tools CMC and Uppaal, and fragments of a timed calculus. For each of the logics, we charact ..."
Abstract

Cited by 14 (6 self)
 Add to MetaCart
This paper studies the structural complexity of model checking for several timed modal logics presented in the literature. More precisely, we consider (variations on) the specification formalisms used in the tools CMC and Uppaal, and fragments of a timed µ-calculus. For each of the logics, we characterize the computational complexity of model checking, as well as its specification and program complexity, using (parallel compositions of) timed automata as our system model. In particular, we show that the complexity of model checking for a timed µ-calculus interpreted over (networks of) timed automata is EXPTIME-complete, no matter whether the complexity is measured with respect to the size of the specification, of the model or of both. All the flavours of model checking for timed versions of Hennessy-Milner logic, and the restricted fragments of the timed µ-calculus studied in the literature on CMC and Uppaal, are shown to be PSPACE-complete or EXPTIME-complete. Amongst the complexity results offered in the paper is a theorem to the effect that the model checking problem for the sublanguage L_s of the timed µ-calculus, proposed by Larsen, Pettersson and Yi, is PSPACE-complete. This result is accompanied by an array of statements showing that any extension of L_s has an EXPTIME-complete model checking problem. We also argue that the model checking problem for the timed propositional µ-calculus Tµ is EXPTIME-complete, thus improving upon results by Henzinger, Nicollin, Sifakis and Yovine.
Theoretical framework for compositional sequential hardware equivalence verification in presence of design constraints
 In Proceedings of the International Conference on Computer-Aided Design
, 2004
"... We are interested in sequential hardware equivalence (or alignability equivalence) verification of synchronous sequential circuits [Pix92]. To cope with large industrial designs, the circuits must be divided into smaller subcircuits and verified separately. Furthermore, in order to succeed in verify ..."
Abstract

Cited by 11 (3 self)
 Add to MetaCart
We are interested in sequential hardware equivalence (or alignability equivalence) verification of synchronous sequential circuits [Pix92]. To cope with large industrial designs, the circuits must be divided into smaller subcircuits and verified separately. Furthermore, in order to succeed in verifying the subcircuits, design constraints must be added to the subcircuits. These constraints mimic “essential” behavior of the subcircuit environment. In this work, we extend the classical alignability theory in the presence of design constraints, and prove a compositionality result that allows inferring alignability of the circuits from alignability of the subcircuits. As a result, we build a divide-and-conquer framework for alignability verification. This framework is successfully used on Intel designs.
ω-regular languages are testable with a constant number of queries
, 2005
"... We continue the study of combinatorial property testing. For a property ψ, an εtest for ψ, for 0 < ε ≤ 1, is a randomized algorithm that given an input x, returns “yes” if x satisfies ψ, and returns “no ” with high probability if x is εfar from satisfying ψ, where εfar essentially means that an ε ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
We continue the study of combinatorial property testing. For a property ψ, an ε-test for ψ, for 0 < ε ≤ 1, is a randomized algorithm that, given an input x, returns “yes” if x satisfies ψ, and returns “no” with high probability if x is ε-far from satisfying ψ, where ε-far essentially means that an ε-fraction of x needs to be changed in order for it to satisfy ψ. In [AKNS99], Alon et al. show that regular languages are ε-testable with a constant (depending on ψ and ε, and independent of x) number of queries. We extend the result in [AKNS99] to ω-regular languages: given a nondeterministic Büchi automaton A on infinite words and a small ε > 0, we describe an algorithm that gets as input an infinite lasso-shaped word w of the form x · y^ω, for finite words x and y, samples only a constant number of letters in x and y, returns “yes” if w ∈ L(A), and returns “no” with probability 2/3 if w is ε-far from L(A). We also discuss the applicability of property testing to formal verification, where ω-regular languages are used for the specification of the behavior of non-terminating reactive systems, and computations correspond to lasso-shaped words.
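For intuition, here is a toy ε-test for the simplest possible property, “the word is all zeros” (not the paper's algorithm). Sampling O(1/ε) positions suffices: a word ε-far from all-zeros has at least an ε-fraction of nonzero letters, so roughly 2/ε uniform samples all miss them with probability at most (1−ε)^(2/ε) ≤ e^(−2) < 1/3.

```python
import random

def eps_test_all_zeros(word, eps):
    """Accept every all-zeros word; reject a word eps-far from all-zeros
    with probability >= 2/3, reading only O(1/eps) letters of the input."""
    samples = max(1, int(2 / eps))
    for _ in range(samples):
        if word[random.randrange(len(word))] != "0":
            return False
    return True
```

The paper's setting is harder in two ways: the language is given by a Büchi automaton rather than a fixed pattern, and the input is an infinite lasso-shaped word sampled through its finite x and y parts.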