Results 1–10 of 89
Synthesis of Reactive(1) Designs
 In Proc. Verification, Model Checking, and Abstract Interpretation (VMCAI'06), 2006
Abstract

Cited by 118 (9 self)
Abstract. We consider the problem of synthesizing digital designs from their LTL specification. In spite of the theoretical double exponential lower bound for the general case, we show that for many expressive specifications of hardware designs the problem can be solved in time N^3, where N is the size of the state space of the design. We describe the context of the problem, as part of the Prosyd European project, which aims to provide a property-based development flow for hardware designs. Within this project, synthesis plays an important role, first in order to check whether a given specification is realizable, and then for synthesizing part of the developed system. The class of LTL formulas considered is that of Generalized Reactivity(1) (generalized Streett(1)) formulas, i.e., formulas of the form (□◇p1 ∧ · · · ∧ □◇pm) → (□◇q1 ∧ · · · ∧ □◇qn), where each pi, qi is a Boolean combination of atomic propositions. We also consider the more general case in which each pi, qi is an arbitrary past LTL formula over atomic propositions. For this class of formulas, we present an N^3-time algorithm which checks whether such a formula is realizable, i.e., whether there exists a circuit which satisfies the formula under any set of inputs provided by the environment. In case the specification is realizable, the algorithm proceeds to construct an automaton which represents one of the possible implementing circuits. The automaton is computed and presented symbolically.
Branching vs. Linear Time: Final Showdown
 Proceedings of the 2001 Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2001 (LNCS Volume 2031), 2001
Abstract

Cited by 79 (8 self)
The discussion of the relative merits of linear versus branching-time frameworks goes back to the early 1980s. One of the beliefs dominating this discussion has been that "while specifying is easier in LTL (linear temporal logic), verification is easier for CTL (branching temporal logic)". Indeed, the restricted syntax of CTL limits its expressive power, and many important behaviors (e.g., strong fairness) cannot be specified in CTL. On the other hand, while model checking for CTL can be done in time linear in the size of the specification, for LTL it takes time exponential in the size of the specification. Because of these arguments, and for historical reasons, the dominant temporal specification language in industrial use is CTL.
Proving that programs eventually do something good
 In POPL'06: Principles of Programming Languages, 2007
Abstract

Cited by 44 (16 self)
In recent years we have seen great progress in the area of automatic source-level static analysis tools. However, most of today's program verification tools are limited to properties that guarantee the absence of bad events (safety properties). Until now no formal software analysis tool has provided fully automatic support for proving properties that ensure that good events eventually happen (liveness properties). In this paper we present such a tool, which handles liveness properties of large systems written in C. Liveness properties are described in an extension of the specification language used in the SDV system. We have used the tool to automatically prove critical liveness properties of Windows device drivers, and found several previously unknown liveness bugs.
Enhanced Vacuity Detection in Linear Temporal Logic
2003
Abstract

Cited by 41 (4 self)
One of the advantages of temporal-logic model-checking tools is their ability to accompany a negative answer to a correctness query with a counterexample to the satisfaction of the specification in the system. On the other hand, when the answer to the correctness query is positive, most model-checking tools provide no witness for the satisfaction of the specification. In the last few years there has been growing awareness of the importance of suspecting an error in the system or the specification even in cases where model checking succeeds.
Multiple-Counterexample Guided Iterative Abstraction Refinement: An Industrial Evaluation
2003
Abstract

Cited by 29 (0 self)
In this paper, we describe a completely automated framework for iterative abstraction refinement that is fully integrated into a formal-verification environment. This environment consists of three basic software tools: Forecast, a BDD-based model checker; Thunder, a SAT-based bounded model checker; and MCE, a technology for multiple-counterexample analysis. In our framework, the initial abstraction is chosen relative to the property under verification. The abstraction is model checked by Forecast; in case of failure, a counterexample is returned. Our framework includes an abstract-counterexample analyzer module that applies bounded model checking techniques to check whether the abstract counterexample holds in the concrete model. If it does, it is extended to a concrete counterexample. This important capability is provided as a separate tool that also addresses one of the major problems of verification by manual abstraction.
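The refinement loop this abstract describes (model check the abstraction; if a counterexample appears, replay it on the concrete model; refine if it is spurious) can be sketched generically. The callables below are placeholders standing in for the roles of Forecast, the BMC-based counterexample analyzer, and the refinement step; they do not model those tools.

```python
def cegar(abstraction, model_check, replays_concretely, refine, max_iters=100):
    """Schematic counterexample-guided abstraction refinement loop.

    model_check(abstraction) -> None if the property holds on the
        abstraction, else an abstract counterexample.
    replays_concretely(cex)  -> True if the counterexample holds in
        the concrete model (a genuine bug), False if it is spurious.
    refine(abstraction, cex) -> a refined abstraction ruling out cex.
    """
    for _ in range(max_iters):
        cex = model_check(abstraction)
        if cex is None:
            return ("verified", abstraction)
        if replays_concretely(cex):
            return ("bug", cex)
        abstraction = refine(abstraction, cex)
    return ("unknown", abstraction)

# Toy run: the "abstraction" is just a refinement level; the first two
# abstract counterexamples are spurious, and refinement removes them.
result = cegar(
    0,
    lambda level: None if level >= 2 else ("cex", level),
    lambda cex: False,
    lambda level, cex: level + 1,
)
```

The loop terminates in one of three ways: the property is verified on a refined abstraction, a concrete bug is confirmed, or the iteration budget runs out.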
Experimental evaluation of classical automata constructions
 In LPAR 2005, LNCS 3835, 2005
Abstract

Cited by 28 (3 self)
Abstract. There are several algorithms for producing the canonical DFA from a given NFA. While the theoretical complexities of these algorithms are known, there has not been a systematic empirical comparison between them. In this work we propose a probabilistic framework for testing the performance of automata-theoretic algorithms. We conduct a direct experimental comparison between Hopcroft's and Brzozowski's algorithms. We show that while Hopcroft's algorithm has better overall performance, Brzozowski's algorithm performs better for "high-density" NFA. We also consider the universality problem, which is traditionally solved explicitly via the subset construction. We propose an encoding that allows this problem to be solved symbolically via a model checker. We compare the performance of this approach to that of the standard explicit algorithm, and show that the explicit approach performs significantly better.
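The subset construction referred to above is short to state. This is a minimal sketch, not the paper's implementation; the NFA is given as a successor map without ε-transitions, and only the reachable part of the DFA is built:

```python
from itertools import chain

def subset_construction(alphabet, delta, start, accepting):
    """Determinize an NFA via the classical subset construction.

    delta maps (state, symbol) -> set of successor states; missing
    keys mean "no successors". Each DFA state is a frozenset of
    NFA states.
    """
    dfa_start = frozenset([start])
    dfa_states = {dfa_start}
    dfa_delta = {}
    worklist = [dfa_start]
    while worklist:
        S = worklist.pop()
        for a in alphabet:
            # The successor of a subset is the union of the members' successors.
            T = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in S))
            dfa_delta[(S, a)] = T
            if T not in dfa_states:
                dfa_states.add(T)
                worklist.append(T)
    # A subset is accepting iff it contains some accepting NFA state.
    dfa_accepting = {S for S in dfa_states if S & accepting}
    return dfa_states, dfa_delta, dfa_start, dfa_accepting

# NFA for "the second symbol from the end is 'a'" over {a, b}:
# state 0 loops and guesses the crucial 'a'; state 2 is accepting.
nfa_delta = {(0, 'a'): {0, 1}, (0, 'b'): {0},
             (1, 'a'): {2},    (1, 'b'): {2}}
dfa_states, _, _, dfa_accepting = subset_construction('ab', nfa_delta, 0, {2})
```

Brzozowski's algorithm, mentioned in the abstract, applies this same determinization twice, each time to the reversed automaton, which yields the minimal DFA directly.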
Büchi Complementation Made Tighter
 International Journal of Foundations of Computer Science, 2004
Abstract

Cited by 28 (10 self)
The complementation problem for nondeterministic word automata has numerous applications in formal verification. In particular, the language-containment problem, to which many verification problems are reduced, involves complementation. For automata on finite words, which correspond to safety properties, complementation involves determinization. The 2^n blow-up that is caused by the subset construction is justified by a tight lower bound. For Büchi automata on infinite words, which are required for the modeling of liveness properties, optimal complementation constructions are quite complicated, as the subset construction is not sufficient. From a theoretical point of view, the problem has been considered solved since 1988, when Safra came up with a determinization construction for Büchi automata, leading to a 2^O(n log n) complementation construction, and Michel came up with a matching lower bound. A careful analysis, however, of the exact blow-up in Safra's and Michel's bounds reveals an exponential gap in the constants hiding in the O() notations: while the upper bound on the number of states in Safra's complementary automaton is n^(2n), Michel's lower bound involves only an n! blow-up.
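The 2^n blow-up in the finite-word case can be observed directly on the standard textbook witness (not an example from the paper): the language "the k-th symbol from the end is 'a'" has a (k+1)-state NFA whose determinization reaches exactly 2^k subsets. A small sketch that counts the reachable subsets:

```python
def reachable_dfa_size(k):
    """Count the DFA states obtained by determinizing the standard
    (k+1)-state NFA for "the k-th symbol from the end is 'a'":
    state 0 loops on both letters and, on 'a', also guesses that this
    'a' is k-th from the end by moving to state 1; states 1..k-1
    advance on any letter; state k is accepting with no successors."""
    def step(S, sym):
        T = set()
        for q in S:
            if q == 0:
                T.add(0)          # keep waiting for a later guess
                if sym == 'a':
                    T.add(1)      # guess: this 'a' is k-th from the end
            elif q < k:
                T.add(q + 1)      # count down the remaining letters
        return frozenset(T)
    seen = {frozenset([0])}
    worklist = [frozenset([0])]
    while worklist:
        S = worklist.pop()
        for sym in 'ab':
            T = step(S, sym)
            if T not in seen:
                seen.add(T)
                worklist.append(T)
    return len(seen)
```

After reading k letters, the reachable subset records exactly which of the last k positions carried an 'a', so all 2^k subsets occur; a standard fooling-set argument shows no smaller DFA suffices, which is the tight lower bound the abstract mentions.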
Reasoning with temporal logic on truncated paths
 In CAV'03, LNCS 2725, 2003
Abstract

Cited by 27 (4 self)
Abstract. We consider the problem of reasoning with linear temporal logic on truncated paths. A truncated path is a path that is finite, but not necessarily maximal. Truncated paths arise naturally in several areas, among which are incomplete verification methods (such as simulation or bounded model checking) and hardware resets. We present a formalism for reasoning about truncated paths, and analyze its characteristics.
The Büchi complementation saga
 In Proceedings of the International Symposium on Theoretical Aspects of Computer Science, STACS 2007, 2007
Abstract

Cited by 26 (3 self)
Abstract. The complementation problem for nondeterministic word automata has numerous applications in formal verification. In particular, the language-containment problem, to which many verification problems are reduced, involves complementation. For automata on finite words, which correspond to safety properties, complementation involves determinization. The 2^n blow-up that is caused by the subset construction is justified by a tight lower bound. For Büchi automata on infinite words, which are required for the modeling of liveness properties, optimal complementation constructions are quite complicated, as the subset construction is not sufficient. We review here progress on this problem, which dates back to its introduction in Büchi's seminal 1962 paper.
On Complementing Nondeterministic Büchi Automata
2003
Abstract

Cited by 25 (8 self)
Several optimal algorithms have been proposed for the complementation of nondeterministic Büchi word automata. Due to the intricacy of the problem and the exponential blow-up that complementation involves, these algorithms have never been used in practice, even though an effective complementation construction would be of significant practical value. Recently, Kupferman and Vardi described a complementation algorithm that goes through weak alternating automata and that seems simpler than previous algorithms. We combine their algorithm with known and new minimization techniques. Our approach is based on optimizations of both the intermediate weak alternating automaton and the final nondeterministic automaton, and involves techniques of rank and height reductions, as well as direct and fair simulation.