Results 1-10 of 26
The Murφ Verification System
In Computer Aided Verification, 8th International Conference, 1996
"... This is a brief overview of the Murφ verification system. ..."
Abstract

Cited by 136 (8 self)
 Add to MetaCart
This is a brief overview of the Murφ verification system.
Efficient BDD Algorithms for FSM Synthesis and Verification
In IEEE/ACM Proceedings International Workshop on Logic Synthesis, Lake Tahoe (NV), 1995
"... We describe a set of BDD based algorithms for efficient FSM synthesis and verification. We establish that the core computation in both synthesis and verification is forming the image and preimage of sets of states under the transition relation characterizing the design. To make these steps as effic ..."
Abstract

Cited by 60 (2 self)
 Add to MetaCart
We describe a set of BDD based algorithms for efficient FSM synthesis and verification. We establish that the core computation in both synthesis and verification is forming the image and preimage of sets of states under the transition relation characterizing the design. To make these steps as efficient as possible, we address BDD variable ordering, use of partitioned transition relations, and use of clustering. We provide an integrated set of algorithms and give references and comparisons with previous work. We report experimental results on a series of seven industrial examples containing from 28 to 172 binary valued latches. 1 Introduction The advent of modern VLSI CAD tools has radically changed the process of designing digital systems. The first CAD tools automated the final stages of design, such as placement and routing. As the low level steps became better understood, the focus shifted to the higher stages. In particular logic synthesis, the science of optimizing designs (for ...
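The image/preimage computation this abstract identifies as the core step can be sketched with explicit Python sets; BDDs would represent these sets and the relation symbolically, so this is an illustrative model of the operation, not the paper's algorithms.

```python
# Illustrative sketch: image/preimage of a state set under a transition
# relation, using explicit sets where BDDs would be used symbolically.

def image(states, transitions):
    """Successors of `states` under `transitions` (a set of (s, t) pairs)."""
    return {t for (s, t) in transitions if s in states}

def preimage(states, transitions):
    """Predecessors of `states` under the same relation."""
    return {s for (s, t) in transitions if t in states}

def reachable(init, transitions):
    """Reachability as repeated image computation until a fixed point --
    the loop that variable ordering and partitioned relations speed up."""
    frontier, seen = set(init), set(init)
    while frontier:
        frontier = image(frontier, transitions) - seen
        seen |= frontier
    return seen

# Tiny example: a 4-state chain 0 -> 1 -> 2 -> 3.
T = {(0, 1), (1, 2), (2, 3)}
assert image({0, 1}, T) == {1, 2}
assert preimage({3}, T) == {2}
assert reachable({0}, T) == {0, 1, 2, 3}
```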
Parallelizing the Murφ verifier
In Computer Aided Verification, 9th International Conference, 1997
"... With the use of state and memory reduction techniques in verification by explicit state enumeration, runtime becomes a major limiting factor. We describe a parallel version of the explicit state enumeration verifier Murφ for distributed memory multiprocessors and networks of workstations that is ba ..."
Abstract

Cited by 56 (0 self)
 Add to MetaCart
With the use of state and memory reduction techniques in verification by explicit state enumeration, runtime becomes a major limiting factor. We describe a parallel version of the explicit state enumeration verifier Murφ for distributed memory multiprocessors and networks of workstations that is based on the message passing paradigm. In experiments with three complex cache coherence protocols, parallel Murφ shows close to linear speedups, which are largely insensitive to communication latency and bandwidth. There is some slowdown with increasing communication overhead, for which a simple yet relatively accurate approximation formula is given. Techniques to reduce overhead and required bandwidth and to allow heterogeneity and dynamically changing load in the parallel machine are discussed, which we expect will allow good speedups when using conventional networks of workstations.
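The distribution idea behind such parallel explicit-state verifiers is commonly that a hash of the state descriptor determines which node owns (stores and expands) each state. A minimal single-process simulation of that scheme, with all names illustrative rather than parallel Murφ's actual implementation:

```python
# Illustrative sketch: hash-partitioned distributed state enumeration,
# simulated with per-node queues instead of real message passing.
import hashlib
from collections import deque

def owner(state, num_nodes):
    """Node responsible for a state: hash of its descriptor mod node count."""
    h = hashlib.sha256(repr(state).encode()).digest()
    return int.from_bytes(h[:4], "big") % num_nodes

def parallel_search(init_states, successors, num_nodes):
    """Each node keeps its own state table and work queue; newly generated
    states are 'sent' to their owning node for the duplicate check."""
    tables = [set() for _ in range(num_nodes)]
    queues = [deque() for _ in range(num_nodes)]
    for s in init_states:
        queues[owner(s, num_nodes)].append(s)
    while any(queues):
        for n in range(num_nodes):
            while queues[n]:
                s = queues[n].popleft()
                if s in tables[n]:          # duplicate: drop
                    continue
                tables[n].add(s)
                for t in successors(s):     # route successors to owners
                    queues[owner(t, num_nodes)].append(t)
    return set().union(*tables)

# Toy model: counter modulo 6 with two increment rules.
succ = lambda s: [(s + 1) % 6, (s + 2) % 6]
assert parallel_search([0], succ, num_nodes=3) == {0, 1, 2, 3, 4, 5}
```

Because ownership depends only on the state's hash, the duplicate check never needs coordination between nodes, which is what makes the speedups insensitive to latency.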
Distributed Explicit Fair Cycle Detection (Set-Based Approach)
"... The fair cycle detectiou problem is at the heart of both LTL and fair CTL model checking. This paper preseuts a new distributed scalable algorithm for explicit fair cycle detection. Our method combines the simplicity of the distributiou of explicitly preseuted data structure and the features of ..."
Abstract

Cited by 40 (12 self)
 Add to MetaCart
The fair cycle detectiou problem is at the heart of both LTL and fair CTL model checking. This paper preseuts a new distributed scalable algorithm for explicit fair cycle detection. Our method combines the simplicity of the distributiou of explicitly preseuted data structure and the features of symbolic algorithm allowing for an efficient parallelisa tion. If a fair cycle (i.e. couuterexample) is detected, theu the algorithm produces a cycle, which is in general shorter than that produced by depthfirst search based algorithms, Experimental results confirm that our approach outperforms that based ou a direct implementation of the best sequential algorithm.
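The set-based style of fair cycle detection can be illustrated with an OWCTY-like trimming loop over explicit sets: alternately restrict the candidate set to states reachable from fair states and eliminate states with no predecessor in the set. This is a generic sketch of the set-based idea, not the paper's distributed algorithm.

```python
# Illustrative sketch: set-based fair cycle detection via alternating
# forward reachability (from fair states) and predecessor elimination.

def has_fair_cycle(states, transitions, fair):
    """True iff the graph contains a cycle through some fair state."""
    s = set(states)
    while True:
        # Restrict to states reachable from fair states within s.
        frontier = fair & s
        reach = set(frontier)
        while frontier:
            frontier = {t for (u, t) in transitions
                        if u in frontier and t in s} - reach
            reach |= frontier
        # Eliminate states with no predecessor inside the set; cycle
        # states always survive, since each has a predecessor on the cycle.
        changed = True
        while changed:
            keep = {t for (u, t) in transitions if u in reach and t in reach}
            changed = keep != reach
            reach = keep
        if reach == s:          # fixed point: nonempty set => fair cycle
            return bool(s)
        s = reach

# 0 <-> 1 with a dead-end 2: fair state 0 lies on a cycle.
T = {(0, 1), (1, 0), (1, 2)}
assert has_fair_cycle({0, 1, 2}, T, {0}) is True
# A simple chain has no cycle at all.
assert has_fair_cycle({0, 1, 2}, {(0, 1), (1, 2)}, {0}) is False
```

Both phases are whole-set operations, which is what makes this style easier to distribute than depth-first search.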
Improved Probabilistic Verification by Hash Compaction
In Advanced Research Working Conference on Correct Hardware Design and Verification Methods, 1995
"... . We present and analyze a probabilistic method for verification by explicit state enumeration, which improves on the "hashcompact" method of Wolper and Leroy. The hashcompact method maintains a hash table in which compressed values for states instead of full state descriptors are stored. ..."
Abstract

Cited by 36 (7 self)
 Add to MetaCart
. We present and analyze a probabilistic method for verification by explicit state enumeration, which improves on the "hashcompact" method of Wolper and Leroy. The hashcompact method maintains a hash table in which compressed values for states instead of full state descriptors are stored. This method saves space but allows a nonzero probability of omitting states during verification, which may cause verification to miss design errors (i.e. verification may produce "false positives"). Our method improves on Wolper and Leroy's by calculating the hash and compressed values independently, and by using a specific hashing scheme that requires a low number of probes in the hash table. The result is a large reduction in the probability of omitting a state. Hence, we can achieve a given upper bound on the probability of omitting a state using fewer bits per compressed state. For example, we can reduce the number of bytes stored for each state from the eight recommended by Wolper and Leroy to o...
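The key refinement, computing the table location and the stored compressed value independently, can be sketched as follows. Deriving the two values from disjoint halves of one digest, and using linear probing in place of the paper's specific hashing scheme, are illustrative assumptions.

```python
# Illustrative sketch of hash compaction: the probe-start slot and the
# stored signature are computed independently of each other.
import hashlib

def two_hashes(state, table_size):
    d = hashlib.sha256(repr(state).encode()).digest()
    slot = int.from_bytes(d[:4], "big") % table_size   # where to probe
    sig = int.from_bytes(d[4:9], "big")                # 40-bit compressed value
    return slot, sig

class CompactedTable:
    """Open-addressed table storing only compressed signatures. A matching
    signature is treated as 'already seen' -- the small omission
    probability that the paper's analysis bounds."""
    def __init__(self, size):
        self.size = size
        self.slots = [None] * size

    def insert(self, state):
        """True if the state is (probably) new, False if (probably) seen."""
        slot, sig = two_hashes(state, self.size)
        for i in range(self.size):          # linear probing (illustrative)
            j = (slot + i) % self.size
            if self.slots[j] is None:
                self.slots[j] = sig
                return True
            if self.slots[j] == sig:
                return False
        raise RuntimeError("state table full")

t = CompactedTable(16)
assert t.insert("s0") is True      # first visit stores the signature
assert t.insert("s0") is False     # revisit matches the stored signature
```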
Using Magnetic Disk instead of Main Memory in the Murφ Verifier
, 1998
"... In verification by explicit state enumeration a randomly accessed state table is maintained. In practice, the total main memory available for this state table is a major limiting factor in verification. We describe a version of the explicit state enumeration verifier Mur' that allows using m ..."
Abstract

Cited by 34 (2 self)
 Add to MetaCart
In verification by explicit state enumeration a randomly accessed state table is maintained. In practice, the total main memory available for this state table is a major limiting factor in verification. We describe a version of the explicit state enumeration verifier Mur' that allows using magnetic disk instead of main memory for storing almost all of the state table. The algorithm avoids costly random accesses to disk and amortizes the cost of linearly reading the state table from disk over all states in a certain breadthfirst level. The remaining runtime overhead for accessing the disk can be strongly reduced by combining the scheme with hash compaction. We show how to do this combination efficiently and analyze the resulting algorithm. In experiments with three complex cache coherence protocols, the new algorithm achieves memory savings factors of one to two orders of magnitude with a runtime overhead of typically only around 15%. Keywords protocol verification, expli...
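The amortization idea can be sketched with a Python list standing in for the sequentially read on-disk table: candidates from one breadth-first level are collected in memory, then one linear pass over the "disk" filters out the states already known. This is an illustrative reconstruction, not the verifier's code.

```python
# Illustrative sketch: one BFS level with a disk-resident state table.
# A plain list stands in for the file; only sequential reads/appends occur.

def bfs_level_with_disk(disk_table, frontier, successors):
    """Expand one breadth-first level; return the genuinely new states."""
    candidates = set()
    for s in frontier:                     # generate in memory
        candidates.update(successors(s))
    for old in disk_table:                 # one sequential 'disk' scan,
        candidates.discard(old)            # cost amortized over the level
    disk_table.extend(sorted(candidates))  # sequential append of new states
    return candidates

disk = [0]                                 # table initially holds the start state
succ = lambda s: [(s + 1) % 4]
level1 = bfs_level_with_disk(disk, {0}, succ)
assert level1 == {1}
level2 = bfs_level_with_disk(disk, level1, succ)
assert level2 == {2}
```

The point is that random lookups never touch the disk: the duplicate check happens once per level as a streaming pass, which is cheap per state when levels are large.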
A New Scheme for Memory-Efficient Probabilistic Verification
In IFIP TC6/WG6.1 Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols, and Protocol Specification, Testing, and Verification, 1996
"... In verification by explicit state enumeration, for each reachable state of the protocol being verified the full state descriptor is stored in a state table. Two probabilistic methods  bitstate hashing and hash compaction  have been proposed in the literature that store much fewer bits for each s ..."
Abstract

Cited by 21 (6 self)
 Add to MetaCart
In verification by explicit state enumeration, for each reachable state of the protocol being verified the full state descriptor is stored in a state table. Two probabilistic methods  bitstate hashing and hash compaction  have been proposed in the literature that store much fewer bits for each state but come at the price of some probability that not all reachable states will be explored during the search, and that the verifier may thus produce false positives. Holzmann introduced bitstate hashing and derived an approximation formula for the average probability that a particular state is not omitted during the search, but this formula does not give a bound on the probability of false positives. In contrast, the analysis for hash compaction, introduced by Wolper and Leroy and improved upon by Stern and Dill, yielded a bound on the probability that not even one state is omitted during the search, thus providing a bound on the probability of false positives. In this paper, we propose a...
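For contrast with the compressed-signature table above, bitstate hashing stores no per-state record at all: each state sets a few bits in one large bit array, and a state whose bits are all already set is assumed visited, which is exactly where omissions arise. A minimal sketch (two hash positions per state is an illustrative choice):

```python
# Illustrative sketch of bitstate hashing: one bit array, a few bit
# positions per state, no stored state descriptors at all.
import hashlib

class Bitstate:
    def __init__(self, bits):
        self.bits = bytearray(bits // 8)
        self.n = bits

    def _positions(self, state):
        d = hashlib.sha256(repr(state).encode()).digest()
        yield int.from_bytes(d[:4], "big") % self.n
        yield int.from_bytes(d[4:8], "big") % self.n

    def insert(self, state):
        """True if the state looks new; False if (possibly wrongly) seen."""
        new = False
        for p in self._positions(state):
            byte, bit = divmod(p, 8)
            if not (self.bits[byte] >> bit) & 1:
                new = True                     # at least one bit was unset
                self.bits[byte] |= 1 << bit
        return new

b = Bitstate(1 << 10)
assert b.insert("init") is True    # bits were clear: treated as new
assert b.insert("init") is False   # all bits set: treated as visited
```

The asymmetry the abstract describes follows from the data structures: a bit array gives only an average-case omission estimate, while a table of compressed values supports a bound on the probability that even one state is omitted.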
Improving Efficiency of Symbolic Model Checking for State-Based System Requirements
In ISSTA 98: Proceedings of the ACM SIGSOFT International Symposium on Software Testing and Analysis, 1998
"... We present various techniques for improving the time and space efficiency of symbolic model checking for system requirements specified as synchronous finite state machines. We used these techniques in our analysis of the system requirements specification of TCAS II, a complex aircraft collision avoi ..."
Abstract

Cited by 15 (5 self)
 Add to MetaCart
We present various techniques for improving the time and space efficiency of symbolic model checking for system requirements specified as synchronous finite state machines. We used these techniques in our analysis of the system requirements specification of TCAS II, a complex aircraft collision avoidance system. They together reduce the time and space complexities by orders of magnitude, making feasible some analysis that was previously intractable. The TCAS II requirements were written in RSML, a dialect of statecharts. Keywords Formal verification, symbolic model checking, reachability analysis, binary decision diagrams, partitioned transition relation, statecharts, RSML, TCAS II, system requirements specification, abstraction. 1 Introduction Formal verification based on state exploration can be considered an extreme form of simulation: every possible behavior of the system is checked for correctness. Symbolic model checking [?] using binary decision diagrams (BDDs) [?] is an effic...
Techniques For Efficient Formal Verification Using Binary Decision Diagrams
, 1995
"... The appeal of automatic formal verification is that it's automatic  minimal human labor and expertise should be needed to get useful results and counterexamples. BDD(binary decision diagram)based approaches have promised to allow automatic verification of complex, real systems. For large cl ..."
Abstract

Cited by 15 (0 self)
 Add to MetaCart
The appeal of automatic formal verification is that it's automatic  minimal human labor and expertise should be needed to get useful results and counterexamples. BDD(binary decision diagram)based approaches have promised to allow automatic verification of complex, real systems. For large classes of problems, however, (including many distributed protocols, multiprocessor systems, and network architectures) this promise has yet to be fulfilled. Indeed, the few successes have required extensive time and effort from sophisticated researchers in the field. Clearly, techniques are needed that are more sophisticated than the obvious direct implementation of theoretical results. This thesis addresses that need, emphasizing an application domain that has been particularly difficult for BDDbased methods  highlevel models of systems or distributed protocols  rather than gatelevel descriptions of circuits. Additionally, the emphasis is on providing useful debugging information for the...
Combining State Space Caching and Hash Compaction
In Methoden des Entwurfs und der Verifikation digitaler Systeme, 4. GI/ITG/GME Workshop, 1996
"... In verification by explicit state enumeration, for each reachable state the full state descriptor is stored in a state table. Two methods  state space caching and hash compaction  that reduce the memory requirements for this table have been proposed in the literature. In state space caching, &qu ..."
Abstract

Cited by 13 (3 self)
 Add to MetaCart
In verification by explicit state enumeration, for each reachable state the full state descriptor is stored in a state table. Two methods  state space caching and hash compaction  that reduce the memory requirements for this table have been proposed in the literature. In state space caching, "old" states are replaced by newly reached ones once the table fills up, which might increase the runtime requirements for verification. In hash compaction, introduced by Wolper and Leroy and improved upon by Stern and Dill, a compressed state descriptor is stored instead of the full one. Here, the memory savings come at the price of a small probability that not all reachable states will be explored during the state enumeration. In this paper, we propose and analyze a new scheme to combine state space caching and hash compaction. In the new scheme, an open addressing collision resolution scheme with a limit on the number of probes in the state table is employed. The new scheme saves roughly 60...
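The combination can be sketched as a bounded-probe open-addressed table of compressed signatures that evicts an entry when all probed slots are occupied; the eviction policy, probe sequence, and sizes here are illustrative assumptions, not the paper's scheme.

```python
# Illustrative sketch: hash compaction (compressed signatures) combined
# with state space caching (eviction when the bounded probe sequence fails).
import hashlib
import random

class CachingCompactedTable:
    def __init__(self, size, max_probes=4):
        self.size, self.max_probes = size, max_probes
        self.slots = [None] * size

    def _probe(self, state):
        d = hashlib.sha256(repr(state).encode()).digest()
        start = int.from_bytes(d[:4], "big") % self.size
        sig = int.from_bytes(d[4:9], "big")   # 40-bit compressed value
        return start, sig

    def insert(self, state):
        """True if the state is treated as new, False if treated as seen."""
        start, sig = self._probe(state)
        probes = [(start + i) % self.size for i in range(self.max_probes)]
        for j in probes:
            if self.slots[j] == sig:          # (probably) already visited
                return False
            if self.slots[j] is None:
                self.slots[j] = sig
                return True
        # All probed slots occupied: evict one entry (the caching step);
        # the evicted state may later be re-expanded instead of the
        # search running out of memory.
        self.slots[random.choice(probes)] = sig
        return True

t = CachingCompactedTable(8, max_probes=2)
assert t.insert("a") is True
assert t.insert("a") is False
```

Bounding the probe count keeps lookups cheap and makes eviction local to a few slots, trading a bounded amount of re-expansion for the ability to keep running when the table fills.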