Results 1–10 of 26
Protocol Verification as a Hardware Design Aid
In IEEE International Conference on Computer Design: VLSI in Computers and Processors, 1992
Cited by 234 (27 self)
Abstract
The role of automatic formal protocol verification in hardware design is considered. Principles are identified that maximize the benefits of protocol verification while minimizing the labor and computation required. A new protocol description language and verifier (both called Murφ) are described, along with experiences in applying them to two industrial protocols that were developed as part of hardware designs.
Better Verification Through Symmetry
1996
Cited by 185 (8 self)
Abstract
A fundamental difficulty in automatic formal verification of finite-state systems is the state explosion problem: even relatively simple systems can produce very large state spaces, causing great difficulties for methods that rely on explicit state enumeration. We address the problem by exploiting structural symmetries in the description of the system to be verified. We make symmetries easy to detect by introducing a new data type, scalarset (a finite and unordered set), to our description language. The operations on scalarsets are restricted so that states are guaranteed to have the same future behaviors, up to permutation of the elements of the scalarsets. Using the symmetries implied by scalarsets, a verifier can automatically generate a reduced state space, on the fly. We provide a proof of the soundness of the new symmetry-based verification algorithm based on a definition of the formal semantics of a simple description language with scalarsets. The algorithm has been implemented ...
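As a rough illustration of the symmetry reduction this entry describes (a sketch, not the Murφ implementation): a state that refers to scalarset elements only by index can be replaced by a canonical representative, here the lexicographically least image under all permutations of the scalarset. The state encoding below is a hypothetical minimal case.

```python
from itertools import permutations

def canonicalize(state, n):
    """Return the lexicographically least image of `state` under every
    permutation of the scalarset {0, ..., n-1}.

    Hypothetical encoding: `state` is a tuple indexed by scalarset
    elements whose entries are also scalarset elements, e.g. entry i
    means "component i points at component state[i]".
    """
    best = None
    for perm in permutations(range(n)):
        inv = [0] * n
        for i, p in enumerate(perm):
            inv[p] = i  # inv is the inverse permutation of perm
        # Apply the permutation both to positions and to stored values.
        image = tuple(perm[state[inv[j]]] for j in range(n))
        if best is None or image < best:
            best = image
    return best

# States that differ only by swapping scalarset elements 0 and 1
# collapse to the same representative:
assert canonicalize((2, 2, 0), 3) == canonicalize((2, 2, 1), 3)
```

A verifier that stores only such representatives explores one state per symmetry orbit, which is where the reduction (up to a factor of n!) comes from; practical implementations avoid enumerating all n! permutations.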
Coverage Preserving Reduction Strategies for Reachability Analysis
Cited by 58 (8 self)
Abstract
We study the effect of three new reduction strategies for conventional reachability analysis, as used in automated protocol validation algorithms. The first two strategies are implementations of partial order semantics rules that attempt to minimize the number of execution sequences that need to be explored for a full state space exploration. The third strategy is the implementation of a state compression scheme that attempts to minimize the amount of memory that is used to build a state space. The three strategies are shown to have the potential to substantially improve the performance of a conventional search. The paper discusses the optimal choices for reducing either run time or memory requirements by four to six times. The strategies can readily be combined with each other and with alternative state space reduction techniques such as supertrace or state space caching methods.
Verifying Systems with Replicated Components in Murφ
1997
Cited by 42 (3 self)
Abstract
An extension to the Murphi verifier is presented to verify systems with replicated identical components. Although most systems are finite-state in nature, many of them are also designed to be scalable, so that a description gives a family of systems, each member of which has a different number of replicated components. It is therefore desirable to be able to verify the entire family of systems, independent of the exact number of replicated components. The verification is performed by explicit state enumeration in an abstract state space where states do not record the exact numbers of components. We provide an extension to the existing Murphi language, by which a designer can easily specify a system in its concrete form. Through a new datatype, called RepetitiveID, a designer can suggest the use of this abstraction to verify a family of systems. First of all, Murphi automatically checks the soundness of this abstraction. Then it automatically translates the system description to an abstract ...
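The flavor of an abstraction whose states "do not record the exact numbers of components" can be sketched with a toy counter abstraction (hypothetical names and encoding; this is not the RepetitiveID translation itself): per local control state, record only whether one component or many components occupy it.

```python
def abstract_state(locals_):
    """Map a concrete state (a tuple of per-component local states) to
    an abstract state: a frozenset of (local_state, count_class) pairs,
    where count_class is 1 or "many".

    The local-state names used by callers ("idle", "critical") are
    made up for illustration.
    """
    counts = {}
    for ls in locals_:
        counts[ls] = counts.get(ls, 0) + 1
    return frozenset((ls, n if n == 1 else "many") for ls, n in counts.items())

# Systems with 3 and 30 components in the same configuration pattern
# collapse to the same abstract state:
small = ("idle", "idle", "critical")
large = ("idle",) * 29 + ("critical",)
assert abstract_state(small) == abstract_state(large)
```

Because the abstract state space no longer depends on the number of replicas, one exploration of it can cover the whole family of systems, provided the abstraction is shown sound, which is the check the entry says Murphi automates.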
A Toolbox for the Verification of LOTOS Programs
1992
Cited by 33 (4 self)
Abstract
This paper presents the tools Aldébaran, Caesar, Caesar.adt and Cléopâtre, which constitute a toolbox for compiling and verifying Lotos programs. The principles of these tools are described, as well as their performances and limitations. Finally, the formal verification of the rel/REL atomic multicast protocol is given as an example to illustrate the practical use of the toolbox.
Keywords: reliability, formal methods, Lotos, verification, validation, model-based methods, model-checking, transition systems, bisimulations, temporal logics, diagnostics
Introduction: There is an increasing need for reliable software, which is especially critical in some areas such as communication protocols, distributed systems, real-time control systems, and hardware synthesis systems. It is now agreed that reliability can only be achieved through the use of rigorous design techniques. This has motivated a lot of research on specification formalisms and associated verification methods and tools. Ver...
Symbolic Bisimulation Minimisation
In Computer Aided Verification
Cited by 32 (6 self)
Abstract
We adapt the Coarsest Partition Refinement algorithm so that it computes using the specific data structures of Binary Decision Diagrams. This makes it possible to generate symbolically the set of equivalence classes of a finite automaton with respect to bisimulation, without constructing the automaton itself. These equivalence classes are, of course, the (new) states of the canonical minimal automaton bisimilar to the original one. The method works from labeled synchronised vectors of automata as the distributed system description. We report on the performance of Hoggar, a tool implementing our method.
Introduction: Bisimulation is a central notion in the domain of verification of concurrent systems [18]. It was introduced as the major behavioural equivalence in the setting of process algebras [18, 2], but works at the interpretation level of labeled transition systems. Algorithmic properties of bisimulation in the finite-state case have been widely studied [16, 20, 11], leading to a lar...
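The fixed-point computation underlying coarsest partition refinement can be sketched in explicit-state form (the paper performs it symbolically over BDDs; the encoding below is a hypothetical minimal one): repeatedly split classes by the labels and successor classes their states can reach, until the partition is stable.

```python
def bisimulation_classes(states, transitions):
    """Naive coarsest partition refinement for strong bisimulation.

    `states` is an iterable of states; `transitions` maps a state to a
    set of (label, successor) pairs. Two states end up in the same
    class iff they are strongly bisimilar.
    """
    # Start from the trivial partition: all states in one class.
    partition = {s: 0 for s in states}
    while True:
        # A state's signature: its set of (label, successor-class) pairs.
        sigs = {
            s: frozenset((a, partition[t]) for (a, t) in transitions.get(s, ()))
            for s in partition
        }
        # Renumber classes by (old class, signature): equal pairs stay
        # together, differing signatures split a class.
        keys, new_partition = {}, {}
        for s in partition:
            k = (partition[s], sigs[s])
            new_partition[s] = keys.setdefault(k, len(keys))
        if new_partition == partition:   # stable: no class was split
            return new_partition
        partition = new_partition

# Example: two copies of an a/b loop are bisimilar; a process whose
# `a`-step leads to deadlock is not.
lts = {
    "p0": {("a", "p1")}, "p1": {("b", "p0")},
    "q0": {("a", "q1")}, "q1": {("b", "q0")},
    "r0": {("a", "r1")}, "r1": set(),
}
classes = bisimulation_classes(list(lts), lts)
assert classes["p0"] == classes["q0"]
assert classes["p0"] != classes["r0"]
```

The resulting class identifiers are exactly the states of the minimal bisimilar automaton; the symbolic version of the entry represents the classes as BDDs instead of enumerating states.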
Verifying Bisimulations "On the Fly"
1990
Cited by 24 (4 self)
Abstract
This paper describes a decision procedure for bisimulation-based equivalence relations between labeled transition systems. The algorithm usually performed in order to verify bisimulation consists in refining some initial equivalence relation until it becomes compatible with the transition relation under consideration. However, this method requires storing the transition relation explicitly, which limits it to medium-sized labeled transition systems. The algorithm proposed here does not need to construct the two transition systems beforehand: the verification can be performed during their generation. Thus, the amount of memory required can be significantly reduced, and verification of larger systems becomes possible. This algorithm has been implemented in the tool Aldébaran and has been used in the framework of verification of Lotos specifications.
Introduction: One of the successful approaches used for the verification of systems of communicating processes is provided by beha...
Efficient Verification of Symmetric Concurrent Systems
1993
Cited by 21 (6 self)
Abstract
Previously, we proposed a reduction technique [ID93] based on symmetries to alleviate the state explosion problem in automatic verification of concurrent systems. This paper describes the results of testing the technique on a wide range of algorithms and protocols, including realistic multiprocessor synchronization algorithms and cache coherence protocols. Memory requirements were reduced by amounts ranging from 83% to over 99%, and time requirements were often reduced as well. We also consider the effectiveness of the technique on different types of symmetries, such as symmetries in identical system components and symmetries in data values.
Validating SDL Specifications: an Experiment
In International Workshop on Protocol Specification, Testing and Verification IX (Twente, The Netherlands), 1989
Cited by 20 (6 self)
Abstract
This paper describes a method for validating specifications written in the CCITT language SDL. The method has been implemented as part of an experimental validation system. With the ...
Fast and Accurate Bitstate Verification for SPIN
In Proceedings of the 11th International SPIN Workshop on Model Checking of Software (SPIN), 2004
Cited by 17 (2 self)
Abstract
Bitstate hashing in SPIN has proved invaluable in probabilistically detecting errors in large models, but in many cases the number of omitted states is much higher than it would be if SPIN allowed more than two hash functions to be used. For example, adding just one more hash function can reduce the probability of omitting any states from 99% to under 3%. Because hash computation accounts for an overwhelming portion of the total execution cost of bitstate verification with SPIN, adding additional independent hash functions would slow down the process tremendously. We present efficient ways of computing multiple hash values that, despite sacrificing independence, give virtually the same accuracy and even yield a speed improvement in the two-hash-function case when compared to the current SPIN implementation. Another key to accurate bitstate hashing is utilizing as much memory as is available. The current SPIN implementation is limited to only 512MB and allows only power-of-two granularity (256MB, 128MB, etc.). However, using 768MB instead of 512MB could reduce the probability of a single omission from 20% to less than one chance in 10,000, which demonstrates the magnitude of both the maximum and the granularity limitation. We have modified SPIN to utilize any addressable amount of memory and use any number of efficiently-computed hash functions, and we present empirical results from extensive experimentation comparing various configurations of our modified version to the original SPIN.
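A sketch of the bitstate idea (illustrative, not SPIN's code): a visited-state store keeps one bit per hash index, and k indices per state can be derived cheaply from two base hash values via double hashing, in the spirit of trading strict independence for speed as the entry describes. The class and method names are hypothetical.

```python
import hashlib

class BitstateTable:
    """Bitstate (Bloom-filter style) visited-state store.

    k bit positions are derived from two base hashes via double
    hashing, g_i(x) = (h1(x) + i * h2(x)) mod m, so adding hash
    functions costs almost nothing beyond the two base computations.
    """

    def __init__(self, bits, k):
        self.m = bits
        self.k = k
        self.table = bytearray(bits // 8 + 1)

    def _indices(self, state: bytes):
        digest = hashlib.sha256(state).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1  # odd step avoids degenerate cycles
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def visit(self, state: bytes) -> bool:
        """Mark `state` visited; return True if it may have been seen
        before (all k bits already set), False if it is definitely new."""
        seen = True
        for idx in self._indices(state):
            byte, bit = divmod(idx, 8)
            if not (self.table[byte] >> bit) & 1:
                seen = False
                self.table[byte] |= 1 << bit
        return seen

# First visit of a state returns False (new); a revisit returns True.
table = BitstateTable(bits=1 << 20, k=3)
assert table.visit(b"state-0") is False
assert table.visit(b"state-0") is True
```

States are never stored, only bits, which is why the method is probabilistic: a new state whose k bits were all set by other states is wrongly treated as visited, and raising k (or the table size) drives that omission probability down, as the quoted figures indicate.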