Results 1 – 10 of 14
A New Scheme for Memory-Efficient Probabilistic Verification
 in IFIP TC6/WG6.1 Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols, and Protocol Specification, Testing, and Verification
, 1996
Abstract

Cited by 22 (7 self)
In verification by explicit state enumeration, the full state descriptor of every reachable state of the protocol being verified is stored in a state table. Two probabilistic methods, bit-state hashing and hash compaction, have been proposed in the literature that store far fewer bits per state, but at the price of some probability that not all reachable states are explored during the search, so that the verifier may produce false positives. Holzmann introduced bit-state hashing and derived an approximation formula for the average probability that a particular state is not omitted during the search, but this formula does not give a bound on the probability of false positives. In contrast, the analysis for hash compaction, introduced by Wolper and Leroy and improved upon by Stern and Dill, yielded a bound on the probability that not even one state is omitted during the search, thus providing a bound on the probability of false positives. In this paper, we propose a...
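Holzmann's bit-state hashing scheme can be sketched in a few lines: one bit per hash slot, so a hash collision silustrates why the search may silently skip part of the state space. The class and method names below are illustrative assumptions, not taken from SPIN or any verifier:

```python
import hashlib

class BitStateTable:
    """Bit-state hashing sketch (illustrative): one bit per hash slot, no
    state vectors stored. A hash collision makes a new state look already
    visited, so parts of the state space may be silently skipped -- the
    source of possible false positives discussed in the abstract."""

    def __init__(self, bits=1 << 20):
        self.size = bits
        self.table = bytearray(bits // 8)

    def _index(self, state):
        # Hash the full state descriptor down to a single bit position.
        digest = hashlib.blake2b(state, digest_size=8).digest()
        return int.from_bytes(digest, "big") % self.size

    def seen_or_mark(self, state):
        """Return True if the bit was already set; otherwise set it."""
        i = self._index(state)
        byte, mask = i // 8, 1 << (i % 8)
        if self.table[byte] & mask:
            return True
        self.table[byte] |= mask
        return False
```

Hash compaction differs in storing a multi-bit compressed value per state instead of a single shared bit, which is what makes its omission probability analyzable.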
Model checking via delayed duplicate detection on the GPU
, 2008
Abstract

Cited by 9 (5 self)
In this paper we improve large-scale disk-based model checking by shifting complex numerical operations to the graphics card, exploiting the fact that graphics processing units (GPUs) have become very powerful over the last decade. For disk-based graph search, the delayed elimination of duplicates is the performance bottleneck, as it amounts to sorting large sets of state vectors. We perform parallel processing on the GPU to improve the sorting speed significantly. Since existing GPU sorting solutions such as Bitonic Sort and Quicksort do not achieve any speedup on state vectors, we propose a refined GPU-based Bucket Sort algorithm. Alternatively, we study sorting a compressed state vector and obtain speedups for delayed duplicate detection of more than one order of magnitude with a single GPU on an ordinary graphics card.
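The sorting-based duplicate elimination itself is independent of the GPU; a minimal single-threaded sketch of delayed duplicate detection (the function name and the use of integers as stand-ins for state vectors are illustrative assumptions) looks like:

```python
def delayed_duplicate_detection(frontier, visited_sorted):
    """Delayed duplicate detection sketch: instead of probing a hash table
    per generated state, collect a whole layer, sort it (the step the paper
    offloads to the GPU), then remove duplicates with one linear merge
    against the sorted sequence of already-visited states."""
    candidates = sorted(set(frontier))   # sorting replaces per-state hashing
    new_states, i = [], 0
    for s in candidates:
        # Advance the merge pointer in the sorted visited sequence.
        while i < len(visited_sorted) and visited_sorted[i] < s:
            i += 1
        if i == len(visited_sorted) or visited_sorted[i] != s:
            new_states.append(s)
    return new_states
```

In a disk-based search, `visited_sorted` would be streamed from external files rather than held in memory, which is why the sort dominates the running time.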
Path Finding with the Sweep-Line Method using External Storage
 In ICFEM
, 2003
Abstract

Cited by 8 (2 self)
The sweep-line method deletes states on-the-fly during state space exploration to reclaim memory and thereby reduce peak memory usage. This deletion of states prohibits the immediate generation of, e.g., an error-trace when the violation of a safety property is detected. We address this problem by combining the sweep-line method with storing a spanning tree of the explored state space in external storage on a magnetic disk. We show how this allows us to easily obtain paths in the state space, such as error-traces. A key property of the proposed technique is that it avoids searching in external storage during the state space exploration and gives the same reduction in peak memory usage as the stand-alone sweep-line method. The subsequent generation of the path then requires one disk seek for each state on the path. We evaluate the proposed technique on a number of example systems by means of an implementation, and compare its performance to a related technique.
Algorithmic Techniques in Verification by Explicit State Enumeration
, 1997
Abstract

Cited by 7 (4 self)
Modern digital systems often employ sophisticated protocols. Unfortunately, designing correct protocols is a subtle art. Even when using great care, a designer typically cannot foresee all possible interactions among the components of the system; thus, bugs like subtle race conditions or deadlocks are easily overlooked. One way a computer can support the designer is by simulating random executions of the system. With this simulation approach, however, there is a high probability of missing executions that contain errors, especially in complex systems. In contrast, an automatic verifier tries to examine all states reachable from a given set of start states. The biggest obstacle in this exhaustive approach is that the number of reachable states is often very large. This thesis describes three techniques to increase the size of the reachable state spaces that can be handled by automatic verifiers. The techniques work in verifiers that are based on explicitly storing each reachable ...
Theory and Practice of Time-Space Trade-Offs in Memory Limited Search
 In Proceedings of KI-01, Lecture Notes in Computer Science
, 2001
Abstract

Cited by 7 (4 self)
Having to cope with memory limitations is a ubiquitous issue in heuristic search. We present theoretical and practical results on new variants for exploring state space with respect to memory limitations. We establish ##### ## minimum-space algorithms that omit both the open and the closed list to determine the shortest path between any two nodes, and study the gap between full memorization in a hash table and the information-theoretic lower bound. The proposed structure of suffix lists elaborates on a concise binary representation of states by applying bit-state hashing techniques. Significantly more states can be stored while searching, and inserting # items into suffix lists is still possible in ### ### ## time. Bit-state hashing leads to the new paradigm of partial iterative-deepening heuristic search, in which full exploration is sacrificed for better duplicate detection at large search depths. We give first promising results in the application area of communication protocols.
Finding optimal solutions to Atomix
 In KI 2001: Advances in Artificial Intelligence, volume 2174 of LNCS/LNAI
, 2001
Abstract

Cited by 6 (5 self)
We present solutions to benchmark instances of the solitaire computer game Atomix, found with different heuristic search methods. The problem is PSPACE-complete. An implementation of the heuristic algorithm A* is presented that needs no priority queue and therefore has very low memory overhead. The limited-memory algorithm IDA* is handicapped by the fact that, due to move transpositions, duplicates appear very frequently in the problem space; several schemes for using memory to mitigate this weakness are explored, among them “partial” schemes which trade memory savings for a small probability of not finding an optimal solution. Even though the underlying search graph is directed, backward search is shown to be viable, since its branching factor can be proven to be the same as for forward search.
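One of the memory-based duplicate-mitigation schemes can be sketched as IDA* with a per-iteration transposition table: a state reached again no cheaper than before is pruned. This generic version is an illustration under assumed names (the “partial” variants described above would additionally cap or probabilistically fill the table):

```python
def ida_star(start, h, successors, is_goal, max_bound=64):
    """IDA* with a transposition table `seen` mapping each state to the
    smallest depth g at which it was reached in the current iteration.
    Re-reaching a state at an equal or larger g is pruned, mitigating
    the duplicate blow-up caused by move transpositions."""
    bound = h(start)
    while bound <= max_bound:
        seen = {}
        next_bound = [None]

        def dfs(state, g):
            f = g + h(state)
            if f > bound:
                if next_bound[0] is None or f < next_bound[0]:
                    next_bound[0] = f
                return None
            if is_goal(state):
                return [state]
            if g >= seen.get(state, float("inf")):
                return None  # duplicate reached no cheaper than before
            seen[state] = g
            for s in successors(state):
                path = dfs(s, g + 1)
                if path is not None:
                    return [state] + path
            return None

        path = dfs(start, 0)
        if path is not None:
            return path
        if next_bound[0] is None:
            return None  # search space exhausted without reaching the goal
        bound = next_bound[0]
    return None
```

With a full table the duplicate check is exact per iteration; capping the table trades that exactness for bounded memory, at a small risk of extra re-expansions or, in the partial schemes, of missing an optimal solution.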
Randomization Helps in LTL Model Checking
, 2001
Abstract

Cited by 5 (0 self)
We present and analyze a new probabilistic method for automata-based LTL model checking of non-probabilistic systems, with the intention of reducing memory requirements. The main idea of our approach is to use randomness to decide which of the needed information (visited states) should be stored during a computation and which can be omitted. We propose two strategies for probabilistically storing states. The algorithm never errs, i.e., it always delivers correct results; on the other hand, the computation time can increase. The method has been embedded into the SPIN model checker and a series of experiments has been performed. The results confirm that randomization can help to increase the applicability of model checkers in practice.
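The storing idea can be sketched as a depth-first search that remembers each visited state only with some probability; omitted states may be re-expanded (more time), but every reachable state is still visited, so results stay correct. This is a simplified illustration, not the paper's SPIN implementation, and all names are assumptions:

```python
import random

def dfs_with_random_storing(initial, successors, p_store=0.5, seed=0,
                            max_depth=50):
    """DFS that stores each visited state only with probability p_store.
    Unstored states can be re-expanded on later visits (costing time,
    never correctness); max_depth bounds the recursion so the search
    terminates even on graphs with cycles."""
    rng = random.Random(seed)
    stored = set()
    visited = set()  # for demonstration only: tracks true coverage

    def dfs(state, depth):
        if depth > max_depth or state in stored:
            return
        visited.add(state)
        if rng.random() < p_store:
            stored.add(state)   # the randomized storing decision
        for s in successors(state):
            dfs(s, depth + 1)

    dfs(initial, 0)
    return visited, stored
```

Lowering `p_store` shrinks the state table at the cost of more re-expansions, which is exactly the memory-for-time trade the abstract describes.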
Reliable Probabilistic Verification Using Hash Compaction
Abstract

Cited by 4 (1 self)
This paper describes and analyzes a probabilistic technique to reduce the memory requirement of the table of reached states maintained in verification by explicit state enumeration. The memory savings of the new scheme come at the price of a certain probability that the search becomes incomplete. However, this probability can be made negligibly small by using typically 40 bits of memory per state. From this point of view, the new scheme improves substantially on Holzmann's bit-state hashing, which has a high probability of producing an incomplete search even when using close to 1000 bits per state. The proposed scheme has been implemented in the contexts of the SPIN and Murφ verification systems. Experiments on sample protocols nicely match the predictions of the analysis. For large protocols, memory savings of two orders of magnitude are obtained. We also show how to efficiently combine the new scheme with state space caching, and we analyze bit-state hashing in order to compare it with...
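A minimal sketch of hash compaction, assuming a Python set stands in for the verifier's compacted state table (a real implementation would pack the 40-bit values densely, e.g. in an open-addressed array); the class and method names are illustrative:

```python
import hashlib

class CompactedStateTable:
    """Hash compaction sketch: store a short compressed hash of each state
    instead of the full descriptor. With around 40 bits per state the
    probability of omitting a state is negligible, but two distinct
    states that compress to the same value are (wrongly) treated as
    duplicates -- the source of possible incompleteness."""

    def __init__(self, bits=40):
        self.mask = (1 << bits) - 1
        self.table = set()

    def _compress(self, state):
        d = hashlib.blake2b(state, digest_size=8).digest()
        return int.from_bytes(d, "big") & self.mask

    def add(self, state):
        """Return False if the compressed value was already present."""
        c = self._compress(state)
        if c in self.table:
            return False
        self.table.add(c)
        return True
```

Unlike bit-state hashing, each stored value identifies its state almost uniquely, which is what makes the omission probability analyzable and small.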
Hierarchical Adaptive State Space Caching based on Level Sampling
 In Proceedings of the 15th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2009)
, 2009
Abstract

Cited by 2 (0 self)
In the past, several attempts have been made to deal with the state space explosion problem by equipping a depth-first search (DFS) algorithm with a state cache, or by avoiding collision detection, thereby keeping the state hash table at a fixed size. Most of these attempts are tailored specifically for DFS and are often not guaranteed to terminate and/or to exhaustively visit all the states. In this paper, we propose a general framework of hierarchical caches which can also be used by breadth-first search (BFS). Our method, based on an adequate sampling of BFS levels during the traversal, guarantees that the BFS terminates and traverses all transitions of the state space. We define several (static or adaptive) configurations of hierarchical caches and study their effectiveness experimentally on benchmark examples of state spaces and on several communication protocols, using a generic implementation of the cache framework that we developed within the CADP toolbox.
Incremental hashing in state space search
 In Workshop “New Results in Planning, Scheduling and Design”
, 2004
Abstract

Cited by 1 (1 self)
State memorization is essential for state-space search to avoid redundant expansions, and hashing serves as a method to address, store, and retrieve states efficiently. In this paper we introduce incremental state hashing to compute hash values in constant time. The method is most effective in guided depth-first search traversals of state space graphs, as in IDA*, where the computation of the set of successors and their heuristic estimates is extremely fast: heuristic values are often computed incrementally or retrieved from precomputed pattern database tables, and backtracking keeps the changes in the state representation vector small during the exploration. The approach quickly decides whether a given state is absent from a hash table, and it accelerates successful search. It can further accelerate perfect hashing for pattern storage and lookup. If, for better coverage of the state space, partial search methods without collision resolution are used, we establish another benefit for incremental state hashing. We exemplify our considerations on the (n² − 1)-puzzle and in action planning, and conduct experiments in Atomix.
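Incremental hashing of this kind is commonly realized with Zobrist keys: XOR a random key per (cell, tile) pair, so moving one tile toggles only two terms and the successor's hash costs O(1) instead of a full rescan. The sketch below for a sliding-tile board uses illustrative function names, not the paper's implementation:

```python
import random

def make_zobrist(n_cells, n_tiles, seed=0):
    """One random 64-bit key for every (cell, tile) pair."""
    rng = random.Random(seed)
    return [[rng.getrandbits(64) for _ in range(n_tiles)]
            for _ in range(n_cells)]

def full_hash(board, keys):
    """O(n) reference computation: XOR the key of every occupied cell."""
    h = 0
    for cell, tile in enumerate(board):
        h ^= keys[cell][tile]
    return h

def incremental_update(h, keys, cell_from, cell_to, tile):
    """O(1): moving `tile` between two cells toggles exactly two keys,
    so no full rescan of the state vector is needed."""
    return h ^ keys[cell_from][tile] ^ keys[cell_to][tile]
```

In the (n² − 1)-puzzle a move swaps a tile with the blank, so two such updates (one for the tile, one for the blank) yield the successor's hash.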