Results 11–20 of 53
Proving Ptolemy right: The environment abstraction framework for model checking concurrent systems
In TACAS, 2008
Cited by 20 (2 self)
Abstract. The parameterized verification of concurrent algorithms and protocols has been addressed by a variety of recent methods. Experience shows that there is a tradeoff between techniques which are widely applicable but depend on nontrivial human guidance, and fully automated approaches which are tailored for narrow classes of applications. In this spectrum, we propose a new framework based on environment abstraction which exhibits a large degree of automation and can be easily adjusted to different fields of application. Our approach is based on two insights: First, we argue that natural abstractions for concurrent software are derived from the “Ptolemaic” perspective of a human engineer who focuses on a single reference process. For this class of abstractions, we demonstrate soundness of abstraction under very general assumptions. Second, most protocols in a given class of protocols – for instance, cache coherence protocols and mutual exclusion protocols – can be modeled by small sets of high-level compound statements. These two insights allow us to efficiently build precise abstract models for given protocols, which can then be model checked. We demonstrate the power of our method by applying it to various well-known classes of protocols.
Inferring disjunctive postconditions
In ASIAN CS Conference, 2006
Cited by 18 (6 self)
Abstract. Polyhedral analysis [9] is an abstract interpretation used for automatic discovery of invariant linear inequalities among the numerical variables of a program. Convexity of this abstract domain allows efficient analysis but also loses precision via the convex-hull and widening operators. To selectively recover the loss of precision, sets of polyhedra (disjunctive elements) may be used to capture more precise invariants. However, a balance must be struck between precision and cost. We introduce the notion of affinity to characterize how closely related a pair of polyhedra is. Finding related elements in the polyhedron (base) domain allows the formulation of precise hull and widening operators lifted to the disjunctive (powerset extension of the) polyhedron domain. We have implemented a modular static analyzer based on disjunctive polyhedral analysis in which the relational domain and the proposed operators can progressively enhance precision at a reasonable cost.
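The precision loss that disjunctive elements recover can be seen even in the simpler interval domain: a convex join of two disjoint ranges admits spurious values in between, while a set of elements keeps the cases separate. The sketch below is purely illustrative (the names are not the analyzer's actual API), using intervals as a stand-in for polyhedra:

```python
def hull(a, b):
    # Convex join of two integer intervals, each given as a (lo, hi) pair.
    return (min(a[0], b[0]), max(a[1], b[1]))

branch_then = (0, 0)     # invariant x == 0 on one program path
branch_else = (10, 10)   # invariant x == 10 on the other path

joined = hull(branch_then, branch_else)
print(joined)            # (0, 10): the values 1..9 are spurious

# A disjunctive (powerset) element keeps both cases, losing nothing:
disjunctive = [branch_then, branch_else]
```

An affinity measure as described above would decide when two such elements are close enough to merge via `hull` and when they should stay as separate disjuncts.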
Handling Parameterized Systems with Non-Atomic Global Conditions
Cited by 13 (8 self)
We consider verification of safety properties for parameterized systems with linear topologies. A process in the system is an extended automaton, where the transitions are guarded by both local and global conditions. The global conditions are non-atomic, i.e., a process allows arbitrary interleavings with other transitions while checking the states of all (or some of) the other processes. We translate the problem into model checking of infinite transition systems where each configuration is a labeled finite graph. We derive an over-approximation of the induced transition system, which leads to a symbolic scheme for analyzing safety properties. We have implemented a prototype and run it on several nontrivial case studies, namely non-atomic versions of Burns' protocol, Dijkstra's protocol, the Bakery algorithm, Lamport's distributed mutual exclusion protocol, and a two-phase commit protocol used for handling transactions in distributed systems. As far as we know, these protocols have not previously been verified in a fully automated framework.
Verifying complex properties using symbolic shape analysis
In Workshop on heap abstraction and verification, 2007
Cited by 11 (6 self)
One of the main challenges in the verification of software systems is the analysis of unbounded data structures with dynamic memory allocation, such as linked data structures and arrays. We describe Bohne, a new analysis for verifying data structures. Bohne verifies data structure operations and shows that 1) the operations preserve data structure invariants and 2) the operations satisfy their specifications expressed in terms of changes to the set of objects stored in the data structure. During the analysis, Bohne infers loop invariants in the form of disjunctions of universally quantified Boolean combinations of formulas, represented as sets of binary decision diagrams. To synthesize loop invariants of this form, Bohne uses a combination of decision procedures for Monadic Second-Order Logic over trees, SMT-LIB decision procedures (currently CVC Lite), and an automated reasoner within the Isabelle interactive theorem prover. This architecture shows that synthesized loop invariants can serve as a useful communication mechanism between different decision procedures. In addition, Bohne uses field constraint analysis, a combination mechanism that enables the use of uninterpreted function symbols …
On Linear Arithmetic with Stars
Cited by 10 (7 self)
Abstract. We consider an extension of integer linear arithmetic with a star operator that takes the closure under vector addition of the set of solutions of a linear arithmetic subformula. We show that the satisfiability problem for this language is in NP (and therefore NP-complete). Our proof uses a generalization of a recent result on sparse solutions of integer linear programming problems. We present two consequences of our result. The first is an optimal decision procedure for a logic of sets, multisets, and cardinalities that has applications in verification, interactive theorem proving, and description logics. The second is NP-completeness of the reachability problem for a class of “homogeneous” transition systems whose transitions are defined using integer linear arithmetic formulas.
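To make the star operator concrete: the star of a formula's solution set contains every finite sum of solution vectors (including the empty sum, the zero vector). A minimal brute-force sketch, assuming a toy base formula x + y = 3 over nonnegative integers (the decision procedure in the paper is of course symbolic, not enumerative):

```python
from itertools import combinations_with_replacement

# Toy base set: integer solutions of the LIA formula x + y == 3, x, y >= 0.
base = [(x, 3 - x) for x in range(4)]

def in_star(target, base, depth=6):
    # Brute-force membership in the star of `base`: the closure of the
    # solution set under vector addition. The empty sum yields (0, 0).
    if target == (0, 0):
        return True
    for k in range(1, depth + 1):
        for combo in combinations_with_replacement(base, k):
            if tuple(map(sum, zip(*combo))) == target:
                return True
    return False

print(in_star((6, 3), base))  # True: (3, 0) + (3, 0) + (0, 3)
print(in_star((2, 0), base))  # False: any sum of k base vectors satisfies x + y == 3k
```

The `depth` bound is a hypothetical cutoff for the sketch; the NP membership result above rests on the fact that such sums always have small (sparse) witnesses.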
Abstraction Refinement for Quantified Array Assertions
In SAS, Springer-Verlag, 2009
Cited by 9 (0 self)
We present an abstraction refinement technique for the verification of universally quantified array assertions such as “all elements in the array are sorted”. Our technique can be seamlessly combined with existing software model checking algorithms. We implemented our technique in the ACSAR software model checker and successfully verified quantified array assertions for both textbook examples and real-life examples taken from the Linux operating system kernel.
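The kind of universally quantified assertion this technique targets can be illustrated with a textbook sorting routine (illustrative only; ACSAR analyzes C programs, not Python):

```python
def insertion_sort(a):
    # Standard insertion sort; each iteration maintains the quantified
    # loop invariant "a[0..i-1] is sorted".
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    # Universally quantified postcondition: forall k. a[k] <= a[k+1]
    assert all(a[k] <= a[k + 1] for k in range(len(a) - 1))
    return a

print(insertion_sort([3, 1, 2]))  # [1, 2, 3]
```

Proving the final `assert` for all inputs, rather than checking it at run time, requires inferring the quantified loop invariant automatically, which is exactly where abstraction refinement comes in.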
Verification by network decomposition
In 15th CONCUR, LNCS 3170, 2004
Cited by 9 (1 self)
We describe a new method to verify networks of homogeneous processes which communicate by token passing. Given an arbitrary network graph and an indexed LTL\X property, we show how to decompose the network graph into multiple constant-size networks, thereby reducing one model checking call on a large network to several calls on small networks. We thus obtain cutoffs for arbitrary classes of networks, adding to previous work by Emerson and Namjoshi on the ring topology. Our results on LTL\X are complemented by a negative result which precludes the existence of reductions for CTL\X on general networks.
An overview of the Jahob analysis system: Project goals and current status
In NSF Next Generation Software Workshop, 2006
Cited by 9 (1 self)
We present an overview of the Jahob system for modular analysis of data structure properties. Jahob uses a subset of Java as the implementation language and annotations with formulas in a subset of Isabelle as the specification language. It uses monadic second-order logic over trees to reason about reachability in linked data structures, the Isabelle theorem prover and Nelson-Oppen style theorem provers to reason about high-level properties and arrays, and a new technique to combine reasoning about constraints on uninterpreted function symbols with other decision procedures. It also incorporates new decision procedures for reasoning about sets with cardinality constraints. The system can infer loop invariants using new symbolic shape analysis. Initial results in the use of our system are promising; we are continuing to develop and evaluate it.
Automatic Verification of Integer Array Programs
2009
Cited by 8 (1 self)
Abstract. We provide a verification technique for a class of programs working on integer arrays of finite, but not a priori bounded, length. We use the logic of integer arrays SIL [13] to specify pre- and postconditions of programs and their parts. Effects of non-looping parts of code are computed syntactically at the level of SIL. Loop preconditions derived during the computation in SIL are converted into counter automata (CA). Loops are automatically translated, purely at the syntactic level, into transducers. Precondition CA and transducers are composed, and the composition is over-approximated by flat automata with difference bound constraints, which are then converted back into SIL formulae, thus inferring postconditions of the loops. Finally, validity of postconditions specified by the user in SIL may be checked, as entailment is decidable for SIL.
Verifying Reference Counting Implementations
Cited by 8 (0 self)
Reference counting is a widely used resource management idiom which maintains a count of references to each resource by incrementing the count upon an acquisition and decrementing it upon a release; resources whose counts fall to zero may be recycled. We present an algorithm to verify the correctness of reference counting with minimal user interaction. Our algorithm performs compositional verification through the combination of symbolic temporal case splitting and predicate-abstraction-based reachability. Temporal case splitting reduces the verification of an unbounded number of processes and resources to verification of a finite number through the use of Skolem variables. The finite-state instances are discharged by symbolic model checking, with an auxiliary invariant correlating reference counts with the number of held references. We have implemented our algorithm in Referee, a reference counting analysis tool for C programs, and applied Referee to two real programs: the memory allocator of an OS kernel and the file interface of the Yaffs file system. In both cases our algorithm proves correct the use of reference counts in less than one minute.
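The idiom being verified can be sketched in a few lines (a minimal illustration, not Referee's C input or its analysis): the count tracks outstanding acquisitions, and the resource may be recycled exactly when the count falls to zero.

```python
class RefCounted:
    # Minimal reference-counting sketch. The runtime asserts mirror the
    # properties a verifier would prove statically: no use after recycling,
    # and no release without a matching acquire.
    def __init__(self, resource):
        self.resource = resource
        self.count = 0          # number of currently held references
        self.recycled = False

    def acquire(self):
        assert not self.recycled, "use after free"
        self.count += 1
        return self.resource

    def release(self):
        assert self.count > 0, "release without matching acquire"
        self.count -= 1
        if self.count == 0:
            self.recycled = True  # resource may now be recycled

r = RefCounted("buffer")
r.acquire()
r.acquire()
r.release()
print(r.recycled)  # False: one reference is still held
r.release()
print(r.recycled)  # True: count fell to zero
```

The auxiliary invariant mentioned in the abstract corresponds to `count` always equaling the number of clients that have acquired but not yet released the resource.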