Results 1–10 of 28
Efficiently Solving Quantified Bit-Vector Formulas
Cited by 15 (4 self)
Abstract—In recent years, bit-precise reasoning has gained importance in hardware and software verification. Of renewed interest is the use of symbolic reasoning for synthesising loop invariants, ranking functions, or whole program fragments and hardware circuits. Solvers for the quantifier-free fragment of bit-vector logic exist and often rely on SAT solvers for efficiency. However, many techniques require quantifiers in bit-vector formulas to avoid an exponential blow-up during construction. Solvers for quantified formulas usually flatten the input to obtain a quantified Boolean formula, losing much of the word-level information in the formula. We present a new approach based on a set of effective word-level simplifications that are traditionally employed in automated theorem proving, heuristic quantifier instantiation methods used in SMT solvers, and model finding techniques based on skeletons/templates. Experimental results on two different types of benchmarks indicate that our method outperforms the traditional flattening approach by multiple orders of magnitude of runtime.
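As an illustrative aside (not the paper's algorithm): bit-vectors range over a finite domain, so a quantified bit-vector formula at a small width can in principle be decided by brute-force enumeration — exactly the blow-up that word-level techniques like the ones described above are designed to avoid. A minimal Python sketch, with the width and example formulas chosen arbitrarily:

```python
# Toy decision procedure for forall/exists bit-vector formulas by
# enumeration over a small, hypothetical width (illustration only).
WIDTH = 4
DOMAIN = range(2 ** WIDTH)
MASK = 2 ** WIDTH - 1  # arithmetic is modulo 2^WIDTH

def holds_forall_exists(phi):
    """Check  forall x . exists y . phi(x, y)  over WIDTH-bit vectors."""
    return all(any(phi(x, y) for y in DOMAIN) for x in DOMAIN)

# Every bit-vector has an additive inverse modulo 2^WIDTH:
print(holds_forall_exists(lambda x, y: (x + y) & MASK == 0))  # True
# But not every bit-vector has a multiplicative inverse (e.g. x = 0):
print(holds_forall_exists(lambda x, y: (x * y) & MASK == 1))  # False
```

The cost is 2^WIDTH per quantifier alternation, which is why flattening-free, word-level reasoning matters at realistic widths.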
Abstraction Refinement for Quantified Array Assertions
 In: SAS, Springer-Verlag
, 2009
Cited by 9 (0 self)
We present an abstraction refinement technique for the verification of universally quantified array assertions such as “all elements in the array are sorted”. Our technique can be seamlessly combined with existing software model checking algorithms. We implemented our technique in the ACSAR software model checker and successfully verified quantified array assertions for both textbook examples and real-life examples taken from the Linux operating system kernel.
SMT-Based Array Invariant Generation
, 2013
Cited by 4 (1 self)
This paper presents a constraint-based method for generating universally quantified loop invariants over array and scalar variables. Constraints are solved by means of an SMT solver, thus leveraging recent advances in SMT solving for the theory of nonlinear arithmetic. The method has been implemented in a prototype program analyzer, and a wide sample of examples illustrating its power is shown.
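The core idea of constraint-based invariant generation can be illustrated without an SMT solver: fix a template with unknown coefficients, encode initiation (the invariant holds on entry) and consecution (it is preserved by one iteration) as constraints, and search for coefficients satisfying both. A hedged sketch, with a made-up loop and a brute-force search standing in for the paper's solver:

```python
# Toy constraint-based invariant generation (illustration, not the paper's
# method). Loop:  x, y = 0, 0;  while x < 5: x += 1; y += 2
# Template invariant:  y == a*x + b  with unknown integer coefficients a, b.
from itertools import product

def initiation(a, b):
    # Template must hold in the initial state x=0, y=0.
    return 0 == a * 0 + b

def consecution(a, b):
    # If  y == a*x + b  holds before an iteration (guard x < 5),
    # then  y + 2 == a*(x + 1) + b  must hold after it.
    return all((a * x + b) + 2 == a * (x + 1) + b for x in range(5))

candidates = [(a, b) for a, b in product(range(-3, 4), repeat=2)
              if initiation(a, b) and consecution(a, b)]
print(candidates)  # [(2, 0)] — i.e. the invariant  y == 2*x
```

The paper's method replaces this enumeration with a symbolic encoding handed to an SMT solver, which is what makes nonlinear templates and quantified array properties tractable.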
GPUVerify: a verifier for GPU kernels
 In OOPSLA
, 2012
Cited by 4 (0 self)
We present a technique for verifying race and divergence-freedom of GPU kernels that are written in mainstream kernel programming languages such as OpenCL and CUDA. Our approach is founded on a novel formal operational semantics for GPU programming termed synchronous, delayed visibility (SDV) semantics. The SDV semantics provides a precise definition of barrier divergence in GPU kernels and allows kernel verification to be reduced to analysis of a sequential program, thereby completely avoiding the need to reason about thread interleavings, and allowing existing modular techniques for program verification to be leveraged. We describe an efficient encoding for data race detection and propose a method for automatically inferring loop invariants required for verification. We have implemented these techniques as a practical verification tool, GPUVerify, which can be applied directly to OpenCL and CUDA source code. We evaluate GPUVerify with respect to a set of 163 kernels drawn from public and commercial sources. Our evaluation demonstrates that GPUVerify is capable of efficient, automatic verification of a large number of real-world kernels.
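A key simplification behind this style of race checking (sketched here as an assumption about the encoding, not GPUVerify's actual implementation) is that an intra-barrier data race can be found by tracking the memory accesses of just two arbitrary threads and checking whether they conflict:

```python
# Toy two-thread race check (illustration only). Each access is a pair
# (kind, index) with kind 'R' (read) or 'W' (write) on a shared array.
def conflicts(accesses_t1, accesses_t2):
    """A race: both threads touch the same index, at least one writes."""
    for k1, i1 in accesses_t1:
        for k2, i2 in accesses_t2:
            if i1 == i2 and 'W' in (k1, k2):
                return True
    return False

# Hypothetical kernel fragment  a[tid] = a[tid + 1]  for threads 0 and 1:
t0 = [('R', 1), ('W', 0)]  # thread 0 reads a[1], writes a[0]
t1 = [('R', 2), ('W', 1)]  # thread 1 reads a[2], writes a[1]
print(conflicts(t0, t1))  # True: thread 0 reads a[1] while thread 1 writes it
```

The actual tool encodes these access logs symbolically over all thread-id pairs inside a sequential program, which is what the SDV reduction buys.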
Synthesis of first-order dynamic programming algorithms
 In: OOPSLA
, 2011
Cited by 3 (0 self)
To solve a problem with a dynamic programming algorithm, one must reformulate the problem such that its solution can be formed from solutions to overlapping subproblems. Because overlapping subproblems may not be apparent in the specification, it is desirable to obtain the algorithm directly from the specification. We describe a semi-automatic synthesizer of linear-time dynamic programming algorithms. The programmer supplies a declarative specification of the problem and the operators that might appear in the solution. The synthesizer obtains the algorithm by searching a space of candidate algorithms; internally, the search is implemented with constraint solving. The space of candidate algorithms is defined with a program template reusable across all linear-time dynamic programming algorithms, which we characterize as first-order recurrences. This paper focuses on how to write the template so that the constraint solving process scales to real-world linear-time dynamic programming algorithms. We show how to reduce the space with (i) symmetry reduction and (ii) domain knowledge of dynamic programming algorithms. We have synthesized algorithms for variants of maximal substring matching, an assembly-line optimization, and the extended Euclid algorithm. We have also synthesized an algorithm for a problem outside the class of first-order recurrences, by composing three instances of the algorithm template.
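The template-plus-search idea can be made concrete with a toy example (my own construction, not the paper's template language): fix a first-order recurrence shape with two unknown operator slots, enumerate a small operator space, and keep the candidates that match a declarative specification on test inputs. Here the target is the maximum-subarray-sum recurrence:

```python
# Toy synthesis-by-search over a DP template (the paper uses constraint
# solving instead of enumeration; this is an illustration only).
from itertools import product

def spec(xs):
    """Declarative spec: max sum over all nonempty contiguous subarrays."""
    return max(sum(xs[i:j]) for i in range(len(xs))
               for j in range(i + 1, len(xs) + 1))

def run_template(f, g, xs):
    """First-order recurrence:  h = f(x, h);  best = g(best, h)."""
    best_here = best = xs[0]
    for x in xs[1:]:
        best_here = f(x, best_here)
        best = g(best, best_here)
    return best

# Small, hand-picked operator space for the two template slots:
F = {'x+h': lambda x, h: x + h,
     'max(x,h)': lambda x, h: max(x, h),
     'max(x,x+h)': lambda x, h: max(x, x + h)}
G = {'max': max, 'plus': lambda a, b: a + b}

tests = [[1, -2, 3, 4], [-1, -2, -3], [5], [2, -1, 2, -1, 2]]
found = [(fn, gn) for (fn, f), (gn, g) in product(F.items(), G.items())
         if all(run_template(f, g, xs) == spec(xs) for xs in tests)]
print(found)  # [('max(x,x+h)', 'max')] — Kadane's recurrence
```

The paper's contribution is making this search symbolic and scalable (symmetry reduction, DP-specific pruning) rather than testing candidates one by one.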
Automatically Refining Partial Specification for Program Verification
, 2010
Cited by 2 (1 self)
Abstract. Automatically verifying heap-manipulating programs is a challenging task, especially when dealing with complex data structures with strong invariants, such as sorted lists and AVL/red-black trees. The verification process can greatly benefit from human assistance through specification annotations, but this process requires intellectual effort from users and is error-prone. In this paper, we propose a new approach to program verification that allows users to provide only partial specifications for methods. Our approach then refines the given annotations into more complete specifications by discovering missing constraints. The discovered constraints may involve both numerical and multiset properties and can later be confirmed or revised by users. We further augment our approach by requiring partial specifications to be given only for primary methods. Specifications for loops and auxiliary methods can then be systematically discovered by our augmented mechanism, with the help of information propagated from the primary methods. Our work is aimed at verifying beyond shape properties, with the eventual goal of analysing full functional properties for pointer-based data structures. Initial experiments have confirmed that we can automatically refine partial specifications with non-trivial constraints, thus making it easier for users to handle specifications with richer properties.
Generalizing the Template Polyhedral Domain
Cited by 2 (0 self)
Template polyhedra generalize weakly relational domains by specifying arbitrary fixed linear expressions on the left-hand sides of inequalities and undetermined constants on the right. The domain operations required for analysis over template polyhedra can be computed in polynomial time using linear programming. In this paper, we introduce the generalized template polyhedral domain, which extends template polyhedra by pairing fixed left-hand side expressions with bilinear forms involving program variables and unknown parameters on the right. We prove that the domain operations over generalized templates can be defined as the “best possible abstractions” of the corresponding polyhedral domain operations. The resulting analysis can straddle the entire space of linear relation analysis, from the template domain to the full polyhedral domain. We show that analysis in the generalized template domain can be performed by dualizing the join, post-condition, and widening operations. We also investigate the special case of template polyhedra wherein each bilinear form has at most two parameters. For this domain, we use the special properties of two-dimensional polyhedra and techniques from fractional linear programming to derive domain operations that can be implemented in polynomial time in the number of variables in the program and the size of the polyhedra. We present applications of generalized template polyhedra that strengthen previously obtained invariants by converting them into templates. We describe an experimental evaluation of an implementation over several benchmark systems.
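For intuition, the (non-generalized) template domain is simple to sketch: fix a matrix of template rows T, represent an abstract element as a bound vector d with meaning T·x ≤ d componentwise, and compute the join as the row-wise maximum of bounds. In general each tightest bound comes from a linear program; for a finite set of explicit points a direct maximum over the points suffices, as in this toy interval-template example (my own illustration, not the paper's generalized domain):

```python
# Toy template polyhedral domain (illustration only).
# Interval template over (x, y):  x <= d0, -x <= d1, y <= d2, -y <= d3.
TEMPLATE = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def alpha(points):
    """Best abstraction of a finite point set: the tightest bound for each
    template row is the max of the row's value over the points (a stand-in
    for the LP that computes it over a general polyhedron)."""
    return [max(tx * x + ty * y for (x, y) in points) for (tx, ty) in TEMPLATE]

def join(d1, d2):
    """Join = row-wise max of bounds (the least upper bound in this domain)."""
    return [max(a, b) for a, b in zip(d1, d2)]

d1 = alpha([(0, 0), (2, 1)])   # [2, 0, 1, 0]
d2 = alpha([(3, -1)])          # [3, -3, -1, 1]
print(join(d1, d2))            # [3, 0, 1, 1]: the bounding box of all points
```

The paper's generalization replaces the undetermined constants d with bilinear forms in program variables and parameters, which is what lets the domain interpolate between templates and full polyhedra.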
D.: Learning universally quantified invariants of linear data structures
 CoRR
, 2013
Cited by 2 (1 self)
Abstract. We propose a new automaton model, called quantified data automata over words, that can model quantified invariants over linear data structures, and we build polynomial-time active learning algorithms for them, where the learner is allowed to query the teacher with membership and equivalence queries. In order to express invariants in decidable logics, we introduce a decidable subclass of QDAs, called elastic QDAs, and prove that every QDA has a unique minimally over-approximating elastic QDA. We then give an application of these theoretically sound and efficient active learning algorithms in a passive learning framework and show that we can efficiently learn quantified linear data structure invariants from samples obtained from dynamic runs for a large class of programs.
Knowledge-Based Verification of Service Compositions – An SMT Approach
 In: 18th International Conference on Engineering of Complex Computer Systems (ICECCS), July 2013. doi: 10.1109/ICECCS.2013.14
Cited by 2 (2 self)
Abstract—In the Semantic (Web) Services area, services are considered black boxes with a semantic description of their interfaces, allowing for precise service selection and configuration. The semantic description is usually grounded on domain-specific concepts as modeled in ontologies. This applies not only to the types used in service signatures, but also to the predicates occurring in preconditions and effects of services. Ontologies, in particular those enhanced with rules, capture the knowledge of domain experts on properties of and relations between domain concepts. In this paper, we present a verification technique for service compositions which makes use of this domain knowledge. We consider a service composition to be an assembly of services of which we know just the signatures, preconditions, and effects. We aim at proving that a composition satisfies a (user-defined) requirement, specified in terms of guaranteed preconditions and required postconditions. As the underlying verification engine we use an SMT solver. To take advantage of the domain knowledge (and often, to enable verification at all), the knowledge is fed into the solver in the form of sorts, uninterpreted functions, and in particular assertions, so as to enhance the solver’s reasoning capabilities. Thereby, we allow for deductions within a domain previously unknown to the solver. We exemplify our technique on a case study from the area of water network optimization software.
Solving existentially quantified Horn clauses
 In: CAV
, 2013
Cited by 1 (1 self)
Temporal verification of universal (i.e., valid for all computation paths) properties of various kinds of programs, e.g., procedural, multi-threaded, or functional, can be reduced to finding solutions for equations in the form of universally quantified Horn clauses extended with well-foundedness conditions. Dealing with existential properties (e.g., whether there exists a particular computation path), however, requires solving forall-exists quantified Horn clauses, where the conclusion part of some clauses contains existentially quantified variables. For example, a deductive approach to CTL verification reduces to solving such clauses. In this paper we present a method for solving forall-exists quantified Horn clauses extended with well-foundedness conditions. Our method is based on a counterexample-guided abstraction refinement scheme to discover witnesses for existentially quantified variables. We also present an application of our solving method to the automation of CTL verification of software, as well as its experimental evaluation.
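The witness-discovery step can be illustrated loosely (this is a toy analogy, not the paper's abstraction refinement scheme): solving a forall-exists clause amounts to finding a Skolem function for the existential variable, and candidates can be ruled out by counterexamples. Here the candidate space and the target clause are both invented for illustration:

```python
# Toy witness search for  forall x . exists y . (y > x and y is even),
# checked over a finite sample domain (illustration only; the paper works
# symbolically on Horn clauses with an SMT solver).
DOMAIN = range(-10, 11)

def property_holds(x, y):
    return y > x and y % 2 == 0

# Hand-picked candidate Skolem functions y = f(x):
CANDIDATES = {
    'x + 1': lambda x: x + 1,           # fails: y may be odd
    '2*x': lambda x: 2 * x,             # fails: not > x when x <= 0
    '2*(x//2) + 2': lambda x: 2 * (x // 2) + 2,  # even and always > x
}

def find_witness():
    for name, f in CANDIDATES.items():
        counterexample = next(
            (x for x in DOMAIN if not property_holds(x, f(x))), None)
        if counterexample is None:
            return name  # no counterexample found: accept this witness
    return None

print(find_witness())  # '2*(x//2) + 2'
```

In the paper, the counterexamples come from the abstraction refinement loop and drive the construction of the witness, rather than merely filtering a fixed candidate list.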