Efficiently Solving Quantified BitVector Formulas
Cited by 24 (7 self)
In recent years, bit-precise reasoning has gained importance in hardware and software verification. Of renewed interest is the use of symbolic reasoning for synthesising loop invariants, ranking functions, or whole program fragments and hardware circuits. Solvers for the quantifier-free fragment of bit-vector logic exist and often rely on SAT solvers for efficiency. However, many techniques require quantifiers in bit-vector formulas to avoid an exponential blow-up during construction. Solvers for quantified formulas usually flatten the input to obtain a quantified Boolean formula, losing much of the word-level information in the formula. We present a new approach based on a set of effective word-level simplifications that are traditionally employed in automated theorem proving, heuristic quantifier instantiation methods used in SMT solvers, and model-finding techniques based on skeletons/templates. Experimental results on two different types of benchmarks indicate that our method outperforms the traditional flattening approach by multiple orders of magnitude of runtime.
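The counterexample-guided flavour of heuristic quantifier instantiation can be illustrated with a toy sketch (not the paper's algorithm): to decide a formula of the form "exists x, for all y, phi(x, y)" over 4-bit bit-vectors, alternately guess a candidate x consistent with the counterexamples collected so far, then search for a y that refutes it. The names `spec` and `cegis`, and the example property (x acts as "multiply by two"), are illustrative assumptions.

```python
MASK = 0xF  # work over 4-bit bit-vectors

def spec(x, y):
    # target property: x * y == y << 1 for every y (arithmetic modulo 2**4),
    # i.e. x behaves as multiplication by two
    return (x * y) & MASK == (y << 1) & MASK

def cegis():
    # counterexample-guided loop: guess a candidate x, refute with a y, repeat
    counterexamples = [0]
    while True:
        # synthesis step: pick any x consistent with all counterexamples so far
        cand = next((x for x in range(16)
                     if all(spec(x, y) for y in counterexamples)), None)
        if cand is None:
            return None  # no candidate survives: the formula is unsatisfiable
        # verification step: search for a y that refutes the candidate
        cex = next((y for y in range(16) if not spec(cand, y)), None)
        if cex is None:
            return cand  # candidate works for every y: done
        counterexamples.append(cex)
```

Here the loop converges after two rounds to x = 2; a flattening approach would instead expand all quantified variables bit by bit up front.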
Using Bounded Model Checking to Focus Fixpoint Iterations
, 2011
Cited by 13 (6 self)
Two classical sources of imprecision in static analysis by abstract interpretation are widening and merge operations. Merge operations can be done away with by distinguishing paths, as in trace partitioning, at the expense of enumerating an exponential number of paths. In this article, we describe how to avoid such systematic exploration by focusing on a single path at a time, designated by SMT-solving. Our method combines well with acceleration techniques, thus doing away with widenings as well in some cases. We illustrate it over the well-known domain of convex polyhedra.
Instantiation-based invariant discovery
 In NFM, volume 6617 of LNCS
, 2011
Cited by 11 (4 self)
We present a general scheme for automated instantiation-based invariant discovery. Given a transition system, the scheme produces k-inductive invariants from templates representing decidable predicates over the system’s data types. The proposed scheme relies on efficient reasoning engines such as SAT and SMT solvers, and capitalizes on their ability to quickly generate countermodels of non-invariant conjectures. We discuss in detail two practical specializations of the general scheme in which templates represent partial orders. Our experimental results show that both specializations are able to quickly produce invariants from a variety of synchronous systems which prove quite useful in proving safety properties for these systems.
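A minimal sketch of the template-instantiation idea, assuming a toy two-counter transition system and a hand-picked set of partial-order templates (the system, the template names, and the bounded-enumeration check standing in for a SAT/SMT query are all illustrative):

```python
# toy transition system: (x, y) starts at (0, 0); each step does x += 1, y += 2
def init():
    return (0, 0)

def step(state):
    x, y = state
    return (x + 1, y + 2)

# candidate templates: partial-order relations between the two state variables
TEMPLATES = {
    "x <= y": lambda x, y: x <= y,
    "x >= y": lambda x, y: x >= y,
    "x == y": lambda x, y: x == y,
}

def is_inductive(pred, bound=50):
    # base case: the initial state satisfies the candidate
    if not pred(*init()):
        return False
    # inductive step, checked by bounded enumeration rather than an SMT query:
    # every state satisfying the candidate must step to a state that does too
    return all(pred(*step((x, y)))
               for x in range(bound) for y in range(bound)
               if pred(x, y))

invariants = [name for name, pred in TEMPLATES.items() if is_inductive(pred)]
```

Only "x <= y" survives both checks; the non-invariant conjectures are refuted by concrete countermodels such as the step from (0, 0) to (1, 2).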
Inductive Invariant Generation via Abductive Inference
Cited by 7 (1 self)
This paper presents a new method for generating inductive loop invariants that are expressible as boolean combinations of linear integer constraints. The key idea underlying our technique is to perform a backtracking search that combines Hoare-style verification condition generation with a logical abduction procedure based on quantifier elimination to speculate candidate invariants. Starting with true, our method iteratively strengthens loop invariants until they are inductive and strong enough to verify the program. A key feature of our technique is that it is lazy: It only infers those invariants that are necessary for verifying program correctness. Furthermore, our technique can infer arbitrary boolean combinations (including disjunctions) of linear invariants. We have implemented the proposed approach in a tool called HOLA. Our experiments demonstrate that HOLA can infer interesting invariants that are beyond the reach of existing state-of-the-art invariant generation tools.
Deriving invariants by algorithmic learning, decision procedures, and predicate abstraction. Technical Memorandum ROSAEC-2009-004, Research On Software Analysis for Error-Free Computing
, 2009
Cited by 6 (0 self)
By combining algorithmic learning, decision procedures, and predicate abstraction, we present an automated technique for finding loop invariants in propositional formulae. Given invariant approximations derived from pre- and postconditions, our new technique exploits the flexibility in invariants by a simple randomized mechanism. The proposed technique is able to generate invariants for some Linux device drivers and SPEC2000 benchmarks in our experiments.
Complexity and algorithms for monomial and clausal predicate abstraction
 IN CADE
, 2009
Cited by 5 (3 self)
In this paper, we investigate the asymptotic complexity of various predicate abstraction problems relative to the asymptotic complexity of checking an annotated program in a given assertion logic. Unlike previous approaches, we pose the predicate abstraction problem as a decision problem, instead of the traditional inference problem. For assertion logics closed under weakest (liberal) precondition and Boolean connectives, we show two restrictions of the predicate abstraction problem where the two complexities match. The restrictions correspond to the case of monomial and clausal abstraction. For these restrictions, we show a symbolic encoding that reduces the predicate abstraction problem to checking the satisfiability of a single formula whose size is polynomial in the size of the program and the set of predicates. We also provide a new iterative algorithm for solving the clausal abstraction problem that can be seen as the dual of the Houdini algorithm for solving the monomial abstraction problem.
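The Houdini algorithm for monomial abstraction, mentioned above, can be sketched as follows: start from every candidate predicate that holds initially, then repeatedly drop predicates not preserved by one loop iteration under the current conjunction. The toy loop, the candidate set, and the bounded-enumeration check standing in for an SMT validity query are illustrative assumptions:

```python
# toy loop: i = 0; n = 10; while i < n: i += 1
INIT = (0, 10)

# candidate predicates over (i, n); Houdini computes the largest inductive
# conjunction drawn from this set
CANDIDATES = {
    "i >= 0":  lambda i, n: i >= 0,
    "i <= n":  lambda i, n: i <= n,
    "i == 0":  lambda i, n: i == 0,
    "n == 10": lambda i, n: n == 10,
}

def preserved(name, kept, bound=20):
    # is this predicate preserved by one iteration (guard i < n, body i += 1)
    # from every state satisfying the currently kept conjunction?
    # bounded enumeration stands in for an SMT validity query.
    for i in range(-bound, bound):
        for n in range(-bound, bound):
            if all(CANDIDATES[q](i, n) for q in kept) and i < n:
                if not CANDIDATES[name](i + 1, n):
                    return False
    return True

def houdini():
    # start from every candidate that holds in the initial state, then drop
    # predicates until the remaining conjunction is inductive
    kept = {name for name, p in CANDIDATES.items() if p(*INIT)}
    while True:
        bad = {name for name in kept if not preserved(name, kept)}
        if not bad:
            return kept
        kept -= bad
```

Here "i == 0" is dropped in the first round (it is not preserved by i += 1), and the remaining three-predicate conjunction is inductive. The clausal-abstraction algorithm of the paper can be viewed as the dual of this loop.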
Weakest precondition synthesis for compiler optimizations
 In Proc. of the 15th International Conference on Verification, Model Checking, and Abstract Interpretation
, 2014
Cited by 4 (2 self)
Compiler optimizations play an increasingly important role in code generation. This is especially true with the advent of resource-limited mobile devices. We rely on compiler optimizations to improve performance, reduce code size, and reduce power consumption of our programs. Despite being a mature field, compiler optimizations are still designed and implemented by hand, and usually without providing any guarantee of correctness. In addition to devising the code transformations, designers and implementers have to come up with an analysis that determines in which cases the optimization can be safely applied. In other words, the optimization designer has to specify a precondition that ensures that the optimization is semantics-preserving. However, devising preconditions for optimizations by hand is a non-trivial task. It is easy to specify a precondition that, although correct, is too restrictive, and therefore misses some optimization opportunities. In this paper, we propose, to the best of our knowledge, the first algorithm for the automatic synthesis of preconditions for compiler optimizations. The synthesized preconditions are provably correct by construction, and they are guaranteed to be the weakest in the precondition language that we consider. We implemented the proposed technique in a tool named PSyCO. We present examples of preconditions synthesized by PSyCO, as well as the results of running PSyCO on a set of optimizations.
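The weakest-precondition operator underlying such synthesis can be illustrated on a toy assignment language, where wp(x := e, Q) = Q[e/x]. This naive string-substitution sketch (which would mis-handle variables sharing a name prefix) is an illustrative assumption, not PSyCO's machinery:

```python
# weakest precondition of an assignment: wp(x := e, Q) = Q[e/x]
# (formulas are plain strings; naive substring replacement is adequate here
# only because the toy variable names never overlap)

def wp_assign(var, expr, post):
    # substitute the assigned expression for the variable in the postcondition
    return post.replace(var, f"({expr})")

def wp_seq(stmts, post):
    # wp distributes backwards over sequential composition
    for var, expr in reversed(stmts):
        post = wp_assign(var, expr, post)
    return post

# program: x := x + 1; y := x * 2   with postcondition y > 10
pre = wp_seq([("x", "x + 1"), ("y", "x * 2")], "y > 10")
```

The computed precondition is "((x + 1) * 2) > 10": any initial x satisfying it (e.g. x = 5) makes the postcondition hold after the two assignments.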
Synthesis of first-order dynamic programming algorithms
 In: OOPSLA
, 2011
Cited by 4 (0 self)
To solve a problem with a dynamic programming algorithm, one must reformulate the problem such that its solution can be formed from solutions to overlapping subproblems. Because overlapping subproblems may not be apparent in the specification, it is desirable to obtain the algorithm directly from the specification. We describe a semi-automatic synthesizer of linear-time dynamic programming algorithms. The programmer supplies a declarative specification of the problem and the operators that might appear in the solution. The synthesizer obtains the algorithm by searching a space of candidate algorithms; internally, the search is implemented with constraint solving. The space of candidate algorithms is defined with a program template reusable across all linear-time dynamic programming algorithms, which we characterize as first-order recurrences. This paper focuses on how to write the template so that the constraint solving process scales to real-world linear-time dynamic programming algorithms. We show how to reduce the space with (i) symmetry reduction and (ii) domain knowledge of dynamic programming algorithms. We have synthesized algorithms for variants of maximal substring matching, an assembly-line optimization, and the extended Euclid algorithm. We have also synthesized a problem outside the class of first-order recurrences, by composing three instances of the algorithm template.
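The search over candidate algorithms can be mimicked on a tiny scale: synthesize the update operator of a first-order recurrence (here, the maximal-suffix-sum update behind maximal-substring variants) by checking each candidate operator against a brute-force oracle on a few test inputs. The candidate space, test inputs, and function names are illustrative assumptions, not the paper's template or its constraint-based search:

```python
# synthesize the update operator of a first-order recurrence by search:
# a running value "prev" is combined with each new element "a" by some
# operator, and a brute-force oracle supplies the correct answer.

def oracle(xs):
    # maximum sum over all non-empty suffixes of xs (the quantity a
    # Kadane-style maximal-substring algorithm maintains incrementally)
    return max(sum(xs[i:]) for i in range(len(xs)))

CANDIDATES = {
    "prev + a":         lambda prev, a: prev + a,
    "max(prev, a)":     lambda prev, a: max(prev, a),
    "max(prev + a, a)": lambda prev, a: max(prev + a, a),
    "min(prev + a, a)": lambda prev, a: min(prev + a, a),
}

TESTS = [[1, -2, 3], [5, 4], [-1, -2, -3], [2, -1, 2, -5, 4]]

def run(op, xs):
    # evaluate the linear-time recurrence with the candidate operator
    prev = xs[0]
    for a in xs[1:]:
        prev = op(prev, a)
    return prev

def synthesize():
    # keep the first operator that matches the oracle on every test input
    for name, op in CANDIDATES.items():
        if all(run(op, xs) == oracle(xs) for xs in TESTS):
            return name
    return None
```

Only "max(prev + a, a)" agrees with the oracle on all tests; the paper's synthesizer explores a far larger operator space with constraint solving rather than enumeration.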
Modular Abstractions of Reactive Nodes using Disjunctive Invariants
, 2011
Cited by 3 (1 self)
We wish to abstract nodes in a reactive programming language, such as Lustre, into nodes with a simpler control structure, with a bound on the number of control states. In order to do so, we compute disjunctive invariants in predicate abstraction, with a bounded number of disjuncts, then we abstract the node, each disjunct representing an abstract state. The computation of the disjunctive invariant is performed by a form of quantifier elimination expressed using SMT-solving. The same method can also be used to obtain disjunctive loop invariants.
From Invariant Checking to Invariant Inference Using Randomized Search
Cited by 3 (0 self)
We describe a general framework c2i for generating an invariant inference procedure from an invariant checking procedure. Given a checker and a language of possible invariants, c2i generates an inference procedure that iteratively invokes two phases. The search phase uses randomized search to discover candidate invariants and the validate phase uses the checker to either prove or refute that the candidate is an actual invariant. To demonstrate the applicability of c2i, we use it to generate inference procedures that prove safety properties of numerical programs, prove non-termination of numerical programs, prove functional specifications of array manipulating programs, prove safety properties of string manipulating programs, and prove functional specifications of heap manipulating programs that use linked list data structures.
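The search/validate loop can be sketched on a one-variable toy task: randomly sample candidate invariants of the form x <= c for the loop `x = 0; while x < 100: x += 1`, and validate each candidate's initiation, consecution, and sufficiency with a bounded-enumeration checker standing in for the framework's invariant checking procedure. The invariant language, the coefficient range, and the function names are illustrative assumptions:

```python
import random

# verification task: x = 0; while x < 100: x += 1; show x <= 100 at exit.

def valid(c, bound=200):
    # validate phase: bounded enumeration stands in for a real checker
    if c < 0:                                         # initiation: x = 0
        return False
    for x in range(-bound, bound):
        if x <= c and x < 100 and not (x + 1 <= c):   # consecution
            return False
        if x <= c and x >= 100 and not (x <= 100):    # sufficiency at exit
            return False
    return True

def infer(seed=1, tries=20000):
    # search phase: randomly sample candidates of the form x <= c
    rng = random.Random(seed)
    for _ in range(tries):
        c = rng.randint(-200, 200)
        if valid(c):
            return c
    return None
```

In this template only c = 100 passes all three checks (consecution forces c >= 100, sufficiency forces c <= 100), so the random search converges to the invariant x <= 100.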