Results 1-10 of 78
Integrating decision procedures into heuristic theorem provers: A case study of linear arithmetic
Machine Intelligence, 1988
"... We discuss the problem of incorporating into a heuristic theorem prover a decision procedure for a fragment of the logic. An obvious goal when incorporating such a procedure is to reduce the search space explored by the heuristic component of the system, as would be achieved by eliminating from the ..."
Abstract

Cited by 107 (9 self)
 Add to MetaCart
We discuss the problem of incorporating into a heuristic theorem prover a decision procedure for a fragment of the logic. An obvious goal when incorporating such a procedure is to reduce the search space explored by the heuristic component of the system, as would be achieved by eliminating from the system’s data base some explicitly stated axioms. For example, if a decision procedure for linear inequalities is added, one would hope to eliminate the explicit consideration of the transitivity axioms. However, the decision procedure must then be used in all the ways the eliminated axioms might have been. The difficulty of achieving this degree of integration is more dependent upon the complexity of the heuristic component than upon that of the decision procedure. The view of the decision procedure as a "black box" is frequently destroyed by the need to pass large amounts of search strategic information back and forth between the two components. Finally, the efficiency of the decision procedure may be virtually irrelevant; the efficiency of the final system may depend most heavily on how easy it is to communicate between the two components. This paper is a case study of how we integrated a linear arithmetic procedure into a heuristic theorem prover. By linear arithmetic here we mean the decidable subset of number theory dealing with universally quantified formulas composed of the logical connectives, the identity relation, the Peano "less than" relation, the Peano addition and subtraction functions, Peano constants, ...
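The abstract's example of eliminating transitivity axioms can be made concrete for one small fragment of linear arithmetic: conjunctions of difference constraints. A sketch (illustrative only, not the paper's prover) decides such conjunctions by negative-cycle detection, where transitivity is implicit in path relaxation:

```python
# Decide satisfiability of a conjunction of constraints x - y <= c over the
# integers by Bellman-Ford-style negative-cycle detection. Chaining a<b and
# b<c into a<c happens through edge relaxation, so no explicit transitivity
# axiom is ever consulted.
def satisfiable(constraints):
    # constraints: list of (x, y, c) meaning x - y <= c
    nodes = {v for x, y, _ in constraints for v in (x, y)}
    dist = {v: 0 for v in nodes}          # virtual source reaching every node
    for _ in range(len(nodes) + 1):
        changed = False
        for x, y, c in constraints:
            if dist[y] + c < dist[x]:     # relax edge y -> x with weight c
                dist[x] = dist[y] + c
                changed = True
        if not changed:
            return True                   # fixpoint reached: a model exists
    return False                          # still relaxing: negative cycle

# A strict inequality x < y is encoded over the integers as x - y <= -1.
print(satisfiable([("a", "b", -1), ("b", "c", -1), ("c", "a", -1)]))  # -> False (a<b<c<a)
print(satisfiable([("a", "b", -1), ("b", "c", -1)]))                  # -> True
```

The encoding of strict inequalities as "<= -1" is valid only over the integers, which matches the Peano setting the abstract describes.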
Nonlinear Array Dependence Analysis
1991
"... Standard array data dependence techniques can only reason about linear constraints. There has also been work on analyzing some dependences involving polynomial constraints. Analyzing array data dependences in realworld programs requires handling many "unanalyzable" terms: subscript arrays, runtime ..."
Abstract

Cited by 74 (6 self)
 Add to MetaCart
Standard array data dependence techniques can only reason about linear constraints. There has also been work on analyzing some dependences involving polynomial constraints. Analyzing array data dependences in real-world programs requires handling many "unanalyzable" terms: subscript arrays, runtime tests, function calls. The standard approach to analyzing such programs has been to ignore any constraints that cannot be reasoned about. This is unsound when reasoning about value-based dependences and whether privatization is legal. Also, this prevents us from determining the conditions that must be true to disprove the dependence. These conditions could be checked by a runtime test or verified by a programmer or aggressive, demand-driven interprocedural analysis. We describe a solution to these problems. Our solution makes our system sound and more accurate for analyzing value-based dependences and derives conditions that can be used to disprove dependences. We also give some p...
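One way to read the proposal above: instead of giving up on an unanalyzable subscript array, emit the condition under which the dependence vanishes and test it once the array is known at run time. A hypothetical sketch (names illustrative, not the paper's system):

```python
# For a loop "for i: A[p[i]] = ... A[i] ...", a loop-carried flow dependence
# between the write A[p[i]] and the read A[i] requires p[i1] == i2 for some
# i1 < i2. Rather than conservatively assuming the dependence, a compiler
# can emit this condition and evaluate it at run time once p is available.
def dependence_possible(p):
    written = set()              # index values written by earlier iterations
    for i2 in range(len(p)):
        if i2 in written:        # some iteration i1 < i2 wrote A[i2]
            return True
        written.add(p[i2])
    return False

print(dependence_possible([1, 2, 3, 0]))    # -> True  (p[0] = 1 reaches the read at i = 1)
print(dependence_possible([-1, 0, 1, 2]))   # -> False (writes only target already-read cells)
```

If the test returns False for the actual subscript array, the loop can safely run in parallel for that execution.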
Using Integer Sets for Data-Parallel Program Analysis and Optimization
In Proceedings of the SIGPLAN '98 Conference on Programming Language Design and Implementation, 1998
"... In this paper, we describe our experience with using an abstract integerset framework to develop the Rice dHPF compiler, a compiler for High Performance Fortran. We present simple, yet general formulations of the major computation partitioning and communication analysis tasks as well as a number of ..."
Abstract

Cited by 57 (29 self)
 Add to MetaCart
In this paper, we describe our experience with using an abstract integer-set framework to develop the Rice dHPF compiler, a compiler for High Performance Fortran. We present simple, yet general formulations of the major computation partitioning and communication analysis tasks as well as a number of important optimizations in terms of abstract operations on sets of integer tuples. This approach has made it possible to implement a comprehensive collection of advanced optimizations in dHPF, and to do so in the context of a more general computation partitioning model than previous compilers. One potential limitation of the approach is that the underlying class of integer set problems is fundamentally unable to represent HPF data distributions on a symbolic number of processors. We describe how we extend the approach to compile codes for a symbolic number of processors, without requiring any changes to the set formulations for the above optimizations. We show experimentally that the set re...
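The "abstract operations on sets of integer tuples" idea can be mimicked concretely with ordinary finite sets. A toy sketch (block distribution and owner-computes rule are stand-ins, not dHPF's formulation) derives a communication set as a set difference:

```python
# Toy integer-set analysis: data and iteration spaces are sets of integers,
# and communication requirements fall out of set operations.
N, P = 16, 4                 # array size and processor count (illustrative)
block = N // P
owned = {p: {i for i in range(N) if i // block == p} for p in range(P)}

# Owner-computes: processor p executes iteration i iff it owns A[i].
# If the statement reads A[i-1], p must receive every value it reads
# but does not own.
def comm_set(p):
    reads = {i - 1 for i in owned[p] if i - 1 >= 0}
    return reads - owned[p]

print(sorted(comm_set(1)))   # -> [3]: the boundary element held by processor 0
```

Real compilers use symbolic (Presburger-style) sets rather than enumerated ones, which is exactly what makes the symbolic-processor-count case in the abstract hard.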
A classification of symbolic transition systems
ACM Transactions on Computational Logic, 2005
"... We define five increasingly comprehensive classes of infinitestate systems, called STS1STS5, whose state spaces have finitary structure. For four of these classes, we provide examples from hybrid systems.STS1 These are the systems with finite bisimilarity quotients. They can be analyzed symbolica ..."
Abstract

Cited by 44 (5 self)
 Add to MetaCart
We define five increasingly comprehensive classes of infinite-state systems, called STS1-STS5, whose state spaces have finitary structure. For four of these classes, we provide examples from hybrid systems.
STS1: These are the systems with finite bisimilarity quotients. They can be analyzed symbolically by iteratively applying predecessor and Boolean operations on state sets, starting from a finite number of observable state sets. Any such iteration is guaranteed to terminate in that only a finite number of state sets can be generated. This enables model checking of the μ-calculus.
STS2: These are the systems with finite similarity quotients. They can be analyzed symbolically by iterating the predecessor and positive Boolean operations. This enables model checking of the existential and universal fragments of the μ-calculus.
STS3: These are the systems with finite trace-equivalence quotients. They can be analyzed symbolically by iterating the predecessor operation and a restricted form of positive Boolean operations (intersection is restricted to intersection with observables). This enables model checking of all ω-regular properties, including linear temporal logic.
STS4: These are the systems with finite distance-equivalence quotients (two states are equivalent if for every distance d, the same observables can be reached in d transitions). The systems in this class can be analyzed symbolically by iterating the predecessor operation and terminating when no new state sets are generated. This enables model checking of the existential conjunction-free and universal disjunction-free fragments of the μ-calculus.
STS5: These are the systems with finite bounded-reachability quotients (two states are equivalent if for every distance d, the same observables can be reached in d or fewer transitions). The systems in this class can be analyzed symbolically by iterating the predecessor operation and terminating when no new states are encountered (this is a weaker termination condition than above). This enables model checking of reachability properties.
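The common engine behind all five classes is iteration of the predecessor operation on state sets. For a finite system the weaker, STS5-style termination condition (stop when no new states appear) looks as follows; this is an illustrative sketch over explicit sets, not the paper's symbolic formalism:

```python
# Backward reachability by iterating the predecessor operation on state
# sets, terminating when no new states are encountered.
def backward_reach(transitions, target):
    def pre(S):
        # predecessor operation: states with a transition into S
        return {s for (s, t) in transitions if t in S}
    reach = set(target)
    while True:
        new = pre(reach) - reach
        if not new:            # no new states: fixpoint reached
            return reach
        reach |= new

edges = {(0, 1), (1, 2), (2, 3), (4, 4)}
print(sorted(backward_reach(edges, {3})))   # -> [0, 1, 2, 3]; state 4 cannot reach 3
```

The point of the classification above is which properties this style of iteration can decide when the state space is infinite but the generated sets (or states) remain finite.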
Model-Theoretic Methods in Combined Constraint Satisfiability
Journal of Automated Reasoning, 2004
"... We extend NelsonOppen combination procedure to the case of theories which are compatible with respect to a common subtheory in the shared signature. The notion of compatibility relies on model completions and related concepts from classical model theory. ..."
Abstract

Cited by 40 (10 self)
 Add to MetaCart
We extend the Nelson-Oppen combination procedure to the case of theories that are compatible with respect to a common subtheory in the shared signature. The notion of compatibility relies on model completions and related concepts from classical model theory.
Modular Data Structure Verification
EECS Department, Massachusetts Institute of Technology, 2007
"... This dissertation describes an approach for automatically verifying data structures, focusing on techniques for automatically proving formulas that arise in such verification. I have implemented this approach with my colleagues in a verification system called Jahob. Jahob verifies properties of Java ..."
Abstract

Cited by 36 (21 self)
 Add to MetaCart
This dissertation describes an approach for automatically verifying data structures, focusing on techniques for automatically proving formulas that arise in such verification. I have implemented this approach with my colleagues in a verification system called Jahob. Jahob verifies properties of Java programs with dynamically allocated data structures. Developers write Jahob specifications in classical higher-order logic (HOL); Jahob reduces the verification problem to deciding the validity of HOL formulas. I present a new method for proving HOL formulas by combining automated reasoning techniques. My method consists of 1) splitting formulas into individual HOL conjuncts, 2) soundly approximating each HOL conjunct with a formula in a more tractable fragment and 3) proving the resulting approximation using a decision procedure or a theorem prover. I present three concrete logics; for each logic I show how to use it to approximate HOL formulas, and how to decide the validity of formulas in this logic. First, I present an approximation of HOL based on a translation to first-order logic, which enables the use of existing resolution-based theorem provers. Second, I present an approximation of HOL based on field constraint analysis, a new technique that enables ...
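The split-approximate-dispatch pipeline described above can be caricatured in a few lines: each conjunct is tried against a list of back-end provers, and the formula is discharged if every conjunct is. (A sketch with stand-in provers and formulas as plain strings; not Jahob's code.)

```python
# Conjunct-wise dispatch: split a formula into conjuncts and try each
# against a list of increasingly specialized (stand-in) provers.
def prove(conjuncts, provers):
    for c in conjuncts:
        if not any(p(c) for p in provers):
            return False          # some conjunct resists every prover
    return True

# Stand-in "provers", each recognizing one tiny syntactic fragment.
arith = lambda f: f == "x + 0 = x"       # hypothetical arithmetic back end
sets  = lambda f: f == "A & B <= A"      # hypothetical set-algebra back end

print(prove(["x + 0 = x", "A & B <= A"], [arith, sets]))  # -> True
print(prove(["x + 0 = x", "unknown"], [arith, sets]))     # -> False
```

The soundness argument in the dissertation comes from the approximation step: each conjunct handed to a back end is an over-approximation whose validity implies validity of the original.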
Deciding Boolean Algebra with Presburger Arithmetic
Journal of Automated Reasoning
"... Abstract. We describe an algorithm for deciding the firstorder multisorted theory BAPA, which combines 1) Boolean algebras of sets of uninterpreted elements (BA) and 2) Presburger arithmetic operations (PA). BAPA can express the relationship between integer variables and cardinalities of unbounded ..."
Abstract

Cited by 31 (26 self)
 Add to MetaCart
We describe an algorithm for deciding the first-order multisorted theory BAPA, which combines 1) Boolean algebras of sets of uninterpreted elements (BA) and 2) Presburger arithmetic operations (PA). BAPA can express the relationship between integer variables and cardinalities of unbounded finite sets, and supports arbitrary quantification over sets and integers. Our original motivation for BAPA is deciding verification conditions that arise in the static analysis of data structure consistency properties. Data structures often use an integer variable to keep track of the number of elements they store; an invariant of such a data structure is that the value of the integer variable is equal to the number of elements stored in the data structure. When the data structure content is represented by a set, the resulting constraints can be captured in BAPA. BAPA formulas with quantifier alternations arise when verifying programs with annotations containing quantifiers, or when proving simulation relation conditions for refinement and equivalence of program fragments. Furthermore, BAPA constraints can be used for proving the termination of programs that manipulate data structures, as well as ...
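A flavor of what BAPA expresses: the data-structure invariant "size equals |content|" together with an insertion step. The toy check below enumerates all set assignments over a tiny universe (illustrative only; the paper's algorithm reduces to PA rather than enumerating models):

```python
# Check a BAPA-style implication by brute force over a small universe:
# if size == |content| and an element e not in content is inserted,
# then size + 1 == |content union {e}|.
from itertools import combinations

U = range(4)

def subsets(universe):
    for r in range(len(universe) + 1):
        for c in combinations(universe, r):
            yield set(c)

valid = all(
    len(content) + 1 == len(content | {e})   # size tracks cardinality after insert
    for content in subsets(U)
    for e in U if e not in content
)
print(valid)   # -> True
```

The interesting part of BAPA is that such facts hold for unbounded sets and under arbitrary set/integer quantification, which no finite enumeration can establish.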
Complete Functional Synthesis
"... Synthesis of program fragments from specifications can make programs easier to write and easier to reason about. To integrate synthesis into programming languages, synthesis algorithms should behave in a predictable way—they should succeed for a welldefined class of specifications. They should also ..."
Abstract

Cited by 28 (12 self)
 Add to MetaCart
Synthesis of program fragments from specifications can make programs easier to write and easier to reason about. To integrate synthesis into programming languages, synthesis algorithms should behave in a predictable way: they should succeed for a well-defined class of specifications. They should also support unbounded data types such as numbers and data structures. We propose to generalize decision procedures into predictable and complete synthesis procedures. Such procedures are guaranteed to find code that satisfies the specification if such code exists. Moreover, we identify conditions under which synthesis will statically decide whether the solution is guaranteed to exist, and whether it is unique. We demonstrate our approach by starting from decision procedures for linear arithmetic and data structures and transforming them into synthesis procedures. We establish results on the size and the efficiency of the synthesized code. We show that such procedures are useful as a language extension with implicit value definitions, and we show how to extend a compiler to support such definitions. Our constructs provide the benefits of synthesis to programmers, without requiring them to learn new concepts or give up a deterministic execution model.
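One concrete instance of turning a decision procedure into a synthesis procedure: for a specification "find integers x, y with a*x + b*y == c", the extended Euclidean algorithm both decides whether a solution exists (gcd(a, b) must divide c) and produces a witness. (An illustrative sketch in the spirit of the abstract, not the paper's implementation.)

```python
# Synthesize (x, y) with a*x + b*y == c over the integers, or report that
# no solution exists. The existence test and the witness come from the
# same computation, which is what makes the procedure "complete".
def ext_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def synthesize(a, b, c):
    g, x, y = ext_gcd(a, b)
    if c % g != 0:
        return None                   # statically known: no solution exists
    k = c // g
    return x * k, y * k              # witness: a*(x*k) + b*(y*k) == c

print(synthesize(3600, 60, 7380))    # a solution of 3600*x + 60*y == 7380
print(synthesize(4, 6, 7))           # -> None: gcd(4, 6) = 2 does not divide 7
```

Note the two static outcomes the abstract mentions: here the divisibility test decides existence at "compile time", while uniqueness fails (solutions form an infinite family shifted by multiples of b/g and a/g).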
An algorithm for deciding BAPA: Boolean Algebra with Presburger Arithmetic
In 20th International Conference on Automated Deduction, CADE-20, 2005
"... Abstract. We describe an algorithm for deciding the firstorder multisorted theory BAPA, which combines 1) Boolean algebras of sets of uninterpreted elements (BA) and 2) Presburger arithmetic operations (PA). BAPA can express the relationship between integer variables and cardinalities of a priory u ..."
Abstract

Cited by 26 (13 self)
 Add to MetaCart
We describe an algorithm for deciding the first-order multisorted theory BAPA, which combines 1) Boolean algebras of sets of uninterpreted elements (BA) and 2) Presburger arithmetic operations (PA). BAPA can express the relationship between integer variables and cardinalities of a priori unbounded finite sets, and supports arbitrary quantification over sets and integers. Our motivation for BAPA is deciding verification conditions that arise in the static analysis of data structure consistency properties. Data structures often use an integer variable to keep track of the number of elements they store; an invariant of such a data structure is that the value of the integer variable is equal to the number of elements stored in the data structure. When the data structure content is represented by a set, the resulting constraints can be captured in BAPA. BAPA formulas with quantifier alternations arise when verifying programs with annotations containing quantifiers, or when proving simulation relation conditions for refinement and equivalence of program fragments. Furthermore, BAPA constraints can be used for proving the termination of programs that manipulate data structures, and have applications in constraint databases. We give a formal description of a decision procedure for BAPA, which implies the decidability of BAPA. We analyze our algorithm and obtain an elementary upper bound on the running time, thereby giving the first complexity bound for BAPA. Because it works by a reduction to PA, our algorithm yields the decidability of a combination of sets of uninterpreted elements with any decidable extension of PA. Our algorithm can also be used to yield an optimal decision procedure for BA through a reduction to PA with bounded quantifiers. We have implemented our algorithm and used it to discharge verification conditions in the Jahob system for data structure consistency checking of Java programs; our experience with the algorithm is promising.
Termination analysis of integer linear loops
In CONCUR, 2005
"... Abstract. Usually, ranking function synthesis and invariant generation over a loop with integer variables involves abstracting the loop to have real variables. Integer division and modulo arithmetic must be soundly abstracted away so that the analysis over the abstracted loop is sound for the origin ..."
Abstract

Cited by 24 (3 self)
 Add to MetaCart
Usually, ranking function synthesis and invariant generation over a loop with integer variables involves abstracting the loop to have real variables. Integer division and modulo arithmetic must be soundly abstracted away so that the analysis over the abstracted loop is sound for the original loop. Consequently, the analysis loses precision. In contrast, we introduce a technique for handling loops over integer variables directly. The resulting analysis is more precise than previous analyses.
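The precision point can be seen on a loop using integer division. For "while x > 0: x = x // 2", f(x) = x decreases by at least 1 per iteration over the integers, whereas over the reals x/2 decreases by no fixed amount near 0, so the abstracted loop admits no such proof. A toy bounded check of the ranking conditions (illustrative, not the paper's analysis, and a sample-based check rather than a proof):

```python
# Check the two linear-ranking conditions on a bounded range of integers:
# f is bounded below on the guard, and decreases by at least 1 per step.
def is_ranking(f, guard, step, lo, hi):
    for x in range(lo, hi):
        if guard(x):
            if f(x) < 0:                 # bounded below on the guard
                return False
            if f(step(x)) > f(x) - 1:    # decrease by at least 1
                return False
    return True

# f(x) = x ranks "while x > 0: x = x // 2" over the integers.
print(is_ranking(lambda x: x, lambda x: x > 0, lambda x: x // 2, -100, 100))  # -> True
# The identity step has no decrease, so the same f fails.
print(is_ranking(lambda x: x, lambda x: x > 0, lambda x: x, -100, 100))       # -> False
```

The "decrease by at least 1" condition is exactly what is available over integers but not over a real-valued abstraction of the same loop.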