Results 1–10 of 27
Integrating decision procedures into heuristic theorem provers: A case study of linear arithmetic
 Machine Intelligence
, 1988
"... We discuss the problem of incorporating into a heuristic theorem prover a decision procedure for a fragment of the logic. An obvious goal when incorporating such a procedure is to reduce the search space explored by the heuristic component of the system, as would be achieved by eliminating from the ..."
Abstract

Cited by 107 (9 self)
 Add to MetaCart
We discuss the problem of incorporating into a heuristic theorem prover a decision procedure for a fragment of the logic. An obvious goal when incorporating such a procedure is to reduce the search space explored by the heuristic component of the system, as would be achieved by eliminating from the system's data base some explicitly stated axioms. For example, if a decision procedure for linear inequalities is added, one would hope to eliminate the explicit consideration of the transitivity axioms. However, the decision procedure must then be used in all the ways the eliminated axioms might have been. The difficulty of achieving this degree of integration depends more on the complexity of the heuristic component than on that of the decision procedure. The view of the decision procedure as a "black box" is frequently destroyed by the need to pass large amounts of strategic search information back and forth between the two components. Finally, the efficiency of the decision procedure may be virtually irrelevant; the efficiency of the final system may depend most heavily on how easy it is to communicate between the two components. This paper is a case study of how we integrated a linear arithmetic procedure into a heuristic theorem prover. By linear arithmetic we mean here the decidable subset of number theory dealing with universally quantified formulas composed of the logical connectives, the identity relation, the Peano "less than" relation, the Peano addition and subtraction functions, Peano constants, ...
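The abstract's transitivity example can be pictured with a toy decision procedure for difference constraints (a minimal sketch of our own, not the prover described above): conjunctions of facts x - y <= c are decided by shortest-path closure, so a chain like a <= b, b <= c, c <= a - 1 is refuted without ever invoking an explicit transitivity axiom.

```python
# Our own toy illustration, not the paper's procedure: decide a
# conjunction of difference constraints x - y <= c by computing the
# all-pairs shortest-path closure (Floyd-Warshall). The conjunction
# is satisfiable iff the constraint graph has no negative cycle.

def satisfiable(constraints, variables):
    """constraints: list of (x, y, c), each meaning x - y <= c."""
    INF = float("inf")
    dist = {u: {v: (0 if u == v else INF) for v in variables}
            for u in variables}
    for x, y, c in constraints:
        dist[x][y] = min(dist[x][y], c)        # edge x -> y, weight c
    for k in variables:
        for i in variables:
            for j in variables:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # a negative self-distance means a negative cycle: unsatisfiable
    return all(dist[v][v] >= 0 for v in variables)

# a <= b, b <= c together with c <= a - 1 chain to a < a: unsatisfiable
print(satisfiable([("a", "b", 0), ("b", "c", 0), ("c", "a", -1)],
                  ["a", "b", "c"]))            # False
print(satisfiable([("a", "b", 0), ("b", "c", 0)],
                  ["a", "b", "c"]))            # True
```

Transitivity lives implicitly in the path-relaxation step, which is exactly the sense in which a decision procedure can replace explicitly stated transitivity axioms.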
Enhancing the Nuprl Proof Development System and Applying it to Computational Abstract Algebra
, 1995
"... This thesis describes substantial enhancements that were made to the software tools in the Nuprl system that are used to interactively guide the production of formal proofs. Over 20,000 lines of code were written for these tools. Also, a corpus of formal mathematics was created that consists of rou ..."
Abstract

Cited by 45 (4 self)
 Add to MetaCart
This thesis describes substantial enhancements that were made to the software tools in the Nuprl system that are used to interactively guide the production of formal proofs. Over 20,000 lines of code were written for these tools. Also, a corpus of formal mathematics was created that consists of roughly 500 definitions and 1300 theorems. Much of this material is of a foundational nature and supports all current work in Nuprl. This thesis concentrates on describing the half of this corpus that is concerned with abstract algebra and that covers topics central to the mathematics of the co...
Beyond Finite Domains
, 1994
"... Introduction A finite domain constraint system can be viewed as an linear integer constraint system in which each variable has an upper and lower bound. Finite domains have been used successfully in Constraint Logic Programming (CLP) languages, for example CHIP [4], to attack combinatorial problems ..."
Abstract

Cited by 37 (3 self)
 Add to MetaCart
A finite domain constraint system can be viewed as a linear integer constraint system in which each variable has an upper and a lower bound. Finite domains have been used successfully in Constraint Logic Programming (CLP) languages, for example CHIP [4], to attack combinatorial problems such as resource allocation, digital circuit verification, etc. In these problems, finite domains allow a natural expression of the problem constraints because bounds on the problem variables are explicit in the problem. In other problems, however, for example in temporal reasoning and some scheduling problems, there may not be natural bounds. For these problems, a standard approach has been to use ad hoc bounds, giving rise to a twofold problem: if a bound is too tight, important solutions may be lost; if a bound is too loose, significant inefficiency may result. This is because the algorithms used in finite domains work by propagating bounds on variables.
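The bound propagation mentioned at the end of the abstract can be sketched for a single constraint x + y = s over interval domains (our own toy example, not CHIP's implementation; detection of empty intervals is omitted for brevity):

```python
# Our own illustration of finite-domain bounds propagation: tighten
# the interval domains of x and y under the constraint x + y == s,
# iterating until a fixpoint is reached.

def propagate_sum(xd, yd, s):
    """xd, yd: (lo, hi) interval domains; constraint: x + y == s."""
    changed = True
    while changed:
        changed = False
        doms = [xd, yd]
        for a, b in ((0, 1), (1, 0)):
            lo = max(doms[a][0], s - doms[b][1])   # a >= s - max(b)
            hi = min(doms[a][1], s - doms[b][0])   # a <= s - min(b)
            if (lo, hi) != doms[a]:
                doms[a] = (lo, hi)
                changed = True
        xd, yd = doms
    return xd, yd

# x in [0,7], y in [5,9], x + y = 10: x tightens to [1,5]
print(propagate_sum((0, 7), (5, 9), 10))   # ((1, 5), (5, 9))
```

This also shows why ad hoc bounds matter in such solvers: propagation only ever shrinks intervals, so it has nothing to work with until every variable has some finite upper and lower bound.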
From Surfaces to Objects: Computer Vision and Three-Dimensional Scene Analysis
, 1989
"... This book was originally published by John Wiley and Sons, ..."
Abstract

Cited by 30 (10 self)
 Add to MetaCart
This book was originally published by John Wiley and Sons,
Going beyond Integer Programming with the Omega Test to Eliminate False Data Dependences
 IEEE Transactions on Parallel and Distributed Systems
, 1992
"... Array data dependence analysis methods currently in use generate false dependences that can prevent useful program transformations. These false dependences arise because the questions asked are conservative approximations to the questions we really should be asking. Unfortunately, the questions we r ..."
Abstract

Cited by 28 (11 self)
 Add to MetaCart
Array data dependence analysis methods currently in use generate false dependences that can prevent useful program transformations. These false dependences arise because the questions asked are conservative approximations to the questions we really should be asking. Unfortunately, the questions we really should be asking go beyond integer programming and require decision procedures for a subclass of Presburger formulas. In this paper, we describe how to extend the Omega test so that it can answer these queries and allow us to eliminate these false data dependences. We have implemented the techniques described here and believe they are suitable for use in production compilers.
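The flavor of a dependence query as an integer feasibility problem can be seen in the classical GCD test, a far simpler test than the Omega test (this is our own illustration of the setting, not the paper's method):

```python
# Our illustration, not the Omega test: the classical GCD test asks
# whether a*i + b*j == c can hold for integers i, j. By Bezout's
# identity it can iff gcd(a, b) divides c; otherwise the two array
# accesses can never touch the same element.

from math import gcd

def gcd_test(a, b, c):
    """True iff a*i + b*j == c has some integer solution (i, j)."""
    return c % gcd(abs(a), abs(b)) == 0

# write a[2i], read a[2j + 1]: a dependence needs 2i - 2j == 1,
# but gcd(2, 2) = 2 does not divide 1, so there is no dependence.
print(gcd_test(2, -2, 1))   # False
print(gcd_test(2, -2, 4))   # True
```

Tests like this answer only the conservative question "can the subscripts ever be equal?"; the queries the paper describes additionally involve loop bounds, direction constraints, and quantifiers, which is what pushes them beyond integer programming into Presburger arithmetic.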
Experiences with Constraint-based Array Dependence Analysis
 In Principles and Practice of Constraint Programming
, 1994
"... Array data dependence analysis provides important information for optimization of scientific programs. Array dependence testing can be viewed as constraint analysis, although traditionally generalpurpose constraint manipulation algorithms have been thought to be too slow for dependence analysis. We ..."
Abstract

Cited by 16 (2 self)
 Add to MetaCart
Array data dependence analysis provides important information for optimization of scientific programs. Array dependence testing can be viewed as constraint analysis, although traditionally general-purpose constraint manipulation algorithms have been thought to be too slow for dependence analysis. We have explored the use of exact constraint analysis, based on Fourier's method, for array data dependence analysis. We have found that these techniques can be used without a great impact on total compile time. Furthermore, the use of general-purpose algorithms has allowed us to address problems beyond traditional dependence analysis. In this paper, we summarize some of the constraint manipulation techniques we use for dependence analysis, and discuss some of the reasons for our performance results.
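Fourier's method, named above, eliminates one variable at a time from a system of linear inequalities. A minimal sketch over the rationals (our illustration, not the authors' implementation) looks like:

```python
# Our own sketch of Fourier-Motzkin elimination, not the paper's code.
# A system is a list of (a, b) pairs meaning sum_k a[k]*x[k] <= b.
# Projecting out variable j pairs every lower bound on x_j with every
# upper bound, scaled so the x_j coefficients cancel exactly.

from fractions import Fraction

def eliminate(system, j):
    """Return an equivalent system over the remaining variables."""
    lowers, uppers, rest = [], [], []
    for a, b in system:
        (uppers if a[j] > 0 else lowers if a[j] < 0 else rest).append((a, b))
    for al, bl in lowers:
        for au, bu in uppers:
            s, t = au[j], -al[j]            # both positive multipliers
            a = [Fraction(s) * x + Fraction(t) * y for x, y in zip(al, au)]
            b = Fraction(s) * bl + Fraction(t) * bu
            rest.append((a, b))             # x_j's coefficient is now 0
    return rest

# One variable: x <= 3 and -x <= -1 (i.e. x >= 1); eliminating x
# leaves the trivially true residue 0 <= 2, so the system is feasible.
print(eliminate([([1], 3), ([-1], -1)], 0))
```

Each elimination step can multiply the number of inequalities (every lower/upper pair produces one), which is the usual reason Fourier's method was presumed too slow for compilers; the paper's point is that on real dependence problems this blow-up rarely materializes.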
A Transformation Method for Dynamic-Sized Tabulation
, 1995
"... Tupling is a transformation tactic to obtain new functions, without redundant calls and/or multiple traversals of common inputs. It achieves this feat by allowing each set (tuple) of function calls to be computed recursively from its previous set. In previous works by Chin and Khoo [8, 9], a safe (t ..."
Abstract

Cited by 9 (3 self)
 Add to MetaCart
Tupling is a transformation tactic for obtaining new functions without redundant calls and/or multiple traversals of common inputs. It achieves this by allowing each set (tuple) of function calls to be computed recursively from its previous set. In previous works by Chin and Khoo [8, 9], a safe (terminating) fold/unfold transformation algorithm was developed for some classes of functions which are guaranteed to be successfully tupled. However, these classes of functions currently use static-sized tables for eliminating the redundant calls. As shown by Richard Bird in [3], there are also other classes of programs whose redundant calls can only be eliminated by using dynamic-sized tabulation. This paper proposes a new solution to dynamic-sized tabulation as an extension of the tupling tactic. Our extension uses lambda abstractions, which can be viewed either as dynamic-sized tables or as applications of the higher-order generalisation technique, to facilitate tupling. Significant speedups could be obtained after the transformed programs were vectorised, as confirmed by experiments.
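The redundant-call elimination that tupling performs can be seen on the textbook Fibonacci example (our own illustration of the general idea, not Chin and Khoo's algorithm): the tuple (fib(n), fib(n-1)) is computed from the previous tuple, so each call is made only once.

```python
# Our own illustration of the tupling idea, not the paper's algorithm.
# The naive definition recomputes overlapping calls exponentially often;
# the tupled version computes each (fib(n), fib(n-1)) pair from the
# previous pair with a single recursive call per level.

def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_pair(n):
    """For n >= 1, returns the tuple (fib(n), fib(n-1))."""
    if n == 1:
        return (1, 0)
    a, b = fib_pair(n - 1)      # (fib(n-1), fib(n-2))
    return (a + b, a)           # next tuple from the previous one

print(fib_pair(10))             # (55, 34)
```

Here a fixed-size pair suffices; the paper's contribution concerns programs where the set of calls that must be carried along grows with the input, so the fixed tuple is replaced by a lambda abstraction acting as a dynamic-sized table.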
Mixing List Recursion and Arithmetic
 Proc. 7th IEEE Symp. on Logic in Computer Science
, 1991
"... We consider in this paper a special class of predicates. Every class predicate, say p, has one argument denoted Y of type list. All the other arguments are integers and form a vector denoted X. The predicates are primitive recursively defined over the structure of Y, and the auxiliary predicates are ..."
Abstract

Cited by 5 (3 self)
 Add to MetaCart
We consider in this paper a special class of predicates. Every class predicate, say p, has one argument, denoted Y, of type list. All the other arguments are integers and form a vector denoted X. The predicates are primitive recursively defined over the structure of Y, and the auxiliary predicates are arithmetic relations over the integer arguments. When two atoms of this class, say p1(Y, X1) and p2(Y, X2), share the same list argument Y, this induces an implicit relation between the integers of X1 and X2. We describe a method for generating, under certain conditions, an arithmetic expression that characterizes this relation. The method is useful for proving or synthesizing inductive assertions about programs with arrays. 1 Introduction. When proving assertions about programs with arrays, one often meets boundary conditions which can be encoded in the following form: (*) p1(Y, X1) ∧ ... ∧ pn(Y, Xn) ⇒ a(X1 ∪ ... ∪ Xn), where Y is a list variable and X1, ..., Xn are vectors of natu...
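A concrete instance of such an induced relation (our own hypothetical example, not the paper's; the predicates are written as functions): if length(Y, n) and positives(Y, c) are both primitive recursive over the same list Y, then sharing Y forces the arithmetic relation 0 <= c <= n, which is the kind of expression the method aims to generate.

```python
# Our own example predicates, written as functions over the list Y.

def length(y):
    """length(Y, n) read as a function: n = length(y)."""
    return 0 if not y else 1 + length(y[1:])

def positives(y):
    """positives(Y, c) read as a function: c = #positive elements."""
    if not y:
        return 0
    return (1 if y[0] > 0 else 0) + positives(y[1:])

# Neither definition mentions the other, yet sharing the same Y
# induces the implicit arithmetic relation 0 <= c <= n.
y = [3, -1, 4, -1, 5]
print(length(y), positives(y))   # 5 3
assert 0 <= positives(y) <= length(y)
```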
A comparison of decision procedures in Presburger arithmetic. Research paper no. 872, Division of Informatics
 University of Novi Sad
, 1997
"... It is part of the tradition and folklore of automated reasoning that the intractability of Cooper's decision procedure for Presburger integer arithmetic makes is too expensive for practical use. More than 25 years of work has resulted in numerous approximate procedures via rational arithmetic, ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
It is part of the tradition and folklore of automated reasoning that the intractability of Cooper's decision procedure for Presburger integer arithmetic makes it too expensive for practical use. More than 25 years of work has resulted in numerous approximate procedures via rational arithmetic, all of which are incomplete and restricted to the quantifier-free fragment. In this paper we report on an experiment which strongly questions this tradition. We measured the performance of procedures due to Hodes and Cooper (and heuristic variants thereof which detect counterexamples) across a corpus of 10,000 randomly generated quantifier-free Presburger formulae. The results are startling: a variant of Cooper's procedure outperforms Hodes' procedure on both valid and invalid formulae, and is fast enough for practical use. These results contradict much perceived wisdom that decision procedures for integer arithmetic are too expensive to use in practice.
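The "heuristic variants which detect counterexamples" can be pictured as a cheap refutation pass run before any full decision procedure (our own toy harness, not the paper's code): evaluate the quantifier-free formula over a small box of integer assignments and report any falsifying one.

```python
# Our own toy harness, not the procedures measured in the paper:
# exhaustively evaluate a quantifier-free formula over a small box
# of integer assignments. A falsifying assignment refutes validity
# immediately, skipping the expensive complete procedure.

import itertools

def find_counterexample(formula, nvars, bound=3):
    """formula: callable on a tuple of ints. Returns a falsifying
    assignment from [-bound, bound]^nvars, or None if none exists."""
    for xs in itertools.product(range(-bound, bound + 1), repeat=nvars):
        if not formula(xs):
            return xs
    return None

# x + y >= x is not valid over the integers (any y < 0 refutes it):
print(find_counterexample(lambda v: v[0] + v[1] >= v[0], 2))  # (-3, -3)

# x*x >= 0 survives the search, so the full procedure is still needed:
print(find_counterexample(lambda v: v[0] * v[0] >= 0, 2))     # None
```

Finding no counterexample proves nothing, of course; the point of such a filter is only that invalid formulae, which dominate random corpora, can often be dispatched without running the complete procedure at all.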
Temporal logic theorem proving and its application to the feature interaction problem
 University of Siena
, 2001
"... Abstract. We describe work in progress on a theorem prover for linear temporal logic (LTL) that will be used to automatically detect feature interactions in telecommunications systems. We build on previous work where we identified a class of LTL formulas used to specify the requirements of features, ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
We describe work in progress on a theorem prover for linear temporal logic (LTL) that will be used to automatically detect feature interactions in telecommunications systems. We build on previous work in which we identified a class of LTL formulas used to specify the requirements of features, and developed a model-checking tool to help find conflicts among feature requirements. The present work will generalize and improve our method in two ways: it will increase the class of conflicts we can find, and it will improve efficiency.