Results 1–10 of 23
Caching and Lemmaizing in Model Elimination Theorem Provers
1992
Cited by 49 (2 self)
Abstract:
Theorem provers based on model elimination have exhibited extremely high inference rates but have lacked a redundancy control mechanism such as subsumption. In this paper we report on work done to modify a model elimination theorem prover using two techniques, caching and lemmaizing, that have reduced by more than an order of magnitude the time required to find proofs of several problems and that have enabled the prover to prove theorems previously unobtainable by top-down model elimination theorem provers.
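The caching idea can be illustrated with a toy sketch: memoize the success or failure of subgoals so they are never re-derived. This is a minimal propositional, Prolog-style rendering with an invented rule set, not the paper's first-order model elimination procedure, which must also handle substitutions and the depth bound with care.

```python
# Toy illustration of subgoal caching, assuming a propositional
# Prolog-style prover (NOT the paper's first-order model elimination).

RULES = {            # head -> list of alternative bodies (Horn clauses)
    "p": [["q", "r"]],
    "q": [["s"], ["t"]],
    "r": [["s"]],
    "s": [[]],       # fact: s holds unconditionally
}

cache = {}           # subgoal -> True (proved) / False (failed)

def prove(goal, depth=10):
    if goal in cache:
        return cache[goal]        # cache hit: skip the whole subproof
    if depth == 0:
        return False              # depth bound hit; result not cached
    result = any(all(prove(g, depth - 1) for g in body)
                 for body in RULES.get(goal, []))
    cache[goal] = result
    return result

print(prove("p"))    # True: p follows via q <- s and r <- s
```

Once `r` is attempted, its subgoal `s` is already cached from proving `q`, so the subproof is skipped; this pruning, combined with lemmaizing, is what the abstract credits with the order-of-magnitude speedups.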
Experiments with Discrimination-Tree Indexing and Path Indexing for Term Retrieval
 Journal of Automated Reasoning
1990
Cited by 44 (0 self)
Abstract:
This article addresses the problem of indexing and retrieving first-order predicate calculus terms in the context of automated deduction programs. The four retrieval operations of concern are to find variants, generalizations, instances, and terms that unify with a given term. Discrimination-tree indexing is reviewed, and several variations are presented. The path-indexing method is also reviewed. Experiments were conducted on large sets of terms to determine how the properties of the terms affect the performance of the two indexing methods. Results of the experiments are presented.
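As a concrete illustration of the first method, here is a minimal sketch of discrimination-tree indexing, assuming terms are nested tuples and variable names start with `?`; all identifiers are invented for the example, and the retrieval shown covers only variant candidates, not the full set of operations discussed in the article.

```python
# Minimal sketch of (imperfect) discrimination-tree indexing. Terms are
# nested tuples like ("f", ("g", "a"), "?x"); variables start with "?".

def preorder(term):
    """Flatten a term to its preorder symbol list; variables become '*'."""
    if isinstance(term, str):
        return ["*"] if term.startswith("?") else [term]
    out = [term[0]]
    for arg in term[1:]:
        out.extend(preorder(arg))
    return out

class DiscTree:
    LEAF = "$terms"             # key holding stored terms at a node

    def __init__(self):
        self.root = {}          # nested dicts keyed by symbol

    def insert(self, term):
        node = self.root
        for sym in preorder(term):
            node = node.setdefault(sym, {})
        node.setdefault(self.LEAF, []).append(term)

    def variants(self, query):
        """Retrieve candidate variants: terms with the same symbol skeleton."""
        node = self.root
        for sym in preorder(query):
            if sym not in node:
                return []
            node = node[sym]
        return node.get(self.LEAF, [])

idx = DiscTree()
idx.insert(("f", "?x", "a"))
idx.insert(("f", "b", "a"))
print(idx.variants(("f", "?y", "a")))   # -> [('f', '?x', 'a')]
```

Because variables are abstracted to a single `*`, the tree is "imperfect": retrieved terms are candidates that still need a final variant check, which is the standard trade-off in this indexing family.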
Focusing the inverse method for linear logic
 Proceedings of CSL 2005
2005
Cited by 38 (11 self)
Abstract:
1.1 Quantification and the subformula property
1.2 Ground forward sequent calculus
1.3 Lifting to free variables
Substitution Tree Indexing
1994
Cited by 31 (1 self)
Abstract:
The performance of a theorem prover crucially depends on the speed of the basic retrieval operations, such as finding terms that are unifiable with (instances of, or more general than) a given query term. In this paper a new indexing method is presented, which outperforms traditional methods such as path indexing, discrimination tree indexing and abstraction trees. Additionally, the new index not only supports term indexing but also provides maintenance and efficient retrieval of substitutions. As confirmed in multiple experiments, substitution trees combine maximal search speed and minimal memory requirements.
Promoting Rewriting to a Programming Language: A Compiler for Non-Deterministic Rewrite Programs in Associative-Commutative Theories
2001
Cited by 30 (6 self)
Abstract:
First-order languages based on rewrite rules share many features with functional languages. But one difference is that matching and rewriting can be made much more expressive and powerful by incorporating some built-in equational theories. To provide reasonable programming environments, compilation techniques for such languages based on rewriting have to be designed. This is the topic addressed in this paper. The proposed techniques are independent of the rewriting language and may be useful for building a compiler for any system using rewriting modulo associative and commutative (AC) theories. An algorithm for many-to-one AC matching is presented that works efficiently for a restricted class of patterns. Other patterns are transformed to fit into this class. A refined data structure, the compact bipartite graph, allows encoding all matching problems relative to a set of rewrite rules. A few optimisations concerning the construction of the substitution and of the reduced term are described. We also address the problem of non-determinism related to AC rewriting and show how to handle it through the concept of strategies. We explain how an analysis of the determinism can be performed at compile time, and we illustrate the benefits of this analysis for the performance of the compiled evaluation process. Then we briefly introduce the ELAN system and its compiler, in order to give some experimental results and comparisons with other languages or rewrite engines.
Adaptive Pattern Matching
1992
Cited by 22 (5 self)
Abstract:
Pattern matching is an important operation used in many applications such as functional programming, rewriting and rule-based expert systems. By preprocessing the patterns into a DFA-like automaton, we can rapidly select the matching pattern(s) in a single scan of the relevant portions of the input term. This automaton is typically based on left-to-right traversal of the patterns. By adapting the traversal order to suit the set of input patterns, it is possible to considerably reduce the space and matching time requirements of the automaton.
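The effect of adapting the traversal order can be sketched as follows. This is a simplification, not the paper's automaton construction: patterns here are flat tuples with `*` marking a variable, and positions are inspected most-discriminating-first so that non-matching subjects are rejected after few tests.

```python
# Hedged sketch of adaptive traversal order (not the paper's DFA-like
# automaton): patterns are fixed-length tuples of symbols, '*' = variable.

def match(subject, patterns):
    """Return the patterns matching subject, testing positions adaptively:
    the column with the fewest variables is inspected first."""
    positions = sorted(range(len(subject)),
                       key=lambda i: sum(p[i] == "*" for p in patterns))
    live = list(patterns)
    for i in positions:
        live = [p for p in live if p[i] in ("*", subject[i])]
        if not live:
            break                # early failure: no pattern can match
    return live

pats = [("f", "a", "*"), ("f", "*", "b"), ("g", "a", "b")]
print(match(("f", "a", "b"), pats))   # both f-patterns survive
```

A compiled automaton would fix this inspection order per pattern subset at preprocessing time rather than recomputing it per subject; the adaptive ordering is what shrinks both the automaton and the expected number of tests.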
WALDMEISTER: Development of a High Performance Completion-Based Theorem Prover
1996
Cited by 14 (0 self)
Abstract:
In this report we give an overview of the development of our new Waldmeister prover for equational theories. We elaborate a systematic stepwise design process, starting with the inference system for unfailing Knuth-Bendix completion and ending up with an implementation which avoids the main diseases today's provers suffer from: overindulgence in time and space. Our design process is based on a logical three-level system model consisting of basic operations for inference step execution, aggregated inference machine, and overall control strategy. Careful analysis of the inference system for unfailing completion has revealed the crucial points responsible for time and space consumption. For the low level of our model, we introduce specialized data structures and algorithms speeding up the running system and cutting it down in size, both by one order of magnitude compared with standard techniques. Flexible control of the mid-level aggregation inside the resulting prover is made po...
Learning Proof Heuristics By Adapting Parameters
 In Proc. of the 12th International Workshop on Machine Learning
1995
Cited by 14 (5 self)
Abstract:
We present a method for learning heuristics employed by an automated prover to control its inference machine. The hub of the method is the adaptation of the parameters of a heuristic. Adaptation is accomplished by a genetic algorithm. The necessary guidance during the learning process is provided by a proof problem and a proof of it found in the past. The objective of learning consists in finding a parameter configuration that avoids redundant effort w.r.t. this problem and the particular proof of it. A heuristic learned (adapted) this way can then be applied profitably when searching for a proof of a similar problem. So, our method can be used to train a proof heuristic for a class of similar problems. A number of experiments (with an automated prover for purely equational logic) show that adapted heuristics are not only able to enormously speed up the search for the proof learned during adaptation. They also reduce redundancies in the search for proofs of similar theorems. This not o...
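The adaptation loop can be caricatured in a few lines, assuming a stand-in fitness function in place of an actual prover run; `cost`, the parameter ranges, and all GA settings below are invented for illustration and are not the paper's configuration.

```python
import random

# Toy genetic-algorithm parameter adaptation. In the paper's setting the
# fitness of a parameter vector would be measured by replaying the search
# for a known proof; here 'cost' is a hypothetical stand-in.

def cost(params):
    x, y = params
    return (x - 0.3) ** 2 + (y - 0.7) ** 2   # pretend optimum at (0.3, 0.7)

def evolve(pop_size=20, gens=50, seed=0):
    rng = random.Random(seed)
    pop = [(rng.random(), rng.random()) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]             # selection: keep best half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # crossover
            if rng.random() < 0.3:                 # occasional mutation
                child = (child[0] + rng.gauss(0, 0.05),
                         child[1] + rng.gauss(0, 0.05))
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
print(best)   # best parameter vector, close to the optimum
```

The adapted parameter vector would then configure the heuristic when the prover tackles similar problems, which is the transfer effect the experiments in the paper measure.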
A compiler for rewrite programs in associative-commutative theories
1998
Cited by 11 (3 self)
Abstract:
We address the problem of term normalisation modulo associative-commutative (AC) theories, and describe several techniques for compiling many-to-one AC matching and reduced term construction. The proposed matching method is based on the construction of compact bipartite graphs, and is designed for working very efficiently on specific classes of AC patterns. We show how to refine this algorithm to work in an eager way. General patterns are handled through a program transformation process. Variable instantiation resulting from the matching phase and construction of the resulting term are also addressed. Our experimental results with the system ELAN provide strong evidence that compilation of many-to-one AC normalisation using the combination of these few techniques is crucial for improving the performance of algebraic programming languages.
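For intuition only, commutative matching (the one-symbol special case of AC matching) can be done by brute force over argument orderings; the paper's many-to-one method replaces this enumeration with compact bipartite graphs and compiled code. The term encoding and names below are invented for the example.

```python
from itertools import permutations

# Naive commutative matching: match the argument list of a pattern against
# the argument list of a subject in any order. Variables start with '?'.
# Real AC matchers avoid this factorial enumeration.

def match_args(pattern_args, subject_args):
    """Yield each distinct substitution matching the pattern arguments
    against some reordering of the subject arguments."""
    if len(pattern_args) != len(subject_args):
        return
    for perm in set(permutations(subject_args)):   # dedupe equal orderings
        subst = {}
        for p, s in zip(pattern_args, perm):
            if p.startswith("?"):
                if subst.setdefault(p, s) != s:
                    break            # variable already bound differently
            elif p != s:
                break                # constant mismatch
        else:
            yield dict(subst)

# match f(?x, a, ?x) against f(a, b, b) modulo commutativity of f:
print(list(match_args(["?x", "a", "?x"], ["a", "b", "b"])))   # -> [{'?x': 'b'}]
```

The bipartite-graph formulation views pattern arguments and subject arguments as the two vertex sets and feasible pairings as edges, so one graph can encode the matching problems of a whole rule set at once.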