Results 1–10 of 102
Controlled Integrations of the Cut Rule into Connection Tableau Calculi
"... In this paper techniques are developed and compared which increase the inferential power of tableau systems for classical firstorder logic. The mechanisms are formulated in the framework of connection tableaux, which is an amalgamation of the connection method and the tableau calculus, and a genera ..."
Abstract

Cited by 62 (3 self)
 Add to MetaCart
In this paper techniques are developed and compared which increase the inferential power of tableau systems for classical first-order logic. The mechanisms are formulated in the framework of connection tableaux, which is an amalgamation of the connection method and the tableau calculus, and a generalization of model elimination. Since connection tableau calculi are among the weakest proof systems with respect to proof compactness, and the (backward) cut rule is not suitable for the first-order case, we study alternative methods for shortening proofs. The techniques we investigate are the folding-up and folding-down operations. Folding up represents an efficient way of supporting the basic calculus, which is top-down oriented, with lemmata derived in a bottom-up manner. It is shown that both techniques can also be viewed as controlled integrations of the cut rule. In order to remedy the additional redundancy imported into tableau proof procedures by the new inference rules, we develop and apply an extension of the regularity condition on tableaux and the mechanism of anti-lemmata, which realizes a subsumption concept on tableaux. Using the framework of the theorem prover SETHEO, we have implemented three new proof procedures which overcome the deductive weakness of cut-free tableau systems. Experimental results demonstrate the superiority of the systems with folding up over the cut-free variant and the one with folding down.
Caching and Lemmaizing in Model Elimination Theorem Provers
, 1992
"... Theorem provers based on model elimination have exhibited extremely high inference rates but have lacked a redundancy control mechanism such as subsumption. In this paper we report on work done to modify a model elimination theorem prover using two techniques, caching and lemmaizing, that have reduc ..."
Abstract

Cited by 49 (2 self)
 Add to MetaCart
Theorem provers based on model elimination have exhibited extremely high inference rates but have lacked a redundancy control mechanism such as subsumption. In this paper we report on work done to modify a model elimination theorem prover using two techniques, caching and lemmaizing, that have reduced by more than an order of magnitude the time required to find proofs of several problems and that have enabled the prover to prove theorems previously unobtainable by top-down model elimination theorem provers.
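The caching idea described in this abstract can be illustrated with a small sketch: a top-down Horn-clause prover that memoizes subgoals it has already proved, so a repeated subgoal is answered from the cache instead of being re-derived. The clause representation, the loop check, and the call counter below are illustrative assumptions, not the actual prover's implementation.

```python
# Minimal sketch of success-caching (lemmaizing) in a top-down
# propositional Horn-clause prover.  Clause syntax and cache policy
# are illustrative assumptions.

def make_prover(clauses):
    """clauses: dict mapping a goal to a list of bodies (lists of subgoals)."""
    cache = {}              # goal -> True once proved (a reusable "lemma")
    calls = {"count": 0}    # counts prove() invocations, to show the saving

    def prove(goal, ancestors=frozenset()):
        calls["count"] += 1
        if goal in cache:                   # reuse a previously proved subgoal
            return True
        if goal in ancestors:               # loop check (regularity-style pruning)
            return False
        for body in clauses.get(goal, []):
            if all(prove(g, ancestors | {goal}) for g in body):
                cache[goal] = True          # store the lemma for later reuse
                return True
        return False

    return prove, calls

# Usage: proving 'p' needs 'q' twice; the second occurrence hits the cache,
# so the subproof of 'q' via 'r' is done only once.
clauses = {"p": [["q", "q"]], "q": [["r"]], "r": [[]]}
prove, calls = make_prover(clauses)
```

Without the cache the subproof of `q` (and hence `r`) would be repeated for every occurrence, which is exactly the redundancy the abstract's techniques attack.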
Theory completion using Inverse Entailment
, 2000
"... The main realworld applications of Inductive Logic Programming (ILP) to date involve the "Observation Predicate Learning" (OPL) assumption, in which both the examples and hypotheses define the same predicate. However, in both scientific discovery and language learning potential applicatio ..."
Abstract

Cited by 48 (22 self)
 Add to MetaCart
The main real-world applications of Inductive Logic Programming (ILP) to date involve the "Observation Predicate Learning" (OPL) assumption, in which both the examples and hypotheses define the same predicate. However, in both scientific discovery and language learning potential applications exist in which OPL does not hold. OPL is ingrained within the theory and performance testing of Machine Learning. A general ILP technique called "Theory Completion using Inverse Entailment" (TCIE) is introduced which is applicable to non-OPL applications. TCIE is based on inverse entailment and is closely allied to abductive inference. The implementation of TCIE within Progol5.0 is described. The implementation uses contrapositives in a similar way to Stickel's Prolog Technology Theorem Prover. Progol5.0 is tested on two different datasets. The first dataset involves a grammar which translates numbers to their representation in English. The second dataset involves hypothesising the fu...
A Prolog-like Inference System for Computing Minimum-Cost Abductive Explanations in Natural-Language Interpretation
, 1988
"... By determining what added assumptions would suffice to make the logical form of a sentence in natural language provable, abductive inference can be used in the interpretation of sentences to determine what information should be added to the listener's knowledge, i.e., what he should learn from ..."
Abstract

Cited by 48 (1 self)
 Add to MetaCart
By determining what added assumptions would suffice to make the logical form of a sentence in natural language provable, abductive inference can be used in the interpretation of sentences to determine what information should be added to the listener's knowledge, i.e., what he should learn from the sentence. This is a comparatively new application of mechanized abduction. A new form of abduction, least specific abduction, is proposed as being more appropriate to the task of interpreting natural language than the forms that have been used in the traditional diagnostic and design-synthesis applications of abduction. The assignment of numerical costs to axioms and assumable literals permits specification of preferences on different abductive explanations. A new Prolog-like inference system that computes abductive explanations and their costs is given. To facilitate the computation of minimum-cost explanations, the inference system, unlike others such as Prolog, is designed to avoid the repeated use of the same instance of an axiom or assumption.
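The cost mechanism this abstract describes can be sketched in a few lines: a goal is explained either by assuming it at a stated cost or by deriving it from a rule whose body must itself be explained, and the search returns the cheapest total. The rule syntax, the knowledge base, and the recursion here are illustrative assumptions and, unlike the paper's system, this toy does not share repeated instances of an assumption.

```python
# Hedged sketch of minimum-cost abductive explanation: a goal may be
# proved from Horn rules or assumed outright at a numeric cost, and we
# search for the cheapest explanation.  All names are illustrative.

INF = float("inf")

def min_cost(goal, rules, assumable, seen=frozenset()):
    """Cheapest cost of explaining `goal`.
    rules:     dict goal -> list of bodies (lists of subgoals)
    assumable: dict literal -> cost of assuming that literal
    """
    if goal in seen:                       # block circular derivations
        return INF
    best = assumable.get(goal, INF)        # option 1: assume the goal
    for body in rules.get(goal, []):       # option 2: derive it from a rule
        cost = sum(min_cost(g, rules, assumable, seen | {goal})
                   for g in body)
        best = min(best, cost)
    return best

# Usage: the observation "wet" has two candidate explanations with
# different assumption costs; the cheaper one wins.
rules = {"wet": [["rain"], ["sprinkler"]]}
assumable = {"rain": 4.0, "sprinkler": 1.5}
```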
Approaches to Abductive Reasoning: An Overview
 ARTIFICIAL INTELLIGENCE REVIEW
, 1993
"... Abduction is a form of nonmonotonic reasoning that has gained increasing interest in the last few years. The key idea behind it can be represented by the following inference rule
$$O = \mathop C\limits_  N = \mathop P\limits_^  O  \mathop C\limits_^  .$$
i.e., from an occurrence of ohgr an ..."
Abstract

Cited by 42 (1 self)
 Add to MetaCart
Abduction is a form of nonmonotonic reasoning that has gained increasing interest in the last few years. The key idea behind it can be represented by the following inference rule
$$\frac{\omega \qquad \varphi \rightarrow \omega}{\varphi}$$
i.e., from an occurrence of ω and the rule "φ implies ω", infer an occurrence of φ as a plausible hypothesis or explanation for ω. Thus, in contrast to deduction, abduction is, like induction, a form of "defeasible" inference, i.e., the formulae sanctioned are plausible and submitted to verification.
In this paper, a formal description of current approaches is given. The underlying reasoning process is treated independently and divided into two parts. This includes a description of methods for hypotheses generation and methods for finding the best explanations among a set of possible ones. Furthermore, the complexity of the abductive task is surveyed in connection with its relationship to default reasoning. We conclude with the presentation of applications of the discussed approaches focusing on plan recognition and plan generation.
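The inference rule above has a direct reading as hypothesis generation: given an observation ω and a set of rules "φ implies ω", each such φ is a candidate explanation. A toy sketch, with illustrative rule names:

```python
# Toy illustration of the abductive inference rule: from an observation
# and rules (premise, conclusion), propose each premise whose conclusion
# matches the observation.  Rule names are illustrative assumptions.

def abduce(observation, rules):
    """rules: list of (premise, conclusion) pairs read as
    'premise implies conclusion'.  Returns candidate explanations."""
    return [phi for (phi, omega) in rules if omega == observation]

rules = [("rain", "wet_grass"),
         ("sprinkler", "wet_grass"),
         ("sun", "dry_grass")]
hypotheses = abduce("wet_grass", rules)   # both rules for wet_grass fire
```

Selecting the best explanation among such candidates is exactly the second part of the abductive task the survey goes on to treat.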
PROTEIN: A PROver with a Theory Extension Interface
 AUTOMATED DEDUCTION - CADE-12, VOLUME 814 OF LNAI
, 1994
"... PROTEIN (PROver with a Theory Extension INterface) is a PTTPbased first order theorem prover over builtin theories. Besides various standardrefinements known for model elimination, PROTEIN also offers a variant of model elimination for casebased reasoning and which does not need contrapositives. ..."
Abstract

Cited by 40 (10 self)
 Add to MetaCart
PROTEIN (PROver with a Theory Extension INterface) is a PTTP-based first-order theorem prover over built-in theories. Besides various standard refinements known for model elimination, PROTEIN also offers a variant of model elimination for case-based reasoning which does not need contrapositives.
Heuristic Search
, 2011
"... Heuristic search is used to efficiently solve the singlenode shortest path problem in weighted graphs. In practice, however, one is not only interested in finding a short path, but an optimal path, according to a certain cost notion. We propose an algebraic formalism that captures many cost notions ..."
Abstract

Cited by 40 (22 self)
 Add to MetaCart
Heuristic search is used to efficiently solve the single-node shortest path problem in weighted graphs. In practice, however, one is not only interested in finding a short path, but an optimal path, according to a certain cost notion. We propose an algebraic formalism that captures many cost notions, like typical Quality of Service attributes. We thus generalize A*, the popular heuristic search algorithm, for solving the optimal-path problem. The paper provides an answer to a fundamental question for AI search, namely to which general notions of cost heuristic search algorithms can be applied. We prove correctness of the algorithms and provide experimental results that validate the feasibility of the approach.
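The generalization this abstract proposes can be sketched by parameterizing A* over a cost algebra (an identity element, a combine operation, and an order) instead of hard-coding addition. The graph, the trivial heuristic, and the two algebra instances below are illustrative assumptions, not the paper's formalism.

```python
# Hedged sketch: A* parameterized by an algebraic cost structure, so the
# same search covers additive path length and bottleneck-style costs.
import heapq

def algebraic_astar(graph, start, goal, h, combine, identity):
    """graph: dict node -> list of (neighbor, edge_cost).
    h(n): admissible heuristic expressed in the same cost algebra."""
    frontier = [(combine(identity, h(start)), identity, start)]
    best = {start: identity}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for nbr, w in graph.get(node, []):
            g2 = combine(g, w)
            if nbr not in best or g2 < best[nbr]:
                best[nbr] = g2
                heapq.heappush(frontier, (combine(g2, h(nbr)), g2, nbr))
    return None                            # goal unreachable

graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
zero_h = lambda n: 0                       # trivial (admissible) heuristic

# Additive algebra: classic shortest path, a->b->c with cost 2 + 1.
shortest = algebraic_astar(graph, "a", "c", zero_h, lambda x, y: x + y, 0)
# Bottleneck algebra: minimize the largest edge on the path.
widest = algebraic_astar(graph, "a", "c", zero_h, max, 0)
```

With the trivial heuristic this degenerates to a Dijkstra-style search; a nontrivial heuristic must be admissible with respect to the chosen combine operation and order.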
Inductive Logic Programming: derivations, successes and shortcomings
 SIGART Bulletin
, 1993
"... Inductive Logic Programming (ILP) is a research area which investigates the construction of firstorder definite clause theories from examples and background knowledge. ILP systems have been applied successfully in a number of realworld domains. These include the learning of structureactivity rules ..."
Abstract

Cited by 31 (3 self)
 Add to MetaCart
Inductive Logic Programming (ILP) is a research area which investigates the construction of first-order definite clause theories from examples and background knowledge. ILP systems have been applied successfully in a number of real-world domains. These include the learning of structure-activity rules for drug design, finite-element mesh design rules, rules for primary-secondary prediction of protein structure and fault diagnosis rules for satellites. There is a well established tradition of learning-in-the-limit results in ILP. Recently some results within Valiant's PAC-learning framework have also been demonstrated for ILP systems. In this paper it is argued that algorithms can be directly derived from the formal specifications of ILP. This provides a common basis for Inverse Resolution, Explanation-Based Learning, Abduction and Relative Least General Generalisation. A new general-purpose, efficient approach to predicate invention is demonstrated. ILP is underconstrained by its logical ...
Compiling A Default Reasoning System into Prolog
 New Generation Computing
, 1990
"... Artificial intelligence researchers have been designing representation systems for default and abductive reasoning. Logic Programming researchers have been working on techniques to improve the efficiency of Horn Clause deduction systems. This paper describes how one such default and abductive reason ..."
Abstract

Cited by 30 (4 self)
 Add to MetaCart
Artificial intelligence researchers have been designing representation systems for default and abductive reasoning. Logic programming researchers have been working on techniques to improve the efficiency of Horn clause deduction systems. This paper describes how one such default and abductive reasoning system (namely Theorist) can be translated into Horn clauses (with negation as failure), so that we can use the clarity of abductive reasoning systems and the efficiency of Horn clause deduction systems. We thus show how advances in expressive power that artificial intelligence workers are working on can directly utilise advances in efficiency that logic programming researchers are working on. Actual code from a running system is given.

1 Introduction

Many people in Artificial Intelligence have been working on default reasoning and abductive diagnosis systems [35, 20, 4, 29]. The systems implemented so far (e.g., [1, 16, 12, 34, 32]) are only prototypes or have been developed in A Theo...