Results 1–10 of 17
Learning Stochastic Logic Programs
, 2000
"... Stochastic Logic Programs (SLPs) have been shown to be a generalisation of Hidden Markov Models (HMMs), stochastic contextfree grammars, and directed Bayes' nets. A stochastic logic program consists of a set of labelled clauses p:C where p is in the interval [0,1] and C is a firstorder range ..."
Abstract

Cited by 1057 (71 self)
Stochastic Logic Programs (SLPs) have been shown to be a generalisation of Hidden Markov Models (HMMs), stochastic context-free grammars, and directed Bayes' nets. A stochastic logic program consists of a set of labelled clauses p:C where p is in the interval [0,1] and C is a first-order range-restricted definite clause. This paper summarises the syntax, distributional semantics and proof techniques for SLPs and then discusses how a standard Inductive Logic Programming (ILP) system, Progol, has been modified to support learning of SLPs. The resulting system 1) finds an SLP with uniform probability labels on each definition and near-maximal Bayes posterior probability and then 2) alters the probability labels to further increase the posterior probability. Stage 1) is implemented within CProgol4.5, which differs from previous versions of Progol by allowing user-defined evaluation functions written in Prolog. It is shown that maximising the Bayesian posterior function involves finding SLPs with short derivations of the examples. Search pruning with the Bayesian evaluation function is carried out in the same way as in previous versions of CProgol. The system is demonstrated with worked examples involving the learning of probability distributions over sequences as well as the learning of simple forms of uncertain knowledge.
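The labelled-clause form p:C described in the abstract can be illustrated with a toy example. The sketch below (our own illustration, not taken from the paper) writes a tiny SLP for the predicate s/1 as probability-weighted clause choices and samples sequences from the distribution it defines, mimicking a stochastic SLD derivation:

```python
import random

# A toy stochastic logic program over sequences. Each clause for s/1
# carries a probability label, and the labels for one predicate sum to 1:
#   0.4 : s([a|T]) :- s(T).
#   0.4 : s([b|T]) :- s(T).
#   0.2 : s([]).
# Here we flatten each clause to (label, emitted symbol or None for stop).
CLAUSES = [
    (0.4, "a"),   # emit 'a', then recurse
    (0.4, "b"),   # emit 'b', then recurse
    (0.2, None),  # terminate with the empty sequence
]

def sample_sequence():
    """Sample one sequence by repeatedly choosing a clause in
    proportion to its probability label (a stochastic derivation)."""
    seq = []
    while True:
        r, acc = random.random(), 0.0
        for p, sym in CLAUSES:
            acc += p
            if r < acc:
                break
        if sym is None:
            return seq
        seq.append(sym)

random.seed(0)
print(sample_sequence())
```

Because the labels on each definition sum to one, the program defines a proper distribution over derivations, which is the property the paper's learning stages preserve when adjusting the labels.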
Inverse entailment and Progol
, 1995
"... This paper firstly provides a reappraisal of the development of techniques for inverting deduction, secondly introduces ModeDirected Inverse Entailment (MDIE) as a generalisation and enhancement of previous approaches and thirdly describes an implementation of MDIE in the Progol system. Progol ..."
Abstract

Cited by 631 (59 self)
This paper firstly provides a reappraisal of the development of techniques for inverting deduction, secondly introduces Mode-Directed Inverse Entailment (MDIE) as a generalisation and enhancement of previous approaches, and thirdly describes an implementation of MDIE in the Progol system. Progol is implemented in C and available by anonymous ftp. The reassessment of previous techniques in terms of inverse entailment leads to new results for learning from positive data and inverting implication between pairs of clauses.
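As a reading aid (a standard presentation of the idea, not a quotation from the paper; the bottom-clause notation ⊥(B,E) is assumed here), the identity behind inverse entailment can be sketched as follows:

```latex
% If background knowledge B together with hypothesis H must entail
% example E, contraposition of  B \wedge H \models E  gives
B \wedge \neg E \models \neg H .
% Let \neg\bot(B,E) denote the conjunction of ground literals true in
% every model of B \wedge \neg E.  Since \neg H is itself such a
% conjunction of literals, \neg\bot(B,E) \models \neg H, and
% contraposing once more:
H \models \bot(B,E) .
% Candidate hypotheses are therefore exactly the clauses that entail
% (in Progol's search, \theta-subsume) the bottom clause \bot(B,E).
```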
Controlling the Complexity of Learning in Logic through Syntactic and Task-Oriented Models
 INDUCTIVE LOGIC PROGRAMMING
, 1992
"... Due to the inadequacy of attributeonly representations for many learning problems, there is now a renewed interest in algorithms employing firstorder logic or restricted variants thereof as their knowledge representation. In this paper, we give a brief overview of the dimensions along which the ..."
Abstract

Cited by 95 (7 self)
Due to the inadequacy of attribute-only representations for many learning problems, there is now a renewed interest in algorithms employing first-order logic or restricted variants thereof as their knowledge representation. In this paper, we give a brief overview of the dimensions along which the complexity of learning in such representations can be controlled. We then present RDT, a model-based learning algorithm for function-free Horn clauses with negation that introduces two new means of complexity control, namely the use of syntactic rule models, and the use of a task-oriented domain topology. We briefly describe some preliminary application results of RDT within the knowledge acquisition system MOBAL, and present directions of further research.
Learning Semantic Grammars with Constructive Inductive Logic Programming
 In Proceedings of the Eleventh National Conference on Artificial Intelligence
, 1993
"... Automating the construction of semantic grammars is a difficult and interesting problem for machine learning. This paper shows how the semanticgrammar acquisition problem can be viewed as the learning of searchcontrol heuristics in a logic program. Appropriate control rules are learned using a new ..."
Abstract

Cited by 72 (14 self)
Automating the construction of semantic grammars is a difficult and interesting problem for machine learning. This paper shows how the semantic-grammar acquisition problem can be viewed as the learning of search-control heuristics in a logic program. Appropriate control rules are learned using a new first-order induction algorithm that automatically invents useful syntactic and semantic categories. Empirical results show that the learned parsers generalize well to novel sentences and outperform previous approaches based on connectionist techniques.

Introduction

Designing computer systems to "understand" natural language input is a difficult task. The laboriously hand-crafted computational grammars supporting natural language applications are often inefficient, incomplete and ambiguous. The difficulty in constructing adequate grammars is an example of the "knowledge acquisition bottleneck" which has motivated much research in machine learning. While numerous researchers have studied ...
Inverting Implication
 Artificial Intelligence Journal
, 1992
"... All generalisations within logic involve inverting implication. Yet, ever since Plotkin's work in the early 1970's methods of generalising firstorder clauses have involved inverting the clausal subsumption relationship. However, even Plotkin realised that this approach was incomplete. Since inversi ..."
Abstract

Cited by 26 (2 self)
All generalisations within logic involve inverting implication. Yet, ever since Plotkin's work in the early 1970s, methods of generalising first-order clauses have involved inverting the clausal subsumption relationship. However, even Plotkin realised that this approach was incomplete. Since inversion of subsumption is central to many Inductive Logic Programming approaches, this form of incompleteness has been propagated to techniques such as Inverse Resolution and Relative Least General Generalisation. A more complete approach to inverting implication has been attempted with some success recently by Lapointe and Matwin. In the present paper the author derives general solutions to this problem from first principles. It is shown that clausal subsumption is only incomplete for self-recursive clauses. Avoiding this incompleteness involves algorithms which find "nth roots" of clauses. Completeness and correctness results are proved for a nondeterministic algorithm which constructs nth ro...
Inductive Logic Programming for Natural Language Processing
 IN MUGGLETON, S. (ED.), INDUCTIVE LOGIC PROGRAMMING: SELECTED PAPERS FROM THE 6TH INTERNATIONAL WORKSHOP
, 1997
"... This paper reviews our recent work on applying inductive logic programming to the construction of natural language processing systems. We have developed a system, Chill, that learns a parser from a training corpus of parsed sentences by inducing heuristics that control an initial overlygenera ..."
Abstract

Cited by 23 (1 self)
This paper reviews our recent work on applying inductive logic programming to the construction of natural language processing systems. We have developed a system, Chill, that learns a parser from a training corpus of parsed sentences by inducing heuristics that control an initial overly general shift-reduce parser. Chill learns syntactic parsers as well as ones that translate English database queries directly into executable logical form. The ATIS corpus of airline information queries was used to test the acquisition of syntactic parsers, and Chill performed competitively with recent statistical methods. English queries to a small database on U.S. geography were used to test the acquisition of a complete natural language interface, and the parser that Chill acquired was more accurate than an existing hand-coded system. The paper also includes a discussion of several issues this work has raised regarding the capabilities and testing of ILP systems as well as a summary of our current research directions.
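The control structure the abstract describes can be made concrete with a minimal sketch (our illustration, not Chill's implementation): a shift-reduce loop in which a pluggable predicate plays the role of the induced control heuristic. The `prefer_reduce` function here is a hypothetical hand-written stand-in for Chill's learned rules:

```python
def parse(tokens, prefer_reduce):
    """Shift-reduce skeleton: `prefer_reduce(stack, buffer)` stands in
    for the learned control heuristic that decides the next action."""
    stack, buffer = [], list(tokens)
    while buffer or len(stack) > 1:
        if len(stack) >= 2 and prefer_reduce(stack, buffer):
            right, left = stack.pop(), stack.pop()
            stack.append((left, right))   # reduce: combine top two items
        elif buffer:
            stack.append(buffer.pop(0))   # shift: consume the next token
        else:
            break                         # no action applicable; stop
    return stack

# A trivial stand-in heuristic: reduce whenever two items are available.
tree = parse(["the", "cat", "sat"], lambda s, b: True)
print(tree)
```

Overly general parsing corresponds to a `prefer_reduce` that permits every applicable action; inducing heuristics then amounts to learning when each action should fire.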
Learning Logical Exceptions In Chess
, 1994
"... This thesis is about inductive learning, or learning from examples. The goal has been to investigate ways of improving learning algorithms. The chess endgame "King and Rook against King" (KRK) was chosen, and a number of benchmark learning tasks were defined within this domain, sufficient to overc ..."
Abstract

Cited by 17 (2 self)
This thesis is about inductive learning, or learning from examples. The goal has been to investigate ways of improving learning algorithms. The chess endgame "King and Rook against King" (KRK) was chosen, and a number of benchmark learning tasks were defined within this domain, sufficient to over-challenge state-of-the-art learning algorithms. The tasks comprised learning rules to distinguish (1) illegal positions and (2) legal positions won optimally in a fixed number of moves. From our experimental results with task (1) the best-performing algorithm was selected and a number of improvements were made. The principal extension to this generalisation method was to alter its representation from classical logic to a nonmonotonic formalism. A novel algorithm was developed in this framework to implement rule specialisation, relying on the invention of new predicates. When experimentally tested, this combined approach did not at first deliver the expected performance gains due to restrictio...
Towards a model of grounded concept formation
 In Proc. 12th International Joint Conference on Artificial Intelligence
, 1991
"... In most research on concept formation within machine learning and cognitive psychology, the features from which concepts are built are assumed to be provided as elementary vocabulary. In this paper, we argue that this is an unnecessarily limited paradigm within which to examine concept formation. Ba ..."
Abstract

Cited by 10 (0 self)
In most research on concept formation within machine learning and cognitive psychology, the features from which concepts are built are assumed to be provided as elementary vocabulary. In this paper, we argue that this is an unnecessarily limited paradigm within which to examine concept formation. Based on evidence from psychology and machine learning, we contend that a principled account of the origin of features can only be given with a grounded model of concept formation, i.e., with a model that incorporates direct access to the world via sensors and manipulators. We discuss the domain of process control as a suitable framework for research into such models, and present a first approach to the problem of developing elementary vocabularies from perceptual sensor data.
Multiple Predicate Learning in Two Inductive Logic Programming Settings
, 1996
"... Inductive logic programming (ILP) is a research area which has its roots in inductive machine learning and computational logic. The paper gives an introduction to this area based on a distinction between two different semantics used in inductive logic programming, and illustrates their application i ..."
Abstract

Cited by 10 (1 self)
Inductive logic programming (ILP) is a research area which has its roots in inductive machine learning and computational logic. The paper gives an introduction to this area based on a distinction between two different semantics used in inductive logic programming, and illustrates their application in knowledge discovery and programming. Whereas most research in inductive logic programming has focussed on learning single predicates from given datasets using the normal ILP semantics (e.g. the well known ILP systems GOLEM and FOIL), the paper also investigates the nonmonotonic ILP semantics and learning problems involving multiple predicates. The nonmonotonic ILP setting avoids the order dependency problem of the normal setting when learning multiple predicates, extends the representation of the induced hypotheses to full clausal logic, and can be applied to different types of application.

Keywords: inductive logic programming, induction, logic programming, machine learning
Relating Relational Learning Algorithms
 Inductive Logic Programming
, 1992
"... Relational learning algorithms are of special interest to members of the machine learning community; they offer practical methods for extending the representations used in algorithms that solve supervised learning tasks. Five approaches are currently being explored to address issues involved with us ..."
Abstract

Cited by 7 (0 self)
Relational learning algorithms are of special interest to members of the machine learning community; they offer practical methods for extending the representations used in algorithms that solve supervised learning tasks. Five approaches are currently being explored to address issues involved with using relational representations. This paper surveys algorithms embodying these approaches, summarizes their empirical evaluations, highlights their commonalities, and suggests potential directions for future research.

Keywords: supervised learning, representation, relational learning

Introduction

Relational learning algorithms extend the capabilities of propositional or monadic supervised learning algorithms. Supervised learning algorithms input a set of instances, which are described by a set of predictor descriptors and a target descriptor. These algorithms construct a function (i.e., a concept description) that can predict an instance's target descriptor value given its predictor desc...