Results 1 – 6 of 6
Learning Stochastic Logic Programs
, 2000
Abstract

Cited by 1057 (71 self)
Stochastic Logic Programs (SLPs) have been shown to be a generalisation of Hidden Markov Models (HMMs), stochastic context-free grammars, and directed Bayes' nets. A stochastic logic program consists of a set of labelled clauses p:C where p is in the interval [0,1] and C is a first-order range-restricted definite clause. This paper summarises the syntax, distributional semantics and proof techniques for SLPs and then discusses how a standard Inductive Logic Programming (ILP) system, Progol, has been modified to support learning of SLPs. The resulting system 1) finds an SLP with uniform probability labels on each definition and near-maximal Bayes posterior probability and then 2) alters the probability labels to further increase the posterior probability. Stage 1) is implemented within CProgol4.5, which differs from previous versions of Progol by allowing user-defined evaluation functions written in Prolog. It is shown that maximising the Bayesian posterior function involves finding SLPs with short derivations of the examples. Search pruning with the Bayesian evaluation function is carried out in the same way as in previous versions of CProgol. The system is demonstrated with worked examples involving the learning of probability distributions over sequences as well as the learning of simple forms of uncertain knowledge.
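The labelled clauses p:C described in this abstract can be illustrated with a minimal sampler. The sketch below is not from the paper: the `SLP` dictionary and `sample` function are invented, and the program is restricted to a stochastic-grammar-like fragment (clause bodies are sequences of terminals and predicates), which is enough to show how clause labels induce a distribution over derivations.

```python
import random

# Hypothetical mini-SLP: each predicate maps to a list of (probability, body)
# pairs, echoing the labelled clauses p:C of the abstract.  Body items are
# either terminal symbols or predicates to expand.
SLP = {
    "s": [(0.5, ["a", "s"]),   # 0.5 : s -> a s
          (0.5, ["b"])],       # 0.5 : s -> b
}

def sample(pred, rng=random):
    """Sample one derivation of `pred`, choosing among its clauses with
    probability proportional to their labels."""
    r, acc = rng.random(), 0.0
    for p, body in SLP[pred]:
        acc += p
        if r < acc:
            chosen = body
            break
    else:
        chosen = SLP[pred][-1][1]  # guard against floating-point rounding
    out = []
    for item in chosen:
        # Recurse on predicates, emit terminals.
        out.extend(sample(item, rng) if item in SLP else [item])
    return out

print("".join(sample("s")))  # a geometrically distributed run of a's ending in b
```

With these uniform labels the sampler generates "b" half the time, "ab" a quarter of the time, and so on, matching the paper's setting of probability distributions over sequences.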
Parameter Estimation in Stochastic Logic Programs
 Machine Learning
, 2000
Abstract

Cited by 69 (4 self)
Stochastic logic programs (SLPs) are logic programs with labelled clauses which define a log-linear distribution over refutations of goals. The log-linear distribution provides, by marginalisation, a distribution over variable bindings, allowing SLPs to compactly represent quite complex distributions. We analyse the fundamental statistical properties of SLPs, addressing issues concerning infinite derivations, 'unnormalised' SLPs and impure SLPs. After detailing existing approaches to parameter estimation for log-linear models and their application to SLPs, we present a new algorithm called failure-adjusted maximisation (FAM). FAM is an instance of the EM algorithm that applies specifically to normalised SLPs and provides a closed form for computing parameter updates within an iterative maximisation approach. We empirically show that FAM works on some small examples and discuss methods for applying it to bigger problems. © 2000 Kluwer Academic Publishers.
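The closed-form parameter update mentioned in the abstract can be hinted at with a toy M-step. This sketch is far simpler than FAM itself: `m_step` merely renormalises expected clause-usage counts within each predicate's definition, and the clause names are invented. Real FAM additionally corrects these expected counts for the probability mass lost to failed derivations, which is the "failure-adjusted" part omitted here.

```python
# Toy EM-style M-step in the spirit of (but much simpler than) FAM.
def m_step(expected_counts):
    """expected_counts: {predicate: {clause_id: expected usage count}}
    computed in the E-step from derivations of the observed goals.
    Returns new probability labels, normalised per predicate."""
    labels = {}
    for pred, counts in expected_counts.items():
        total = sum(counts.values())
        labels[pred] = {clause: n / total for clause, n in counts.items()}
    return labels

# If clause "s -> a s" was used (in expectation) three times as often as
# "s -> b", the new labels split 0.75 / 0.25.
print(m_step({"s": {"s->a s": 3.0, "s->b": 1.0}}))
# {'s': {'s->a s': 0.75, 's->b': 0.25}}
```

Iterating E- and M-steps of this kind is the overall shape of the algorithm; the failure adjustment changes only how the expected counts are computed, not the renormalisation above.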
ILP: Just Do It
, 2000
Abstract

Cited by 13 (1 self)
Inductive logic programming (ILP) is built on a foundation laid by research in other areas of computational logic. But in spite of this strong foundation, at 10 years of age ILP now faces a number of new challenges brought on by exciting application opportunities. The purpose of this paper is to interest researchers from other areas of computational logic in contributing their special skill sets to help ILP meet these challenges. The paper presents five future research directions for ILP and points to initial approaches or results where they exist. It is hoped that the paper will motivate researchers from throughout computational logic to invest some time into "doing" ILP.
Learning Log-Linear Models on Constraint-Based Grammars for Disambiguation
 In James Cussens and Saso Dzeroski, editors, Learning Language in Logic, volume 1925 of LNCS
Abstract

Cited by 1 (0 self)
We discuss the probabilistic modeling of constraint-based grammars by log-linear distributions and present a novel technique for statistical inference of the parameters and properties of such models from unannotated training data. We report on an experiment with a log-linear grammar model which employs sophisticated linguistically motivated features of parses as properties of the probability model. We report the results of statistical parameter estimation and empirical evaluation of this model on a small scale. These show that log-linear models on the parses of constraint-based grammars are useful for accurate disambiguation.
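The log-linear form underlying both this abstract and the SLP papers above can be made concrete in a few lines. The sketch below is illustrative only: the feature function and weight are invented, and a real grammar model would have many features over parse structures rather than two string labels. It computes p(parse) = exp(Σᵢ wᵢ·fᵢ(parse)) / Z, with Z summing over the finite set of candidate parses.

```python
import math

def loglinear_probs(parses, features, weights):
    """Normalised log-linear distribution over a finite candidate set."""
    scores = [math.exp(sum(w * f(p) for w, f in zip(weights, features)))
              for p in parses]
    z = sum(scores)  # partition function over the candidates
    return [s / z for s in scores]

# Two hypothetical parses distinguished by a single attachment feature.
parses = ["low-attach", "high-attach"]
features = [lambda p: 1.0 if p == "low-attach" else 0.0]
probs = loglinear_probs(parses, features, [1.0])
print(probs)  # low attachment preferred: roughly [0.731, 0.269]
```

Disambiguation then amounts to picking the candidate parse with the highest probability; parameter estimation adjusts the weights so that this choice matches the training evidence.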
Statistical Aspects of Logic-Based Machine Learning
Abstract

Cited by 1 (0 self)
This paper surveys a growing body of literature concerned with probabilistic and statistical aspects of ILP. A Bayesian statistical framework is provided, within which probabilities are used to compare the degrees of belief of competing hypotheses. This approach has been used in the general learning framework of U-learnability. The paper surveys results of applying Bayesian analysis to the problem of learning from positive examples as well as that of learning in the presence of incomplete background knowledge and noise. More recently, probabilities have been used directly within the representation of first-order hypotheses. A survey of a number of such probabilistic representations is provided, together with initial results on approaches to machine learning of such representations.
Topics for ILP Research
Abstract
Inductive logic programming (ILP) is built on a foundation laid by research in machine learning and computational logic. Armed with this strong foundation, ILP has been applied to important and interesting problems in the life sciences, engineering and the arts. In turn, the applications have brought into focus the need for more research into specific topics. We enumerate five of these: (1) novel search methods; (2) incorporation of explicit probabilities; (3) incorporation of special-purpose reasoners; (4) parallel execution using commodity components; and (5) enhanced human interaction. It is our hypothesis that progress in each of these areas can greatly improve the contributions that can be made with ILP; and that, with assistance from research workers in other areas, significant progress in each of these areas is possible.