Results 1–10 of 14
Learning Stochastic Logic Programs
, 2000
Cited by 1057 (71 self)
Stochastic Logic Programs (SLPs) have been shown to be a generalisation of Hidden Markov Models (HMMs), stochastic context-free grammars, and directed Bayes' nets. A stochastic logic program consists of a set of labelled clauses p:C where p is in the interval [0,1] and C is a first-order range-restricted definite clause. This paper summarises the syntax, distributional semantics and proof techniques for SLPs and then discusses how a standard Inductive Logic Programming (ILP) system, Progol, has been modified to support learning of SLPs. The resulting system 1) finds an SLP with uniform probability labels on each definition and near-maximal Bayes posterior probability and then 2) alters the probability labels to further increase the posterior probability. Stage 1) is implemented within CProgol4.5, which differs from previous versions of Progol by allowing user-defined evaluation functions written in Prolog. It is shown that maximising the Bayesian posterior function involves finding SLPs with short derivations of the examples. Search pruning with the Bayesian evaluation function is carried out in the same way as in previous versions of CProgol. The system is demonstrated with worked examples involving the learning of probability distributions over sequences as well as the learning of simple forms of uncertain knowledge.
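The p:C notation above can be made concrete with a toy example. The sketch below samples sequences from a three-clause SLP over the alphabet {a, b}; it is a minimal illustration of the labelled-clause idea only, not Muggleton's CProgol implementation, and the clause bodies are reduced to Python closures rather than full range-restricted definite clauses.

```python
import random

# Toy SLP for the predicate seq/1, one probability label per clause:
#   0.3 : seq([])              (stop)
#   0.4 : seq([a|T]) :- seq(T)
#   0.3 : seq([b|T]) :- seq(T)
CLAUSES = [
    (0.3, lambda: []),                    # base case: empty sequence
    (0.4, lambda: ['a'] + sample_seq()),  # prepend 'a', then recurse
    (0.3, lambda: ['b'] + sample_seq()),  # prepend 'b', then recurse
]

def sample_seq():
    """Sample one sequence: choose a clause with probability equal to its label."""
    r, acc = random.random(), 0.0
    for p, body in CLAUSES:
        acc += p
        if r < acc:
            return body()
    return []

def derivation_prob(seq):
    """Probability of the (here unique) derivation of a given sequence."""
    p = 1.0
    for ch in seq:
        p *= 0.4 if ch == 'a' else 0.3
    return p * 0.3  # every derivation ends with the stop clause

random.seed(0)
print(sample_seq())
print(derivation_prob(['a', 'b']))  # 0.4 * 0.3 * 0.3 = 0.036
```

Because each sequence has exactly one derivation in this program, the clause labels directly induce the distribution over sequences, which is the "distribution over sequences" setting the abstract mentions.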
Nonmonotonic Learning
 Inductive Logic Programming
, 1992
Cited by 58 (11 self)
This paper addresses methods of specialising first-order theories within the context of incremental learning systems. We demonstrate the shortcomings of existing first-order incremental learning systems with regard to their specialisation mechanisms. We prove that these shortcomings are fundamental to the use of classical logic. In particular, minimal "correcting" specialisations are not always obtainable within this framework. We propose instead the adoption of a specialisation scheme based on an existing nonmonotonic logic formalism. This approach overcomes the problems that arise with incremental learning systems which employ classical logic. As a side-effect of the formal proofs developed for this paper we define a function called "deriv" which turns out to be an improvement on an existing explanation-based generalisation (EBG) algorithm. Prolog code and a description of the relationship between "deriv" and the previous EBG algorithm are described in an appendix. 1 Introduction ...
Inverting Implication
 Artificial Intelligence Journal
, 1992
Cited by 26 (2 self)
All generalisations within logic involve inverting implication. Yet, ever since Plotkin's work in the early 1970s, methods of generalising first-order clauses have involved inverting the clausal subsumption relationship. However, even Plotkin realised that this approach was incomplete. Since inversion of subsumption is central to many Inductive Logic Programming approaches, this form of incompleteness has been propagated to techniques such as Inverse Resolution and Relative Least General Generalisation. A more complete approach to inverting implication has been attempted with some success recently by Lapointe and Matwin. In the present paper the author derives general solutions to this problem from first principles. It is shown that clausal subsumption is only incomplete for self-recursive clauses. Avoiding this incompleteness involves algorithms which find "nth roots" of clauses. Completeness and correctness results are proved for a nondeterministic algorithm which constructs nth ro...
Learning Logical Exceptions In Chess
, 1994
Cited by 17 (2 self)
This thesis is about inductive learning, or learning from examples. The goal has been to investigate ways of improving learning algorithms. The chess endgame "King and Rook against King" (KRK) was chosen, and a number of benchmark learning tasks were defined within this domain, sufficient to over-challenge state-of-the-art learning algorithms. The tasks comprised learning rules to distinguish (1) illegal positions and (2) legal positions won optimally in a fixed number of moves. From our experimental results with task (1) the best-performing algorithm was selected and a number of improvements were made. The principal extension to this generalisation method was to alter its representation from classical logic to a nonmonotonic formalism. A novel algorithm was developed in this framework to implement rule specialisation, relying on the invention of new predicates. When experimentally tested this combined approach did not at first deliver the expected performance gains due to restrictio...
A Strategy for Constructing New Predicates in First Order Logic
 In Proceedings of the Third European Working Session on Learning
, 1988
Cited by 16 (6 self)
There is increasing interest within the Machine Learning community in systems which automatically reformulate their problem representation by defining and constructing new predicates. A previous paper discussed such a system, called CIGOL, and gave a derivation for the mechanism of inverting individual steps in first order resolution proofs. In this paper we describe an enhancement to CIGOL's learning strategy which strongly constrains the formation of new concepts and hypotheses. The new strategy is based on results from algorithmic information theory. Using these results it is possible to compute the probability that the simplifications produced by adopting new concepts or hypotheses are not based on chance regularities within the examples. This can be derived from the amount of information compression produced by replacing the examples with the hypothesised concepts. CIGOL's improved performance, based on an approximation of this strategy, is demonstrated by way of the automatic "di...
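The compression argument in this abstract can be sketched numerically: a new concept is worth adopting when encoding it, plus whatever examples it leaves unexplained, costs fewer bits than encoding the raw examples. The encoding below (each symbol charged log2 of an assumed alphabet size) is our own toy stand-in for illustration, not the paper's algorithmic-information measure or CIGOL's actual scoring function.

```python
import math

def bits(symbols, alphabet_size):
    """Crude code length: each symbol costs log2(alphabet_size) bits."""
    return len(symbols) * math.log2(alphabet_size)

examples = ['p(a)', 'p(b)', 'p(c)', 'p(d)']  # raw ground examples
hypothesis = ['p(X)']                        # proposed generalisation
residual = []                                # examples it fails to explain

before = bits(examples, 16)                  # cost of the raw examples
after = bits(hypothesis, 16) + bits(residual, 16)
compression = before - after
print(compression)  # 12.0 bits saved
```

The more bits the hypothesis saves, the less plausible it is that the regularity it captures arose by chance, which is the intuition behind constraining predicate invention by compression.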
PAL: a pattern-based first-order inductive system
 Machine Learning
, 1997
Cited by 11 (2 self)
It has been argued that much of human intelligence can be viewed as the process of matching stored patterns. In particular, it is believed that chess masters use pattern-based knowledge to analyze a position, followed by a pattern-based controlled search to verify or correct the analysis. In this paper, a first-order system, called PAL, that can learn patterns in the form of Horn clauses from simple example descriptions and general purpose knowledge is described. The learning model is based on (i) a constrained least general generalization algorithm to structure the hypothesis space and guide the learning process, and (ii) a pattern-based representation of knowledge to constrain the construction of hypotheses. It is shown how PAL can learn chess patterns which are beyond the learning capabilities of current inductive systems. The same pattern-based approach is used to learn qualitative models of simple dynamic systems and counterpoint rules for two-voice musical pieces. Limitat...
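The least general generalization (lgg) operation that PAL's constrained algorithm builds on can be sketched for first-order terms. In this hedged illustration, terms are Python tuples ('f', arg1, ...) with strings for constants; the representation and variable-naming scheme are our own choices, not PAL's.

```python
def lgg(s, t, subst=None):
    """Anti-unify two terms; each incompatible pair of subterms maps to a shared variable."""
    if subst is None:
        subst = {}
    if s == t:
        return s
    # Same functor and arity: recurse over the arguments.
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(lgg(a, b, subst) for a, b in zip(s[1:], t[1:]))
    # Otherwise introduce (or reuse) a variable for this pair of subterms.
    if (s, t) not in subst:
        subst[(s, t)] = f"V{len(subst)}"
    return subst[(s, t)]

# lgg(f(a, g(a)), f(b, g(b))) = f(V0, g(V0)): the recurring pair (a, b)
# is generalised by the same variable both times, preserving structure.
print(lgg(('f', 'a', ('g', 'a')), ('f', 'b', ('g', 'b'))))
```

Reusing one variable per distinct pair of subterms is what makes the result *least* general: a fresh variable at each mismatch would also generalise both terms, but would discard the shared structure.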
Multiple Predicate Learning in Two Inductive Logic Programming Settings
, 1996
Cited by 10 (1 self)
Inductive logic programming (ILP) is a research area which has its roots in inductive machine learning and computational logic. The paper gives an introduction to this area based on a distinction between two different semantics used in inductive logic programming, and illustrates their application in knowledge discovery and programming. Whereas most research in inductive logic programming has focussed on learning single predicates from given datasets using the normal ILP semantics (e.g. the well known ILP systems GOLEM and FOIL), the paper also investigates the nonmonotonic ILP semantics and learning problems involving multiple predicates. The nonmonotonic ILP setting avoids the order dependency problem of the normal setting when learning multiple predicates, extends the representation of the induced hypotheses to full clausal logic, and can be applied to different types of application. Keywords: inductive logic programming, induction, logic programming, machine learning 1 Intro...
A Comparative Study Of Structural Most Specific Generalizations Used In Machine Learning
 In Proc. Third International Workshop on Inductive Logic Programming
, 1997
Cited by 6 (1 self)
In this paper we compare the two main lines of research in learning most specific generalizations (MSGs) in a unifying framework. By reducing them to each other we show that even in some simple subset of first-order logic, the MSG grows exponentially in the number of examples. We then review two polynomial approaches, learning most specific Horn clauses without existential variables and learning most specific ij-determinate Horn clauses, within this framework. We also show that ij-determinate Horn clauses are a maximal subset of Horn logic which is polynomially learnable, as the relaxation from ij-determinate Horn clauses to determinate Horn clauses leads to exponentially longer MSGs. Keywords: Inductive Logic Programming, Most Specific Generalization, Least General Generalization. 1 Introduction Recently the limited expressiveness of attribute-based descriptions has led to an increased interest in learning from logical descriptions which, however, leads to an increased complexit...
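The growth the abstract describes is easy to see at the clause level: Plotkin's lgg of two clauses keeps one literal for every compatible pair of literals, so its length can be the product of the input lengths, and repeating this over more examples compounds the blow-up. The tuple representation and helper names below are our own, purely for illustration; the term generaliser is deliberately flat since the demo uses only constant arguments.

```python
def lgg_term(s, t, subst):
    """Generalise two constant arguments, reusing one variable per distinct pair."""
    if s == t:
        return s
    if (s, t) not in subst:
        subst[(s, t)] = f"V{len(subst)}"
    return subst[(s, t)]

def lgg_clause(c1, c2):
    """lgg of two clauses, each given as a list of atoms (pred, arg1, ...)."""
    subst, out = {}, []
    for a in c1:
        for b in c2:
            if a[0] == b[0] and len(a) == len(b):  # compatible predicate/arity
                out.append((a[0],) + tuple(
                    lgg_term(x, y, subst) for x, y in zip(a[1:], b[1:])))
    return out

c1 = [('p', 'a'), ('p', 'b')]
c2 = [('p', 'c'), ('p', 'd')]
print(len(lgg_clause(c1, c2)))  # 4: two 2-literal clauses yield 2 x 2 literals
```

Generalising the 4-literal result against a third 2-literal clause can yield up to 8 literals, and so on: the worst-case MSG size is exponential in the number of examples, which is why the polynomial restrictions reviewed in the paper matter.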
Specifications of the HAIKU system
, 1994
Cited by 3 (3 self)
Interest in adaptable Machine Learning systems grows as the number of concept learning applications increases. We present here a generic algorithm expressed in terms of elementary learning operations and biases that control these operations. By shifting the selection of learning operations and biases, one gets different Machine Learning systems in terms of representation language, complexity and learning results. This report develops the work of [Mitchell, 1982] and [De Raedt and Bruynooghe, 1992] by eliciting learning strategies in Generate-and-Test systems. The generic Generate-and-Test algorithm we present has been implemented in a system called HAIKU, as a framework to study and compare the effects of the choice of biases and learning operators on the characteristics of the learning process and the learning results. Keywords: Symbolic Machine Learning, Inductive Logic Programming, Declarative Bias, Parameterisation of ML systems. 1 Introduction 1.1 Motivations Eliciting the ...
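The generic scheme described here, candidates produced by learning operators, filtered by a declarative bias, then tested against the examples, can be sketched as a small search loop. All names and the toy threshold-learning instance below are our own illustrative choices, not HAIKU's interface.

```python
def generate_and_test(seed, operators, bias, test, max_rounds=10):
    """Generic generate-and-test: operators generate, bias filters, test accepts."""
    frontier = [seed]
    for _ in range(max_rounds):
        candidates = [op(h) for h in frontier for op in operators]
        candidates = [h for h in candidates if bias(h)]  # declarative bias
        for h in candidates:
            if test(h):                                  # consistent with examples
                return h
        frontier = candidates or frontier
    return None

# Toy instance: find an integer threshold separating two classes.
positives, negatives = [5, 7, 9], [1, 2, 3]
ops = [lambda t: t + 1, lambda t: t - 1]                 # refinement operators
ok = lambda t: 0 <= t <= 10                              # bias: stay in range
covers = lambda t: (all(p > t for p in positives)
                    and all(n <= t for n in negatives))
print(generate_and_test(5, ops, ok, covers))  # -> 4
```

Swapping in different `operators`, `bias`, and `test` arguments yields different learners from the same loop, which is the parameterisation idea the report develops.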
Learning Musical Rules
, 1995
Cited by 3 (0 self)
Traditional musical analysis attempts to understand and explain the choices made by a composer in a particular piece. History in composition and analysis has shown that composers using the same patterns in structure and harmony get different results depending on the way these patterns are resolved. In particular, musical analysis can be described by a sequence of states and transitions between states representing the personal criteria that each composer follows when solving a musical structure. A system that could be trained with the preference criteria used by a composer for transitions between states could be used to analyze his/her work and provide suggestions for his/her compositions. A first-order learning system, called PAL, is used to learn transition criteria for counterpoint analysis, in the form of Horn clauses, from pairs of musical states (given as sets of notes) and general purpose musical knowledge. It is shown how the rules learned by PAL can be used for musical analy...