Results 1–10 of 60
Learning concepts by asking questions
1986
Cited by 99 (6 self)
Abstract:
Two important issues in machine learning are explored: the role that memory plays in acquiring new concepts, and the extent to which the learner can take an active part in acquiring these concepts. This chapter describes a program, called Marvin, which uses concepts it has learned previously to learn new concepts. The program forms hypotheses about the concept being learned and tests the hypotheses by asking the trainer questions. Learning begins when the trainer shows Marvin an example of the concept to be learned. The program determines which objects in the example belong to concepts stored in the memory. A description of the new concept is formed by using the information obtained from the memory to generalize the description of the training example. The generalized description is tested when the program constructs new examples and shows these to the trainer, asking if they belong to the target concept.
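The query-driven loop the abstract describes (start from the training example, generalize it using concepts already in memory, and test each generalization by constructing new instances and asking the trainer) can be sketched as follows. The set-valued concept representation, the `MEMORY` contents and the `trainer_says_yes` oracle are illustrative assumptions, not Marvin's actual internals:

```python
import itertools

# Hidden target concept, playing the trainer's role: "even digit paired with a vowel".
EVENS = {0, 2, 4}
VOWELS = {"a", "e", "i"}

def trainer_says_yes(obj):
    """Membership oracle: does obj belong to the target concept?"""
    x, y = obj
    return x in EVENS and y in VOWELS

# Previously learned concepts stored in memory (illustrative).
MEMORY = {
    "evens": EVENS,
    "vowels": VOWELS,
    "digits": {0, 1, 2, 3, 4},
    "letters": {"a", "b", "e", "i"},
}

def learn(example):
    # Most specific hypothesis: each attribute fixed to its observed value.
    hypothesis = [frozenset({v}) for v in example]
    for i, value in enumerate(example):
        # Candidate generalizations: stored concepts containing the value,
        # most general first.
        for concept in sorted((c for c in MEMORY.values() if value in c),
                              key=len, reverse=True):
            trial = list(hypothesis)
            trial[i] = frozenset(concept)
            # Construct new instances covered by the trial but not by the
            # current hypothesis, and show them to the trainer.
            probes = [obj for obj in itertools.product(*trial)
                      if not all(v in h for v, h in zip(obj, hypothesis))]
            if probes and all(trainer_says_yes(p) for p in probes):
                hypothesis = trial   # trainer accepted every probe
                break
    return hypothesis

print(learn((2, "e")))   # generalizes to evens x vowels
```

Starting from the single example `(2, "e")`, the sketch widens each attribute only as far as the trainer's answers permit: "digits" and "letters" are rejected because they cover counterexamples, while "evens" and "vowels" are accepted.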
Distance Between Herbrand Interpretations: a measure for approximations to a target concept
1997
Cited by 38 (0 self)
Abstract:
We can use a metric to measure the differences between elements in a domain or between subsets of that domain (i.e. concepts). Which particular metric should be chosen depends on the kind of difference we want to measure. The well-known Euclidean metric on R^n and its generalizations are often used for this purpose, but such metrics are not always suitable for concepts whose elements have some structure different from real numbers. For example, in (Inductive) Logic Programming a concept is often expressed as an Herbrand interpretation of some first-order language. Every element in an Herbrand interpretation is a ground atom, which has a tree structure. We start by defining a metric d on the set of expressions (ground atoms and ground terms), motivated by the structure and complexity of the expressions and the symbols used therein. This metric induces the Hausdorff metric h on the set of all sets of ground atoms, which allows us to measure the distance between Herbrand interpretations...
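A small sketch of the construction the abstract outlines: a tree-structured distance on ground expressions, and the Hausdorff distance it induces between finite Herbrand interpretations. The particular recursion used here (weighting argument differences by 1/(2n)) is one common choice in the literature; the paper's own definition of d may differ in detail:

```python
def d(s, t):
    """Distance between ground expressions, represented as nested tuples:
    ('f', arg1, ..., argn); a constant is a 1-tuple like ('a',)."""
    if s == t:
        return 0.0
    if s[0] != t[0] or len(s) != len(t):
        return 1.0                      # different head symbol or arity
    n = len(s) - 1
    # Same functor: recurse into arguments, damping by 1/(2n) so deeper
    # differences count for less.
    return sum(d(a, b) for a, b in zip(s[1:], t[1:])) / (2 * n)

def hausdorff(A, B):
    """Hausdorff distance between two finite sets of ground atoms."""
    if not A or not B:
        return 0.0 if A == B else 1.0
    forward = max(min(d(a, b) for b in B) for a in A)
    backward = max(min(d(b, a) for a in A) for b in B)
    return max(forward, backward)

p_a = ("p", ("a",))
p_b = ("p", ("b",))
print(d(p_a, p_b))                     # 0.5: same predicate, differing argument
print(hausdorff({p_a}, {p_a, p_b}))    # 0.5
```

Note how the damping makes atoms sharing a predicate symbol closer (distance 0.5) than atoms with different predicates (distance 1.0), which is the kind of structural sensitivity the Euclidean metric cannot express here.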
Inductive Synthesis of Recursive Logic Programs
1997
Cited by 34 (8 self)
Abstract:
The inductive synthesis of recursive logic programs from incomplete information, such as input/output examples, is a challenging subfield both of ILP (Inductive Logic Programming) and of the synthesis (in general) of logic programs from formal specifications. We first overview past and present achievements, focusing on the techniques that were designed specifically for the inductive synthesis of recursive logic programs, but also discussing a few general ILP techniques that can also induce nonrecursive hypotheses. Then we analyse the prospects of these techniques in this task, investigating their applicability to software engineering as well as to knowledge acquisition and discovery.
A Formal Definition of Intelligence Based on an Intensional Variant of Algorithmic Complexity
In Proceedings of the International Symposium of Engineering of Intelligent Systems (EIS'98), 1998
Cited by 30 (17 self)
Abstract:
Due to the current technology of the computers we can use, we have chosen an extremely abridged emulation of the machine that will effectively run the programs, instead of more proper languages, like λ-calculus (or LISP). We have adapted the "toy RISC" machine of [Hernández & Hernández 1993], with two remarkable features inherited from its object-oriented coding in C++: it is easily tunable for our needs, and it is efficient. We have made it even more reduced, removing any operand from the instruction set, even for the loop operations. We have only three registers: AX (the accumulator), BX and CX. The operations Q_b we have used for our experiment are in Table 1; for example, LOOPTOP decrements CX and, if it is not equal to the first element, jumps to the program top.
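A minimal sketch of an operand-free three-register machine of this flavor. Only the registers AX, BX, CX and the decrement-and-jump behaviour of LOOPTOP come from the text; the concrete instruction names and the reading of LOOPTOP's jump condition as "CX nonzero" are assumptions for illustration:

```python
def run(program, ax=0, bx=0, cx=0, max_steps=1000):
    """Execute a list of operand-free instructions; return final registers."""
    reg = {"AX": ax, "BX": bx, "CX": cx}
    pc = steps = 0
    while pc < len(program) and steps < max_steps:
        op = program[pc]
        steps += 1
        if op == "INCAX":
            reg["AX"] += 1                  # accumulator increment
        elif op == "SWAPAB":
            reg["AX"], reg["BX"] = reg["BX"], reg["AX"]
        elif op == "LOOPTOP":
            reg["CX"] -= 1                  # decrement the loop counter
            if reg["CX"] != 0:              # assumed condition: CX nonzero
                pc = 0                      # jump to the program top
                continue
        pc += 1
    return reg

# AX counts the loop iterations: with CX = 5 the body runs five times.
print(run(["INCAX", "LOOPTOP"], cx=5))
```

Because instructions carry no operands, all data flow goes through the three fixed registers, which keeps the instruction set small enough for exhaustive program enumeration.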
On Tests for Hypothetical Reasoning
In Readings in Model-Based Diagnosis, 1992
Cited by 24 (4 self)
Abstract:
Suppose that HYP is a set of hypotheses which we currently entertain about some state of affairs represented by a propositional sentence Σ. In a diagnostic setting, HYP might consist of all the diagnoses of some device whose description is given by Σ, although our analysis is not restricted to diagnosis. Our concern is with tests: how they can be designed, and what conclusions can be drawn about the hypotheses in HYP as a result of performing tests. Specifically, we define the concept of a test and the concept of the outcome of a test. We characterize those tests whose outcomes refute or confirm a hypothesis, and discriminate between competing hypotheses. These characterizations are in terms of the prime implicates of Σ, and hence are implementable using assumption-based truth maintenance systems. In addition, we characterize the impact of a test outcome on consistency-based and abductive hypothesis spaces. Finally, we provide a characterization of differential diagnosis...
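The refute/confirm idea can be illustrated propositionally: a test outcome o refutes a hypothesis h when Σ ∧ o ∧ h is unsatisfiable. The brute-force satisfiability check below is purely illustrative (the paper works via prime implicates and an ATMS instead), and the toy device model is an assumption:

```python
from itertools import product

def satisfiable(clauses, atoms):
    """Brute-force SAT over a clause set; each clause is a list of
    (atom, polarity) literals."""
    for vals in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, vals))
        if all(any(env[a] if pos else not env[a] for a, pos in cl)
               for cl in clauses):
            return True
    return False

def refutes(sigma, outcome, hyp, atoms):
    """Outcome refutes hyp iff Sigma AND outcome AND hyp is inconsistent."""
    return not satisfiable(sigma + outcome + hyp, atoms)

# Toy device: ok -> light. Observing no light refutes the hypothesis "ok".
atoms = ["ok", "light"]
sigma = [[("ok", False), ("light", True)]]     # clause: not ok OR light
outcome = [[("light", False)]]                 # observed: the light is off
hyp_ok = [[("ok", True)]]
print(refutes(sigma, outcome, hyp_ok, atoms))  # True
```

The competing hypothesis "not ok" survives the same outcome, which is exactly the discrimination between hypotheses that the abstract is after.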
Finding Minimal Generalizations for Unions of Pattern Languages and Its Application to Inductive Inference from Positive Data.
In Proc. 11th STACS, LNCS 775, 1994
Cited by 23 (12 self)
Abstract:
A pattern is a string of constant symbols and variables. The language defined by a pattern p is the set of constant strings obtained from p by substituting nonempty constant strings for the variables in p. In this paper we are concerned with polynomial-time inference from positive data of the class of unions of a bounded number of pattern languages. We introduce a syntactic notion of minimal multiple generalizations (mmg for short) to study the inferability of classes of unions. If a pattern p is obtained from another pattern q by substituting nonempty patterns for variables in q, then q is said to be more general than p. A set of patterns defines the union of their languages. A set Q of patterns is said to be more general than a set P of patterns if for any pattern p in P there exists a pattern q in Q more general than p. Clearly, a more general set of patterns defines a larger union. A k-minimal multiple generalization (k-mmg) of a set S of strings is a minimally general set of at most k patterns...
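Pattern-language membership as defined above (nonempty constant strings substituted for variables, with repeated variables bound to the same string) has a direct encoding via regular expressions with backreferences. The token convention — variables are names starting with "x", everything else is a constant — is an assumption of this sketch:

```python
import re

def pattern_to_regex(pattern):
    """pattern: list of tokens; names starting with 'x' are variables,
    all other tokens are constant symbols (illustrative convention)."""
    parts, seen = [], set()
    for tok in pattern:
        if tok.startswith("x"):
            if tok in seen:
                parts.append(f"(?P={tok})")      # repeated variable: backreference
            else:
                seen.add(tok)
                parts.append(f"(?P<{tok}>.+)")   # first occurrence: nonempty match
        else:
            parts.append(re.escape(tok))
    return "^" + "".join(parts) + "$"

p = ["a", "x1", "b", "x1"]           # the pattern a x1 b x1
rx = re.compile(pattern_to_regex(p))
print(bool(rx.match("acbc")))        # True:  x1 -> "c"
print(bool(rx.match("acbd")))        # False: both occurrences of x1 must agree
```

The `.+` (rather than `.*`) enforces the nonempty-substitution requirement that distinguishes this pattern-language class from the erasing variant.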
Learning Function-Free Horn Expressions
1998
Cited by 19 (1 self)
Abstract:
The problem of learning universally quantified, function-free first-order Horn expressions is studied. Several models of learning from equivalence and membership queries are considered, including the model where interpretations are examples (Learning from Interpretations), the model where clauses are examples (Learning from Entailment), models where extensional or intensional background knowledge is given to the learner (as done in Inductive Logic Programming), and the model where the reasoning performance of the learner, rather than identification, is of interest (Learning to Reason). We present learning algorithms for all these tasks for the class of universally quantified function-free Horn expressions. The algorithms are polynomial in the number of predicate symbols in the language and the number of clauses in the target Horn expression, but exponential in the arity of predicates and the number of universally quantified variables. We also provide lower bounds for these tasks by way of ...
Learning Logical Exceptions In Chess
1994
Cited by 17 (2 self)
Abstract:
This thesis is about inductive learning, or learning from examples. The goal has been to investigate ways of improving learning algorithms. The chess endgame "King and Rook against King" (KRK) was chosen, and a number of benchmark learning tasks were defined within this domain, sufficient to over-challenge state-of-the-art learning algorithms. The tasks comprised learning rules to distinguish (1) illegal positions and (2) legal positions won optimally in a fixed number of moves. From our experimental results with task (1), the best-performing algorithm was selected and a number of improvements were made. The principal extension to this generalisation method was to alter its representation from classical logic to a nonmonotonic formalism. A novel algorithm was developed in this framework to implement rule specialisation, relying on the invention of new predicates. When experimentally tested, this combined approach did not at first deliver the expected performance gains, due to restrictions...
Inductive Learning from Good Examples
In Proceedings of IJCAI-91, 1991
Cited by 15 (0 self)
Abstract:
We study what kind of data may ease the computational complexity of learning Horn clause theories (in Gold's paradigm) and Boolean functions (in the PAC-learning paradigm). We give several definitions of good data (basic and generative representative sets), and develop data-driven algorithms that learn faster from good examples and degenerate to learning in the limit from the "worst" possible examples. We show that Horn clause theories, k-term DNF and general DNF Boolean functions are polynomially learnable from generative representative presentations.

1. Introduction

In any inductive learning model, how data of the target theory are supplied to the learning program is a crucial assumption. Identification in the limit [Gold, 1967] assumes that the series of examples is an admissible enumeration of all (positive and/or negative) examples of the target concept, and requires the learning algorithm to produce a correct hypothesis in some finite time. However, the computational time and th...
A Classification of Abduction: Abduction for Logic Programming
In Proceedings of the Fourteenth International Machine Learning Workshop (ML14), 1995
Cited by 13 (4 self)
Abstract:
Abduction is a methodology of scientific research. Peirce identified three types of abduction and expressed them by one syllogism. Recently, various lines of research on abduction and abductive logic have been developed in the fields of automated reasoning and machine learning. In order to understand such research systematically and to discuss abduction clearly, this paper classifies abduction into five types. This new classification is based on an interpretation of the syllogism in abduction and on the definitions of hypotheses. We examine the various approaches to abduction developed so far and show that many of them can be placed in our classification. Furthermore, we discuss the most essential type of abduction in our classification for logic programming and default logic, and describe Prolog programs for this abduction.

1. Introduction

Charles Sanders Peirce, who was a philosopher, scientist and logician, asserted that scientific research consists of three stages: abduction, deduction...
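The basic abductive step — Peirce's syllogism run backwards, from an observed fact to assumptions that would explain it — can be sketched over propositional Horn rules. The rule base, the set of abducibles and the tiny forward-chaining reasoner below are illustrative; the paper itself describes Prolog programs for this:

```python
# Horn rules as (head, body) pairs, and the designated abducible atoms.
RULES = [("wet", ["rained"]), ("wet", ["sprinkler"]), ("slippery", ["wet"])]
ABDUCIBLES = {"rained", "sprinkler"}

def derives(assumptions, goal):
    """Forward-chain the rules from the assumptions to a fixpoint."""
    known = set(assumptions)
    changed = True
    while changed:
        changed = False
        for head, body in RULES:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return goal in known

def abduce(observation):
    """All single abducible hypotheses that explain the observation."""
    return sorted(h for h in ABDUCIBLES if derives({h}, observation))

print(abduce("slippery"))   # ['rained', 'sprinkler']
```

Note the characteristic feature of abduction visible here: the observation does not determine a unique explanation — both hypotheses account for it, and choosing between them needs further criteria.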