Results 1–10 of 28

Knowledge acquisition via incremental conceptual clustering
Machine Learning, 1987
Cited by 640 (6 self)
Abstract: Conceptual clustering is an important way of summarizing and explaining data. However, the recent formulation of this paradigm has allowed little exploration of conceptual clustering as a means of improving performance. Furthermore, previous work in conceptual clustering has not explicitly dealt with constraints imposed by real-world environments. This article presents COBWEB, a conceptual clustering system that organizes data so as to maximize inference ability. Additionally, COBWEB is incremental and computationally economical, and thus can be flexibly applied in a variety of domains.

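COBWEB's hill-climbing search is guided by a category-utility score that rewards partitions whose clusters make attribute values predictable. The sketch below illustrates only that scoring heuristic, not Fisher's implementation; the instance and cluster representations are invented for the example.

```python
from collections import Counter

def category_utility(partition):
    """Category utility of a partition of nominal-attribute instances.

    partition: list of clusters; each cluster is a list of dicts
               mapping attribute name -> value.
    """
    all_items = [inst for cluster in partition for inst in cluster]
    n = len(all_items)
    # P(A_i = v)^2 summed over the whole data set (baseline predictability)
    base = Counter((a, v) for inst in all_items for a, v in inst.items())
    base_term = sum((c / n) ** 2 for c in base.values())
    score = 0.0
    for cluster in partition:
        p_k = len(cluster) / n
        # P(A_i = v | C_k)^2 summed within the cluster
        within = Counter((a, v) for inst in cluster for a, v in inst.items())
        within_term = sum((c / len(cluster)) ** 2 for c in within.values())
        score += p_k * (within_term - base_term)
    return score / len(partition)

# Two clusters that separate the 'size' values perfectly score higher
# than a single undifferentiated cluster.
small = [{"size": "small", "color": "red"}, {"size": "small", "color": "blue"}]
large = [{"size": "large", "color": "red"}, {"size": "large", "color": "blue"}]
print(category_utility([small, large]) > category_utility([small + large]))  # True
```

A single all-encompassing cluster scores zero (within-cluster predictability equals the baseline), so any partition that genuinely sharpens predictions scores above it.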
Constructive Induction On Decision Trees
Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 1989
Cited by 90 (2 self)
Abstract: Selective induction techniques perform poorly when the features are inappropriate for the target concept. One solution is to have the learning system construct new features automatically; unfortunately, feature construction is a difficult and poorly understood problem. In this paper we present a definition of feature construction in concept learning, and offer a framework for its study based on four aspects: detection, selection, generalization, and evaluation. This framework is used in the analysis of existing learning systems and as the basis for the design of a new system, CITRE. CITRE performs feature construction using decision trees and simple domain knowledge as constructive biases. Initial results on a set of spatial-dependent problems suggest the importance of domain knowledge and feature generalization, i.e., constructive induction. Good representations are often crucial for solving difficult problems in AI. Finding suitable problem representations, however, ...

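The basic move behind decision-tree-based feature construction is to treat the tests along one tree path as a candidate new feature. A minimal sketch of that generic idea follows; the attribute names and equality-only tests are illustrative, and CITRE's actual constructive operators are richer than this.

```python
def make_conjunction(tests):
    """Build a boolean feature from a list of (attribute, required_value)
    pairs, e.g. the tests collected along one decision-tree path."""
    def feature(instance):
        # The new feature fires only when every primitive test holds.
        return all(instance.get(attr) == val for attr, val in tests)
    return feature

# Conjoin two primitive tests into one constructed feature.
f = make_conjunction([("shape", "square"), ("filled", True)])
print(f({"shape": "square", "filled": True}))   # True
print(f({"shape": "square", "filled": False}))  # False
```

Such a constructed feature can then be added to the instance description and offered to the selective inducer like any primitive attribute.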
Learning the CLASSIC Description Logic: Theoretical and Experimental Results
In Principles of Knowledge Representation and Reasoning: Proceedings of the Fourth International Conference (KR'94), 1994
Cited by 89 (7 self)
Abstract: We present a series of theoretical and experimental results on the learnability of description logics. We first extend previous formal learnability results on simple description logics to C-Classic, a description logic expressive enough to be practically useful. We then experimentally evaluate two extensions of a learning algorithm suggested by the formal analysis. The first extension learns C-Classic descriptions from individuals. (The formal results assume that examples are themselves descriptions.) The second extension learns disjunctions of C-Classic descriptions from individuals. The experiments, which were conducted using several hundred target concepts from a number of domains, indicate that both extensions reliably learn complex natural concepts. One well-known family of formalisms for representing knowledge is description logics, sometimes also called terminological logics or KL-ONE-type languages. Description logics have been applied in a number of contexts...

Trading Spaces: Computation, Representation and the Limits of Uninformed Learning
Behavioral and Brain Sciences, 1997
Cited by 63 (12 self)
Abstract: It is widely appreciated (e.g. Marr, 1982) that the difficulty of a particular computation varies according to how the input data are presented. What is less well understood is the effect of this computation/representation trade-off within familiar learning paradigms. We argue that existing learning algorithms are often poorly equipped to solve problems involving a certain type of important and widespread statistical regularity, which we call 'type-2 regularity'. The solution in these cases is to trade achieved representation against computational search. We investigate several ways in which such a trade-off may be pursued, including simple incremental learning, modular connectionism, and the developmental hypothesis of 'representational redescription'. In addition, the most distinctive features of human cognition (language and culture) may themselves be viewed as adaptations enabling this representation/computation trade-off to be pursued on an even grander scale.

The Learnability of Description Logics with Equality Constraints
Machine Learning, 1994
Cited by 36 (3 self)
Abstract: Although there is an increasing amount of experimental research on learning concepts expressed in first-order logic, there are still relatively few formal results on the polynomial learnability of first-order representations from examples. Most previous analyses in the PAC model have focused on subsets of Prolog, and only a few highly restricted subsets have been shown to be learnable. In this paper, we will study instead the learnability of the restricted first-order logics known as "description logics", also sometimes called "terminological logics" or "KL-ONE-type languages". Description logics are also subsets of predicate calculus, but are expressed using a different syntax, allowing a different set of syntactic restrictions to be explored. We first define a simple description logic, summarize some results on its expressive power, and then analyze its learnability. It is shown that the full logic cannot be tractably learned. However, syntactic restrictions exist that enable tractable learning from positive examples alone, independent of the size of the vocabulary used to describe examples. The learnable sublanguage appears to be incomparable in expressive power to any subset of first-order logic previously known to be learnable.

CLASSIC Learning
In Proceedings of the Seventh Annual ACM Conference on Computational Learning Theory, 1991
Cited by 31 (1 self)
Abstract: Description logics, also called terminological logics, are commonly used in knowledge-based systems to describe objects and their relationships. We investigate the learnability of a typical description logic, Classic, and show that Classic sentences are learnable in polynomial time in the exact learning model using equivalence queries and membership queries (which are, in essence, "subsumption queries"; we show a prediction-hardness result for the more traditional membership queries that convey information about specific individuals). We show that membership queries alone are insufficient for polynomial-time learning of Classic sentences. Combined with earlier negative results (Cohen & Hirsh, 1994a) showing that, given standard complexity-theoretic assumptions, equivalence queries alone are insufficient (or random examples alone in the PAC setting are insufficient), this shows that both sources of information are necessary for efficient learning, in that neither type alone is sufficient.

Discovering Patterns in Sequences of Events
Artificial Intelligence, 1985
Cited by 30 (3 self)
Abstract: Given a sequence of events (or objects), each characterized by a set of attributes, the problem considered is to discover a rule characterizing the sequence and able to predict a plausible sequence continuation. The rule, called a sequence-generating rule, is nondeterministic in the sense that it does not necessarily tell exactly which event must appear next in the sequence, but rather defines a set of plausible next events. The basic assumption of the methodology presented here is that the next event depends solely on the attributes of the previous events in the sequence. These attributes are either initially given or can be derived from the initial ones through a chain of inferences. Three basic rule models are employed to guide the search for a sequence-generating rule: decomposition, periodic, and disjunctive normal form (DNF). The search process involves simultaneously transforming the initial sequences to derived sequences and instantiating models to find the best match between the instantiated model and the derived sequence. A program, called SPARC/E, is described that implements most of the methodology as applied to discovering sequence-generating rules in the card game Eleusis. This game, which models the process of scientific discovery, is used as a source of examples for illustrating the performance of SPARC/E.

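Of the three rule models, the periodic one is the easiest to illustrate. The toy check below finds the smallest period under which an attribute sequence is consistent and uses it to predict a plausible next value; attribute derivation, the other two rule models, and SPARC/E's actual model instantiation are all omitted.

```python
def find_period(values, max_period=4):
    """Return the smallest period p (up to max_period) such that the
    value at position i depends only on i mod p, else None.
    A toy version of the 'periodic' sequence-generating rule model."""
    for p in range(1, max_period + 1):
        phases = [set() for _ in range(p)]
        for i, v in enumerate(values):
            phases[i % p].add(v)
        if all(len(phase) == 1 for phase in phases):
            return p
    return None

# An alternating Eleusis-like color sequence has period 2.
colors = ["red", "black", "red", "black", "red", "black"]
p = find_period(colors)
print(p)                        # 2
print(colors[len(colors) % p])  # plausible next event: "red"
```

A real search would instantiate such models over *derived* attribute sequences (e.g. card color or rank parity), not just the raw events.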
Database Dependency Discovery: A Machine Learning Approach
1999
Cited by 27 (4 self)
Abstract: ... this paper are designed such that they can easily be generalised to other kinds of dependencies. As in current approaches to computational induction such as inductive logic programming, we distinguish between top-down algorithms and bottom-up algorithms. In a top-down approach, hypotheses are generated in a systematic way and then tested against the given relation. In a bottom-up approach, the relation is inspected in order to see what dependencies it may satisfy or violate. We give algorithms for both approaches.
Keywords: induction, attribute dependency, database reverse engineering, data mining.

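For the common special case of functional dependencies, the "test a generated hypothesis against the relation" step of a top-down search reduces to a single scan. A small illustration (rows as dicts; the employee table is invented for the example):

```python
def satisfies_fd(relation, lhs, rhs):
    """Check whether the functional dependency lhs -> rhs holds in a
    relation given as a list of dict rows: every pair of rows agreeing
    on lhs must also agree on rhs."""
    seen = {}
    for row in relation:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        # setdefault stores the first rhs value seen for this lhs key;
        # any later disagreement is a violating pair of rows.
        if seen.setdefault(key, val) != val:
            return False
    return True

emp = [
    {"dept": "sales", "city": "Leeds", "name": "Ann"},
    {"dept": "sales", "city": "Leeds", "name": "Bob"},
    {"dept": "it",    "city": "York",  "name": "Cid"},
]
print(satisfies_fd(emp, ["dept"], ["city"]))  # True
print(satisfies_fd(emp, ["city"], ["name"]))  # False
```

A top-down discovery algorithm would enumerate candidate (lhs, rhs) pairs systematically and call a test like this for each; a bottom-up one would instead derive constraints from the violating row pairs it finds.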
Principled Constructive Induction
1991
Cited by 16 (0 self)
Abstract: A framework for the construction of new features for hard classification tasks is discussed. The approach brings together ideas from the fields of machine learning, computational geometry, and pattern recognition. Two heuristics for the evaluation of newly constructed features are proposed, and their statistical significance is verified. Finally, it is shown how the proposed framework can be used to combine techniques for the selection of representative examples with techniques for the construction of new features, in order to solve difficult problems in learning from examples. The problem of new terms, also known as the constructive induction problem, has long been considered a source of difficulty in machine learning (Dietterich, 1982). Simple classifiers using only the primitive features of description have limited learning capabilities. For example: (i) single-layered neural networks can realize only those class dichotomies where the classes are linearly separable in the feature...

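The linear-separability limit in (i) is exactly what feature construction can remove, and XOR is the standard illustration: no linear function of the primitive features x1, x2 reproduces it, but adding the constructed feature x1*x2 makes the concept linear. This is a textbook example, not this paper's particular construction heuristics.

```python
# XOR is not linearly separable in the primitive features (x1, x2),
# but with the constructed feature x3 = x1*x2 it becomes the linear
# function x1 + x2 - 2*x3.

points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [x1 ^ x2 for x1, x2 in points]

def linear_with_new_feature(x1, x2):
    x3 = x1 * x2                 # constructed feature
    return x1 + x2 - 2 * x3      # linear in (x1, x2, x3); equals XOR

print([linear_with_new_feature(x1, x2) for x1, x2 in points] == labels)  # True
```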
Induction over the unexplained: Using overly-general domain theories to aid concept learning
1993
Cited by 14 (0 self)
Abstract: This paper describes and evaluates an approach to combining empirical and explanation-based learning called Induction Over the Unexplained (IOU). IOU is intended for learning concepts that can be partially explained by an overly-general domain theory. An eclectic evaluation of the method is presented which includes results from all three major approaches: empirical, theoretical, and psychological. Empirical results show that IOU is effective at refining overly-general domain theories and that it learns more accurate concepts from fewer examples than a purely empirical approach. The application of theoretical results from PAC learnability theory explains why IOU requires fewer examples. IOU is also shown to be able to model psychological data demonstrating the effect of background knowledge on human learning.