Using Genetic Algorithms for Concept Learning
Abstract

Cited by 132 (5 self)
In this paper, we explore the use of genetic algorithms (GAs) as a key element in the design and implementation of robust concept learning systems. We describe and evaluate a GA-based system called GABIL that continually learns and refines concept classification rules from its interaction with the environment. The use of GAs is motivated by recent studies showing the effects of various forms of bias built into different concept learning systems, resulting in systems that perform well on certain concept classes (generally, those well matched to the biases) and poorly on others. By incorporating a GA as the underlying adaptive search mechanism, we are able to construct a concept learning system that has a simple, unified architecture with several important features. First, the system is surprisingly robust even with minimal bias. Second, the system can be easily extended to incorporate traditional forms of bias found in other concept learning systems. Finally, the architecture of the system encourages explicit representation of such biases and, as a result, provides for an important additional feature: the ability to dynamically adjust system bias. The viability of this approach is illustrated by comparing the performance of GABIL with that of four other more traditional concept learners (AQ14, C4.5, ID5R, and IACL) on a variety of target concepts. We conclude with some observations about the merits of this approach and about possible extensions.
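The GA-based rule search described above can be illustrated with a minimal sketch. This is not the actual GABIL system: the toy dataset, the (care, value) gene encoding, and the operator settings below are all invented for illustration. Fitness is simply the rule's classification accuracy on the training examples.

```python
import random

random.seed(0)

# Toy dataset: 4 binary features; the target concept is f0 AND f2.
examples = [([a, b, c, d], a == 1 and c == 1)
            for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1)]

# A rule is encoded as one (care, value) gene per feature:
# care=0 means "don't care"; care=1 means the feature must equal value.
def matches(rule, x):
    return all(not care or x[i] == val
               for i, (care, val) in enumerate(rule))

def fitness(rule):
    # Accuracy of treating "rule matches x" as the predicted class.
    return sum(matches(rule, x) == label for x, label in examples) / len(examples)

def random_rule(n=4):
    return [(random.randint(0, 1), random.randint(0, 1)) for _ in range(n)]

def mutate(rule, p=0.1):
    out = []
    for c, v in rule:
        if random.random() < p:
            c = 1 - c
        if random.random() < p:
            v = 1 - v
        out.append((c, v))
    return out

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Elitist generational loop: keep the 10 best, breed 20 offspring.
pop = [random_rule() for _ in range(30)]
for gen in range(40):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]

best = max(pop, key=fitness)
print(best, fitness(best))
```

With this tiny search space the GA typically recovers the rule that cares only about f0=1 and f2=1; the point of the sketch is the pipeline (encode, evaluate, select, recombine, mutate), not the problem.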
A Polynomial Approach to the Constructive Induction of . . .
 MACHINE LEARNING
, 1994
Abstract

Cited by 62 (2 self)
The representation formalism as well as the representation language is of great importance for the success of machine learning. The representation formalism should be expressive, efficient, useful, and applicable. First-order logic needs to be restricted in order to be efficient for inductive and deductive reasoning. In the field of knowledge representation, term subsumption formalisms have been developed which are efficient and expressive. In this paper, a learning algorithm, KLUSTER, is described which represents concept definitions in this formalism. KLUSTER enhances the representation language if this is necessary for the discrimination of concepts. Hence, KLUSTER is a constructive induction program. KLUSTER builds the most specific generalization and a most general discrimination in polynomial time. It embeds these concept learning problems into the overall task of learning a hierarchy of concepts.
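The "most specific generalization" operation has a simple analogue in a plain attribute-value setting. The sketch below uses invented attributes and ignores KLUSTER's term subsumption formalism and polynomial-time machinery; it just keeps agreeing values and forms an internal disjunction (a value set) where the two descriptions differ.

```python
def msg(d1, d2):
    # Most specific generalization of two conjunctive descriptions:
    # keep each attribute's value where the descriptions agree,
    # generalize to a set (internal disjunction) where they differ.
    return {a: d1[a] if d1[a] == d2[a] else {d1[a], d2[a]}
            for a in d1 if a in d2}

parent1 = {"color": "red", "size": "small", "shape": "round"}
parent2 = {"color": "red", "size": "large", "shape": "round"}
print(msg(parent1, parent2))
```

The result covers both parents and nothing more than necessary under this representation; in a richer formalism the same idea requires computing subsumption between structured terms.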
Learning TwoTiered Descriptions of Flexible Concepts: The Poseidon Systems
 MACHINE LEARNING
, 1992
Abstract

Cited by 44 (20 self)
This paper describes a method for learning flexible concepts, by which are meant concepts that lack precise definition and are context-dependent. To describe such concepts, the method employs a two-tiered representation, in which the first tier captures explicitly basic concept properties, and the second tier characterizes allowable concept modifications and context dependency. In the proposed method, the first tier, called Base Concept Representation (BCR), is created in two phases. In phase 1, the AQ15 rule learning program is applied to induce a complete and consistent concept description from supplied examples. In phase 2, this description is optimized according to a domain-dependent quality criterion. The second tier, called the Inferential Concept Interpretation (ICI), consists of a procedure for flexible matching and a set of inference rules. The proposed method has been implemented in the POSEIDON system and experimentally tested on two real-world problems: learning the concept of an acceptable union contract, and learning voting patterns of Republicans and Democrats in the U.S. Congress. For comparison, a few other learning methods were also applied to the same problems. These methods included simple variants of exemplar-based learning, and an ID3-type decision tree learner implemented in the ASSISTANT program. In the experiments, POSEIDON generated concept descriptions that were both more accurate and substantially simpler than those produced by the other methods.
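The flexible-matching idea in the second tier can be sketched as follows. The rule base is hypothetical and the match measure (fraction of satisfied conditions) is a deliberate simplification of POSEIDON's actual flexible matching: instead of requiring every condition of a rule to hold, compute a degree of match and pick the best-matching concept.

```python
# Each concept is a list of conditions (attribute index, allowed values).
rules = {
    "acceptable": [(0, {"low", "medium"}), (1, {"yes"}), (2, {"short"})],
    "unacceptable": [(0, {"high"}), (1, {"no"})],
}

def match_degree(conditions, x):
    # Second-tier flexible match: fraction of conditions satisfied,
    # rather than the strict all-or-nothing first-tier match.
    return sum(x[i] in allowed for i, allowed in conditions) / len(conditions)

def classify(rules, x):
    return max(rules, key=lambda concept: match_degree(rules[concept], x))

example = ["low", "yes", "long"]   # satisfies 2 of 3 "acceptable" conditions
print(classify(rules, example))    # strict matching would reject both concepts
```

A strict matcher would assign this example to neither concept; the graded interpretation recovers the intuitively correct class, which is exactly the role the second tier plays.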
Toward a Unified Theory of Learning: Multistrategy Task-Adaptive Learning
 IN: READINGS IN KNOWLEDGE ACQUISITION AND
, 1993
Abstract

Cited by 29 (10 self)
Any learning process can be viewed as a self-modification of the learner's current knowledge through an interaction with some information source. Such knowledge modification is guided by the learner's desire to achieve a certain outcome, and can engage any kind of inference. The type of inference involved depends on the input information, the current (background) knowledge and the learner's task at hand. Based on such a view of learning, several fundamental concepts are analyzed and clarified, in particular, analytic and synthetic learning, derivational and hypothetical explanation, constructive induction, abduction, abstraction and deductive generalization. It is shown that inductive generalization and abduction can be viewed as two basic forms of general induction, and that abstraction and deductive generalization are two related forms of constructive deduction. Using this conceptual framework, a methodology for multistrategy task-adaptive learning (MTL) is outlined, in which learning strategies are combined dynamically, depending on the current learning situation. Specifically, an MTL learner analyzes the relationship among the input information, the background knowledge and the learning task, and on that basis determines which strategy, or combination thereof, is most appropriate at a given learning step. To implement the MTL methodology, a new knowledge representation is proposed, based on parametric association rules (PARs). Basic ideas of MTL are illustrated by means of the well-known "cup" example, through which it is shown how an MTL learner can employ, depending on the above relationship, empirical learning, constructive inductive generalization, abduction, explanation-based learning and abstraction.
Data Mining and Knowledge Discovery: A Review of Issues and a Multistrategy Approach
 MACHINE LEARNING AND DATA MINING: METHODS AND APPLICATIONS
, 1997
Abstract

Cited by 25 (12 self)
An enormous proliferation of databases in almost every area of human endeavor has created a great demand for new, powerful tools for turning data into useful, task-oriented knowledge. In efforts to satisfy this need, researchers have been exploring ideas and methods developed in machine learning, pattern recognition, statistical data analysis, data visualization, neural nets, etc. These efforts have led to the emergence of a new research area, frequently called data mining and knowledge discovery. The first part of this chapter is a compendium of ideas on the applicability of symbolic machine learning methods to this area. The second part describes a multistrategy methodology for conceptual data exploration, by which we mean the derivation of high-level concepts and descriptions from data through symbolic reasoning involving both data and background knowledge. The methodology, which has been implemented in the INLEN system, combines machine learning, database and knowledge-based techn...
Multistrategy Constructive Induction: AQ17-MCI
 Proceedings of the Second International Workshop on Multistrategy Learning, Harpers Ferry, WV
, 1993
Abstract

Cited by 25 (7 self)
This paper presents a method for multistrategy constructive induction that integrates two inferential learning strategies—empirical induction and deduction, and two computational methods—data-driven and hypothesis-driven. The method generates inductive hypotheses in an iteratively modified representation space. The operators modifying the representation space are classified into "constructors," which expand the space (by generating additional attributes), and "destructors," which contract the space (by removing low-relevance attributes or abstracting attribute values). Constructors generate new dimensions (attributes) by analyzing original or transformed examples (data-driven) and by analyzing the rules obtained in the previous iteration (hypothesis-driven). Destructors detect the irrelevant components of the representation space by rule-based inference or statistical analysis. The method has been implemented in the AQ17-MCI program. The preliminary results from applying it to a problem with noisy training data and a large number of irrelevant attributes demonstrated the superiority of the method over other constructive induction methods, both in terms of predictive accuracy and the overall simplicity of the generated descriptions. Key words: multistrategy learning, inductive inference, constructive induction, representation space, concept learning.
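The constructor/destructor cycle can be sketched in a few lines. This is not the AQ17-MCI program: the data is synthetic, the constructor is a simple data-driven one (pairwise attribute products), and the destructor is a crude statistical relevance ranking invented for illustration.

```python
import itertools
import random

random.seed(1)

# Synthetic data: 5 numeric attributes; the class depends only on x0 * x1.
rows = [[random.random() for _ in range(5)] for _ in range(200)]
labels = [int(r[0] * r[1] > 0.25) for r in rows]

def relevance(col, labels):
    # Crude relevance score: absolute difference of the attribute's
    # mean value between the positive and the negative class.
    pos = [v for v, y in zip(col, labels) if y]
    neg = [v for v, y in zip(col, labels) if not y]
    return abs(sum(pos) / len(pos) - sum(neg) / len(neg))

def construct(rows):
    # Constructor: expand the space with pairwise products (data-driven).
    n = len(rows[0])
    pairs = list(itertools.combinations(range(n), 2))
    return [r + [r[i] * r[j] for i, j in pairs] for r in rows]

def destruct(rows, labels, keep=3):
    # Destructor: contract the space by dropping low-relevance attributes.
    cols = list(zip(*rows))
    ranked = sorted(range(len(cols)),
                    key=lambda i: relevance(cols[i], labels), reverse=True)
    kept = sorted(ranked[:keep])
    return [[r[i] for i in kept] for r in rows], kept

expanded = construct(rows)                     # 5 -> 15 attributes
reduced, kept = destruct(expanded, labels)     # back down to 3 attributes
print(kept)
```

On this data the constructed attribute x0*x1 (index 5 in the expanded space) carries far more class signal than irrelevant products such as x3*x4, so the destructor tends to retain it; a real system would then induce rules in the reduced space and iterate.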
The Management of Context-Sensitive Features: A Review of Strategies
, 1996
Abstract

Cited by 16 (0 self)
In this paper, we review five heuristic strategies for handling context-sensitive features in supervised machine learning from examples. We discuss two methods for recovering lost (implicit) contextual information. We mention some evidence that hybrid strategies can have a synergetic effect. We then show how the work of several machine learning researchers fits into this framework. While we do not claim that these strategies exhaust the possibilities, it appears that the framework includes all of the techniques that can be found in the published literature on context-sensitive learning.
Second Tier for Decision Trees
 Machine Learning: Proceedings of the 13th International Conference
, 1996
Abstract

Cited by 10 (2 self)
A learner's performance does not rely only on the representation language and on the algorithm inducing a hypothesis in this language. The way the induced hypothesis is interpreted for the needs of concept recognition is also of interest. A flexible methodology for hypothesis interpretation is offered by the philosophy of a learner's second tier, as originally suggested by Michalski (1987). Here, the potential of this general approach is demonstrated in the framework of numeric decision trees. The second tier improves classification performance, increases the ability to handle context, and facilitates transfer of a hypothesis between different contexts.

1 Introduction

This paper concentrates on concept learning from examples described by vectors of numeric variables and classified as positive and negative instances of the concept. No background knowledge is considered. The learner takes as input a set of pairs [x, c(x)], where x = [x1, x2, ..., xn] is the vector describing the examp...
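One way to realize a second tier for numeric decision trees can be sketched as follows. The logistic softening of threshold tests below is an assumed mechanism for illustration, not necessarily the paper's exact one: an example near a split boundary contributes to both branches instead of being routed crisply down one side.

```python
import math

# A tiny hand-built numeric decision tree. Internal nodes are
# ("node", attribute_index, threshold, left, right); tests are x[i] <= t.
tree = ("node", 0, 5.0,
        ("leaf", 1),
        ("node", 1, 2.0,
         ("leaf", 1),
         ("leaf", 0)))

def crisp(tree, x):
    # First-tier interpretation: route the example down exactly one branch.
    if tree[0] == "leaf":
        return tree[1]
    _, i, t, left, right = tree
    return crisp(left, x) if x[i] <= t else crisp(right, x)

def flexible(tree, x, s=1.0):
    # Second-tier interpretation: weight both branches by how
    # decisively x passes the test (w ~ 1 if x[i] << t, ~ 0 if x[i] >> t).
    if tree[0] == "leaf":
        return tree[1]
    _, i, t, left, right = tree
    w = 1.0 / (1.0 + math.exp((x[i] - t) / s))
    return w * flexible(left, x, s) + (1 - w) * flexible(right, x, s)

print(crisp(tree, [4.9, 0.0]), flexible(tree, [4.9, 0.0]))  # near the boundary
print(crisp(tree, [9.0, 5.0]), flexible(tree, [9.0, 5.0]))  # deep in one region
```

The scale parameter s plays the role of context sensitivity: a larger s softens the boundaries more, and tuning it per deployment context is one cheap way to transfer a fixed tree between contexts.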
A MULTISTRATEGY LEARNING APPROACH TO DOMAIN MODELING AND KNOWLEDGE ACQUISITION
 Y. KODRATOFF (ED), MACHINE LEARNING · EWSL-91
, 1991
Abstract

Cited by 9 (6 self)
This paper presents an approach to domain modeling and knowledge acquisition that consists of a gradual and goal-driven improvement of an incomplete domain model provided by a human expert. Our approach is based on a multistrategy learning method that allows a system with incomplete knowledge to learn general inference or problem solving rules from specific facts or problem solving episodes received from the human expert. The system will learn the general knowledge pieces by considering all their possible instances in the current domain model, trying to learn complete and consistent descriptions. Because of the incompleteness of the domain model, the learned rules will have exceptions, which are eliminated by refining the definitions of the existing concepts or by defining new concepts.
Kernel Regression Trees
 Proceedings of the poster papers of the European Conference on Machine Learning. University of Economics, Faculty of Informatics and Statistics
Abstract

Cited by 7 (1 self)
This paper presents a novel method for learning in domains with continuous target variables. The method integrates regression trees with kernel regression models. The integration is done by adding kernel regressors at the tree leaves, producing what we call kernel regression trees. The approach is motivated by the goal of trying to take advantage of the different biases of the two regression methodologies. The presented method has been implemented. Kernel regression trees are comprehensible and accurate regression models of the data. Experimental comparisons on both artificial and real-world domains revealed the superiority of kernel regression trees when compared to the two individual approaches. The use of kernel regression at the tree leaves gives a significant performance gain. Moreover, a good performance level is achieved with much smaller trees than if kernel regression were not used. Compared to kernel regression, our method improves comprehensibility and execution time. Keywords ...
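The kernel-regressors-at-the-leaves idea can be sketched on toy 1-D data. This is an illustrative reconstruction, not the authors' system: the tree grower is a minimal least-squares splitter, and the leaf model is plain Nadaraya-Watson kernel regression with an assumed Gaussian kernel and bandwidth.

```python
import math
import random

random.seed(2)

# Toy 1-D regression data: y = sin(x) + noise, x in [0, 6].
data = [(x, math.sin(x) + random.gauss(0, 0.1))
        for x in [random.uniform(0, 6) for _ in range(300)]]

def sse(points):
    # Sum of squared errors around the mean target of the points.
    m = sum(y for _, y in points) / len(points)
    return sum((y - m) ** 2 for _, y in points)

def best_split(points, min_leaf=20):
    # Best single threshold by total squared error of the two sides.
    best = None
    for t in sorted({x for x, _ in points})[1:]:
        left = [p for p in points if p[0] < t]
        right = [p for p in points if p[0] >= t]
        if len(left) < min_leaf or len(right) < min_leaf:
            continue
        err = sse(left) + sse(right)
        if best is None or err < best[0]:
            best = (err, t, left, right)
    return best

def build(points, depth=2):
    found = best_split(points) if depth > 0 else None
    if found is None:
        return ("leaf", points)          # keep the leaf's training points
    _, t, left, right = found
    return ("node", t, build(left, depth - 1), build(right, depth - 1))

def predict(tree, x, h=0.5):
    if tree[0] == "node":
        _, t, left, right = tree
        return predict(left if x < t else right, x, h)
    # Nadaraya-Watson kernel regression over the leaf's training points.
    pts = tree[1]
    ws = [math.exp(-((x - xi) / h) ** 2) for xi, _ in pts]
    return sum(w * yi for w, (xi, yi) in zip(ws, pts)) / sum(ws)

tree = build(data)
print(predict(tree, 1.5))
```

A constant-leaf tree of the same depth would predict a single average per region, while the kernel leaf tracks the local shape of sin(x); this is the bias combination the abstract refers to, and it is why much shallower trees suffice.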