Results 1–10 of 22
Instance-based learning algorithms
Machine Learning, 1991
Cited by 1051 (18 self)
Abstract. Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
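The storage-reduction idea in this abstract can be sketched concretely: store an instance only when the instances already kept misclassify it, in the spirit of the IB2 variant from this line of work. This is a minimal sketch, not the paper's exact algorithm; the function names and the Euclidean distance choice are illustrative assumptions.

```python
import math

def nearest(memory, x):
    # Return the stored (instance, label) pair closest to x (Euclidean).
    return min(memory, key=lambda m: math.dist(m[0], x))

def ib2_train(stream):
    """Storage-reducing instance-based learner: keep an instance only
    when the current memory misclassifies it (an IB2-style sketch)."""
    memory = []
    for x, y in stream:
        if not memory or nearest(memory, x)[1] != y:
            memory.append((x, y))
    return memory

def predict(memory, x):
    # Classify by the label of the single nearest stored instance.
    return nearest(memory, x)[1]
```

On easily separated data most instances are filtered out, which is the storage saving the abstract describes; the noise sensitivity it mentions arises because a noisy instance is (wrongly) misclassified and therefore kept.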
Incremental Induction of Decision Trees
1989
Cited by 160 (3 self)
This article presents an incremental algorithm for inducing decision trees equivalent to those formed by Quinlan's non-incremental ID3 algorithm, given the same training instances. The new algorithm, named ID5R, lets one apply the ID3 induction process to learning tasks in which training instances are presented serially.
Genetic Programming: A Paradigm For Genetically Breeding Populations Of Computer Programs To Solve Problems
1990
Cited by 147 (26 self)
Many seemingly different problems in artificial intelligence, symbolic processing, and machine learning can be viewed as requiring discovery of a computer program that produces some desired output for particular inputs. When viewed in this way, the process of solving these problems becomes equivalent to searching a space of possible computer programs for a most fit individual computer program. The new genetic programming paradigm described herein provides a way to search for this most fit individual computer program. In this new genetic programming paradigm, populations of computer programs are genetically bred using the Darwinian principle of survival of the fittest and using a genetic crossover (recombination) operator appropriate for genetically mating computer programs. In this paper, the process of formulating and solving problems using this new paradigm is illustrated using examples from various areas.
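The crossover operator described above mates two programs by swapping randomly chosen subtrees. A minimal sketch, representing programs as nested tuples; the helper names are illustrative assumptions, and real GP systems add depth and type constraints this sketch omits:

```python
import random

# A program is a terminal (string) or a tuple: (operator, child1, child2, ...)

def nodes(tree, path=()):
    # Enumerate the paths of all nodes (root is the empty path).
    yield path
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from nodes(child, path + (i,))

def get(tree, path):
    # Fetch the subtree at a path.
    for i in path:
        tree = tree[i]
    return tree

def replace(tree, path, subtree):
    # Return a copy of tree with the node at path replaced by subtree.
    if not path:
        return subtree
    i = path[0]
    return tree[:i] + (replace(tree[i], path[1:], subtree),) + tree[i + 1:]

def crossover(parent1, parent2, rng):
    # Genetic crossover: graft a random subtree of parent2 into a
    # random crossover point of parent1.
    a = rng.choice(list(nodes(parent1)))
    b = rng.choice(list(nodes(parent2)))
    return replace(parent1, a, get(parent2, b))
```

Because both fragments are syntactically valid programs, the offspring is always a valid program as well, which is the property that makes this operator "appropriate for genetically mating computer programs".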
Learning Concept Classification Rules using Genetic Algorithms
Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, 1991
Cited by 83 (7 self)
In this paper we explore the use of an adaptive search technique (genetic algorithms) to construct a system, GABIL, which continually learns and refines concept classification rules from its interaction with the environment. The performance of the system is measured on a set of concept learning problems and compared with the performance of two existing systems: ID5R and C4.5. Preliminary results suggest that, despite minimal system bias, GABIL is an effective concept learner and is quite competitive with ID5R and C4.5 as the target concept increases in complexity.
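GABIL-style systems are commonly described as encoding each classification rule as a bitstring with one bit per possible attribute value, where a set bit means that value is admitted by the rule. A hedged sketch of rule matching under that encoding (the encoding details are an assumption about this family of systems, not taken from the abstract itself):

```python
def rule_matches(rule_bits, example, attr_sizes):
    # rule_bits: concatenated bit groups, one group per attribute and
    # one bit per possible value (a GABIL-style disjunctive encoding).
    # example: tuple of value indices, one per attribute.
    pos = 0
    for attr, size in enumerate(attr_sizes):
        group = rule_bits[pos:pos + size]
        if group[example[attr]] == 0:
            return False  # rule's bit for this value is off: no match
        pos += size
    return True
```

An all-ones group means "don't care" for that attribute, which is what lets standard bit-flip mutation and crossover generalize or specialize rules without any rule-specific operators.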
A knowledge-intensive genetic algorithm for supervised learning
1993
Cited by 82 (1 self)
Abstract. Supervised learning in attribute-based spaces is one of the most popular machine learning problems studied and, consequently, has attracted considerable attention from the genetic algorithm community. The full-memory approach developed here uses the same high-level descriptive language that is used in rule-based systems. This allows for easy utilization of the inference rules of the well-known inductive learning methodology, which replace the traditional domain-independent operators and make the search task-specific. Moreover, a closer relationship between the underlying task and the processing mechanisms provides a setting for the application of more powerful task-specific heuristics. Initial results obtained with a prototype implementation for the simplest case of single concepts indicate that genetic algorithms can be effectively used to process high-level concepts and incorporate task-specific knowledge. The method of abstracting the genetic algorithm to the problem level, described here for supervised inductive learning, can also be extended to other domains and tasks, since it provides a framework for combining recently popular genetic algorithm methods with traditional problem-solving methodologies. Moreover, in this particular case, it provides a very powerful tool enabling study of the widely accepted but not so well understood inductive learning methodology.
Bottom-Up Induction of Oblivious Read-Once Decision Graphs
1994
Cited by 45 (8 self)
We investigate the use of oblivious, read-once decision graphs as structures for representing concepts over discrete domains, and present a bottom-up, hill-climbing algorithm for inferring these structures from labelled instances. The algorithm is robust with respect to irrelevant attributes, and experimental results show that it performs well on problems considered difficult for symbolic induction methods, such as the Monk's problems and parity.

1 Introduction

Top-down induction of decision trees [25, 24, 20] has been one of the principal induction methods for symbolic, supervised learning. The tree structure, which is used for representing the hypothesized target concept, suffers from some well-known problems, most notably the replication problem and the fragmentation problem [23]. The replication problem forces duplication of subtrees in disjunctive concepts, such as (A ∧ B) ∨ (C ∧ D); the fragmentation problem causes partitioning of the data into fragments, when a high-arity attrib...
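In an oblivious decision graph every node at a given level tests the same attribute, and read-once means no attribute is tested twice along any path. A small sketch of evaluating such a graph, using two-bit parity (one of the target problems named above) as the example; the data layout here is an illustrative assumption, not the paper's representation:

```python
def eval_oodg(attrs, transitions, leaves, x):
    """Evaluate an oblivious read-once decision graph.

    attrs[i]        : the single attribute tested by ALL nodes at level i
    transitions[i]  : {node: {attribute_value: node_at_level_i+1}}
    leaves          : {final_node: class_label}
    """
    node = 0  # start at the single root node of level 0
    for level, attr in enumerate(attrs):
        node = transitions[level][node][x[attr]]
    return leaves[node]

# Two-bit parity: the level-1 nodes merge paths with equal parity-so-far,
# which is exactly the sharing a graph allows and a tree would replicate.
attrs = ['a', 'b']
transitions = [
    {0: {0: 0, 1: 1}},                      # level 0: node tracks parity of a
    {0: {0: 0, 1: 1}, 1: {0: 1, 1: 0}},     # level 1: fold in b
]
leaves = {0: 'even', 1: 'odd'}
```

Parity needs a tree of size exponential in the number of bits, but an OODG of this shape stays linear, which is why the abstract singles parity out.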
An incremental method for finding multivariate splits for decision trees
Proceedings of the Seventh International Conference on Machine Learning, 1990
Cited by 29 (3 self)
Decision trees that are limited to testing a single variable at a node are potentially much larger than trees that allow testing multiple variables at a node. This limitation reduces the ability to express concepts succinctly, which renders many classes of concepts difficult or impossible to express. This paper presents the PT2 algorithm, which searches for a multivariate split at each node. Because a univariate test is a special case of a multivariate test, the expressive power of such decision trees is strictly increased. The algorithm is incremental, handles ordered and unordered variables, and estimates missing values.
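A multivariate split tests a linear combination of variables rather than a single one. The sketch below shows such a node test plus a perceptron-style incremental adjustment; this illustrates the general idea only and is not PT2's actual search procedure:

```python
def linear_test(weights, threshold, x):
    # Multivariate node test: w . x > threshold. A univariate test is
    # the special case where exactly one weight is nonzero.
    return sum(w * xi for w, xi in zip(weights, x)) > threshold

def incremental_update(weights, threshold, x, y, lr=0.1):
    # One perceptron-style adjustment of the split when instance (x, y)
    # is misclassified; y is +1 or -1. Illustrative, not PT2's rule.
    predicted = 1 if linear_test(weights, threshold, x) else -1
    if predicted != y:
        weights = [w + lr * y * xi for w, xi in zip(weights, x)]
        threshold -= lr * y
    return weights, threshold
```

The expressiveness gap is easy to see: the concept x1 + x2 > 1.5 (logical AND of two thresholds' worth of information) is a single multivariate node but requires two levels of univariate tests.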
Transferring Previously Learned Back-Propagation Neural Networks To New Learning Tasks
1993
Learning of Boolean functions using support vector machines
Proc. of the 12th International Conference on Algorithmic Learning Theory, 2001
Cited by 16 (0 self)
Abstract. This paper concerns the design of a Support Vector Machine (SVM) appropriate for the learning of Boolean functions. This is motivated by the need for a more sophisticated algorithm for classification in discrete attribute spaces. Classification in discrete attribute spaces is reduced to the problem of learning Boolean functions from examples of their input/output behavior. Since any Boolean function can be written in Disjunctive Normal Form (DNF), it can be represented as a weighted linear sum of all possible conjunctions of Boolean literals. This paper presents a particular kernel function called the DNF kernel, which enables SVMs to efficiently learn such linear functions in the high-dimensional space whose coordinates correspond to all possible conjunctions. For a limited form of DNF consisting of positive Boolean literals, the monotone DNF kernel is also presented. SVMs employing these kernel functions can perform the learning in a high-dimensional feature space whose features are derived from the given basic attributes. In addition, it is expected that the SVMs' well-founded capacity control alleviates overfitting. In fact, an empirical study on the learning of randomly generated Boolean functions shows that the resulting algorithm outperforms C4.5. Furthermore, in comparison with SVMs employing the Gaussian kernel, it is shown that the DNF kernel produces accuracy comparable to the best-adjusted Gaussian kernels.
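The efficiency claim rests on a counting argument: a conjunction of literals is satisfied by two Boolean vectors x and z exactly when every included literal agrees with both, so the number of shared non-empty conjunctions is 2^s − 1, where s counts agreeing positions (shared 1-positions in the monotone case). A sketch under that reading; treat the exact normalization as an assumption derived from this abstract, not the paper's definition:

```python
def dnf_kernel(x, z):
    # Count the non-empty conjunctions of (positive or negated) literals
    # satisfied by both x and z: each position where x_i == z_i admits
    # its matching literal or not, giving 2**agreements - 1 in total.
    s = sum(1 for xi, zi in zip(x, z) if xi == zi)
    return 2 ** s - 1

def monotone_dnf_kernel(x, z):
    # Positive literals only, so only shared 1-positions contribute.
    s = sum(1 for xi, zi in zip(x, z) if xi == 1 and zi == 1)
    return 2 ** s - 1
```

The point is that an inner product in a feature space with one coordinate per conjunction (exponentially many) is computed in time linear in the number of attributes, which is what lets the SVM learn the weighted linear sum over all conjunctions implicitly.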