Results 1 - 7 of 7
Induction of Decision Trees
 Machine Learning
, 1986
"... The technology for building knowledgebased systems by inductive inference from examples has been demonstrated successfully in several practical applications. This paper summarizes an approach to synthesizing decision trees that has been used in a variety of systems, and it describes one such syste ..."
Abstract

Cited by 3331 (4 self)
The technology for building knowledge-based systems by inductive inference from examples has been demonstrated successfully in several practical applications. This paper summarizes an approach to synthesizing decision trees that has been used in a variety of systems, and it describes one such system, ID3, in detail. Results from recent studies show ways in which the methodology can be modified to deal with information that is noisy and/or incomplete. A reported shortcoming of the basic algorithm is discussed and two means of overcoming it are compared. The paper concludes with illustrations of current research directions.
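ID3 grows each tree top-down by choosing, at every node, the attribute whose split most reduces class entropy. A minimal sketch of that information-gain computation (the function names and the toy dataset are illustrative, not taken from the paper):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, attribute):
    """Reduction in entropy from splitting the examples on `attribute`."""
    n = len(labels)
    # Partition the labels by the attribute's value in each example.
    partitions = {}
    for ex, lab in zip(examples, labels):
        partitions.setdefault(ex[attribute], []).append(lab)
    remainder = sum(len(part) / n * entropy(part) for part in partitions.values())
    return entropy(labels) - remainder

# Toy weather-style data: one attribute, "windy", vs. play/no-play labels.
examples = [{"windy": "yes"}, {"windy": "yes"}, {"windy": "no"}, {"windy": "no"}]
labels = ["no", "no", "yes", "yes"]
print(information_gain(examples, labels, "windy"))  # 1.0: a perfect split
```

ID3 would recurse on each partition with the remaining attributes, stopping when a partition's labels are pure.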
A Guide to the Literature on Learning Probabilistic Networks From Data
, 1996
"... This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the ..."
Abstract

Cited by 172 (0 self)
This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the different methodological communities, such as Bayesian, description length, and classical statistics. Basic concepts for learning and Bayesian networks are introduced and methods are then reviewed. Methods are discussed for learning parameters of a probabilistic network, for learning the structure, and for learning hidden variables. The presentation avoids formal definitions and theorems, as these are plentiful in the literature, and instead illustrates key concepts with simplified examples.

Keywords: Bayesian networks, graphical models, hidden variables, learning, learning structure, probabilistic networks, knowledge discovery.

I. Introduction
Probabilistic networks or probabilistic gra...
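The simplest parameter-learning case the review covers is estimating one node's conditional probability table from complete data when the parent set is already known, which reduces to counting co-occurrences. A minimal maximum-likelihood sketch (variable names and the toy data are illustrative assumptions):

```python
from collections import Counter

def learn_cpt(data, node, parents):
    """Estimate P(node | parents) by counting in complete data (no missing values)."""
    joint = Counter()          # counts of (parent assignment, node value)
    parent_counts = Counter()  # counts of each parent assignment
    for row in data:
        pa = tuple(row[p] for p in parents)
        joint[(pa, row[node])] += 1
        parent_counts[pa] += 1
    return {(pa, v): c / parent_counts[pa] for (pa, v), c in joint.items()}

# Toy two-variable network: rain -> wet.
data = [
    {"rain": 1, "wet": 1},
    {"rain": 1, "wet": 1},
    {"rain": 0, "wet": 0},
    {"rain": 0, "wet": 1},
]
cpt = learn_cpt(data, "wet", ["rain"])
print(cpt[((1,), 1)])  # 1.0: wet whenever it rained
print(cpt[((0,), 1)])  # 0.5: wet in half the no-rain cases
```

Learning structure or hidden variables, the harder problems the review surveys, builds on top of counts like these via scoring or expectation-style estimation.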
Applications of Machine Learning and Rule Induction
 Communications of the ACM
, 1995
"... An important area of application for machine learning is in automating the acquisition of knowledge bases required for expert systems. In this paper, we review the major paradigms for machine learning, including neural networks, instancebased methods, genetic learning, rule induction, and analytic ..."
Abstract

Cited by 97 (9 self)
An important area of application for machine learning is in automating the acquisition of knowledge bases required for expert systems. In this paper, we review the major paradigms for machine learning, including neural networks, instance-based methods, genetic learning, rule induction, and analytic approaches. We consider rule induction in greater detail and review some of its recent applications, in each case stating the problem, how rule induction was used, and the status of the resulting expert system. In closing, we identify the main stages in fielding an applied learning system and draw some lessons from successful applications.

Introduction
Machine learning is the study of computational methods for improving performance by mechanizing the acquisition of knowledge from experience. Expert performance requires much domain-specific knowledge, and knowledge engineering has produced hundreds of AI expert systems that are now used regularly in industry. Machine learning aims to provide ...
A Theory of Learning Classification Rules
, 1992
"... The main contributions of this thesis are a Bayesian theory of learning classification rules, the unification and comparison of this theory with some previous theories of learning, and two extensive applications of the theory to the problems of learning class probability trees and bounding error whe ..."
Abstract

Cited by 79 (6 self)
The main contributions of this thesis are a Bayesian theory of learning classification rules, the unification and comparison of this theory with some previous theories of learning, and two extensive applications of the theory to the problems of learning class probability trees and bounding error when learning logical rules. The thesis is motivated by considering some current research issues in machine learning such as bias, overfitting and search, and considering the requirements placed on a learning system when it is used for knowledge acquisition. Basic Bayesian decision theory relevant to the problem of learning classification rules is reviewed, then a Bayesian framework for such learning is presented. The framework has three components: the hypothesis space, the learning protocol, and criteria for successful learning. Several learning protocols are analysed in detail: queries, logical, noisy, uncertain and positive-only examples. The analysis is done by interpreting a protocol as a...
Learning Logical Exceptions In Chess
, 1994
"... This thesis is about inductive learning, or learning from examples. The goal has been to investigate ways of improving learning algorithms. The chess endgame "King and Rook against King" (KRK) was chosen, and a number of benchmark learning tasks were defined within this domain, sufficient to overc ..."
Abstract

Cited by 17 (2 self)
This thesis is about inductive learning, or learning from examples. The goal has been to investigate ways of improving learning algorithms. The chess endgame "King and Rook against King" (KRK) was chosen, and a number of benchmark learning tasks were defined within this domain, sufficient to over-challenge state-of-the-art learning algorithms. The tasks comprised learning rules to distinguish (1) illegal positions and (2) legal positions won optimally in a fixed number of moves. From our experimental results with task (1) the best-performing algorithm was selected and a number of improvements were made. The principal extension to this generalisation method was to alter its representation from classical logic to a non-monotonic formalism. A novel algorithm was developed in this framework to implement rule specialisation, relying on the invention of new predicates. When experimentally tested this combined approach did not at first deliver the expected performance gains due to restrictio...
Rule Induction with Extension Matrices
 American Society for Information Science
, 1998
"... This paper presents a heuristic, attributebased, noisetolerant data mining program, HCV (Version 2.0), based on the newlydeveloped extension matrix approach. By dividing the positive examples (PE) of a specific class in a given example set into intersecting groups and adopting a set of strategies ..."
Abstract

Cited by 3 (2 self)
This paper presents a heuristic, attribute-based, noise-tolerant data mining program, HCV (Version 2.0), based on the newly developed extension matrix approach. By dividing the positive examples (PE) of a specific class in a given example set into intersecting groups and adopting a set of strategies to find a heuristic conjunctive formula in each group which covers all the group's positive examples and none of the negative examples (NE), the HCV induction algorithm adopted in the HCV (Version 2.0) software finds a description formula in the form of variable-valued logic for PE against NE in low-order polynomial time at induction time. In addition to the HCV induction algorithm, this paper also outlines some of the techniques for noise handling and discretization of numerical domains developed and implemented in the HCV (Version 2.0) software, and provides a performance comparison of HCV (Version 2.0) with other data mining algorithms ID3, C4.5, C4.5rules and NewID in noisy and continuo...
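The covering strategy described above can be sketched as a minimal separate-and-conquer loop: grow one conjunction at a time that covers remaining positive examples and no negatives, then repeat until all positives are covered. This is a generic covering sketch under illustrative assumptions (greedy condition selection, dict-based examples), not HCV's actual extension-matrix machinery:

```python
def covers(rule, example):
    """A rule is a dict of attribute -> required value (a conjunction)."""
    return all(example.get(a) == v for a, v in rule.items())

def learn_one_rule(positives, negatives, attributes):
    """Greedily add conditions until the rule covers no negative example."""
    rule, pos = {}, list(positives)
    while any(covers(rule, n) for n in negatives):
        best = None
        for a in attributes:
            if a in rule:
                continue
            for v in {p[a] for p in pos}:
                cand = dict(rule, **{a: v})
                # Prefer candidates covering many positives, few negatives.
                score = (sum(covers(cand, p) for p in pos),
                         -sum(covers(cand, n) for n in negatives))
                if best is None or score > best[0]:
                    best = (score, cand)
        rule = best[1]
        pos = [p for p in pos if covers(rule, p)]
    return rule

def learn_rules(positives, negatives, attributes):
    """Separate-and-conquer: one rule per pass until all positives are covered."""
    rules, remaining = [], list(positives)
    while remaining:
        rule = learn_one_rule(remaining, negatives, attributes)
        rules.append(rule)
        remaining = [p for p in remaining if not covers(rule, p)]
    return rules

pos = [{"shape": "round", "color": "red"}, {"shape": "round", "color": "green"}]
neg = [{"shape": "square", "color": "red"}]
print(learn_rules(pos, neg, ["shape", "color"]))  # [{'shape': 'round'}]
```

HCV's contribution lies in how it organizes this search (intersecting groups over extension matrices) to reach low-order polynomial induction time.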
Applications of Machine Learning and Rule Induction
"... Machine learning is the study of computational methods for improving performance by mechanizing the acquisition of knowledge from experience. Expert performance requires much domainspecific knowledge, and knowledge engineering has produced hundreds of AI expert systems that are now used regularly i ..."
Abstract
Machine learning is the study of computational methods for improving performance by mechanizing the acquisition of knowledge from experience. Expert performance requires much domain-specific knowledge, and knowledge engineering has produced hundreds of AI expert systems that are now used regularly in industry. Machine learning aims to provide increasing levels of automation in the knowledge engineering process, replacing much time-consuming human activity with automatic techniques that improve accuracy or efficiency by discovering and exploiting regularities in training data. The ultimate test of machine learning is its ability to produce systems that are used regularly in industry, education, and elsewhere. Recent successes in applying machine learning to real-world problems are examined in this article. Five basic learning paradigms are reviewed before focusing on one of these: methods for inducing logical rules from experience.