Results 1–10 of 18
A System for Induction of Oblique Decision Trees
 Journal of Artificial Intelligence Research
, 1994
Cited by 251 (13 self)
This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees.
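The distinction the abstract draws between axis-parallel and oblique splits can be made concrete in a few lines. This is only an illustrative sketch of the two kinds of node test, not OC1 itself; the function names, weights, and example point are invented for the illustration:

```python
import numpy as np

def axis_parallel_split(x, feature, threshold):
    """Univariate (axis-parallel) test: compare one attribute to a threshold."""
    return x[feature] <= threshold

def oblique_split(x, w, b):
    """Oblique test: a hyperplane over all numeric attributes.
    The instance goes to the left branch when w . x + b <= 0."""
    return float(np.dot(w, x)) + b <= 0

# A point that an axis-parallel test on feature 0 sends right,
# but that a hyperplane over both features sends left.
x = np.array([2.0, 1.0])
print(axis_parallel_split(x, feature=0, threshold=1.5))   # False
print(oblique_split(x, w=np.array([-1.0, 1.0]), b=0.5))   # True: -2 + 1 + 0.5 <= 0
```

Because the hyperplane's orientation is free, a single oblique node can separate classes that would require many axis-parallel nodes, which is why the paper reports smaller trees.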
Multivariate Decision Trees
, 1992
Cited by 119 (6 self)
Multivariate decision trees overcome a representational limitation of univariate decision trees: univariate decision trees are restricted to splits of the instance space that are orthogonal to a feature's axis. This paper discusses the following issues for constructing multivariate decision trees: representing a multivariate test, including symbolic and numeric features, learning the coefficients of a multivariate test, selecting the features to include in a test, and pruning of multivariate decision trees. We present some new methods and review some well-known methods for forming multivariate decision trees. The methods are compared across a variety of learning tasks to assess each method's ability to find concise, accurate decision trees. The results demonstrate that some multivariate methods are more effective than others. In addition, the experiments confirm that allowing multivariate tests improves the accuracy of the resulting decision tree over univariate trees.
Addressing the Selective Superiority Problem: Automatic Algorithm/Model Class Selection
, 1993
Cited by 63 (2 self)
The results of empirical comparisons of existing learning algorithms illustrate that each algorithm has a selective superiority; it is best for some but not all tasks. Given a data set, it is often not clear beforehand which algorithm will yield the best performance. In such cases one must search the space of available algorithms to find the one that produces the best classifier. In this paper we present an approach that applies knowledge about the representational biases of a set of learning algorithms to conduct this search automatically. In addition, the approach permits the available algorithms' model classes to be mixed in a recursive tree-structured hybrid. We describe an implementation of the approach, MCS, that performs a heuristic best-first search for the best hybrid classifier for a set of data. An empirical comparison of MCS to each of its primitive learning algorithms, and to the computationally intensive method of cross-validation, illustrates that automatic selection of l...
An Improved Algorithm for Incremental Induction of Decision Trees
 In Proceedings of the Eleventh International Conference on Machine Learning
, 1994
Cited by 45 (4 self)
This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called `slewing' is introduced. Finally, a non-incremental method is given for finding a decision tree based on a direct metric of a candidate tree.
Simplifying Decision Trees: A Survey
, 1996
Cited by 38 (5 self)
Induced decision trees are an extensively researched solution to classification tasks. For many practical tasks, the trees produced by tree-generation algorithms are not comprehensible to users due to their size and complexity. Although many tree induction algorithms have been shown to produce simpler, more comprehensible trees (or data structures derived from trees) with good classification accuracy, tree simplification has usually been of secondary concern relative to accuracy, and no attempt has been made to survey the literature from the perspective of simplification. We present a framework that organizes the approaches to tree simplification, and we summarize and critique the approaches within this framework. The purpose of this survey is to provide researchers and practitioners with a concise overview of tree-simplification approaches and insight into their relative capabilities. In our final discussion, we briefly describe some empirical findings and discuss the application of tree i...
Constructing X-of-N Attributes for Decision Tree Learning
 Machine Learning
, 1998
Cited by 19 (0 self)
While many constructive induction algorithms focus on generating new binary attributes, this paper explores novel methods of constructing nominal and numeric attributes. We propose a new constructive operator, X-of-N. An X-of-N representation is a set containing one or more attribute-value pairs. For a given instance, the value of an X-of-N representation corresponds to the number of its attribute-value pairs that are true of the instance. A single X-of-N representation can directly and simply represent any concept that can be represented by a single conjunctive, a single disjunctive, or a single M-of-N representation commonly used for constructive induction, and the reverse is not true. In this paper, we describe a constructive decision tree learning algorithm, called X-of-N. When building decision trees, this algorithm creates one X-of-N representation, either as a nominal attribute or as a numeric attribute, at each decision node. The construction of X-of-N representations is carrie...
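The value of an X-of-N representation, as defined in the abstract, is simply a count of satisfied attribute-value pairs. A minimal sketch (dictionary-based instances and the function name are illustrative, not from the paper):

```python
def x_of_n_value(instance, pairs):
    """Value of an X-of-N representation for an instance:
    the number of its attribute-value pairs that are true of the instance."""
    return sum(1 for attr, val in pairs if instance.get(attr) == val)

# Hypothetical instance and X-of-N with three attribute-value pairs.
inst = {"color": "red", "shape": "round", "size": "small"}
pairs = [("color", "red"), ("shape", "square"), ("size", "small")]
print(x_of_n_value(inst, pairs))  # 2: color and size match, shape does not
```

Thresholding this count at N recovers a conjunction, at 1 a disjunction, and at M an M-of-N test, which is the subsumption claim made above.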
A scheme for feature construction and a comparison of empirical methods
 Proceedings of the Twelfth International Joint Conference on Artificial Intelligence
, 1991
Cited by 17 (1 self)
A class of concept learning algorithms CL augments standard similarity-based learning (SBL) techniques by performing feature construction based on the SBL output. Pagallo and Haussler's FRINGE, Pagallo's extension Symmetric FRINGE (SymFringe), and a refinement we call DCFringe are all instances of this class using decision trees as their underlying representation. These methods use patterns at the fringe of the tree to guide their construction, but DCFringe uses limited construction of conjunction and disjunction. Experiments with small DNF and CNF concepts show that DCFringe outperforms both the purely conjunctive FRINGE and the less restrictive SymFringe in terms of accuracy, conciseness, and efficiency. Further, the gain of these methods is linked to the size of the training set. We discuss the apparent limitation of current methods to concepts exhibiting a low degree of feature interaction, and suggest ways to alleviate it. This leads to a feature construction approach based on a wider variety of patterns restricted by statistical measures and optional knowledge.
Automatic Feature Construction and a Simple Rule Induction Algorithm for Skin Detection
 In Proc. of the ICML Workshop on Machine Learning in Computer Vision
, 2002
Cited by 16 (1 self)
Many vision systems use skin detection as a principal component. Skin detection algorithms normally evaluate a single, and thus limited, color model, such as HSV, YCrCb, YUV, RGB, normalized RGB, etc. Their limited performance, however, suggests that they are looking at the incorrect color models.
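One of the color models the abstract lists, normalized RGB, is a one-line transform that discards intensity and keeps chromaticity. A small sketch for reference (the function name and example pixel are illustrative, not from the paper):

```python
def normalized_rgb(r, g, b):
    """Convert an RGB pixel to normalized RGB (chromaticity coordinates).
    Each channel is divided by the channel sum, so the result is
    intensity-invariant and its components sum to 1."""
    s = r + g + b
    if s == 0:
        return (0.0, 0.0, 0.0)  # black pixel: chromaticity is undefined
    return (r / s, g / s, b / s)

print(normalized_rgb(200, 120, 80))  # (0.5, 0.3, 0.2)
```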
Constructing Nominal X-of-N Attributes
 Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence
, 1995
Cited by 14 (6 self)
Most constructive induction researchers focus only on new boolean attributes. This paper reports a new constructive induction algorithm, called X-of-N, that constructs new nominal attributes in the form of X-of-N representations. An X-of-N is a set containing one or more attribute-value pairs. For a given instance, its value corresponds to the number of its attribute-value pairs that are true. The promising preliminary experimental results, on both artificial and real-world domains, show that constructing new nominal attributes in the form of X-of-N representations can significantly improve the performance of selective induction in terms of both higher prediction accuracy and lower theory complexity.
Theoretical Comparison between the Gini Index and Information Gain Criteria
 Annals of Mathematics and Artificial Intelligence
, 2000
Cited by 11 (0 self)
Knowledge Discovery in Databases (KDD) is an active and important research area with the promise of a high payoff in many business and scientific applications. One of the main tasks in KDD is classification. A particularly efficient method for classification is decision tree induction. The selection of the attribute used at each node of the tree to split the data (the split criterion) is crucial in order to classify objects correctly. Different split criteria have been proposed in the literature (Information Gain, Gini Index, etc.). It is not obvious which of them will produce the best decision tree for a given data set. A large number of empirical tests have been conducted in order to answer this question. No conclusive results were found.
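The two impurity measures compared in this paper are each a one-liner over the class counts at a node. A minimal sketch (function names and example counts are illustrative): the Gini index, and the entropy on which information gain is based:

```python
import math

def gini(counts):
    """Gini index of a node: 1 - sum of squared class proportions."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def entropy(counts):
    """Entropy of a node in bits: -sum p * log2(p) over non-empty classes."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

print(gini([5, 5]))              # 0.5  (maximal for two classes)
print(entropy([5, 5]))           # 1.0  (maximal for two classes)
print(round(gini([9, 1]), 2))    # 0.18 (nearly pure node)
```

Both measures are zero for a pure node and maximal for a uniform class distribution; a split criterion picks the attribute whose weighted child impurity is lowest, and the paper's question is when the two rankings disagree.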