Results 11 - 14 of 14
Decision Trees For Classification: A Review And Some New Results
Abstract

Cited by 1 (0 self)
Introduction: Top-down induction of decision trees is a simple and powerful method of inferring classification rules from a set of labeled examples [1]. Each node of the tree implements a decision rule that splits the examples into two or more partitions. New nodes are created to handle each of the partitions, and a node is considered terminal, or a leaf node, based on a stopping criterion. This standard approach to decision tree construction thus corresponds to a top-down greedy algorithm that makes locally optimal decisions at each node. Decision trees have two advantages over many other classification methods. The first is that the sequence of decisions made from the root node to the eventual labeling of a test input is easy to follow. This gives them an intuitive appeal that other methods of classification such as ...
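The top-down greedy induction described above can be sketched in a few lines. The following is a minimal ID3-style illustration, not any specific algorithm from the paper: names like `build_tree` and `best_attribute` are my own, the split criterion shown is plain information gain, and the stopping criterion is purity or attribute exhaustion.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a multiset of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, attributes):
    """Greedy local choice: attribute whose multiway split maximizes info gain."""
    base = entropy(labels)
    def gain(a):
        n, remainder = len(rows), 0.0
        for v in set(r[a] for r in rows):
            sub = [lab for r, lab in zip(rows, labels) if r[a] == v]
            remainder += len(sub) / n * entropy(sub)
        return base - remainder
    return max(attributes, key=gain)

def build_tree(rows, labels, attributes):
    """Top-down induction: recurse until a node is pure or attributes run out."""
    if len(set(labels)) == 1:
        return labels[0]                                 # leaf: pure partition
    if not attributes:
        return Counter(labels).most_common(1)[0][0]      # leaf: majority vote
    a = best_attribute(rows, labels, attributes)
    node = {}
    for v in set(r[a] for r in rows):                    # one child per value
        idx = [i for i, r in enumerate(rows) if r[a] == v]
        node[(a, v)] = build_tree([rows[i] for i in idx],
                                  [labels[i] for i in idx],
                                  [b for b in attributes if b != a])
    return node
```

Because each split is chosen only from local information gain, the resulting tree is locally, not globally, optimal, which is exactly the trade-off the abstract points out.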
A Distance-Based Attribute Selection Measure for Decision Tree Induction
Abstract
Abstract. This note introduces a new attribute selection measure for ID3-like inductive algorithms. This measure is based on a distance between partitions such that the selected attribute in a node induces the partition which is closest to the correct partition of the subset of training examples corresponding to this node. The relationship of this measure with Quinlan's information gain is also established. It is also formally proved that our distance is not biased towards attributes with large numbers of values. Experimental studies with this distance confirm previously reported results showing that the predictive accuracy of induced decision trees is not sensitive to the goodness of the attribute selection measure. However, this distance produces smaller trees than the gain ratio measure of Quinlan, especially in the case of data whose attributes have significantly different numbers of values.
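A distance between partitions of this kind can be sketched with conditional entropies. The sketch below is an illustration under my own assumptions, not necessarily the exact measure of the paper: it uses the normalized symmetric form d(X, Y) = (H(X|Y) + H(Y|X)) / H(X, Y), which is 0 for identical partitions and 1 for independent ones, and the function name `partition_distance` is hypothetical.

```python
import math
from collections import Counter

def _h(counts, n):
    """Entropy (bits) of a block-size distribution over n elements."""
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def partition_distance(xs, ys):
    """Normalized distance between two partitions of the same element set,
    given as block labels xs and ys. Uses H(X|Y) = H(X,Y) - H(Y)."""
    n = len(xs)
    hx = _h(Counter(xs).values(), n)            # entropy of partition X
    hy = _h(Counter(ys).values(), n)            # entropy of partition Y
    hxy = _h(Counter(zip(xs, ys)).values(), n)  # joint entropy H(X, Y)
    return (2 * hxy - hx - hy) / hxy if hxy else 0.0
```

In an ID3-like learner, one would score each candidate attribute by the distance between the partition it induces and the class partition, and split on the attribute with the smallest distance; the normalization by joint entropy is what counteracts the bias towards many-valued attributes that raw information gain exhibits.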
Abstract
me with some related work on learning decision structures and decision graphs, and Professor George Tecuci, Computer Science Department, for pointing out some related work. I would like to thank my colleagues: Nabil Al-Kharouf for reviewing my dissertation, Eric Bloedorn for reviewing an earlier draft of my dissertation and for the use of his program AQ17-DCI in my experiments, Srinivas Gutta for providing some applications for my Ph.D. work, Mike Heib for reviewing an earlier draft of my dissertation and helping me find relevant articles, Ken Kaufman for reviewing an earlier draft of my thesis, Mark Maloof for providing me with script files which made it easier to iteratively run AQ15c, Halah Vafaie for working with me on the application and comparison of different aspects of my work, and Janusz Wnek for the use of his DIAV program for