Results 1–10 of 28
A System for Induction of Oblique Decision Trees
Journal of Artificial Intelligence Research, 1994
Abstract

Cited by 251 (13 self)
This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees.
1. Introduction
Current data collection technology provides a unique challenge and opportunity for automated machine learning techniques. The advent of major scientific projects such as the Human Genome Project, the Hubble Space Telescope, and the human brain mappi...
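The search strategy the abstract describes (deterministic hill-climbing over hyperplane coefficients, combined with random perturbation and random restarts) can be sketched in a few lines. This is a simplified illustration, not the OC1 implementation: the Gini impurity measure, the perturbation scale, and all function names are assumptions chosen for the example.

```python
import numpy as np

def gini_impurity(y):
    """Gini impurity of a label vector (0 for a pure or empty node)."""
    if len(y) == 0:
        return 0.0
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_impurity(X, y, w, b):
    """Weighted Gini impurity of the oblique split w . x + b > 0."""
    mask = X @ w + b > 0
    n = len(y)
    return (mask.sum() * gini_impurity(y[mask])
            + (~mask).sum() * gini_impurity(y[~mask])) / n

def find_oblique_split(X, y, n_restarts=5, n_steps=50, rng=None):
    """Hill-climb the hyperplane coefficients from several random starts,
    accepting a random perturbation only when impurity drops."""
    rng = np.random.default_rng(rng)
    best = (None, None, np.inf)
    for _ in range(n_restarts):                 # randomized restarts
        w, b = rng.normal(size=X.shape[1]), rng.normal()
        imp = split_impurity(X, y, w, b)
        for _ in range(n_steps):                # greedy acceptance rule
            i = rng.integers(X.shape[1] + 1)    # coefficient to perturb
            w2, b2 = w.copy(), b
            if i < X.shape[1]:
                w2[i] += rng.normal(scale=0.5)
            else:
                b2 += rng.normal(scale=0.5)
            imp2 = split_impurity(X, y, w2, b2)
            if imp2 < imp:
                w, b, imp = w2, b2, imp2
        if imp < best[2]:
            best = (w, b, imp)
    return best
```

Because the weighted Gini impurity of any split is never worse than the unsplit node's impurity, the climb can only improve; the random restarts are what let the search escape poor local optima of the deterministic part.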
Multivariate Decision Trees
1992
Abstract

Cited by 119 (6 self)
Multivariate decision trees overcome a representational limitation of univariate decision trees: univariate decision trees are restricted to splits of the instance space that are orthogonal to the feature axes. This paper discusses the following issues for constructing multivariate decision trees: representing a multivariate test, including symbolic and numeric features, learning the coefficients of a multivariate test, selecting the features to include in a test, and pruning of multivariate decision trees. We present some new and review some well-known methods for forming multivariate decision trees. The methods are compared across a variety of learning tasks to assess each method's ability to find concise, accurate decision trees. The results demonstrate that some multivariate methods are more effective than others. In addition, the experiments confirm that allowing multivariate tests improves the accuracy of the resulting decision tree over univariate trees.
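The representational difference this abstract turns on (a univariate test compares one feature against a threshold, while a multivariate test compares a linear combination of features) can be made concrete with a toy diagonal concept. The dataset, the threshold grid, and the function names below are illustrative assumptions, not material from the paper.

```python
import numpy as np

def univariate_split(X, i, t):
    """Axis-parallel test: does feature i exceed threshold t?"""
    return X[:, i] > t

def multivariate_split(X, w, t):
    """Linear-combination test: does w . x exceed threshold t?"""
    return X @ w > t

# Toy diagonal concept y = [x0 + x1 > 1]: one multivariate test matches
# it exactly, while any single axis-parallel test leaves errors.
rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 2))
y = X[:, 0] + X[:, 1] > 1

multi_acc = np.mean(multivariate_split(X, np.array([1.0, 1.0]), 1.0) == y)
best_uni_acc = max(
    np.mean(univariate_split(X, i, t) == y)
    for i in (0, 1)
    for t in np.linspace(0.0, 1.0, 101)
)
```

A univariate tree must approximate the diagonal boundary with a staircase of many axis-parallel splits, which is exactly why the multivariate trees discussed here can be more concise.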
OC1: Randomized induction of oblique decision trees
1993
Abstract

Cited by 55 (4 self)
This paper introduces OC1, a new algorithm for generating multivariate decision trees. Multivariate trees classify examples by testing linear combinations of the features at each non-leaf node of the tree. Each test is equivalent to a hyperplane at an oblique orientation to the axes. Because of the computational intractability of finding an optimal orientation for these hyperplanes, heuristic methods must be used to produce good trees. This paper explores a new method that combines deterministic and randomized procedures to search for a good tree. Experiments on several different real-world data sets demonstrate that the method consistently finds much smaller trees than comparable methods using univariate tests. In addition, the accuracy of the trees found with our method matches or exceeds the best results of other machine learning methods.
1 Introduction
Decision trees (DTs) have been used quite extensively in the machine learning literature for a wide range of classification probl...
Comparing Connectionist and Symbolic Learning Methods
Computational Learning Theory and Natural Learning Systems: Constraints and Prospects, 1994
Abstract

Cited by 48 (0 self)
Experimental comparisons of backpropagation and decision tree methods have provided many data points but less understanding of why one method works better for some tasks than for others. This paper observes that, just as there are sequential and parallel classification methods, there are certain classification tasks that lend themselves to methods of one or the other type.
Introduction
Numerous papers that have appeared over the last few years compare the performance of a variety of learning algorithms on real and constructed datasets. Such comparisons, uncovering the strengths and weaknesses of algorithms on different tasks, provide valuable data points that help to map and understand the inherent capabilities of the methods. One emerging theme is that these capabilities appear to be task-dependent: few researchers would claim that one method is uniformly superior to another. This paper focuses on two kinds of learning algorithms: symbolic methods, which represent what is le...
Tree Based Discretization for Continuous State Space Reinforcement Learning
1998
Abstract

Cited by 43 (5 self)
Reinforcement learning is an effective technique for learning action policies in discrete stochastic environments, but its efficiency can decay exponentially with the size of the state space. In many situations significant portions of a large state space may be irrelevant to a specific goal and can be aggregated into a few relevant states. The U Tree algorithm generates a tree-based state discretization that efficiently finds the relevant state chunks of large propositional domains. In this paper, we extend the U Tree algorithm to challenging domains with a continuous state space for which there is no initial discretization. This Continuous U Tree algorithm transfers traditional regression tree techniques to reinforcement learning. We have performed experiments in a variety of domains that show that Continuous U Tree effectively handles large continuous state spaces. We report results in two different domains: one gives a clear visualization of the algorithm, and the other empirically demonstrates an effective state discretization in a simple multi-agent environment.
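The regression-tree flavor of Continuous U Tree, splitting a continuous state dimension where it most reduces the variance of the values on each side, can be sketched as follows. This is a minimal sketch of the split criterion only, assuming a 1-D state and a variance-reduction score; the published algorithm also handles transitions and multiple dimensions, and the function name is an assumption.

```python
import numpy as np

def best_value_split(states, values):
    """Pick the 1-D split point that most reduces the summed variance of
    the values on each side, as a regression tree would."""
    order = np.argsort(states)
    s, v = np.asarray(states)[order], np.asarray(values)[order]
    n = len(v)
    best_point, best_sse = None, v.var() * n  # unsplit baseline
    for k in range(1, n):
        if s[k] == s[k - 1]:
            continue  # no boundary between identical state values
        sse = v[:k].var() * k + v[k:].var() * (n - k)
        if sse < best_sse:
            best_point, best_sse = (s[k - 1] + s[k]) / 2, sse
    return best_point, best_sse
```

On a step-shaped value function the chosen boundary lands at the step, which is exactly the kind of "relevant state chunk" the abstract refers to.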
Feature Generation Using General Constructor Functions
Machine Learning, 2002
Abstract

Cited by 31 (5 self)
Most classification algorithms receive as input a set of attributes of the classified objects. In many cases, however, the supplied set of attributes is not sufficient for creating an accurate, succinct and comprehensible representation of the target concept. To overcome this problem, researchers have proposed algorithms for automatic construction of features. The majority of these algorithms use a limited predefined set of operators for building new features. In this paper we propose a generalized and flexible framework that is capable of generating features from any given set of constructor functions. These can be domain-independent functions such as arithmetic and logic operators, or domain-dependent operators that rely on partial knowledge on the part of the user. The paper describes an algorithm which receives as input a set of classified objects, a set of attributes, and a specification for a set of constructor functions that contains their domains, ranges and properties. The algorithm produces as output a set of generated features that can be used by standard concept learners to create improved classifiers. The algorithm maintains a set of its best generated features and improves this set iteratively. During each iteration, the algorithm performs a beam search over its defined feature space and constructs new features by applying constructor functions to the members of its current feature set. The search is guided by general heuristic measures that are not confined to a specific feature representation. The algorithm was applied to a variety of classification problems and was able to generate features that were strongly related to the underlying target concepts. These features also significantly improved the accuracy achieved by standard concept learners, for a ...
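The iterative scheme described, applying constructor functions to the current feature set and keeping the best-scoring candidates via beam search, can be sketched roughly as follows. The correlation-based score, the beam width, and the function names are assumptions for illustration; the paper's heuristic measures and bookkeeping are more elaborate.

```python
import numpy as np

def construct_features(X, y, constructors, beam=3, iters=2):
    """Beam-search sketch: repeatedly apply binary constructor functions
    to the current feature set, keeping the features whose absolute
    correlation with the target y is highest."""
    feats = {f"x{i}": X[:, i] for i in range(X.shape[1])}
    yc = y - y.mean()

    def score(v):
        vc = v - v.mean()
        denom = np.linalg.norm(vc) * np.linalg.norm(yc)
        return abs(vc @ yc) / denom if denom > 0 else 0.0

    for _ in range(iters):
        names = list(feats)
        for fname, f in constructors.items():
            for a in names:
                for b in names:
                    feats[f"{fname}({a},{b})"] = f(feats[a], feats[b])
        top = sorted(feats, key=lambda k: score(feats[k]), reverse=True)
        feats = {k: feats[k] for k in top[:beam]}  # prune to the beam
    return feats
```

With arithmetic constructors and a multiplicative target, the search recovers the product feature by name (e.g. "mul(x0,x1)"), illustrating how constructed features can be handed to a standard concept learner afterwards.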
Tree Based Hierarchical Reinforcement Learning
2002
Abstract

Cited by 16 (1 self)
In this thesis we investigate methods for speeding up automatic control algorithms. Specifically, we provide new abstraction techniques for Reinforcement Learning and Semi-Markov Decision Processes (SMDPs). We introduce the use of policies as temporally abstract actions. This differs from previous definitions of temporally abstract actions in that we do not require termination criteria. We provide an approach for processing previously solved problems to extract these policies. We also contribute a method for using supplied or extracted policies to guide and speed up problem solving of new problems. We treat extracting policies as a supervised learning task and introduce the Lumberjack algorithm that extracts repeated substructure within a decision tree. We then introduce the TTree algorithm that combines state and temporal abstraction to increase problem solving speed on new problems. TTree solves SMDPs by using both user- and machine-supplied policies as temporally abstract actions while generating its own tree-based abstract state representation.
Discriminant Trees
1999
Abstract

Cited by 13 (0 self)
In a previous work, we presented the system Ltree, a multivariate tree that combines a decision tree with a linear discriminant by means of constructive induction. We have shown that it performs quite well, in terms of accuracy and learning times, in comparison with other multivariate systems like LMDT, OC1, and CART. In this work, we extend the previous work by using two new discriminant functions: a quadratic discriminant and a logistic discriminant. Using the same architecture as Ltree, we obtain two new multivariate trees, Qtree and LgTree. The three systems have been evaluated on 17 UCI datasets. From the empirical study, we argue that these systems can be viewed as a composition of classifiers with low error correlation. A bias-variance analysis of the error rate shows that the error reduction of all the systems, in comparison to a univariate tree, is due to a reduction in both components.
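A linear discriminant used as an oblique node test, in the spirit of the Ltree family, can be sketched with a two-class Fisher discriminant. This is a generic textbook construction, not Ltree's actual procedure; the regularization constant and the function name are assumptions for the example.

```python
import numpy as np

def fisher_direction(X, y):
    """Two-class Fisher linear discriminant: returns a weight vector w
    and threshold t so that the test w . x > t can serve as an oblique
    split at a tree node."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter, lightly regularized (assumed 1e-8).
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw + 1e-8 * np.eye(X.shape[1]), m1 - m0)
    t = w @ (m0 + m1) / 2  # boundary halfway between projected means
    return w, t
```

Swapping in a quadratic or logistic discriminant at the node, as the abstract describes, changes how w and t are fit but leaves the tree architecture unchanged.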
Non-Linear Decision Trees – NDT
In Int. Conf. on Machine Learning, 1996
Abstract

Cited by 8 (1 self)
Most decision tree algorithms focus on univariate, i.e. axis-parallel, tests at each internal node of a tree. Oblique decision trees use multivariate linear tests at each non-leaf node. This paper reports a novel approach to the construction of non-linear decision trees. The crux of this method consists of the generation of new features and the augmentation of the primitive features with these new ones. The resulting non-linear decision trees are more accurate than their axis-parallel or oblique counterparts. Experiments on several artificial and real-world data sets demonstrate this property.
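The feature-augmentation idea, generating new features from the primitives so that later axis-parallel tests become non-linear surfaces in the original space, can be illustrated with quadratic terms. The choice of pairwise products and squares is an assumption for the example; the paper's own feature generation method is not reproduced here.

```python
import numpy as np

def augment_nonlinear(X):
    """Augment primitive features with all pairwise products (including
    squares), so that a later axis-parallel test on a new feature is a
    quadratic surface in the original space."""
    d = X.shape[1]
    cols = [X]
    for i in range(d):
        for j in range(i, d):
            cols.append((X[:, i] * X[:, j])[:, None])
    return np.hstack(cols)
```

A single threshold test on the product column x0*x1, for instance, carves out a hyperbolic region that no single axis-parallel or linear oblique split can express.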
Hybrid Decision Tree
2002
Abstract

Cited by 6 (2 self)
In this paper, a hybrid learning approach named HDT is proposed. HDT simulates human reasoning by using symbolic learning to do qualitative analysis and using neural learning to do subsequent quantitative analysis. It generates the trunk of a binary hybrid decision tree according to the binary information gain ratio criterion in an instance space defined by only original unordered attributes. If unordered attributes cannot further distinguish training examples falling into a leaf node whose diversity is beyond the diversity threshold, then the node is marked as a dummy node. After all those dummy nodes are marked, a specific feedforward neural network named Fnqc that is trained in an instance space defined by only original ordered attributes is exploited to accomplish the learning task. Moreover, this paper distinguishes three kinds of incremental learning tasks. Two incremental learning procedures designed for example-incremental learning with different storage requirements are provided, which enables HDT to deal gracefully with data sets where new data are frequently appended. Also, a hypothesis-driven constructive induction mechanism is provided, which enables HDT to generate compact concept descriptions.
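The qualitative-then-quantitative division of labor described here can be caricatured in a small sketch: one symbolic split on an unordered attribute, with any high-diversity branch handed off as a "dummy node" to a numeric learner. A plain logistic model trained by gradient descent stands in for the neural stage (the paper's network is not reproduced); the diversity threshold, the entropy measure, and all names are assumptions for the example.

```python
import numpy as np

def entropy(y):
    """Shannon entropy of a label vector, used as the diversity measure."""
    _, c = np.unique(y, return_counts=True)
    p = c / c.sum()
    return float(-np.sum(p * np.log2(p)))

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Tiny gradient-descent logistic model standing in for the neural
    stage; returns a predictor over the ordered (numeric) attributes."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w = w - lr * Xb.T @ (p - y) / len(y)
    def predict(Z):
        Zb = np.hstack([Z, np.ones((len(Z), 1))])
        return 1.0 / (1.0 + np.exp(-Zb @ w)) > 0.5
    return predict

def hybrid_node(cat, Xnum, y, diversity_threshold=0.3):
    """One qualitative split on the unordered attribute `cat`; a branch
    whose label diversity exceeds the threshold becomes a 'dummy' node
    handled quantitatively by the logistic model."""
    branch = {}
    for v in np.unique(cat):
        idx = cat == v
        if entropy(y[idx]) > diversity_threshold:
            branch[v] = fit_logistic(Xnum[idx], y[idx])   # dummy node
        else:
            maj = bool(round(float(y[idx].mean())))        # pure-ish leaf
            branch[v] = (lambda m: lambda Z: np.full(len(Z), m))(maj)
    def predict(c, Z):
        return np.array([branch[ci](zi[None, :])[0] for ci, zi in zip(c, Z)])
    return predict
```

The unordered attribute resolves everything it can symbolically; only the branches it leaves impure pay the cost of numeric training, mirroring the trunk-plus-dummy-node structure in the abstract.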