Results 1–10 of 84
Multivariate Decision Trees
, 1992
Cited by 119 (6 self)

Abstract:
Multivariate decision trees overcome a representational limitation of univariate decision trees: univariate decision trees are restricted to splits of the instance space that are orthogonal to a single feature's axis. This paper discusses the following issues for constructing multivariate decision trees: representing a multivariate test, including symbolic and numeric features, learning the coefficients of a multivariate test, selecting the features to include in a test, and pruning of multivariate decision trees. We present some new and review some well-known methods for forming multivariate decision trees. The methods are compared across a variety of learning tasks to assess each method's ability to find concise, accurate decision trees. The results demonstrate that some multivariate methods are more effective than others. In addition, the experiments confirm that allowing multivariate tests improves the accuracy of the resulting decision tree over univariate trees.
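The representational difference the abstract describes can be shown in a few lines: a univariate test compares a single feature against a threshold (an axis-parallel split), while a linear multivariate test compares a weighted combination of features against a threshold (an oblique split). A minimal sketch, with illustrative function names and values not taken from the paper:

```python
import numpy as np

def univariate_split(x, feature, threshold):
    """Axis-parallel test: the split is orthogonal to one feature's axis."""
    return x[feature] > threshold

def multivariate_split(x, weights, threshold):
    """Oblique linear test (w . x > t): the split hyperplane may take any
    orientation in the instance space."""
    return float(np.dot(weights, x)) > threshold

# A point that a test on feature 0 alone sends one way,
# while a linear combination of both features sends it the other.
x = np.array([0.4, 0.9])
print(univariate_split(x, feature=0, threshold=0.5))     # False
print(multivariate_split(x, np.array([1.0, 1.0]), 1.0))  # True
```

Learning the weight vector for each node is exactly the coefficient-learning problem the paper surveys.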
Hierarchical discriminant regression
 IEEE Trans. Pattern Anal. Mach. Intell
, 2000
Cited by 46 (24 self)

Abstract:
The main motivation of this paper is to propose a new classification and regression method for challenging high-dimensional data. The proposed technique casts classification problems (class labels as output) and regression problems (numeric values as output) into a unified regression problem. This unified view enables classification problems to use numeric information in the output space that is available for regression problems but is traditionally not readily available for classification problems: a distance metric among clustered class labels for coarse and fine classifications. A doubly clustered subspace-based hierarchical discriminating regression (HDR) method is proposed in this work. Its major characteristics are: 1) Clustering is performed in both the output space and the input space at each internal node, hence "doubly clustered." Clustering in the output space provides virtual labels for computing clusters in the input space. 2) Discriminants in the input space are automatically derived from the clusters in the input space. These discriminants span the discriminating subspace at each internal node of the tree. 3) A hierarchical probability distribution model is applied to the resulting discriminating subspace at each internal node. This realizes a coarse-to-fine approximation of the probability distribution of the input samples in the hierarchical discriminating subspaces. No global distribution models are assumed. 4) To relax the per-class sample requirement of traditional discriminant analysis techniques, a sample-size-dependent negative-log-likelihood (NLL) is introduced. This new technique is designed to deal automatically with small-sample, large-sample, and unbalanced-sample applications. 5) The execution of the HDR method is fast, due to the empirical logarithmic time complexity of the HDR algorithm.
Although the method is applicable to any data, we report experimental results for three types of data: synthetic data for examining the near-optimal performance, large raw face-image databases, and traditional databases with manually selected features, along with a comparison with some major existing methods, such as CART.
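The first "doubly clustered" step above can be sketched: k-means clustering in the output space yields virtual labels, which would then drive the input-space clustering and discriminant computation at each internal node. This is a simplified stand-in, not the paper's full HDR procedure:

```python
import numpy as np

def virtual_labels(Y, n_clusters, iters=20, seed=0):
    """Plain k-means over the OUTPUT vectors Y (one row per sample).
    The resulting cluster indices act as 'virtual labels' for the
    samples, usable even when no class labels exist (regression)."""
    rng = np.random.default_rng(seed)
    centers = Y[rng.choice(len(Y), n_clusters, replace=False)].astype(float)
    labels = np.zeros(len(Y), dtype=int)
    for _ in range(iters):
        # assign each output vector to its nearest center
        dists = ((Y[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # recompute centers from the current assignment
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = Y[labels == k].mean(axis=0)
    return labels
```

In HDR these labels are computed per internal node and consumed by input-space discriminant analysis; here only the output-space half is shown.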
Using evolutionary algorithms as instance selection for data reduction in KDD: an experimental study
 IEEE Trans Evol Comput
, 2003
Cited by 41 (13 self)

Abstract:
Evolutionary algorithms are adaptive methods based on natural evolution that may be used for search and optimization. As data reduction in knowledge discovery in databases (KDD) can be viewed as a search problem, it can be solved using evolutionary algorithms (EAs). In this paper, we carry out an empirical study of the performance of four representative EA models, taking into account two different instance selection perspectives for data reduction in KDD: prototype selection and training-set selection. The paper includes a comparison between these algorithms and other, non-evolutionary instance selection algorithms. The results show that the evolutionary instance selection algorithms consistently outperform the non-evolutionary ones, the main advantages being better instance reduction rates, higher classification accuracy, and models that are easier to interpret. Index Terms—Data mining (DM), data reduction, evolutionary algorithms (EAs), instance selection, knowledge discovery.
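Instance selection as a search problem can be made concrete: a chromosome is a binary mask over the training set, and fitness blends 1-NN accuracy with the reduction rate. The sketch below is a generic GA under those assumptions (truncation selection, one-point crossover, bit-flip mutation); it is not the specific EA models compared in the paper:

```python
import random

def nn_accuracy(mask, X, y):
    """Leave-one-out 1-NN accuracy using only the retained prototypes."""
    kept = [i for i, keep in enumerate(mask) if keep]
    if not kept:
        return 0.0
    correct = 0
    for i in range(len(X)):
        pool = [j for j in kept if j != i] or kept
        nn = min(pool, key=lambda j: sum((a - b) ** 2 for a, b in zip(X[i], X[j])))
        correct += (y[nn] == y[i])
    return correct / len(X)

def fitness(mask, X, y, alpha=0.5):
    """Weighted blend of classification accuracy and instance reduction."""
    reduction = 1.0 - sum(mask) / len(mask)
    return alpha * nn_accuracy(mask, X, y) + (1 - alpha) * reduction

def select_instances(X, y, pop_size=20, generations=30, p_mut=0.05, seed=1):
    """Generational GA over binary masks: which training instances to keep."""
    rng = random.Random(seed)
    n = len(X)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, X, y), reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)               # one-point crossover
            children.append([1 - g if rng.random() < p_mut else g
                             for g in a[:cut] + b[cut:]])   # bit-flip mutation
        pop = parents + children
    return max(pop, key=lambda m: fitness(m, X, y))
```

The alpha knob is the usual accuracy-versus-reduction trade-off; the kept subset is what a downstream KDD algorithm would train on.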
A Sequential Model for Multi-Class Classification. EMNLP ’01
, 2001
Cited by 33 (12 self)

Abstract:
Many classification problems require decisions among a large number of competing classes. These tasks, however, are not handled well by general-purpose learning methods and are usually addressed in an ad-hoc fashion. We suggest a general approach: a sequential learning model that utilizes classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidate set. Some theoretical and computational properties of the model are discussed, and we argue that these are important in NLP-like domains. The advantages of the model are illustrated in an experiment on part-of-speech tagging.
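The core loop of such a sequential model is small: each stage scores the surviving classes and keeps only its top-k, so cheap early classifiers shrink the candidate set before an expensive final decision. A minimal sketch with hypothetical scoring functions (the paper's actual classifiers are learned, not distance rules):

```python
def sequential_restrict(x, classes, stages):
    """stages: list of (score, k) pairs. Each stage ranks the surviving
    candidate classes by score(x, c) and keeps the top k, so the true
    class should remain in the set with high probability at every step."""
    candidates = list(classes)
    for score, k in stages:
        candidates.sort(key=lambda c: score(x, c), reverse=True)
        candidates = candidates[: max(1, k)]
    return candidates

# Toy example: a coarse stage keeps 3 of 6 classes, a finer stage picks 1.
stages = [
    (lambda x, c: -abs(x - c), 3),        # coarse, cheap scorer
    (lambda x, c: -abs(x - c) ** 2, 1),   # finer scorer on the survivors
]
print(sequential_restrict(4.2, range(6), stages))  # [4]
```

The guarantee the abstract refers to is about the true class surviving each truncation, which is a property of how aggressively each k is chosen.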
Decision Trees For Geometric Models
, 1993
Cited by 32 (4 self)

Abstract:
A fundamental problem in model-based computer vision is that of identifying which of a given set of geometric models is present in an image. Considering a "probe" to be an oracle that tells us whether or not a model is present at a given point, we study the problem of computing efficient strategies ("decision trees") for probing an image, with the goal of minimizing the number of probes necessary (in the worst case) to determine which single model is present. We show that a binary decision tree of height ⌈lg k⌉ always exists for k polygonal models (in fixed position), provided (1) they are non-degenerate (do not share boundaries) and (2) they share a common point of intersection. Further, we give an efficient algorithm for constructing such decision trees when the models are given as a set of polygons in the plane. We show that constructing a minimum-height tree is NP-complete if either of the two assumptions is omitted. We provide an efficient greedy heuristic strategy and show ...
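The probing strategy can be illustrated in an abstract set version (the paper works with polygons in the plane; here a model is just the set of probe points it covers). A greedy tree that splits the surviving models as evenly as possible at each probe achieves ⌈lg k⌉ depth when every answer is a clean halving:

```python
def build_probe_tree(models, probes):
    """models maps a model name to the set of probe points it covers.
    Greedily choose the probe whose yes/no answer splits the surviving
    models most evenly, then recurse; leaves are single model names."""
    if len(models) == 1:
        return next(iter(models))
    def imbalance(p):
        yes = sum(p in pts for pts in models.values())
        return abs(2 * yes - len(models))
    probe = min(probes, key=imbalance)
    yes = {m: pts for m, pts in models.items() if probe in pts}
    no = {m: pts for m, pts in models.items() if probe not in pts}
    if not yes or not no:
        raise ValueError("remaining models are indistinguishable by these probes")
    return (probe, build_probe_tree(yes, probes), build_probe_tree(no, probes))

def identify(tree, covered):
    """Walk the tree, asking the oracle whether the unknown model is
    present at each probe point ('covered' plays the oracle here)."""
    while isinstance(tree, tuple):
        probe, yes_branch, no_branch = tree
        tree = yes_branch if probe in covered else no_branch
    return tree
```

With four models distinguishable by two probe points, the greedy tree reaches depth 2 = ⌈lg 4⌉.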
Automatic Selection of Split Criterion during Tree Growing Based on Node Location
, 1995
Cited by 21 (1 self)

Abstract:
Typically, decision tree construction algorithms apply a single "goodness of split" criterion to form each test node of the tree. It is a hypothesis of this research that better results can be obtained if, during tree construction, one applies a split criterion suited to the "location" of the test node in the tree. Specifically, given the objective of maximizing predictive accuracy, test nodes near the root of the tree should be chosen using a measure based on information theory, whereas test nodes closer to the leaves of the pruned tree should be chosen to maximize classification accuracy on the training set. The results of an empirical evaluation illustrate that adapting the split criterion to node location can improve classification performance.

1 DECISION TREE CONSTRUCTION

A decision tree is either a leaf node containing a classification, or an attribute test with, for each value of the attribute, a branch to a decision tree. To classify an instance using a decision tree, one starts ...
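The two criteria and the depth-based switch can be sketched directly; `switch_depth` is an illustrative knob, not a value from the paper:

```python
import math

def info_gain(branches):
    """branches: list of {class: count} dicts, one per child node.
    Entropy of the parent minus the weighted entropy of the children."""
    def H(counts):
        n = sum(counts.values())
        return -sum(v / n * math.log2(v / n) for v in counts.values() if v)
    parent = {}
    for b in branches:
        for c, v in b.items():
            parent[c] = parent.get(c, 0) + v
    total = sum(parent.values())
    return H(parent) - sum(sum(b.values()) / total * H(b) for b in branches)

def train_accuracy(branches):
    """Training accuracy if every child predicts its majority class."""
    total = sum(sum(b.values()) for b in branches)
    return sum(max(b.values()) for b in branches) / total

def split_score(branches, depth, switch_depth=3):
    """Choose the criterion by node location: information theory near
    the root, training-set accuracy deeper in the tree."""
    return info_gain(branches) if depth < switch_depth else train_accuracy(branches)
```

A tree grower would call `split_score` with the node's depth when ranking candidate splits, instead of using one fixed criterion everywhere.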
Tree-Based Pursuit: Algorithm and Properties
, 2005
Cited by 21 (5 self)

Abstract:
This paper proposes a tree-based pursuit algorithm that efficiently trades off complexity and approximation performance for overcomplete signal expansions. Finding the sparsest representation of a signal using a redundant dictionary is, in general, an NP-hard problem. Even suboptimal algorithms such as Matching Pursuit remain highly complex. We propose a structuring strategy that can be applied to any redundant set of functions, and which basically groups similar atoms together. A measure of similarity based on coherence allows a highly redundant sub-dictionary of atoms to be represented by a unique element, called a molecule. When the clustering is applied recursively, on atoms and then on molecules, it naturally leads to the creation of a tree structure. We then present a new pursuit algorithm that uses the structure created by clustering as a decision tree. This tree-based algorithm offers an important complexity reduction with respect to Matching Pursuit, as it prunes large parts of the dictionary when traversing the tree. Recent results on incoherent dictionaries are extended to molecules, while the truly highly redundant nature of the dictionary stays hidden by the tree structure. We then derive recovery conditions on the structured dictionary under which tree-based pursuit is guaranteed to converge. Experimental results finally show that the gain in complexity offered by tree-based pursuit does not, in general, carry a high penalty in approximation performance. They show that the dimensionality of the problem is reduced thanks to the tree construction, without significant loss of information.
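The molecule idea and one greedy step can be sketched with a single-level tree (the paper clusters recursively by coherence; here the grouping is assumed given, and a molecule is just the normalised mean of its atoms):

```python
import numpy as np

def make_molecules(dictionary, groups):
    """dictionary: rows are unit-norm atoms. groups: lists of atom indices.
    Each molecule summarises its group by the normalised mean direction."""
    mols = []
    for g in groups:
        m = dictionary[g].mean(axis=0)
        mols.append(m / np.linalg.norm(m))
    return np.vstack(mols)

def tree_pursuit_step(signal, dictionary, groups, molecules):
    """One greedy step: descend via the most correlated molecule, then
    search only that molecule's atoms, instead of scanning the whole
    dictionary as plain Matching Pursuit would."""
    best_mol = int(np.argmax(np.abs(molecules @ signal)))
    atoms = groups[best_mol]
    local = int(np.argmax(np.abs(dictionary[atoms] @ signal)))
    atom = atoms[local]
    coef = float(dictionary[atom] @ signal)
    residual = signal - coef * dictionary[atom]
    return atom, coef, residual
```

Iterating on the residual gives the full pursuit; the complexity gain comes from scoring a handful of molecules plus one group rather than every atom.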
Tree-based modeling of prosodic phrasing and segmental duration for Korean TTS systems
Cited by 17 (0 self)

Abstract:
This study describes the tree-based modeling of prosodic phrasing, pause duration between phrases, and segmental duration for Korean TTS systems. We collected 400 sentences from various genres and built a corresponding speech corpus uttered by a professional female announcer. The phonemic and prosodic boundaries were manually marked on the recorded speech, and morphological analysis, grapheme-to-phoneme conversion, and syntactic analysis were also performed on the text. A decision tree and regression trees were trained on 240 sentences (approximately 20 minutes long) and tested on 160 sentences (approximately 13 minutes long). Features for modeling prosody are proposed, and their effectiveness is measured by interpreting the resulting trees. The misclassification rate of the decision tree was 14.46%, and the RMSEs of the regression trees, which predict pause duration and segmental duration, were 132 ms and 22 ms respectively for the test set. To understand the performance of our approac...
Classification of High Dimensional Data With Limited Training Samples
, 1998
Cited by 17 (8 self)
Belief Decision Trees: Theoretical foundations
, 2000
Cited by 15 (4 self)

Abstract:
This paper extends the decision tree technique to an uncertain environment where the uncertainty is represented by belief functions as interpreted in the Transferable Belief Model (TBM). This so-called belief decision tree is a new classification method adapted to uncertain data. We are concerned with the construction of the belief decision tree from a training set where the knowledge about the instances' classes is represented by belief functions, and with its use for the classification of new instances where the knowledge about the attributes' values is represented by belief functions. Keywords: Belief Functions, Decision Tree, Belief Decision Tree, Classification, Transferable Belief Model.

1 Introduction

Several learning methods have been developed for classification. Among these, the decision tree method may be one of the most commonly used supervised learning approaches. Indeed, decision trees are characterized by their capability to break down a complex decision problem ...
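In the TBM, a final classification decision is typically made by converting a belief function (a mass assignment over sets of classes) into a probability via the pignistic transformation, which is the piece most easily shown in code. This is the standard TBM transform, not the paper's tree-construction procedure:

```python
def pignistic(mass):
    """TBM pignistic transformation BetP: spread the mass of each focal
    set uniformly over its member classes, normalising away any mass
    assigned to the empty set. mass maps frozensets of classes to weights."""
    norm = 1.0 - mass.get(frozenset(), 0.0)
    bet = {}
    for focal, m in mass.items():
        if not focal:
            continue
        share = m / (len(focal) * norm)
        for c in focal:
            bet[c] = bet.get(c, 0.0) + share
    return bet

# Half the mass commits to class "a", half hesitates between "a" and "b".
m = {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.5}
print(pignistic(m))  # {'a': 0.75, 'b': 0.25}
```

At a belief decision tree leaf, the stored belief function would be transformed this way to pick the most probable class for a new instance.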