Results 1 - 10 of 249

Predicting Nearly as Well as the Best Pruning of a Decision Tree

by David P. Helmbold, Robert E. Schapire - Machine Learning, 1995
"... Many algorithms for inferring a decision tree from data involve a two-phase process: First, a very large decision tree is grown which typically ends up "over-fitting" the data. To reduce over-fitting, in the second phase, the tree is pruned using one of a number of available methods. The ..."
Cited by 82 (7 self)
will not be "much worse" (in a precise technical sense) than the predictions made by the best reasonably small pruning of the given decision tree. Thus, our procedure is guaranteed to be competitive (in terms of the quality of its predictions) with any pruning algorithm. We prove that our procedure
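The competitive guarantee described above is typically obtained by running an exponential-weights (Hedge-style) forecaster over the candidate prunings; the paper maintains those weights implicitly over exponentially many prunings, but the core update can be sketched over a small, explicit set of experts. This is an illustrative simplification (squared loss, explicit experts), not the paper's exact construction:

```python
import math

def hedge_total_loss(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster over a finite expert set.
    expert_preds: one tuple of expert predictions per round.
    outcomes: the true value revealed each round.
    Returns the cumulative squared loss of the weighted mixture."""
    n = len(expert_preds[0])          # number of experts
    weights = [1.0] * n
    total_loss = 0.0
    for preds, y in zip(expert_preds, outcomes):
        w_sum = sum(weights)
        mix = sum(w * p for w, p in zip(weights, preds)) / w_sum
        total_loss += (mix - y) ** 2
        # multiplicative update: experts with larger loss lose weight
        weights = [w * math.exp(-eta * (p - y) ** 2)
                   for w, p in zip(weights, preds)]
    return total_loss
```

With one perfect expert and one always-wrong expert, the mixture's cumulative loss stays close to the best expert's, which is the "nearly as well as the best pruning" behavior in miniature.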

Predicting Nearly as Well as the Best Pruning of a Planar Decision Graph

by Eiji Takimoto, Manfred K. Warmuth - Theoretical Computer Science , 2000
"... We design efficient on-line algorithms that predict nearly as well as the best pruning of a planar decision graph. We assume that the graph has no cycles. As in the previous work on decision trees, we implicitly maintain one weight for each of the prunings (exponentially many). The method works for a l ..."
Cited by 11 (1 self)

Pruning irrelevant features from oblivious decision trees

by Pat Langley - In Proceedings of the AAAI Fall Symposium on Relevance, 145-148, 1994
"... Abstract In this paper, we examine an approach to feature selection designed to handle domains that involve both irrelevant and interacting features. We review the reasons this situation poses challenges to both nearest neighbor and decision-tree methods, then describe a new algorithm -OBLIVION -th ..."
Cited by 2 (0 self)
the process. On each step, the algorithm tentatively prunes each of the remaining features, selects the best, and generates a new tree with one fewer attribute. This continues until the accuracy of the best pruned tree is less than the accuracy of the current one. Unlike Focus and Schlimmer's method
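The greedy loop described in this snippet is ordinary backward feature elimination. A minimal sketch, assuming a hypothetical `accuracy` oracle that stands in for training and evaluating a tree on a feature subset:

```python
def backward_eliminate(features, accuracy):
    """Greedy backward elimination: tentatively drop each remaining
    feature, keep the best drop, and stop once pruning hurts accuracy.
    `accuracy` is a callable mapping a feature set to validation accuracy
    (a stand-in for building and scoring a tree on that subset)."""
    current = set(features)
    best_acc = accuracy(current)
    while len(current) > 1:
        # score every one-feature-smaller candidate subset
        candidates = [(accuracy(current - {f}), f) for f in current]
        acc, f = max(candidates)
        if acc < best_acc:        # pruning now hurts: stop
            break
        current.remove(f)
        best_acc = acc
    return current, best_acc
```

With an oracle that rewards keeping one informative feature and mildly penalizes extra features, the loop strips the irrelevant attributes one per step, matching the behavior described above.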

7 Classification and Regression Trees, Bagging, and Boosting

by unknown authors
Abstract not found

CMP: A Fast Decision Tree Classifier Using Multivariate Predictions

by Haixun Wang - In Proceedings of the 16th International Conference on Data Engineering, 2000
"... Most decision tree classifiers are designed to keep class histograms for single attributes, and to select a particular attribute for the next split using said histograms. In this paper, we propose a technique where, by keeping histograms on attribute pairs, we achieve (i) a significant speed-up over ..."
Cited by 4 (0 self)
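The single-attribute histogram scheme that CMP extends can be sketched as follows. The function name, Gini criterion, and toy data layout are illustrative assumptions, not the paper's implementation (the paper's contribution is the analogous extension to attribute *pairs*):

```python
from collections import Counter, defaultdict

def best_split_attribute(rows, labels):
    """Choose a split attribute from per-attribute class histograms.
    rows: list of dicts mapping attribute name -> categorical value."""
    def gini(counter, total):
        return 1.0 - sum((c / total) ** 2 for c in counter.values())

    n = len(labels)
    best = None
    for attr in rows[0]:
        # histogram: attribute value -> per-class counts
        hist = defaultdict(Counter)
        for row, y in zip(rows, labels):
            hist[row[attr]][y] += 1
        # weighted Gini impurity of the partition induced by attr
        impurity = sum(sum(c.values()) / n * gini(c, sum(c.values()))
                       for c in hist.values())
        if best is None or impurity < best[0]:
            best = (impurity, attr)
    return best[1]
```

On a toy table where attribute `x` perfectly separates the classes and `z` is noise, the histogram-based score picks `x`.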

Universal piecewise linear prediction via context trees

by Suleyman S. Kozat, Andrew C. Singer, Georg Christoph Zeitler - IEEE Transactions on Signal Processing (accepted), 2006
"... Abstract—This paper considers the problem of piecewise linear prediction from a competitive algorithm approach. In prior work, prediction algorithms have been developed that are “universal” with respect to the class of all linear predictors, such that they perform nearly as well, in terms of total s ..."
Cited by 10 (8 self)

Efficient Query Optimization in Distributed Database Using Decision Tree Algorithm

by Dr. Kamaljit Singh, Dheerendra Singh
"... ABSTRACT This paper presents semantic query optimization on a distributed database using a Decision Tree algorithm. Whenever a user queries the server, the master server can process the user's query against different servers' databases in a heterogeneous environment, on an alternating basis, which ..."

Quantifying the Predictability of a Personal Place

by Hyeeun Lim, Nupur Bhatnagar
"... Users visit different places, and some are more important to them than others; a user-assigned rating captures the importance of each place. This project studies the varying patterns of a user's commuting behavior and builds a classification model that coul ..."
clustering algorithms like K-means and DBSCAN. The classification model was used as a predictive model to quantify the predictability of a place given a combination of explanatory variables like frequency and duration. Different classifiers like KNN, Naïve Bayes, and Decision Trees were used to build the predictive

Limiting the Number of Trees in Random Forests

by Patrice Latinne, Olivier Debeir, Christine Decaestecker - In Proceedings of MCS 2001, LNCS 2096, 2001
"... Abstract. The aim of this paper is to propose a simple procedure that a priori determines a minimum number of classifiers to combine in order to obtain a prediction accuracy level similar to the one obtained with the combination of larger ensembles. The procedure is based on the McNemar non-parametric ..."
Cited by 6 (0 self)
.5 decision tree (Breiman’s Bagging, Ho’s Random subspaces, their combination we labeled ‘Bagfs’, and Breiman’s Random forests) and five large benchmark data bases. It is worth noticing that the proposed procedure may easily be extended to other base learning algorithms than a decision tree as well
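The stopping rule above rests on the standard McNemar test applied to the two classifiers' discordant predictions. The statistic itself is simple; this sketch uses the usual continuity-corrected form, which may differ in detail from the exact variant the paper uses:

```python
def mcnemar_statistic(n01, n10):
    """McNemar chi-square statistic (with continuity correction) on the
    discordant counts of two classifiers over the same test set:
    n01 = cases only classifier B got right,
    n10 = cases only classifier A got right.
    Values above ~3.84 reject equal error rates at the 5% level
    (chi-square distribution with 1 degree of freedom)."""
    if n01 + n10 == 0:
        return 0.0
    return (abs(n01 - n10) - 1) ** 2 / (n01 + n10)
```

In the ensemble-size setting, one would compare an ensemble of k trees against one of k' > k trees and stop growing once the statistic no longer indicates a significant difference.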

Collection Tree Protocol

by unknown authors
"... This paper presents and evaluates two principles for designing robust, reliable, and efficient collection protocols. These principles allow a protocol to benefit from accurate and agile link estimators by handling the dynamism such estimators introduce to routing tables. The first is datapath valida ..."
validation: a protocol can use data traffic as active topology probes, quickly discovering and fixing routing loops. The second is adaptive beaconing: by extending the Trickle code propagation algorithm to routing control traffic, a protocol sends fewer beacons while simultaneously reducing its route repair
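Adaptive beaconing borrows Trickle's interval doubling-and-reset behavior: the beacon interval doubles while the topology looks consistent and snaps back to the minimum on an inconsistency. A toy sketch of that timer logic (the driver loop and parameters are hypothetical, not CTP's actual code):

```python
def trickle_intervals(i_min, i_max, consistent_rounds):
    """Trickle-style interval schedule: while rounds are consistent the
    interval doubles up to i_max; an inconsistency (e.g. a routing loop
    detected via datapath validation) resets it to i_min.
    consistent_rounds: one boolean per round."""
    interval = i_min
    schedule = []
    for ok in consistent_rounds:
        if not ok:
            interval = i_min          # reset on inconsistency
        schedule.append(interval)
        interval = min(interval * 2, i_max)
    return schedule
```

This is how the protocol sends fewer beacons in steady state yet reacts quickly when a loop is discovered: long intervals while quiet, an immediate reset when the datapath reveals trouble.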

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University