Results 1–10 of 210,916
The Dynamics of AdaBoost Weights Tells You What's Hard to Classify
, 2001
"... The dynamical evolution of weights in the AdaBoost algorithm contains useful information about the role that the associated data points play in the built of the AdaBoost model. In particular, the dynamics induces a bipartition of the data set into two (easy/hard) classes. Easy points are ininflu ..."
Cited by 1 (0 self)
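The easy/hard bipartition this abstract describes can be illustrated with a minimal discrete-AdaBoost loop. The data set below is invented for illustration, with decision stumps as weak learners; the point at index 2 carries a deliberately flipped label, and its time-averaged weight stays large while the weights of easy points shrink.

```python
import math

# Toy 1-D data set (invented for illustration). The point at index 2
# carries a flipped label, so threshold classifiers struggle with it.
X = [0, 1, 2, 3, 4, 5]
y = [1, 1, -1, 1, -1, -1]
n = len(X)

def stump(theta, sign):
    # Weak learner: predicts `sign` left of theta and `-sign` to the right.
    return lambda x: sign if x < theta else -sign

# Candidate weak learners: every axis threshold, both polarities.
H = [stump(t + 0.5, s) for t in range(6) for s in (1, -1)]

w = [1.0 / n] * n        # AdaBoost weights D_1(i), uniform at the start
avg_w = [0.0] * n        # time-averaged weight of each example
T = 20
for _ in range(T):
    # Choose the stump with the smallest weighted training error.
    h, eps = min(
        ((h, sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)) for h in H),
        key=lambda pair: pair[1],
    )
    eps = min(max(eps, 1e-10), 1 - 1e-10)
    alpha = 0.5 * math.log((1 - eps) / eps)
    # Reweight: misclassified points gain weight, correct ones lose it.
    w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
    Z = sum(w)
    w = [wi / Z for wi in w]
    avg_w = [a + wi / T for a, wi in zip(avg_w, w)]
```

Averaging the weights over rounds is one simple proxy for the bipartition the authors study: points whose weight stays persistently heavy are the "hard" ones.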
Experiments with a New Boosting Algorithm
, 1996
"... In an earlier paper, we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the relate ..."
Cited by 2176 (21 self)
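For reference, the reweighting that drives the AdaBoost algorithm introduced in this paper is the standard update: given weak hypothesis $h_t$ on round $t$ with weighted error $\epsilon_t$,

```latex
\epsilon_t = \sum_{i:\, h_t(x_i) \neq y_i} D_t(i), \qquad
\alpha_t = \frac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t}, \qquad
D_{t+1}(i) = \frac{D_t(i)\, e^{-\alpha_t y_i h_t(x_i)}}{Z_t},
```

where $Z_t$ normalizes $D_{t+1}$ to a distribution, and the final classifier is $H(x) = \operatorname{sign}\bigl(\sum_t \alpha_t h_t(x)\bigr)$. Whenever each weak learner is "a little better than random guessing," every $\epsilon_t < 1/2$ and each $\alpha_t$ is positive.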
Explaining AdaBoost
"... Abstract Boosting is an approach to machine learning based on the idea of creating a highly accurate prediction rule by combining many relatively weak and inaccurate rules. The AdaBoost algorithm of Freund and Schapire was the first practical boosting algorithm, and remains one of the most widely us ..."
The Boosting Approach to Machine Learning: An Overview
, 2002
"... Boosting is a general method for improving the accuracy of any given learning algorithm. Focusing primarily on the AdaBoost algorithm, this chapter overviews some of the recent work on boosting including analyses of AdaBoost's training error and generalization error; boosting's connecti ..."
Cited by 430 (17 self)
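The training-error analysis this overview alludes to rests on a classical bound from the boosting literature: writing $\gamma_t = \tfrac{1}{2} - \epsilon_t$ for the edge of the $t$-th weak learner,

```latex
\operatorname{err}_{\mathrm{train}}(H)
\;\le\; \prod_{t=1}^{T} 2\sqrt{\epsilon_t(1-\epsilon_t)}
\;=\; \prod_{t=1}^{T} \sqrt{1 - 4\gamma_t^2}
\;\le\; \exp\!\Bigl(-2\sum_{t=1}^{T}\gamma_t^2\Bigr),
```

so any uniform edge $\gamma_t \ge \gamma > 0$ drives the training error down exponentially in the number of rounds $T$.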
Boosting and AdaBoost
"... We’ve talked loosely about 1 Lack of inherent superiority of any one particular classifier; and 2 Some systematic ways for selecting a particular method over another for a given scenario. ..."
On the Convergence Properties of Optimal AdaBoost
"... In this paper, we establish the convergence of the Optimal AdaBoost classifier under mild conditions. We frame AdaBoost as a dynamical system, and provide sufficient conditions for the existence of an invariant measure. Employing tools from ergodic theory, we show that the margin for every example c ..."
Machine Learning in Automated Text Categorization
 ACM COMPUTING SURVEYS
, 2002
"... The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last ten years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this p ..."
Abstract

Cited by 1658 (22 self)
 Add to MetaCart
to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. The advantages of this approach over the knowledge engineering approach (consisting in the manual
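As a concrete (if tiny) instance of the inductive approach this abstract describes, here is a sketch of a multinomial naive Bayes text classifier. Naive Bayes is only one of the many learners such surveys cover, not this paper's own method, and the four-document corpus is invented for illustration.

```python
import math
from collections import Counter, defaultdict

# Tiny pre-classified corpus (invented for illustration).
train = [
    ("the striker scored a late goal", "sport"),
    ("the team won the match", "sport"),
    ("stocks fell as markets closed", "finance"),
    ("the bank raised interest rates", "finance"),
]

# "Learn the characteristics of the categories": count documents per
# class and word occurrences per class, building the vocabulary.
class_docs = defaultdict(int)
class_words = defaultdict(Counter)
vocab = set()
for text, label in train:
    class_docs[label] += 1
    words = text.split()
    class_words[label].update(words)
    vocab.update(words)

def classify(text):
    # Multinomial naive Bayes with Laplace (add-one) smoothing:
    # argmax over classes of log P(c) + sum_w log P(w | c).
    best, best_lp = None, -math.inf
    for label in class_docs:
        lp = math.log(class_docs[label] / len(train))
        total = sum(class_words[label].values())
        for w in text.split():
            lp += math.log((class_words[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

print(classify("the goal won the match"))  # prints "sport"
```

The whole "classifier" is two count tables plus Bayes' rule in log space, which is why this family of learners was an early workhorse of automated text categorization.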
Limma: linear models for microarray data
 Bioinformatics and Computational Biology Solutions using R and Bioconductor
, 2005
"... This free opensource software implements academic research by the authors and coworkers. If you use it, please support the project by citing the appropriate journal articles listed in Section 2.1.Contents ..."
Cited by 759 (13 self)
Estimating the Support of a High-Dimensional Distribution
, 1999
"... Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S is bounded by some a priori specified between 0 and 1. We propo ..."
Cited by 766 (29 self)
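The paper itself proposes a kernel-based (one-class SVM) solution; as a much simpler one-dimensional illustration of the problem being posed, one can take the "simple" subset S to be the shortest interval covering a (1 − ν) fraction of the sample, found by a sweep over order statistics. Function name and parameters are ours, not the paper's.

```python
import math
import random

def shortest_interval(sample, nu):
    # Shortest interval containing at least a (1 - nu) fraction of the
    # sample: a 1-D stand-in for the "simple" subset S, so that a fresh
    # point from P falls outside it with probability roughly nu.
    xs = sorted(sample)
    n = len(xs)
    k = math.ceil((1 - nu) * n)  # number of points the interval must cover
    # Every candidate interval spans k consecutive order statistics;
    # pick the narrowest one.
    width, lo, hi = min(
        (xs[i + k - 1] - xs[i], xs[i], xs[i + k - 1]) for i in range(n - k + 1)
    )
    return lo, hi

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(5000)]
lo, hi = shortest_interval(sample, nu=0.05)
```

For a standard Gaussian sample with ν = 0.05 the interval comes out close to (−1.96, 1.96), the shortest region holding 95% of the mass; in high dimensions this order-statistics trick no longer applies, which is what motivates the kernelized formulation.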