Results 1–10 of 796,166
The Rate of Convergence of AdaBoost
"... The AdaBoost algorithm was designed to combine many “weak” hypotheses that perform slightly better than random guessing into a “strong” hypothesis that has very low error. We study the rate at which AdaBoost iteratively converges to the minimum of the “exponential loss.” Unlike previous work, our ..."
Cited by 10 (2 self)
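The snippet above describes the standard AdaBoost loop: reweight examples by exponential loss, pick the best weak hypothesis, and combine. A minimal sketch on a toy 1-D dataset, assuming decision stumps as the weak hypotheses (the names `stump` and `adaboost` are illustrative, not from the paper):

```python
import math

def stump(threshold, sign):
    # Weak hypothesis: predicts `sign` above the threshold, -sign otherwise.
    return lambda x: sign if x > threshold else -sign

def adaboost(xs, ys, rounds=10):
    n = len(xs)
    w = [1.0 / n] * n                       # uniform example weights
    ensemble = []                           # list of (alpha, hypothesis)
    candidates = [stump(t, s) for t in xs for s in (+1, -1)]
    for _ in range(rounds):
        # Pick the stump with the lowest weighted error.
        h, err = min(
            ((h, sum(wi for wi, x, y in zip(w, xs, ys) if h(x) != y))
             for h in candidates),
            key=lambda p: p[1])
        if err == 0:                        # perfect weak hypothesis
            ensemble.append((1.0, h))
            break
        if err >= 0.5:                      # no better than random guessing
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Reweight: misclassified examples gain weight (exponential loss).
        w = [wi * math.exp(-alpha * y * h(x)) for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    def strong(x):                          # weighted-majority vote
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return strong

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [-1, -1, -1, +1, +1, +1]
f = adaboost(xs, ys)
print([f(x) for x in xs])  # reproduces ys on this separable toy set
```

This is only the vanilla update rule the snippet refers to; the paper's contribution (the convergence rate of this loop toward the minimum of the exponential loss) is not reflected in the sketch.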
Soft Margins for AdaBoost, 1998
"... Recently ensemble methods like AdaBoost were successfully applied to character recognition tasks, seemingly defying the problems of overfitting. This paper shows that although AdaBoost rarely overfits in the low noise regime it clearly does so for higher noise levels. Central for understanding this ..."
Cited by 327 (22 self)
The Convergence Rate of AdaBoost
"... We pose the problem of determining the rate of convergence at which AdaBoost minimizes exponential loss. Boosting is the problem of combining many “weak,” high-error hypotheses to generate a single “strong” hypothesis with very low error. The AdaBoost algorithm of Freund and Schapire (199 ..."
Cited by 9 (3 self)
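Both convergence-rate entries in this listing study the same objective. As a reminder (standard notation, not necessarily the notation of either paper), AdaBoost's combined hypothesis after $T$ rounds and the exponential loss it greedily minimizes are:

```latex
F_T(x) = \sum_{t=1}^{T} \alpha_t h_t(x),
\qquad
L(F_T) = \frac{1}{n} \sum_{i=1}^{n} \exp\bigl(-y_i F_T(x_i)\bigr),
```

where each round's choice of weak hypothesis $h_t$ and weight $\alpha_t$ can be read as a coordinate-descent step on $L$; the papers above quantify how fast these steps drive $L$ toward its minimum.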
Boosting and AdaBoost
"... We’ve talked loosely about (1) the lack of inherent superiority of any one particular classifier; and (2) some systematic ways for selecting a particular method over another for a given scenario. ..."
AdaBoost is consistent, 2007
"... The risk, or probability of error, of the classifier produced by the AdaBoost algorithm is investigated. In particular, we consider the stopping strategy to be used in AdaBoost to achieve universal consistency. We show that provided AdaBoost is stopped after n^{1−ε} iterations—for sample size n and ε ..."
Cited by 39 (0 self)
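The stopping rule quoted above depends only on the sample size and the chosen ε. A quick illustration of what n^{1−ε} iterations means concretely (the numbers are hypothetical, not from the paper):

```python
# Stop AdaBoost after n**(1 - eps) iterations, per the quoted stopping rule.
n = 10_000      # hypothetical sample size
eps = 0.1       # hypothetical epsilon in (0, 1)
t_stop = int(n ** (1 - eps))
print(t_stop)   # 3981 iterations for this n and eps
```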
Process Consistency for AdaBoost, Annals of Statistics, 2000
"... Introduction. Some recent experimental results [e.g., Friedman, Hastie and Tibshirani (1999); Grove and Schuurmans (1998); Mason et al. (1998)] and theoretical examples [Jiang (1999)] suggest that the AdaBoost algorithm [e.g., Schapire (1999); Freund and Schapire (1997)] can overfit in the limit of ..."
Cited by 44 (1 self)
On the Convergence Properties of Optimal AdaBoost
"... In this paper, we establish the convergence of the Optimal AdaBoost classifier under mild conditions. We frame AdaBoost as a dynamical system, and provide sufficient conditions for the existence of an invariant measure. Employing tools from ergodic theory, we show that the margin for every example c ..."
Regularizing AdaBoost, 1999
"... Boosting methods maximize a hard classification margin and are known as powerful techniques that do not exhibit overfitting for low noise cases. Also for noisy data boosting will try to enforce a hard margin and thereby give too much weight to outliers, which then leads to the dilemma of non-smooth fits and overfitting. Therefore we propose three algorithms to allow for soft margin classification by introducing regularization with slack variables into the boosting concept: (1) AdaBoost_reg and regularized versions of (2) linear and (3) quadratic programming AdaBoost. Experiments show ..."
Cited by 15 (2 self)
Explaining AdaBoost
"... Boosting is an approach to machine learning based on the idea of creating a highly accurate prediction rule by combining many relatively weak and inaccurate rules. The AdaBoost algorithm of Freund and Schapire was the first practical boosting algorithm, and remains one of the most widely us ..."