## On the Rate of Convergence of Regularized Boosting Classifiers (2003)


Venue: Journal of Machine Learning Research

Citations: 46 (10 self)

### BibTeX

```bibtex
@MISC{Blanchard03onthe,
  author = {Gilles Blanchard and Gabor Lugosi and Nicolas Vayatis},
  title  = {On the Rate of Convergence of Regularized Boosting Classifiers},
  year   = {2003}
}
```


### Abstract

A regularized boosting method is introduced, for which regularization is obtained through a penalization function. It is shown through oracle inequalities that this method is model adaptive. The rate of convergence of the probability of misclassification is investigated. It is shown that for quite a large class of distributions, the probability of error converges to the Bayes risk at a rate faster than n^(-(V+2)/(4(V+1))), where V is the VC dimension of the "base" class whose elements are combined by boosting methods to obtain an aggregated classifier. The dimension-independent nature of the rates may partially explain the good behavior of these methods in practical problems. Under Tsybakov's noise condition the rate of convergence is even faster. We investigate the conditions necessary to obtain such rates for different base classes. The special case of boosting using decision stumps is studied in detail. We characterize the class of classifiers realizable by aggregating decision stumps.
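As a small illustration of the exponent in the abstract's bound, the sketch below tabulates (V+2)/(4(V+1)) for a few VC dimensions; the helper name `rate_exponent` is mine, not from the paper. Note that the exponent depends only on the VC dimension of the base class, not on the ambient dimension of the data, which is the "dimension-independent" point.

```python
# Illustrative sketch (not from the paper): the exponent a in the bound
# n^(-a) with a = (V+2)/(4(V+1)), where V is the VC dimension of the
# base class. The function name `rate_exponent` is an assumption.

def rate_exponent(V: int) -> float:
    """Exponent a such that the excess-risk bound decays like n**(-a)."""
    return (V + 2) / (4 * (V + 1))

for V in (1, 2, 5, 10, 100):
    print(f"V = {V:>3}: rate faster than n^(-{rate_exponent(V):.4f})")
```

The exponent decreases from 3/8 (at V = 1) toward the limit 1/4 as V grows, so richer base classes yield somewhat slower guaranteed rates, but the rate never degrades with the dimension of the input space itself.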