## A PAC-Bayesian Margin Bound for Linear Classifiers (2002)


### Download Links

- [www.icos.ethz.ch]
- [www.research.microsoft.com]
- [stat.cs.tu-berlin.de]
- DBLP

### Other Repositories/Bibliography

Citations: 32 (3 self)

### BibTeX

```
@MISC{Herbrich02apac-bayesian,
  author = {Ralf Herbrich and Thore Graepel},
  title  = {A PAC-Bayesian Margin Bound for Linear Classifiers},
  year   = {2002}
}
```



### Abstract

We present a bound on the generalisation error of linear classifiers in terms of a refined margin quantity on the training sample. The result is obtained in a PAC-Bayesian framework and is based on geometrical arguments in the space of linear classifiers. The new bound constitutes an exponential improvement of the tightest margin bound to date, which was developed in the luckiness framework, and scales logarithmically in the inverse margin. Even with fewer training examples than input dimensions, sufficiently large margins lead to non-trivial bound values and, for maximum margins, to a vanishing complexity term. In contrast to previous results, however, the new bound does depend on the dimensionality of feature space. The analysis shows that the classical margin is too coarse a measure of the essential quantity that controls the generalisation error: the fraction of hypothesis space consistent with the training sample. The practical relevance of the result lies in the fact that the well-known support vector machine is optimal with respect to the new bound only if the feature vectors in the training sample are all of the same length. As a consequence we recommend using SVMs on normalised feature vectors only. Numerical simulations support this recommendation and demonstrate that the new error bound can be used for the purpose of model selection.
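The abstract's practical recommendation, normalising feature vectors to unit length so that the SVM's margin matches the geometric quantity the bound is stated in, can be sketched in a few lines of numpy. The data, weight vector, and variable names below are hypothetical illustrations, not taken from the paper; the sketch only shows the unit-length normalisation and the resulting normalised margin y_i <w, x_i> / (||w|| ||x_i||).

```python
import numpy as np

# Hypothetical toy sample: rows are feature vectors of differing lengths.
X = np.array([[3.0, 4.0],
              [0.5, 0.0],
              [0.0, 2.0]])
y = np.array([1, -1, 1])

# Normalise every feature vector to unit Euclidean length, as the
# abstract recommends doing before training an SVM.
X_unit = X / np.linalg.norm(X, axis=1, keepdims=True)

# A hypothetical weight vector of a linear classifier on this sample.
w = np.array([-1.0, 1.0])

# Normalised margin of each example: y_i * <w, x_i> / (||w|| * ||x_i||).
# Since the rows of X_unit already have unit norm, dividing by ||w||
# suffices; the minimum over the sample is the (normalised) margin.
margins = y * (X_unit @ w) / np.linalg.norm(w)
print(margins.min())
```

After normalisation the classical margin and the refined, length-independent margin coincide, which is exactly the regime in which the abstract states the SVM solution is optimal with respect to the new bound.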