@MISC{_simplifiedpac-bayesian, author = {}, title = {Simplified PAC-Bayesian Margin Bounds}, year = {} }
Abstract
The theoretical understanding of support vector machines is largely based on margin bounds for linear classifiers with unit-norm weight vectors and unit-norm feature vectors. Unit-norm margin bounds have previously been proved using fat-shattering arguments and Rademacher complexity. Recently, Langford and Shawe-Taylor proved a dimension-independent unit-norm margin bound using a relatively simple PAC-Bayesian argument. Unfortunately, the Langford-Shawe-Taylor bound is stated in a variational form, making direct comparison to fat-shattering bounds difficult. This paper provides an explicit solution to the variational problem implicit in the Langford-Shawe-Taylor bound and shows that the PAC-Bayesian margin bounds are significantly tighter. Because a PAC-Bayesian bound is derived from a particular prior distribution over hypotheses, a PAC-Bayesian margin bound also seems to provide insight into the nature of the learning bias underlying the bound.

1 Introduction

Margin bounds play a central role in learning theory. Margin bounds for convex-combination weight vectors (unit ℓ1-norm weight vectors) provide a theoretical foundation for boosting algorithms [15, 9, 8]. Margin bounds for unit-norm weight