Results 1–10 of 552,305
Tighter PAC-Bayes Bounds
, 2006
"... This paper proposes a PAC-Bayes bound to measure the performance of Support Vector Machine (SVM) classifiers. The bound is based on learning a prior over the distribution of classifiers with a part of the training samples. Experimental work shows that this bound is tighter than the original PAC-Bayes ..."
Cited by 20 (2 self)
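The bounds in these listings all control the same quantity: the true risk of a (Gibbs) classifier given its empirical risk and the divergence between a posterior Q and a prior P. As a rough illustration, not taken from any of the papers above, here is a minimal sketch of numerically inverting the standard PAC-Bayes-kl inequality, assuming the empirical Gibbs risk and the divergence KL(Q‖P) have already been computed:

```python
import math

def kl_bernoulli(q, p):
    """Binary KL divergence kl(q || p) between Bernoulli means q and p."""
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def pac_bayes_bound(emp_risk, kl_qp, m, delta=0.05):
    """Upper bound on the true Gibbs risk from the PAC-Bayes-kl theorem:
    kl(emp_risk || true_risk) <= (KL(Q||P) + ln(2*sqrt(m)/delta)) / m,
    inverted by bisection over the true risk (m = training set size)."""
    rhs = (kl_qp + math.log(2 * math.sqrt(m) / delta)) / m
    lo, hi = emp_risk, 1.0
    for _ in range(60):  # bisection: find the largest risk satisfying the inequality
        mid = (lo + hi) / 2
        if kl_bernoulli(emp_risk, mid) <= rhs:
            lo = mid
        else:
            hi = mid
    return lo
```

A data-dependent prior, as in the papers above, enters this computation only through a smaller KL(Q‖P) term, which is what tightens the bound.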
PAC-Bayes Bounds with Data Dependent Priors
"... This paper presents the prior PAC-Bayes bound and explores its capabilities as a tool to provide tight predictions of SVMs’ generalization. The computation of the bound involves estimating a prior of the distribution of classifiers from the available data, and then manipulating this prior in the us ..."
Cited by 3 (1 self)
Data Dependent Priors in PAC-Bayes Bounds
"... Abstract. One of the central aims of Statistical Learning Theory is the bounding of the test set performance of classifiers trained with i.i.d. data. For Support Vector Machines the tightest technique for assessing this so-called generalisation error is known as the PAC-Bayes theorem. The bound hold ..."
be easily analysed in terms of the new bound. The experimental work includes a set of classification tasks preceded by a bound-driven model selection. These experiments illustrate how the new bound acting on the new classifier can be much tighter than the original PAC-Bayes Bound applied to an SVM, and lead
PAC-Bayes & Margins
 Advances in Neural Information Processing Systems 15
, 2002
"... We show two related things: (1) Given a classifier which consists of a weighted sum of features with a large margin, we can construct a stochastic classifier with negligibly larger training error rate. The stochastic classifier has a future error rate bound that depends on the margin distributio ..."
Cited by 80 (12 self)
A PAC-Bayes risk bound for general loss functions
 Advances in Neural Information Processing Systems 19
, 2007
"... We provide a PAC-Bayesian bound for the expected loss of convex combinations of classifiers under a wide class of loss functions (which includes the exponential loss and the logistic loss). Our numerical experiments with Adaboost indicate that the proposed upper bound, computed on the training set, ..."
Cited by 5 (1 self)
behaves very similarly to the true loss estimated on the testing set. 1 Introduction The PAC-Bayes approach [1, 2, 3, 4, 5] has been very effective at providing tight risk bounds for large-margin classifiers such as the SVM [4, 6]. Within this approach, we consider a prior distribution P over a space
PAC-Bayes risk bounds for sample-compressed Gibbs classifiers
 Proceedings of the 22nd International Conference on Machine Learning (ICML 2005)
, 2005
"... We extend the PAC-Bayes theorem to the sample-compression setting where each classifier is represented by two independent sources of information: a compression set which consists of a small subset of the training data, and a message string of the additional information needed to obtain a classifier. ..."
Cited by 5 (4 self)
The new bound is obtained by using a prior over a data-independent set of objects where each object gives a classifier only when the training data is provided. The new PAC-Bayes theorem states that a Gibbs classifier defined on a posterior over sample-compressed classifiers can have a smaller risk bound
PAC-Bayes Generalization Bounds for Randomized Structured Prediction
"... We present a new PAC-Bayes generalization bound for structured prediction that is applicable to perturbation-based probabilistic models. Our analysis explores the relationship between perturbation-based modeling and the PAC-Bayes framework, and connects to recently introduced generalization bounds f ..."
Cited by 2 (0 self)
Dimensionality Dependent PAC-Bayes Margin Bound
"... Margin is one of the most important concepts in machine learning. Previous margin bounds, both for SVM and for boosting, are dimensionality independent. A major advantage of this dimensionality independency is that it can explain the excellent performance of SVM whose feature spaces are often of hig ..."
Cited by 2 (0 self)
of high or infinite dimension. In this paper we address the problem of whether such dimensionality independence is intrinsic for the margin bounds. We prove a dimensionality-dependent PAC-Bayes margin bound. The bound is monotone increasing with respect to the dimension when keeping all other factors fixed