Tight Bounds for the Expected Risk of Linear Classifiers and PAC-Bayes Finite-Sample Guarantees
Abstract

We analyze the expected risk of linear classifiers for a fixed weight vector in the "minimax" setting. That is, we analyze the worst-case risk among all data distributions with a given mean and covariance. We provide a simpler proof of the tight polynomial-tail bound for general random variables. For sub-Gaussian random variables, we derive a novel tight exponential-tail bound. We also provide new PAC-Bayes finite-sample guarantees when training data is available. Our "minimax" generalization bounds are dimensionality-independent and O(1/√m) for m samples.
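To make the "minimax" setting concrete, the sketch below illustrates a worst-case risk bound of the kind the abstract describes. The specific formula used is the classical one-sided Chebyshev (Cantelli) bound on the misclassification probability of a linear classifier given only the margin's mean and variance; it is an assumption here that this matches the polynomial-tail bound in the paper, and the function name `minimax_risk` is hypothetical.

```python
import numpy as np

def minimax_risk(w, mu, Sigma):
    """Worst-case risk over all distributions with mean mu and covariance
    Sigma, for the classifier sign(w @ x) predicting label +1.

    Uses the one-sided Chebyshev (Cantelli) inequality:
        P(w @ x <= 0) <= wᵀΣw / (wᵀΣw + (wᵀμ)²)   when wᵀμ > 0.
    (Illustrative formula; see the paper for its exact tight bounds.)
    """
    margin_mean = w @ mu          # expected margin E[w @ x]
    margin_var = w @ Sigma @ w    # margin variance Var[w @ x]
    if margin_mean <= 0:
        return 1.0                # bound is vacuous without a positive margin
    return margin_var / (margin_var + margin_mean ** 2)

# Example: fixed weight vector, distribution summarized by (mu, Sigma).
w = np.array([1.0, 1.0])
mu = np.array([1.0, 0.5])
Sigma = 0.25 * np.eye(2)

bound = minimax_risk(w, mu, Sigma)

# Sanity check: under one particular distribution (here Gaussian) with the
# same mean and covariance, the empirical risk cannot exceed the worst-case
# bound (up to Monte Carlo noise).
rng = np.random.default_rng(0)
x = rng.multivariate_normal(mu, Sigma, size=100_000)
empirical = np.mean(x @ w <= 0)
```

Note that the bound depends on the data only through the scalars wᵀμ and wᵀΣw, which is one way to see why such guarantees can be dimensionality-independent.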