Rademacher Processes And Bounding The Risk Of Function Learning
 High Dimensional Probability II
, 1999
Abstract

Cited by 39 (6 self)
We construct data-dependent upper bounds on the risk in function learning problems. The bounds are based on the local norms of the Rademacher process indexed by the underlying function class, and they do not require prior knowledge about the distribution of training examples or any specific properties of the function class. Using Talagrand-type concentration inequalities for empirical and Rademacher processes, we show that the bounds hold with a probability that approaches one exponentially fast as the sample size grows. In typical situations frequently encountered in the theory of function learning, the bounds give a nearly optimal rate of convergence of the risk to zero.

1. Local Rademacher norms and bounds on the risk: main results. Let (S, A) be a measurable space and let F be a class of A-measurable functions from S into [0, 1]. Denote by P(S) the set of all probability measures on (S, A). Let f_0 ∈ F be an unknown target function. Given a probability measure P ∈ P(S) (also unknown), let (X_1, …, X_n) be an i.i.d. sample in (S, A) with common distribution P (defined on a probability space (Ω, Σ, ℙ)). In computer learning theory, the problem of estimating f_0 based on the labeled sample (X_1, Y_1), …, (X_n, Y_n), where Y_j := f_0(X_j), j = 1, …, n, is referred to as the function learning problem. So-called concept learning is a special case of function learning; in this case F := {I_C : C ∈ C}, where C ⊆ A is called a class of concepts (see Vapnik (1998), Vidyasagar (1996), and Devroye, Györfi and Lugosi (1996) for an account of statistical learning theory). The goal of function learning is to find an estimate
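The quantity at the heart of such bounds is the supremum of the Rademacher process, sup_f (1/n) Σ_i σ_i f(X_i). As a minimal illustration only (not the paper's construction), the sketch below Monte-Carlo-estimates the empirical Rademacher complexity for a finite class of threshold functions; the class, the sample, and the number of sign draws are all illustrative assumptions:

```python
import numpy as np

def empirical_rademacher(predictions, n_draws=500, seed=0):
    """Monte-Carlo estimate of the empirical Rademacher complexity
    sup_f (1/n) * sum_i sigma_i * f(X_i) over a finite function class.

    predictions: array of shape (n_functions, n_samples) holding the
    values f(X_i) for each function in the class.
    """
    rng = np.random.default_rng(seed)
    n = predictions.shape[1]
    sups = []
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)       # Rademacher signs
        sups.append((predictions @ sigma).max() / n)  # sup over the class
    return float(np.mean(sups))

# toy class: threshold functions x -> 1{x > t} on a sample of 200 points
rng = np.random.default_rng(1)
X = rng.uniform(size=200)
preds = np.stack([(X > t).astype(float) for t in np.linspace(0.1, 0.9, 9)])
r_hat = empirical_rademacher(preds)
```

For a richer class or a larger sample, `r_hat` shrinks roughly like n^{-1/2}, which is the behaviour the risk bounds exploit.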
Data-driven calibration of penalties for least-squares regression
, 2008
Abstract

Cited by 29 (10 self)
Penalization procedures often suffer from their dependence on multiplying factors, whose optimal values are either unknown or hard to estimate from data. We propose a completely data-driven calibration algorithm for these parameters in the least-squares regression framework, without assuming a particular shape for the penalty. Our algorithm relies on the concept of minimal penalty, recently introduced by Birgé and Massart (2007) in the context of penalized least squares for Gaussian homoscedastic regression. On the positive side, the minimal penalty can be evaluated from the data themselves, leading to a data-driven estimate of an optimal penalty which can be used in practice; on the negative side, their approach relies heavily on the homoscedastic Gaussian nature of their stochastic framework. The purpose of this paper is twofold: to state a more general heuristics for designing a data-driven penalty (the slope heuristics) and to prove that it works for penalized least-squares regression with a random design, even for heteroscedastic non-Gaussian data. For technical reasons, some exact mathematical results will be proved only for regressogram bin-width selection. This is at least a first step towards further results, since the approach and the method that we use are indeed general.
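The slope heuristics can be caricatured in a few lines: scan a penalty constant, watch the dimension of the selected model, and double the constant at which that dimension jumps down. The jump-detection rule and the toy risk curve below are illustrative simplifications, not the paper's algorithm:

```python
import numpy as np

def slope_heuristic_penalty(dims, emp_risks, kappa_grid):
    """Slope-heuristics sketch: for each kappa, select the model
    minimizing emp_risk + kappa * dim; the minimal penalty is the
    kappa at which the selected dimension first collapses (here:
    drops below half the maximal dimension, an illustrative rule),
    and the final penalty constant is twice that value.
    """
    selected = np.array(
        [dims[np.argmin(emp_risks + k * dims)] for k in kappa_grid])
    jump = np.argmax(selected < dims.max() / 2)  # first post-jump index
    kappa_min = kappa_grid[jump]
    return 2.0 * kappa_min  # slope heuristics: optimal ~ 2 x minimal

# toy behaviour: empirical risk decreases like 1/D with dimension D,
# so without a penalty the largest model always wins
dims = np.arange(1, 101, dtype=float)
emp_risks = 1.0 / dims
kappa_hat = slope_heuristic_penalty(dims, emp_risks,
                                    np.linspace(1e-5, 1e-2, 400))
```

With this toy risk curve the selected dimension behaves like 1/sqrt(kappa), so the detected jump, and hence `kappa_hat`, sits at a well-defined scale rather than at a grid endpoint.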
Complexity regularization via localized random penalties
, 2004
Abstract

Cited by 24 (3 self)
In this article, model selection via penalized empirical loss minimization in nonparametric classification problems is studied. Data-dependent penalties are constructed, which are based on estimates of the complexity of a small subclass of each model class, containing only those functions with small empirical loss. The penalties are novel since those considered in the literature are typically based on the entire model class. Oracle inequalities using these penalties are established, and the advantage of the new penalties over those based on the complexity of the whole model class is demonstrated.
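A minimal sketch of the localization idea, assuming a finite class given by its prediction matrix: the penalty is a Rademacher estimate computed only over the functions whose empirical loss is near the best in the class. The hard radius threshold is an illustrative simplification of the construction:

```python
import numpy as np

def localized_rademacher(predictions, emp_losses, radius,
                         n_draws=300, seed=0):
    """Rademacher estimate restricted to the subclass of functions
    whose empirical loss is within `radius` of the minimum (the
    localized, data-dependent penalty idea in caricature).
    """
    rng = np.random.default_rng(seed)
    sub = predictions[emp_losses <= emp_losses.min() + radius]
    n = sub.shape[1]
    draws = [(sub @ rng.choice([-1.0, 1.0], size=n)).max() / n
             for _ in range(n_draws)]
    return float(np.mean(draws))

# localizing to a single well-fitting function gives a smaller penalty
# than averaging the maximum over the whole class
preds = np.array([[0.0, 0.0, 0.0, 0.0],
                  [1.0, 0.0, 1.0, 0.0],
                  [1.0, 1.0, 1.0, 1.0]])
losses = np.array([0.0, 0.5, 1.0])
local = localized_rademacher(preds, losses, radius=0.1)
full = localized_rademacher(preds, losses, radius=2.0)
```

Here `local` comes from a single constant function and is exactly zero, while `full` pays for the whole class; that gap is the advantage the abstract refers to.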
Rademacher Penalization over Decision Tree Prunings
 In Proc. 14th European Conference on Machine Learning
, 2003
Abstract

Cited by 4 (0 self)
Rademacher penalization is a modern technique for obtaining data-dependent bounds on the generalization error of classifiers. It would appear to be limited to relatively simple hypothesis classes because of computational complexity issues. In this paper we nevertheless apply Rademacher penalization to the practically important hypothesis class of unrestricted decision trees, by considering the prunings of a given decision tree rather than the tree-growing phase. Moreover, we generalize the error-bounding approach from binary classification to multiclass situations. Our empirical experiments indicate that the proposed new bounds clearly outperform earlier bounds for decision tree prunings and provide non-trivial error estimates on real-world data sets.
A permutation approach to validation
 In Proc. 10th SIAM International Conference on Data Mining (SDM)
, 2010
Abstract

Cited by 1 (1 self)
We give a permutation approach to validation (estimation of out-of-sample error). One typical use of validation is model selection. We establish the legitimacy of the proposed permutation complexity by proving a uniform bound on the out-of-sample error, similar to a VC-style bound. We extensively demonstrate this approach experimentally on synthetic data, standard data sets from the UCI repository, and a novel diffusion data set. The out-of-sample error estimates are comparable to cross-validation (CV); yet the method is more efficient and more robust, being less susceptible to overfitting during model selection.
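The permutation idea can be sketched under a simplified protocol (the names and the signature below are illustrative, not the paper's estimator): measure how well the learning procedure can fit randomly permuted labels, in which all signal has been destroyed, so that any fit reflects pure capacity:

```python
import numpy as np

def permutation_penalty(fit_predict, X, y, n_perm=20, seed=0):
    """Permutation-based complexity sketch: average in-sample accuracy
    of the procedure on randomly permuted labels.  `fit_predict(X, y)`
    must return in-sample predictions (illustrative interface).
    """
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_perm):
        y_perm = rng.permutation(y)                  # destroy the signal
        accs.append(np.mean(fit_predict(X, y_perm) == y_perm))
    return float(np.mean(accs))

# a memoriser fits any labelling perfectly -> maximal complexity
memoriser = lambda X, y: y
# a constant (majority-class) predictor fits noise only at the base rate
majority = lambda X, y: np.full(len(y), np.bincount(y).argmax())

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))
y = rng.integers(0, 2, size=60)
```

On this toy data the memoriser scores 1.0 and the constant predictor scores near the majority-class rate, so the penalty separates high-capacity from low-capacity procedures.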
Fast Minimum Training Error Discretization
, 2002
Abstract
The need to discretize a numerical range into class-coherent intervals is a frequently encountered problem. Training Set Error (TSE) is one of the impurity functions commonly used in this task.
DATA-DEPENDENT GENERALIZATION PERFORMANCE ASSESSMENT VIA QUASI-CONVEX OPTIMIZATION
Abstract
Compared to classical distribution-independent bounds based on the VC dimension, recent data-dependent bounds based on Rademacher complexity yield tighter upper bounds that may offer practical utility for model selection, as suggested by several investigations. We present an approach to kernel machine learning and generalization performance assessment that integrates concepts from prior work on Rademacher-type data-dependent generalization bounds and learning based on the optimization of quasi-convex losses. Our main contribution is the direct estimation of the Rademacher penalty in order to obtain a tighter generalization bound. Specifically, we define the optimization task for the case of learning with the ramp loss and show that direct estimation of the Rademacher penalty can be accomplished by solving a series of quadratic programming problems.
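For concreteness, here is the ramp loss in a common hinge-difference parametrisation (a standard form, assumed here rather than taken from the paper); one reason it pairs well with Rademacher-type analysis is that, unlike the hinge, it is bounded:

```python
import numpy as np

def ramp_loss(z, s=0.0):
    """Ramp loss R_s(z) = max(0, 1 - z) - max(0, s - z): behaves like
    the hinge for margins between s and 1, is zero for margins >= 1,
    and saturates at the cap 1 - s for margins <= s.
    """
    return np.maximum(0.0, 1.0 - z) - np.maximum(0.0, s - z)

margins = np.array([-2.0, 0.0, 0.5, 1.0, 2.0])
losses = ramp_loss(margins)  # -> [1.0, 1.0, 0.5, 0.0, 0.0]
```

Because badly misclassified points all pay the same capped loss, a single outlier cannot dominate the empirical objective, at the cost of convexity (the ramp is only quasi-convex).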
Permutation Complexity Bound on Out-Sample Error
Abstract
We define a data-dependent permutation complexity for a hypothesis set H, which is similar to a Rademacher complexity or maximum discrepancy. The crucial difference is that it is based on permutation (dependent) sampling. We prove a uniform bound on the generalization error, as well as a concentration result implying that the permutation complexity can be estimated efficiently.
Model Selection by Loss Rank for Classification and Unsupervised Learning
, 2010
Abstract
Hutter (2007) recently introduced the loss rank principle (LoRP) as a general-purpose principle for model selection. The LoRP enjoys many attractive properties and deserves further investigation. The LoRP has been well studied in the regression framework by Hutter and Tran (2010). In this paper, we study the LoRP in the classification framework, and develop it further for model selection problems in unsupervised learning, where the main interest is to describe the associations between input measurements, as in cluster analysis or graphical modelling. Theoretical properties and simulation studies are presented.