- Annals of Statistics, 2006
Abstract - Cited by 60 (2 self)
Extracting useful information from high-dimensional data is an important focus of today's statistical research and practice. Penalized loss function minimization has been shown to be effective for this task both theoretically and empirically. With the virtues of both regularization and sparsity, the L1-penalized L2 minimization method Lasso has been popular in regression models. In this paper, we combine different norms, including L1, to form an intelligent penalty that adds side information to the fitting of a regression or classification model and yields reasonable estimates. Specifically, we introduce the Composite Absolute Penalties (CAP) family, which allows grouping and hierarchical relationships between the predictors to be expressed. CAP penalties are built by defining groups and combining the properties of norm penalties at the across-group and within-group levels. Grouped selection occurs for non-overlapping groups; in that case, we give a Bayesian interpretation for CAP penalties. Hierarchical variable selection is achieved by defining groups with particular overlapping patterns. On the computational side, we propose using the BLasso algorithm and cross-validation to obtain CAP estimates. For a subfamily of CAP estimates involving only the L1 and L∞ norms, we introduce the iCAP algorithm to trace the entire regularization path for the grouped selection problem. Within this subfamily, unbiased estimates of the degrees of freedom (df) are derived, allowing the regularization parameter to be selected without cross-validation. CAP is shown to improve on the predictive performance of the Lasso in a series of simulated experiments, including cases with p >> n and mis-specified groupings. When the complexity of a model is properly calculated, iCAP is seen to be parsimonious in the experiments.
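To make the penalty construction concrete, a minimal sketch of evaluating a composite penalty of this shape: an L1 combination (a sum) across groups of a within-group norm, which for the L1/L∞ subfamily mentioned above reduces to summing each group's maximum absolute coefficient. The function name `cap_penalty` and the example coefficients and groups are illustrative, not from the paper, and this only evaluates the penalty term, not the full penalized fitting procedure.

```python
import numpy as np

def cap_penalty(beta, groups, within_ord=np.inf):
    """Evaluate an L1-across-groups composite penalty: the sum over
    groups of the within-group norm of the coefficients.

    With within_ord=np.inf, each group contributes the max absolute
    value of its coefficients (the L1/L-infinity subfamily); other
    ord values give other members of the family.
    """
    return sum(np.linalg.norm(beta[idx], ord=within_ord) for idx in groups)

# Hypothetical example: four coefficients in two non-overlapping groups.
beta = np.array([0.5, -2.0, 0.0, 1.5])
groups = [[0, 1], [2, 3]]
# Group 1 contributes max(|0.5|, |-2.0|) = 2.0; group 2 contributes 1.5.
print(cap_penalty(beta, groups))  # → 3.5
```

Because a large coefficient caps the within-group cost for its whole group, the penalty encourages coefficients in the same group to enter or leave the model together, which is the grouped-selection behavior described above.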