## High-dimensional union support recovery in multivariate regression

Citations: 28 (0 self)

### BibTeX

    @MISC{Obozinski_high-dimensionalunion,
      author = {Guillaume Obozinski and Martin J. Wainwright and Michael I. Jordan},
      title = {High-dimensional union support recovery in multivariate regression},
      year = {}
    }


### Citations

1836 | Regression shrinkage and selection via the lasso
- Tibshirani
- 1996

Citation Context: ...ous sparse approximation [9], and simultaneous feature selection in multi-task learning [5]. Block-norms that compose an ℓ1-norm with other norms yield solutions that tend to be sparse like the Lasso [8], but the structured norm also enforces blockwise sparsity, in the sense that parameters within blocks are more likely zero (or non-zero) simultaneously. The focus of this paper is the model selection...
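The block-norm idea quoted in the context above can be made concrete with a minimal sketch. The example below (an illustration, not code from the paper) computes the ℓ1/ℓ2 block norm over the rows of a coefficient matrix and applies its proximal operator (block soft-thresholding), which zeroes entire rows at once; this is the mechanism by which parameters within a block become zero or non-zero simultaneously. Function names and the threshold `tau` are hypothetical choices for the sketch.

```python
import numpy as np

def l1_l2_norm(B):
    # ℓ1/ℓ2 block norm: sum over rows (blocks) of each row's Euclidean norm.
    return np.sum(np.linalg.norm(B, axis=1))

def block_soft_threshold(B, tau):
    # Proximal operator of tau * (ℓ1/ℓ2 norm): shrinks every row toward zero
    # and zeroes any row whose norm falls below tau -- blockwise sparsity.
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return B * scale

B = np.array([[3.0, 4.0],   # row norm 5.0 -> shrunk but kept
              [0.3, 0.4]])  # row norm 0.5 -> entire row zeroed
print(block_soft_threshold(B, 1.0))
```

Under this toy threshold, the second row is eliminated as a whole block, whereas a plain elementwise ℓ1 penalty could zero its two entries independently.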

505 | Model selection and estimation in regression with grouped variables
- Yuan, Lin

Citation Context: ...ine learning has focused on regularization based on block-structured norms. Such structured norms are well-motivated in various settings, among them kernel learning [3, 6], grouped variable selection [11], hierarchical model selection [12], simultaneous sparse approximation [9], and simultaneous feature selection in multi-task learning [5]. Block-norms that compose an ℓ1-norm with other norms yield so...

300 | Just relax: Convex programming methods for identifying sparse signals in noise
- Tropp
- 2006

Citation Context: ...Such structured norms are well-motivated in various settings, among them kernel learning [3, 6], grouped variable selection [11], hierarchical model selection [12], simultaneous sparse approximation [9], and simultaneous feature selection in multi-task learning [5]. Block-norms that compose an ℓ1-norm with other norms yield solutions that tend to be sparse like the Lasso [8], but the structured norm...

297 | Stable recovery of sparse overcomplete representations in the presence of noise
- Donoho, Elad, et al.

Citation Context: ...relevant covariates that are active in at least one regression. We refer to this problem as the support union problem. In line with a large body of recent work in statistical machine learning (e.g., [2, 7, 13, 10]), our analysis is high-dimensional in nature, meaning that we allow the model dimension p (as well as other structural parameters) to grow along with the sample size n. A great deal of work has focus...

276 | Multiple kernel learning, conic duality, and the SMO algorithm
- Bach, Lanckriet, et al.

Citation Context: ...n A recent line of research in machine learning has focused on regularization based on block-structured norms. Such structured norms are well-motivated in various settings, among them kernel learning [3, 6], grouped variable selection [11], hierarchical model selection [12], simultaneous sparse approximation [9], and simultaneous feature selection in multi-task learning [5]. Block-norms that compose an ...

161 | Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using
- Wainwright

Citation Context: ...relevant covariates that are active in at least one regression. We refer to this problem as the support union problem. In line with a large body of recent work in statistical machine learning (e.g., [2, 7, 13, 10]), our analysis is high-dimensional in nature, meaning that we allow the model dimension p (as well as other structural parameters) to grow along with the sample size n. A great deal of work has focus...

155 | Consistency of the group Lasso and multiple kernel learning
- Bach

Citation Context: ...over the support of a sparse signal even when p ≫ n. Some more recent work has studied consistency issues for block-regularization schemes, including classical analysis (p fixed) of the group Lasso [1], and high-dimensional analysis of the predictive risk of block-regularized logistic regression [4]. Although there have been various empirical demonstrations of the benefits of block regularization, ...

140 | The group lasso for logistic regression
- Meier, van de Geer, et al.

Citation Context: ...y issues for block-regularization schemes, including classical analysis (p fixed) of the group Lasso [1], and high-dimensional analysis of the predictive risk of block-regularized logistic regression [4]. Although there have been various empirical demonstrations of the benefits of block regularization, to the best of our knowledge, there has not yet been theoretical analysis of such improvements. In ...

96 | Learning the kernel function via regularization
- Micchelli, Pontil

Citation Context: ...n A recent line of research in machine learning has focused on regularization based on block-structured norms. Such structured norms are well-motivated in various settings, among them kernel learning [3, 6], grouped variable selection [11], hierarchical model selection [12], simultaneous sparse approximation [9], and simultaneous feature selection in multi-task learning [5]. Block-norms that compose an ...

88 | Grouped and hierarchical model selection through composite absolute penalties
- Zhao, Rocha, et al.
- 2006

Citation Context: ...ization based on block-structured norms. Such structured norms are well-motivated in various settings, among them kernel learning [3, 6], grouped variable selection [11], hierarchical model selection [12], simultaneous sparse approximation [9], and simultaneous feature selection in multi-task learning [5]. Block-norms that compose an ℓ1-norm with other norms yield solutions that tend to be sparse like...

76 | Sparse additive models
- Ravikumar, Lafferty, et al.
- 2009

Citation Context: ...relevant covariates that are active in at least one regression. We refer to this problem as the support union problem. In line with a large body of recent work in statistical machine learning (e.g., [2, 7, 13, 10]), our analysis is high-dimensional in nature, meaning that we allow the model dimension p (as well as other structural parameters) to grow along with the sample size n. A great deal of work has focus...

11 | Joint covariate selection for grouped classification
- Obozinski, Taskar, et al.
- 2007

Citation Context: ...among them kernel learning [3, 6], grouped variable selection [11], hierarchical model selection [12], simultaneous sparse approximation [9], and simultaneous feature selection in multi-task learning [5]. Block-norms that compose an ℓ1-norm with other norms yield solutions that tend to be sparse like the Lasso [8], but the structured norm also enforces blockwise sparsity, in the sense that parameters...

8 | Model selection with the lasso
- Zhao, Yu
- 2006