Results 1-10 of 14
Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
IEEE Journal of Selected Topics in Signal Processing, 2007
"... Abstract—Many problems in signal processing and statistical inference involve finding sparse solutions to underdetermined, or illconditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined wi ..."
Abstract

Cited by 304 (15 self)
 Add to MetaCart
Many problems in signal processing and statistical inference involve finding sparse solutions to underdetermined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a sparseness-inducing (ℓ1) regularization term. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution, and compressed sensing are a few well-known examples of this approach. This paper proposes gradient projection (GP) algorithms for the bound-constrained quadratic programming (BCQP) formulation of these problems. We test variants of this approach that select the line search parameters in different ways, including techniques based on the Barzilai-Borwein method. Computational experiments show that these GP approaches perform well in a wide range of applications, often being significantly faster (in terms of computation time) than competing methods. Although the performance of GP methods tends to degrade as the regularization term is de-emphasized, we show how they can be embedded in a continuation scheme to recover their efficient practical performance.
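As a concrete illustration of the BCQP formulation, the sketch below (ours, not the authors' GPSR code) splits x = u - v with u, v >= 0 and applies a fixed-step projected-gradient update; the step size and iteration count are placeholders where the paper uses adaptive line-search rules.

```python
import numpy as np

def gp_bcqp_sketch(A, y, tau, alpha=1e-3, iters=500):
    """Gradient projection on the BCQP split x = u - v, u, v >= 0, of
        min_x  0.5 * ||y - A x||^2 + tau * ||x||_1.
    A fixed step size alpha stands in for the paper's adaptive
    line-search rules (e.g. the Barzilai-Borwein variants)."""
    n = A.shape[1]
    u = np.zeros(n)   # positive part of x
    v = np.zeros(n)   # negative part of x
    for _ in range(iters):
        g = A.T @ (A @ (u - v) - y)      # gradient of the quadratic term
        # take a gradient step and project onto the nonnegative orthant
        u = np.maximum(u - alpha * (g + tau), 0.0)
        v = np.maximum(v - alpha * (-g + tau), 0.0)
    return u - v
```

Splitting x into positive and negative parts turns the nonsmooth ℓ1 term into the linear term tau*(sum(u) + sum(v)) over the nonnegative orthant, which is what makes plain gradient projection applicable.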
Simultaneous analysis of Lasso and Dantzig selector
Annals of Statistics, 2009
"... We show that, under a sparsity scenario, the Lasso estimator and the Dantzig selector exhibit similar behavior. For both methods, we derive, in parallel, oracle inequalities for the prediction risk in the general nonparametric regression model, as well as bounds on the ℓp estimation loss for 1 ≤ p ≤ ..."
Abstract

Cited by 187 (6 self)
 Add to MetaCart
We show that, under a sparsity scenario, the Lasso estimator and the Dantzig selector exhibit similar behavior. For both methods, we derive, in parallel, oracle inequalities for the prediction risk in the general nonparametric regression model, as well as bounds on the ℓp estimation loss for 1 ≤ p ≤ 2 in the linear model when the number of variables can be much larger than the sample size.
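For reference, the two estimators being compared are conventionally defined as below; normalizations of the tuning parameter λ differ across papers, so treat this as the textbook form rather than a quotation of the paper's notation.

```latex
\hat\beta^{\mathrm{Lasso}}
  = \arg\min_{\beta}\;
    \tfrac{1}{n}\lVert y - X\beta\rVert_2^2 + \lambda\lVert\beta\rVert_1,
\qquad
\hat\beta^{\mathrm{Dantzig}}
  = \arg\min_{\beta}\;\lVert\beta\rVert_1
  \;\;\text{s.t.}\;\;
  \tfrac{1}{n}\bigl\lVert X^{\top}(y - X\beta)\bigr\rVert_\infty \le \lambda .
```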
Sparsity oracle inequalities for the lasso
 Electronic Journal of Statistics
"... Abstract: This paper studies oracle properties of ℓ1penalized least squares in nonparametric regression setting with random design. We show that the penalized least squares estimator satisfies sparsity oracle inequalities, i.e., bounds in terms of the number of nonzero components of the oracle vec ..."
Abstract

Cited by 83 (11 self)
 Add to MetaCart
This paper studies oracle properties of ℓ1-penalized least squares in a nonparametric regression setting with random design. We show that the penalized least squares estimator satisfies sparsity oracle inequalities, i.e., bounds in terms of the number of nonzero components of the oracle vector. The results are valid even when the dimension of the model is (much) larger than the sample size and the regression matrix is not positive definite. They can be applied to high-dimensional linear regression, to nonparametric adaptive regression estimation, and to the problem of aggregation of arbitrary estimators.
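Schematically, a sparsity oracle inequality bounds the risk of the estimator by the best trade-off between approximation error and a price proportional to the oracle's sparsity; the constant and the exact remainder term below are illustrative only, as the paper's precise statements depend on its assumptions.

```latex
\lVert \hat f - f \rVert^{2}
  \;\le\;
  C \,\inf_{\beta}
  \Bigl\{\, \lVert f_{\beta} - f \rVert^{2}
          + \frac{M(\beta)\,\log p}{n} \,\Bigr\},
\qquad
M(\beta) = \#\{\, j : \beta_j \neq 0 \,\}.
```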
FIXED-POINT CONTINUATION FOR ℓ1-MINIMIZATION: METHODOLOGY AND CONVERGENCE
"... We present a framework for solving largescale ℓ1regularized convex minimization problem: min �x�1 + µf(x). Our approach is based on two powerful algorithmic ideas: operatorsplitting and continuation. Operatorsplitting results in a fixedpoint algorithm for any given scalar µ; continuation refers ..."
Abstract

Cited by 45 (9 self)
 Add to MetaCart
We present a framework for solving the large-scale ℓ1-regularized convex minimization problem min ‖x‖₁ + µf(x). Our approach is based on two powerful algorithmic ideas: operator splitting and continuation. Operator splitting results in a fixed-point algorithm for any given scalar µ; continuation refers to approximately following the path traced by the optimal value of x as µ increases. In this paper, we study the structure of optimal solution sets; prove finite convergence for important quantities; and establish q-linear convergence rates for the fixed-point algorithm applied to problems with f(x) convex, but not necessarily strictly convex. The continuation framework, motivated by our convergence results, is demonstrated to facilitate the construction of practical algorithms.
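The following Python sketch shows the two ingredients under the common choice f(x) = 0.5‖Ax - b‖²: the soft-thresholding fixed-point iteration for a fixed µ, wrapped in a continuation loop that increases µ toward its target. The step size and the doubling schedule are our placeholders, not the paper's tuned settings.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fpc_sketch(A, b, mu_target, inner=200):
    """Fixed-point continuation sketch for
        min_x  ||x||_1 + mu * f(x),   f(x) = 0.5 * ||A x - b||^2.
    Inner loop: the operator-splitting fixed point
        x <- soft(x - tau * mu * grad f(x), tau).
    Outer loop: continuation, increasing mu toward mu_target."""
    tau = 1.0 / (mu_target * np.linalg.norm(A, 2) ** 2)  # conservative step
    x = np.zeros(A.shape[1])
    mu = mu_target / 64.0            # start with a heavier l1 weighting
    while True:
        for _ in range(inner):
            grad = A.T @ (A @ x - b)             # grad f(x)
            x = soft(x - tau * mu * grad, tau)   # fixed-point step
        if mu >= mu_target:
            return x
        mu = min(2.0 * mu, mu_target)            # follow the path in mu
```

Starting with a small µ makes the ℓ1 term dominate, so early iterates are very sparse; each doubling of µ then warm-starts the next fixed-point solve from a nearby point on the solution path.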
Linear convergence of iterative soft-thresholding
Journal of Fourier Analysis and Applications
"... ABSTRACT. In this article a unified approach to iterative softthresholding algorithms for the solution of linear operator equations in infinite dimensional Hilbert spaces is presented. We formulate the algorithm in the framework of generalized gradient methods and present a new convergence analysis ..."
Abstract

Cited by 35 (9 self)
 Add to MetaCart
In this article, a unified approach to iterative soft-thresholding algorithms for the solution of linear operator equations in infinite-dimensional Hilbert spaces is presented. We formulate the algorithm in the framework of generalized gradient methods and present a new convergence analysis. As the main result, we show that the algorithm converges at a linear rate as soon as the underlying operator satisfies the so-called finite basis injectivity property or the minimizer possesses a so-called strict sparsity pattern. Moreover, it is shown that the constants can be calculated explicitly in special cases (i.e., for compact operators). Furthermore, the techniques can also be used to establish linear convergence for related methods such as the iterative thresholding algorithm for joint sparsity and the accelerated gradient projection method.
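In finite-dimensional notation (ours; the paper works with operators on Hilbert spaces), the iteration under study is the soft-thresholding step, and linear convergence means geometric contraction to the minimizer:

```latex
% Iterative soft-thresholding for  min_x 0.5*||Ax - b||^2 + alpha*||x||_1,
% with step size s:
x^{k+1} = \mathbb{S}_{s\alpha}\bigl(x^{k} - s\,A^{\top}(A x^{k} - b)\bigr),
\qquad
\bigl(\mathbb{S}_{t}(z)\bigr)_{i}
  = \operatorname{sign}(z_{i})\max\bigl(\lvert z_{i}\rvert - t,\,0\bigr).
% Linear convergence: there exist C > 0 and \lambda \in (0, 1) such that
\lVert x^{k} - x^{\star}\rVert \le C\,\lambda^{k}.
```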
Geometry and homotopy for ℓ1 sparse representations
"... Geometry and homotopy for ℓ1 sparse representations Abstract—We explore the geometry of ℓ1 sparse representations in both the noiseless (Basis Pursuit) and noisy (Basis Pursuit DeNoising) case using a homotopy method. We will see that the concept of the basis vertex c, which has unit inner product ..."
Abstract
 Add to MetaCart
We explore the geometry of ℓ1 sparse representations in both the noiseless (Basis Pursuit) and noisy (Basis Pursuit DeNoising) cases using a homotopy method. We show that the basis vertex c, which has unit inner product with the active basis vectors, is a useful geometric concept, both for visualization and for algorithm construction. We derive an explicit homotopy continuation algorithm and find that this method has interesting parallels with the Polytope Faces Pursuit algorithm in the noiseless case. Numerical results confirm the operation of the algorithm.
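The abstract's defining property of the basis vertex can be written compactly (the active-set notation A_Γ is our shorthand, not taken from the paper):

```latex
\langle a_{i}, c\rangle = 1
  \quad \text{for every active basis vector } a_{i},
\qquad\text{i.e.}\qquad
A_{\Gamma}^{\top} c = \mathbf{1}.
```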