## Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing (2008)

Venue: SIAM J. Imaging Sci.

Citations: 65 (14 self)

### BibTeX

```bibtex
@ARTICLE{Yin08bregmaniterative,
  author  = {Wotao Yin and Stanley Osher and Donald Goldfarb and Jerome Darbon},
  title   = {Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing},
  journal = {SIAM J. Imaging Sci.},
  year    = {2008},
  pages   = {143--168}
}
```

### Abstract

We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖1 : Au = f, u ∈ R^n}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u∈R^n} μ‖u‖1 + (1/2)‖Au − f^k‖², for a given matrix A and vector f^k. We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and A^⊤ can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.
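
The scheme the abstract describes can be sketched in a few lines. The following is a minimal illustration of Bregman iteration for basis pursuit, not the authors' code: the unconstrained subproblem is solved here by plain iterative soft-thresholding (ISTA) rather than the fixed-point continuation solver the paper uses, and all sizes and parameters are hypothetical.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: the proximal operator of t*||.||_1, applied elementwise."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def solve_subproblem(A, fk, mu, steps=3000):
    """Approximately solve min_u mu*||u||_1 + 0.5*||A u - fk||^2 via ISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the quadratic's gradient
    u = np.zeros(A.shape[1])
    for _ in range(steps):
        u = shrink(u - A.T @ (A @ u - fk) / L, mu / L)
    return u

def bregman_basis_pursuit(A, f, mu, outer=10, tol=1e-6):
    """Bregman iteration: repeatedly add the residual back into the data term."""
    fk = np.zeros_like(f)
    u = np.zeros(A.shape[1])
    for _ in range(outer):
        fk = fk + (f - A @ u)              # f^{k+1} = f^k + (f - A u^k)
        u = solve_subproblem(A, fk, mu)
        if np.linalg.norm(A @ u - f) <= tol * np.linalg.norm(f):
            break
    return u
```

Consistent with the abstract's claim, only a handful of outer iterations are typically needed; the subproblem solver is the interchangeable part.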

### Citations

2066 | Regression shrinkage and selection via the Lasso
- Tibshirani
- 1996
Citation Context: ...ed conjugate gradient method, for which the authors developed an efficient preconditioner. In the code SPGL1 [90], Van den Berg and Friedlander apply an iterative method for solving the LASSO problem [85], which minimizes ‖Au − f‖ subject to ‖u‖1 ≤ σ, by using an increasing sequence of σ-values in their algorithm to accelerate the computation. In [71], Nesterov proposes an accelerated multistep gradie...

1885 | Compressed sensing
- Donoho
- 2006
Citation Context: ...it problem (1.1) arises in the applications of compressed sensing (CS). A recent burst of research in CS was led by Candès and Romberg [12], Candès, Romberg, and Tao [14], Candès and Tao [16], Donoho [35], Donoho and Tanner [36], Tsaig and Donoho [88], and others [80, 86]. The fundamental principle of CS is that, through optimization, the sparsity of a signal can be exploited for recovering that signa...

1799 | Atomic Decomposition by Basis Pursuit
- Chen, Donoho, et al.
- 1999
Citation Context: ...sensing, iterative regularization, Bregman distances. AMS subject classifications: 49, 90, 65. DOI: 10.1137/070703983. 1. Introduction. Let A ∈ R^{m×n}, f ∈ R^m, and u ∈ R^n. The basis pursuit problem [23] solves the constrained minimization problem (1.1) (Basis Pursuit) min_u {‖u‖1 : Au = f} to determine an ℓ1-minimal solution u_opt of the linear system Au = f, typically underdetermined; i.e., m < n (in ma...

1537 | Nonlinear total variation based noise removal algorithms
- Rudin, Osher, et al.
- 1992
Citation Context: ...ocessing; it was then extended to wavelet-based denoising [95], nonlinear inverse scale space in [10, 11], and compressed sensing in MR imaging [59]. The authors of [73] extend the Rudin–Osher–Fatemi [81] model (2.8) min_u μ∫|∇u| + (1/2)‖u − b‖², where u is an unknown image, b is typically an input noisy measurement of a clean image ū, and μ is a tuning parameter, into an iterative regularization mo...

1416 | Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
- 2006
Citation Context: ...than one solution. The basis pursuit problem (1.1) arises in the applications of compressed sensing (CS). A recent burst of research in CS was led by Candès and Romberg [12], Candès, Romberg, and Tao [14], Candès and Tao [16], Donoho [35], Donoho and Tanner [36], Tsaig and Donoho [88], and others [80, 86]. The fundamental principle of CS is that, through optimization, the sparsity of a signal can be e...

891 | Near-optimal signal recovery from random projections: Universal encoding strategies
- Candès, Tao
Citation Context: ...e basis pursuit problem (1.1) arises in the applications of compressed sensing (CS). A recent burst of research in CS was led by Candès and Romberg [12], Candès, Romberg, and Tao [14], Candès and Tao [16], Donoho [35], Donoho and Tanner [36], Tsaig and Donoho [88], and others [80, 86]. The fundamental principle of CS is that, through optimization, the sparsity of a signal can be exploited for recoveri...

704 | Decoding by linear programming
- Candes, Tao
- 2005
Citation Context: ...ution u_opt = ū for any ū whenever k, m, n, and A satisfy certain conditions (e.g., see [13, 32, 39, 44, 80, 100, 101]). While these conditions are computationally intractable to check, it was found in [15, 16] and other work that the types of matrices A allowing a high compression ratio (i.e., m ≪ n) include random matrices with independent and identically distributed (i.i.d.) entries and random ensembles ...

468 | An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Daubechies, Defrise, et al.
Citation Context: ...act that f may contain noise in certain applications, makes solving the unconstrained problem (1.2) min_u μ‖u‖1 + (1/2)‖Au − f‖² more preferable than solving the constrained problem (1.1) (e.g., see [26, 27, 29, 37, 41, 49, 58, 62, 89]). Hereafter, we use ‖·‖ ≡ ‖·‖2 to denote the 2-norm. In section 2.1, we give a review of recent numerical methods for solving (1.2). Because (1.2) also allows the constraint Au = f to be relaxed, it is...
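
For orientation (an illustrative note of mine, not from the excerpt): when A is the identity, the unconstrained problem (1.2) separates coordinate-wise and has the closed-form "shrinkage" solution u_i = sign(f_i)·max(|f_i| − μ, 0), which is why soft-thresholding operators appear throughout the solvers cited on this page. A quick numerical sanity check:

```python
import numpy as np

def shrink(f, mu):
    """Closed-form minimizer of mu*|u| + 0.5*(u - f)^2, applied elementwise."""
    return np.sign(f) * np.maximum(np.abs(f) - mu, 0.0)

# Verify against a brute-force grid search for one coordinate.
f, mu = 1.7, 0.5
grid = np.linspace(-3, 3, 600001)                       # step 1e-5
brute = grid[np.argmin(mu * np.abs(grid) + 0.5 * (grid - f) ** 2)]
assert abs(shrink(np.array([f]), mu)[0] - brute) < 1e-4  # both give 1.2
```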

386 | An algorithm for total variation minimization and applications
- Chambolle
- 2004
Citation Context: ...in [5] using an auxiliary variable and the idea from Chambolle’s projection method [17], Elad in [38] and Elad et al. [40] for sparse representation and other related problems, Daubechies, Defrise, and De Mol in [29] through an optimization transfer technique, Combettes and Pesquet [26]...

326 | Just relax: Convex programming methods for identifying sparse signals in noise
- Tropp
Citation Context: ...g (CS). A recent burst of research in CS was led by Candès and Romberg [12], Candès, Romberg, and Tao [14], Candès and Tao [16], Donoho [35], Donoho and Tanner [36], Tsaig and Donoho [88], and others [80, 86]. The fundamental principle of CS is that, through optimization, the sparsity of a signal can be exploited for recovering that signal from incomplete measurements of it. Let the vector ū ∈ R^n denote a...

318 | Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
- Figueiredo, Nowak, et al.
- 2007
Citation Context: ...eme, and conclude the paper in section 6. 2. Background. 2.1. Solving the unconstrained problem (1.2). Several recent algorithms can efficiently solve (1.2) with large-scale data. The authors of GPSR [48], Figueiredo, Nowak, and Wright [49], reformulate (1.2) as a box-constrained quadratic program, to which they apply the gradient projection method with Barzilai–Borwein steps. The algorithm ℓ1-ℓs [64]...

293 | The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming
- Bregman
- 1967
Citation Context: ...meter, into an iterative regularization model by using the Bregman distance (2.10) below based on the total variation functional: (2.9) J(u) = μTV(u) = μ∫|∇u|. Specifically, the Bregman distance [9] based on a convex functional J(·) between points u and v is defined as (2.10) D^p_J(u, v) = J(u) − J(v) − ⟨p, u − v⟩, where p ∈ ∂J(v) is some subgradient in the subdifferential of J at the point v. B...
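
Definition (2.10) is easy to sanity-check numerically. As an illustrative example of mine (not the paper's), for the smooth convex functional J(u) = (1/2)‖u‖² the only subgradient at v is v itself, and the Bregman distance reduces to (1/2)‖u − v‖²:

```python
import numpy as np

def bregman_distance(J, grad_J, u, v):
    """D^p_J(u, v) = J(u) - J(v) - <p, u - v>, with p = grad_J(v) in the smooth case."""
    return J(u) - J(v) - grad_J(v) @ (u - v)

J = lambda x: 0.5 * x @ x      # J(u) = 0.5*||u||^2, so grad J(v) = v
grad_J = lambda x: x
u = np.array([1.0, 2.0])
v = np.array([0.0, -1.0])
# For this J the Bregman distance equals half the squared Euclidean distance.
assert abs(bregman_distance(J, grad_J, u, v) - 0.5 * np.sum((u - v) ** 2)) < 1e-12
```

Unlike a metric, D^p_J is generally not symmetric in u and v; this quadratic case is the exception.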

255 | An EM algorithm for wavelet-based image restoration
- Figueiredo, Nowak
Citation Context: ...ulevsky in [41] for image denoising, Fadili and Starck [43] for sparse representation-based image deconvolution, Figueiredo and Nowak [47] for image deconvolution, Figueiredo, Bioucas-Dias, and Nowak [45] for wavelet-based image denoising using majorization-minimization algorithms, and Reeves and Kingsbury [78] for image coding. While all of these authors used different approaches, they all developed ...

248 | Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine
- Lustig, Donoho, et al.
- 2007
Citation Context: ...rices formed from random sets of rows of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processin...

207 | Gradient methods for minimizing composite objective function
- Nesterov
- 2007
Citation Context: ...an iterative method for solving the LASSO problem [85], which minimizes ‖Au − f‖ subject to ‖u‖1 ≤ σ, by using an increasing sequence of σ-values in their algorithm to accelerate the computation. In [71], Nesterov proposes an accelerated multistep gradient method with an error convergence rate O(1/k²). Under some conditions, the greedy approach StOMP [37] by Donoho, Tsaig, Drori, and Starck can als...

206 | Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage, tip
- Chambolle, DeVore, et al.
- 1998
Citation Context: ...xture model, Bioucas-Dias and Figueiredo for a recent “two-step” shrinkage-based algorithm [7], Blumensath and Davies [8] for solving a cardinality constrained least-squares problem, Chambolle et al. [19] for image denoising, Daubechies, Fornasier, and Loris [30] for a direct and accelerated projected gradient method, Elad, Matalon, and Zibulevsky in [41] for image denoising, Fadili and Starck [43] fo...

192 | Multiplier and gradient methods
- Hestenes
- 1969
Citation Context: ...s paper, we found that the Bregman iterative method Algorithm 1 is equivalent to the well-known augmented Lagrangian method (also known as the method of multipliers), which was introduced by Hestenes [60] and Powell [75] and was later generalized by Rockafellar [79]. To solve the constrained optimization problem (3.22) min_u s(u), subject to c_i(u) = 0, i = 1, ..., m, the augmented Lagrangian method minimize...

187 | A method for nonlinear constraints in minimization problems
- Powell
- 1969
Citation Context: ...d that the Bregman iterative method Algorithm 1 is equivalent to the well-known augmented Lagrangian method (also known as the method of multipliers), which was introduced by Hestenes [60] and Powell [75] and was later generalized by Rockafellar [79]. To solve the constrained optimization problem (3.22) min_u s(u), subject to c_i(u) = 0, i = 1, ..., m, the augmented Lagrangian method minimizes the augmented ...
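
The method-of-multipliers loop referenced in this context is simple to sketch. Below is a minimal illustration with assumptions of mine (a quadratic objective s(u) = (1/2)‖u‖² and a single linear constraint aᵀu = b, not the paper's setting), showing the multiplier update λ ← λ + ρ·c(u) after each minimization of the augmented Lagrangian:

```python
import numpy as np

# Minimize s(u) = 0.5*||u||^2 subject to a^T u - b = 0 via the augmented Lagrangian
#   L_rho(u, lam) = s(u) + lam*(a^T u - b) + 0.5*rho*(a^T u - b)^2.
a, b, rho = np.array([1.0, 2.0]), 3.0, 1.0
lam = 0.0
for _ in range(50):
    # Setting grad_u L_rho = 0 gives the linear system (I + rho*a a^T) u = (rho*b - lam)*a.
    M = np.eye(2) + rho * np.outer(a, a)
    u = np.linalg.solve(M, (rho * b - lam) * a)
    lam += rho * (a @ u - b)               # multiplier (dual ascent) step

# Exact solution: u = b * a / ||a||^2 = [0.6, 1.2], lam = -0.6.
```

Each pass mirrors one Bregman iteration: an unconstrained minimization followed by a simple update that enforces the constraint in the limit.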

185 | Sparse Solution of Underdetermined Linear Equations by Stagewise Orthogonal Matching Pursuit, Stanford
- Donoho, Tsaig, et al.
- 2006
Citation Context: ...act that f may contain noise in certain applications, makes solving the unconstrained problem (1.2) min_u μ‖u‖1 + (1/2)‖Au − f‖² more preferable than solving the constrained problem (1.1) (e.g., see [26, 27, 29, 37, 41, 49, 58, 62, 89]). Hereafter, we use ‖·‖ ≡ ‖·‖2 to denote the 2-norm. In section 2.1, we give a review of recent numerical methods for solving (1.2). Because (1.2) also allows the constraint Au = f to be relaxed, it is...

183 | A generalized uncertainty principle and sparse representation in pairs of bases
- Elad, Bruckstein
Citation Context: ...aller than n, and then recover ū from f by solving (1.1). It is proved that the recovery is perfect; i.e., the solution u_opt = ū for any ū whenever k, m, n, and A satisfy certain conditions (e.g., see [13, 32, 39, 44, 80, 100, 101]). While these conditions are computationally intractable to check, it was found in [15, 16] and other work that the types of matrices A allowing a high compression ratio (i.e., m ≪ n) include random ...

133 | Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA). Applied and Computational Harmonic Analysis
- Elad, Starck, et al.
- 2005
Citation Context: ...nsing, [97] for microarray processing, and [24, 25, 98, 99] for image decomposition and computer vision tasks. ℓ1-minimization also has applications in image inpainting and missing data recovery; see [42, 82, 101], for example. Also nonconvex quasi–ℓp-norm approaches for 0 ≤ p < 1 have been proposed by Chartrand [20, 21] and Chartrand and Yin [22]. Problem (1.1) can be transformed into a linear program and then ...

127 | Quantitative robust uncertainty principles and optimally sparse decompositions
- Candes, Romberg
Citation Context: ...s, m ≪ n), and Au = f has more than one solution. The basis pursuit problem (1.1) arises in the applications of compressed sensing (CS). A recent burst of research in CS was led by Candès and Romberg [12], Candès, Romberg, and Tao [14], Candès and Tao [16], Donoho [35], Donoho and Tanner [36], Tsaig and Donoho [88], and others [80, 86]. The fundamental principle of CS is that, through optimization, th...

109 | Weakly differentiable functions: Sobolev spaces and functions of bounded variation
- Ziemer
- 1989
Citation Context: ...27], [94], and other work, the iterative procedure (2.2) is adapted for solving the total variation regularization problem (2.6) min_u μTV(u) + H(u), where TV(u) denotes the total variation of u (see [102] for a definition of TV(u) and its properties). Specifically, the regularization term μ‖u‖1 in (2.2) is replaced by μTV(u), yielding (2.7) u^{k+1} ← argmin_u μTV(u) + (1/2δ^k)‖u − (u^k − δ^k∇H(u^k...

101 | An iterative regularization method for total variation-based image restoration
- Osher, Burger, et al.
Citation Context: ...one can show that the solution of (1.2) never equals that of (1.1) unless they both have the trivial solution 0. In this paper, we introduce a simple method based on Bregman iterative regularization [73], which we review in section 2.2, for finding a solution of problem (1.1) by solving only a small number of instances of the unconstrained problem (1.2). Our numerical algorithm, based on this iterati...

100 | Exact reconstruction of sparse signals via nonconvex minimization
- Chartrand
Citation Context: ...1-minimization also has applications in image inpainting and missing data recovery; see [42, 82, 101], for example. Also nonconvex quasi–ℓp-norm approaches for 0 ≤ p < 1 have been proposed by Chartrand [20, 21] and Chartrand and Yin [22]. Problem (1.1) can be transformed into a linear program and then solved by conventional linear programming solvers. However, such solvers are not tailored for the matrices ...

96 | Why simple shrinkage is still relevant for redundant representations
- Elad
- 2006
Citation Context: ...in [5] using an auxiliary variable and the idea from Chambolle’s projection method [17], Elad in [38] and Elad et al. [40] for sparse representation and other related problems, Daubechies, Defrise, and De Mol in [29] through an optimization transfer technique, Combettes and Pesquet [26] using operato...

92 | Deterministic Constructions of Compressed Sensing Matrices
- DeVore
- 2007
Citation Context: ...aller than n, and then recover ū from f by solving (1.1). It is proved that the recovery is perfect; i.e., the solution u_opt = ū for any ū whenever k, m, n, and A satisfy certain conditions (e.g., see [13, 32, 39, 44, 80, 100, 101]). While these conditions are computationally intractable to check, it was found in [15, 16] and other work that the types of matrices A allowing a high compression ratio (i.e., m ≪ n) include random ...

91 | An introduction to compressive sensing
- Candès
Citation Context: ...of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1 minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multi-sensor networks and distributive sensing...

91 | Geometric approach to error correcting codes and reconstruction of signals
- Rudelson, Vershynin
Citation Context: ...g (CS). A recent burst of research in CS was led by Candès and Romberg [12], Candès, Romberg, and Tao [14], Candès and Tao [16], Donoho [35], Donoho and Tanner [36], Tsaig and Donoho [88], and others [80, 86]. The fundamental principle of CS is that, through optimization, the sparsity of a signal can be exploited for recovering that signal from incomplete measurements of it. Let the vector ū ∈ R^n denote a...

90 | Total variation minimization and a class of binary mrf models
- Chambolle
- 2005
Citation Context: ...TV(u), yielding (2.7) u^{k+1} ← argmin_u μTV(u) + (1/2δ^k)‖u − (u^k − δ^k∇H(u^k))‖². Each subproblem (2.7) can be efficiently solved, for example, by one of the recent graph/network-based algorithms [18, 28, 53]. In [27] Darbon and Osher also studied an algorithm obtained by replacing μTV(u) in (2.7) by its Bregman distance (see section 2.2) and proved that if H(u) = 0.5‖Au − f‖², then {u^k} converges to ...

82 | Iteratively reweighted algorithms for compressive sensing
- Chartrand, Yin
- 2008
Citation Context: ...tions in image inpainting and missing data recovery; see [42, 82, 101], for example. Also nonconvex quasi–ℓp-norm approaches for 0 ≤ p < 1 have been proposed by Chartrand [20, 21] and Chartrand and Yin [22]. Problem (1.1) can be transformed into a linear program and then solved by conventional linear programming solvers. However, such solvers are not tailored for the matrices A that are large-scale and ...

76 | Recovery algorithm for vector-valued data with joint sparsity constraints
- Fornasier, Rauhut
- 2007
Citation Context: ...of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processing, and [24, 25, 98, 99] for image decom...

75 | A new compressive imaging camera architecture using optical-domain compression
- Takhar, Laska, et al.
- 2006
Citation Context: ...mbles of orthonormal transforms (e.g., matrices formed from random sets of rows of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83]...

70 | Random filters for compressive sampling and reconstruction
- Tropp, Wakin, et al.
- 2006
Citation Context: ...pplications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processing, and [24, 25, 98, 99] for image decomposition and computer vision tasks. ℓ1-minimization also has application...

68 | Accelerated projected gradient methods for linear inverse problems with sparsity constraints
- Daubechies, Fornasier, et al.
Citation Context: ...step” shrinkage-based algorithm [7], Blumensath and Davies [8] for solving a cardinality constrained least-squares problem, Chambolle et al. [19] for image denoising, Daubechies, Fornasier, and Loris [30] for a direct and accelerated projected gradient method, Elad, Matalon, and Zibulevsky in [41] for image denoising, Fadili and Starck [43] for sparse representation-based image deconvolution, Figueire...

67 | Iterative thresholding for sparse approximations
- Blumensath, Davies
- 2008
Citation Context: ...Bioucas-Dias [6] for wavelet-based image deconvolution using a Gaussian scale mixture model, Bioucas-Dias and Figueiredo for a recent “two-step” shrinkage-based algorithm [7], Blumensath and Davies [8] for solving a cardinality constrained least-squares problem, Chambolle et al. [19] for image denoising, Daubechies, Fornasier, and Loris [30] for a direct and accelerated projected gradient method, El...

65 | Image restoration with discrete constrained total variation (part I): Fast and exact optimization
- Darbon, Sigelle

65 | Decentralized Compression and Predistribution via Randomized Gossiping
- Rabbat, Haupt, et al.
- 2006
Citation Context: ...of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processing, and [24, 25, 98, 99] for image decom...

62 | Linearized Bregman iterations for compressed sensing
- Cai, Osher, et al.
Citation Context: ...ector multiplication and shrinkage operators that generates a sequence {u^k} that converges rapidly to an approximate solution of the basis pursuit problem (1.1). In fact, the numerical experiments in [34] indicate that this algorithm converges to a true solution if the parameter μ is large enough. Finally, preliminary experiments indicate that our algorithms are robust with respect to a certain amount...

60 | Compressive wireless sensing
- Bajwa, Haupt, et al.
- 2006
Citation Context: ...of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1 minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multi-sensor networks and distributive sensing...

58 | On sparse representation in pairs of bases
- Feuer, Nemirovski
Citation Context: ...aller than n, and then recover ū from f by solving (1.1). It is proved that the recovery is perfect; i.e., the solution u_opt = ū for any ū whenever k, m, n, and A satisfy certain conditions (e.g., see [13, 32, 39, 44, 80, 100, 101]). While these conditions are computationally intractable to check, it was found in [15, 16] and other work that the types of matrices A allowing a high compression ratio (i.e., m ≪ n) include random ...

57 | Coordinate and subspace optimization methods for linear least squares with non-quadratic regularization
- Elad, Matalon, et al.
Citation Context: ...act that f may contain noise in certain applications, makes solving the unconstrained problem (1.2) min_u μ‖u‖1 + (1/2)‖Au − f‖² more preferable than solving the constrained problem (1.1) (e.g., see [26, 27, 29, 37, 41, 49, 58, 62, 89]). Hereafter, we use ‖·‖ ≡ ‖·‖2 to denote the 2-norm. In section 2.1, we give a review of recent numerical methods for solving (1.2). Because (1.2) also allows the constraint Au = f to be relaxed, it is...

57 | A bound optimization approach to wavelet-based image deconvolution
- Figueiredo, Nowak
- 2005
Citation Context: ...and accelerated projected gradient method, Elad, Matalon, and Zibulevsky in [41] for image denoising, Fadili and Starck [43] for sparse representation-based image deconvolution, Figueiredo and Nowak [47] for image deconvolution, Figueiredo, Bioucas-Dias, and Nowak [45] for wavelet-based image denoising using majorization-minimization algorithms, and Reeves and Kingsbury [78] for image coding. While a...

57 | An architecture for compressive imaging
- Wakin, Laska, et al.
- 2006
Citation Context: ...mbles of orthonormal transforms (e.g., matrices formed from random sets of rows of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83]...

55 | Bayesian wavelet-based image deconvolution: a GEM algorithm exploiting a class of heavy-tailed priors
- Bioucas-Dias
Citation Context: ...rbon and Osher [27] through an implicit PDE approach, and others. In addition, related applications and algorithms can be found in Adeyemi and Davies [1] for image sparse representation, Bioucas-Dias [6] for wavelet-based image deconvolution using a Gaussian scale mixture model, Bioucas-Dias and Figueiredo for a recent “two-step” shrinkage-based algorithm [7], Blumensath and Davies [8] for solving a ...

53 | Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms
- Gribonval, Rauhut, et al.
- 2008
Citation Context: ...of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processing, and [24, 25, 98, 99] for image decom...

51 | Nonlinear inverse scale space methods for image restoration
- Burger, Osher, et al.
- 2005
Citation Context: ...e regularization was introduced by Osher, Burger, Goldfarb, Xu, and Yin [73] in the context of image processing; it was then extended to wavelet-based denoising [95], nonlinear inverse scale space in [10, 11], and compressed sensing in MR imaging [59]. The authors of [73] extend the Rudin–Osher–Fatemi [81] model (2.8) min_u μ∫|∇u| + (1/2)‖u − b‖², where u is an unknown image, b is typically an input noisy...

50 | Compressive imaging for video representation and coding
- Wakin, Laska, et al.
- 2006
Citation Context: ...mbles of orthonormal transforms (e.g., matrices formed from random sets of rows of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83]...

48 | Random sampling for analog-to-information conversion of wideband signals
- Laska, Kirolos, et al.
- 2006
Citation Context: ...pplications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processing, and [24, 25, 98, 99] for image decomposition and computer vision tasks. ℓ1-minimization also has application...

47 | Fixed-Point Continuation for ℓ1-minimization: Methodology and Convergence
- Hale, Yin, et al.
- 2008