## Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing (2008)

Venue: SIAM J. Imaging Sci.

Citations: 83 (15 self)

### Citations

3991 | Regression shrinkage and selection via the lasso
- Tibshirani
- 1996
Citation Context ...ed conjugate gradient method, for which the authors developed an efficient preconditioner. In the code SPGL1 [90], Van den Berg and Friedlander apply an iterative method for solving the LASSO problem =-=[85]-=-, which minimizes ‖Au − f‖ subject to ‖u‖1 ≤ σ, by using an increasing sequence of σ-values in their algorithm to accelerate the computation. In [71], Nesterov proposes an accelerated multistep gradie... |

3541 | Compressed sensing
- Donoho
Citation Context ...it problem (1.1) arises in the applications of compressed sensing (CS). A recent burst of research in CS was led by Candès and Romberg [12], Candès, Romberg, and Tao [14], Candès and Tao [16], Donoho =-=[35]-=-, Donoho and Tanner [36], Tsaig and Donoho [88], and others [80, 86]. The fundamental principle of CS is that, through optimization, the sparsity of a signal can be exploited for recovering that signa... |

2679 | Atomic decomposition by basis pursuit.
- Chen, Donoho, et al.
- 1999
Citation Context ... sensing, iterative regularization, Bregman distances. AMS subject classifications. 49, 90, 65. DOI. 10.1137/070703983. 1. Introduction. Let A ∈ R^{m×n}, f ∈ R^m, and u ∈ R^n. The basis pursuit problem =-=[23]-=- solves the constrained minimization problem (1.1) (Basis Pursuit) min_u {‖u‖1 : Au = f} to determine an ℓ1-minimal solution u_opt of the linear system Au = f, typically underdetermined; i.e., m < n (in ma... |

2559 | Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
- 2006
Citation Context ...than one solution. The basis pursuit problem (1.1) arises in the applications of compressed sensing (CS). A recent burst of research in CS was led by Candès and Romberg [12], Candès, Romberg, and Tao =-=[14]-=-, Candès and Tao [16], Donoho [35], Donoho and Tanner [36], Tsaig and Donoho [88], and others [80, 86]. The fundamental principle of CS is that, through optimization, the sparsity of a signal can be e... |

2232 | Nonlinear total variation based noise removal algorithms
- Rudin, Osher, et al.
- 1992
Citation Context ...ocessing; it was then extended to wavelet-based denoising [95], nonlinear inverse scale space in [10, 11], and compressed sensing in MR imaging [59]. The authors of [73] extend the Rudin–Osher–Fatemi =-=[81]-=- model (2.8) min_u μ∫|∇u| + (1/2)‖u − b‖², where u is an unknown image, b is typically an input noisy measurement of a clean image ū, and μ is a tuning parameter, into an iterative regularization mo... |

1477 | Near optimal signal recovery from random projections: Universal encoding strategies?,”
- Candès, Tao
- 2006
Citation Context ...e basis pursuit problem (1.1) arises in the applications of compressed sensing (CS). A recent burst of research in CS was led by Candès and Romberg [12], Candès, Romberg, and Tao [14], Candès and Tao =-=[16]-=-, Donoho [35], Donoho and Tanner [36], Tsaig and Donoho [88], and others [80, 86]. The fundamental principle of CS is that, through optimization, the sparsity of a signal can be exploited for recoveri... |

1365 | Decoding by linear programming
- Candès, Tao
- 2005
Citation Context ...ution uopt =ū for any ū whenever k, m, n, and A satisfy certain conditions (e.g., see [13, 32, 39, 44, 80, 100, 101]). While these conditions are computationally intractable to check, it was found in =-=[15, 16]-=- and other work that the types of matrices A allowing a high compression ratio (i.e., m ≪ n) include random matrices with independent and identically distributed (i.i.d.) entries and random ensembles ... |

731 | An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Daubechies, Defrise, et al.
- 2004
Citation Context ...act that f may contain noise in certain applications, makes solving the unconstrained problem (1.2) min_u μ‖u‖1 + (1/2)‖Au − f‖² more preferable than solving the constrained problem (1.1) (e.g., see =-=[26, 27, 29, 37, 41, 49, 58, 62, 89]-=-). Hereafter, we use ‖·‖ ≡ ‖·‖2 to denote the 2-norm. In section 2.1, we give a review of recent numerical methods for solving (1.2). Because (1.2) also allows the constraint Au = f to be relaxed, it is... |
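The unconstrained problem (1.2) quoted here is what the shrinkage-type methods in this list solve. A minimal sketch of plain iterative soft-thresholding, a generic method rather than any particular solver cited above; the step size 1/‖A‖² is one standard convergent choice:

```python
import numpy as np

def shrink(x, t):
    # Componentwise soft-thresholding: sign(x) * max(|x| - t, 0).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, f, mu, iters=3000):
    """Iterative shrinkage for min_u mu*||u||_1 + 0.5*||A u - f||^2."""
    delta = 1.0 / np.linalg.norm(A, 2) ** 2  # step size; guarantees convergence
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        # Gradient step on the quadratic term, then shrinkage on the l1 term.
        u = shrink(u - delta * A.T @ (A @ u - f), mu * delta)
    return u
```

At a minimizer the optimality condition |Aᵀ(Au − f)| ≤ μ holds componentwise, which gives a simple convergence check.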

617 | An algorithm for total variation minimization and applications
- Chambolle
Citation Context ...in [5] using an auxiliary variable and the idea from Chambolle’s projection method =-=[17]-=-, Elad in [38] and Elad et al. [40] for sparse representation and other related problems, Daubechies, Defrise, and De Mol in [29] through an optimization transfer technique, Combettes and Pesquet [26]... |

520 | Sparse MRI: The Application of Compressed Sensing for Rapid MR Imaging 2007 [Online]. Available: http://www.stanford.edu/~mlustig/SparseMRI.pdf
- Lustig, Donoho, et al.
Citation Context ...rices formed from random sets of rows of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, =-=[61, 68, 70, 69, 96]-=- for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processin... |

519 | Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
- Figueiredo, Nowak, et al.
- 2007
Citation Context ...eme, and conclude the paper in section 6. 2. Background. 2.1. Solving the unconstrained problem (1.2). Several recent algorithms can efficiently solve (1.2) with large-scale data. The authors of GPSR =-=[48]-=-, Figueiredo, Nowak, and Wright [49], reformulate (1.2) as a box-constrained quadratic program, to which they apply the gradient projection method with Barzilai–Borwein steps. The algorithm ℓ1_ls [64]... |

478 | Just relax: Convex programming methods for identifying sparse signals
- Tropp
- 2006
Citation Context ...g (CS). A recent burst of research in CS was led by Candès and Romberg [12], Candès, Romberg, and Tao [14], Candès and Tao [16], Donoho [35], Donoho and Tanner [36], Tsaig and Donoho [88], and others =-=[80, 86]-=-. The fundamental principle of CS is that, through optimization, the sparsity of a signal can be exploited for recovering that signal from incomplete measurements of it. Let the vector ū ∈ Rn denote a... |

477 | The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming
- Bregman
- 1967
Citation Context ...meter, into an iterative regularization model by using the Bregman distance (2.10) below based on the total variation functional: (2.9) J(u) = μTV(u) = μ∫|∇u|. Specifically, the Bregman distance =-=[9]-=- based on a convex functional J(·) between points u and v is defined as (2.10) D_J^p(u, v) = J(u) − J(v) − ⟨p, u − v⟩, where p ∈ ∂J(v) is some subgradient in the subdifferential of J at the point v. B... |
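The Bregman distance (2.10) quoted here is easy to compute directly for J(u) = ‖u‖1; a minimal sketch, choosing p = sign(v) as one particular subgradient:

```python
import numpy as np

def bregman_distance_l1(u, v):
    """Bregman distance D_J^p(u, v) = J(u) - J(v) - <p, u - v> for J = ||.||_1,
    using p = sign(v), one particular subgradient of J at v."""
    p = np.sign(v)  # any p with p_j in [-1, 1] where v_j == 0 would also do
    return np.abs(u).sum() - np.abs(v).sum() - p @ (u - v)
```

Convexity of J makes the distance nonnegative, though it is not symmetric and satisfies no triangle inequality, so it is a "distance" only in a loose sense.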

388 | Gradient methods for minimizing composite objective function,” Center for Operations Research and
- Nesterov
- 2007
Citation Context ... an iterative method for solving the LASSO problem [85], which minimizes ‖Au − f‖ subject to ‖u‖1 ≤ σ, by using an increasing sequence of σ-values in their algorithm to accelerate the computation. In =-=[71]-=-, Nesterov proposes an accelerated multistep gradient method with an error convergence rate O(1/k²). Under some conditions, the greedy approach StOMP [37] by Donoho, Tsaig, Drori, and Starck can als... |

344 | Majorization-minimization algorithms for wavelet-based image restoration
- Figueiredo, Bioucas-Dias, et al.
- 2007
Citation Context ...ulevsky in [41] for image denoising, Fadili and Starck [43] for sparse representation-based image deconvolution, Figueiredo and Nowak [47] for image deconvolution, Figueiredo, Bioucas-Dias, and Nowak =-=[45]-=- for wavelet-based image denoising using majorization-minimization algorithms, and Reeves and Kingsbury [78] for image coding. While all of these authors used different approaches, they all developed ... |

285 | Multiplier and gradient methods
- Hestenes
- 1969
Citation Context ...s paper, we found that the Bregman iterative method Algorithm 1 is equivalent to the well-known augmented Lagrangian method (also known as the method of multipliers), which was introduced by Hestenes =-=[60]-=- and Powell [75] and was later generalized by Rockafellar [79]. To solve the constrained optimization problem (3.22) min_u s(u), subject to c_i(u) = 0, i = 1, ..., m, the augmented Lagrangian method minimize... |
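The augmented Lagrangian / multiplier update described in this context can be illustrated on a small quadratic instance, where the inner minimization has a closed form; the objective, penalty parameter, and iteration count below are illustrative, not from the paper:

```python
import numpy as np

def method_of_multipliers(A, f, b, nu=50.0, iters=200):
    """Augmented Lagrangian (method of multipliers) for the toy problem
        min_u 0.5*||u - b||^2   subject to   A u = f.
    Each step minimizes L(u; lam) = 0.5*||u - b||^2 - lam.(Au - f)
    + (nu/2)*||Au - f||^2 exactly (the inner problem is quadratic),
    then updates the multiplier from the constraint residual."""
    m, n = A.shape
    lam = np.zeros(m)
    H = np.eye(n) + nu * A.T @ A  # Hessian of the augmented Lagrangian in u
    for _ in range(iters):
        u = np.linalg.solve(H, b + A.T @ lam + nu * A.T @ f)
        lam = lam - nu * (A @ u - f)  # multiplier update
    return u, lam
```

At convergence the residual Au − f vanishes and lam approaches the Lagrange multiplier of the equality constraint.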

270 | Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit
- Donoho, Tsaig, et al.
Citation Context ...act that f may contain noise in certain applications, makes solving the unconstrained problem (1.2) min_u μ‖u‖1 + (1/2)‖Au − f‖² more preferable than solving the constrained problem (1.1) (e.g., see =-=[26, 27, 29, 37, 41, 49, 58, 62, 89]-=-). Hereafter, we use ‖·‖ ≡ ‖·‖2 to denote the 2-norm. In section 2.1, we give a review of recent numerical methods for solving (1.2). Because (1.2) also allows the constraint Au = f to be relaxed, it is... |

265 | A method for non-linear constraints in minimization problems
- Powell
- 1967
Citation Context ...d that the Bregman iterative method Algorithm 1 is equivalent to the well-known augmented Lagrangian method (also known as the method of multipliers), which was introduced by Hestenes [60] and Powell =-=[75]-=- and was later generalized by Rockafellar [79]. To solve the constrained optimization problem (3.22) min_u s(u), subject to c_i(u) = 0, i = 1, ..., m, the augmented Lagrangian method minimizes the augmented ... |

255 | Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage
- Chambolle, DeVore, et al.
- 1998
Citation Context ...xture model, Bioucas-Dias and Figueiredo for a recent “two-step” shrinkage-based algorithm [7], Blumensath and Davies [8] for solving a cardinality constrained least-squares problem, Chambolle et al. =-=[19]-=- for image denoising, Daubechies, Fornasier, and Loris [30] for a direct and accelerated projected gradient method, Elad, Matalon, and Zibulevsky in [41] for image denoising, Fadili and Starck [43] fo... |

244 | A generalized uncertainty principle and sparse representation in pairs of RN bases
- Bruckstein, Elad
- 2002
Citation Context ...aller than n, and then recover ū from f by solving (1.1). It is proved that the recovery is perfect; i.e., the solution uopt =ū for any ū whenever k, m, n, and A satisfy certain conditions (e.g., see =-=[13, 32, 39, 44, 80, 100, 101]-=-). While these conditions are computationally intractable to check, it was found in [15, 16] and other work that the types of matrices A allowing a high compression ratio (i.e., m ≪ n) include random ... |

215 | Simultaneous cartoon and texture image inpainting using morphological component analysis
- Elad, Starck, et al.
- 2005
Citation Context ...nsing, [97] for microarray processing, and [24, 25, 98, 99] for image decomposition and computer vision tasks. ℓ1-minimization also has applications in image inpainting and missing data recovery; see =-=[42, 82, 101]-=-, for example. Also nonconvex quasi–ℓp-norm approaches for 0 ≤ p<1 have been proposed by Chartrand [20, 21] and Chartrand and Yin [22]. Problem (1.1) can be transformed into a linear program and then ... |

190 | An Iterative Regularization Method for Total Variation-Based Image Restoration,” Multiscale Modeling
- Osher, Burger, et al.
- 2005
Citation Context ... one can show that the solution of (1.2) never equals that of (1.1) unless they both have the trivial solution 0. In this paper, we introduce a simple method based on Bregman iterative regularization =-=[73]-=-, which we review in section 2.2, for finding a solution of problem (1.1) by solving only a small number of instances of the unconstrained problem (1.2). Our numerical algorithm, based on this iterati... |
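The iterative scheme this context refers to, solving a few instances of the unconstrained problem while adding the residual back into the data, can be sketched as follows. This is a simplified reading with a plain shrinkage loop standing in for the FPC solver, and the value of μ and the iteration counts are illustrative:

```python
import numpy as np

def shrink(x, t):
    # Componentwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def solve_l1_ls(A, f, mu, iters=2000):
    # Inner solver for min_u mu*||u||_1 + 0.5*||A u - f||^2
    # (plain iterative shrinkage here; the paper calls FPC instead).
    delta = 1.0 / np.linalg.norm(A, 2) ** 2
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        u = shrink(u - delta * A.T @ (A @ u - f), mu * delta)
    return u

def bregman_basis_pursuit(A, f, mu=1.0, outer=10):
    """Bregman iteration for min ||u||_1 s.t. A u = f: solve the
    unconstrained subproblem, then add the residual back into the
    data, f_k <- f_k + (f - A u)."""
    fk = f.copy()
    u = np.zeros(A.shape[1])
    for _ in range(outer):
        u = solve_l1_ls(A, fk, mu)
        fk = fk + (f - A @ u)
    return u
```

Each outer pass reuses the same unconstrained solver with modified data, which is why only a small number of instances of (1.2) need to be solved.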

185 | Iteratively reweighted algorithms for compressive sensing
- Chartrand, Yin
- 2008
Citation Context ...tions in image inpainting and missing data recovery; see [42, 82, 101], for example. Also nonconvex quasi–ℓp-norm approaches for 0 ≤ p<1 have been proposed by Chartrand [20, 21] and Chartrand and Yin =-=[22]-=-. Problem (1.1) can be transformed into a linear program and then solved by conventional linear programming solvers. However, such solvers are not tailored for the matrices A that are large-scale and ... |
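The linear-programming transformation mentioned here is the standard split u = u⁺ − u⁻ with u⁺, u⁻ ≥ 0, which turns ‖u‖1 into a linear objective. A minimal sketch using SciPy on illustrative random data (not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 10, 40
A = rng.standard_normal((m, n))
u_true = np.zeros(n)
u_true[[2, 11, 33]] = [1.0, -0.5, 2.0]
f = A @ u_true

# min sum(u_plus + u_minus)  s.t.  [A, -A] @ [u_plus; u_minus] = f,
# u_plus, u_minus >= 0; at an optimum ||u||_1 = sum(u_plus + u_minus).
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=f, bounds=(0, None))
u = res.x[:n] - res.x[n:]
```

Generic LP solvers work, but, as the context notes, they are not tailored to large-scale structured A; the shrinkage-based methods surveyed above scale better.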

184 | Weakly Differentiable Functions. Sobolev Spaces and Functions of Bounded Variation
- Ziemer
- 1989
Citation Context ...27], [94], and other work, the iterative procedure (2.2) is adapted for solving the total variation regularization problem (2.6) min_u μTV(u) + H(u), where TV(u) denotes the total variation of u (see =-=[102]-=- for a definition of TV(u) and its properties). Specifically, the regularization term μ‖u‖1 in (2.2) is replaced by μTV(u), yielding (2.7) u^{k+1} ← arg min_u μTV(u) + (1/(2δ^k))‖u − (u^k − δ^k ∇H(u^k... |

183 | Exact reconstruction of sparse signals via nonconvex minimization
- Chartrand
- 2007
Citation Context ...1-minimization also has applications in image inpainting and missing data recovery; see [42, 82, 101], for example. Also nonconvex quasi–ℓp-norm approaches for 0 ≤ p<1 have been proposed by Chartrand =-=[20, 21]-=- and Chartrand and Yin [22]. Problem (1.1) can be transformed into a linear program and then solved by conventional linear programming solvers. However, such solvers are not tailored for the matrices ... |

178 | Quantitative robust uncertainty principles and optimally sparse decompositions
- Candès, Romberg
Citation Context ...s, m ≪ n), and Au = f has more than one solution. The basis pursuit problem (1.1) arises in the applications of compressed sensing (CS). A recent burst of research in CS was led by Candès and Romberg =-=[12]-=-, Candès, Romberg, and Tao [14], Candès and Tao [16], Donoho [35], Donoho and Tanner [36], Tsaig and Donoho [88], and others [80, 86]. The fundamental principle of CS is that, through optimization, th... |

149 | Iterative thresholding for sparse approximations
- Blumensath, Davies
- 2008
Citation Context ..., Bioucas-Dias [6] for wavelet-based image deconvolution using a Gaussian scale mixture model, Bioucas-Dias and Figueiredo for a recent “two-step” shrinkage-based algorithm [7], Blumensath and Davies =-=[8]-=- for solving a cardinality constrained least-squares problem, Chambolle et al. [19] for image denoising, Daubechies, Fornasier, and Loris [30] for a direct and accelerated projected gradient method, El... |

138 | Deterministic constructions of compressed sensing matrices
- DeVore
- 2007
Citation Context ...aller than n, and then recover ū from f by solving (1.1). It is proved that the recovery is perfect; i.e., the solution uopt =ū for any ū whenever k, m, n, and A satisfy certain conditions (e.g., see =-=[13, 32, 39, 44, 80, 100, 101]-=-). While these conditions are computationally intractable to check, it was found in [15, 16] and other work that the types of matrices A allowing a high compression ratio (i.e., m ≪ n) include random ... |

135 | Distributed compressed sensing
- Baron, Wakin, et al.
- 2005
Citation Context ... of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1 minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, =-=[3, 4, 50, 54, 76, 93]-=- for multi-sensor networks and distributive sensing... |

120 | Geometric approach to error correcting codes and reconstruction of signals
- Rudelson, Vershynin
- 2005
Citation Context ...g (CS). A recent burst of research in CS was led by Candès and Romberg [12], Candès, Romberg, and Tao [14], Candès and Tao [16], Donoho [35], Donoho and Tanner [36], Tsaig and Donoho [88], and others =-=[80, 86]-=-. The fundamental principle of CS is that, through optimization, the sparsity of a signal can be exploited for recovering that signal from incomplete measurements of it. Let the vector ū ∈ Rn denote a... |

118 | Total variation minimization and a class of binary MRF models
- Chambolle
- 2005
Citation Context ...TV(u), yielding (2.7) u^{k+1} ← arg min_u μTV(u) + (1/(2δ^k))‖u − (u^k − δ^k ∇H(u^k))‖². Each subproblem (2.7) can be efficiently solved, for example, by one of the recent graph/network-based algorithms =-=[18, 28, 53]-=-. In [27] Darbon and Osher also studied an algorithm obtained by replacing μTV(u) in (2.7) by its Bregman distance (see section 2.2) and proved that if H(u) = 0.5‖Au − f‖², then {u^k} converges to ... |

115 | Why simple shrinkage is still relevant for redundant representations
- Elad
- 2006
Citation Context ...in [5] using an auxiliary variable and the idea from Chambolle’s projection method [17], Elad in =-=[38]-=- and Elad et al. [40] for sparse representation and other related problems, Daubechies, Defrise, and De Mol in [29] through an optimization transfer technique, Combettes and Pesquet [26] using operato... |

110 | Recovery algorithms for vector valued data with joint sparsity constraints
- Fornasier, Rauhut
- 2008
Citation Context ... of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, =-=[3, 4, 50, 54, 76, 93]-=- for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processing, and [24, 25, 98, 99] for image decom... |

105 | Compressive wireless sensing
- Bajwa, Haupt, et al.
- 2006
Citation Context ... of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1 minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, =-=[3, 4, 50, 54, 76, 93]-=- for multi-sensor networks and distributive sensing... |

104 | A new compressive imaging camera architecture using optical-domain compression
- Takhar, Laska, et al.
- 2006
Citation Context ...mbles of orthonormal transforms (e.g., matrices formed from random sets of rows of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in =-=[51, 84, 91, 92]-=- for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83]... |

95 | Random filters for compressive sampling and reconstruction
- Tropp, Wakin, et al.
- 2006
Citation Context ...pplications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, =-=[63, 65, 66, 77, 87]-=- for analog-to-information conversion, [83] for biosensing, [97] for microarray processing, and [24, 25, 98, 99] for image decomposition and computer vision tasks. ℓ1-minimization also has application... |

94 | Fast Linearized Bregman Iteration for Compressed Sensing
- Cai, Osher, et al.
- 2008
Citation Context ...ector multiplication and shrinkage operators that generates a sequence {uk} that converges rapidly to an approximate solution of the basis pursuit problem (1.1). In fact, the numerical experiments in =-=[34]-=- indicate that this algorithm converges to a true solution if the parameter μ is large enough. Finally, preliminary experiments indicate that our algorithms are robust with respect to a certain amount... |

93 | Theory and implementation of an analog-to-information converter using random demodulation
- Laska, Kirolos, et al.
- 2007
Citation Context ...pplications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, =-=[63, 65, 66, 77, 87]-=- for analog-to-information conversion, [83] for biosensing, [97] for microarray processing, and [24, 25, 98, 99] for image decomposition and computer vision tasks. ℓ1-minimization also has application... |

91 | Image restoration with discrete constrained total variation; part I: Fast and exact optimization
- Darbon, Sigelle
- 2006

85 | An architecture for compressive imaging
- Wakin, Laska, et al.
- 2006
Citation Context ...mbles of orthonormal transforms (e.g., matrices formed from random sets of rows of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in =-=[51, 84, 91, 92]-=- for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83]... |

83 | Atoms of all channels, unite! average case analysis of multichannel sparse recovery using greedy algorithms
- Gribonval, Rauhut, et al.
- 2008
Citation Context ... of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, =-=[3, 4, 50, 54, 76, 93]-=- for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processing, and [24, 25, 98, 99] for image decom... |

83 | Compressive imaging for video representation and coding
- Wakin, Laska, et al.
- 2006
Citation Context ...mbles of orthonormal transforms (e.g., matrices formed from random sets of rows of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in =-=[51, 84, 91, 92]-=- for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83]... |

81 | On sparse representation in pairs of bases
- Feuer, Nemirovski
- 2003
Citation Context ...aller than n, and then recover ū from f by solving (1.1). It is proved that the recovery is perfect; i.e., the solution uopt =ū for any ū whenever k, m, n, and A satisfy certain conditions (e.g., see =-=[13, 32, 39, 44, 80, 100, 101]-=-). While these conditions are computationally intractable to check, it was found in [15, 16] and other work that the types of matrices A allowing a high compression ratio (i.e., m ≪ n) include random ... |

80 | Accelerated projected gradient method for linear inverse problems with sparsity constraints
- Daubechies, Fornasier, et al.
- 2008
Citation Context ...step” shrinkage-based algorithm [7], Blumensath and Davies [8] for solving a cardinality constrained least-squares problem, Chambolle et al. [19] for image denoising, Daubechies, Fornasier, and Loris =-=[30]-=- for a direct and accelerated projected gradient method, Elad, Matalon, and Zibulevsky in [41] for image denoising, Fadili and Starck [43] for sparse representation-based image deconvolution, Figueire... |

79 | Decentralized compression and predistribution via randomized gossiping
- Rabbat, Haupt, et al.
- 2006
Citation Context ... of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, =-=[3, 4, 50, 54, 76, 93]-=- for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processing, and [24, 25, 98, 99] for image decom... |

69 | Total variation models for variable lighting face recognition",
- Chen
- 2006
Citation Context ...I and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processing, and =-=[24, 25, 98, 99]-=- for image decomposition and computer vision tasks. ℓ1-minimization also has applications in image inpainting and missing data recovery; see [42, 82, 101], for example. Also nonconvex quasi–ℓp-norm ap... |

69 | A bound optimization approach to wavelet-based image deconvolution
- Figueiredo, Nowak
- 2005
Citation Context ... and accelerated projected gradient method, Elad, Matalon, and Zibulevsky in [41] for image denoising, Fadili and Starck [43] for sparse representation-based image deconvolution, Figueiredo and Nowak =-=[47]-=- for image deconvolution, Figueiredo, Bioucas-Dias, and Nowak [45] for wavelet-based image denoising using majorization-minimization algorithms, and Reeves and Kingsbury [78] for image coding. While a... |

68 | Coordinate and subspace optimization methods for linear least squares with non-quadratic regularization
- Elad, Matalon, Zibulevsky
Citation Context ...act that f may contain noise in certain applications, makes solving the unconstrained problem (1.2) min_u μ‖u‖1 + (1/2)‖Au − f‖² more preferable than solving the constrained problem (1.1) (e.g., see =-=[26, 27, 29, 37, 41, 49, 58, 62, 89]-=-). Hereafter, we use ‖·‖ ≡ ‖·‖2 to denote the 2-norm. In section 2.1, we give a review of recent numerical methods for solving (1.2). Because (1.2) also allows the constraint Au = f to be relaxed, it is... |

66 | Bayesian wavelet-based image deconvolution: a GEM algorithm exploiting a class of heavy-tailed priors
- Bioucas-Dias
- 2006
Citation Context ...rbon and Osher [27] through an implicit PDE approach, and others. In addition, related applications and algorithms can be found in Adeyemi and Davies [1] for image sparse representation, Bioucas-Dias =-=[6]-=- for wavelet-based image deconvolution using a Gaussian scale mixture model, Bioucas-Dias and Figueiredo for a recent “two-step” shrinkage-based algorithm [7], Blumensath and Davies [8] for solving a ... |

64 | Neighborliness of randomly projected simplices in high dimensions
- Donoho, Tanner
- 2005
Citation Context ...in the applications of compressed sensing (CS). A recent burst of research in CS was led by Candès and Romberg [12], Candès, Romberg, and Tao [14], Candès and Tao [16], Donoho [35], Donoho and Tanner =-=[36]-=-, Tsaig and Donoho [88], and others [80, 86]. The fundamental principle of CS is that, through optimization, the sparsity of a signal can be exploited for recovering that signal from incomplete measur... |

63 | Nonlinear inverse scale space methods
- Burger, Gilboa, et al.
Citation Context ...e regularization was introduced by Osher, Burger, Goldfarb, Xu, and Yin [73] in the context of image processing; it was then extended to wavelet-based denoising [95], nonlinear inverse scale space in =-=[10, 11]-=-, and compressed sensing in MR imaging [59]. The authors of [73] extend the Rudin-Osher-Fatemi [81] model (2.8) min_u μ∫|∇u| + (1/2)‖u − b‖², where u is an unknown image, b is typically an input noisy... |

63 | Fixed-point continuation for ℓ1-minimization: Methodology and convergence
- Hale, Yin, et al.

61 | Distributed sparse random projections for refinable approximation
- Wang, Garofalakis, et al.
- 2007

60 | Random Sampling for Analog-toInformation Conversion of Wideband Signals
- Laska, Kirolos, et al.
- 2006
Citation Context ...pplications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, =-=[63, 65, 66, 77, 87]-=- for analog-to-information conversion, [83] for biosensing, [97] for microarray processing, and [24, 25, 98, 99] for image decomposition and computer vision tasks. ℓ1-minimization also has application... |

59 | Proximal thresholding algorithm for minimization over orthonormal bases
- Combettes, Pesquet
- 2007

58 | Analog-to-information conversion via random demodulation
- Kirolos, Laska, et al.
- 2006

56 | Sparsity and Incoherence
- Candès, Romberg
- 2007

52 | A dual approach to solving nonlinear programming problems by unconstrained optimization
- Rockafellar
- 1973
Citation Context ...1 is equivalent to the well-known augmented Lagrangian method (also known as the method of multipliers), which was introduced by Hestenes [60] and Powell [75] and was later generalized by Rockafellar =-=[79]-=-. To solve the constrained optimization problem (3.22) min_u s(u), subject to c_i(u) = 0, i = 1, ..., m, the augmented Lagrangian method minimizes the augmented Lagrangian function (3.23) L(u; λ^k, ν) := s(u) + ... |

50 | A ℓ1-unified variational framework for image restoration
- Bect, Blanc-Féraud, et al.
Citation Context ...eiredo and Nowak in [46, 72] under the expectation-minimization framework for wavelet-based deconvolution, De Mol and Defrise [31] for wavelet inversion, Bect, Blanc-Féraud, Aubert, and Chambolle in =-=[5]-=- using an auxiliary variable and the idea from Chambolle’s projection method [17], Elad in [38] and he with Matalon, Shtok, and Zibulevsky [40] for sparse representation and other related problems, Da... |

46 | Waveshrink with firm shrinkage
- Gao, Bruce
- 1997
Citation Context ...is interesting is that Bregman iteration gives (3.11) ũ^k_j = f̃_j if |f̃_j| > μ/(k−1); k(f̃_j − (μ/k) sign(f̃_j)) if μ/k ≤ |f̃_j| ≤ μ/(k−1); and 0 if |f̃_j| ≤ μ/k. So soft shrinkage becomes firm shrinkage =-=[52]-=- with thresholds τ^(k) = μ/k and τ^(k−1) = μ/(k−1). In [10, 11] the concept of nonlinear inverse scale space was introduced and analyzed, which is basically the limit of Bregman iteration as k and μ ... |
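Soft and firm shrinkage are both simple componentwise maps; a minimal sketch, with generic thresholds t1 < t2 playing the roles of the two τ values in this context:

```python
import numpy as np

def soft_shrink(x, t):
    # Soft thresholding: shrink every entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def firm_shrink(x, t1, t2):
    """Firm shrinkage (Gao-Bruce): zero below t1, identity above t2,
    linear interpolation in between (so large entries are not biased)."""
    mid = np.sign(x) * t2 * np.maximum(np.abs(x) - t1, 0.0) / (t2 - t1)
    return np.where(np.abs(x) > t2, x, mid)
```

Unlike soft shrinkage, which subtracts t from every surviving entry, firm shrinkage leaves entries above t2 untouched, which is the debiasing effect the context attributes to Bregman iteration.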

42 | A wide-angle view at iterated shrinkage algorithms
- Elad, Matalon, et al.
- 2007
Citation Context ...in [5] using an auxiliary variable and the idea from Chambolle’s projection method [17], Elad in [38] and Elad et al. =-=[40]-=- for sparse representation and other related problems, Daubechies, Defrise, and De Mol in [29] through an optimization transfer technique, Combettes and Pesquet [26] using operator-splitting, Hale, Yi... |

42 | Block compressed sensing of natural images
- Gan
- 2007

39 | On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- Luo, Tseng
- 1992
(Show Context)
Citation Context ...under certain conditions on H and δ^k. Under weaker conditions, they also established r-linear convergence of {u^k} based on previous work by Pang [74] and Luo and Tseng =-=[67]-=- on gradient projection methods. Furthermore, it was also proved in [55] that under mild conditions, the support and signs of u^k converge finitely; that is, there exists a finite number K such that {... |

38 |
A fixed-point continuation method for ℓ1-regularized minimization with applications to compressed sensing, TR07-07
- Hale, Yin, et al.
- 2007
(Show Context)
Citation Context ...oblem (1.1) by solving only a small number of instances of the unconstrained problem (1.2). Our numerical algorithm, based on this iterative method, calls the fast fixed-point continuation solver FPC =-=[55, 56]-=- of (1.2), which involves only matrix-vector multiplications (or fast linear transforms) and componentwise shrinkages (defined in (2.4)). Using a moderate value for the penalty parameter μ, we were abl... |

38 | k-t SPARSE: High frame rate dynamic MRI exploiting spatio-temporal sparsity
- Lustig, JM, et al.
- 2006
(Show Context)
Citation Context ...rices formed from random sets of rows of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, =-=[61, 68, 70, 69, 96]-=- for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processin... |

37 | A posteriori error bounds for the linearly-constrained variational inequality problem, School of Management, University of Texas at
- Pang
- 1985

36 | Sparse representation-based image deconvolution by iterative thresholding
- Fadili, Starck
- 2006
(Show Context)
Citation Context ...l. [19] for image denoising, Daubechies, Fornasier, and Loris [30] for a direct and accelerated projected gradient method, Elad, Matalon, and Zibulevsky in [41] for image denoising, Fadili and Starck =-=[43]-=- for sparse representation-based image deconvolution, Figueiredo and Nowak [47] for image deconvolution, Figueiredo, Bioucas-Dias, and Nowak [45] for wavelet-based image denoising using majorization-m... |

33 | Parametric maximum flow algorithms for fast total variation minimization,” Dept
- Goldfarb, Yin
- 2007
(Show Context)
Citation Context ...TV(u), yielding (2.7) u^{k+1} ← arg min_u μ TV(u) + (1/(2δ^k)) ‖u − (u^k − δ^k ∇H(u^k))‖². Each subproblem (2.7) can be efficiently solved, for example, by one of the recent graph/network-based algorithms =-=[18, 28, 53]-=-. In [27] Darbon and Osher also studied an algorithm obtained by replacing μ TV(u) in (2.7) by its Bregman distance (see section 2.2) and proved that if H(u) = 0.5‖Au − f‖², then {u^k} converges to ... |

28 |
MR image reconstruction by using the iterative refinement method and nonlinear inverse scale space methods, UCLA
- He, Chang, et al.
- 2006
(Show Context)
Citation Context ...troduced by Osher et al. [73] in the context of image processing; it was then extended to wavelet-based denoising [95], nonlinear inverse scale space in [10, 11], and compressed sensing in MR imaging =-=[59]-=-. The authors of [73] extend the Rudin–Osher–Fatemi [81] model (2.8) min_u μ ∫ |∇u| + ½‖u − b‖², where u is an unknown image, b is typically an input noisy measurement of a clean image ū, and μ is... |
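The Rudin–Osher–Fatemi model quoted above can be illustrated in one dimension. The sketch below is entirely my own toy setup, not the paper's algorithm: it minimizes a smoothed total variation plus the quadratic fidelity term, μ Σᵢ √((u[i+1]−u[i])² + eps) + ½‖u − b‖², by plain gradient descent; eps, the step size, and the synthetic signal are my choices.

```python
import numpy as np

# Toy 1D ROF-style denoising by gradient descent on a smoothed TV term.
def rof_1d(b, mu=0.5, eps=1e-2, step=0.02, iters=500):
    u = b.copy()
    for _ in range(iters):
        d = np.diff(u)
        w = d / np.sqrt(d**2 + eps)              # derivative of smoothed |.|
        grad_tv = np.concatenate(([0.0], w)) - np.concatenate((w, [0.0]))
        u = u - step * (mu * grad_tv + (u - b))  # descent step
    return u

def objective(u, b, mu=0.5, eps=1e-2):
    d = np.diff(u)
    return mu * np.sum(np.sqrt(d**2 + eps)) + 0.5 * np.sum((u - b)**2)

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])  # piecewise-constant signal
b = clean + 0.1 * rng.standard_normal(100)           # noisy measurement
u = rof_1d(b)   # objective(u, b) < objective(b, b)
```

The smoothing parameter eps keeps the TV gradient well defined; the step size 0.02 is below the inverse Lipschitz constant of the smoothed objective for these parameter choices, so the objective decreases monotonically.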

27 | Overcomplete image coding using iterative projection-based noise shaping
- Reeves, Kingsbury
(Show Context)
Citation Context ...ion, Figueiredo and Nowak [47] for image deconvolution, Figueiredo, Bioucas-Dias, and Nowak [45] for wavelet-based image denoising using majorization-minimization algorithms, and Reeves and Kingsbury =-=[78]-=- for image coding. While all of these authors used different approaches, they all developed or used algorithms based on the iterative scheme (2.2) u^{k+1} ← arg min_u μ‖u‖1 + (1/(2δ^k)) ‖u − (u^k − δ^k ∇H(u^k))‖²... |
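For the quadratic case H(u) = ½‖Au − f‖², scheme (2.2) reduces to a gradient step on H followed by componentwise soft shrinkage, i.e., the classical iterative shrinkage/thresholding iteration. The self-contained sketch below uses a toy matrix, sparsity pattern, and μ of my own choosing, not the paper's experiments.

```python
import numpy as np

def shrink(x, t):
    # componentwise soft shrinkage (soft thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def iterative_shrinkage(A, f, mu, iters=500):
    delta = 1.0 / np.linalg.norm(A, 2) ** 2   # fixed step delta^k <= 1/||A||^2
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient step on H, then shrink with threshold mu*delta
        u = shrink(u - delta * (A.T @ (A @ u - f)), mu * delta)
    return u

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # underdetermined system
u_true = np.zeros(100)
u_true[[5, 37, 80]] = [2.0, -1.5, 3.0]             # sparse ground truth
u = iterative_shrinkage(A, A @ u_true, mu=0.01)
```

With the step size bounded by 1/‖A‖², each iteration does not increase the objective μ‖u‖₁ + ½‖Au − f‖², which is the monotonicity property the convergence results quoted above build on.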

27 | A simple proof for recoverability of ℓ1-minimization: go over or under? – Rice CAAM Department
- Zhang
- 2005
(Show Context)

23 |
Iterative regularization and nonlinear inverse scale space applied to wavelet-based denoising
- Xu, Osher
(Show Context)
Citation Context ...ined problem (1.2) and provide some background on our Bregman iterative regularization scheme. The main Bregman iterative algorithm is described in section 3.1; its relationship to some previous work =-=[95]-=- is presented in section 3.2; and its convergence is analyzed in section 3.3. Numerical results are presented in section 4. Finally, we extend our results to more general classes of problems in sectio... |

22 |
Fast wavelet-based image deconvolution using the EM algorithm
- Nowak, Figueiredo
- 2001
(Show Context)
Citation Context ...le functions H(·) is an iterative procedure based on shrinkage (also called soft thresholding; see (2.4) below). This type of method was independently proposed and analyzed by Figueiredo and Nowak in =-=[46, 72]-=- under the expectation-maximization framework for wavelet-based deconvolution, De Mol and Defrise [31] for wavelet inversion, Bect et al. ... |

17 |
A note on wavelet-based inversion algorithms
- De Mol, Defrise
- 2002
(Show Context)
Citation Context ...elow). This type of method was independently proposed and analyzed by Figueiredo and Nowak in [46, 72] under the expectation-maximization framework for wavelet-based deconvolution, De Mol and Defrise =-=[31]-=- for wavelet inversion, Bect et al. in [5] using an auxiliary variable an... |

15 | Nonconvex compressed sensing and error correction
- Chartrand
- 2007
(Show Context)
Citation Context ...1-minimization also has applications in image inpainting and missing data recovery; see [42, 82, 101], for example. Also nonconvex quasi–ℓp-norm approaches for 0 ≤ p < 1 have been proposed by Chartrand =-=[20, 21]-=- and Chartrand and Yin [22]. Problem (1.1) can be transformed into a linear program and then solved by conventional linear programming solvers. However, such solvers are not tailored for the matrices ... |

13 |
Implementation models for analog-to-information conversion via random sampling, in
- Ragheb, Kirolos, et al.
(Show Context)

12 | Two-step algorithms for linear inverse problems with non-quadratic regularization
- Bioucas-Dias, Figueiredo
(Show Context)
Citation Context ...image sparse representation, Bioucas-Dias [6] for wavelet-based image deconvolution using a Gaussian scale mixture model, Bioucas-Dias and Figueiredo for a recent “two-step” shrinkage-based algorithm =-=[7]-=-, Blumensath and Davies [8] for solving a cardinality constrained least-squares problem, Chambolle et al. [19] for image denoising, Daubechies, Fornasier, and Loris [30] for a direct and accelerated pr... |

12 |
Fast discrete optimization for sparse approximations and deconvolutions
- Darbon, Osher
- 2007
(Show Context)

10 |
Background correction for cDNA microarray image using
- Yin, Chen, et al.
(Show Context)
Citation Context ...imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, =-=[97]-=- for microarray processing, and [24, 25, 98, 99] for image decomposition and computer vision tasks. ℓ1-minimization also has applications in image inpainting and missing data recovery; see [42, 82, 10... |

7 | The total variation regularized L 1 model for multiscale decomposition
- Yin, Goldfarb, et al.
- 2007
(Show Context)
Citation Context ...I and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processing, and =-=[24, 25, 98, 99]-=- for image decomposition and computer vision tasks. ℓ1-minimization also has applications in image inpainting and missing data recovery; see [42, 82, 101], for example. Also nonconvex quasi–ℓp-norm ap... |

6 | Sparse representations of images using overcomplete complex wavelets
- Adeyemi, Davies
- 2006
(Show Context)
Citation Context ... continuation technique in their code FPC [56], Darbon and Osher [27] through an implicit PDE approach, and others. In addition, related applications and algorithms can be found in Adeyemi and Davies =-=[1]-=- for image sparse representation, Bioucas-Dias [6] for wavelet-based image deconvolution using a Gaussian scale mixture model, Bioucas-Dias and Figueiredo for a recent “two-step” shrinkage-based algor... |

5 | A new coarse-to-fine framework for 3D brain MR image registration, in
- Chen, Huang, et al.
- 2005
(Show Context)
Citation Context ...I and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processing, and =-=[24, 25, 98, 99]-=- for image decomposition and computer vision tasks. ℓ1-minimization also has applications in image inpainting and missing data recovery; see [42, 82, 101], for example. Also nonconvex quasi–ℓp-norm ap... |

5 |
A numerical study of fixed-point continuation applied to compressed sensing
- Hale, Yin, et al.
- 2010
(Show Context)
Citation Context ...r study. Finally, to compare the Bregman iterative procedure based on the solver FPC with other recent ℓ1 algorithms such as StOMP [37], one can refer to the CPU times of FPC in the comparative study =-=[57]-=- and multiply these times by the average numbers of Bregman iterations. 5. Extensions. In this section we present extensions of our results in section 3 to more general convex functionals J(·) and H(·... |

5 | Pixel recovery via ℓ1 minimization in the wavelet domain
- Selesnick, Slyke, et al.
- 2004
(Show Context)
Citation Context ...nsing, [97] for microarray processing, and [24, 25, 98, 99] for image decomposition and computer vision tasks. ℓ1-minimization also has applications in image inpainting and missing data recovery; see =-=[42, 82, 101]-=-, for example. Also nonconvex quasi–ℓp-norm approaches for 0 ≤ p<1 have been proposed by Chartrand [20, 21] and Chartrand and Yin [22]. Problem (1.1) can be transformed into a linear program and then ... |

5 |
When Is Missing Data Recoverable?, CAAM
- Zhang
- 2006
(Show Context)

4 |
Sparse MR: The Application of Compressed Sensing For Rapid
- Lustig, Donoho, et al.
- 2007
(Show Context)
Citation Context ...rices formed from random sets of rows of the matrices corresponding to Fourier and cosine transforms). Recent applications of ℓ1-minimization can be found in [51, 84, 91, 92] for compressive imaging, =-=[61, 68, 70, 69, 96]-=- for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, [83] for biosensing, [97] for microarray processin... |

3 | Compressed sensing shape estimation of star-shaped objects in Fourier imaging
- Ye
(Show Context)

2 |
Iterative total variation methods for nonlinear inverse problems
- Bachmayr
- 2007
(Show Context)
Citation Context ...‖² ≤ ‖Au^k − f‖². Finally we have a result which typifies the effectiveness of Bregman iteration in the presence of noisy data. Our argument below follows that of =-=[2]-=-. Theorem 5.5. Let J(ũ) and ‖ũ‖ be finite and I − 2δAA⊤ be strictly positive definite. Then the generalized Bregman distance D̃^{p^k}_J(ũ, u^k) = J(ũ) − J(u^k) − ⟨ũ − u^k, p^k⟩ + (1/(2δ))‖ũ − u^k‖² di... |

2 |
A Method for Large-Scale ℓ1-Regularized
- Kim, Koh, et al.
- 2007
(Show Context)

2 |
Compressed Sensing DNA Microarrays, Rice ECE Department
- Sheikh, Milenkovic, et al.
- 2007
(Show Context)
Citation Context ... 92] for compressive imaging, [61, 68, 70, 69, 96] for MRI and CT, [3, 4, 50, 54, 76, 93] for multisensor networks and distributive sensing, [63, 65, 66, 77, 87] for analog-to-information conversion, =-=[83]-=- for biosensing, [97] for microarray processing, and [24, 25, 98, 99] for image decomposition and computer vision tasks. ℓ1-minimization also has applications in image inpainting and missing data reco... |

2 |
A fast fixed-point algorithm for convex total variation regularization
- Wang, Yin, et al.
- 2007
(Show Context)
Citation Context ...is called path following or continuation. While our algorithm does not depend on using a specific code, we chose to use FPC [56], one of the fastest codes, to solve each subproblem in (2.2). In [27], =-=[94]-=-, and other work, the iterative procedure (2.2) is adapted for solving the total variation regularization problem (2.6) min_u μ TV(u) + H(u), where TV(u) denotes the total variation of u (see [102] for... |

2 |
A comparison of three total variation-based texture extraction models
- Yin, Goldfarb, et al.
(Show Context)

1 |
ℓ1_ℓs: A Simple MATLAB Solver for ℓ1-Regularized Least Squares Problems, http://www.stanford.edu/~boyd/l1_ls
- Koh, Kim, et al.
- 2007
(Show Context)
Citation Context ...[48], Figueiredo, Nowak, and Wright [49], reformulate (1.2) as a box-constrained quadratic program, to which they apply the gradient projection method with Barzilai–Borwein steps. The algorithm ℓ1 ℓs =-=[64]-=- by Kim et al. [62] was developed for an ℓ1-regularization problem equivalent to (1.2). The authors apply an interior-point method to a log-barrier formulation of (1.2). The main step in each interior... |

1 |
SPGL1: A solver for sparse reconstruction, http://www.cs.ubc.ca/labs/scl/spgl1 (2007)
- van den Berg, Friedlander
(Show Context)
Citation Context ...which involves solving a system of linear equations, is accelerated by using a preconditioned conjugate gradient method, for which the authors developed an efficient preconditioner. In the code SPGL1 =-=[90]-=-, Van den Berg and Friedlander apply an iterative method for solving the LASSO problem [85], which minimizes ‖Au − f‖ subject to ‖u‖1 ≤ σ, by using an increasing sequence of σ-values in their algorith... |