## Alternating direction algorithms for ℓ1-problems in compressive sensing (2009)

Citations: 23 (2 self)

### BibTeX

@TECHREPORT{Yang09alternatingdirection,
  author      = {Junfeng Yang and Yin Zhang},
  title       = {Alternating direction algorithms for ℓ1-problems in compressive sensing},
  institution = {},
  year        = {2009}
}

### Abstract

In this paper, we propose and study the use of alternating direction algorithms for several ℓ1-norm minimization problems arising from sparse solution recovery in compressive sensing, including the basis pursuit problem and the basis-pursuit denoising problems in both unconstrained and constrained forms, among others. We present and investigate two classes of algorithms derived from either the primal or the dual forms of the ℓ1-problems. The construction of the algorithms consists of two main steps: (1) reformulate an ℓ1-problem into one with a partially separable objective function by adding new variables and constraints; and (2) apply an exact or inexact alternating direction method to the resulting problem. The derived alternating direction algorithms can be regarded as first-order primal-dual algorithms because both primal and dual variables are updated at every iteration. Convergence properties of these algorithms are established, or restated where they already exist. Extensive numerical results, in comparison with several state-of-the-art algorithms, demonstrate that the proposed algorithms are efficient, stable and robust. Moreover, we present numerical results to emphasize two practically important but perhaps overlooked points: first, algorithm speed should always be evaluated relative to appropriate solution accuracy; second, whenever erroneous measurements may exist, the ℓ1-norm fidelity should be the fidelity of choice in compressive sensing.

Key words: sparse solution recovery, compressive sensing, ℓ1-minimization, primal, dual, alternating direction method
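
The two-step construction described above can be illustrated on the unconstrained ℓ1-regularized least-squares model min_x ‖x‖₁ + (1/2μ)‖Ax − b‖². The NumPy sketch below is ours, not the paper's YALL1 code: the splitting x = z, the scaled dual variable u, and all parameter values and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

def shrink(v, t):
    # Componentwise soft-thresholding: the prox of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def adm_l1(A, b, mu=1e-3, beta=10.0, iters=1000):
    """Alternating direction sketch for
        min_x ||x||_1 + 1/(2*mu) * ||A x - b||^2
    via the splitting z = x:
        min_{x,z} ||z||_1 + 1/(2*mu) * ||A x - b||^2  s.t.  x - z = 0.
    x-step: a regularized least-squares solve (matrix factored once);
    z-step: shrinkage; then a multiplier step on the scaled dual u."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A / mu + beta * np.eye(n)   # x-subproblem matrix, constant
    Atb = A.T @ b / mu
    C = np.linalg.cholesky(M)
    for _ in range(iters):
        rhs = Atb + beta * (z - u)
        x = np.linalg.solve(C.T, np.linalg.solve(C, rhs))
        z = shrink(x + u, 1.0 / beta)     # prox of (1/beta) * ||.||_1
        u = u + (x - z)                   # scaled multiplier update
    return z
```

In this scaled-dual form the shrinkage threshold is 1/β and the multiplier step corresponds to a unit steplength; on a small random instance with an exactly sparse signal, the iterate typically recovers the signal to within a few percent.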

### Citations

1865 | Compressed sensing
- Donoho
- 2006
Citation Context: ...e A ∈ Cm×n (m < n) is an encoding matrix. The original signal x̄ is then reconstructed from the underdetermined linear system Ax = b via certain reconstruction technique. Basic CS theory presented in [7, 9, 15] states that it is extremely probable to reconstruct x̄ accurately or even exactly from b provided that x̄ is sufficiently sparse (or nearly sparse) relative to the number of measurements, and the enc...

1778 | Atomic decomposition by basis pursuit
- Chen, Donoho, et al.
- 1998
Citation Context: ...measurements (see e.g., [1]). Unfortunately, this ℓ0-problem is combinatorial and generally computationally intractable. A fundamental decoding model in CS is the so-called basis pursuit (BP) problem [12]: min_{x∈Cn} {‖x‖1 : Ax = b}. (1.3) Minimizing the ℓ1-norm in (1.3) plays a central role in promoting solution sparsity. In fact, problem (1.3) shares common solutions with (1.2) under some favorable con...

1509 | Nonlinear total variation based noise removal algorithms
- Rudin, Osher, et al.
- 1992
Citation Context: ...ike regularizations such as nuclear-norm (sum of singular values) regularization in matrix rank minimization like the matrix completion problem [40, 8, 10], or the total variation (TV) regularization [42] widely used in image processing. While the nuclear-norm is just an extension of ℓ1-norm to the matrix case, the TV regularization can be converted to ℓ1-regularization after introducing a splitting v...

1401 | Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
Citation Context: ...e A ∈ Cm×n (m < n) is an encoding matrix. The original signal x̄ is then reconstructed from the underdetermined linear system Ax = b via certain reconstruction technique. Basic CS theory presented in [7, 9, 15] states that it is extremely probable to reconstruct x̄ accurately or even exactly from b provided that x̄ is sufficiently sparse (or nearly sparse) relative to the number of measurements, and the enc...

803 | Stable signal recovery from incomplete and inaccurate measurements
- Candès, Romberg, et al.
Citation Context: ...e A ∈ Cm×n (m < n) is an encoding matrix. The original signal x̄ is then reconstructed from the underdetermined linear system Ax = b via certain reconstruction technique. Basic CS theory presented in [7, 9, 15] states that it is extremely probable to reconstruct x̄ accurately or even exactly from b provided that x̄ is sufficiently sparse (or nearly sparse) relative to the number of measurements, and the enc...

460 | An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Daubechies, Defrise, et al.
Citation Context: ...(1.5) is the iterative shrinkage/thresholding (IST) method, which was first proposed in [21, 37, 14] for wavelet-based image deconvolution and then independently discovered and analyzed by many others [19, 43, 44, 13]. In [29, 30], Hale, Yin and Zhang derived the IST algorithm from an operator splitting framework and combined it with a continuation strategy. The resulting algorithm, which is named fixed-point cont...
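
The IST iteration described in this context is simple enough to state in a few lines: a gradient step on the smooth fidelity term followed by componentwise soft-thresholding. A minimal sketch for min_x ‖x‖₁ + (1/2μ)‖Ax − b‖²; the parameter values are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

def ist(A, b, mu=1e-2, iters=3000):
    """Iterative shrinkage/thresholding (IST) sketch for
        min_x ||x||_1 + 1/(2*mu) * ||A x - b||^2.
    Each sweep: a forward gradient step on the quadratic term,
    then soft-thresholding (the prox of tau * ||.||_1).
    Convergence requires the step tau <= mu / ||A||_2^2."""
    x = np.zeros(A.shape[1])
    tau = 0.99 * mu / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        v = x - (tau / mu) * (A.T @ (A @ x - b))  # gradient step
        x = np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)  # shrinkage
    return x
```

Each iteration costs two matrix-vector products, which is what makes IST attractive when A is only available as an operator.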

410 | A fast iterative shrinkage-thresholding algorithm for linear inverse problems
- Beck, Teboulle
Citation Context: ... steplength [2]. A similar sparse reconstruction algorithm called SpaRSA was also studied by Wright, Nowak and Figueiredo in [51]. Recently, Beck and Teboulle proposed a fast IST algorithm (FISTA) in [3], which attains the same optimal convergence in function values as Nesterov's multi-step gradient method [36] for minimizing composite functions. Lately, Yun and Toh also studied a block coordinate gr...
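
The acceleration referred to here keeps the IST shrinkage step but evaluates it at an extrapolated point driven by a momentum sequence t_k. A hedged sketch on the same illustrative model min_x ‖x‖₁ + (1/2μ)‖Ax − b‖²; parameter values are assumptions, and this is not code from the cited paper.

```python
import numpy as np

def fista(A, b, mu=1e-2, iters=500):
    """FISTA-style sketch for min_x ||x||_1 + 1/(2*mu)*||A x - b||^2:
    an IST step taken at the extrapolated point y, followed by the
    t_k momentum update of Beck and Teboulle."""
    n = A.shape[1]
    x, y, t = np.zeros(n), np.zeros(n), 1.0
    tau = 0.99 * mu / np.linalg.norm(A, 2) ** 2  # step <= 1/L
    for _ in range(iters):
        v = y - (tau / mu) * (A.T @ (A @ y - b))
        x_new = np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # extrapolation
        x, t = x_new, t_new
    return x
```

The per-iteration cost is the same as plain IST; only the extrapolation point changes, which yields the O(1/k²) function-value rate mentioned in the excerpt.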

362 | For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution. URL: http://stat.stanford.edu/~donoho/Reports/2004
- Donoho
Citation Context: ...(1.3) Minimizing the ℓ1-norm in (1.3) plays a central role in promoting solution sparsity. In fact, problem (1.3) shares common solutions with (1.2) under some favorable conditions (see, for example, [16]). When b contains noise, or when x̄ is not exactly sparse but only compressible, as are the cases in most practical applications, certain relaxation to the equality constraint in (1.3) is desirable. ...

359 | Regression shrinkage and selection via the lasso
- Tibshirani
- 1996
Citation Context: ... in [5, 6, 54]. In [23], Friedlander and van den Berg proposed a spectral projection gradient method (SPGL1), where (1.4) is solved by a root-finding framework applied to a sequence of LASSO problems [45]. Moreover, based on a smoothing technique studied in [35], a fast and accurate first-order algorithm called NESTA was proposed in [4] for solving (1.4). In Section 4, we present extensive comparison ...

348 | Exact matrix completion via convex optimization. Submitted for publication
- Candès, Recht
- 2008
Citation Context: ...image and data analysis, particularly those involving ℓ1-like regularizations such as nuclear-norm (sum of singular values) regularization in matrix rank minimization like the matrix completion problem [40, 8, 10], or the total variation (TV) regularization [42] widely used in image processing. While the nuclear-norm is just an extension of ℓ1-norm to the matrix case, the TV regularization can be converted to ...

324 | Signal recovery from random measurements via orthogonal matching pursuit
- Tropp, Gilbert
- 2007
Citation Context: ... These nonnegative counterparts will be briefly considered later. Finally, we mention that aside from ℓ1-related decoders, there exist alternative decoding techniques such as greedy algorithms (e.g., [46]) which, however, are not a subject of concern in this paper. 1.2. Some existing methods. In the last few years, quite a number of...

315 | Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
- Figueiredo, Nowak, et al.
- 2007
Citation Context: ... take advantage of such a special feature lead to better performance and are highly desirable. One of the earliest first-order methods for solving (1.5) is the gradient projection method suggested in [22], where the authors reformulated (1.5) as a box-constrained quadratic program and implemented a gradient projection method with line search. To date, the most widely studied first-order method for sol...

274 | Smooth minimization of non-smooth functions
- Nesterov
- 2005
Citation Context: ...osed a spectral projection gradient method (SPGL1), where (1.4) is solved by a root-finding framework applied to a sequence of LASSO problems [45]. Moreover, based on a smoothing technique studied in [35], a fast and accurate first-order algorithm called NESTA was proposed in [4] for solving (1.4). In Section 4, we present extensive comparison results with several state-of-the-art algorithms including...

252 | An EM algorithm for wavelet-based image restoration
- Figueiredo, Nowak
- 2003
Citation Context: ...d a gradient projection method with line search. To date, the most widely studied first-order method for solving (1.5) is the iterative shrinkage/thresholding (IST) method, which was first proposed in [21, 37, 14] for wavelet-based image deconvolution and then independently discovered and analyzed by many others [19, 43, 44, 13]. In [29, 30], Hale, Yin and Zhang derived the IST algorithm from an operator split...

240 | Numerical methods for nonlinear variational problems
- Glowinski
- 1984
Citation Context: ... minimization [18, 31]. In (2.4), a steplength γ > 0 is attached to the update of λ. Under certain technical assumptions, convergence of ADM with a steplength γ ∈ (0, (√5 + 1)/2) was established in [26, 27] in the context of variational inequality. The shrinkage in the permitted range from (0, 2) in the augmented Lagrangian method to (0, (√5 + 1)/2) in ADM is related to relaxing the exact minimization...

203 | Gradient methods for minimizing composite objective function
- Nesterov
Citation Context: ...nd Figueiredo in [51]. Recently, Beck and Teboulle proposed a fast IST algorithm (FISTA) in [3], which attains the same optimal convergence in function values as Nesterov's multi-step gradient method [36] for minimizing composite functions. Lately, Yun and Toh also studied a block coordinate gradient descent (CGD) method in [56] for solving (1.5). There exist also algorithms for solving constrained ℓ1...

188 | On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- Eckstein, Bertsekas
- 1992
Citation Context: ...equently, ADM was studied extensively in optimization and variational analysis. In [27], ADM is interpreted as the Douglas-Rachford splitting method [17] applied to a dual problem. The equivalence between ADM and a proximal point method is shown in [18]. ADM is also studied in convex programming [24] and variational inequalities [47, 32]. Moreover, ADM has been extended to allow inexact subproblem ...

187 | Multipliers and gradient methods
- Hestenes
- 1969
Citation Context: ...n of this problem is given by

    L_A(x, y, λ) = f(x) + g(y) − λ^T(Ax + By − b) + (β/2)‖Ax + By − b‖²,   (2.2)

where λ ∈ R^p is the Lagrangian multiplier and β > 0 is a penalty parameter. The classic augmented Lagrangian method [33, 39] iterates as follows: given λ^k ∈ R^p,

    (x^{k+1}, y^{k+1}) ← arg min_{x,y} L_A(x, y, λ^k),
    λ^{k+1} ← λ^k − γβ(Ax^{k+1} + By^{k+1} − b),   (2.3)

where γ ∈ (0, 2) guarantees convergence, as long as t...
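
The classic augmented Lagrangian iteration (2.3) quoted above can be checked on a toy separable problem where the joint (x, y) subproblem has a closed-form solution. The problem below (min ½‖x‖² + ½‖y‖² subject to x + y = c) and all parameter values are our illustrative choices.

```python
import numpy as np

def augmented_lagrangian_demo(c, beta=1.0, gamma=1.0, iters=50):
    """Sketch of the classic augmented Lagrangian iteration (2.3) on
        min 0.5*||x||^2 + 0.5*||y||^2  s.t.  x + y = c.
    The joint (x, y) subproblem is quadratic, so it is minimized
    exactly in closed form; the multiplier step uses gamma in (0, 2)."""
    lam = np.zeros_like(c)
    for _ in range(iters):
        # Exact joint minimizer of L_A(x, y, lam): symmetry forces x = y,
        # giving x = (lam + beta*c) / (1 + 2*beta).
        x = (lam + beta * c) / (1.0 + 2.0 * beta)
        y = x.copy()
        lam = lam - gamma * beta * (x + y - c)  # multiplier update
    return x, y, lam
```

The iterates converge to the KKT point x = y = λ = c/2, which can be verified directly from the optimality conditions.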

185 | Sparse reconstruction by separable approximation
- Wright, Nowak, et al.
- 2009
Citation Context: ...), is also accelerated via a non-monotone line search with Barzilai-Borwein steplength [2]. A similar sparse reconstruction algorithm called SpaRSA was also studied by Wright, Nowak and Figueiredo in [51]. Recently, Beck and Teboulle proposed a fast IST algorithm (FISTA) in [3], which atta...

184 | Two point step size gradient methods
- Barzilai, Borwein
- 1988
Citation Context: ...d combined it with a continuation strategy. The resulting algorithm, which is named fixed-point continuation (FPC), is also accelerated via a non-monotone line search with Barzilai-Borwein steplength [2]. A similar sparse reconstruction algorithm called SpaRSA was also studied by Wright, Nowak and Figueiredo in [51]. Recently, Beck and Teboulle proposed a fast IST algorithm (FISTA) in [3], which atta...
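
The Barzilai-Borwein steplength mentioned in this context chooses α_k = (sᵀs)/(sᵀy) from the last step s = x_k − x_{k−1} and gradient change y = g_k − g_{k−1}. A hedged sketch of plain gradient descent with this (first) BB rule; the fallback step `alpha0` is an illustrative assumption.

```python
import numpy as np

def bb_gradient_descent(grad, x0, iters=100, alpha0=1e-2):
    """Gradient descent with the (first) Barzilai-Borwein steplength
        alpha_k = (s^T s) / (s^T y),
    where s = x_k - x_{k-1} and y = grad_k - grad_{k-1}.
    alpha0 is an assumed fallback for the first iteration and for
    degenerate denominators."""
    x_prev, g_prev = x0, grad(x0)
    x = x0 - alpha0 * g_prev          # bootstrap step
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        denom = s @ y
        alpha = (s @ s) / denom if abs(denom) > 1e-12 else alpha0
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x
```

On strongly convex quadratics the BB step is known to converge even though it is non-monotone, which is why FPC pairs it with a non-monotone line search.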

183 | A method for nonlinear constraints in minimization problems
- Powell
- 1969
Citation Context: ...n of this problem is given by

    L_A(x, y, λ) = f(x) + g(y) − λ^T(Ax + By − b) + (β/2)‖Ax + By − b‖²,   (2.2)

where λ ∈ R^p is the Lagrangian multiplier and β > 0 is a penalty parameter. The classic augmented Lagrangian method [33, 39] iterates as follows: given λ^k ∈ R^p,

    (x^{k+1}, y^{k+1}) ← arg min_{x,y} L_A(x, y, λ^k),
    λ^{k+1} ← λ^k − γβ(Ax^{k+1} + By^{k+1} − b),   (2.3)

where γ ∈ (0, 2) guarantees convergence, as long as t...

167 | Probing the Pareto frontier for basis pursuit solutions
- van den Berg, Friedlander
Citation Context: ...) and (1.4). The Bregman iteration [38] was applied to the basis pursuit problem in [53]. In the same paper, a linearized Bregman method was also suggested and analyzed subsequently in [5, 6, 54]. In [23], Friedlander and van den Berg proposed a spectral projection gradient method (SPGL1), where (1.4) is solved by a root-finding framework applied to a sequence of LASSO problems [45]. Moreover, based o...

144 | The power of convex relaxation: Near-optimal matrix completion
- Candès, Tao
- 2010
Citation Context: ...image and data analysis, particularly those involving ℓ1-like regularizations such as nuclear-norm (sum of singular values) regularization in matrix rank minimization like the matrix completion problem [40, 8, 10], or the total variation (TV) regularization [42] widely used in image processing. While the nuclear-norm is just an extension of ℓ1-norm to the matrix case, the TV regularization can be converted to ...

133 | The split Bregman method for L1-regularized problems
- Goldstein, Osher
- 2009
Citation Context: ...recently when a burst of works applying ADM techniques appeared in 2009, including our ADM-based ℓ1-solver package YALL1 ([62], published online in April 2009) and a number of ADM-related papers (see [22, 57, 31, 42, 1, 2], for example). The rest of the paper presents the derivation and performance of the proposed ADM algorithms for solving the ℓ1-models (1.3)-(1.6) and their nonnegative counterparts, many of whic...

106 | A dual algorithm for the solution of nonlinear variational problems via element approximations
- Gabay, Mercier
- 1976
Citation Context: ...

    x^{k+1} ← arg min_x L_A(x, y^k, λ^k),
    y^{k+1} ← arg min_y L_A(x^{k+1}, y, λ^k),
    λ^{k+1} ← λ^k − γβ(Ax^{k+1} + By^{k+1} − b).   (2.4)

The basic idea of ADM goes back to the work of Glowinski and Marocco [28] and Gabay and Mercier [25]. Let θ1(·) and θ2(·) be convex functionals, and A be a continuous linear operator. The authors of [25] considered minimizing an energy function of the form min_u θ1(u) + θ2(Au). By introducing an aux...
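
The alternating direction iteration (2.4) quoted above differs from the classic augmented Lagrangian method only in minimizing L_A one block at a time. A toy sketch on the separable problem min ½‖x‖² + ½‖y‖² subject to x + y = c; the problem and parameter values are our illustrative choices, and the γ range is the cited convergence result.

```python
import numpy as np

def adm_demo(c, beta=1.0, gamma=1.0, iters=60):
    """Sketch of the alternating direction iteration (2.4) on the toy
    separable problem
        min 0.5*||x||^2 + 0.5*||y||^2  s.t.  x + y = c.
    Each subproblem minimizes L_A over one block only, with the
    other block held fixed."""
    x, y, lam = np.zeros_like(c), np.zeros_like(c), np.zeros_like(c)
    for _ in range(iters):
        # x-step: minimize L_A(x, y, lam) over x alone.
        x = (lam + beta * (c - y)) / (1.0 + beta)
        # y-step: minimize L_A(x, y, lam) over y, using the new x.
        y = (lam + beta * (c - x)) / (1.0 + beta)
        # Multiplier step; convergence is known for
        # gamma in (0, (sqrt(5)+1)/2).
        lam = lam - gamma * beta * (x + y - c)
    return x, y, lam
```

The iterates converge to the same KKT point x = y = λ = c/2 as exact joint minimization, at the cost of only single-block solves per sweep.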

101 | An iterative regularization method for total variation-based image restoration
- Osher, Burger, et al.
Citation Context: ...and Toh also studied a block coordinate gradient descent (CGD) method in [56] for solving (1.5). There exist also algorithms for solving constrained ℓ1-problems (1.3) and (1.4). The Bregman iteration [38] was applied to the basis pursuit problem in [53]. In the same paper, a linearized Bregman method was also suggested and analyzed subsequently in [5, 6, 54]. In [23], Friedlander and van den Berg prop...

96 | Why simple shrinkage is still relevant for redundant representations
- Elad
- 2006
Citation Context: ...(1.5) is the iterative shrinkage/thresholding (IST) method, which was first proposed in [21, 37, 14] for wavelet-based image deconvolution and then independently discovered and analyzed by many others [19, 43, 44, 13]. In [29, 30], Hale, Yin and Zhang derived the IST algorithm from an operator splitting framework and combined it with a continuation strategy. The resulting algorithm, which is named fixed-point cont...

82 | NESTA: a fast and accurate first-order method for sparse recovery
- Becker, Bobin, et al.
- 2009
Citation Context: ...a root-finding framework applied to a sequence of LASSO problems [45]. Moreover, based on a smoothing technique studied in [35], a fast and accurate first-order algorithm called NESTA was proposed in [4] for solving (1.4). In Section 4, we present extensive comparison results with several state-of-the-art algorithms including FPC, SpaRSA, FISTA and CGD for solving (1.5), and SPGL1 and NESTA for solvi...

78 | Sur l'approximation par éléments finis d'ordre un, et la résolution par pénalisation–dualité d'une classe de problèmes de Dirichlet non linéaires
- Glowinski, Marroco
- 1975
Citation Context: ...ows

    x^{k+1} ← arg min_x L_A(x, y^k, λ^k),
    y^{k+1} ← arg min_y L_A(x^{k+1}, y, λ^k),
    λ^{k+1} ← λ^k − γβ(Ax^{k+1} + By^{k+1} − b).   (2.4)

The basic idea of ADM goes back to the work of Glowinski and Marocco [28] and Gabay and Mercier [25]. Let θ1(·) and θ2(·) be convex functionals, and A be a continuous linear operator. The authors of [25] considered minimizing an energy function of the form min_u θ1(u) + θ2(A...

71 | Applications of a splitting algorithm to decomposition in convex programming and variational inequalities
- Tseng
- 1991
Citation Context: ... splitting method [17] applied to a dual problem. The equivalence between ADM and a proximal point method is shown in [18]. ADM is also studied in convex programming [24] and variational inequalities [47, 32]. Moreover, ADM has been extended to allow inexact subproblem minimization [18, 31]. In (2.4), a steplength γ > 0 is attached to the update of λ. Under certain technical assumptions, convergence of...

70 | On the Numerical Solution of Heat Conduction Problems in Two or Three Space Variables, Trans
- Douglas, Rachford
- 1956
Citation Context: ....1) and to which the ADM approach was applied. Subsequently, ADM was studied extensively in optimization and variational analysis. In [27], ADM is interpreted as the Douglas-Rachford splitting method [17] applied to a dual problem. The equivalence between ADM and a proximal point method is shown in [18]. ADM is also studied in convex programming [24] and variational inequalities [47, 32]. Moreover, AD...

65 | Bregman iterative algorithms for ℓ1 minimization with application to compressed sensing
- Yin, Osher, et al.
- 2008
Citation Context: ...descent (CGD) method in [56] for solving (1.5). There exist also algorithms for solving constrained ℓ1-problems (1.3) and (1.4). The Bregman iteration [38] was applied to the basis pursuit problem in [53]. In the same paper, a linearized Bregman method was also suggested and analyzed subsequently in [5, 6, 54]. In [23], Friedlander and van den Berg proposed a spectral projection gradient method (SPGL1...

62 | Linearized Bregman iterations for compressed sensing
- Cai, Osher, et al.
Citation Context: ...1-problems (1.3) and (1.4). The Bregman iteration [38] was applied to the basis pursuit problem in [53]. In the same paper, a linearized Bregman method was also suggested and analyzed subsequently in [5, 6, 54]. In [23], Friedlander and van den Berg proposed a spectral projection gradient method (SPGL1), where (1.4) is solved by a root-finding framework applied to a sequence of LASSO problems [45]. Moreover...

53 | Fast image recovery using variable splitting and constrained optimization
- Afonso, Bioucas-Dias, et al.
- 2010
Citation Context: ...recently when a burst of works applying ADM techniques appeared in 2009, including our ADM-based ℓ1-solver package YALL1 ([62], published online in April 2009) and a number of ADM-related papers (see [22, 57, 31, 42, 1, 2], for example). The rest of the paper presents the derivation and performance of the proposed ADM algorithms for solving the ℓ1-models (1.3)-(1.6) and their nonnegative counterparts, many of whic...

51 | Astronomical image representation by the curvelet transform
- Starck, Candès, et al.
- 2003
Citation Context: ...(1.5) is the iterative shrinkage/thresholding (IST) method, which was first proposed in [21, 37, 14] for wavelet-based image deconvolution and then independently discovered and analyzed by many others [19, 43, 44, 13]. In [29, 30], Hale, Yin and Zhang derived the IST algorithm from an operator splitting framework and combined it with a continuation strategy. The resulting algorithm, which is named fixed-point cont...

48 | Applications of Lagrangian-based alternating direction methods and connections to split Bregman
- Esser
- 2009
Citation Context: ... minimization of L_A(x, y, λ^k) with respect to (x, y) to merely one round of alternating minimization. Recently, ADM has been applied to total variation based image restoration and reconstruction in [20, 52]. In the following, we apply the ADM technique to (1.4) and (1.5), while the application to (1.3) and (1.6) will be a by-product. 2.2. Applying ADM to primal problems. In this subsection, we apply ADM to ...

47 | Fixed-Point Continuation for ℓ1-minimization: Methodology and Convergence
- Hale, Yin, et al.
- 2008
Citation Context: ...e shrinkage/thresholding (IST) method, which was first proposed in [21, 37, 14] for wavelet-based image deconvolution and then independently discovered and analyzed by many others [19, 43, 44, 13]. In [29, 30], Hale, Yin and Zhang derived the IST algorithm from an operator splitting framework and combined it with a continuation strategy. The resulting algorithm, which is named fixed-point continuation (FPC...

47 | The multiplier method of Hestenes and Powell applied to convex programming
- Rockafellar
- 1973
Citation Context: ..., λ^k), λ^{k+1} ← λ^k − γβ(Ax^{k+1} + By^{k+1} − b),   (2.3) where γ ∈ (0, 2) guarantees convergence, as long as the subproblem is solved to an increasingly high accuracy at every iteration [41]. However, an accurate, joint minimization with respect to (x, y) can become costly without taking advantage of the separable form of the objective function f(x) + g(y). In contrast, ADM utilizes the ...

46 | Wavelets and curvelets for image deconvolution: a combined approach
- Starck, Nguyen, et al.

45 | Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization
- Recht, Fazel, et al.
Citation Context: ...image and data analysis, particularly those involving ℓ1-like regularizations such as nuclear-norm (sum of singular values) regularization in matrix rank minimization like the matrix completion problem [40, 8, 10], or the total variation (TV) regularization [42] widely used in image processing. While the nuclear-norm is just an extension of ℓ1-norm to the matrix case, the TV regularization can be converted to ...

39 | Augmented Lagrangian and Operator Splitting Method in Non-Linear Mechanics
- Glowinski, Tallec
- 1989
Citation Context: ...to min_{u,v} {θ1(u) + θ2(v) : Au − v = 0}, which has the form of (2.1) and to which the ADM approach was applied. Subsequently, ADM was studied extensively in optimization and variational analysis. In [27], ADM is interpreted as the Douglas-Rachford splitting method [17] applied to a dual problem. The equivalence between ADM and a proximal point method is shown in [18]. ADM is also studied in convex pr...

32 | A new inexact alternating directions method for monotone variational inequalities
- He, Liao, et al.
- 2002
Citation Context: ...ence between ADM and a proximal point method is shown in [18]. ADM is also studied in convex programming [24] and variational inequalities [47, 32]. Moreover, ADM has been extended to allow inexact subproblem minimization [18, 31]. In (2.4), a steplength γ > 0 is attached to the update of λ. Under certain technical assumptions, convergence of ADM with a steplength γ ∈ (0, (√5 + 1)/2) was established in [26, 27] in the contex...

24 | Alternating direction augmented Lagrangian methods for semidefinite programming
- Wen, Goldfarb, et al.
- 2010
Citation Context: ...nalogous to those for ℓ1-problems as presented in this paper. Recently, the ADM has also been applied to total variation based image reconstruction in [20, 52, 34] and to semi-definite programming in [49]. A more recent application of the ADM approach is to the problem of decomposing a given matrix into a sum of a low-rank matrix and a sparse matrix simultaneously using ℓ1-norm and nuclear-norm regula...

24 | A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data
- Yang, Zhang, et al.
- 2008
Citation Context: ... minimization of L_A(x, y, λ^k) with respect to (x, y) to merely one round of alternating minimization. Recently, ADM has been applied to total variation based image restoration and reconstruction in [20, 52]. In the following, we apply the ADM technique to (1.4) and (1.5), while the application to (1.3) and (1.6) will be a by-product. 2.2. Applying ADM to primal problems. In this subsection, we apply ADM to ...

22 | Convergence of the linearized Bregman iteration for ℓ1-norm minimization
- Cai, Osher, et al.
Citation Context: ...1-problems (1.3) and (1.4). The Bregman iteration [38] was applied to the basis pursuit problem in [53]. In the same paper, a linearized Bregman method was also suggested and analyzed subsequently in [5, 6, 54]. In [23], Friedlander and van den Berg proposed a spectral projection gradient method (SPGL1), where (1.4) is solved by a root-finding framework applied to a sequence of LASSO problems [45]. Moreover...

18 | Application of the alternating direction method of multipliers to separable convex programming problems
- Fukushima
- 1992
Citation Context: ...nterpreted as the Douglas-Rachford splitting method [17] applied to a dual problem. The equivalence between ADM and a proximal point method is shown in [18]. ADM is also studied in convex programming [24] and variational inequalities [47, 32]. Moreover, ADM has been extended to allow inexact subproblem minimization [18, 31]. In (2.4), a steplength γ > 0 is attached to the update of λ. Under certain...

17 | Fast wavelet-based image deconvolution using the EM algorithm
- Nowak, Figueiredo
- 2001
Citation Context: ...d a gradient projection method with line search. To date, the most widely studied first-order method for solving (1.5) is the iterative shrinkage/thresholding (IST) method, which was first proposed in [21, 37, 14] for wavelet-based image deconvolution and then independently discovered and analyzed by many others [19, 43, 44, 13]. In [29, 30], Hale, Yin and Zhang derived the IST algorithm from an operator split...

16 | A note on wavelet-based inversion algorithms
- Mol, Defrise
- 2002
Citation Context: ...d a gradient projection method with line search. To date, the most widely studied first-order method for solving (1.5) is the iterative shrinkage/thresholding (IST) method, which was first proposed in [21, 37, 14] for wavelet-based image deconvolution and then independently discovered and analyzed by many others [19, 43, 44, 13]. In [29, 30], Hale, Yin and Zhang derived the IST algorithm from an operator split...

12 | Sparse and low-rank matrix decomposition via alternating direction methods
- Yuan, Yang
- 2009
Citation Context: ...en matrix into a sum of a low-rank matrix and a sparse matrix simultaneously using ℓ1-norm and nuclear-norm regularizations (see [11]). An ADM scheme has been proposed and studied for this problem in [55]. Although the ADM approach is classic and its convergence properties have been well studied, its remarkable effectiveness in signal and image reconstruction problems involving ℓ1-like regularizations...

10 | A Fast Algorithm for the Constrained Formulation of Compressive Image Reconstruction and Other Linear Inverse Problems
- Afonso, Bioucas-Dias, et al.
- 2010
Citation Context: ...recently when a burst of works applying ADM techniques appeared in 2009, including our ADM-based ℓ1-solver package YALL1 ([62], published online in April 2009) and a number of ADM-related papers (see [22, 57, 31, 42, 1, 2], for example). The rest of the paper presents the derivation and performance of the proposed ADM algorithms for solving the ℓ1-models (1.3)-(1.6) and their nonnegative counterparts, many of whic...