## An Unconstrained ℓq Minimization with 0 < q ≤ 1 for Sparse Solution of Under-determined Linear Systems (2009)

Citations: 3 (1 self)

### BibTeX

@MISC{Lai09anunconstrained,
  author = {Ming-jun Lai and Jingyue Wang},
  title = {An Unconstrained ℓq Minimization with 0 < q ≤ 1 for Sparse Solution of Under-determined Linear Systems},
  year = {2009}
}

### Abstract

We study an unconstrained version of the ℓq minimization for the sparse solution of under-determined linear systems for 0 < q ≤ 1. Although the minimization is nonconvex, we introduce a regularization and develop an iterative algorithm. We show that the iterative solutions converge to the sparse solution. Numerical experiments demonstrate that our approach works very well.

### Citations

1715 | Compressed sensing - Donoho - 2006 |

831 | Practical Signal Recovery from Random
- Candes, Romberg, et al.
Citation Context: ...(1 − δs)‖x_T‖₂² ≤ ‖Ax_T‖₂² ≤ (1 + δs)‖x_T‖₂², (15) where x_T is a vector in Rⁿ whose nonzero entries are those with indices in T, for all T ⊂ {1, 2, ..., n} with #(T) ≤ s. The concept was introduced in [6] and [7], which generated a great deal of interest. Many random matrices, such as Gaussian, sub-Gaussian, and pre-Gaussian random matrices, are shown to have the RIP with overwhelming probability. See [6], [10] and ... |

741 |
Stable signal recovery from incomplete and inaccurate measurements
- Candès, Romberg, et al.
- 2006
Citation Context: ...study, let us outline some research results related to numerical algorithms for the computation of sparse solutions of (1). First of all, the ℓ1 minimization (2) by Candès and his collaborators (cf. [5]) is a successful approach to find sparse solutions of (1) if the sparsity s = ‖x‖₀ is not very large. A Matlab program based on a linear programming method for the sparse solution is available on-line a... |
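The ℓ1 minimization mentioned in this context, min ‖x‖₁ subject to Ax = b, is solvable by linear programming. Below is a minimal sketch of that reformulation (not the authors' Matlab code): introduce auxiliary variables t with −t ≤ x ≤ t and minimize ∑ t_j over z = [x; t] using `scipy.optimize.linprog`. The problem sizes and random instance are illustrative.

```python
# Sketch: l1 minimization min ||x||_1 s.t. Ax = b, recast as a linear
# program over z = [x; t] with constraints -t <= x <= t.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, s = 20, 50, 3                      # illustrative sizes
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true

c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum(t)
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])            # x - t <= 0 and -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])         # A x = b (t unconstrained)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * (2 * n))
x_hat = res.x[:n]
print(np.linalg.norm(x_hat - x_true))
```

For a Gaussian matrix with m = 20 rows and a 3-sparse target, this LP recovers x_true exactly with overwhelming probability, which is the regime the context describes.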

654 | Decoding by linear programming
- Candès, Tao
- 2005
Citation Context: ...(1 − δs)‖x_T‖₂² ≤ ‖Ax_T‖₂² ≤ (1 + δs)‖x_T‖₂², (15) where x_T is a vector in Rⁿ whose nonzero entries are those with indices in T, for all T ⊂ {1, 2, ..., n} with #(T) ≤ s. The concept was introduced in [6] and [7], which generated a great deal of interest. Many random matrices, such as Gaussian, sub-Gaussian, and pre-Gaussian random matrices, are shown to have the RIP with overwhelming probability. See [6], [... |
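The RIP inequality quoted in this context can be probed numerically: the best δ_s for a fixed support T is governed by the extreme singular values of the column submatrix A_T. The sketch below (illustrative sizes and scaling, not from the paper) samples random supports of a column-scaled Gaussian matrix and records an empirical lower bound on δ_s.

```python
# Sketch: empirically probing (1 - delta_s)||x_T||^2 <= ||A x_T||^2
#         <= (1 + delta_s)||x_T||^2 over random supports T with #T = s.
import numpy as np

rng = np.random.default_rng(1)
m, n, s = 60, 120, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)   # E||column||^2 = 1

delta = 0.0
for _ in range(200):                           # sample supports T
    T = rng.choice(n, s, replace=False)
    sv = np.linalg.svd(A[:, T], compute_uv=False)
    # sv[0]^2 and sv[-1]^2 bound ||A x_T||^2 / ||x_T||^2 on this T
    delta = max(delta, abs(sv[0] ** 2 - 1), abs(sv[-1] ** 2 - 1))

print(f"empirical lower bound on delta_{s}: {delta:.3f}")
```

This only lower-bounds the true restricted isometry constant (which maximizes over all supports), but it illustrates why Gaussian matrices with s ≪ m tend to satisfy the RIP.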

634 |
An introduction to compressive sampling
- Candès, Wakin
- 2008
Citation Context: ...s related to the existence, uniqueness, and other properties of the sparse solution, as well as computational algorithms and their convergence analysis, to tackle Problem (1). See the survey papers [1], [3], and [2]. To motivate our study, let us outline some research results related to numerical algorithms for the computation of sparse solutions of (1). First of all, the ℓ1 minimization (2) by Candès a... |

525 | Greed is good: Algorithmic results for sparse approximation
- Tropp
- 2004
Citation Context: ...page. The performance of the ℓ1 method is further improved based on the idea of repeated reweighted iterations (cf. [8]). Another approach is based on the orthogonal greedy algorithm (OGA). See [32] and [33] for some theoretical study and [30] for an efficient numerical algorithm. The performance of the OGA in [30] is much improved based on the greedy ℓ1 algorithm proposed recently in [25]. Another approac... |
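The orthogonal greedy algorithm this context refers to can be sketched in a few lines (this is the generic orthogonal-matching-pursuit step, not the implementation from [30]): repeatedly pick the column most correlated with the residual, then re-fit the selected columns by least squares so the residual stays orthogonal to them. The test instance is illustrative.

```python
# Sketch of an orthogonal greedy (orthogonal matching pursuit) iteration.
import numpy as np

def ogreedy(A, b, s):
    """Return an s-sparse approximate solution of Ax = b."""
    n = A.shape[1]
    support, r = [], b.copy()
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ r)))      # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ coef             # orthogonalized residual
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80)
x_true[[3, 17, 44]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = ogreedy(A, b, 3)
print(np.flatnonzero(x_hat))
```

Because the residual is re-orthogonalized after every selection, no column is ever picked twice, which is the "orthogonal" part of the name.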

419 |
An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Daubechies, Defrise, et al.
- 2004
Citation Context: ...om variable. See [20]. In addition, there are many other approaches, e.g., the optimal basis pursuit (OMB) method (for problem (2)), soft-thresholding iterations, standard and damped Landweber iterations ([11]) for problem (2), the iterative reweighted least squares (IRLS) method (cf. [13]) (for problems (2) and (3)), etc. In this paper we shall consider another version of ℓq minimization: min_{x∈R^N} ‖x‖_q^q... |

365 | Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization - Donoho, Elad - 2003 |

321 |
The restricted isometry property and its implications for compressed sensing
- Candès
Citation Context: ...gh, where δ2s is the restricted isometry constant of the matrix A (similarly for δ2s+1, δ2s+2). Note that for the ℓ1 minimization, one needs δ2s < √2 − 1 or δ2s < 2/(3 + √2) or 4/(6 + √6), as shown in [4], [19] and [18]. The third advantage is that the ℓq minimization can be applied to a wider class of random matrices A, e.g., when A is a random matrix whose entries are iid copies of a pre-Gaussian ra... |

317 |
Sparse approximate solutions to linear systems
- Natarajan
- 1995
Citation Context: ...ch. This problem is motivated by data compression, error correction decoding, n-term approximation, etc. (See, e.g., [26].) It is known that the problem (1) needs non-polynomial time to solve (cf. [28]). A natural approach to tackle (1) is to solve the following convex minimization problem: min_{x∈R^N} {‖x‖₁ : Ax = b}, (2) where ‖x‖₁ = ∑_{j=1}^N |x_j| is the standard ℓ1 n... |

304 | Compressive sensing
- Baraniuk
- 2007
Citation Context: ...utions related to the existence, uniqueness, and other properties of the sparse solution, as well as computational algorithms and their convergence analysis, to tackle Problem (1). See the survey papers [1], [3], and [2]. To motivate our study, let us outline some research results related to numerical algorithms for the computation of sparse solutions of (1). First of all, the ℓ1 minimization (2) by Can... |

223 |
An Introduction to Γ-Convergence
- Dal Maso
- 1993
Citation Context: ...k→∞ E(v). Since v is arbitrarily chosen, we have lim sup_{kj→∞} inf_{u∈X} E_{kj}(u) ≤ inf_{v∈X} E(v). One important consequence of a Γ-convergent sequence of functionals is the following standard result (cf. [27]). Lemma 2.5: Suppose that a sequence of functionals E_k is Γ-convergent to a functional E on X as k → ∞. Letting E_{kj} be a subsequence and u_{kj} be the minimizer of E_{kj}, if u_{kj} converges to u in X, the... |

202 | From sparse solutions of systems of equations to sparse modeling of signals and images
- Bruckstein, Donoho, et al.
- 2009
Citation Context: ...to the existence, uniqueness, and other properties of the sparse solution, as well as computational algorithms and their convergence analysis, to tackle Problem (1). See the survey papers [1], [3], and [2]. To motivate our study, let us outline some research results related to numerical algorithms for the computation of sparse solutions of (1). First of all, the ℓ1 minimization (2) by Candès and his co... |

143 |
Tchebycheff systems: With applications in analysis and statistics
- Karlin, Studden
- 1966
Citation Context: ...cos(x_j), sin(x_j), ..., cos(m x_j), sin(m x_j)]_{j=1,...,n}, for all x_j ∈ [0, 2π), j = 1, ..., n, be a matrix of size (2m + 1) × n. Then A is of completely full rank since A is a Tchebycheff system (cf. [26]). Lemma 2.2: Suppose that A is of completely full rank. Let Ã be the block matrix built from A, 0_m, and I_n, where 0_m is a zero block matrix of size m × m, I_n is the identity matrix of size n × n, and R_m is a zero matrix except for... |

103 | Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit
- Needell, Vershynin
- 2009
Citation Context: ...[25]), the standard ℓ1 (L1) (cf. [5]) and reweighted ℓ1 (RWL1) algorithms (cf. [8]), which can be obtained on-line from the Candès webpage, and the regularized orthogonal matching pursuit (ROMP) (cf. [29]), in addition to the ℓq (Lq) algorithm developed in [19]. In our unconstrained ℓq (nLq) minimization, we choose λ = 10⁻⁶ and run our iterative algorithm explained in Section 2 for many ǫ > 0 and q >... |

94 |
Enhancing sparsity by reweighted l1 minimization
- Candes, Wakin, et al.
- 2008
Citation Context: ...r programming method for the sparse solution is available on-line at the Candès webpage. The performance of the ℓ1 method is further improved based on the idea of repeated reweighted iterations (cf. [8]). Another approach is based on the orthogonal greedy algorithm (OGA). See [32] and [33] for some theoretical study and [30] for an efficient numerical algorithm. The performance of the OGA in [30] is much... |

94 |
Su un tipo di convergenza variazionale
- De Giorgi, Franzoni
Citation Context: ...). We show that z_q converges to the sparse solution of the original problem (1) as q → 0+. We shall use the concept of Γ-convergence, which was introduced by E. De Giorgi and T. Franzoni in 1975 (cf. [22]). We first give the definition of Γ-convergence. Definition 2.1: Let (X, d) be a metric space with metric d. We say that a sequence of functionals E_k : X → [−∞, ∞] is Γ-convergent to a functional... |
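Definition 2.1 is truncated in the snippet above; in the standard textbook form (as in the Dal Maso monograph cited as [27] elsewhere on this page), Γ-convergence of E_k to E on a metric space (X, d) is the pair of conditions:

```latex
E_k \xrightarrow{\ \Gamma\ } E \quad\Longleftrightarrow\quad
\begin{cases}
\forall\, u_k \to u:\ E(u) \le \displaystyle\liminf_{k\to\infty} E_k(u_k)
  & \text{(liminf inequality)},\\[4pt]
\forall\, u\ \exists\, u_k \to u:\ \displaystyle\limsup_{k\to\infty} E_k(u_k) \le E(u)
  & \text{(recovery sequence)}.
\end{cases}
```

The liminf inequality prevents the limit functional from undercutting the approximations, while the recovery sequence guarantees the limit is attained; together they yield the minimizer-convergence statement quoted as Lemma 2.5 in the contexts above.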

93 |
Exact reconstruction of sparse signals via nonconvex minimization
- Chartrand
Citation Context: ...e. One is the result in [10]: for a Gaussian random matrix A, the restricted q-isometry property of order s holds if s is almost proportional to m as q → 0+. Another advantage, demonstrated in [31], [9] and [19], is that when δ2s < 1 (or δ2s+1 < 1, δ2s+2 < 1), the solution of the ℓq minimization is a sparse solution for q > 0 small enough, where δ2s is the restricted isometry constant of the matrix A (sim... |

87 | Stable recovery of sparse overcomplete representations in the presence of noise - Donoho, Elad, Temlyakov - 2006 |

79 | Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0
- Foucart, Lai
- 2009
Citation Context: ...s the result in [10]: for a Gaussian random matrix A, the restricted q-isometry property of order s holds if s is almost proportional to m as q → 0+. Another advantage, demonstrated in [31], [9] and [19], is that when δ2s < 1 (or δ2s+1 < 1, δ2s+2 < 1), the solution of the ℓq minimization is a sparse solution for q > 0 small enough, where δ2s is the restricted isometry constant of the matrix A (similar for... |

63 | Iteratively reweighted least squares minimization for sparse recovery
- Daubechies, DeVore, et al.
Citation Context: ...timal basis pursuit (OMB) method (for problem (2)), soft-thresholding iterations, standard and damped Landweber iterations ([11]) for problem (2), the iterative reweighted least squares (IRLS) method (cf. [13]) (for problems (2) and (3)), etc. In this paper we shall consider another version of ℓq minimization: min_{x∈R^N} ‖x‖_q^q + (1/(2λ)) ‖Ax − b‖₂², (4) where ‖x‖₂² = ∑_{j=1}^N x_j² and λ > 0 is a parameter... |
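The unconstrained objective (4) quoted in this context, min ‖x‖_q^q + (1/(2λ))‖Ax − b‖₂², can be attacked by an IRLS-style iteration of the kind this reference studies. The sketch below is an illustrative adaptation, not the paper's exact scheme: |x_j|^q is smoothed as (x_j² + ε²)^{q/2}, each step solves the resulting weighted least-squares system, and ε is annealed toward zero; all parameter values are assumptions.

```python
# Sketch: IRLS-style iteration for min ||x||_q^q + (1/(2*lam))*||Ax - b||_2^2,
# smoothing |x_j|^q by (x_j^2 + eps^2)^(q/2). Parameters are illustrative.
import numpy as np

def irls_lq(A, b, q=0.5, lam=1e-6, iters=60):
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # dense least-squares start
    AtA, Atb = A.T @ A, A.T @ b
    eps = 1.0
    for _ in range(iters):
        # quadratic surrogate weights for the smoothed l_q term
        w = (q / 2) * (x**2 + eps**2) ** (q / 2 - 1)
        # minimize sum(w_j x_j^2) + (1/(2*lam))*||Ax - b||^2
        x = np.linalg.solve(2 * lam * np.diag(w) + AtA, Atb)
        eps = max(0.7 * eps, 1e-6)             # anneal the smoothing
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((25, 60))
x_true = np.zeros(60)
x_true[[5, 20, 41]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = irls_lq(A, b)
print(np.linalg.norm(x_hat - x_true))
```

As ε shrinks, the weights on near-zero coordinates grow without bound, so those coordinates are driven to zero while the active coordinates keep a negligible penalty, which is what makes the iterates sparsify.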

60 | Accelerated projected gradient methods for linear inverse problems with sparsity constraints - Daubechies, Fornasier, et al. - 2008 |

60 | Bregman iterative algorithms for ℓ1 minimization with applications to compressed sensing
- Yin, Osher, et al.
Citation Context: ...go to zero in order to approximate ‖x‖_q^q. This is the main minimization problem we study in this paper. Although many unconstrained versions of problem (2) have been studied in the literature (cf. [34] and references therein), the unconstrained ℓq minimization (4) has not been analyzed so far. We shall show that the above problem (5) has a solution for any q ∈ (0, 1] and ǫ > 0. We also derive a... |

50 |
Restricted isometry properties and nonconvex compressive sensing
- Chartrand, Staneva
Citation Context: ...researchers have worked in this direction. Even though it is NP-hard (cf. [22]), there are at least three advantages of using this approach, to the best of the authors' knowledge. One is the result in [10]: for a Gaussian random matrix A, the restricted q-isometry property of order s holds if s is almost proportional to m as q → 0+. Another advantage demonstrated in [31], [9] and [19] is when δ2s < 1... |

48 | Nonlinear methods of approximation
- Temlyakov
- 2002
Citation Context: ...andès webpage. The performance of the ℓ1 method is further improved based on the idea of repeated reweighted iterations (cf. [8]). Another approach is based on the orthogonal greedy algorithm (OGA). See [32] and [33] for some theoretical study and [30] for an efficient numerical algorithm. The performance of the OGA in [30] is much improved based on the greedy ℓ1 algorithm proposed recently in [25]. Anothe... |

38 |
Sparse decompositions in unions of bases
- Gribonval, Nielsen
Citation Context: ...the following: min_{x∈R^N} {‖x‖_q^q : Ax = b}, (3) where ‖x‖_q^q = ∑_{j=1}^N |x_j|^q for 0 < q ≤ 1. This minimization is motivated by the following fact: lim_{q→0+} ‖x‖_q^q = ‖x‖₀. This approach was initiated by [23], and many researchers have worked in this direction. Even though it is NP-hard (cf. [22]), there are at least three advantages of using this approach, to the best of the authors' knowledge. One is the r... |
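The motivating limit quoted in this context, lim_{q→0+} ‖x‖_q^q = ‖x‖₀, is easy to verify numerically; the vector below is purely illustrative.

```python
# Quick numeric check of lim_{q -> 0+} ||x||_q^q = ||x||_0 (nonzero count).
import numpy as np

x = np.array([0.0, 3.0, 0.0, -0.5, 1e-2])
l0 = np.count_nonzero(x)            # ||x||_0 = 3
for q in (1.0, 0.5, 0.1, 0.01):
    # 0**q == 0 for q > 0, so zero entries contribute nothing,
    # while every nonzero entry's contribution tends to 1 as q -> 0+.
    lq = np.sum(np.abs(x) ** q)     # ||x||_q^q
    print(q, round(float(lq), 4))
```

As q decreases, each nonzero |x_j|^q approaches 1 regardless of magnitude, so the sum approaches the count of nonzero entries, which is exactly why small-q minimization promotes sparsity.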

32 | Signal recovery and the large sieve - Donoho, Logan - 1992 |

30 | On verifiable sufficient conditions for sparse signal recovery via ℓ 1 minimization
- Juditsky, Nemirovski
- 2008
Citation Context: ...lobal minimizer and the sparsity of critical points. We shall introduce a concept called matrices of completely full rank and recall the standard notion of the restricted isometry property (RIP). See [24] for a verifiable sufficient condition for sparse recovery. A matrix which is of completely full rank can be renormalized to be a matrix with the RIP. Under the assumption that x_{ǫ,q} is a global minimize... |

20 |
A note on guaranteed sparse recovery via ℓ1-minimization
- Foucart
- 2010
Citation Context: ...is the restricted isometry constant of the matrix A (similarly for δ2s+1, δ2s+2). Note that for the ℓ1 minimization, one needs δ2s < √2 − 1 or δ2s < 2/(3 + √2) or 4/(6 + √6), as shown in [4], [19] and [18]. The third advantage is that the ℓq minimization can be applied to a wider class of random matrices A, e.g., when A is a random matrix whose entries are iid copies of a pre-Gaussian random variable. ... |

6 | Fast implementation of orthogonal greedy algorithm for tight wavelet frames
- Petukhov
Citation Context: ...thod is further improved based on the idea of repeated reweighted iterations (cf. [8]). Another approach is based on the orthogonal greedy algorithm (OGA). See [32] and [33] for some theoretical study and [30] for an efficient numerical algorithm. The performance of the OGA in [30] is much improved based on the greedy ℓ1 algorithm proposed recently in [25]. Another approach for the computation of the spars... |

6 |
Efficient reconstruction of piecewise constant images using nonsmooth nonconvex minimization
- Nikolova, Ng, et al.
Citation Context: ...strained versions of problem (2) have been studied in the literature (cf. [37] and references therein). In addition, there are several studies on the unconstrained ℓq minimization (4), e.g., [11] and [32]. The researchers in [32] and [11] used several formats to regularize (4). These regularized minimizations are different from the one in (5). They obtained several interesting results on... |

4 | Sparse Recovery with Pre-Gaussian Random Matrices
- Foucart, Lai
- 2009
Citation Context: ...e third advantage is that the ℓq minimization can be applied to a wider class of random matrices A, e.g., when A is a random matrix whose entries are iid copies of a pre-Gaussian random variable. See [20]. In addition, there are many other approaches, e.g., the optimal basis pursuit (OMB) method (for problem (2)), soft-thresholding iterations, standard and damped Landweber iterations ([11]) for problem (2)... |

3 | On Sparse Solutions of Underdetermined Linear Systems
- Lai
- 2009
Citation Context: ...olution of Ax = b. This is one of the critical problems in compressed sensing research. This problem is motivated by data compression, error correction decoding, n-term approximation, etc. (See, e.g., [26].) It is known that the problem (1) needs non-polynomial time to solve (cf. [28]). A natural approach to tackle (1) is to solve the following convex minimization prob... |

2 | Recovery of sparsest signals via ℓq-minimization
- Sun
Citation Context: ...owledge. One is the result in [10]: for a Gaussian random matrix A, the restricted q-isometry property of order s holds if s is almost proportional to m as q → 0+. Another advantage demonstrated in [31], [9] and [19] is that when δ2s < 1 (or δ2s+1 < 1, δ2s+2 < 1), the solution of the ℓq minimization is a sparse solution for q > 0 small enough, where δ2s is the restricted isometry constant of the matrix A... |

2 |
Lower Bound Theory of Nonzero Entries
- Chen, Xu, et al.
Citation Context: ...any unconstrained versions of problem (2) have been studied in the literature (cf. [37] and references therein). In addition, there are several studies on the unconstrained ℓq minimization (4), e.g., [11] and [32]. The researchers in [32] and [11] used several formats to regularize (4). These regularized minimizations are different from the one in (5). They obtained several interesting re... |

1 |
A note on complexity of Lp minimization, manuscript
- Ge, Ye
- 2010
Citation Context: ...This minimization is motivated by the following fact: lim_{q→0+} ‖x‖_q^q = ‖x‖₀. This approach was initiated by [23], and many researchers have worked in this direction. Even though it is NP-hard (cf. [22]), there are at least three advantages of using this approach, to the best of the authors' knowledge. One is the result in [10]: for a Gaussian random matrix A, the restricted q-isometry property of ord... |

1 |
Sparse Solutions of Underdetermined Linear Systems, Chapter in Handbook of Geomathematics, edited by W
- Kozlov, Petukhov
- 2010
Citation Context: ...A). See [32] and [33] for some theoretical study and [30] for an efficient numerical algorithm. The performance of the OGA in [30] is much improved based on the greedy ℓ1 algorithm proposed recently in [25]. Another approach for the computation of the sparse solutions is based on ℓq minimization with 0 < q < 1. That is, we consider the following: min_{x∈R^N} {‖x‖_q^q : Ax = b}, (3) where ‖x‖_q^q = ∑_{j=1}^N |x_j|... |
