## Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1

Citations: 79 (8 self)

### BibTeX

@INPROCEEDINGS{Foucart_sparsestsolutions,
  author = {Simon Foucart and Ming-Jun Lai},
  title = {Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1},
  booktitle = {},
  year = {}
}

### Abstract

We present a condition on the matrix of an underdetermined linear system which guarantees that the solution of the system with minimal ℓq-quasinorm is also the sparsest one. This generalizes, and slightly improves, a similar result for the ℓ1-norm. We then introduce a simple numerical scheme to compute solutions with minimal ℓq-quasinorm, and we study its convergence. Finally, we display the results of some experiments which indicate that the ℓq-method performs better than other available methods.
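The "simple numerical scheme" mentioned in the abstract computes minimal-ℓq solutions iteratively. The sketch below is a generic ε-smoothed iteratively reweighted least-squares (IRLS) iteration for this problem, not necessarily the authors' exact algorithm; the function name, the ε-schedule, and the problem sizes are illustrative assumptions.

```python
import numpy as np

def irls_lq(A, y, q=0.5, eps=1.0, n_iter=30):
    """Generic IRLS sketch for: minimize ||z||_q subject to Az = y.

    Each step solves a weighted minimum-norm problem whose weights
    penalize small entries, driving the iterate toward a sparse point.
    (Illustrative only; not necessarily the paper's exact scheme.)
    """
    z = np.linalg.lstsq(A, y, rcond=None)[0]      # start at the min-l2 solution
    for _ in range(n_iter):
        # w_i = (z_i^2 + eps^2)^(q/2 - 1): large weight on small entries
        w = (z**2 + eps**2) ** (q / 2.0 - 1.0)
        D = np.diag(1.0 / w)                      # D = W^{-1}
        # weighted min-norm solution: z = D A^T (A D A^T)^{-1} y
        z = D @ A.T @ np.linalg.solve(A @ D @ A.T, y)
        eps = max(eps / 10.0, 1e-6)               # gradually sharpen the penalty
    return z
```

On easy instances (e.g. a 3-sparse vector measured by a 20 × 40 Gaussian matrix) an iteration of this kind typically returns the sparsest solution to high accuracy.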

### Citations

833 | Practical signal recovery from random projections
- Candès, Romberg
- 2005
Citation Context ...ns have been put forward, such as orthogonal greedy algorithms or basis pursuit. The latter replaces the problem (P0) by the ℓ1-minimization minimize_{z ∈ R^N} ‖z‖1 subject to Az = y. (P1) Candès and Tao [5] showed for instance that any s-sparse vector is exactly recovered via the minimization (P1) as soon as δ3s + 3δ4s < 2. Note that a condition involving only δ2s would seem more natural, in view of the... |
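The ℓ1-minimization (P1) quoted in this excerpt is a linear program, which is why basis pursuit is computationally tractable. A minimal sketch using SciPy's LP solver (the z = u − v splitting is the standard reformulation; the function name and problem sizes are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve (P1): minimize ||z||_1 subject to Az = y, as a linear program.

    Split z = u - v with u, v >= 0; at the optimum u and v have disjoint
    supports, so sum(u + v) equals ||z||_1.
    """
    N = A.shape[1]
    c = np.ones(2 * N)                 # objective: 1^T u + 1^T v
    A_eq = np.hstack([A, -A])          # equality constraint: A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:N] - res.x[N:]
```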

742 |
Stable signal recovery from incomplete and inaccurate measurements
- Candès, Romberg, et al.
- 2006
Citation Context ...alistic situation of a measurement y = Ax + e containing a perturbation vector e with ‖e‖2 ≤ ϑ for some fixed amount ϑ ≥ 0. This framework is exactly the one introduced by Candès, Romberg, and Tao in [3] for the case q = 1. Next, in Section 4, we propose a numerical algorithm to approximate the minimization (Pq). We then discuss convergence issues and prove that the output of the algorithm is not mer... |

656 | Decoding by linear programming
- Candès, Tao
- 2005
Citation Context ...z‖_2^2 whenever ‖z‖0 ≤ k. (1) It is then easy to observe that any s-sparse vector x is recovered via the minimization (P0) in which y = Ax if and only if the strict inequality δ2s < 1 holds, see e.g. [4]. However appealing (P0) might seem, it remains an NP-hard problem [11] that cannot be solved in practice. Nonetheless, assuming certain conditions on the matrix A, alternative strategies to find sparsest ... |
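The restricted isometry constants δk referenced in this excerpt can be computed exactly for toy matrices by enumerating all k-column submatrices; this is exponential in N and is only meant to make the definition concrete (the function name is illustrative):

```python
import itertools
import numpy as np

def restricted_isometry_constant(A, k):
    """Brute-force delta_k: smallest delta such that
    (1 - delta) ||z||_2^2 <= ||Az||_2^2 <= (1 + delta) ||z||_2^2
    for every k-sparse z. Feasible only for tiny N."""
    N = A.shape[1]
    delta = 0.0
    for S in itertools.combinations(range(N), k):
        G = A[:, list(S)].T @ A[:, list(S)]   # Gram matrix of the k columns
        eigs = np.linalg.eigvalsh(G)          # extreme eigenvalues bound the ratio
        delta = max(delta, abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0))
    return delta
```

For a matrix with orthonormal columns the constant is 0, while a matrix containing two identical unit-norm columns already has δ2 = 1, matching the failure of the δ2s < 1 recovery condition.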

524 | Greed is good: Algorithmic results for sparse approximation
- Tropp
- 2004
Citation Context ...nal matching pursuit (ROMP, see [12]), the ℓ1-minimization (L1), and the reweighted ℓ1-minimization (RWL1, see [6]). There are many greedy algorithms available in the literature, see e.g. [14], [15], [16], and [9], but we find the orthogonal greedy algorithm of [13] more efficient due to two of its features: one is to select multiple columns from A during each greedy iteration and the other is to... |
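For concreteness, here is the textbook one-column-at-a-time orthogonal matching pursuit. The orthogonal greedy algorithm of [13] favored in this excerpt additionally selects multiple columns per iteration and uses an iterative least-squares update; this sketch does not reproduce those features.

```python
import numpy as np

def omp(A, y, s):
    """Plain orthogonal matching pursuit for y = Az with z s-sparse.

    Greedily picks the column most correlated with the residual, then
    re-fits y on all selected columns by least squares.
    """
    N = A.shape[1]
    support = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ residual)))     # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    z = np.zeros(N)
    z[support] = coef
    return z
```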

364 | Optimal sparse representation in general (non-orthogonal) dictionaries via ℓ1 minimization
- Donoho, Elad
- 2003
Citation Context ...y observe that a solution z of (P0) is guaranteed to be unique as soon as 2‖z‖0 < spark(A), where spark(A) ≤ rank(A) + 1 is the smallest integer σ for which σ columns of A are linearly dependent, see [8]. Uniqueness can also be characterized in terms of the Restricted Isometry Constants δk of the matrix A. We recall that these are the smallest constants 0 < δk ≤ 1 for which the matrix A satisfies the... |
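The quantity spark(A) from this excerpt (the smallest number of linearly dependent columns) can likewise be computed by brute force on small examples; the function name and the N + 1 return convention for matrices with all columns independent are illustrative choices:

```python
import itertools
import numpy as np

def spark(A):
    """Smallest sigma such that some sigma columns of A are linearly
    dependent; returns N + 1 if all N columns are independent."""
    N = A.shape[1]
    for sigma in range(1, N + 1):
        for S in itertools.combinations(range(N), sigma):
            if np.linalg.matrix_rank(A[:, list(S)]) < sigma:
                return sigma
    return N + 1
```

For A with columns e1, e2, e1 + e2 the spark is 3, so by the quoted criterion any solution z with 2‖z‖0 < 3, i.e. any 1-sparse solution, is unique.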

322 | The restricted isometry property and its implications for compressed sensing, Comptes Rendus de l'Académie des Sciences
- Candès
- 2008
Citation Context ...y recovered via the minimization (P1) as soon as δ3s + 3δ4s < 2. Note that a condition involving only δ2s would seem more natural, in view of the previous considerations. Candès provided just that in [2] when he established exact recovery of s-sparse vectors via ℓ1-minimization under the condition δ2s < √2 − 1 ≈ 0.4142. (2) We shall now adopt a strategy that lies between the minimizations (P0) and (... |

316 | Sparse approximate solutions to linear systems
- Natarajan
- 1995
Citation Context ... s-sparse vector x is recovered via the minimization (P0) in which y = Ax if and only if the strict inequality δ2s < 1 holds, see e.g. [4]. However appealing (P0) might seem, it remains an NP-hard problem [11] that cannot be solved in practice. Nonetheless, assuming certain conditions on the matrix A, alternative strategies to find sparsest solutions have been put forward, such as orthogonal greedy algorit... |

302 | A simple proof of the restricted isometry property for random matrices
- Baraniuk, Davenport, et al.
Citation Context ...be e.g. independent realizations of Gaussian random variables of mean zero and identical variance, provided that m ≥ c · s log(N/s), where c is a constant depending on γ2s − 1. We refer the reader to [1] for a precise statement and a simple proof based on concentration of measure inequalities. We start by illustrating Theorem 3.1 in the special case of s-sparse vectors that are measured with infinite... |

103 | Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit
- Needell, Vershynin
Citation Context ...n, but is in fact exact. Finally, we compare in Section 5 our ℓq-algorithm with four existing methods: the orthogonal greedy algorithm, see e.g. [13], the regularized orthogonal matching pursuit, see [12], the ℓ1-minimization, and the reweighted ℓ1-minimization, see [6]. The last two, as well as our ℓq-algorithm, use the ℓ1-magic software available on Candès' web page. It comes as a small surprise tha... |

93 | Enhancing sparsity by reweighted ℓ1 minimization
- Candès, Wakin, et al.
- 2008
Citation Context ...gorithm with four existing methods: the orthogonal greedy algorithm, see e.g. [13], the regularized orthogonal matching pursuit, see [12], the ℓ1-minimization, and the reweighted ℓ1-minimization, see [6]. The last two, as well as our ℓq-algorithm, use the ℓ1-magic software available on Candès' web page. It comes as a small surprise that the ℓq-method performs best. 2 Exact recovery via ℓq-minimizatio... |
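The reweighted ℓ1-minimization of [6] mentioned here alternates a weighted basis-pursuit solve with the weight update w_i = 1/(|z_i| + ε). A compact sketch (the number of passes, the value of ε, and the function name are illustrative; [6] discusses how to choose ε):

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, y, n_iter=4, eps=0.1):
    """Reweighted l1 in the spirit of [6]: each pass solves
    minimize sum_i w_i |z_i| subject to Az = y (a weighted LP via the
    z = u - v split), then re-weights so small entries are penalized more."""
    N = A.shape[1]
    w = np.ones(N)                         # first pass = plain l1-minimization
    for _ in range(n_iter):
        c = np.concatenate([w, w])         # weighted objective w^T (u + v)
        A_eq = np.hstack([A, -A])
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
        z = res.x[:N] - res.x[N:]
        w = 1.0 / (np.abs(z) + eps)        # emphasize the current small entries
    return z
```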

93 | Exact reconstruction of sparse signals via nonconvex minimization
- Chartrand
Citation Context ...e minimization minimize_{z ∈ R^N} ‖z‖q subject to Az = y. (Pq) This is by no means a brand new approach. Gribonval and Nielsen, see e.g. [10], studied the ℓq-minimization in terms of Coherence. Chartrand [7] studied it in terms of Restricted Isometry Constants. He stated that s-sparse vectors can be exactly recovered by solving (Pq) under the assumption that δas + bδ(a+1)s < b − 1 holds for some b > 1 and a... |

87 | Stable recovery of sparse overcomplete representations in the presence of noise
- Donoho, Elad, Temlyakov
- 2006
Citation Context ...ng pursuit (ROMP, see [12]), the ℓ1-minimization (L1), and the reweighted ℓ1-minimization (RWL1, see [6]). There are many greedy algorithms available in the literature, see e.g. [14], [15], [16], and [9], but we find the orthogonal greedy algorithm of [13] more efficient due to two of its features: one is to select multiple columns from A during each greedy iteration and the other is to use an i... |

56 | Highly sparse representations from dictionaries are unique and independent of the sparseness measure
- Gribonval, Nielsen
- 2003
Citation Context ...izations (P0) and (P1). Namely, we consider, for some 0 < q ≤ 1, the minimization minimize_{z ∈ R^N} ‖z‖q subject to Az = y. (Pq) This is by no means a brand new approach. Gribonval and Nielsen, see e.g. [10], studied the ℓq-minimization in terms of Coherence. Chartrand [7] studied it in terms of Restricted Isometry Constants. He stated that s-sparse vectors can be exactly recovered by solving (Pq) under ... |

50 | Vector greedy algorithms
- Lutoborski, Temlyakov
- 2003
Citation Context ...ized orthogonal matching pursuit (ROMP, see [12]), the ℓ1-minimization (L1), and the reweighted ℓ1-minimization (RWL1, see [6]). There are many greedy algorithms available in the literature, see e.g. [14], [15], [16], and [9], but we find the orthogonal greedy algorithm of [13] more efficient due to two of its features: one is to select multiple columns from A during each greedy iteration and the... |

48 | Nonlinear methods of approximation
- Temlyakov
- 2002
Citation Context ...rthogonal matching pursuit (ROMP, see [12]), the ℓ1-minimization (L1), and the reweighted ℓ1-minimization (RWL1, see [6]). There are many greedy algorithms available in the literature, see e.g. [14], [15], [16], and [9], but we find the orthogonal greedy algorithm of [13] more efficient due to two of its features: one is to select multiple columns from A during each greedy iteration and the other... |

6 | Fast implementation of orthogonal greedy algorithm for tight wavelet frames
- Petukhov
Citation Context ...e output of the algorithm is not merely an approximation, but is in fact exact. Finally, we compare in Section 5 our ℓq-algorithm with four existing methods: the orthogonal greedy algorithm, see e.g. [13], the regularized orthogonal matching pursuit, see [12], the ℓ1-minimization, and the reweighted ℓ1-minimization, see [6]. The last two, as well as our ℓq-algorithm, use the ℓ1-magic software availabl... |

1 | Weak greedy algorithms, Adv
- Temlyakov
Citation Context ...ized orthogonal matching pursuit (ROMP, see [12]), the ℓ1-minimization (L1), and the reweighted ℓ1-minimization (RWL1, see [6]). There are many greedy algorithms available in the literature, see e.g. [14], [15], [16], and [9], but we find the orthogonal greedy algorithm of [13] more efficient due to two of its features: one is to select multiple columns from A during each greedy iteration and ... |