## Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit, submitted (2007)

Citations: 111 (6 self)

### BibTeX

@TECHREPORT{Needell07uniformuncertainty,
  author = {Deanna Needell and Roman Vershynin},
  title  = {Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit, submitted},
  year   = {2007}
}

### Abstract

This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements: L1-minimization methods and iterative methods (Matching Pursuits). We find a simple regularized version of Orthogonal Matching Pursuit (ROMP) which has the advantages of both approaches: the speed and transparency of OMP and the strong uniform guarantees of L1-minimization. Our algorithm ROMP reconstructs a sparse signal in a number of iterations linear in the sparsity, and the reconstruction is exact provided the linear measurements satisfy the Uniform Uncertainty Principle.
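For readers who want the algorithmic idea at a glance, here is a minimal numerical sketch of the ROMP iteration summarized in the abstract: correlate the residual with the columns, keep the n largest coordinates, select among them a subset of comparable magnitudes with maximal energy (the "regularization"), and re-solve by least squares. The function name, parameter choices, and the toy recovery example are illustrative assumptions, not the authors' code.

```python
import numpy as np

def romp(Phi, x, n, tol=1e-10):
    """Sketch of Regularized Orthogonal Matching Pursuit (ROMP).

    Phi : (N, d) measurement matrix, x = Phi @ v the measurements,
    n : sparsity level.  Each iteration selects a *set* of coordinates
    of the observation vector u = Phi.T @ r whose magnitudes lie within
    a factor 2 of each other and whose energy is maximal.
    """
    d = Phi.shape[1]
    support = set()
    v_hat = np.zeros(d)
    r = x.copy()
    for _ in range(n):                      # at most n iterations
        u = Phi.T @ r
        idx = np.argsort(-np.abs(u))[:n]    # n largest coordinates
        mags = np.abs(u[idx])
        best, best_energy = idx[:1], -1.0
        for i in range(len(idx)):           # maximal-energy comparable subset
            j = i + 1
            while j < len(idx) and mags[i] <= 2.0 * mags[j]:
                j += 1
            energy = float(np.sum(mags[i:j] ** 2))
            if energy > best_energy:
                best, best_energy = idx[i:j], energy
        support |= {int(k) for k in best}
        S = sorted(support)
        v_hat = np.zeros(d)
        v_hat[S] = np.linalg.lstsq(Phi[:, S], x, rcond=None)[0]
        r = x - Phi @ v_hat                 # residual after projection
        if np.linalg.norm(r) < tol:         # exact-recovery stopping rule
            break
    return v_hat

# Toy recovery check with a Gaussian measurement matrix.
rng = np.random.default_rng(0)
N, d, n = 80, 120, 4
Phi = rng.standard_normal((N, d)) / np.sqrt(N)
v = np.zeros(d)
v[[5, 17, 42, 99]] = [3.0, -2.0, 1.5, 4.0]
v_hat = romp(Phi, Phi @ v, n)
```

With Gaussian measurements well above the sparsity level, the least-squares refit returns the true coefficients exactly once the support has been captured, so the residual vanishes and the loop stops early.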

### Citations

1864 | Compressed sensing
- Donoho
- 2006
Citation Context: ...y applications, one needs to recover a signal v which is not sparse but close to being sparse in some way. Such are, for example, compressible signals, whose coefficients decay at a certain rate (see [7], [4]). To make ROMP work for such signals, one can replace the stopping criterion of exact recovery r = 0 by “repeat n times or until r = 0, whichever occurs first”. Note that we could amend the algo...

1401 | Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
- 2006
Citation Context: ...approximately an orthonormal system. One can interpret the Restricted Isometry Condition as an abstract version of the Uniform Uncertainty Principle in harmonic analysis ([4], see also discussions in [3] and [16]). Theorem 1.2 (Sparse recovery under RIC [5]). Assume that the measurement matrix Φ satisfies the Restricted Isometry Condition with parameters (3n, 0.2). Then every n-sparse vector x can be...

887 | Near-optimal signal recovery from random projections: universal encoding strategies
- Candes, Tao
- 2006
Citation Context: ...set of m columns of Φ forms approximately an orthonormal system. One can interpret the Restricted Isometry Condition as an abstract version of the Uniform Uncertainty Principle in harmonic analysis ([4], see also discussions in [3] and [16]). Theorem 1.2 (Sparse recovery under RIC [5]). Assume that the measurement matrix Φ satisfies the Restricted Isometry Condition with parameters (3n, 0.2). Then e...

703 | Decoding by linear programming
- Candes, Tao
- 2005
Citation Context: ...ssed Sensing pushed forward this program (see survey [1]). A necessary and sufficient condition of exact sparse recovery is that the map Φ be one-to-one on the set of n-sparse vectors. Candès and Tao [5] proved that a stronger quantitative version of this condition guarantees the equivalence of the problems (L0) and (L1). Definition 1.1 (Restricted Isometry Condition). A measurement matrix Φ satisfie...
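The Restricted Isometry Condition invoked in this excerpt says that every small column submatrix of Φ acts as a near-isometry: (1 − δ)‖x‖² ≤ ‖Φx‖² ≤ (1 + δ)‖x‖² for all m-sparse x. As an illustrative aside (not from the paper), the constant δ can be computed by brute force for tiny matrices from the extreme singular values of each column submatrix; the function name is a hypothetical helper.

```python
import numpy as np
from itertools import combinations

def ric_constant(Phi, m):
    """Brute-force restricted isometry constant delta_m of Phi.

    delta_m is the smallest delta such that, for every column subset S
    of size m, the squared singular values of Phi[:, S] lie in
    [1 - delta, 1 + delta].  Exponential cost: toy sizes only.
    """
    d = Phi.shape[1]
    delta = 0.0
    for S in combinations(range(d), m):
        s = np.linalg.svd(Phi[:, list(S)], compute_uv=False)
        delta = max(delta, s[0] ** 2 - 1.0, 1.0 - s[-1] ** 2)
    return delta

# A matrix with orthonormal columns is an exact restricted isometry
# (delta = 0), while repeated columns force delta = 1.
Q = np.linalg.qr(np.random.default_rng(1).standard_normal((8, 4)))[0]
```

Verifying the condition this way is exponential in m, which is why the paper instead establishes it probabilistically for random matrix ensembles.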

688 | Compressive sampling
- Candès
Citation Context: ...y of presentation; similar results hold over complex numbers. In other words, the number of measurements N ≪ d should be almost linear in the sparsity n. Survey [1] contains some of these results; the Compressed Sensing webpage [6] documents progress in this area. The two major algorithmic approaches to sparse recovery are methods based on L1-minimization and it...

324 | Signal recovery from random measurements via orthogonal matching pursuit
- Tropp, Gilbert
- 2007
Citation Context: ... −1 x, where ΦS denotes the measurement matrix Φ restricted to columns indexed by S. A basic iterative algorithm is Orthogonal Matching Pursuit (OMP), popularized and analyzed by Gilbert and Tropp in [21], see [22] for a more general setting. OMP recovers the support of v, one index at a time, in n steps. Under a hypothetical assumption that Φ is an isometry, i.e. the columns of Φ are orthonormal, the...
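For contrast with the regularized variant, plain OMP as described in this excerpt selects a single index per iteration, then re-fits by least squares. A minimal sketch under the same toy-Gaussian assumptions (illustrative names, not Tropp and Gilbert's code):

```python
import numpy as np

def omp(Phi, x, n):
    """Plain Orthogonal Matching Pursuit: one support index per step.

    Greedily picks the column most correlated with the residual, then
    re-fits all selected coefficients by least squares (the 'orthogonal'
    step), so the residual stays orthogonal to the chosen columns.
    """
    d = Phi.shape[1]
    support, r = [], x.copy()
    coef = np.zeros(0)
    for _ in range(n):                      # n steps for an n-sparse target
        u = Phi.T @ r                       # correlations with residual
        k = int(np.argmax(np.abs(u)))
        if k not in support:
            support.append(k)
        coef = np.linalg.lstsq(Phi[:, support], x, rcond=None)[0]
        r = x - Phi[:, support] @ coef
    v_hat = np.zeros(d)
    v_hat[support] = coef
    return v_hat

# Recovery of a 3-sparse vector from 80 Gaussian measurements.
rng = np.random.default_rng(2)
Phi = rng.standard_normal((80, 100)) / np.sqrt(80)
v = np.zeros(100)
v[[7, 33, 71]] = [2.0, -3.0, 1.0]
v_hat = omp(Phi, Phi @ v, 3)
```

The contrast with ROMP is visible in the selection step: OMP commits to one coordinate at a time, which is why its guarantees are non-uniform, whereas ROMP's set-wise regularized selection is what the paper leverages for uniform guarantees under the RIC.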

318 | Stable recovery of sparse overcomplete representations in the presence of noise
- Donoho, Elad, et al.
Citation Context: ...trix is not an isometry, one can still use the notion of coherence in recovery of sparse signals. In that setting, greedy algorithms are used with incoherent dictionaries to recover such signals, see [8], [9], [13]. In our setting, for random matrices one expects the columns to be approximately orthogonal, and the observation vector u = Φ*x to be a good approximation to the original signal v. The b...

174 | Uncertainty principles and signal recovery
- Donoho, Stark
- 1989
Citation Context: ...propose a new iterative method that has advantages of both approaches. 1.1. L1-minimization. This approach to sparse recovery has been advocated over decades by Donoho and his collaborators (see e.g. [10]). The sparse recovery problem can be stated as the problem of finding the sparsest signal v with the given measurements Φv: (L0) min ‖u‖₀ subject to Φu = Φv, where ‖u‖₀ := |supp(u)|. Donoho and his as...

152 | Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time
- Spielman, Teng
- 2001
Citation Context: ...rogram d, but also on certain condition numbers of the program. While for some classes of random matrices the expected running time of linear programming solvers can be bounded (see the discussion in [20] and subsequent work in [23]), estimating condition numbers is hard for specific matrices. For example, there is no result yet showing that the Restricted Isometry Condition implies that the condition...

128 | On sparse reconstruction from Fourier and Gaussian measurements,” submitted for publication
- Rudelson, Vershynin
Citation Context: ...n the confidence level δ and the constants C1, c1, C2 from the definition of the corresponding classes of matrices. Remarks. 1. The first part of this theorem is proved in [17]. The second part is from [19]; a similar estimate with somewhat worse exponents in the logarithms was proved in [4]. See these results for the exact dependence of C on the confidence level δ (although usually δ would be chosen to...

91 | Stable recovery of sparse overcomplete representations in the presence of noise
- Donoho, Elad, Temlyakov
- 2006
Citation Context: ...trix is not an isometry, one can still use the notion of coherence in recovery of sparse signals. In that setting, greedy algorithms are used with incoherent dictionaries to recover such signals, see [8], [9], [13]. In our setting, for random matrices one expects the columns to be approximately orthogonal, and the observation vector u = Φ*x to be a good approximation to the original signal v. The b...

78 | Counting faces of randomly-projected polytopes when the projection radically lowers dimension (arXiv:math/0607364)
- Donoho, Tanner
- 2006

74 | Approximation of functions over redundant dictionaries using coherence
- Gilbert, Muthukrishnan, et al.
- 2003
Citation Context: ...t an isometry, one can still use the notion of coherence in recovery of sparse signals. In that setting, greedy algorithms are used with incoherent dictionaries to recover such signals, see [8], [9], [13]. In our setting, for random matrices one expects the columns to be approximately orthogonal, and the observation vector u = Φ*x to be a good approximation to the original signal v. The biggest coor...

64 | One sketch for all: Fast algorithms for compressed sensing
- Gilbert, Strauss, et al.
- 2007
Citation Context: ...ition, this method works correctly for all sparse signals v. No iterative methods have been known to feature such uniform guarantees, with the exception of Chaining Pursuit [14] and the HHS Algorithm [15] which however only work with specifically designed structured measurement matrices. The Restricted Isometry Condition is a natural abstract deterministic property of a matrix. Although establishing t...

56 | Uniform uncertainty principle for Bernoulli and sub-gaussian ensembles
- Mendelson, Pajor, et al.
Citation Context: ...the constant C depends only on the confidence level δ and the constants C1, c1, C2 from the definition of the corresponding classes of matrices. Remarks. 1. The first part of this theorem is proved in [17]. The second part is from [19]; a similar estimate with somewhat worse exponents in the logarithms was proved in [4]. See these results for the exact dependence of C on the confidence level δ (althoug...

52 | Nonlinear methods of approximation
- Temlyakov

31 | Thresholds for the recovery of sparse solutions via l1 minimization
- Donoho, Tanner
- 2006

18 | Algorithmic linear dimension reduction in the L1 norm for sparse vectors
- Gilbert, Strauss, et al.
- 2006
Citation Context: ...he Restricted Isometry Condition, this method works correctly for all sparse signals v. No iterative methods have been known to feature such uniform guarantees, with the exception of Chaining Pursuit [14] and the HHS Algorithm [15] which however only work with specifically designed structured measurement matrices. The Restricted Isometry Condition is a natural abstract deterministic property of a matr...

18 | Beyond Hirsch conjecture: walks on random polytopes and smoothed complexity of the simplex method
- Vershynin
- 2006
Citation Context: ...n condition numbers of the program. While for some classes of random matrices the expected running time of linear programming solvers can be bounded (see the discussion in [20] and subsequent work in [23]), estimating condition numbers is hard for specific matrices. For example, there is no result yet showing that the Restricted Isometry Condition implies that the condition numbers of the correspondin...

9 | Uncertainty principles and vector quantization
- Lyubarskii, Vershynin
- 2008
Citation Context: ...ately an orthonormal system. One can interpret the Restricted Isometry Condition as an abstract version of the Uniform Uncertainty Principle in harmonic analysis ([4], see also discussions in [3] and [16]). Theorem 1.2 (Sparse recovery under RIC [5]). Assume that the measurement matrix Φ satisfies the Restricted Isometry Condition with parameters (3n, 0.2). Then every n-sparse vector x can be exactly...

4 | On the Lebesgue type inequalities for greedy approximation
- Donoho, Elad, et al.
Citation Context: ...is not an isometry, one can still use the notion of coherence in recovery of sparse signals. In that setting, greedy algorithms are used with incoherent dictionaries to recover such signals, see [8], [9], [13]. In our setting, for random matrices one expects the columns to be approximately orthogonal, and the observation vector u = Φ*x to be a good approximation to the original signal v. The bigges...

4 | On the impossibility of uniform recovery using greedy methods
- Rauhut
Citation Context: ...parse signal v and not for all signals, the algorithm performs correctly with high probability. Rauhut has shown that uniform guarantees for OMP are impossible for natural random measurement matrices [18]. Moreover, OMP’s condition on measurement matrices given in [21] is more restrictive than the Restricted Isometry Condition. In particular, it is not known whether OMP succeeds in the important class...

3 | Compressed Sensing and k-Term Approximation (manuscript)
- Cohen, Dahmen, et al.
- 2007
Citation Context: ...≪ d nonadaptive linear measurements of v, and wish to efficiently recover v from these. The measurements are given as the vector Φv ∈ R^N, where Φ is some N × d measurement matrix. As discussed in [2], exact recovery is possible with just N = 2n. However, recovery using only this property is not numerically feasible; the sparse recovery problem in general is known to be NP-hard. Nevertheless, mass...