## Compressed sensing

### Download Links

- [www.ece.ubc.ca]
- [www.signallake.com]
- [www-stat.stanford.edu]
- [www.cs.jhu.edu]
- DBLP

### Other Repositories/Bibliography

Venue: IEEE Trans. Inform. Theory

Citations: 1760 (18 self)

### BibTeX

@ARTICLE{Donoho_compressedsensing,
  author  = {David L. Donoho},
  title   = {Compressed sensing},
  journal = {IEEE Trans. Inform. Theory},
  volume  = {52},
  number  = {4},
  pages   = {1289--1306},
  year    = {2006}
}

### Abstract

Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓp ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ2 error O(N^{1/2−1/p}). [...]
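As a concrete illustration of the recovery principle the abstract describes, here is a minimal sketch (not the paper's exact algorithm; all dimensions and names are illustrative): a sparse vector is recovered from n ≪ m random Gaussian measurements by ℓ1 minimization, cast as a linear program via the standard split x = (positive/negative parts bounded by an auxiliary variable t).

```python
# Toy compressed sensing demo: recover a k-sparse x0 in R^m from n << m
# random linear measurements y = A x0 by minimizing ||x||_1 subject to Ax = y.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 80, 20, 3                      # signal length, measurements, sparsity

x0 = np.zeros(m)                         # k-sparse ground truth
x0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((n, m)) / np.sqrt(n)   # random sensing matrix
y = A @ x0                                     # the n measurements

# LP formulation: variables z = [x, t]; minimize sum(t) s.t. -t <= x <= t, Ax = y
c = np.concatenate([np.zeros(m), np.ones(m)])
I = np.eye(m)
A_ub = np.block([[I, -I],                #  x - t <= 0
                 [-I, -I]])              # -x - t <= 0
b_ub = np.zeros(2 * m)
A_eq = np.hstack([A, np.zeros((n, m))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * m + [(0, None)] * m)
x_hat = res.x[:m]
print("recovery error:", np.linalg.norm(x_hat - x0))   # small with high probability
```

With n = 20 measurements and k = 3 nonzeros, the ℓ1 program typically recovers x0 exactly, even though the system Ax = y is massively underdetermined.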

### Citations

2064 | A wavelet tour of signal processing - Mallat - 1999
Citation Context: ...where it was mentioned that the wavelet coefficients at level j obey ‖θ^(j)‖_1 ≤ C · B · 2^{−j}, where C depends only on the wavelet used. Here and below we use standard wavelet analysis notations as in [8, 32, 33]. We consider two ways of approximating functions in f. In the classic linear scheme, we fix a 'finest scale' j_1 and measure the resumé coefficients β_{j_1,k} = ⟨f, φ_{j_1,k}⟩, where φ_{j,k} = 2^{j/2} φ(2^j t − k), w...

1685 | Atomic decomposition by basis pursuit - Chen, Donoho, et al. - 2001
Citation Context: ...at some level of generality, we are discussing here the idea of getting sparse solutions to underdetermined systems of equations using ℓ1 methods, which forms part of a now-extensive body of work: [6, 9, 11, 12, 13, 16, 17, 21, 22, 25, 28, 29]. We expect that many of the authors of the just-cited papers will be contributing to this special issue. 1.2 Questions ... Readers may want to ask numerous questions about the result (1.2), starting w...

1328 | Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information - Candès, Romberg, et al.
Citation Context: ...transform coefficients consistent with measured data and having the smallest possible ℓ1 norm. We perform a series of numerical experiments which validate in general terms the basic idea proposed in [14, 3, 5], in the favorable case where the transform coefficients are sparse in the strong sense that the vast majority are zero. We then consider a range of less-favorable cases, in which the object has all c...

1056 | Matching pursuits with time-frequency dictionaries - Mallat, Zhang - 1993

847 | Near-optimal signal recovery from random projections: Universal encoding strategies - Candès, Tao - 2006
Citation Context: ...the idea that interesting relatively concrete families of operators can be developed for compressed sensing applications. In fact, Candès has informed us of some recent results he obtained with Tao [47] indicating that, modulo polylog factors, A2 holds for the uniformly sampled partial Fourier ensemble. This seems a very significant advance. Note Added in Proof: In the months since the paper was writ...

533 | Greed is good: Algorithmic results for sparse approximation - Tropp
Citation Context: ...at some level of generality, we are discussing here the idea of getting sparse solutions to underdetermined systems of equations using ℓ1 methods, which forms part of a now-extensive body of work: [6, 9, 11, 12, 13, 16, 17, 21, 22, 25, 28, 29]. We expect that many of the authors of the just-cited papers will be contributing to this special issue. 1.2 Questions ... Readers may want to ask numerous questions about the result (1.2), starting w...

369 | Interpolation spaces. An introduction - Bergh - 1976
Citation Context: ...nt works for p = 1 as well). The key point will be to apply the p-triangle inequality ‖θ + θ′‖_p^p ≤ ‖θ‖_p^p + ‖θ′‖_p^p, valid for 0 < p < 1; this inequality is well-known in interpolation theory [1] through Peetre and Sparr's work, and is easy to verify directly. Suppose without loss of generality that there is an optimal subspace V_n, which is fixed and given in this proof. As we just saw, Now E...

359 | Uncertainty principles and ideal atomic decompositions - Donoho, Huo - 2001
Citation Context: ...sively underdetermined, ℓ1 minimization and sparse solution coincide—when the result is sufficiently sparse. There is by now an extensive literature exhibiting results on equivalence of ℓ0 and ℓ1 minimization [27]–[34]. In the early literature on this subject, equivalence was found under conditions involving sparsity constraints allowing nonzeros. While it may seem surprising that any results of this kind are ...

344 | For most large underdetermined systems of linear equations the minimal ℓ1-norm is also the sparsest solution - Donoho - 2004
Citation Context: ...at some level of generality, we are discussing here the idea of getting sparse solutions to underdetermined systems of equations using ℓ1 methods, which forms part of a now-extensive body of work: [6, 9, 11, 12, 13, 16, 17, 21, 22, 25, 28, 29]. We expect that many of the authors of the just-cited papers will be contributing to this special issue. 1.2 Questions ... Readers may want to ask numerous questions about the result (1.2), starting w...

326 | The concentration of measure phenomenon - Ledoux - 2001

320 | Sparse approximate solutions to linear systems - Natarajan - 1995

310 | Wavelets and Operators - Meyer - 1992
Citation Context: ...component of the image at scale j, and let (ψ_i^j) denote the orthonormal basis of wavelets at scale j, containing 2^j elements. The corresponding coefficients again obey ‖θ^(j)‖_1 ≤ c · R · 2^{−j/2}, [33]. While in these two examples the ℓ1 constraint appeared, other ℓp constraints can appear naturally as well; see below. For some readers the use of ℓp norms with p < 1 may seem initially strange; ...

305 | Just relax: Convex programming methods for identifying sparse signals in noise - Tropp
Citation Context: ...solution to a problem instance of with obeys The proof requires a stability lemma, showing the stability of minimization under small perturbations as measured in norm. For and stability lemmas, see [33]–[35]; however, note that those lemmas do not suffice for our needs in this proof. Lemma 4.2: Let be a vector in and be the cor...

302 | Stable recovery of sparse overcomplete representations in the presence of noise - Donoho, Elad, et al.
Citation Context: ...y underdetermined, ℓ1 minimization and sparse solution coincide—when the result is sufficiently sparse. There is by now an extensive literature exhibiting results on equivalence of ℓ0 and ℓ1 minimization [27]–[34]. In the early literature on this subject, equivalence was found under conditions involving sparsity constraints allowing nonzeros. While it may seem surprising that any results of this kind are possi...

287 | Curvelets: A surprisingly effective nonadaptive representation of objects with edges - Candès, Donoho - 1999
Citation Context: ...ow consider an example where, and we can apply the extensions to tight frames and to weak- mentioned earlier. Again in the image processing setting, we use the model discussed in Candès and Donoho [39], [40]. Consider the collection of piecewise smooth, with values, first and second partial derivatives bounded by, away from an exceptional set which is a union of curves having first and second der...

285 | The Volume of Convex Bodies and Banach Space Geometry. Cambridge Tracts in Mathematics 94 - Pisier - 1989
Citation Context: ...erator Φ_J for |J| < ρn/log(m) affords a spherical section of the ℓ1^n ball. The basic argument we use originates from work of Milman, Kashin and others [22, 28, 37]; we refine an argument in Pisier [41] and, as in [17], draw inferences that may be novel. We conclude that not only do almost-spherical sections exist, but they are so ubiquitous that every Φ_J with |J| < ρn/log(m) will generate them. Def...

228 | Translation-invariant de-noising - Coifman, Donoho - 1995
Citation Context: ...though the data are not noisy in our examples). To alleviate this phenomenon, we considered the test cases shown earlier, namely Blocks and Bumps, and applied translation-invariant wavelet de-noising [7] to the reconstructed 'noisy' signals. Results are shown in panel (c) of Figure 3 and panels (b) and (d) of Figure 6. At least visually, there is a great deal of improvement. 4 Noise-Aware Reconstruct...

215 | Sparse representations in unions of bases - Gribonval, Nielsen

174 | A generalized uncertainty principle and sparse representation in pairs of bases - Elad, Bruckstein

160 | On sparse representations in arbitrary redundant bases - Fuchs

144 | Data compression and harmonic analysis - Donoho, Vetterli, et al. - 1998
Citation Context: ...|θ_i|^p)^{1/p} ≤ R. (1.1) Such constraints are actually obeyed on natural classes of signals and images; this is the primary reason for the success of standard compression tools based on transform coding [10]. To fix ideas, we mention two simple examples of ℓp constraint. • Bounded Variation model for images. Here image brightness is viewed as an underlying function f(x, y) on the unit square 0 ≤ x, y ≤ ...

143 | Unconditional bases are optimal bases for data compression and for statistical estimation - Donoho
Citation Context: ...l; see below. For some readers the use of ℓp norms with p < 1 may seem initially strange; it is now well-understood that the ℓp norms with such small p are natural mathematical measures of sparsity [11, 13]. As p decreases below 1, more and more sparsity is being required. Also, from this viewpoint, an ℓp constraint based on p = 2 requires no sparsity at all. Note that in each of these examples, we als...

136 | Asymptotic theory of finite-dimensional normed spaces, volume 1200 - Milman, Schechtman - 1986
Citation Context: ...that, with overwhelming probability, every operator Φ_J for |J| < ρn/log(m) affords a spherical section of the ℓ1^n ball. The basic argument we use originates from work of Milman, Kashin and others [22, 28, 37]; we refine an argument in Pisier [41] and, as in [17], draw inferences that may be novel. We conclude that not only do almost-spherical sections exist, but they are so ubiquitous that every Φ_J with |J...

120 | n-Widths in Approximation Theory - Pinkus - 1985
Citation Context: ...owed by Garnaev and Gluskin [19], implicitly considered the random signs ensemble in the dual problem of Kolmogorov n-widths. Owing to a duality relationship between Gel'fand and Kolmogorov n-widths ([26]), and a relationship between Gel'fand n-widths and compressed sensing [14, 27], these matrices are suitable for use in the case p = 1. • Donoho [12, 13, 14] considered the uniform Spherical ensemble. ...

117 | Empirical Processes: theory and applications - Pollard - 1990
Citation Context: ...is event, ‖y − y_0‖_2 ≤ δ · ‖y_0‖_2. (7.8) Lemma 7.7 will be proven in a section of its own. We now show that it implies Lemma 7.6. We recall a standard implication of so-called Vapnik–Chervonenkis theory [42]: #Σ_J ≤ C(n, 0) + C(n, 1) + · · · + C(n, |J|). Notice that if |J| < ρn/log(m), then log(#Σ_J) ≤ ρ · n + log(n), while also log #{J : |J| < ρn/log(m), J ⊂ {1, . . . , m}} ≤ ρn. Hence, the total nu...

90 | Just relax: Convex programming methods for subset selection and sparse approximation - Tropp - 2006

89 | Wavelab and reproducible research - Buckheit, Donoho - 1995
Citation Context: ...needed. This seems much smaller than the n = ck log(m) one might expect based on (1.2). As a more intuitive representation of this phenomenon, we considered the object Blocks from the Wavelab package [1]. As Figure 2 shows, the object is piecewise constant, and its Haar wavelet transform has relatively few nonzero coefficients. In fact, Blocks has 77 nonzero coefficients for signal length m = 2048. F...
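The sparsity claim in the context above (a piecewise-constant signal of length m = 2048 with very few nonzero Haar coefficients) is easy to reproduce in spirit. The sketch below builds a toy piecewise-constant signal (a stand-in for Wavelab's Blocks, not the actual object) and counts the nonzeros of its orthonormal Haar transform: each step discontinuity touches at most one wavelet per scale, so at most log2(m) coefficients per jump are nonzero.

```python
# Count nonzero Haar wavelet coefficients of a toy piecewise-constant signal.
import numpy as np

def haar_transform(x):
    # Orthonormal Haar transform of a length-2^J signal: repeatedly split into
    # pairwise averages (scaling part) and pairwise differences (details),
    # each scaled by 1/sqrt(2) so the overall transform preserves the l2 norm.
    x = np.asarray(x, dtype=float).copy()
    details = []
    while x.size > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        diff = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        details.append(diff)
        x = avg
    return np.concatenate([x] + details[::-1])  # coarse-to-fine ordering

m = 2048
t = np.arange(m) / m
rng = np.random.default_rng(1)
sig = np.zeros(m)
for jump in [0.1, 0.25, 0.4, 0.65, 0.78, 0.9]:   # six step discontinuities
    sig += rng.standard_normal() * (t >= jump)

theta = haar_transform(sig)
nnz = int(np.sum(np.abs(theta) > 1e-10))
print(nnz, "nonzero Haar coefficients out of", m)
```

With six jumps and log2(2048) = 11 scales, at most about 6 × 11 detail coefficients (plus the coarse coefficient) are nonzero, mirroring the 77-out-of-2048 count reported for Blocks.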

80 | Near-optimal sparse Fourier representations via sampling - Gilbert, Guha, et al.
Citation Context: ...ossible, there would be implications in a range of different fields, extending from faster data acquisition, to higher effective sampling rates, and lower communications burden. Several recent papers [20, 3, 14, 5] have shown that, under various assumptions, it may be possible to directly acquire a form of compressed representation. In this paper, we put such ideas to the test by making a series of empirical s...

66 | The widths of a Euclidean ball - Garnaev, Gluskin - 1984
Citation Context: ...d as knowing the N biggest transform coefficients. Examples were sketched for model problems caricaturing imaging and spectroscopy. In related prior work, classical literature in approximation theory [23, 19, 27] (developing the theory of Gel'fand n-widths) deals with closely related problems from an even more abstract viewpoint; see the discussion in [14]. More recently, Gilbert et al. [20] considered n-by...

57 | A general theory of optimal algorithms - Traub, Woźniakowski - 1980
Citation Context: ...ts which will allow faithful reconstruction of x. Such questions have been discussed (for other types of assumptions about x) under the names of Optimal Recovery [39] and Information-Based Complexity [46]; we now adopt their viewpoint, and partially adopt their notation, without making a special effort to be really orthodox. We use 'OR/IBC' as a generic label for work taking place in those fields, adm...

56 | Highly sparse representations from dictionaries are unique and independent of the sparseness measure - Gribonval, Nielsen - 2003

53 | Sparse components of images and optimal atomic decomposition - Donoho
Citation Context: ...l; see below. For some readers the use of ℓp norms with p < 1 may seem initially strange; it is now well-understood that the ℓp norms with such small p are natural mathematical measures of sparsity [11, 13]. As p decreases below 1, more and more sparsity is being required. Also, from this viewpoint, an ℓp constraint based on p = 2 requires no sparsity at all. Note that in each of these examples, we als...

38 | Diameters of certain finite-dimensional sets in classes of smooth functions - Kashin - 1977
Citation Context: ...d as knowing the N biggest transform coefficients. Examples were sketched for model problems caricaturing imaging and spectroscopy. In related prior work, classical literature in approximation theory [23, 19, 27] (developing the theory of Gel'fand n-widths) deals with closely related problems from an even more abstract viewpoint; see the discussion in [14]. More recently, Gilbert et al. [20] considered n-by...

37 | A survey of optimal recovery - Micchelli, Rivlin - 1977
Citation Context: ...that it is not necessary to know the radius R of the ball X_{p,m}(R); the element of least norm will always lie inside it. Calling the solution x̂_{p,n}, one can show, adapting standard OR/IBC arguments in [36, 46, 40]: Lemma 3.1. sup_{X_{p,m}(R)} ‖x − x̂_{p,n}‖_2 ≤ 2 · E_n(X_{p,m}(R)), 0 < p ≤ 1. (3.1) In short, the least-norm method is within a factor two of optimal. Proof. We first justify our claims for optimality of the cent...

29 | Entropy numbers of diagonal operators between symmetric Banach spaces - Schütt
Citation Context: ...lity at most 2^n. From Carl's Theorem - see the exposition in Pisier's book - there is a constant c > 0 so that the Gel'fand n-widths dominate the entropy numbers: d^n(b_{p,m}) ≥ c·e_n(b_{p,m}). Secondly, the entropy numbers obey [43, 30] e_n(b_{p,m}) ≍ (n/log(m/n))^{1/2−1/p}. At the same time the combination of Theorem 7 and Theorem 6 shows that d^n(b_{p,m}) ≤ c(n/log(m))^{1/2−1/p}. Applying now the Feasible-Point ...

28 | Nonlinear approximation and the space BV(R²) - Cohen, DeVore, et al. - 1999
Citation Context: ...ed Variation. We consider now the model with images of Bounded Variation from the introduction. Let F(B) denote the class of functions f(x) with domain(x) ∈ [0, 1]², having total variation at most B [9], and bounded in absolute value by ‖f‖_∞ ≤ B as well. In the introduction, it was mentioned that the wavelet coefficients at level j obey ‖θ^(j)‖_1 ≤ C·B where C depends only on the wavelet used. It i...

23 | Unconditional bases and bit-level compression - Donoho - 1996
Citation Context: ...seems even more surprising when we note that for objects x ∈ X_{p,m}(R), the transform representation is the optimal one: no other representation can do as well at characterising x by a few coefficients [11, 12]. Surely then, one imagines, the sampling kernels ξ_i underlying the optimal information operator must be simply measuring individual transform coefficients? Actually, no: the information operator is m...

20 | Optimally Sparse Representation from Overcomplete Dictionaries via ℓ1-norm minimization - Donoho, Elad - 2002

17 | Entropy numbers, s-numbers, and eigenvalue problems - Carl - 1981
Citation Context: ...wer bound, we consider the entropy numbers, defined as follows. Let be a set and let be the smallest number such that an -net for can be built using a net of cardinality at most . From Carl's theorem [18]—see the exposition in Pisier's book [19]—there is a constant so that the Gel'fand -widths dominate the entropy numbers. Secondly, the entropy numbers obey [20], [21]. At the same time, the combination...

16 | On the power of Adaption - Novak - 1996
Citation Context: ...≠ 1 > 0. We conclude that, as was to be proven. 3.4 Proof of Theorem 2: E_n(b_{p,m}) ≍ (n/log(m/n))^{1/2−1/p}. Now is an opportune time to prove Theorem 2. We note that in the case of 1 ≤ p, this is known [38]. The argument is the same for 0 < p < 1, and we simply repeat it. Suppose that x = 0, and consider the adaptively-constructed subspace according to whatever algorithm is in force. When the algorithm ...

10 | A lower estimate for entropy numbers - Kühn - 2001
Citation Context: ...lity at most 2^n. From Carl's Theorem - see the exposition in Pisier's book - there is a constant c > 0 so that the Gel'fand n-widths dominate the entropy numbers: d^n(b_{p,m}) ≥ c·e_n(b_{p,m}). Secondly, the entropy numbers obey [43, 30] e_n(b_{p,m}) ≍ (n/log(m/n))^{1/2−1/p}. At the same time the combination of Theorem 7 and Theorem 6 shows that d^n(b_{p,m}) ≤ c(n/log(m))^{1/2−1/p}. Applying now the Feasible-Point ...

9 | Spaces with large distance to ℓ^n_∞ and random matrices - Szarek - 1990
Citation Context: ...n_0, uniformly in |J| ≤ ρ_1 n, P(Ω^c_{n,J}) ≤ exp(−nβ_1). This was derived in [17] and in [18], using the concentration of measure property of singular values of random matrices, e.g., see Szarek's work [44, 45]. Second, we note that the event of main interest is representable as: Ω_{n,m,ρ,η} = ∩_{|J|≤ρn/log(m)} Ω_{n,J}. Thus we need to estimate the probability of occurrence of every Ω_{n,J} simultaneously. Third, we c...

8 | For most underdetermined systems of linear equations, the minimal ℓ1-norm near-solution approximates the sparsest near-solution - Donoho - 2004

7 | Interpolation of Normed Abelian Groups - Peetre, Sparr - 1972
Citation Context: ...st take a small detour, examining the relation between and the extreme case of the spaces. Let us define subject to where is just the number of nonzeros in . Again, since the work of Peetre and Sparr [16], the importance of and the relation with for is well understood; see [17] for more detail. Ordinarily, solving such a problem involving the norm requires combinatorial optimization; one enumerates al...

6 | New tight frames of curvelets and optimal representations of objects with C² singularities - Candès, Donoho
Citation Context: ...sider an example where, and we can apply the extensions to tight frames and to weak- mentioned earlier. Again in the image processing setting, we use the model discussed in Candès and Donoho [39], [40]. Consider the collection of piecewise smooth, with values, first and second partial derivatives bounded by, away from an exceptional set which is a union of curves having first and second derivativ...
