
## Stable signal recovery from incomplete and inaccurate measurements (2006)

Venue: Comm. Pure Appl. Math.

Citations: 1389 (38 self)

### Citations

7445 | Convex Optimization
- Boyd, Vandenberghe
- 2004
Citation Context: …be recovered by solving (TV) (again with ε = 0). Figures 3.2(b) and (c) and the fourth row of Table 3.3 show the (TV) recovery results. The reconstructions have smaller error and do not contain visually displeasing artifacts. 4 Discussion: The convex programs (P2) and (TV) are simple instances of a class of problems known as second-order cone programs (SOCPs). As an example, one can recast (TV) as (4.1) min ∑_{i,j} u_{i,j} subject to −u_{i,j} ≤ ‖G_{i,j}x‖2 ≤ u_{i,j}, ‖Ax − y‖2 ≤ ε, where G_{i,j}x = (x_{i+1,j} − x_{i,j}, x_{i,j+1} − x_{i,j}) [15]. SOCPs can be solved efficiently by interior-point methods [1], and hence our approach is computationally tractable. From a certain viewpoint, recovering via (P2) is using a priori information about the nature of the underlying image, i.e., that it is sparse in some known orthobasis, to overcome the shortage of data. In practice, we could of course use far more sophisticated models to perform the recovery. Obvious extensions include looking for signals that are sparse in overcomplete wavelet or curvelet bases, or for images that have certain geometrical structure. The numerical experiments in Section 3 show how cha…

3609 | Compressed sensing - Donoho - 2006 |

2717 | Atomic decomposition by basis pursuit
- Chen, Donoho, et al.
- 1999
Citation Context: …To be broadly applicable, our recovery procedure must be stable: small changes in the observations should result in small changes in the recovery. This wish, however, may be quite hopeless. How can we possibly hope to recover our signal when not only is the available information severely incomplete, but the few available observations are also inaccurate? Consider nevertheless (as in [12], for example) the convex program searching, among all signals consistent with the data y, for that with minimum ℓ1-norm: (P2) min ‖x‖1 subject to ‖Ax − y‖2 ≤ ε. ((P1) can even be recast as a linear program [6].) The first result of this paper shows that, contrary to the belief expressed above, solving (P2) will recover an unknown sparse object with an error at most proportional to the noise level. Our condition for stable recovery again involves the restricted isometry constants. THEOREM 1.1: Let S be such that δ3S + 3δ4S < 2. Then for any signal x0 supported on T0 with |T0| ≤ S and any perturbation e with ‖e‖2 ≤ ε, the solution x to (P2) obeys (1.3) ‖x − x0‖2 ≤ CS · ε, where the constant CS depends only on δ4S. For reasonable values of δ4S, CS is well behaved; for examp…
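The excerpt notes that the equality-constrained problem (P1), min ‖x‖1 subject to Ax = y, can be recast as a linear program. A minimal sketch of that recast using `scipy.optimize.linprog` and the standard variable split x = u − v with u, v ≥ 0; the matrix sizes and the 1-sparse demo signal are illustrative choices, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit_lp(A, y):
    """Solve (P1): min ||x||_1 subject to A x = y, via the standard LP recast.

    Split x = u - v with u, v >= 0; at the optimum sum(u + v) = ||x||_1,
    so the problem becomes a linear program in the stacked variable (u, v).
    """
    n, m = A.shape
    c = np.ones(2 * m)                     # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])              # A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * m))
    u, v = res.x[:m], res.x[m:]
    return u - v

# Small demo: recover a 1-sparse vector from a few random measurements.
rng = np.random.default_rng(0)
m, n = 16, 6
A = rng.standard_normal((n, m)) / np.sqrt(n)
x0 = np.zeros(m)
x0[3] = 2.0
y = A @ x0
xhat = basis_pursuit_lp(A, y)
```

Any optimal solution must be feasible and have ℓ1-norm no larger than that of x0 (which is itself feasible), so those two properties are safe sanity checks even when exact recovery is not guaranteed.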

2621 | Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
- 2006
Citation Context: …observations. As a second instance, suppose one observes few Fourier samples of x0; then stable recovery occurs for almost any set of n coefficients provided that the number of nonzeros is of the order of n/(log m)^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals. 1 Introduction, 1.1 Exact Recovery of Sparse Signals: Recent papers [2, 3, 4, 5, 10] have developed a series of powerful results about the exact recovery of a finite signal x0 ∈ Rm from a very limited number of observations. As a representative result from this literature, consider the problem of recovering an unknown sparse signal x0(t) ∈ Rm, i.e., a signal x0 whose support T0 = {t : x0(t) ≠ 0} is assumed to have small cardinality. All we know about x0 are n linear measurements of the form yk = 〈x0, ak〉, k = 1, …, n, or y…

1505 | Near optimal signal recovery from random projections: Universal encoding strategies? - Candès, Tao - 2006 |

1398 | Decoding by linear programming
- Candès, Tao
Citation Context: …observations. As a second instance, suppose one observes few Fourier samples of x0; then stable recovery occurs for almost any set of n coefficients provided that the number of nonzeros is of the order of n/(log m)^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals. 1 Introduction, 1.1 Exact Recovery of Sparse Signals: Recent papers [2, 3, 4, 5, 10] have developed a series of powerful results about the exact recovery of a finite signal x0 ∈ Rm from a very limited number of observations. As a representative result from this literature, consider the problem of recovering an unknown sparse signal x0(t) ∈ Rm, i.e., a signal x0 whose support T0 = {t : x0(t) ≠ 0} is assumed to have small cardinality. All we know about x0 are n linear measurements of the form yk = 〈x0, ak〉, k = 1, …, n, or y…

630 | Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization
- Donoho, Elad
- 2003
Citation Context: …om the orthonormal matrix U = ΦΨ∗. The recovery condition then depends on the mutual coherence µ between the measurement basis Φ and the sparsity basis Ψ, which measures the similarity between Φ and Ψ: µ(Φ, Ψ) = √m max |〈φk, ψj〉|, φk ∈ Φ, ψj ∈ Ψ. 1.4 Prior Work and Innovations: The problem of recovering a sparse vector by minimizing ℓ1 under linear equality constraints has recently received much attention, mostly in the context of basis pursuit, where the goal is to uncover sparse signal decompositions in overcomplete dictionaries. We refer the reader to [11, 13] and the references therein for a full discussion. We would especially like to note two works by Donoho, Elad, and Temlyakov [12] and Tropp [18] that also study the recovery of sparse signals from noisy observations by solving (P2) (and other closely related optimization programs), and give conditions for stable recovery. In [12], the sparsity constraint on the underlying signal x0 depends on the magnitude of the maximum entry of the Gram matrix, M(A) = max_{i,j: i ≠ j} |(A∗A)_{i,j}|. Stable recovery occurs when the number of nonzeros is at most (M^−1 + 1)/4. For instance, when A is a Fourier ense…
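The mutual coherence defined in the excerpt is straightforward to compute numerically. A small numpy sketch (storing basis vectors as matrix rows is an assumed convention); for the spike basis against the Fourier basis the coherence attains its minimum value 1, and a basis against itself attains the maximum √m:

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """mu(Phi, Psi) = sqrt(m) * max_{k,j} |<phi_k, psi_j>|,
    where the rows of Phi and Psi are the basis vectors."""
    m = Phi.shape[0]
    return np.sqrt(m) * np.abs(Phi @ Psi.conj().T).max()

m = 16
Phi = np.eye(m)                            # spike (identity) basis
Psi = np.fft.fft(np.eye(m)) / np.sqrt(m)   # unitary DFT basis
mu_incoherent = mutual_coherence(Phi, Psi) # spikes vs. sinusoids: maximally incoherent
mu_coherent = mutual_coherence(Psi, Psi)   # a basis against itself: maximally coherent
```

Every entry of the unitary DFT matrix has magnitude 1/√m, which is exactly why the spike/Fourier pair sits at the incoherent extreme.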

580 | Uncertainty principles and ideal atomic decomposition
- Donoho, Huo
- 2001
Citation Context: …om the orthonormal matrix U = ΦΨ∗. The recovery condition then depends on the mutual coherence µ between the measurement basis Φ and the sparsity basis Ψ, which measures the similarity between Φ and Ψ: µ(Φ, Ψ) = √m max |〈φk, ψj〉|, φk ∈ Φ, ψj ∈ Ψ. 1.4 Prior Work and Innovations: The problem of recovering a sparse vector by minimizing ℓ1 under linear equality constraints has recently received much attention, mostly in the context of basis pursuit, where the goal is to uncover sparse signal decompositions in overcomplete dictionaries. We refer the reader to [11, 13] and the references therein for a full discussion. We would especially like to note two works by Donoho, Elad, and Temlyakov [12] and Tropp [18] that also study the recovery of sparse signals from noisy observations by solving (P2) (and other closely related optimization programs), and give conditions for stable recovery. In [12], the sparsity constraint on the underlying signal x0 depends on the magnitude of the maximum entry of the Gram matrix, M(A) = max_{i,j: i ≠ j} |(A∗A)_{i,j}|. Stable recovery occurs when the number of nonzeros is at most (M^−1 + 1)/4. For instance, when A is a Fourier ense…

478 | Just relax: Convex programming methods for identifying sparse signals in noise - Tropp |

460 | Stable recovery of sparse overcomplete representations in the presence of noise
- Donoho, Elad, et al.
Citation Context: …not assume that Ax0 is known with arbitrary precision. More appropriately, we will assume instead that we are given "noisy" data y = Ax0 + e, where e is some unknown perturbation bounded by a known amount ‖e‖2 ≤ ε. To be broadly applicable, our recovery procedure must be stable: small changes in the observations should result in small changes in the recovery. This wish, however, may be quite hopeless. How can we possibly hope to recover our signal when not only is the available information severely incomplete, but the few available observations are also inaccurate? Consider nevertheless (as in [12], for example) the convex program searching, among all signals consistent with the data y, for that with minimum ℓ1-norm: (P2) min ‖x‖1 subject to ‖Ax − y‖2 ≤ ε. ((P1) can even be recast as a linear program [6].) The first result of this paper shows that, contrary to the belief expressed above, solving (P2) will recover an unknown sparse object with an error at most proportional to the noise level. Our condition for stable recovery again involves the restricted isometry constants. THEOREM 1.1: Let S be such that δ3S + 3δ4S < 2. Then for any signal x0 supported on T0 w…

372 | Image compression through wavelet transform coding
- DeVore, Jawerth, et al.
- 1992
Citation Context: …overy error is dominated by the approximation error, the second term on the right-hand side of (1.4). As a reference, the 50-term nonlinear approximation error of these compressible signals is around 0.47; at low signal-to-noise ratios our recovery error is about 1.5 times this quantity. As the noise power gets large, the recovery error becomes less than ε, just as in the sparse case. Finally, we apply our recovery procedure to realistic imagery. Photograph-like images, such as the 256 × 256 pixel Boats image shown in Figure 3.2(a), have wavelet coefficient sequences that are compressible (see [7]). The image is a 65,536-dimensional vector, making the standard Gaussian ensemble too unwieldy. Instead, we make 25,000 measurements of the image using a scrambled real Fourier ensemble; i.e., the test functions ak(t) are real-valued sines and cosines (with randomly selected frequencies) that are temporally scrambled by randomly permuting the m time points. This ensemble is obtained from the (real-valued) Fourier ensemble by a random permutation of the columns. For our purposes here, the test functions behave like a Gaussian ensemble in the sense that from n measurements, one can recover sig…
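The scrambled real Fourier ensemble described above can be sketched as: build an orthonormal real Fourier basis (sines and cosines), randomly permute its columns (the time points), and keep n randomly chosen rows. This is a plausible construction consistent with the excerpt's description, not the authors' exact code; it assumes m is even:

```python
import numpy as np

def scrambled_real_fourier(m, n, rng):
    """n x m measurement ensemble: n random rows of an orthonormal real
    Fourier basis whose m time points have been randomly permuted.
    Assumes m is even."""
    t = np.arange(m)
    rows = [np.ones(m) / np.sqrt(m)]                        # DC component
    for k in range(1, m // 2):
        rows.append(np.sqrt(2.0 / m) * np.cos(2 * np.pi * k * t / m))
        rows.append(np.sqrt(2.0 / m) * np.sin(2 * np.pi * k * t / m))
    rows.append(((-1.0) ** t) / np.sqrt(m))                 # Nyquist frequency
    F = np.array(rows)                                      # m x m orthonormal
    F = F[:, rng.permutation(m)]                            # scramble time points
    return F[rng.choice(m, size=n, replace=False), :]       # keep n random rows

rng = np.random.default_rng(1)
A = scrambled_real_fourier(64, 25, rng)
```

Permuting columns of an orthonormal matrix preserves orthonormality, so the selected rows remain orthonormal (A Aᵀ = I).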

181 | Quantitative robust uncertainty principles and optimally sparse decompositions
- Candes, Romberg
- 2006
Citation Context: …observations. As a second instance, suppose one observes few Fourier samples of x0; then stable recovery occurs for almost any set of n coefficients provided that the number of nonzeros is of the order of n/(log m)^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals. 1 Introduction, 1.1 Exact Recovery of Sparse Signals: Recent papers [2, 3, 4, 5, 10] have developed a series of powerful results about the exact recovery of a finite signal x0 ∈ Rm from a very limited number of observations. As a representative result from this literature, consider the problem of recovering an unknown sparse signal x0(t) ∈ Rm, i.e., a signal x0 whose support T0 = {t : x0(t) ≠ 0} is assumed to have small cardinality. All we know about x0 are n linear measurements of the form yk = 〈x0, ak〉, k = 1, …, n, or y…

58 | Improved time bounds for near-optimal sparse Fourier representations
- Gilbert, Muthukrishnan, et al.
- 2005
Citation Context: …Gaussian matrix with n proportional to m, with unspecified constants for both the support size and that appearing in (1.3). Our main claim is on a very different level since it is (1) deterministic (it can of course be specialized to random matrices) and (2) widely applicable, since it extends to any matrix obeying the condition δ3S + 3δ4S < 2. In addition, the argument underlying Theorem 1.1 is short and simple, giving precise and sharper numerical values. Finally, we would like to point out connections with fascinating ongoing work on fast randomized algorithms for sparse Fourier transforms [14, 19]. Suppose x0 is a fixed vector with |T0| nonzero terms, for example. Then [14] shows that it is possible to randomly sample the frequency domain |T0| poly(log m) times (poly(log m) denotes a polynomial term in log m) and to reconstruct x0 from this frequency data with positive probability. We do not know whether these algorithms are stable in the sense described in this paper or whether they can be modified to be universal; here, an algorithm is said to be universal if it reconstructs exactly all signals of small support. 2 Proofs, 2.1 Proof of Theorem 1.1: Sparse Ca…

58 | Condition numbers of random matrices
- Szarek
- 1991
Citation Context: …wider range of values of S. 1.3 Examples: It is of course of interest to know which matrices obey the uniform uncertainty principle with good isometry constants. Using tools from random matrix theory, [3, 5, 10] give several examples of matrices such that (1.2) holds for S on the order of n to within log factors. Examples include (proofs and additional discussion can be found in [5]): (1) Random matrices with i.i.d. entries. Suppose the entries of A are independent and identically distributed (i.i.d.) Gaussian random variables with mean zero and variance 1/n; then [5, 10, 17] show that the condition for Theorem 1.1 holds with overwhelming probability when S ≤ C · n/log(m/n). In fact, [4] gives numerical values for the constant C as a function of the ratio n/m. The same conclusion applies to binary matrices with independent entries taking values ±1/√n with equal probability. (2) Fourier ensemble. Suppose now that A is obtained by selecting n rows from the m × m discrete Fourier transform matrix and renormalizing the columns so that they are unit-normed. If the rows are selected at random, the condition for Theorem 1.1 holds with overwhelming probability for S ≤ C…
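Example (1) above is easy to probe empirically: draw a Gaussian matrix with i.i.d. N(0, 1/n) entries and measure how far ‖Ax‖² strays from ‖x‖² on random S-sparse vectors. A numpy sketch (the dimensions are illustrative; note this spot check does not certify the restricted isometry property, which quantifies over all S-sparse supports simultaneously):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, S = 256, 512, 10

# Gaussian ensemble: i.i.d. entries with mean 0 and variance 1/n.
A = rng.standard_normal((n, m)) / np.sqrt(n)

# Empirically estimate the worst deviation of ||Ax||^2 / ||x||^2 from 1
# over random S-sparse test vectors.
ratios = []
for _ in range(200):
    x = np.zeros(m)
    idx = rng.choice(m, size=S, replace=False)
    x[idx] = rng.standard_normal(S)
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)
delta_hat = max(1.0 - min(ratios), max(ratios) - 1.0)
```

With S much smaller than n the observed ratios concentrate near 1, consistent with the near-isometry behavior the excerpt describes.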

34 | Nonlinear total variation based noise removal algorithms
- Rudin, Osher, et al.
- 1992
Citation Context: …TABLE 3.3. Image recovery results. Measurements of the Boats image were corrupted in two different ways: by adding white noise (left column) with σ = 5 · 10^−4 and by rounding off to one digit (right column). In each case, the image was recovered in two different ways: by solving (P′2) (third row) and solving (TV) (fourth row). The (TV) images are shown in Figure 3.2. [Table values, white noise / round-off: ‖e‖2 = 0.0789 / 0.0824; 0.0798 / 0.0827; ‖α − α0‖2 = 0.1303 / 0.1323; ‖αTV − α0‖2 = 0.0837 / 0.0843.] …where ‖x‖TV = ∑_{i,j} √((x_{i+1,j} − x_{i,j})² + (x_{i,j+1} − x_{i,j})²) = ∑_{i,j} |(∇x)_{i,j}| is the total variation [16] of the image x: the sum of the magnitudes of the (discretized) gradient. By substituting (TV) for (P′2), we are essentially changing our model for photograph-like images. Instead of looking for an image with a sparse wavelet transform that explains the observations, program (TV) searches for an image with a sparse gradient (i.e., without spurious high-frequency oscillations). In fact, it is shown in [3] that just as signals which are exactly sparse can be recovered perfectly from a small number of measurements by solving (P2) with ε = 0, signals with gradients that are exactly sparse can be r…
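The discrete total variation defined in the excerpt can be sketched in a few lines of numpy; treating differences past the last row and column as zero is an assumed boundary convention:

```python
import numpy as np

def total_variation(x):
    """Isotropic discrete TV: sum over pixels of the magnitude of the
    forward-difference gradient, with zero padding at the lower/right edge."""
    gx = np.zeros_like(x, dtype=float)
    gy = np.zeros_like(x, dtype=float)
    gx[:-1, :] = x[1:, :] - x[:-1, :]      # x_{i+1,j} - x_{i,j}
    gy[:, :-1] = x[:, 1:] - x[:, :-1]      # x_{i,j+1} - x_{i,j}
    return np.sqrt(gx ** 2 + gy ** 2).sum()
```

A constant image has zero TV, and a single unit-height vertical edge in a 2 × 2 image contributes one unit of TV per row, which makes small cases easy to check by hand.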

17 | For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution - Donoho - 2006 |

14 | Theoretical and experimental analysis of a randomized algorithm for sparse Fourier transform analysis - Zou, Gilbert, et al. - 2005 |

7 | Second-order cone programming based methods for total variation image restoration
- Goldfarb, Yin
- 2004
Citation Context: …) with ε = 0, signals with gradients that are exactly sparse can be recovered by solving (TV) (again with ε = 0). Figures 3.2(b) and (c) and the fourth row of Table 3.3 show the (TV) recovery results. The reconstructions have smaller error and do not contain visually displeasing artifacts. 4 Discussion: The convex programs (P2) and (TV) are simple instances of a class of problems known as second-order cone programs (SOCPs). As an example, one can recast (TV) as (4.1) min ∑_{i,j} u_{i,j} subject to −u_{i,j} ≤ ‖G_{i,j}x‖2 ≤ u_{i,j}, ‖Ax − y‖2 ≤ ε, where G_{i,j}x = (x_{i+1,j} − x_{i,j}, x_{i,j+1} − x_{i,j}) [15]. SOCPs can be solved efficiently by interior-point methods [1], and hence our approach is computationally tractable. From a certain viewpoint, recovering via (P2) is using a priori information about the nature of the underlying image, i.e., that it is sparse in some known orthobasis, to overcome the shortage of data. In practice, we could of course use far more sophisticated models to perform the recovery. Obvious extensions include looking for signals that are sparse in overcomplete wavelet or curvelet bases, or for images that have certain geometrical…