## SPARSE SIGNAL RECONSTRUCTION FROM NOISY COMPRESSIVE MEASUREMENTS USING CROSS VALIDATION

Citations: 18 (1 self)

### BibTeX

```bibtex
@misc{Boufounos_sparsesignal,
  author = {Petros Boufounos and Marco F. Duarte and Richard G. Baraniuk},
  title  = {Sparse Signal Reconstruction from Noisy Compressive Measurements using Cross Validation},
  year   = {}
}
```

### Abstract

Compressive sensing is a new data acquisition technique that aims to measure sparse and compressible signals at close to their intrinsic information rate rather than their Nyquist rate. Recent results in compressive sensing show that a sparse or compressible signal can be reconstructed from very few incoherent measurements. Although the sampling and reconstruction process is robust to measurement noise, all current reconstruction methods assume some knowledge of the noise power or the acquired signal-to-noise ratio. This knowledge is necessary to set algorithmic parameters and stopping conditions. If these parameters are set incorrectly, then the reconstruction algorithms either do not fully reconstruct the acquired signal (underfitting) or try to explain a significant portion of the noise by distorting the reconstructed signal (overfitting). This paper explores this behavior and examines the use of cross validation to determine the stopping conditions for the optimization algorithms. We demonstrate that by designating a small set of measurements as a validation set it is possible to optimize these algorithms and reduce the reconstruction error. Furthermore we explore the trade-off between using the additional measurements for cross validation instead of reconstruction.

Index Terms — Data acquisition, sampling methods, data models, signal reconstruction, parameter estimation.
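The abstract's core idea can be sketched in a few lines: hold out a small validation set of measurements, run a greedy reconstruction on the rest, and keep the iterate with the lowest validation error. This is a minimal illustration assuming numpy, with made-up problem sizes (`N`, `M`, `M_cv`, `K`) and OMP as the reconstruction algorithm; it is not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, M_cv, K = 256, 96, 16, 8          # signal length, measurements, CV holdout, sparsity

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # K-sparse signal

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x + 0.05 * rng.standard_normal(M)                    # noisy measurements

Phi_cv, y_cv = Phi[:M_cv], y[:M_cv]     # validation set (never used to reconstruct)
Phi_r, y_r = Phi[M_cv:], y[M_cv:]       # reconstruction set

# Orthogonal Matching Pursuit on the reconstruction set, deliberately run
# past the true sparsity; cross validation picks the stopping point.
support, best_err, best_x = [], np.inf, np.zeros(N)
r = y_r.copy()
for _ in range(3 * K):
    k = int(np.argmax(np.abs(Phi_r.T @ r)))   # most correlated column
    if k not in support:
        support.append(k)
    coef, *_ = np.linalg.lstsq(Phi_r[:, support], y_r, rcond=None)
    x_hat = np.zeros(N)
    x_hat[support] = coef
    r = y_r - Phi_r @ x_hat
    cv_err = np.linalg.norm(y_cv - Phi_cv @ x_hat)
    if cv_err < best_err:                     # keep the iterate the CV set prefers
        best_err, best_x = cv_err, x_hat
```

Typically the validation error falls while true signal components are added and rises once the algorithm starts fitting measurement noise, which is exactly the under/overfitting trade-off the abstract describes.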

### Citations

2184 | Regression shrinkage and selection via the lasso
- Tibshirani
- 1994

Citation Context: ...truction error magnitude with a soft weight λ in the following program: x̂ = arg min_x ‖x‖ℓ1 + λ‖y − Φx‖ℓ2. (3) With appropriate parameter correspondence, this formulation is equivalent to the Lasso [11]: x̂ = arg min_x ‖y − Φx‖ℓ2 s.t. ‖x‖ℓ1 ≤ q. (4) Furthermore it is demonstrated in [11] that as λ ranges from zero to infinity, the solution path of (3) is the same as the solution path of (4) as q ran...
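The λ-to-q correspondence in the snippet can be observed numerically. Below is a minimal sketch using ISTA (proximal gradient) as the solver, which is an assumption on my part: it solves the common squared-error variant min_x ½‖y − Φx‖₂² + λ‖x‖₁ rather than formulation (3) with an unsquared norm, but it shows the same qualitative behavior, namely that growing λ shrinks the solution's ℓ1 norm just as tightening q would.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, lam, n_iter=500):
    """Solve min_x 0.5*||y - Phi x||_2^2 + lam*||x||_1 by proximal gradient."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = soft(x + Phi.T @ (y - Phi @ x) / L, lam / L)
    return x

rng = np.random.default_rng(1)
Phi = rng.standard_normal((40, 100)) / np.sqrt(40)
x0 = np.zeros(100)
x0[:5] = 1.0                                 # sparse ground truth
y = Phi @ x0 + 0.01 * rng.standard_normal(40)

# As lambda grows, the l1 norm of the solution shrinks monotonically,
# tracing the Lasso path the snippet describes.
norms = [np.abs(ista(Phi, y, lam)).sum() for lam in (0.01, 0.1, 0.5)]
```

The monotone decrease of `norms` mirrors the snippet's statement that the solution path of (3) as λ grows matches the path of (4) as q shrinks.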

1972 | Compressed sensing
- Donoho
- 2006

Citation Context: ...RODUCTION Compressive sensing (CS) is a new data acquisition technique that aims to measure sparse and compressible signals at close to their intrinsic information rate rather than their Nyquist rate [1, 2]. The fundamental premise is that certain classes of signals, such as natural images, have a concise representation in terms of a sparsity inducing basis (or sparsity basis for short) where most of th...

1871 | Atomic decomposition by basis pursuit
- Chen, Donoho, et al.
- 1998

Citation Context: ... using sparse reconstruction algorithms that determine the sparsest signal x̂ that explains the measurements y [1, 2, 4]. Specific reconstruction algorithms include linear programming (Basis Pursuit) [9] and Orthogonal Matching Pursuit (OMP) [4]; numerical experiments demonstrate good performance using Matching Pursuit (MP) [10] for reconstruction even though there are no theoretical guarantees. MP i...

1197 | Matching pursuits with time-frequency dictionaries
- Mallat, Zhang
- 1993

Citation Context: ...P) is a greedy algorithm that iteratively incorporates in the reconstructed signal the component from the measurement set that explains the largest portion of the residual from the previous iteration [13]. At each iteration i, the algorithm computes: c_k^(i) = ⟨r^(i−1), φ_k⟩, b = arg max_k |c_k^(i)|, x^(i) = x^(i−1) + c_b^(i) δ_b, r^(i) = r^(i−1) − c_b^(i) φ_b, where r^(i) is the residual after...
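The four update rules quoted in the snippet translate directly into code. This is a sketch assuming numpy, with the columns of `Phi` playing the role of the dictionary elements φ_k; it assumes unit-norm columns so that the inner product c_b is the projection coefficient.

```python
import numpy as np

def matching_pursuit(Phi, y, n_iter):
    """Transcription of the quoted MP iteration (assumes unit-norm columns):
    c_k = <r, phi_k>;  b = argmax_k |c_k|;  x[b] += c_b;  r -= c_b * phi_b."""
    N = Phi.shape[1]
    x = np.zeros(N)
    r = y.copy()
    for _ in range(n_iter):
        c = Phi.T @ r                  # c_k^(i) = <r^(i-1), phi_k>
        b = int(np.argmax(np.abs(c)))  # b = argmax_k |c_k^(i)|
        x[b] += c[b]                   # x^(i) = x^(i-1) + c_b^(i) delta_b
        r = r - c[b] * Phi[:, b]       # r^(i) = r^(i-1) - c_b^(i) phi_b
    return x, r
```

Note how the invariant y = Φx + r holds at every iteration; unlike OMP, MP never re-solves a least-squares problem over the selected support, which is the reduced computational cost mentioned in the other contexts.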

855 | Least angle regression
- Efron, Hastie, et al.
- 2004

Citation Context: ... infinity, the solution path of (3) is the same as the solution path of (4) as q ranges from infinity to zero. An efficient algorithm that traces this path is mentioned and experimentally analyzed in [12]. It follows that determining the proper value of λ, even if all the solutions are available, is akin to determining the power limit ɛ of the noise. These three reconstruction formulations are based o...

842 | Signal recovery from incomplete and inaccurate measurements
- Candès, Romberg, et al.

Citation Context: ...n terms of the maximum distance of the measurements from the re-measured reconstructed signal. The reconstruction solves the program x̂ = arg min_x ‖x‖ℓ1 s.t. ‖y − Φx‖ℓ2 ≤ ɛ, (2) for some small ɛ > 0. In [3] it is shown that if the noise is power-limited to ɛ and enough measurements are taken, then the reconstructed signal x̂ is guaranteed to be within Cɛ of the original signal x: ‖x − x̂‖ℓ2 ≤ Cɛ, where ...

732 | An introduction to compressive sampling
- Candès, Wakin
- 2008

Citation Context: ...RODUCTION Compressive sensing (CS) is a new data acquisition technique that aims to measure sparse and compressible signals at close to their intrinsic information rate rather than their Nyquist rate [1, 2]. The fundamental premise is that certain classes of signals, such as natural images, have a concise representation in terms of a sparsity inducing basis (or sparsity basis for short) where most of th...

473 | The Dantzig selector: statistical estimation when p is much larger than n
- Candes, Tao
- 2007

Citation Context: ... noise in practice is not necessarily power limited, and, even when it is, the power limit is usually unknown. The Dantzig Selector is an alternative convex program useful when the noise is unbounded [5]. Specifically, for the measurement assumptions in (1) and if enough measurements are taken, the convex program x̂ = arg min_x ‖x‖ℓ1 s.t. ‖Φ∗(y − Φx)‖ℓ∞ ≤ √(2 log N) σ reconstructs a signal that satisf...
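The Dantzig Selector in the snippet is a linear program, which a standard reduction makes concrete (this reduction is textbook material, not from the paper; numpy and scipy are assumed). Writing x = u − v with u, v ≥ 0 turns min ‖x‖ℓ1 s.t. ‖Φ∗(y − Φx)‖ℓ∞ ≤ λ into an LP in 2N nonnegative variables; the λ below is a hypothetical constant rather than the paper's √(2 log N) σ.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(Phi, y, lam):
    """Solve min ||x||_1 s.t. ||Phi^T (y - Phi x)||_inf <= lam as an LP,
    using the split x = u - v with u, v >= 0."""
    N = Phi.shape[1]
    A = Phi.T @ Phi
    b = Phi.T @ y
    # |A(u - v) - b| <= lam componentwise, stacked as two inequality blocks.
    A_ub = np.vstack([np.hstack([A, -A]), np.hstack([-A, A])])
    b_ub = np.concatenate([lam + b, lam - b])
    res = linprog(np.ones(2 * N), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.x[:N] - res.x[N:]
```

The objective Σ(u + v) equals ‖x‖ℓ1 at any optimum because the solver never keeps both u_i and v_i positive when shrinking either would reduce the objective.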

328 | Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
- Figueiredo, Nowak, et al.

182 | Signal reconstruction from noisy random projections
- Haupt, Nowak

176 | A new approach to variable selection in least squares problems. Journal of Numerical Analysis
- Osborne

Citation Context: ... experiments. Similarly, several algorithms exist that solve each of the ℓ1-based formulations mentioned in Sec. 2.3. From those algorithms, we focus on the Homotopy continuation algorithm introduced in [14] as a solution to the Lasso formulation. This is the same as the Least Angle Regression (LARS) algorithm with the Lasso modification described in [12]. The key property of Homotopy continuation is tha...

155 | Signal recovery from partial information via orthogonal matching pursuit
- Tropp, Gilbert
- 2005

Citation Context: ...escribed in [2], then a sparse/compressible signal can be recovered exactly/approximately using sparse reconstruction algorithms that determine the sparsest signal x̂ that explains the measurements y [1, 2, 4]. Specific reconstruction algorithms include linear programming (Basis Pursuit) [9] and Orthogonal Matching Pursuit (OMP) [4]; numerical experiments demonstrate good performance using Matching Pursuit...

44 | Fast reconstruction of piecewise smooth signals from random projections
- Duarte, Wakin, et al.
- 2005

Citation Context: ...ific reconstruction algorithms include linear programming (Basis Pursuit) [9] and Orthogonal Matching Pursuit (OMP) [4]; numerical experiments demonstrate good performance using Matching Pursuit (MP) [10] for reconstruction even though there are no theoretical guarantees. MP is often preferred to OMP due to its significantly reduced computational complexity. 2.3. Reconstruction from Noisy Measurements...

43 | A method for large-scale ℓ1-regularized least squares problems with applications in signal processing and statistics
- Kim, Koh, et al.
- 2008

32 | Thresholds for the recovery of sparse solutions via ℓ1 minimization
- Donoho, Tanner
- 2006

Citation Context: ...nts are used; the oracle performance is shown for reference. Additionally, OMP outperforms both Homotopy continuation (HT) and Homotopy continuation with debiasing (HT/DB). and reconstruct an estimate [15] can be used to reduce the reconstruction error, which suggests increasing M and decreasing Mcv. On the other hand, increasing Mcv will improve the CV estimate and thus ensure that the CV optimum is c...