Results 1 – 8 of 8
Sparse Recovery Using Sparse Matrices
Cited by 26 (7 self)
Abstract—We survey algorithms for sparse recovery problems that are based on sparse random matrices. Such matrices have several attractive properties: they support algorithms with low computational complexity and make it easy to perform incremental updates to signals. We discuss applications to several areas, including compressive sensing, data stream computing, and group testing.
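The incremental-update property mentioned in this abstract follows directly from the sparsity of the measurement matrix: if each column has only d nonzero entries, changing one signal coordinate touches only d sketch entries. A minimal sketch of this idea, where the binary-matrix construction and the parameters m, n, d are illustrative choices rather than anything specified in the survey:

```python
import random

def sparse_columns(n, m, d, seed=0):
    # Each of the n signal coordinates maps to d distinct random rows:
    # a sparse binary measurement matrix with d ones per column.
    rng = random.Random(seed)
    return [rng.sample(range(m), d) for _ in range(n)]

def update(y, cols, i, delta):
    # Incremental update: changing x[i] by delta touches only the d
    # sketch entries in column i, not all m as with a dense matrix.
    for r in cols[i]:
        y[r] += delta

m, n, d = 20, 100, 3
cols = sparse_columns(n, m, d)
y = [0.0] * m          # sketch of the all-zero signal
update(y, cols, 5, 2.5)    # x[5] += 2.5
update(y, cols, 7, -1.0)   # x[7] -= 1.0
```

With a dense m × n matrix each update would cost O(m); here it costs O(d), which is the "easy incremental updates" property the abstract refers to.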
Various thresholds for ℓ1-optimization in compressed sensing
, 2009
Cited by 4 (0 self)
Recently, [14, 28] theoretically analyzed the success of a polynomial-time ℓ1-optimization algorithm in solving an underdetermined system of linear equations. In a large-dimensional and statistical context, [14, 28] proved that if the number of equations (measurements, in compressed sensing terminology) in the system is proportional to the length of the unknown vector, then there is a sparsity (number of nonzero elements of the unknown vector), also proportional to the length of the unknown vector, such that ℓ1-optimization succeeds in solving the system. In this paper, we provide an alternative performance analysis of ℓ1-optimization and obtain proportionality constants that in certain cases match or improve on the best currently known ones from [28, 29].
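The ℓ1-optimization discussed in this abstract is the basis-pursuit problem min ‖x‖₁ subject to Ax = b. A standard sketch using the textbook linear-programming reformulation x = u − v with u, v ≥ 0 (this is the generic technique, not the analysis of the cited paper; the dimensions, seed, and Gaussian measurement matrix are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. A x = b via the LP split x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)            # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])     # equality constraint: A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
n, m, k = 40, 20, 3               # signal length, measurements, sparsity
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = basis_pursuit(A, A @ x0)
print(f"recovery error: {np.linalg.norm(x_hat - x0):.2e}")
```

The thresholds studied in the paper describe for which ratios m/n and k/n this program succeeds as the dimensions grow.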
Compressed Sensing Performance Bounds Under Poisson Noise
Cited by 3 (1 self)
Abstract—This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is non-additive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical ℓ2–ℓ1 minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log-likelihood term and a penalty term which measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that for a fixed signal intensity, the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition. Index Terms—Complexity regularization, compressive sampling, nonparametric estimation, photon-limited imaging, sparsity.
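The objective described in this abstract combines a negative Poisson log-likelihood with a sparsity penalty. The sketch below evaluates such an objective for a nonnegative sensing matrix; the ℓ1 penalty and the value of tau are illustrative stand-ins, since the paper's actual penalty is a complexity-regularization term:

```python
import math

def poisson_cs_objective(A, y, x, tau):
    # Negative Poisson log-likelihood (additive constants dropped) plus a
    # sparsity penalty. A must be nonnegative so that each mean (Ax)_i > 0,
    # matching the physical (photon-count) constraints in the abstract.
    obj = 0.0
    for i in range(len(A)):
        mu = sum(a * xj for a, xj in zip(A[i], x))  # (Ax)_i, the mean count
        obj += mu - y[i] * math.log(mu)
    return obj + tau * sum(abs(xj) for xj in x)     # illustrative l1 penalty

# Tiny example: a nonnegative matrix with unit column sums (flux-preserving
# in the sense that it does not amplify total intensity).
val = poisson_cs_objective([[0.5, 0.5], [0.5, 0.5]], [2, 2], [2.0, 2.0], 0.1)
```

The reconstruction analyzed in the paper minimizes an objective of this shape over feasible nonnegative signals.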
Performance bounds for compressed sensing with Poisson noise
 in IEEE Int. Symp. on Inf. Theory, 2009, Accepted
Cited by 2 (1 self)
Abstract—This paper describes performance bounds for compressed sensing in the presence of Poisson noise when the underlying signal, a vector of Poisson intensities, is sparse or compressible (admits a sparse approximation). The signal-independent and bounded noise models used in the literature to analyze the performance of compressed sensing do not accurately model the effects of Poisson noise. However, Poisson noise is an appropriate noise model for a variety of applications, including low-light imaging, where sensing hardware is large or expensive and limiting the number of measurements collected is important. In this paper, we describe how a feasible positivity-preserving sensing matrix can be constructed, and then analyze the performance of a compressed sensing reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log-likelihood term and a penalty term which could be used as a measure of signal sparsity.
On the Iterative Decoding of High-Rate LDPC Codes With Applications in Compressed Sensing
Cited by 1 (0 self)
Abstract—This paper considers the performance of (j, k)-regular low-density parity-check (LDPC) codes with message-passing (MP) decoding algorithms in the high-rate regime. In particular, we derive the high-rate scaling law for MP decoding of LDPC codes on the binary erasure channel (BEC) and the q-ary symmetric channel (q-SC). For the BEC, the density evolution (DE) threshold of iterative decoding scales like Θ(k^(-1)) and the critical stopping ratio scales like Θ(k^(-j/(j-2))). For the q-SC, the DE threshold of verification decoding depends on the details of the decoder and scales like Θ(k^(-1)) for one decoder. Using the fact that coding over large finite alphabets is very similar to coding over the real numbers, the analysis of verification decoding is also extended to the compressed sensing (CS) of strictly sparse signals. Of particular note is the performance of CS systems based on LDPC codes with MP verification decoding. A DE-based approach is used to analyze CS systems with randomized-reconstruction guarantees. This leads to the result that strictly sparse signals can be reconstructed efficiently with high probability using a constant oversampling ratio (i.e., when the number of measurements scales linearly with the sparsity of the signal). A stopping-set-based approach is also used to get stronger (e.g., uniform-in-probability) reconstruction guarantees. This leads to the result that there exists a single LDPC-type measurement matrix such that MP verification decoding reconstructs all sufficiently sparse signals. This also leads to the result that there exists a single LDPC-type measurement matrix such that verification decoding guarantees uniform reconstruction for all sufficiently sparse nonnegative signals.
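On the BEC, the message-passing decoding analyzed in this abstract reduces to peeling: any parity check with exactly one erased bit determines that bit, and the process repeats until it stalls (on a stopping set) or every bit is known. A minimal illustrative sketch of that peeling step, not the paper's high-rate scaling analysis:

```python
def peel_bec(checks, known):
    # Peeling (iterative MP) decoding on the binary erasure channel.
    # `checks` lists the bit indices in each parity constraint;
    # `known` maps bit index -> received (non-erased) value.
    known = dict(known)
    progress = True
    while progress:
        progress = False
        for c in checks:
            erased = [i for i in c if i not in known]
            if len(erased) == 1:
                # The lone erased bit is the XOR of the known bits
                # in this check (each check sums to 0 mod 2).
                known[erased[0]] = sum(known[i] for i in c if i in known) % 2
                progress = True
    return known

# Codeword (0, 1, 1, 0) satisfies both checks; bits 0 and 3 are erased.
decoded = peel_bec([[0, 1, 2], [1, 2, 3]], {1: 1, 2: 1})
```

If the erasure pattern covers a stopping set, the loop exits with bits still unknown, which is exactly why the critical stopping ratio governs the decoder's guarantees.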
Verification Decoding of High-Rate LDPC Codes with Applications in Compressed Sensing
Cited by 1 (0 self)
This paper considers the performance of (j, k)-regular low-density parity-check (LDPC) codes with message-passing (MP) decoding algorithms in the high-rate regime. In particular, we derive the high-rate scaling law for MP decoding of LDPC codes on the binary erasure channel (BEC) and the q-ary symmetric channel (q-SC). For the BEC, the density evolution (DE) threshold of iterative decoding scales like Θ(k^(-1)) and the critical stopping ratio scales like Θ(k^(-j/(j-2))). For the q-SC, the DE threshold of verification decoding depends on the details of the decoder and scales like Θ(k^(-1)) for one decoder. Using the fact that coding over large finite alphabets is very similar to coding over the real numbers, the analysis of verification decoding is also extended to the compressed sensing (CS) of strictly sparse signals. A DE-based approach is used to analyze CS systems with randomized-reconstruction guarantees. This leads to the result that strictly sparse signals can be reconstructed efficiently with high probability using a constant oversampling ratio (i.e., when the number of measurements scales linearly with the sparsity of the signal). A stopping-set-based approach is also used to get stronger (e.g., uniform-in-probability) reconstruction guarantees. Index Terms—LDPC codes, verification decoding, compressed sensing, stopping sets, q-ary symmetric channel.
Compressed Sensing: “When sparsity meets sampling”
, 2010
The recent theory of Compressed Sensing (Candès, Tao & Romberg, 2006, and Donoho, 2006) states that a signal, e.g. a sound recording or an astronomical image, can be sampled at a rate much smaller than what is commonly prescribed by Shannon–Nyquist. The sampling of a signal can indeed be performed as a function of its “intrinsic dimension” rather than according to its cutoff frequency. This chapter sketches the main theoretical concepts surrounding this revolution in sampling theory. We also emphasize its deep affiliation with the concept of “sparsity”, now ubiquitous in modern signal processing. The end of this chapter explains what interesting effects