Results 1–10 of 58
Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors
, 2011

Cited by 85 (28 self)
The Compressive Sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by reducing the sampling rate required to acquire and stably recover sparse signals. Practical ADCs not only sample but also quantize each measurement to a finite number of bits; moreover, there is an inverse relationship between the achievable sampling rate and the bit depth. In this paper, we investigate an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement. In particular, we explore the extreme case of 1-bit CS measurements, which capture just their sign. Our results come in two flavors. First, we consider ideal reconstruction from noiseless 1-bit measurements and provide a lower bound on the best achievable reconstruction error. We also demonstrate that a large class of measurement mappings achieves this optimal bound. Second, we consider reconstruction robustness to measurement errors and noise and introduce the Binary ɛ-Stable Embedding (BɛSE) property, which characterizes the robustness of the measurement process to sign changes. We show that the same class of matrices that provides optimal noiseless performance also enables such a robust mapping. On the practical side, we introduce the Binary Iterative Hard Thresholding (BIHT) algorithm for signal reconstruction from 1-bit measurements, which offers state-of-the-art performance.
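The BIHT iteration described above can be sketched in a few lines. This is an illustrative reconstruction following the abstract's description (a gradient step on sign inconsistencies followed by hard thresholding); the step size, iteration count, and final normalization are our own assumptions, since 1-bit measurements discard amplitude:

```python
import numpy as np

def biht(y, Phi, K, n_iters=100, tau=1.0):
    """Minimal sketch of Binary Iterative Hard Thresholding (BIHT).

    y   : 1-bit measurements, sign(Phi @ x) in {-1, +1}^M
    Phi : M x N measurement matrix
    K   : sparsity level of the target signal
    """
    M, N = Phi.shape
    x = np.zeros(N)
    for _ in range(n_iters):
        # Gradient step that pushes toward sign consistency with y.
        r = y - np.sign(Phi @ x)
        a = x + (tau / M) * (Phi.T @ r)
        # Hard threshold: keep only the K largest-magnitude entries.
        x = np.zeros(N)
        idx = np.argsort(np.abs(a))[-K:]
        x[idx] = a[idx]
    # Sign measurements lose amplitude, so return a unit-norm estimate.
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else x
```

Because only signs are observed, the signal is recoverable only up to scale, hence the unit-norm output.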
One-bit compressed sensing by linear programming. Preprint. Available at http://arxiv.org/abs/1109.4299

Cited by 59 (5 self)
We give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x ∈ R^n from the signs of O(s log²(n/s)) random linear measurements of x. The recovery is achieved by a simple linear program. This result extends to approximately sparse vectors x. Our result is universal in the sense that, with high probability, one measurement scheme will successfully recover all sparse vectors simultaneously. The argument is based on solving an equivalent geometric problem on random hyperplane tessellations.
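The linear program behind this result can be sketched as follows. The formulation (minimize ‖x‖₁ subject to sign consistency and a linear normalization constraint) follows the abstract's description; solving it with `scipy.optimize.linprog` via the standard u − v nonnegative split is our illustrative choice:

```python
import numpy as np
from scipy.optimize import linprog

def onebit_lp(y, Phi):
    """Sketch of the one-bit CS linear program:
        minimize ||x||_1
        s.t.     y_i <a_i, x> >= 0  for all i   (sign consistency)
                 sum_i y_i <a_i, x> = M         (normalization)
    solved via the split x = u - v with u, v >= 0."""
    M, N = Phi.shape
    c = np.ones(2 * N)                    # sum(u) + sum(v) = ||x||_1
    A = y[:, None] * Phi                  # rows y_i * a_i
    A_ub = np.hstack([-A, A])             # -y_i <a_i, u - v> <= 0
    b_ub = np.zeros(M)
    A_eq = np.hstack([A.sum(axis=0), -A.sum(axis=0)])[None, :]
    b_eq = np.array([float(M)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None))
    x = res.x[:N] - res.x[N:]
    return x / np.linalg.norm(x)          # direction only; scale is lost
```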
Exact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice

Cited by 38 (2 self)
Compressive sensing provides a framework for recovering sparse signals of length N from M ≪ N measurements. If the measurements contain noise bounded by ɛ, then standard algorithms recover sparse signals with error at most Cɛ. However, these algorithms perform suboptimally when the measurement noise is also sparse. This can occur in practice due to shot noise, malfunctioning hardware, transmission errors, or narrowband interference. We demonstrate that a simple algorithm, which we dub Justice Pursuit (JP), can achieve exact recovery from measurements corrupted with sparse noise. The algorithm handles unbounded errors, has no input parameters, and is easily implemented via standard recovery techniques.
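The "standard recovery techniques" the abstract alludes to amount to ℓ1 minimization over an augmented system. A minimal sketch, assuming the usual Justice Pursuit formulation over the matrix [Φ I] and using `scipy.optimize.linprog` as the solver:

```python
import numpy as np
from scipy.optimize import linprog

def justice_pursuit(y, Phi):
    """Sketch of Justice Pursuit: recover (x, e) from y = Phi x + e,
    where both x and the corruption e are sparse, via
        min ||x||_1 + ||e||_1   s.t.   Phi x + e = y,
    i.e. basis pursuit over the augmented matrix [Phi I]."""
    M, N = Phi.shape
    B = np.hstack([Phi, np.eye(M)])       # augmented matrix [Phi I]
    d = N + M
    c = np.ones(2 * d)                    # l1 norm via z = u - v, u, v >= 0
    A_eq = np.hstack([B, -B])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    z = res.x[:d] - res.x[d:]
    return z[:N], z[N:]                   # (signal estimate, noise estimate)
```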
Regime Change: Bit-Depth versus Measurement-Rate in Compressive Sensing
, 2011

Cited by 33 (1 self)
The recently introduced compressive sensing (CS) framework enables digital signal acquisition systems to take advantage of signal structures beyond bandlimitedness. Indeed, the number of CS measurements required for stable reconstruction is closer to the order of the signal complexity than the Nyquist rate. To date, the CS theory has focused on real-valued measurements, but in practice, measurements are mapped to bits from a finite alphabet. Moreover, in many potential applications the total number of measurement bits is constrained, which suggests a tradeoff between the number of measurements and the number of bits per measurement. We study this situation in this paper and show that there exist two distinct regimes of operation that correspond to high/low signal-to-noise ratio (SNR). In the measurement compression (MC) regime, a high SNR favors acquiring fewer measurements with more bits per measurement; in the quantization compression (QC) regime, a low SNR favors acquiring more measurements with fewer bits per measurement. A surprise from our analysis and experiments is that in many practical applications it is better to operate in the QC regime, even acquiring as few as 1 bit per measurement.
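The trade-off can be illustrated with a toy noise model (our own simplification, not the paper's analysis): fix the total bit budget B = M·b, model each measurement as signal plus noise of standard deviation σ plus uniform quantization noise with step ~2^(1−b), and let reconstruction error shrink like 1/√M:

```python
import numpy as np

def rel_error_proxy(M, b, sigma, budget=512):
    """Toy error proxy under a fixed total bit budget B = M*b.
    Per-measurement noise variance = sigma**2 (signal noise) plus the
    variance of uniform quantization noise for b bits on [-1, 1];
    reconstruction error is assumed to shrink like 1/sqrt(M).
    This model is our illustration, not the paper's exact analysis."""
    assert M * b == budget, "allocations must spend the whole bit budget"
    delta = 2.0 ** (1 - b)            # quantizer step: range 2 over 2**b cells
    q_var = delta ** 2 / 12.0         # variance of uniform quantization noise
    return float(np.sqrt((sigma ** 2 + q_var) / M))

allocations = [(512, 1), (128, 4), (64, 8), (32, 16)]
for sigma in (0.001, 0.5):            # high SNR vs. low SNR
    best = min(allocations, key=lambda Mb: rel_error_proxy(*Mb, sigma))
    print(f"sigma={sigma}: best allocation (M, b) = {best}")
```

Under this crude proxy, the low-noise setting favors few high-resolution measurements while the high-noise setting favors many 1-bit measurements, matching the MC/QC regimes described above.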
Universal Rate-Efficient Scalar Quantization
 IEEE TRANSACTIONS ON INFORMATION THEORY, TO APPEAR
, 2011

Cited by 33 (9 self)
Scalar quantization is the most practical and straightforward approach to signal quantization. However, it has been shown that scalar quantization of oversampled or compressively sensed signals can be inefficient in terms of the rate-distortion tradeoff, especially as the oversampling rate or the sparsity of the signal increases. In this paper, we modify the scalar quantizer to have discontinuous quantization regions. We demonstrate that with this modification it is possible to achieve exponential decay of the quantization error as a function of the oversampling rate instead of the quadratic decay exhibited by current approaches. Our approach is universal in the sense that prior knowledge of the signal model is not necessary in the quantizer design, only in the reconstruction. Thus, we demonstrate that it is possible to reduce the quantization error by incorporating side information on the acquired signal, such as sparse signal models or signal similarity with known signals. In doing so, we establish a relationship between quantization performance and the Kolmogorov entropy of the signal model.
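One way to picture "discontinuous quantization regions" is a binary quantizer whose output bit alternates from one interval of width Δ to the next, so each output value corresponds to a union of disjoint intervals rather than a single contiguous cell. This construction is our illustration of the idea, not necessarily the paper's exact quantizer:

```python
import numpy as np

def universal_1bit(t, delta, dither=0.0):
    """Binary quantizer with non-contiguous cells: the output bit
    alternates on consecutive intervals of width delta, so each bit
    value maps to a union of disjoint intervals. Illustrative only."""
    return np.floor((t + dither) / delta).astype(int) % 2

# Unlike a sign comparator's single threshold, the bit flips every delta.
print(universal_1bit(np.array([0.1, 1.1, 2.1, 3.1]), 1.0))  # alternates 0,1,0,1
```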
Greedy sparse signal reconstruction from sign measurements
 In Proc. Asilomar Conf. on Signals, Systems and Computers, Asilomar
, 2009

Cited by 31 (12 self)
We propose a new greedy algorithm to perform sparse signal reconstruction from signs of signal measurements, i.e., measurements quantized to 1 bit. The algorithm combines the principle of consistent reconstruction with greedy sparse reconstruction. The resulting MSP algorithm has several advantages, both theoretical and practical, over previous approaches. Although the problem is not convex, the experimental performance of the algorithm is significantly better than reconstructing the signal by treating the quantized measurements as real values. Our results demonstrate that combining the principle of consistency with a sparsity prior outperforms approaches that use only consistency or only sparsity priors.
Trust, but verify: Fast and accurate signal recovery from 1-bit compressive measurements
, 2010

Cited by 30 (2 self)
The recently emerged compressive sensing (CS) framework aims to acquire signals at reduced sample rates compared to the classical Shannon-Nyquist rate. To date, the CS theory has assumed primarily real-valued measurements; it has recently been demonstrated that accurate and stable signal acquisition is still possible even when each measurement is quantized to just a single bit. This property enables the design of simplified CS acquisition hardware based around a simple sign comparator rather than a more complex analog-to-digital converter; moreover, it ensures robustness to gross nonlinearities applied to the measurements. In this paper we introduce a new algorithm, restricted-step shrinkage (RSS), to recover sparse signals from 1-bit CS measurements. In contrast to previous algorithms for 1-bit CS, RSS has provable convergence guarantees, is about an order of magnitude faster, and achieves higher average recovery signal-to-noise ratio. RSS is similar in spirit to trust-region methods for nonconvex optimization on the unit sphere, which are relatively unexplored in signal processing and hence of independent interest. Index Terms: 1-bit compressive sensing, quantization, consistent reconstruction, trust-region algorithms.
The pros and cons of compressive sensing for wideband signal acquisition: Noise folding vs. dynamic range
, 2011

Cited by 27 (5 self)
Compressive sensing (CS) exploits the sparsity present in many common signals to reduce the number of measurements needed for digital acquisition. With this reduction would come, in theory, commensurate reductions in the size, weight, power consumption, and/or monetary cost of both signal sensors and any associated communication links. This paper examines the use of CS in the design of a wideband radio receiver in a noisy environment. We formulate the problem statement for such a receiver and establish a reasonable set of requirements that a receiver should meet to be practically useful. We then evaluate the performance of a CS-based receiver in two ways: via a theoretical analysis of the expected performance, with a particular emphasis on noise and dynamic range, and via simulations that compare the CS receiver against the performance expected from a conventional implementation. On the one hand, we show that CS-based systems that aim to reduce the number of acquired measurements are somewhat sensitive to signal noise, exhibiting a 3 dB SNR loss per octave of subsampling, which parallels the classic noise-folding phenomenon. On the other hand, we demonstrate that since they sample at a lower rate, CS-based systems can potentially attain a significantly larger dynamic range. Hence, we conclude that while a CS-based system has inherent limitations that do impose some restrictions on its potential applications, it also has attributes that make it highly desirable in a number of important practical settings.
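The 3 dB-per-octave figure follows from white noise occupying all N input dimensions folding into M measurements; a one-line helper makes the scaling concrete (the formula 10·log10(N/M) is the standard noise-folding approximation, not a claim about any specific receiver):

```python
import math

def noise_folding_loss_db(N, M):
    """Approximate SNR loss when white noise in N input dimensions
    folds into M < N compressive measurements: 10*log10(N/M)."""
    return 10 * math.log10(N / M)

# Each halving of the measurement rate (one octave) costs ~3 dB.
for M in (1024, 512, 256, 128):
    print(f"N=1024, M={M}: SNR loss = {noise_folding_loss_db(1024, M):.1f} dB")
```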
Recovery of sparsely corrupted signals
 IEEE Trans. Inf. Theory
, 2012

Cited by 25 (8 self)
We investigate the recovery of signals exhibiting a sparse representation in a general (i.e., possibly redundant or incomplete) dictionary that are corrupted by additive noise admitting a sparse representation in another general dictionary. This setup covers a wide range of applications, such as image inpainting, super-resolution, signal separation, and recovery of signals that are impaired by, e.g., clipping, impulse noise, or narrowband interference. We present deterministic recovery guarantees based on a novel uncertainty relation for pairs of general dictionaries, and we provide corresponding practicable recovery algorithms. The recovery guarantees we find depend on the signal and noise sparsity levels, on the coherence parameters of the involved dictionaries, and on the amount of prior knowledge about the signal and noise support sets. Index Terms: Uncertainty relations, signal restoration, signal separation, coherence-based recovery guarantees, ℓ1-norm minimization, greedy algorithms.
Stable Restoration and Separation of Approximately Sparse Signals

Cited by 17 (10 self)
This paper develops new theory and algorithms to recover signals that are approximately sparse in some general (i.e., basis, frame, overcomplete, or incomplete) dictionary but corrupted by a combination of measurement noise and interference having a sparse representation in a second general dictionary. Particular applications covered by our framework include the restoration of signals impaired by impulse noise, narrowband interference, or saturation, as well as image inpainting, super-resolution, and signal separation. We develop efficient recovery algorithms and deterministic conditions that guarantee stable restoration and separation. Two application examples demonstrate the efficacy of our approach.