Results 1–10 of 16
Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors
, 2011
Abstract

Cited by 26 (13 self)
The Compressive Sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by reducing the sampling rate required to acquire and stably recover sparse signals. Practical ADCs not only sample but also quantize each measurement to a finite number of bits; moreover, there is an inverse relationship between the achievable sampling rate and the bit depth. In this paper, we investigate an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement. In particular, we explore the extreme case of 1-bit CS measurements, which capture just their sign. Our results come in two flavors. First, we consider ideal reconstruction from noiseless 1-bit measurements and provide a lower bound on the best achievable reconstruction error. We also demonstrate that a large class of measurement mappings achieves this optimal bound. Second, we consider reconstruction robustness to measurement errors and noise and introduce the Binary ɛ-Stable Embedding (BɛSE) property, which characterizes the robustness of the measurement process to sign changes. We show that the same class of matrices that provides optimal noiseless performance also enables such a robust mapping. On the practical side, we introduce the Binary Iterative Hard Thresholding (BIHT) algorithm for signal reconstruction from 1-bit measurements, which offers state-of-the-art performance.
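The BIHT iteration alternates a gradient step toward sign consistency with hard thresholding. A minimal sketch, assuming random Gaussian measurements and the conventional step size τ/m; this is an illustrative implementation, not the authors' reference code:

```python
import numpy as np

def biht(y, A, k, tau=1.0, iters=100):
    """Binary Iterative Hard Thresholding (illustrative sketch).
    y: 1-bit measurements in {-1, +1}; A: m x n matrix; k: sparsity level."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        # Gradient step on the sign-consistency objective.
        x = x + (tau / m) * (A.T @ (y - np.sign(A @ x)))
        # Hard threshold: keep only the k largest-magnitude entries.
        small = np.argsort(np.abs(x))[:-k]
        x[small] = 0.0
    # Amplitude is lost in 1-bit measurements; return a unit-norm estimate.
    return x / (np.linalg.norm(x) + 1e-12)

# Hypothetical demo: sign measurements of a unit-norm sparse signal.
rng = np.random.default_rng(0)
n, m, k = 100, 500, 5
x0 = np.zeros(n)
x0[:k] = rng.standard_normal(k)
x0 /= np.linalg.norm(x0)
A = rng.standard_normal((m, n))
y = np.sign(A @ x0)
x_hat = biht(y, A, k)
corr = float(x_hat @ x0)  # close to 1 when recovery succeeds
```

Since only signs are observed, recovery is judged by correlation on the unit sphere rather than absolute error.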
Democracy in Action: Quantization, Saturation, and Compressive Sensing
Abstract

Cited by 23 (15 self)
Recent theoretical developments in the area of compressive sensing (CS) have the potential to significantly extend the capabilities of digital data acquisition systems, such as analog-to-digital converters and digital imagers, in certain applications. A key hallmark of CS is that it enables sub-Nyquist sampling for signals, images, and other data. In this paper, we explore and exploit another, heretofore relatively unexplored, hallmark: the fact that certain CS measurement systems are democratic, meaning that each measurement carries roughly the same amount of information about the signal being acquired. Using the democracy property, we rethink how to quantize the compressive measurements in practical CS systems. If we were to apply the conventional wisdom of Shannon-Nyquist uniform sampling, we would scale down the analog signal amplitude (and therefore increase the quantization error) to avoid the gross saturation errors that occur when the signal amplitude exceeds the quantizer’s dynamic range. In stark contrast, we demonstrate that a CS system achieves its best performance when it operates at a significantly nonzero saturation rate. We develop two methods to recover signals from saturated CS measurements. The first directly exploits the democracy property by simply discarding the saturated measurements. The second integrates saturated measurements as constraints into standard linear programming and greedy recovery techniques. Finally, we develop a simple automatic gain control system that uses the saturation rate to optimize the input gain.
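The first recovery method, discarding saturated measurements, can be illustrated in a few lines. The experiment below is hypothetical (least-squares recovery of a dense signal rather than the paper's sparse recovery), intended only to show that dropping saturated rows of a random measurement matrix still leaves a well-posed problem:

```python
import numpy as np

def saturating_quantizer(y, bits, G):
    """Uniform b-bit quantizer with range [-G, G]; flags saturated samples."""
    delta = 2.0 * G / 2 ** bits
    q = delta * (np.floor(y / delta) + 0.5)
    saturated = np.abs(y) >= G
    q = np.clip(q, -G + delta / 2, G - delta / 2)
    return q, saturated

# Deliberately choose a range that saturates a few measurements.
rng = np.random.default_rng(1)
n, m = 20, 120
A = rng.standard_normal((m, n))
x0 = rng.standard_normal(n)
y = A @ x0
G = 0.8 * np.max(np.abs(y))
q, sat = saturating_quantizer(y, bits=6, G=G)

# Democracy in action: discard saturated rows, recover from the rest.
keep = ~sat
x_hat, *_ = np.linalg.lstsq(A[keep], q[keep], rcond=None)
rel_err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
```

Because each random measurement carries roughly equal information, losing a handful of saturated rows barely degrades the reconstruction.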
Universal Rate-Efficient Scalar Quantization
 IEEE TRANSACTIONS ON INFORMATION THEORY, TO APPEAR
, 2011
Abstract

Cited by 12 (5 self)
Scalar quantization is the most practical and straightforward approach to signal quantization. However, it has been shown that scalar quantization of oversampled or compressively sensed signals can be inefficient in terms of the rate-distortion tradeoff, especially as the oversampling rate or the sparsity of the signal increases. In this paper, we modify the scalar quantizer to have discontinuous quantization regions. We demonstrate that with this modification it is possible to achieve exponential decay of the quantization error as a function of the oversampling rate instead of the quadratic decay exhibited by current approaches. Our approach is universal in the sense that prior knowledge of the signal model is not necessary in the quantizer design, only in the reconstruction. Thus, we demonstrate that it is possible to reduce the quantization error by incorporating side information on the acquired signal, such as sparse signal models or signal similarity with known signals. In doing so, we establish a relationship between quantization performance and the Kolmogorov entropy of the signal model.
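One simple quantizer with discontinuous (non-contiguous) quantization regions is a binary map that keeps only the parity of the quantization cell; whether this modulo construction matches the paper's exact design is an assumption, but it illustrates the idea that distant inputs can share an output value:

```python
import numpy as np

def universal_binary_quantizer(y, delta):
    """Binary quantizer with non-contiguous regions: the output is the
    parity of the delta-cell containing y, so inputs delta apart flip the
    bit while inputs 2*delta apart map to the same bit."""
    return (np.floor(y / delta) % 2).astype(int)

# Inputs 0.1 and 2.1 are far apart yet share a bit; 0.9 and 1.1 are close
# yet differ, because they straddle a cell boundary.
bits = universal_binary_quantizer(np.array([0.1, 1.1, 2.1, 0.9]), delta=1.0)
```

Reconstruction must then rely on the signal model (e.g., sparsity) to disambiguate the folded cells, which is why the quantizer itself needs no prior knowledge.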
Trust, but verify: Fast and accurate signal recovery from 1-bit compressive measurements
, 2010
Abstract

Cited by 6 (2 self)
Abstract—The recently emerged compressive sensing (CS) framework aims to acquire signals at reduced sample rates compared to the classical Shannon-Nyquist rate. To date, the CS theory has assumed primarily real-valued measurements; it has recently been demonstrated that accurate and stable signal acquisition is still possible even when each measurement is quantized to just a single bit. This property enables the design of simplified CS acquisition hardware based around a simple sign comparator rather than a more complex analog-to-digital converter; moreover, it ensures robustness to gross nonlinearities applied to the measurements. In this paper we introduce a new algorithm — restricted-step shrinkage (RSS) — to recover sparse signals from 1-bit CS measurements. In contrast to previous algorithms for 1-bit CS, RSS has provable convergence guarantees, is about an order of magnitude faster, and achieves higher average recovery signal-to-noise ratio. RSS is similar in spirit to trust-region methods for nonconvex optimization on the unit sphere, which are relatively unexplored in signal processing and hence of independent interest. Index Terms—1-bit compressive sensing, quantization, consistent reconstruction, trust-region algorithms
DEQUANTIZING COMPRESSED SENSING WITH NONGAUSSIAN CONSTRAINTS
Abstract

Cited by 5 (2 self)
In this paper, following the Compressed Sensing (CS) paradigm, we study the problem of recovering sparse or compressible signals from uniformly quantized measurements. We present a new class of convex optimization programs, or decoders, coined Basis Pursuit DeQuantizer of moment p (BPDQp), that model the quantization distortion more faithfully than the commonly used Basis Pursuit DeNoise (BPDN) program. Our decoders proceed by minimizing the sparsity of the signal to be reconstructed while enforcing a data-fidelity term of bounded ℓp-norm, for 2 < p ≤ ∞. We show that in oversampled situations, i.e., when the number of measurements is higher than the minimal value required by CS, the BPDQp decoders outperform BPDN, with the reconstruction error due to quantization divided by √(p + 1). This reduction relies on a modified Restricted Isometry Property of the sensing matrix expressed in the ℓp-norm (RIPp), a property satisfied by Gaussian random matrices with high probability. We conclude with numerical experiments comparing BPDQp and BPDN for signal and image reconstruction problems.
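The ℓp data-fidelity radius used by such decoders can be calibrated from the moments of uniform quantization noise. A small numerical check, using the standard moment formula E|u|^p = (Δ/2)^p / (p + 1) for u uniform on [−Δ/2, Δ/2]:

```python
import numpy as np

rng = np.random.default_rng(2)
m, delta, p = 100_000, 0.5, 4

# Uniform quantization distortion on [-delta/2, delta/2].
u = rng.uniform(-delta / 2, delta / 2, size=m)
emp = np.linalg.norm(u, ord=p)

# Predicted lp-norm: (delta/2) * (m / (p + 1))**(1/p), from the p-th moment
# of the uniform distribution.
pred = (delta / 2) * (m / (p + 1)) ** (1 / p)
ratio = emp / pred
```

The empirical ℓp-norm concentrates tightly around this prediction, which is what makes a bounded-ℓp-norm fidelity constraint a faithful model of quantization distortion.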
Regime Change: Bit-Depth versus Measurement-Rate in Compressive Sensing
, 2011
Abstract

Cited by 4 (0 self)
The recently introduced compressive sensing (CS) framework enables digital signal acquisition systems to take advantage of signal structures beyond bandlimitedness. Indeed, the number of CS measurements required for stable reconstruction is closer to the order of the signal complexity than the Nyquist rate. To date, the CS theory has focused on real-valued measurements, but in practice, measurements are mapped to bits from a finite alphabet. Moreover, in many potential applications the total number of measurement bits is constrained, which suggests a tradeoff between the number of measurements and the number of bits per measurement. We study this situation in this paper and show that there exist two distinct regimes of operation that correspond to high/low signal-to-noise ratio (SNR). In the measurement compression (MC) regime, a high SNR favors acquiring fewer measurements with more bits per measurement; in the quantization compression (QC) regime, a low SNR favors acquiring more measurements with fewer bits per measurement. A surprise from our analysis and experiments is that in many practical applications it is better to operate in the QC regime, even acquiring as few as 1 bit per measurement.
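The MC/QC tradeoff can be mimicked in a toy experiment; this setup (dense signals, least-squares recovery, a simple clipping quantizer) is an assumption for illustration, not the paper's analysis. At a fixed bit budget m·b, fine quantization wins at high SNR and coarse quantization wins at low SNR:

```python
import numpy as np

def toy_recovery_error(m, b, sigma, n=16, trials=20, seed=3):
    """Toy model: b-bit quantization of noisy Gaussian measurements,
    least-squares recovery; returns mean relative reconstruction error."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(trials):
        A = rng.standard_normal((m, n))
        x0 = rng.standard_normal(n)
        y = A @ x0 + sigma * rng.standard_normal(m)
        G = np.max(np.abs(y))
        delta = 2.0 * G / 2 ** b
        q = np.clip(delta * (np.floor(y / delta) + 0.5), -G, G)
        x_hat, *_ = np.linalg.lstsq(A, q, rcond=None)
        errs.append(np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
    return float(np.mean(errs))

# Same total bit budget m * b = 512, split two ways.
hi_fine   = toy_recovery_error(m=64,  b=8, sigma=0.0)  # high SNR, fine
hi_coarse = toy_recovery_error(m=256, b=2, sigma=0.0)  # high SNR, coarse
lo_fine   = toy_recovery_error(m=64,  b=8, sigma=3.0)  # low SNR, fine
lo_coarse = toy_recovery_error(m=256, b=2, sigma=3.0)  # low SNR, coarse
```

With no measurement noise, quantization error dominates and extra bits per measurement help most (MC regime); once measurement noise dominates, averaging over more coarse measurements wins (QC regime).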
Robust 1-bit compressive sensing using adaptive outlier pursuit
IEEE Transactions on Signal Processing, 2012
Abstract

Cited by 3 (2 self)
Abstract—In compressive sensing (CS), the goal is to recover signals at a reduced sample rate compared to the classic Shannon-Nyquist rate. However, classic CS theory assumes the measurements to be real-valued and to have infinite bit precision. The quantization of CS measurements has been studied recently, and it has been shown that accurate and stable signal acquisition is possible even when each measurement is quantized to a single bit. Many algorithms have been proposed for 1-bit compressive sensing; they work well when the measurements are noiseless, i.e., contain no sign flips, but their performance degrades when many sign flips occur. In this paper, we propose a robust method for recovering signals from 1-bit measurements using adaptive outlier pursuit. This method detects the positions where sign flips happen and recovers the signals using the “correct” measurements. Numerical experiments show the accuracy of sign-flip detection and the high recovery performance of our algorithm compared with other algorithms. Index Terms—1-bit compressive sensing, adaptive outlier pursuit
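The sign-flip detection step can be sketched as scoring each measurement by its agreement with a current signal estimate. In the sketch below the true signal stands in for the running estimate that the full algorithm would maintain (an oracle assumption, purely illustrative):

```python
import numpy as np

def detect_flips(y, A, x_est, num_flips):
    """Flag the num_flips measurements most inconsistent with x_est:
    y_i * (A x_est)_i is negative exactly when signs disagree."""
    score = y * (A @ x_est)
    return np.argsort(score)[:num_flips]

# Build 1-bit measurements and corrupt some of them with sign flips.
rng = np.random.default_rng(4)
n, m, flips = 50, 400, 20
x0 = rng.standard_normal(n)
x0 /= np.linalg.norm(x0)
A = rng.standard_normal((m, n))
y = np.sign(A @ x0)
flip_idx = rng.choice(m, size=flips, replace=False)
y[flip_idx] *= -1

detected = detect_flips(y, A, x0, flips)
hits = len(set(detected) & set(flip_idx.tolist()))
```

With a good signal estimate, flipped measurements are exactly those with negative agreement scores, so they separate cleanly from the consistent ones; the adaptive algorithm alternates this detection step with recovery from the retained measurements.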
Sigma Delta Quantization for Compressed Sensing
Abstract

Cited by 3 (0 self)
Abstract—Recent results make it clear that the compressed sensing paradigm can be used effectively for dimension reduction. On the other hand, the literature on quantization of compressed sensing measurements is relatively sparse, and mainly focuses on pulse-code-modulation (PCM) type schemes where each measurement is quantized independently using a uniform quantizer, say, of step size δ. The robust recovery result of Candès et al. and Donoho guarantees that in this case, under certain generic conditions on the measurement matrix such as the restricted isometry property, ℓ1 recovery yields an approximation of the original sparse signal with an accuracy of O(δ). In this paper, we propose sigma-delta quantization as a more effective alternative to PCM in the compressed sensing setting. We show that if we use an rth-order sigma-delta scheme to quantize m compressed sensing measurements of a k-sparse signal in R^N, the reconstruction accuracy can be improved by a factor of (m/k)^((r−1/2)α) for any 0 < α < 1 if m ≳r k(log N)^(1/(1−α)) (with high probability on the measurement matrix). This is achieved by employing an alternative recovery method via rth-order Sobolev dual frames.
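A first-order sigma-delta quantizer keeps a running state of the accumulated quantization error, and under round-to-nearest that state stays bounded by δ/2 regardless of the input; a minimal sketch:

```python
import numpy as np

def sigma_delta_first_order(y, delta):
    """First-order sigma-delta quantization of a measurement sequence.
    State recursion: u_i = u_{i-1} + y_i - q_i, with q_i the nearest
    multiple of delta to (u_{i-1} + y_i), so |u_i| <= delta/2 always."""
    u = 0.0
    q = np.empty_like(y)
    for i, yi in enumerate(y):
        q[i] = delta * np.round((u + yi) / delta)
        u = u + yi - q[i]
    return q

rng = np.random.default_rng(5)
y = 0.3 * rng.standard_normal(256)
q = sigma_delta_first_order(y, delta=0.5)

# Noise shaping: the running error sum stays bounded, unlike independent
# per-sample (PCM) quantization where partial error sums grow with length.
max_state = np.max(np.abs(np.cumsum(y - q)))
```

It is this boundedness of the error's partial sums that the Sobolev-dual reconstruction exploits to beat the O(δ) accuracy of PCM.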
Distributed Representation of Geometrically Correlated Images with Compressed Linear Measurements
, 2010
Abstract

Cited by 2 (1 self)
Abstract—This paper addresses the problem of distributed coding of images whose correlation is driven by the motion of objects or the camera positioning. It concentrates on the problem where images are encoded with compressed linear measurements. We propose a geometry-based correlation model that describes the common information in pairs of images. We assume that the constitutive components of natural images can be captured by visual features that undergo local transformations (e.g., translation) in different images. We first identify prominent visual features by computing a sparse approximation of a reference image with a dictionary of geometric basis functions. We then pose a regularized optimization problem in order to estimate the corresponding features in correlated images that are given by quantized linear measurements. The correlation model is thus given by the relative geometric transformations between corresponding features. We then propose an efficient joint decoding algorithm that reconstructs the compressed images such that they are consistent with both the quantized measurements and the correlation model. Experimental results show that the proposed algorithm effectively estimates the correlation between images in multiview data sets. In addition, the proposed algorithm provides effective decoding performance that compares advantageously to independent coding solutions and state-of-the-art distributed coding schemes based on disparity learning. Index Terms—Correlation estimation, geometric transformations, quantization, random projections, sparse approximations.
Boufounos, “Universal rate-efficient scalar quantization”
Abstract

Cited by 2 (0 self)
Scalar quantization is the most practical and straightforward approach to signal quantization. However, it has been shown that scalar quantization of oversampled or compressively sensed signals can be inefficient in terms of the rate-distortion tradeoff, especially as the oversampling rate or the sparsity of the signal increases. In this paper, we modify the scalar quantizer to have discontinuous quantization regions. We demonstrate that with this modification it is possible to achieve exponential decay of the quantization error as a function of the oversampling rate instead of the quadratic decay exhibited by current approaches. Our approach is universal in the sense that prior knowledge of the signal model is not necessary in the quantizer design, only in the reconstruction. Thus, we demonstrate that it is possible to reduce the quantization error by incorporating side information on the acquired signal, such as sparse signal models or signal similarity with known signals. In doing so, we establish a relationship between quantization performance and the Kolmogorov entropy of the signal model.