Results 1–10 of 23
Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors
, 2011
Abstract

Cited by 26 (13 self)
The Compressive Sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by reducing the sampling rate required to acquire and stably recover sparse signals. Practical ADCs not only sample but also quantize each measurement to a finite number of bits; moreover, there is an inverse relationship between the achievable sampling rate and the bit depth. In this paper, we investigate an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement. In particular, we explore the extreme case of 1-bit CS measurements, which capture just their sign. Our results come in two flavors. First, we consider ideal reconstruction from noiseless 1-bit measurements and provide a lower bound on the best achievable reconstruction error. We also demonstrate that a large class of measurement mappings achieve this optimal bound. Second, we consider reconstruction robustness to measurement errors and noise and introduce the Binary ɛ-Stable Embedding (BɛSE) property, which characterizes the robustness of the measurement process to sign changes. We show that the same class of matrices that provide optimal noiseless performance also enable such a robust mapping. On the practical side, we introduce the Binary Iterative Hard Thresholding (BIHT) algorithm for signal reconstruction from 1-bit measurements that offers state-of-the-art performance.
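The BIHT iteration described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' reference implementation; the step size and iteration count are assumptions, and the reconstruction is normalized to the unit sphere because sign measurements carry no amplitude information.

```python
import numpy as np

def biht(y_sign, A, k, iters=200, tau=None):
    """Binary Iterative Hard Thresholding (sketch): estimate a k-sparse,
    unit-norm signal from its 1-bit measurements y_sign = sign(A @ x)."""
    m, n = A.shape
    tau = tau if tau is not None else 1.0 / m  # step size (assumed)
    x = np.zeros(n)
    for _ in range(iters):
        # gradient step toward sign consistency with the 1-bit measurements
        a = x + tau * (A.T @ (y_sign - np.sign(A @ x)))
        # hard threshold: keep only the k largest-magnitude entries
        x = np.zeros(n)
        idx = np.argsort(np.abs(a))[-k:]
        x[idx] = a[idx]
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else x  # amplitude is lost; return direction

# usage: recover a 3-sparse unit vector from 200 one-bit measurements
rng = np.random.default_rng(0)
n, m, k = 50, 200, 3
x_true = np.zeros(n); x_true[[3, 17, 40]] = [1.0, -0.5, 0.8]
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))
x_hat = biht(np.sign(A @ x_true), A, k)
```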
One-bit compressed sensing by linear programming. Preprint. Available at http://arxiv.org/abs/1109.4299
Abstract

Cited by 18 (2 self)
Abstract. We give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x ∈ Rⁿ from the signs of O(s log²(n/s)) random linear measurements of x. The recovery is achieved by a simple linear program. This result extends to approximately sparse vectors x. Our result is universal in the sense that with high probability, one measurement scheme will successfully recover all sparse vectors simultaneously. The argument is based on solving an equivalent geometric problem on random hyperplane tessellations.
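The linear program in question can be sketched directly with an off-the-shelf LP solver. This is a hedged illustration: the sign-consistency constraints y_i⟨a_i, x⟩ ≥ 0 together with the normalization Σ_i y_i⟨a_i, x⟩ = m follow the formulation described above, but the problem sizes and solver choice are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def onebit_lp(y, A):
    """One-bit CS via linear programming (sketch): minimize ||x||_1
    subject to y_i * <a_i, x> >= 0 and sum_i y_i * <a_i, x> = m."""
    m, n = A.shape
    D = y[:, None] * A                            # rows y_i * a_i
    c = np.concatenate([np.zeros(n), np.ones(n)]) # minimize sum(t) = ||x||_1
    I = np.eye(n)
    A_ub = np.block([[-D, np.zeros((m, n))],      # sign consistency
                     [I, -I],                     #  x <= t
                     [-I, -I]])                   # -x <= t
    A_eq = np.concatenate([D.sum(axis=0), np.zeros(n)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m + 2 * n),
                  A_eq=A_eq, b_eq=[float(m)],
                  bounds=[(None, None)] * n + [(0, None)] * n)
    x = res.x[:n]
    return x / np.linalg.norm(x)                  # direction only; scale lost

# usage: recover the direction of a 3-sparse vector from 200 sign bits
rng = np.random.default_rng(1)
n, m = 40, 200
x_true = np.zeros(n); x_true[[2, 11, 30]] = [1.0, -1.0, 0.5]
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))
x_hat = onebit_lp(np.sign(A @ x_true), A)
```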
Universal Rate-Efficient Scalar Quantization
 IEEE TRANSACTIONS ON INFORMATION THEORY, TO APPEAR
, 2011
Abstract

Cited by 12 (5 self)
Scalar quantization is the most practical and straightforward approach to signal quantization. However, it has been shown that scalar quantization of oversampled or compressively sensed signals can be inefficient in terms of the rate-distortion tradeoff, especially as the oversampling rate or the sparsity of the signal increases. In this paper, we modify the scalar quantizer to have discontinuous quantization regions. We demonstrate that with this modification it is possible to achieve exponential decay of the quantization error as a function of the oversampling rate instead of the quadratic decay exhibited by current approaches. Our approach is universal in the sense that prior knowledge of the signal model is not necessary in the quantizer design, only in the reconstruction. Thus, we demonstrate that it is possible to reduce the quantization error by incorporating side information on the acquired signal, such as sparse signal models or signal similarity with known signals. In doing so, we establish a relationship between quantization performance and the Kolmogorov entropy of the signal model.
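A quantizer with discontinuous regions can be illustrated with a toy example: keeping only the least-significant bit of an ordinary uniform quantizer makes each output value correspond to a periodic, non-contiguous set of inputs. This is a sketch of the idea only, with an assumed step size, not the paper's full construction.

```python
import numpy as np

def universal_quantizer(x, delta=1.0):
    """Toy scalar quantizer with discontinuous quantization regions:
    keep only the least-significant bit of a uniform quantizer, so each
    output bit corresponds to a periodic, non-contiguous set of inputs."""
    return (np.floor(x / delta) % 2).astype(int)

# inputs one step apart alternate between the two quantization cells
print(universal_quantizer(np.array([0.5, 1.5, 2.5, 3.5])))  # -> [0 1 0 1]
```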
Exact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice
Abstract

Cited by 12 (2 self)
Abstract—Compressive sensing provides a framework for recovering sparse signals of length N from M ≪ N measurements. If the measurements contain noise bounded by ɛ, then standard algorithms recover sparse signals with error at most Cɛ. However, these algorithms perform suboptimally when the measurement noise is also sparse. This can occur in practice due to shot noise, malfunctioning hardware, transmission errors, or narrowband interference. We demonstrate that a simple algorithm, which we dub Justice Pursuit (JP), can achieve exact recovery from measurements corrupted with sparse noise. The algorithm handles unbounded errors, has no input parameters, and is easily implemented via standard recovery techniques.
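The augmented ℓ1 program behind Justice Pursuit can be sketched as a linear program: minimize the ℓ1 norm of the stacked unknown [x; e] subject to [Φ, I][x; e] = y. The dimensions, column scaling of Φ, and solver choice below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import linprog

def justice_pursuit(y, Phi):
    """Justice Pursuit sketch: recover x from y = Phi @ x + e, where e is
    sparse corruption, by l1-minimizing over the augmented unknown [x; e]
    subject to [Phi, I] @ [x; e] = y."""
    m, n = Phi.shape
    B = np.hstack([Phi, np.eye(m)])      # augmented measurement matrix
    N = n + m
    # split u = p - q with p, q >= 0 so min ||u||_1 becomes a linear program
    res = linprog(np.ones(2 * N), A_eq=np.hstack([B, -B]), b_eq=y,
                  bounds=[(0, None)] * (2 * N))
    u = res.x[:N] - res.x[N:]
    return u[:n], u[n:]                  # signal estimate, noise estimate

# usage: a 2-sparse signal survives 3 grossly corrupted measurements
rng = np.random.default_rng(2)
n, m = 20, 60
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # roughly unit-norm columns
x = np.zeros(n); x[[4, 9]] = [2.0, -1.0]
e = np.zeros(m); e[[0, 25, 50]] = [5.0, -7.0, 3.0]
x_hat, e_hat = justice_pursuit(Phi @ x + e, Phi)
```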
Boufounos, “Quantized embeddings of scale-invariant image features for mobile augmented reality,”
 in Proc. IEEE International Workshop on Multimedia Signal Processing (MMSP)
, 2012
Abstract

Cited by 6 (5 self)
Abstract—Randomized embeddings of scale-invariant image features are proposed for retrieval of object-specific metadata in an augmented reality application. The method extracts scale-invariant features from a query image, computes a small number of quantized random projections of these features, and sends them to a database server. The server performs a nearest neighbor search in the space of the random projections and returns metadata corresponding to the query image. Prior work has shown that binary embeddings of image features enable efficient image retrieval. This paper generalizes the prior art by characterizing the tradeoff between the number of random projections and the number of bits used to represent each projection. The theoretical results suggest a bit allocation scheme under a total bit rate constraint: It is often advisable to spend bits on a small number of finely quantized random measurements rather than on a large number of coarsely quantized random measurements. This theoretical result is corroborated via an experimental study of the above tradeoff using the ZuBuD database. The proposed scheme achieves a retrieval accuracy up to 94% while requiring the mobile device to transmit only 2.5 kB to the database server, a significant improvement over 1-bit quantization schemes reported in the prior art.
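The tradeoff above concerns applying a b-bit uniform quantizer to each random projection, with the total budget B split as M projections × b bits. The sketch below is a generic quantized random projection, not the paper's exact embedding; the saturation range and sizes are assumptions.

```python
import numpy as np

def quantized_projections(x, P, bits, sat=3.0):
    """Quantize each random projection of feature vector x to `bits` bits
    with a uniform quantizer saturating at +/- sat (assumed range)."""
    delta = 2.0 * sat / 2 ** bits                 # quantizer step size
    y = np.clip(P @ x, -sat, np.nextafter(sat, -np.inf))
    return (np.floor((y + sat) / delta) + 0.5) * delta - sat  # cell centers

# a fixed budget trades projections against depth: M * b = B, e.g.
# B = 256 bits buys 256 projections at 1 bit or 32 projections at 8 bits
rng = np.random.default_rng(3)
P = rng.standard_normal((32, 128)) / np.sqrt(32)
x = rng.standard_normal(128)
x /= np.linalg.norm(x)
q = quantized_projections(x, P, bits=8)
```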
Trust, but verify: Fast and accurate signal recovery from 1-bit compressive measurements
, 2010
Abstract

Cited by 6 (2 self)
Abstract—The recently emerged compressive sensing (CS) framework aims to acquire signals at reduced sample rates compared to the classical Shannon-Nyquist rate. To date, the CS theory has assumed primarily real-valued measurements; it has recently been demonstrated that accurate and stable signal acquisition is still possible even when each measurement is quantized to just a single bit. This property enables the design of simplified CS acquisition hardware based around a simple sign comparator rather than a more complex analog-to-digital converter; moreover, it ensures robustness to gross nonlinearities applied to the measurements. In this paper we introduce a new algorithm — restricted-step shrinkage (RSS) — to recover sparse signals from 1-bit CS measurements. In contrast to previous algorithms for 1-bit CS, RSS has provable convergence guarantees, is about an order of magnitude faster, and achieves higher average recovery signal-to-noise ratio. RSS is similar in spirit to trust-region methods for nonconvex optimization on the unit sphere, which are relatively unexplored in signal processing and hence of independent interest. Index Terms—1-bit compressive sensing, quantization, consistent reconstruction, trust-region algorithms
A Simple Proof that Random Matrices are Democratic
, 2009
Abstract

Cited by 5 (3 self)
The recently introduced theory of compressive sensing (CS) enables the reconstruction of sparse or compressible signals from a small set of nonadaptive, linear measurements. If properly chosen, the number of measurements can be significantly smaller than the ambient dimension of the signal and yet preserve the significant signal information. Interestingly, it can be shown that random measurement schemes provide a near-optimal encoding in terms of the required number of measurements. In this report, we explore another relatively unexplored, though often alluded to, advantage of using random matrices to acquire CS measurements. Specifically, we show that random matrices are democratic, meaning that each measurement carries roughly the same amount of signal information. We demonstrate that by slightly increasing the number of measurements, the system is robust to the loss of a small number of arbitrary measurements. In addition, we draw connections to oversampling and demonstrate stability from the loss of significantly more measurements.
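The democracy property can be checked numerically: acquire a few extra random measurements, delete an arbitrary subset, and sparse recovery still succeeds. The greedy solver below is standard Orthogonal Matching Pursuit, used here purely as an illustrative recovery routine (it is not from the cited report), and the problem sizes are assumptions.

```python
import numpy as np

def omp(y, A, k):
    """Orthogonal Matching Pursuit: standard greedy k-sparse recovery,
    used here only to illustrate the democracy property."""
    residual, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # re-fit on the selected support by least squares
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# oversample slightly, then discard 6 arbitrary measurements
rng = np.random.default_rng(4)
n, m, k = 64, 36, 3
x_true = np.zeros(n); x_true[[5, 20, 33]] = [1.0, -2.0, 0.5]
A = rng.standard_normal((m, n)) / np.sqrt(m)
keep = np.setdiff1d(np.arange(m), [0, 7, 13, 21, 29, 35])
x_hat = omp(A[keep] @ x_true, A[keep], k)
```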
Compressive Oversampling for Robust Data Transmission in Sensor Networks
Abstract

Cited by 4 (0 self)
Abstract—Data loss in wireless sensing applications is inevitable, and while there have been many attempts at coping with this issue, recent developments in the area of Compressive Sensing (CS) provide a new and attractive perspective. Since many physical signals of interest are known to be sparse or compressible, employing CS not only compresses the data and reduces the effective transmission rate, but also improves the robustness of the system to channel erasures. This is possible because reconstruction algorithms for compressively sampled signals are not hampered by the stochastic nature of wireless link disturbances, which has traditionally plagued attempts at proactively handling the effects of these errors. In this paper, we propose that if CS is employed for source compression, then CS can further be exploited as an application-layer erasure coding strategy for recovering missing data. We show that CS erasure encoding (CSEC) with random sampling is efficient for handling missing data in erasure channels, paralleling the performance of BCH codes, with the added benefit of graceful degradation of the reconstruction error even when the amount of missing data far exceeds the designed redundancy. Further, since CSEC is equivalent to nominal oversampling in the incoherent measurement basis, it is computationally cheaper than conventional erasure coding. We support our proposal through extensive performance studies. Keywords: erasure coding; compressive sensing.
Stable Restoration and Separation of Approximately Sparse Signals
Abstract

Cited by 4 (1 self)
This paper develops new theory and algorithms to recover signals that are approximately sparse in some general (i.e., basis, frame, overcomplete, or incomplete) dictionary but corrupted by a combination of measurement noise and interference having a sparse representation in a second general dictionary. Particular applications covered by our framework include the restoration of signals impaired by impulse noise, narrowband interference, or saturation, as well as image inpainting, super-resolution, and signal separation. We develop efficient recovery algorithms and deterministic conditions that guarantee stable restoration and separation. Two application examples demonstrate the efficacy of our approach.
Regime Change: Bit-Depth versus Measurement-Rate in Compressive Sensing
, 2011
Abstract

Cited by 4 (0 self)
The recently introduced compressive sensing (CS) framework enables digital signal acquisition systems to take advantage of signal structures beyond bandlimitedness. Indeed, the number of CS measurements required for stable reconstruction is closer to the order of the signal complexity than the Nyquist rate. To date, the CS theory has focused on real-valued measurements, but in practice, measurements are mapped to bits from a finite alphabet. Moreover, in many potential applications the total number of measurement bits is constrained, which suggests a tradeoff between the number of measurements and the number of bits per measurement. We study this situation in this paper and show that there exist two distinct regimes of operation that correspond to high/low signal-to-noise ratio (SNR). In the measurement compression (MC) regime, a high SNR favors acquiring fewer measurements with more bits per measurement; in the quantization compression (QC) regime, a low SNR favors acquiring more measurements with fewer bits per measurement. A surprise from our analysis and experiments is that in many practical applications it is better to operate in the QC regime, even acquiring as few as 1 bit per measurement.