Results 1–10 of 10
Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors
, 2011
Abstract

Cited by 85 (26 self)
The Compressive Sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by reducing the sampling rate required to acquire and stably recover sparse signals. Practical ADCs not only sample but also quantize each measurement to a finite number of bits; moreover, there is an inverse relationship between the achievable sampling rate and the bit depth. In this paper, we investigate an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement. In particular, we explore the extreme case of 1-bit CS measurements, which capture just their sign. Our results come in two flavors. First, we consider ideal reconstruction from noiseless 1-bit measurements and provide a lower bound on the best achievable reconstruction error. We also demonstrate that a large class of measurement mappings achieve this optimal bound. Second, we consider reconstruction robustness to measurement errors and noise and introduce the Binary ɛ-Stable Embedding (BɛSE) property, which characterizes the robustness of the measurement process to sign changes. We show that the same class of matrices that provide optimal noiseless performance also enable such a robust mapping. On the practical side, we introduce the Binary Iterative Hard Thresholding (BIHT) algorithm for signal reconstruction from 1-bit measurements that offers state-of-the-art performance.
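The BIHT iteration described in the abstract (a gradient step on a sign-consistency objective, followed by hard thresholding to the K largest-magnitude entries) can be sketched as follows. The step size, dimensions, and function name are illustrative choices, not the paper's exact design; in 1-bit CS only the signal's direction is recoverable, so the estimate is normalized at the end.

```python
import numpy as np

def biht(y, Phi, K, tau=None, iters=100):
    """Sketch of Binary Iterative Hard Thresholding (BIHT).

    y   : 1-bit measurements, entries in {-1, +1}
    Phi : m x n measurement matrix
    K   : assumed sparsity level
    """
    m, n = Phi.shape
    if tau is None:
        tau = 1.0 / m                        # illustrative step size
    x = np.zeros(n)
    for _ in range(iters):
        # gradient step toward sign consistency with the measurements
        a = x + tau * Phi.T @ (y - np.sign(Phi @ x))
        # hard threshold: keep the K largest-magnitude entries
        idx = np.argsort(np.abs(a))[-K:]
        x = np.zeros(n)
        x[idx] = a[idx]
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else x         # amplitude is lost in 1-bit CS

# toy example: recover the direction of a 3-sparse unit vector
rng = np.random.default_rng(0)
n, m, K = 50, 300, 3
Phi = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[[5, 17, 40]] = [1.0, -0.5, 0.8]
x0 /= np.linalg.norm(x0)
y = np.sign(Phi @ x0)
xh = biht(y, Phi, K)
```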
Universal Rate-Efficient Scalar Quantization
 IEEE TRANSACTIONS ON INFORMATION THEORY, TO APPEAR
, 2011
Abstract

Cited by 31 (9 self)
Scalar quantization is the most practical and straightforward approach to signal quantization. However, it has been shown that scalar quantization of oversampled or compressively sensed signals can be inefficient in terms of the rate-distortion tradeoff, especially as the oversampling rate or the sparsity of the signal increases. In this paper, we modify the scalar quantizer to have discontinuous quantization regions. We demonstrate that with this modification it is possible to achieve exponential decay of the quantization error as a function of the oversampling rate, instead of the quadratic decay exhibited by current approaches. Our approach is universal in the sense that prior knowledge of the signal model is not necessary in the quantizer design, only in the reconstruction. Thus, we demonstrate that it is possible to reduce the quantization error by incorporating side information on the acquired signal, such as sparse signal models or signal similarity with known signals. In doing so, we establish a relationship between quantization performance and the Kolmogorov entropy of the signal model.
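The idea of discontinuous quantization regions can be illustrated with a minimal 1-bit "folding" quantizer whose output bit alternates periodically along the real line, so the same output bit is produced on many disjoint intervals. The parameters and function name below are illustrative, not the paper's exact construction.

```python
import numpy as np

def folding_1bit_quantizer(x, delta=1.0, dither=0.0):
    """1-bit quantizer with discontinuous (periodically interleaved)
    quantization regions: the output bit flips every `delta` along the
    real line. Illustrative sketch, not the paper's exact design."""
    return (np.floor((x + dither) / delta) % 2).astype(int)

# the same bit value recurs on disjoint intervals:
# [0,1) -> 0, [1,2) -> 1, [2,3) -> 0, [3,4) -> 1, ...
bits = folding_1bit_quantizer(np.array([0.2, 1.3, 2.1, 3.7]))
```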
Sobolev duals in frame theory and Sigma-Delta quantization
Abstract

Cited by 14 (3 self)
Abstract. A new class of alternative dual frames is introduced in the setting of finite frames for R^d. These dual frames, called Sobolev duals, provide a high-precision linear reconstruction procedure for Sigma-Delta (Σ∆) quantization of finite frames. The main result is summarized as follows: reconstruction with Sobolev duals enables stable rth-order Sigma-Delta schemes to achieve a deterministic approximation error of order O(N^{-r}) for a wide class of finite frames of size N. This asymptotic order is generally not achievable with canonical dual frames. Moreover, Sobolev dual reconstruction leads to minimal mean squared error under the classical white noise assumption.
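A minimal sketch of the setup, assuming a first-order (r = 1) Σ∆ scheme, a harmonic-style frame for R^2, and an illustrative step size; the Sobolev dual for r = 1 is F_sob = (D⁻¹E)† D⁻¹, where D is the first-order difference matrix, and it satisfies F_sob E = I like any dual.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 2, 64
# harmonic-style analysis frame for R^2 (illustrative choice)
t = np.arange(N) * 2 * np.pi / N
E = np.column_stack([np.cos(t), np.sin(t)])      # N x d

x = rng.standard_normal(d)
x /= np.linalg.norm(x)
y = E @ x                                        # frame coefficients

# first-order Sigma-Delta quantization with step `delta`
delta = 0.1
q = np.zeros(N)
u = 0.0                                          # internal state
for i in range(N):
    v = y[i] + u
    q[i] = delta * np.round(v / delta)           # scalar quantizer
    u = v - q[i]                                 # y - q = D u exactly

# canonical dual vs. Sobolev dual reconstruction
D = np.eye(N) - np.eye(N, k=-1)                  # first-order difference
F_can = np.linalg.pinv(E)                        # canonical dual
Dinv_E = np.linalg.solve(D, E)
F_sob = np.linalg.pinv(Dinv_E) @ np.linalg.inv(D)  # Sobolev dual (r = 1)

err_can = np.linalg.norm(F_can @ q - x)
err_sob = np.linalg.norm(F_sob @ q - x)
```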
Multiple-description coding by dithered delta-sigma quantization
 in Data Compression Conference (DCC ’07), Snowbird, UT
, 2007
Abstract

Cited by 13 (8 self)
We address the connection between the multiple-description (MD) problem and Delta-Sigma quantization. The inherent redundancy due to oversampling in Delta-Sigma quantization, and the simple linear-additive noise model resulting from dithered lattice quantization, allow us to construct a symmetric and time-invariant MD coding scheme. We show that the use of a noise shaping filter makes it possible to trade off central distortion for side distortion. Asymptotically, as the dimension of the lattice vector quantizer and the order of the noise shaping filter approach infinity, the entropy rate of the dithered Delta-Sigma quantization scheme approaches the symmetric two-channel MD rate-distortion function for a memoryless Gaussian source and MSE fidelity criterion, at any side-to-central distortion ratio and any resolution. In the optimal scheme, the infinite-order noise shaping filter must be minimum phase and have a piecewise flat power spectrum with a single jump discontinuity. An important advantage of the proposed design is that it is symmetric in rate and distortion by construction, so the coding rates of the descriptions are identical and there is therefore no need for source splitting. Index Terms—delta-sigma modulation, dithered lattice quantization, entropy coding, joint source-channel coding, multiple-description coding, vector quantization.
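The "linear-additive noise model resulting from dithered lattice quantization" can be illustrated in the scalar case (the lattice is δZ): with subtractive dither, the quantization error is uniform on [-δ/2, δ/2] and essentially uncorrelated with the source. The step size and sample count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
delta = 0.5
x = rng.standard_normal(10_000)                  # source samples

# subtractive dither: quantize x + d, then subtract the dither d
dither = rng.uniform(-delta / 2, delta / 2, size=x.shape)
q = delta * np.round((x + dither) / delta) - dither
err = q - x

# the error is bounded by delta/2 and behaves like additive noise
# that is (approximately) uncorrelated with the source
max_err = np.abs(err).max()
corr = np.corrcoef(x, err)[0, 1]
```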
Sigma-Delta Quantization for Compressive Sensing
Abstract

Cited by 7 (1 self)
Compressive sensing is a new data acquisition technique that aims to measure sparse and compressible signals at close to their intrinsic information rate rather than their Nyquist rate. Recent results in compressive sensing show that a sparse or compressible signal can be reconstructed from very few measurements with an incoherent, and even randomly generated, dictionary. To date, the hardware implementation of compressive sensing analog-to-digital systems has not been straightforward. This paper explores the use of a Sigma-Delta quantizer architecture to implement such a system. After examining the challenges of using Sigma-Delta with a randomly generated compressive sensing dictionary, we present efficient algorithms to compute the coefficients of the feedback loop. The experimental results demonstrate that Sigma-Delta relaxes the required analog filter order and quantizer precision. We further demonstrate that restrictions on the feedback coefficient values and stability constraints impose only a small penalty on the performance of the Sigma-Delta loop, while making hardware implementations significantly simpler.
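The basic first-order Sigma-Delta feedback loop with a 1-bit quantizer can be sketched as follows (this shows only the generic loop, not the paper's feedback-coefficient design). For an oversampled constant input, the running average of the 1-bit output tracks the input, which is why Sigma-Delta can relax quantizer precision.

```python
def sigma_delta_1bit(samples):
    """First-order Sigma-Delta loop: a 1-bit quantizer inside a feedback
    loop whose state accumulates the quantization error (illustrative)."""
    u = 0.0                      # loop state: accumulated quantization error
    out = []
    for s in samples:
        v = s + u                # add the carried-over error to the input
        q = 1.0 if v >= 0 else -1.0   # 1-bit quantizer
        out.append(q)
        u = v - q                # feed the new quantization error back
    return out

# oversampled constant input: the bitstream average approaches the input
bits = sigma_delta_1bit([0.3] * 1000)
avg = sum(bits) / len(bits)
```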
Causal Compensation for Erasures in Frame Representations
 IEEE Trans. on Signal Processing
, 2008
Abstract

Cited by 5 (2 self)
Abstract—In a variety of signal processing and communications contexts, erasures occur inadvertently or can be intentionally introduced as part of a data reduction strategy. This paper discusses causal compensation for erasures in frame representations of signals. The approach described assumes linear synthesis of the signal using a prespecified frame but no specific generation mechanism for the coefficients. Under this assumption, it is demonstrated that erasures can be compensated for using low-complexity causal systems. If the transmitter is aware of the occurrence of the erasure, an optimal compensation is to project the erasure error onto the remaining coefficients. It is demonstrated that the same compensation can be executed using a transmitter/receiver combination in which the transmitter is not aware of the erasure occurrence. The transmitter precompensates using projections, as if assuming erasures will occur. The receiver undoes the compensation for the coefficients that have not been erased, thus maintaining the compensation only of the erased coefficients. The stability of the resulting systems is explored, and stability conditions are derived. It is shown that stability for any erasure pattern can be enforced by optimizing a constrained quadratic program at the system design stage. The paper concludes with examples and simulations that verify the theoretical results and illustrate key issues in the algorithms. Index Terms—Erasures, frames, overcomplete signal representations.
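The projection idea can be sketched in a toy causal setting: when coefficient k is erased, its contribution c_k f_k is redistributed onto the frame vectors of the later (not-yet-sent) coefficients via least squares. The dimensions and the dense least-squares solve below are illustrative; the paper develops low-complexity causal systems for this.

```python
import numpy as np

rng = np.random.default_rng(3)
d, N = 3, 7
F = rng.standard_normal((d, N))     # synthesis frame, columns f_i
c = rng.standard_normal(N)          # frame coefficients
x = F @ c                           # synthesized signal

k = 2                               # index of the erased coefficient
later = list(range(k + 1, N))       # causal: only future coefficients

# project the erasure error c_k * f_k onto the later frame vectors
delta, *_ = np.linalg.lstsq(F[:, later], c[k] * F[:, k], rcond=None)
c_comp = c.copy()
c_comp[k] = 0.0                     # the erased coefficient is lost
c_comp[later] += delta              # compensation carried by later ones

x_hat = F @ c_comp                  # synthesis from compensated stream
```

When the later frame vectors span R^d, as they do here almost surely for a random frame, the compensation is exact and x_hat equals x.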
Quantization and Compressive Sensing
, 2015
Abstract
Quantization is an essential step in digitizing signals, and, therefore, an indispensable component of any modern acquisition system. This chapter explores the interaction of quantization and compressive sensing and examines practical quantization strategies for compressive acquisition systems. Specifically, we first provide a brief overview of quantization and examine fundamental performance bounds applicable to any quantization approach. Next, we consider several forms of scalar quantizers, namely uniform, nonuniform, and 1-bit. We provide performance bounds and fundamental analysis, as well as practical quantizer designs and reconstruction algorithms that account for quantization. Furthermore, we provide an overview of Sigma-Delta (Σ∆) quantization in the compressed sensing context, and also discuss implementation issues, recovery algorithms, and performance bounds. As we demonstrate, proper accounting for quantization and careful quantizer design have a significant impact on the performance of a compressive acquisition system.
OVERSAMPLED NOISY BINARY IMAGE SENSOR
Abstract
We study the oversampled binary image sensor of [1] under a noisy scenario. The binary image sensor is similar to traditional photographic film, with each pixel value equal to “0” or “1”. A potential application of the oversampled binary image sensor is high-dynamic-range imaging. Since the pixel values are binary, we model the noise as additive Bernoulli noise. We focus on the case in which the threshold of the binary sensor is a single photon. Because of noise, the dynamic range of the sensor is reduced, but the image sensor remains quite robust to noise when the light intensity is large. We use the maximum-likelihood estimator (MLE) to reconstruct the light intensity field, and prove that when the threshold is a single photon, the log-likelihood function remains concave even in the presence of noise, which guarantees that the global optimum can be found. Experimental results for 1-D signals and 2-D images verify our performance analysis and show the effectiveness of the reconstruction algorithm.
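The single-photon-threshold model with Bernoulli bit flips can be sketched for a constant light intensity, where the MLE has a closed form via the observed hit rate: each binary pixel fires iff at least one photon arrives, so P(bit = 1) = (1-ε)(1-e^{-λ/K}) + ε e^{-λ/K}, which can be inverted for λ. The parameter values and variable names below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)
K = 10_000          # oversampling factor (binary pixels per light value)
lam_true = 500.0    # true light intensity (expected photons over K pixels)
eps = 0.05          # Bernoulli flip probability (the additive noise)

# each binary pixel fires iff it receives >= 1 photon
photons = rng.poisson(lam_true / K, size=K)
bits = (photons >= 1).astype(int)
# additive Bernoulli noise: flip each bit with probability eps
flips = (rng.random(K) < eps).astype(int)
bits = bits ^ flips

# closed-form MLE for a constant intensity field:
# invert p = (1-eps)(1 - s) + eps*s, where s = exp(-lam/K)
p_hat = bits.mean()
s = (1 - eps - p_hat) / (1 - 2 * eps)     # estimate of exp(-lam/K)
lam_hat = -K * np.log(np.clip(s, 1e-12, 1.0))
```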