Results 1–10 of 31
Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors
, 2011
Cited by 83 (26 self)
Abstract:
The Compressive Sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by reducing the sampling rate required to acquire and stably recover sparse signals. Practical ADCs not only sample but also quantize each measurement to a finite number of bits; moreover, there is an inverse relationship between the achievable sampling rate and the bit depth. In this paper, we investigate an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement. In particular, we explore the extreme case of 1-bit CS measurements, which capture just their sign. Our results come in two flavors. First, we consider ideal reconstruction from noiseless 1-bit measurements and provide a lower bound on the best achievable reconstruction error. We also demonstrate that a large class of measurement mappings achieves this optimal bound. Second, we consider reconstruction robustness to measurement errors and noise and introduce the Binary ε-Stable Embedding (BεSE) property, which characterizes the robustness of the measurement process to sign changes. We show that the same class of matrices that provides optimal noiseless performance also enables such a robust mapping. On the practical side, we introduce the Binary Iterative Hard Thresholding (BIHT) algorithm for signal reconstruction from 1-bit measurements that offers state-of-the-art performance.
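As a rough illustration of the BIHT iteration described in this abstract, here is a minimal sketch assuming an i.i.d. Gaussian sensing matrix; the step size `tau` and iteration count are illustrative choices, not values taken from the paper:

```python
import numpy as np

def biht(y, Phi, K, iters=100, tau=1.0):
    """Binary Iterative Hard Thresholding (sketch).

    y   : length-m vector of +/-1 sign measurements, y = sign(Phi @ x)
    Phi : m x n sensing matrix (e.g., i.i.d. Gaussian)
    K   : sparsity level of the signal to recover
    """
    m, n = Phi.shape
    x = np.zeros(n)
    for _ in range(iters):
        # Gradient step that reduces sign inconsistencies y != sign(Phi x).
        x = x + (tau / m) * (Phi.T @ (y - np.sign(Phi @ x)))
        # Hard threshold: keep only the K largest-magnitude entries.
        small = np.argsort(np.abs(x))[:n - K]
        x[small] = 0.0
    # Sign measurements lose all amplitude information, so the estimate
    # is only meaningful up to scale; return it with unit norm.
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else x
```

The final normalization reflects a point the abstract makes implicitly: 1-bit measurements can identify the direction of a sparse signal but not its magnitude.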
Regime Change: Bit-Depth versus Measurement-Rate in Compressive Sensing
, 2011
Cited by 32 (1 self)
Abstract:
The recently introduced compressive sensing (CS) framework enables digital signal acquisition systems to take advantage of signal structures beyond bandlimitedness. Indeed, the number of CS measurements required for stable reconstruction is closer to the order of the signal complexity than to the Nyquist rate. To date, CS theory has focused on real-valued measurements, but in practice measurements are mapped to bits from a finite alphabet. Moreover, in many potential applications the total number of measurement bits is constrained, which suggests a trade-off between the number of measurements and the number of bits per measurement. We study this situation in this paper and show that there exist two distinct regimes of operation corresponding to high and low signal-to-noise ratio (SNR). In the measurement compression (MC) regime, a high SNR favors acquiring fewer measurements with more bits per measurement; in the quantization compression (QC) regime, a low SNR favors acquiring more measurements with fewer bits per measurement. A surprise from our analysis and experiments is that in many practical applications it is better to operate in the QC regime, even acquiring as few as 1 bit per measurement.
Dimension reduction by random hyperplane tessellations
 Discrete & Computational Geometry
, 2011
Cited by 18 (3 self)
Abstract:
Given a subset K of the unit Euclidean sphere, we estimate the minimal number m = m(K) of hyperplanes that generate a uniform tessellation of K, in the sense that the fraction of the hyperplanes separating any pair x, y ∈ K is nearly proportional to the Euclidean distance between x and y. Random hyperplanes prove to be almost ideal for this problem; they achieve the almost optimal bound m = O(w(K)^2), where w(K) is the Gaussian mean width of K. Using the map that sends x ∈ K to its sign vector with respect to the hyperplanes, we conclude that every bounded subset K of R^n embeds into the Hamming cube {−1, 1}^m with a small distortion in the Gromov–Hausdorff metric. Since for many sets K one has m = m(K) ≪ n, this yields a new discrete mechanism of dimension reduction for sets in Euclidean spaces.
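The sign map in this abstract admits a short sketch. The key identity, standard for Gaussian hyperplanes through the origin, is that the expected fraction of separating hyperplanes equals the angle between the two points divided by π; the parameters below are illustrative:

```python
import numpy as np

def sign_map(X, m, seed=0):
    """Map rows of X (points on the unit sphere in R^n) to {-1,+1}^m
    using m random Gaussian hyperplanes through the origin."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, X.shape[1]))  # normals of the hyperplanes
    return np.where(A @ X.T >= 0, 1, -1).T    # row i = sign pattern of X[i]

def normalized_hamming(bx, by):
    """Fraction of hyperplanes separating the two points; concentrates
    around angle(x, y) / pi as m grows."""
    return float(np.mean(bx != by))
```

For orthogonal unit vectors the angle is π/2, so roughly half the hyperplanes separate them, matching the "nearly proportional to distance" property the abstract describes.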
Message-Passing De-Quantization With Applications to Compressed Sensing
, 2012
Cited by 14 (5 self)
Abstract:
Estimation of a vector from quantized linear measurements is a common problem for which simple linear techniques are suboptimal, sometimes greatly so. This paper develops message-passing de-quantization (MPDQ) algorithms for minimum mean-squared error estimation of a random vector from quantized linear measurements, notably allowing the linear expansion to be overcomplete or undercomplete and the scalar quantization to be regular or non-regular. The algorithm is based on generalized approximate message passing (GAMP), a recently developed Gaussian approximation of loopy belief propagation for estimation with linear transforms and nonlinear, componentwise-separable output channels. For MPDQ, scalar quantization of measurements is incorporated into the output-channel formalism, leading to the first tractable and effective method for high-dimensional estimation problems involving non-regular scalar quantization.
Message-passing estimation from quantized samples, arXiv:1105.6368v1 [cs.IT]
, 2011
Cited by 12 (7 self)
Abstract:
Recently, relaxed belief propagation and approximate message passing have been extended to apply to problems with general separable output channels rather than only to problems with additive Gaussian noise. We apply these methods to minimum mean-squared error estimation of signals from quantized samples. This provides a remarkably effective estimation technique in three settings: an oversampled dense signal; an undersampled sparse signal; and any signal when the quantizer is not regular. The error performance can be accurately predicted and tracked through the state evolution formalism. We use state evolution to optimize quantizers and discuss several empirical properties of the optimal quantizers.
Efficient Coding of Signal Distances Using Universal Quantized Embeddings
Cited by 7 (4 self)
Abstract:
Traditional rate-distortion theory focuses on how best to encode a signal using as few bits as possible while incurring as low a distortion as possible. However, very often the goal of transmission is to extract specific information from the signal at the receiving end, and the distortion should be measured on that extracted information. In this paper we examine the problem of encoding signals such that sufficient information is preserved about their pairwise distances. For that goal, we consider randomized embeddings as an encoding mechanism and provide a framework to analyze their performance. We also propose the recently developed universal quantized embeddings as a solution to that problem and experimentally demonstrate that, in image retrieval experiments, universal embeddings can achieve up to 25% rate reduction over the state of the art.
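A minimal sketch of a universal quantized embedding of the kind referenced above, assuming the common construction in which each bit comes from a dithered, periodic (non-monotonic) 1-bit quantizer; all parameter values are illustrative:

```python
import numpy as np

def universal_embedding(X, m, Delta, seed=0):
    """Universal quantized embedding (sketch): each bit is
    floor((a . x + w) / Delta) mod 2 for a Gaussian projection a and a
    dither w ~ Uniform[0, Delta). Delta sets the range of distances
    the embedding resolves before its Hamming distance saturates."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, X.shape[1]))
    w = rng.uniform(0.0, Delta, size=m)
    # Row i holds the m bits encoding point X[i].
    return (np.floor((X @ A.T + w) / Delta) % 2).astype(int)
```

The design choice that makes this useful for distance coding: for points closer than roughly Delta the normalized Hamming distance between bit strings grows with the ℓ2 distance, while for distant points it saturates near 1/2, so no rate is wasted on distances the application does not need.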
Channeloptimized vector quantizer design for compressed sensing measurements
 in IEEE Int. Conf. Acoust. Speech, and Sig. Proc., 2013
Privacypreserving speaker authentication
 In Information Security Conference (ISC)
, 2012
Cited by 3 (0 self)
Abstract:
Speaker authentication systems require access to the voice of the user. A person's voice carries information about their gender, nationality, etc., all of which become accessible to the system, which could abuse this knowledge. The system also stores users' voice prints; these may be stolen and used to impersonate the users elsewhere. It is therefore important to develop privacy-preserving voice authentication techniques that enable a system to authenticate users by their voice while simultaneously obscuring the user's voice and voice patterns from the system. Prior work in this area has employed expensive cryptographic tools, or has cast authentication as a problem of exact match with compromised accuracy. In this paper we present a new technique that employs secure binary embeddings of feature vectors to perform voice authentication in a privacy-preserving manner with minimal computational overhead and little loss of classification accuracy.
A Quantized Johnson–Lindenstrauss Lemma: The Finding of Buffon's Needle, arXiv preprint arXiv:1309.1507
, 2013
Cited by 2 (1 self)
Abstract:
In 1733, Georges-Louis Leclerc, Comte de Buffon, set the ground of geometric probability theory by posing an enlightening problem: what is the probability that a needle thrown randomly on a ground made of equispaced parallel strips lies on two of them? In this work, we show that the solution to this problem, and its generalization to N dimensions, allows us to discover a quantized form of the Johnson–Lindenstrauss (JL) lemma, i.e., one that combines a linear dimensionality reduction procedure with a uniform quantization of precision δ > 0. In particular, given a finite set S ⊂ R^N of S points and a distortion level ε > 0, as soon as M > M0 = O(ε^{-2} log S), we can (randomly) construct a mapping from (S, ℓ2) to ((δZ)^M, ℓ1) that approximately preserves the pairwise distances between the points of S. Interestingly, compared to the common JL lemma, the mapping is quasi-isometric and we observe both an additive and a multiplicative distortion on the embedded distances. These two distortions, however, decay as O(√(log S / M)) when M increases. Moreover, for coarse quantization, i.e., for δ high compared to the set radius, the distortion is mainly additive, while for small δ we tend to a Lipschitz isometric embedding. Finally, we show that there exists "almost" a quasi-isometric embedding of (S, ℓ2) into ((δZ)^M, ℓ2). This one involves a non-linear distortion of the ℓ2-distance in S that vanishes for distant points in this set. Noticeably, the additive distortion in this case decays more slowly, as O((log S / M)^{1/4}).
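The quantized JL map described above, a Gaussian random projection followed by dithered uniform quantization onto the lattice (δZ)^M, can be sketched as follows (dimensions and δ are illustrative):

```python
import numpy as np

def quantized_jl(X, M, delta, seed=0):
    """Quantized JL map (sketch): x -> delta * floor((A x + u) / delta),
    with a Gaussian A and a uniform dither u shared by all points,
    sending R^N into the lattice (delta * Z)^M."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((M, X.shape[1]))
    u = rng.uniform(0.0, delta, size=M)       # one dither per coordinate
    return delta * np.floor((X @ A.T + u) / delta)
```

For small δ, the normalized ℓ1 distance (1/M)·‖ψ(x) − ψ(y)‖1 between two embedded points concentrates around √(2/π)·‖x − y‖2, the mean absolute value of a Gaussian with standard deviation ‖x − y‖2, which is the sense in which the map from (S, ℓ2) to ((δZ)^M, ℓ1) preserves pairwise distances.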
SPEAKER VERIFICATION USING SECURE BINARY EMBEDDINGS
, 2013
Cited by 1 (1 self)
Abstract:
This paper addresses privacy concerns in voice biometrics. Conventional remote speaker verification systems rely on the system having access to the user's recordings, or features derived from them, as well as a model of the user's voice. In the proposed approach, the system has access to none of these. The supervectors extracted from the user's recordings are transformed to bit strings in a way that allows the computation of approximate distances instead of exact ones. The key to the transformation is a hashing scheme known as Secure Binary Embeddings. An SVM classifier with a modified kernel operates on the hashes. This allows speaker verification to be performed without exposing speaker data. Experiments showed that the secure system yielded results similar to those of its non-private counterpart. The approach may be extended to other types of biometric authentication.