Results 1–10 of 14
Compressive sensing
IEEE Signal Processing Magazine, 2007
Abstract

Cited by 326 (41 self)
The Shannon/Nyquist sampling theorem tells us that in order not to lose information when uniformly sampling a signal we must sample at least two times faster than its bandwidth. In many applications, including digital image and video cameras, the Nyquist rate can be so high that we end up with too many samples and must compress in order to store or transmit them. In other applications, including imaging systems (medical scanners, radars) and high-speed analog-to-digital converters, increasing the sampling rate or density beyond the current state of the art is very expensive. In this lecture, we will learn about a new technique that tackles these issues using compressive sensing [1, 2]. We will replace the conventional sampling and reconstruction operations with a more general linear measurement scheme coupled with an optimization in order to acquire certain kinds of signals at a rate significantly below Nyquist.
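The measurement-plus-optimization pipeline this abstract describes can be sketched in a few lines. The toy below is illustrative rather than the lecture's own construction: it measures a K-sparse signal with a random Gaussian matrix and recovers it with greedy orthogonal matching pursuit, a stand-in for the convex program in [1, 2]; all sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 256, 5, 64               # signal length, sparsity, measurements

# K-sparse signal and a random Gaussian measurement matrix
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x                        # M << N linear measurements

# Orthogonal matching pursuit: pick the column most correlated with
# the residual, then re-fit by least squares on the chosen support.
support, residual = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef

x_hat = np.zeros(N)
x_hat[support] = coef
```

With M = 64 ≪ N = 256 and no noise, the support and amplitudes are typically recovered exactly; the ℓ1 formulation would be preferred when noise or approximate sparsity matters.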
From theory to practice: Sub-Nyquist sampling of sparse wideband analog signals
IEEE J. Sel. Topics Signal Process., 2010
Abstract

Cited by 71 (39 self)
Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then low-pass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture, which allows either reconstruction of the analog input, or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, real-time performance for signals with time-varying support, and stability to quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters.
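One branch of the mixing front end described in this abstract (multiply by a periodic ±1 waveform, low-pass filter, sample uniformly at a low rate) can be imitated on a dense time grid. This is only a discrete-time sketch of the analog chain, with every rate, length, and filter chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000                          # dense grid standing in for "analog" time
t = np.arange(n) / n

# Sparse multiband input: two narrow tones somewhere in a wide spectrum
x = np.cos(2 * np.pi * 1200 * t) + 0.5 * np.cos(2 * np.pi * 3400 * t)

# Mix with a periodic +/-1 chipping sequence (one branch of the bank)
period = 100
p = np.tile(rng.choice([-1.0, 1.0], size=period), n // period)
mixed = x * p

# Crude low-pass filter (moving average), then uniform low-rate sampling
decim = 50                          # output rate is n / decim samples
kernel = np.ones(decim) / decim
samples = np.convolve(mixed, kernel, mode="same")[::decim]
```

A real modulated wideband converter runs many such branches in parallel and recovers the active bands digitally; the sketch only shows why the per-branch rate can sit far below the Nyquist rate of the wide input spectrum.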
Exploiting structure in wavelet-based Bayesian compressive sensing
, 2009
Abstract

Cited by 50 (9 self)
Bayesian compressive sensing (CS) is considered for signals and images that are sparse in a wavelet basis. The statistical structure of the wavelet coefficients is exploited explicitly in the proposed model, and therefore this framework goes beyond simply assuming that the data are compressible in a wavelet basis. The structure exploited within the wavelet coefficients is consistent with that used in wavelet-based compression algorithms. A hierarchical Bayesian model is constituted, with efficient inference via Markov chain Monte Carlo (MCMC) sampling. The algorithm is fully developed and demonstrated using several natural images, with performance comparisons to many state-of-the-art compressive-sensing inversion algorithms.
Sparse Signal Recovery Using Markov Random Fields
Abstract

Cited by 40 (10 self)
Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals. In this paper, we extend the theory of CS to include signals that are concisely represented in terms of a graphical model. In particular, we use Markov Random Fields (MRFs) to represent sparse signals whose nonzero coefficients are clustered. Our new model-based recovery algorithm, dubbed Lattice Matching Pursuit (LaMP), stably recovers MRF-modeled signals using many fewer measurements and computations than the current state-of-the-art algorithms.
Compressive Sensing on Manifolds Using a Nonparametric Mixture of Factor Analyzers: Algorithm and Performance Bounds
Abstract

Cited by 19 (7 self)
Nonparametric Bayesian methods are employed to constitute a mixture of low-rank Gaussians, for data x ∈ R^N that are of high dimension N but are constrained to reside in a low-dimensional subregion of R^N. The number of mixture components and their rank are inferred automatically from the data. The resulting algorithm can be used for learning manifolds and for reconstructing signals from manifolds, based on compressive sensing (CS) projection measurements. The statistical CS inversion is performed analytically. We derive the required number of CS random measurements needed for successful reconstruction, based on easily computed quantities, drawing on block-sparsity properties. The proposed methodology is validated on several synthetic and real datasets.
Learning with Compressible Priors
Abstract

Cited by 17 (4 self)
We describe a set of probability distributions, dubbed compressible priors, whose independent and identically distributed (iid) realizations result in p-compressible signals. A signal x ∈ R^N is called p-compressible with magnitude R if its sorted coefficients exhibit a power-law decay as x(i) ∝ R · i^(−d), where the decay rate d is equal to 1/p. p-compressible signals live close to K-sparse signals (K ≪ N) in the ℓ_r norm (r > p) since their best K-sparse approximation error decreases as O(R · K^(1/r−1/p)). We show that the membership of the generalized Pareto, Student's t, log-normal, Fréchet, and log-logistic distributions in the set of compressible priors depends only on the distribution parameters and is independent of N. In contrast, we demonstrate that the membership of the generalized Gaussian distribution (GGD) depends both on the signal dimension and on the GGD parameters: the expected decay rate of N-sample iid realizations from the GGD with shape parameter q is given by 1/[q log(N/q)]. As stylized examples, we show via experiments that the wavelet coefficients of natural images are 1.67-compressible whereas their pixel gradients are 0.95 log(N/0.95)-compressible, on average. We also leverage the connections between compressible priors and sparse signals to develop new iterative reweighted sparse signal recovery algorithms that outperform standard ℓ1-norm minimization. Finally, we describe how to learn the hyperparameters of compressible priors in underdetermined regression problems by exploiting the geometry of their order statistics during signal recovery.
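The decay behavior in this abstract is easy to check numerically: for a signal whose sorted magnitudes follow R · i^(−1/p), the best K-term ℓ2 approximation error should shrink roughly like K^(1/2 − 1/p). The snippet below is an illustrative check under that assumption, not the paper's own procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
N, R, p = 10_000, 1.0, 0.67        # length, magnitude, compressibility

# Synthetic p-compressible signal: sorted magnitudes R * i**(-1/p)
i = np.arange(1, N + 1)
x = R * i ** (-1.0 / p) * rng.choice([-1.0, 1.0], N)

def best_k_error(x, K):
    """ell_2 error of the best K-sparse approximation of x."""
    srt = np.sort(np.abs(x))[::-1]
    return float(np.sqrt(np.sum(srt[K:] ** 2)))

# Quadrupling K should cut the error by about 4**(1/2 - 1/p), ~0.25 here
e1, e2 = best_k_error(x, 100), best_k_error(x, 400)
```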
Compressive sensing recovery of spike trains using a structured sparsity model
In Workshop on Signal Processing with Adaptive Sparse Structured Representations (SPARS), 2009
Abstract

Cited by 12 (9 self)
The theory of Compressive Sensing (CS) exploits a well-known concept used in signal compression – sparsity – to design new, efficient techniques for signal acquisition. CS theory states that for a length-N signal x with sparsity level K, M = O(K log(N/K)) random linear projections of x are sufficient to robustly recover x in polynomial time. However, richer models are often applicable in real-world settings that impose additional structure on the sparse nonzero coefficients of x. Many such models can be succinctly described as a union of K-dimensional subspaces. In recent work, we have developed a general approach for the design and analysis of robust, efficient CS recovery algorithms that exploit such signal models with structured sparsity. We apply our framework to a new signal model which is motivated by neuronal spike trains. We model the firing process of a single Poisson neuron with absolute refractoriness using a union of subspaces. We then derive a bound on the number of random projections M needed for stable embedding of this signal model, and develop an algorithm that provably recovers any neuronal spike train from M measurements. Numerical experimental results demonstrate the benefits of our model-based approach compared to conventional CS recovery techniques.
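The M = O(K log(N/K)) bound quoted above can be turned into a quick back-of-the-envelope calculator; the constant c below is an assumption for illustration, not a value from the paper:

```python
import math

def cs_measurements(N, K, c=2.0):
    """Rule-of-thumb CS measurement count, M ~ c * K * log(N / K)."""
    return math.ceil(c * K * math.log(N / K))

# A length-1000 recording with at most 20 spikes: far fewer than
# 1000 measurements should suffice for stable recovery.
M = cs_measurements(1000, 20)   # 157 with c = 2
```

Model-based approaches like the one in this paper tighten such bounds further by counting only the subspaces consistent with the spike-train structure (e.g. refractoriness), rather than all K-sparse supports.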
Efficient sampling of sparse wideband analog signals
In Proc. Conv. IEEE in Israel (IEEEI), Eilat, 2008
Abstract

Cited by 7 (3 self)
Periodic nonuniform sampling is a known method to sample spectrally sparse signals below the Nyquist rate. This strategy relies on the implicit assumption that the individual samplers are exposed to the entire frequency range. This assumption becomes impractical for wideband sparse signals. The current paper proposes an alternative sampling stage that does not require a full-band front end. Instead, signals are captured with an analog front end that consists of a bank of multipliers and low-pass filters whose cutoff is much lower than the Nyquist rate. The problem of recovering the original signal from the low-rate samples can be studied within the framework of compressive sampling. An appropriate parameter selection ensures that the samples uniquely determine the analog input. Moreover, the analog input can be stably reconstructed with digital algorithms. Numerical experiments support the theoretical analysis. Index Terms — Analog-to-digital conversion, compressive sampling, infinite measurement vectors (IMV), multiband sampling.
Channel Protection: Random Coding Meets Sparse Channels
Abstract

Cited by 6 (1 self)
Multipath interference is a ubiquitous phenomenon in modern communication systems. The conventional way to compensate for this effect is to equalize the channel, estimating its impulse response by transmitting a set of training symbols. The primary drawback of this type of approach is that it can be unreliable if the channel is changing rapidly. In this paper, we show that randomly encoding the signal can protect it against channel uncertainty when the channel is sparse. Before transmission, the signal is mapped into a slightly longer codeword using a random matrix. From the received signal, we are able to simultaneously estimate the channel and recover the transmitted signal. We discuss two schemes for the recovery. Both of them exploit the sparsity of the underlying channel. We show that if the channel impulse response is sufficiently sparse, the transmitted signal can be recovered reliably.
Sampling in a Union of Frame Generated Subspaces
, 904
Abstract

Cited by 1 (0 self)
A new paradigm in sampling theory has been developed recently by Lu and Do. In this new approach the classical linear model is replaced by a nonlinear, but structured, model consisting of a union of subspaces. This is the natural approach for the new theory of compressed sampling and for representing sparse signals and signals with finite rate of innovation. In this article we extend the theory of Lu and Do to the case in which the subspaces in the union are shift-invariant spaces. We describe the subspaces by means of frame generators instead of orthonormal bases. We show that the one-to-one and stability conditions for the sampling operator remain valid in this more general case.