Results 1–10 of 13
Compressive sensing
IEEE Signal Processing Magazine, 2007
Abstract

Cited by 305 (40 self)
The Shannon/Nyquist sampling theorem tells us that in order to not lose information when uniformly sampling a signal we must sample at least two times faster than its bandwidth. In many applications, including digital image and video cameras, the Nyquist rate can be so high that we end up with too many samples and must compress in order to store or transmit them. In other applications, including imaging systems (medical scanners, radars) and high-speed analog-to-digital converters, increasing the sampling rate or density beyond the current state of the art is very expensive. In this lecture, we will learn about a new technique that tackles these issues using compressive sensing [1, 2]. We will replace the conventional sampling and reconstruction operations with a more general linear measurement scheme coupled with an optimization in order to acquire certain kinds of signals at a rate significantly below Nyquist.
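As a toy illustration of this measurement-plus-optimization pipeline, the sketch below senses a sparse signal with a random Gaussian matrix and recovers it greedily via orthogonal matching pursuit. OMP is just one possible recovery method, and all dimensions and names here are illustrative assumptions, not the lecture's own setup.

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal matching pursuit: greedily recover a K-sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    for _ in range(K):
        # Select the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the selected columns, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
N, M, K = 128, 40, 4                              # signal length, measurements (M << N), sparsity
Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # general random linear measurement scheme
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = Phi @ x                                       # M sub-Nyquist measurements of x
x_hat = omp(Phi, y, K)
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # near zero: exact recovery
```

The point of the sketch is only that M = 40 generic linear measurements suffice to recover a 4-sparse length-128 signal, well below the ambient dimension.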
Exploiting structure in wavelet-based Bayesian compressive sensing
, 2009
Abstract

Cited by 43 (9 self)
Bayesian compressive sensing (CS) is considered for signals and images that are sparse in a wavelet basis. The statistical structure of the wavelet coefficients is exploited explicitly in the proposed model, and therefore this framework goes beyond simply assuming that the data are compressible in a wavelet basis. The structure exploited within the wavelet coefficients is consistent with that used in wavelet-based compression algorithms. A hierarchical Bayesian model is constituted, with efficient inference via Markov chain Monte Carlo (MCMC) sampling. The algorithm is fully developed and demonstrated using several natural images, with performance comparisons to many state-of-the-art compressive-sensing inversion algorithms.
Sparse Signal Recovery Using Markov Random Fields
Abstract

Cited by 39 (10 self)
Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals. In this paper, we extend the theory of CS to include signals that are concisely represented in terms of a graphical model. In particular, we use Markov Random Fields (MRFs) to represent sparse signals whose nonzero coefficients are clustered. Our new model-based recovery algorithm, dubbed Lattice Matching Pursuit (LaMP), stably recovers MRF-modeled signals using many fewer measurements and computations than the current state-of-the-art algorithms.
Compressive Sensing on Manifolds Using a Nonparametric Mixture of Factor Analyzers: Algorithm and Performance Bounds
Abstract

Cited by 18 (7 self)
Nonparametric Bayesian methods are employed to constitute a mixture of low-rank Gaussians, for data x ∈ R^N that are of high dimension N but are constrained to reside in a low-dimensional subregion of R^N. The number of mixture components and their rank are inferred automatically from the data. The resulting algorithm can be used for learning manifolds and for reconstructing signals from manifolds, based on compressive sensing (CS) projection measurements. The statistical CS inversion is performed analytically. We derive the required number of CS random measurements needed for successful reconstruction, based on easily computed quantities, drawing on block-sparsity properties. The proposed methodology is validated on several synthetic and real datasets.
Learning with Compressible Priors
Abstract

Cited by 15 (4 self)
We describe a set of probability distributions, dubbed compressible priors, whose independent and identically distributed (iid) realizations result in p-compressible signals. A signal x ∈ R^N is called p-compressible with magnitude R if its sorted coefficients exhibit a power-law decay as x(i) ≲ R · i^(−d), where the decay rate d is equal to 1/p. p-compressible signals live close to K-sparse signals (K ≪ N) in the ℓr-norm (r > p) since their best K-sparse approximation error decreases with O(R · K^(1/r−1/p)). We show that the membership of generalized Pareto, Student's t, log-normal, Fréchet, and log-logistic distributions to the set of compressible priors depends only on the distribution parameters and is independent of N. In contrast, we demonstrate that the membership of the generalized Gaussian distribution (GGD) depends both on the signal dimension and the GGD parameters: the expected decay rate of N-sample iid realizations from the GGD with the shape parameter q is given by 1/[q log(N/q)]. As stylized examples, we show via experiments that the wavelet coefficients of natural images are 1.67-compressible whereas their pixel gradients are 0.95 log(N/0.95)-compressible, on average. We also leverage the connections between compressible priors and sparse signals to develop new iterative reweighted sparse signal recovery algorithms that outperform standard ℓ1-norm minimization. Finally, we describe how to learn the hyperparameters of compressible priors in underdetermined regression problems by exploiting the geometry of their order statistics during signal recovery.
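The power-law decay of sorted magnitudes is easy to check empirically. The sketch below draws iid samples from a Student's t distribution with one degree of freedom (the Cauchy case, for which the decay rate of sorted magnitudes is 1) and fits the decay exponent on a log-log scale; the sample size and fitting range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
# iid Student's t with 1 degree of freedom (Cauchy): heavy-tailed, decay rate d = 1.
samples = rng.standard_t(df=1, size=N)
mags = np.sort(np.abs(samples))[::-1]        # sorted coefficient magnitudes x(i)

# Fit log x(i) ≈ log R − d · log i over a mid-range of indices.
i = np.arange(10, 2000)
slope, _ = np.polyfit(np.log(i), np.log(mags[i]), 1)
print(f"fitted decay rate d ≈ {-slope:.2f}")  # close to 1 for the Cauchy case
```

The same experiment with a light-tailed distribution (e.g. Gaussian) would produce no clean power-law fit, which is the qualitative distinction the compressible-priors taxonomy formalizes.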
Compressive sensing recovery of spike trains using a structured sparsity model
in Workshop on Signal Processing with Adaptive Sparse Structured Representations (SPARS), 2009
Abstract

Cited by 11 (8 self)
The theory of Compressive Sensing (CS) exploits a well-known concept used in signal compression, sparsity, to design new, efficient techniques for signal acquisition. CS theory states that for a length-N signal x with sparsity level K, M = O(K log(N/K)) random linear projections of x are sufficient to robustly recover x in polynomial time. However, richer models are often applicable in real-world settings that impose additional structure on the sparse nonzero coefficients of x. Many such models can be succinctly described as a union of K-dimensional subspaces. In recent work, we have developed a general approach for the design and analysis of robust, efficient CS recovery algorithms that exploit such signal models with structured sparsity. We apply our framework to a new signal model which is motivated by neuronal spike trains. We model the firing process of a single Poisson neuron with absolute refractoriness using a union of subspaces. We then derive a bound on the number of random projections M needed for stable embedding of this signal model, and develop an algorithm that provably recovers any neuronal spike train from M measurements. Numerical experimental results demonstrate the benefits of our model-based approach compared to conventional CS recovery techniques.
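A toy version of the idea can be sketched by replacing plain hard thresholding inside an iterative hard thresholding (IHT) loop with a greedy projection that enforces a refractory gap delta between spikes. The greedy projection and all parameter names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def refractory_threshold(v, K, delta):
    """Keep up to K largest-magnitude entries spaced at least `delta` apart:
    a greedy, approximate projection onto the refractory spike-train model."""
    out = np.zeros_like(v)
    chosen = []
    for idx in np.argsort(-np.abs(v)):
        if len(chosen) == K:
            break
        if all(abs(idx - j) >= delta for j in chosen):
            chosen.append(int(idx))
    out[chosen] = v[chosen]
    return out

def model_iht(Phi, y, K, delta, iters=300):
    """Iterative hard thresholding using the structured projection above."""
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = refractory_threshold(x + Phi.T @ (y - Phi @ x), K, delta)
    return x

rng = np.random.default_rng(2)
N, M, K, delta = 200, 100, 5, 12
Phi = np.linalg.qr(rng.standard_normal((N, M)))[0].T  # orthonormal rows: unit step is stable
spikes = np.array([5, 40, 77, 120, 181])              # true firing times, >= delta apart
x = np.zeros(N)
x[spikes] = 1.0 + rng.random(K)                       # spike amplitudes in [1, 2)
y = Phi @ x                                           # M compressive measurements
x_hat = model_iht(Phi, y, K, delta)
print(np.flatnonzero(x_hat))                          # ideally the true spike locations
```

The structured projection is where the model enters: a plain K-sparse threshold could place two "spikes" one sample apart, which the refractory model forbids.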
Efficient sampling of sparse wideband analog signals
in Proc. Conv. IEEE in Israel (IEEEI), Eilat, 2008
Abstract

Cited by 6 (3 self)
Periodic nonuniform sampling is a known method to sample spectrally sparse signals below the Nyquist rate. This strategy relies on the implicit assumption that the individual samplers are exposed to the entire frequency range. This assumption becomes impractical for wideband sparse signals. The current paper proposes an alternative sampling stage that does not require a full-band front end. Instead, signals are captured with an analog front end that consists of a bank of multipliers and low-pass filters whose cutoff is much lower than the Nyquist rate. The problem of recovering the original signal from the low-rate samples can be studied within the framework of compressive sampling. An appropriate parameter selection ensures that the samples uniquely determine the analog input. Moreover, the analog input can be stably reconstructed with digital algorithms. Numerical experiments support the theoretical analysis. Index Terms: analog-to-digital conversion, compressive sampling, infinite measurement vectors (IMV), multiband sampling.
Channel Protection: Random Coding Meets Sparse Channels
Abstract

Cited by 5 (1 self)
Multipath interference is a ubiquitous phenomenon in modern communication systems. The conventional way to compensate for this effect is to equalize the channel by estimating its impulse response from a set of transmitted training symbols. The primary drawback of this type of approach is that it can be unreliable if the channel is changing rapidly. In this paper, we show that randomly encoding the signal can protect it against channel uncertainty when the channel is sparse. Before transmission, the signal is mapped into a slightly longer codeword using a random matrix. From the received signal, we are able to simultaneously estimate the channel and recover the transmitted signal. We discuss two schemes for the recovery, both of which exploit the sparsity of the underlying channel. We show that if the channel impulse response is sufficiently sparse, the transmitted signal can be recovered reliably.
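A rough intuition for why a known random codeword exposes a sparse channel can be sketched with a matched-filter toy model (an illustration only, not either of the paper's two recovery schemes): correlating the received signal with the known codeword concentrates each sparse channel tap into a peak at its delay, while cross-terms stay small. The codeword length, delays, and gains below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 256
code = rng.choice([-1.0, 1.0], size=L)   # known random spreading codeword (illustrative)

# Sparse channel: a few taps at unknown delays.
delays, gains = [0, 7, 19], [1.0, 0.8, 0.9]
h = np.zeros(L)
h[delays] = gains

# Received signal: circular convolution of the codeword with the channel (noiseless).
y = np.real(np.fft.ifft(np.fft.fft(code) * np.fft.fft(h)))

# Matched filter: correlate with the known codeword; taps show up as peaks.
corr = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(code)))) / L
detected = np.flatnonzero(np.abs(corr) > 0.4 * np.abs(corr).max())
print(sorted(int(d) for d in detected))  # the sparse taps' delays
```

The near-flat autocorrelation of a random ±1 sequence is what makes this work; the paper's actual schemes go further and exploit sparsity to recover the data symbols as well.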
Sparse Geodesic Paths
Abstract

Cited by 1 (1 self)
In this paper we propose a new distance metric for signals that admit a sparse representation in a known basis or dictionary. The metric is derived as the length of the sparse geodesic path between two points, by which we mean the shortest path between the points that is itself sparse. We show that the distance can be computed via a simple formula and that the entire geodesic path can be easily generated. The distance provides a natural similarity measure that can be exploited as a perceptually meaningful distance metric for natural images. Furthermore, the distance has applications in supervised, semi-supervised, and unsupervised learning settings.
Fast Hard Thresholding with Nesterov’s Gradient Method
Abstract

Cited by 1 (0 self)
We provide an algorithmic framework for structured sparse recovery which unifies combinatorial optimization with the nonsmooth convex optimization framework of Nesterov [1, 2]. Our algorithm, dubbed Nesterov iterative hard-thresholding (NIHT), is similar in spirit to the algebraic pursuits (ALPS) in [3]: we use the gradient information in the convex data-error objective to navigate over the nonconvex set of structured sparse signals. While ALPS features a priori approximation guarantees, we were only able to provide an online approximation guarantee for NIHT (i.e., the guarantee depends on the algorithm's execution). Experiments show, however, that NIHT can empirically outperform ALPS and other state-of-the-art convex-optimization-based algorithms in sparse recovery.
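The gradient-plus-thresholding idea can be sketched as a generic hard-thresholding loop driven by a Nesterov-style momentum sequence. This is an illustrative stand-in, not the authors' NIHT implementation; all dimensions and step choices are assumptions.

```python
import numpy as np

def hard_threshold(v, K):
    """Keep the K largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(-np.abs(v))[:K]
    out[keep] = v[keep]
    return out

def momentum_iht(Phi, y, K, iters=150):
    """Hard-thresholding recovery with a Nesterov-style momentum sequence (illustrative)."""
    x = x_prev = np.zeros(Phi.shape[1])
    t = 1.0
    for _ in range(iters):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        v = x + ((t - 1.0) / t_next) * (x - x_prev)       # momentum extrapolation
        x_prev, t = x, t_next
        x = hard_threshold(v + Phi.T @ (y - Phi @ v), K)  # gradient step, then project
    return x

rng = np.random.default_rng(4)
N, M, K = 256, 100, 8
Phi = np.linalg.qr(rng.standard_normal((N, M)))[0].T  # orthonormal rows: unit step is stable
x_true = np.zeros(N)
support = rng.choice(N, K, replace=False)
x_true[support] = rng.uniform(1.0, 2.0, K) * rng.choice([-1.0, 1.0], K)
y = Phi @ x_true
x_hat = momentum_iht(Phi, y, K)
print(np.linalg.norm(x_true - x_hat) / np.linalg.norm(x_true))  # small recovery error
```

The momentum sequence is the standard Nesterov update from smooth convex optimization; combining it with the nonconvex thresholding step is exactly the kind of hybrid the abstract describes, and the guarantees for such hybrids are weaker than in the purely convex case.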