Results 1–10 of 15
Necessary and sufficient conditions on sparsity pattern recovery
, 2009
"... The paper considers the problem of detecting the sparsity pattern of a ksparse vector in R n from m random noisy measurements. A new necessary condition on the number of measurements for asymptotically reliable detection with maximum likelihood (ML) estimation and Gaussian measurement matrices is ..."
Abstract

Cited by 46 (8 self)
 Add to MetaCart
The paper considers the problem of detecting the sparsity pattern of a k-sparse vector in R^n from m random noisy measurements. A new necessary condition on the number of measurements for asymptotically reliable detection with maximum likelihood (ML) estimation and Gaussian measurement matrices is derived. This necessary condition for ML detection is compared against a sufficient condition for simple maximum correlation (MC) or thresholding algorithms. The analysis shows that the gap between thresholding and ML can be described by a simple expression in terms of the total signal-to-noise ratio (SNR), with the gap growing with increasing SNR. Thresholding is also compared against the more sophisticated lasso and orthogonal matching pursuit (OMP) methods. At high SNRs, it is shown that the gain of lasso and OMP over thresholding is described by the range of powers of the nonzero component values of the unknown signals. Specifically, the key benefit of lasso and OMP over thresholding is their ability to detect signals with relatively small components.
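The maximum correlation (thresholding) detector discussed in this abstract admits a very short sketch. The snippet below assumes the usual linear model y = Ax + w and simply keeps the k columns of A most correlated with the observation; the function name and toy data are illustrative, not from the paper:

```python
import numpy as np

def mc_support_estimate(A, y, k):
    """Maximum-correlation (thresholding) support detection:
    score each column of A by |<a_j, y>| and keep the k largest."""
    scores = np.abs(A.T @ y)
    return set(np.argsort(scores)[-k:].tolist())

# Tiny deterministic check: with A = I, the scores are just |y|,
# so the top-2 support of y = [0, 3, 0, 1] is {1, 3}.
A = np.eye(4)
y = A @ np.array([0.0, 3.0, 0.0, 1.0])
mc_support_estimate(A, y, 2)  # → {1, 3}
```

The single matrix-vector product is what makes thresholding so much cheaper than ML, at the cost of the SNR gap the paper quantifies.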
Asymptotic analysis of MAP estimation via the replica method and applications to compressed sensing
, 2009
"... The replica method is a nonrigorous but widelyaccepted technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method to nonGaussian maximum a posteriori (MAP) estimation. It is shown that with random linear measureme ..."
Abstract

Cited by 25 (3 self)
 Add to MetaCart
The replica method is a non-rigorous but widely accepted technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method to non-Gaussian maximum a posteriori (MAP) estimation. It is shown that with random linear measurements and Gaussian noise, the asymptotic behavior of the MAP estimate of an n-dimensional vector "decouples" into n scalar MAP estimators. The result is a counterpart to Guo and Verdú's replica analysis of minimum mean-squared error estimation. The replica MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero norm-regularized estimation. In the case of lasso estimation the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for exactly computing various performance metrics, including mean-squared error and sparsity pattern recovery probability.
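The two scalar operators named above are standard and take one line each; a minimal sketch (soft threshold for the lasso case, hard threshold for zero norm-regularization):

```python
import numpy as np

def soft_threshold(v, t):
    """Soft-thresholding operator: shrink each entry toward zero by t
    (the scalar estimator for lasso in the decoupled model)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def hard_threshold(v, t):
    """Hard-thresholding operator: zero out entries with magnitude below t
    (the scalar estimator for zero norm-regularized estimation)."""
    return np.where(np.abs(v) >= t, v, 0.0)

v = np.array([-2.0, -0.5, 0.3, 1.5])
soft_threshold(v, 1.0)  # → [-1.,  0.,  0.,  0.5]
hard_threshold(v, 1.0)  # → [-2.,  0.,  0.,  1.5]
```

Note the qualitative difference: soft thresholding biases the surviving entries toward zero, while hard thresholding keeps them unchanged.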
Sampling Bounds for Sparse Support Recovery in the Presence of Noise
"... It is well known that the support of a sparse signal can be recovered from a small number of random projections. However, in the presence of noise all known sufficient conditions require that the persample signaltonoise ratio (SNR) grows without bound with the dimension of the signal. If the nois ..."
Abstract

Cited by 18 (1 self)
 Add to MetaCart
It is well known that the support of a sparse signal can be recovered from a small number of random projections. However, in the presence of noise all known sufficient conditions require that the per-sample signal-to-noise ratio (SNR) grow without bound with the dimension of the signal. If the noise is due to quantization of the samples, this means that an unbounded rate per sample is needed. In this paper, it is shown that an unbounded SNR is also a necessary condition for perfect recovery, but that any fraction (less than one) of the support can be recovered with bounded SNR. This means that a finite rate per sample is sufficient for partial support recovery. Necessary and sufficient conditions are given for both stochastic and non-stochastic signal models. This problem arises in settings such as compressive sensing, model selection, and signal denoising.
Fast Bayesian Matching Pursuit: Model Uncertainty and Parameter Estimation for Sparse Linear Models
 IEEE TRANSACTIONS ON SIGNAL PROCESSING
, 2009
"... A lowcomplexity recursive procedure is presented for model selection and minimum mean squared error (MMSE) estimation in linear regression. Emphasis is given to the case of a sparse parameter vector and fewer observations than unknown parameters. A Gaussian mixture is chosen as the prior on the un ..."
Abstract

Cited by 17 (2 self)
 Add to MetaCart
A low-complexity recursive procedure is presented for model selection and minimum mean-squared error (MMSE) estimation in linear regression. Emphasis is given to the case of a sparse parameter vector with fewer observations than unknown parameters. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both a set of high-posterior-probability mixing parameters and an approximate MMSE estimate of the parameter vector. Exact ratios of posterior probabilities serve to reveal ambiguity among multiple candidate solutions caused by observation noise or by correlation among columns of the regressor matrix. Algorithm complexity is linear in the number of unknown coefficients, the number of observations, and the number of nonzero coefficients. If hyperparameters are unknown, a maximum likelihood estimate is found by a generalized expectation-maximization algorithm. Numerical simulations demonstrate estimation performance and illustrate the distinctions between MMSE estimation and maximum a posteriori probability model selection.
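The "exact ratios of posterior probabilities" mentioned above are computable in closed form for a Bernoulli-Gaussian prior, which is one common instance of the Gaussian-mixture setup. A sketch under that assumed model (the prior parameters p, sx2, sw2 and the function name are illustrative, not the paper's notation):

```python
import numpy as np

def log_posterior(support, A, y, p, sx2, sw2):
    """Unnormalized log posterior of a candidate support under an assumed
    Bernoulli-Gaussian prior: x_i ~ N(0, sx2) if i in support, else x_i = 0,
    with activity probability p, and y = A x + N(0, sw2 I)."""
    m, n = A.shape
    S = sorted(support)
    # Marginal covariance of y given the support (x integrated out).
    C = sw2 * np.eye(m)
    if S:
        As = A[:, S]
        C = C + sx2 * (As @ As.T)
    sign, logdet = np.linalg.slogdet(C)
    loglik = -0.5 * (logdet + y @ np.linalg.solve(C, y) + m * np.log(2 * np.pi))
    logprior = len(S) * np.log(p) + (n - len(S)) * np.log(1.0 - p)
    return loglik + logprior

# Two candidate supports can then be compared exactly via
# exp(log_posterior(S1, ...) - log_posterior(S2, ...)).
```

Evaluating this over a search tree of supports, rather than exhaustively, is what keeps the procedure low-complexity.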
Informationtheoretic limits on sparse support recovery: Dense versus sparse measurements
, 2008
"... We study the informationtheoretic limits of exactly recovering the support of a sparse signal using noisy projections defined by various classes of measurement matrices. Our analysis is highdimensional in nature, in which the number of
observations n, the ambient signal dimension p, and the signal ..."
Abstract

Cited by 5 (1 self)
 Add to MetaCart
We study the information-theoretic limits of exactly recovering the support of a sparse signal using noisy projections defined by various classes of measurement matrices. Our analysis is high-dimensional in nature, in which the number of observations n, the ambient signal dimension p, and the signal sparsity k are all allowed to tend to infinity in a general manner. This paper makes two novel contributions. First, we provide sharper necessary conditions for exact support recovery using general (non-Gaussian) dense measurement matrices. Combined with previously known sufficient conditions, this result yields a sharp characterization of when the optimal decoder can recover a signal with linear sparsity (k = Θ(p)) using a linear scaling of observations (n = Θ(p)) in the presence of noise. Our second contribution is to prove necessary conditions on the number of observations n required for asymptotically reliable recovery using a class of γ-sparsified measurement matrices, where the measurement sparsity γ(n, p, k) ∈ (0, 1] corresponds to the fraction of nonzero entries per row. Our analysis allows general scaling of the quadruplet (n, p, k, γ) and reveals three different regimes, corresponding to whether measurement sparsity has no effect, a minor effect, or a dramatic effect on the information-theoretic limits of the subset recovery problem.
Orthogonal matching pursuit from noisy measurements: A new analysis
 in Proc. Neural Inf. Processing Syst. Conf. (NIPS
, 2009
"... A wellknown analysis of Tropp and Gilbert shows that orthogonal matching pursuit (OMP) can recover a ksparse ndimensional real vector from m = 4klog(n) noisefree linear measurements obtained through a random Gaussian measurement matrix with a probability that approaches one as n → ∞. This work s ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
A well-known analysis of Tropp and Gilbert shows that orthogonal matching pursuit (OMP) can recover a k-sparse n-dimensional real vector from m = 4k log(n) noise-free linear measurements obtained through a random Gaussian measurement matrix, with a probability that approaches one as n → ∞. This work strengthens that result by showing that a smaller number of measurements, m = 2k log(n − k), is in fact sufficient for asymptotic recovery. More generally, when the sparsity level satisfies k_min ≤ k ≤ k_max but is otherwise unknown, m = 2k_max log(n − k_min) measurements suffice. Furthermore, this number of measurements is also sufficient for detection of the sparsity pattern (support) of the vector in the presence of measurement errors, provided the signal-to-noise ratio (SNR) scales to infinity. The scaling m = 2k log(n − k) exactly matches the number of measurements required by the more complex lasso method for signal recovery under a similar SNR scaling.
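OMP itself is a short greedy loop; a minimal sketch, assuming the sparsity level k is known and used as the stopping rule (the toy matrix below is illustrative):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit all selected columns
    by least squares and recompute the residual."""
    support = []
    residual = y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

# Noise-free toy check: exact recovery of a 2-sparse vector.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
x = np.array([2.0, 0.0, 3.0])
x_hat = omp(A, A @ x, 2)  # recovers x exactly
```

The least-squares re-fit at every step (the "orthogonal" part) is what distinguishes OMP from plain matching pursuit.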
A Note on Optimal Support Recovery in Compressed Sensing
, 2009
"... Recovery of the support set (or sparsity pattern) of a sparse vector from a small number of noisy linear projections (or samples) is a “compressed sensing ” problem that arises in signal processing and statistics. Although many computationally efficient recovery algorithms have been studied, the opt ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
Recovery of the support set (or sparsity pattern) of a sparse vector from a small number of noisy linear projections (or samples) is a "compressed sensing" problem that arises in signal processing and statistics. Although many computationally efficient recovery algorithms have been studied, the optimality (or gap from optimality) of these algorithms is, in general, not well understood. In this note, approximate support recovery under a Gaussian prior is considered, and it is shown that the optimal estimator depends, in general, on the recovery metric. By contrast, it is shown that in the high- and low-SNR limits there exist uniformly near-optimal estimators, namely the ML estimate in the high-SNR case and a computationally trivial thresholding algorithm in the low-SNR case.
Distributed Sensor Perception via Sparse Representation
 THE PROCEEDINGS OF IEEE
"... Sensor network scenarios are considered where the underlying signals of interest exhibit a degree of sparsity, which means that in an appropriate basis, they can be expressed in terms of a small number of nonzero coefficients. Following the emerging theory of compressive sensing, an overall architec ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
Sensor network scenarios are considered in which the underlying signals of interest exhibit a degree of sparsity, meaning that in an appropriate basis they can be expressed in terms of a small number of nonzero coefficients. Following the emerging theory of compressive sensing, an overall architecture is considered in which the sensors acquire potentially noisy projections of the data, and the underlying sparsity is exploited to recover useful information about the signals of interest; this is referred to as distributed sensor perception. First, we discuss which projections of the data should be acquired, and how many of them. Then, we discuss how to take advantage of possible joint sparsity of the signals acquired by multiple sensors, and show how this can further improve the inference of events from the sensor network. Two practical sensor applications are demonstrated, namely distributed wearable action recognition using low-power motion sensors and distributed object recognition using high-power camera sensors. Experimental data support the utility of the compressive sensing framework in distributed sensor perception.
A sparsity detection framework for on–off random access channels
 in Proc. IEEE Int. Symp. Inform. Th., Seoul, Korea, Jun.–Jul. 2009
"... Abstract—This paper considers a simple on–off random multiple access channel (MAC), where n users communicate simultaneously to a single receiver. Each user is assigned a single codeword which it transmits with some probability λ over m degrees of freedom. The receiver must detect which users transm ..."
Abstract

Cited by 3 (2 self)
 Add to MetaCart
This paper considers a simple on–off random multiple access channel (MAC), where n users communicate simultaneously with a single receiver. Each user is assigned a single codeword, which it transmits with some probability λ over m degrees of freedom. The receiver must detect which users transmitted. We show that detection for this random MAC is mathematically equivalent to a standard sparsity detection problem. Using recent results in sparse estimation, we estimate the capacity of these channels and compare the achieved performance of various detection algorithms. The analysis provides insight into the roles of power control and multiuser detection.
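The equivalence described above can be sketched directly: user activity is a Bernoulli(λ) on–off vector, the codebook is the measurement matrix, and detecting active users is support recovery. All parameter values and the final thresholding detector below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, m = 200, 120
lam = 0.03        # per-user transmission probability (lambda)
snr = 20.0        # per-active-user power relative to unit noise

# Each user's codeword is one column of the m x n codebook.
A = rng.normal(size=(m, n_users)) / np.sqrt(m)
active = rng.random(n_users) < lam             # Bernoulli(lambda) activity
x = active.astype(float) * np.sqrt(snr)        # on-off transmit amplitudes
y = A @ x + rng.normal(size=m)                 # received superposition + noise

# Multiuser detection = sparsity pattern recovery on x,
# e.g. via simple correlation thresholding:
scores = np.abs(A.T @ y)
detected = scores > 0.5 * np.sqrt(snr)
```

Any sparse-estimation algorithm (thresholding, lasso, OMP) can be dropped in at the last step, which is precisely why results on sparsity detection translate into capacity estimates for this channel.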