Expectation-Maximization Gaussian-Mixture Approximate Message Passing
Cited by 40 (12 self)
Abstract—When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal’s nonzero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution were known a priori, one could use efficient approximate message passing (AMP) techniques for nearly minimum-MSE (MMSE) recovery. In practice, though, the distribution is unknown, motivating the use of robust algorithms like Lasso—which is nearly minimax optimal—at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, we propose an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal—according to the learned distribution—using AMP. In particular, we model the nonzero distribution as a Gaussian mixture and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments confirm the state-of-the-art performance of our approach on a range of signal classes.
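As a rough illustration of the expectation-maximization idea above (learning Gaussian-mixture parameters from data), the following is a minimal 1-D EM sketch in plain NumPy. It is not the paper's EM-GM-AMP algorithm, which runs EM on the AMP posterior rather than on raw samples; the function name `em_gmm_1d` and all parameter choices are hypothetical.

```python
import numpy as np

def em_gmm_1d(y, K=2, iters=100):
    """Minimal 1-D Gaussian-mixture EM: learn weights, means, variances.

    Illustrative sketch only; not the EM-GM-AMP algorithm from the abstract.
    """
    w = np.full(K, 1.0 / K)
    mu = np.percentile(y, np.linspace(10, 90, K))   # spread-out initialization
    var = np.full(K, np.var(y))
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        logp = (-0.5 * (y[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2 * np.pi * var) + np.log(w))
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture parameters from the responsibilities
        n = r.sum(axis=0)
        w = n / len(y)
        mu = (r * y[:, None]).sum(axis=0) / n
        var = (r * (y[:, None] - mu) ** 2).sum(axis=0) / n + 1e-12
    return w, mu, var

# Synthetic data: an equal-weight mixture centered at -5 and +5
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(-5, 1, 2000), rng.normal(5, 1, 2000)])
w, mu, var = em_gmm_1d(y)
```

With well-separated components the learned means, weights, and variances land near the true values; the full algorithm in the abstract interleaves these updates with AMP's expectation step instead of observing the signal directly.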
Support recovery with sparsely sampled free random matrices
 in Proc. IEEE Int. Symp. Inf. Theory, Saint
, 2011
Cited by 27 (1 self)
Abstract—Consider a Bernoulli-Gaussian complex n-vector whose components are Vi = XiBi, with Xi ∼ CN(0, Px) and binary Bi mutually independent and i.i.d. across i. This random q-sparse vector is multiplied by a square random matrix U, and a randomly chosen subset, of average size np, p ∈ [0, 1], of the resulting vector components is then observed in additive Gaussian noise. We extend the scope of conventional noisy compressive sampling models, where U is typically a matrix with i.i.d. components, to allow U satisfying a certain freeness condition. This class of matrices encompasses Haar matrices and other unitarily invariant matrices. We use the replica method and the decoupling principle of Guo and Verdú, as well as a number of information-theoretic bounds, to study the input–output mutual information and the support recovery error rate in the limit of n → ∞. We also extend the scope of the large deviation approach of Rangan, Fletcher and Goyal and characterize the performance of a class of estimators encompassing thresholded linear MMSE and ℓ1 relaxation.
MMSE dimension
 in Proc. 2010 IEEE Int. Symp. Inf. Theory
, 2010
Cited by 21 (9 self)
Abstract—If N is standard Gaussian, the minimum mean-square error (MMSE) of estimating a random variable X based on √snr X + N vanishes at least as fast as 1/snr as snr → ∞. We define the MMSE dimension of X as the limit, as snr → ∞, of the product of snr and the MMSE. MMSE dimension is also shown to be the asymptotic ratio of nonlinear MMSE to linear MMSE. For discrete, absolutely continuous or mixed distributions we show that MMSE dimension equals Rényi’s information dimension. However, for a class of self-similar singular distributions (e.g., the Cantor distribution), we show that the product of snr and MMSE oscillates around the information dimension periodically in snr (dB). We also show that these results extend considerably beyond Gaussian noise under various technical conditions. Index Terms—Additive noise, Bayesian statistics, Gaussian noise, high-SNR asymptotics, minimum mean-square error (MMSE), mutual information, non-Gaussian noise, Rényi information dimension.
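The limiting behavior described above can be checked numerically for two simple inputs. The sketch below (hypothetical helper names, not from the paper) uses the fact that for standard Gaussian X the MMSE is exactly 1/(1+snr), so snr · MMSE → 1, matching information dimension 1 of an absolutely continuous distribution; for the discrete input X ∈ {−1, +1} a Monte Carlo estimate of snr · MMSE is essentially 0 at high snr, matching information dimension 0.

```python
import numpy as np

def snr_times_mmse_gaussian(snr):
    # Standard Gaussian X observed as sqrt(snr)*X + N: the optimal estimator
    # is linear and MMSE = 1/(1+snr), so snr * MMSE = snr/(1+snr) -> 1.
    return snr / (1.0 + snr)

def snr_times_mmse_binary(snr, n=200_000, seed=0):
    # X uniform on {-1,+1}: the conditional mean is E[X|Y=y] = tanh(sqrt(snr)*y).
    # Monte Carlo estimate of snr * MMSE; for discrete X this tends to 0.
    rng = np.random.default_rng(seed)
    x = rng.choice([-1.0, 1.0], size=n)
    y = np.sqrt(snr) * x + rng.standard_normal(n)
    mmse = np.mean((x - np.tanh(np.sqrt(snr) * y)) ** 2)
    return snr * mmse
```

At snr = 100 the Gaussian input gives snr · MMSE = 100/101 ≈ 0.99, while the binary input gives a value indistinguishable from zero, consistent with the dimension-equals-information-dimension statement for these two cases.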
Sparse signal processing with linear and nonlinear observations: A unified Shannon-theoretic approach,” arXiv preprint arXiv:1304.0682
, 2013
Cited by 11 (4 self)
In this work we derive fundamental limits for many linear and nonlinear sparse signal processing models, including linear and quantized compressive sensing, group testing, multivariate regression, and problems with missing features. In general, sparse signal processing problems can be characterized in terms of the following Markovian property. We are given a set of N variables X1, X2, ..., XN, and there is an unknown subset of variables S ⊂ {1, 2, ..., N} that are relevant for predicting outcomes/outputs Y. In other words, when Y is conditioned on {Xn}n∈S it is conditionally independent of the other variables, {Xn}n∉S. Our goal is to identify the set S from samples of the variables X and the associated outcomes Y. We characterize this problem as a version of the noisy channel coding problem. Using asymptotic information-theoretic analyses, we establish mutual information formulas that provide sufficient and necessary conditions on the number of samples required to successfully recover the salient variables. These mutual information expressions unify conditions for both linear and nonlinear observations. We then compute sample complexity bounds for the aforementioned models, based on the mutual information expressions, in order to demonstrate the applicability and flexibility of our results in general sparse signal processing models.
Reconstruction of Signals Drawn from a Gaussian Mixture via Noisy Compressive Measurements
, 2014
Cited by 8 (5 self)
This paper determines, to within a single measurement, the minimum number of measurements required to successfully reconstruct a signal drawn from a Gaussian mixture model in the low-noise regime. The method is to develop upper and lower bounds that are a function of the maximum dimension of the linear subspaces spanned by the Gaussian mixture components. The method not only reveals the existence or absence of a minimum mean-squared error (MMSE) floor (phase transition) but also provides insight into the MMSE decay via multivariate generalizations of the MMSE dimension and the MMSE power offset, which are a function of the interaction between the geometrical properties of the kernel and the Gaussian mixture. These results apply not only to standard linear random Gaussian measurements but also to linear kernels that minimize the MMSE. It is shown that optimal kernels do not change the number of measurements associated with the MMSE phase transition; rather, they affect the sensed power required to achieve a target MMSE in the low-noise regime. Overall, our bounds are tighter and sharper than standard bounds on the minimum number of measurements needed to recover sparse signals associated with a union-of-subspaces model, as they are not asymptotic in the signal dimension or signal sparsity.
Near optimal compressed sensing of sparse rank-one matrices via sparse power factorization. arXiv preprint arXiv:1312.0525
, 2013
Channel capacity under sub-Nyquist nonuniform sampling,” submitted to
 IEEE Trans on Information Theory, April 2012. [Online]. Available: http://arxiv.org/abs/1204.6049
Cited by 4 (4 self)
Abstract — This paper investigates the effect of sub-Nyquist sampling upon the capacity of an analog channel. The channel is assumed to be a linear time-invariant Gaussian channel, where perfect channel knowledge is available at both the transmitter and the receiver. We consider a general class of right-invertible time-preserving sampling methods which includes irregular nonuniform sampling, and characterize in closed form the channel capacity achievable by this class of sampling methods, under a sampling rate and power constraint. Our results indicate that the optimal sampling structures extract the set of frequencies that exhibits the highest signal-to-noise ratio among all spectral sets of measure equal to the sampling rate. This can be attained through filter-bank sampling with a uniform sampling grid employed at each branch with possibly different rates, or through a single branch of modulation and filtering followed by uniform sampling. These results reveal that, for a large class of channels, employing irregular nonuniform sampling sets, which are typically complicated to realize in practice, does not provide capacity gain over uniform sampling sets with appropriate preprocessing. Our findings demonstrate that aliasing or scrambling of spectral components does not provide capacity gain in this scenario, which is in contrast to the benefits obtained from random mixing in spectrum-blind compressive sampling schemes. Index Terms — Nonuniform sampling, irregular sampling, sampled analog channels, sub-Nyquist sampling, channel
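The "extract the highest-SNR spectral set" principle stated in the abstract can be sketched with a toy discrete-frequency model. Assuming each selected bin behaves as an independent Gaussian subchannel with a fixed per-bin SNR (and ignoring power reallocation across bins, which the paper does handle), the capacity-maximizing size-m selection is simply the m highest-SNR bins. The function names and the SNR profile below are hypothetical.

```python
import numpy as np
from itertools import combinations

def capacity_of_selection(snr, idx):
    # Total rate (bits per channel use) of the chosen bins, each treated as
    # an independent Gaussian subchannel with the given per-bin SNR.
    return float(np.sum(np.log2(1.0 + snr[list(idx)])))

def best_bins(snr, m):
    # Capacity-maximizing size-m spectral set: the m highest-SNR bins.
    return np.argsort(snr)[-m:]

# Toy SNR profile over 6 frequency bins; sampling-rate budget of 3 bins
snr = np.array([0.5, 3.0, 1.2, 7.0, 0.1, 2.5])
best = capacity_of_selection(snr, best_bins(snr, 3))

# Exhaustive check: no other size-3 spectral set does better
assert all(best >= capacity_of_selection(snr, s) - 1e-12
           for s in combinations(range(len(snr)), 3))
```

In this simplified setting the greedy frequency selection is trivially optimal; the substance of the paper is showing that, with power constraints and time-preserving sampling structures, the same highest-SNR selection remains optimal and is realizable by filter banks or modulation plus filtering.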
Analysis of Regularized LS Reconstruction and Random Matrix Ensembles in Compressed Sensing
, 2013
Cited by 4 (1 self)
Performance of regularized least-squares estimation in noisy compressed sensing is analyzed in the limit when the dimensions of the measurement matrix grow large. The sensing matrix is considered to be from a class of random ensembles that encloses as special cases standard Gaussian, row-orthogonal and so-called T-orthogonal constructions. Source vectors that have non-uniform sparsity are included in the system model. Regularization based on the ℓ1-norm, leading to LASSO estimation or basis pursuit denoising, is given the main emphasis in the analysis. Extensions to ℓ2-norm and “zero-norm” regularization are also briefly discussed. The analysis is carried out using the replica method in conjunction with some novel matrix integration results. Numerical experiments for LASSO are provided to verify the accuracy of the analytical results. The numerical experiments show that for noisy compressed sensing, the standard Gaussian ensemble is a suboptimal choice for the measurement matrix. Orthogonal constructions provide superior performance in all considered scenarios and are easier to implement in practical applications. It is also discovered that for non-uniform sparsity patterns the T-orthogonal matrices can further improve the mean square error behavior of the reconstruction when the noise level is not too high. However, as the additive noise becomes more prominent in the system, the simple row-orthogonal measurement matrix appears to be the best choice out of the considered ensembles.
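A small experiment in the spirit of the comparison above can be run with a basic ISTA solver for the LASSO. This sketch (hypothetical function names, arbitrary problem sizes and regularization level) only shows the mechanics of comparing a standard Gaussian ensemble with a row-orthogonal one; it is not the paper's replica analysis, and no conclusion about which ensemble wins should be drawn from one small instance.

```python
import numpy as np

def ista_lasso(A, y, lam, iters=500):
    """ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1 (a basic LASSO solver)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + A.T @ (y - A @ x) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 100, 10
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)

A_gauss = rng.standard_normal((m, n)) / np.sqrt(m)     # i.i.d. Gaussian ensemble
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A_orth = Q[:m] * np.sqrt(n / m)                        # row-orthogonal, same energy

errs = {}
for name, A in [("gaussian", A_gauss), ("row-orthogonal", A_orth)]:
    y = A @ x0 + 0.01 * rng.standard_normal(m)
    x_hat = ista_lasso(A, y, lam=0.02)
    errs[name] = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
```

Both ensembles are scaled so the sensed energy matches, which is the kind of normalization needed before any fair MSE comparison of the sort the paper carries out analytically.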
Minimax Capacity Loss under Sub-Nyquist Universal Sampling
, 2014
Cited by 2 (2 self)
This paper considers the capacity of subsampled analog channels when the sampler is designed to operate independently of instantaneous channel realizations. A compound multiband Gaussian channel with unknown subband occupancy is considered, with perfect channel state information available at both the receiver and the transmitter. We restrict our attention to a general class of periodic sub-Nyquist samplers, which subsumes as special cases sampling with periodic modulation and filter banks. We evaluate the loss due to channel-independent (universal) sub-Nyquist design through a sampled capacity loss metric, that is, the gap between the undersampled channel capacity and the Nyquist-rate capacity. We investigate sampling methods that minimize the worst-case (minimax) capacity loss over all channel states. A fundamental lower bound on the minimax capacity loss is first developed, which depends only on the band sparsity ratio and the undersampling factor, modulo a residual term that vanishes at high signal-to-noise ratio. We then quantify the capacity loss under Landau-rate sampling with periodic modulation and lowpass filters, when the Fourier coefficients of the modulation waveforms are randomly generated and independent (resp. i.i.d. Gaussian-distributed), termed independent random sampling (resp. Gaussian sampling). Our results indicate that with exponentially high probability, independent random sampling and Gaussian sampling achieve minimax sampled capacity loss in the Landau-rate and super-Landau
The minimax noise sensitivity in compressed sensing
 in Proc. 2013 IEEE Int. Symp. Inf. Theory (ISIT)
, 2013
Cited by 1 (0 self)
Abstract—Consider the compressed sensing problem of estimating an unknown k-sparse n-vector from a set of m noisy linear equations. Recent work focused on the noise sensitivity of particular algorithms: the scaling of the reconstruction error with added noise. In this paper, we study the minimax noise sensitivity, where the minimum is over all possible recovery algorithms and the maximum is over all vectors obeying a sparsity constraint. This fundamental quantity characterizes the difficulty of recovery when nothing is known about the vector other than the fact that it has at most k nonzero entries. Assuming random sensing matrices (i.i.d. Gaussian), we obtain non-asymptotic bounds which show that the minimax noise sensitivity is finite if m ≥ k + 3 and infinite if m ≤ k + 1. We also study the large-system behavior, where δ = m/n ∈ (0, 1) denotes the undersampling fraction and k/n = ε ∈ (0, 1) denotes the sparsity fraction. There is a phase transition separating successful and unsuccessful recovery: the minimax noise sensitivity is bounded for any δ > ε and is unbounded for any δ < ε. One consequence of our results is that the Bayes-optimal phase transitions of Wu and Verdú can be obtained uniformly over the class of all sparse vectors.
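A closely related, much simpler quantity gives intuition for why a few extra measurements beyond k matter: for least squares on a known k-sparse support with an m × k i.i.d. Gaussian matrix A, the noise-normalized MSE is trace((AᵀA)⁻¹), whose expectation is the standard inverse-Wishart value k/(m − k − 1), finite only for m ≥ k + 2. This oracle sketch (hypothetical function name) is not the paper's minimax analysis, only a Monte Carlo check of that classical identity.

```python
import numpy as np

def oracle_ls_sensitivity(m, k, trials=20_000, seed=0):
    """Monte Carlo estimate of E[trace((A^T A)^{-1})] for A with i.i.d.
    N(0,1) entries: the noise-normalized MSE of least squares on a known
    k-sparse support. Classical result: equals k/(m - k - 1) for m >= k + 2.
    """
    rng = np.random.default_rng(seed)
    tot = 0.0
    for _ in range(trials):
        A = rng.standard_normal((m, k))
        tot += np.trace(np.linalg.inv(A.T @ A))
    return tot / trials
```

For m = 10, k = 5 the estimate concentrates near 5/4, and the k/(m − k − 1) expression blows up as m approaches k + 1, mirroring (in the oracle setting) the finite-versus-infinite sensitivity boundary the abstract establishes for the full minimax problem.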