Expectation-Maximization Gaussian-Mixture Approximate Message Passing
Cited by 40 (12 self)
Abstract—When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal's nonzero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution were known a priori, one could use efficient approximate message passing (AMP) techniques for nearly minimum-MSE (MMSE) recovery. In practice, though, the distribution is unknown, motivating the use of robust algorithms like Lasso, which is nearly minimax optimal, at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, we propose an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal, according to the learned distribution, using AMP. In particular, we model the nonzero distribution as a Gaussian mixture and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments confirm the state-of-the-art performance of our approach on a range of signal classes.
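The abstract above alternates EM parameter updates with an AMP-based expectation step. A minimal sketch of one such EM pass, under the simplifying assumption that AMP supplies scalar pseudo-measurements r_n = x_n + N(0, tau) with a common noise level tau (the function and variable names here are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def em_gm_update(r, tau, omega, mu, sigma2):
    """One EM update of Gaussian-mixture parameters (omega, mu, sigma2)
    from AMP-style pseudo-measurements r = x + N(0, tau).
    Illustrative sketch; not the paper's exact EM-GM-AMP recursion."""
    # E-step: responsibility of component l for each coefficient,
    # using the marginal r_n ~ N(mu_l, sigma2_l + tau)
    var = sigma2 + tau                                   # shape (L,)
    logp = (np.log(omega) - 0.5 * np.log(2 * np.pi * var)
            - 0.5 * (r[:, None] - mu) ** 2 / var)        # shape (N, L)
    logp -= logp.max(axis=1, keepdims=True)              # numerical stability
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # Posterior mean/variance of x_n given component l (Gaussian x Gaussian)
    post_var = sigma2 * tau / var
    post_mean = (r[:, None] * sigma2 + mu * tau) / var
    # M-step: re-estimate mixture weights, means, and variances
    Nl = resp.sum(axis=0)
    omega_new = Nl / len(r)
    mu_new = (resp * post_mean).sum(axis=0) / Nl
    sigma2_new = (resp * (post_var + (post_mean - mu_new) ** 2)).sum(axis=0) / Nl
    return omega_new, mu_new, sigma2_new
```

Each call refines the mixture fit; in the paper's setting these updates would be interleaved with AMP iterations that refresh r and tau.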
Graphical Models Concepts in Compressed Sensing
Cited by 37 (2 self)
This paper surveys recent work in applying ideas from graphical models and message passing algorithms to solve large-scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via ℓ1-penalized least-squares (known as LASSO or BPDN). We discuss how to derive fast approximate message passing algorithms to solve this problem. Surprisingly, the analysis of such algorithms allows one to prove exact high-dimensional limit results for the LASSO risk. This paper will appear as a chapter in a book on 'Compressed Sensing' edited by Yonina Eldar and Gitta Kutyniok.
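As a rough illustration of the AMP-for-LASSO iteration this survey covers, here is a simplified sketch built from soft thresholding plus an Onsager correction term; the threshold schedule (lam times the estimated residual RMS) and the fixed iteration count are assumptions for illustration, not the survey's exact algorithm:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp_lasso(A, y, lam=2.0, iters=50):
    """Simplified AMP sketch for l1-penalized least squares."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        tau = np.sqrt(np.mean(z ** 2))        # effective noise level estimate
        x = soft(x + A.T @ z, lam * tau)      # componentwise denoising
        # Onsager correction: the average derivative of the soft threshold
        # equals the fraction of surviving (nonzero) coordinates
        z = y - A @ x + (np.count_nonzero(x) / m) * z
    return x
```

The Onsager term is what distinguishes AMP from plain iterative soft thresholding and is what makes the scalar state-evolution analysis of the LASSO risk exact in the high-dimensional limit.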
Expectation-maximization Bernoulli-Gaussian approximate message passing
in Proc. Asilomar Conf. Signals Syst. Comput.
, 2011
Cited by 31 (2 self)
Abstract—The approximate message passing (AMP) algorithm originally proposed by Donoho, Maleki, and Montanari yields a computationally attractive solution to the usual ℓ1-regularized least-squares problem faced in compressed sensing, whose solution is known to be robust to the signal distribution. When the signal is drawn i.i.d. from a marginal distribution that is not least favorable, better performance can be attained using a Bayesian variation of AMP. The latter, however, assumes that the distribution is perfectly known. In this paper, we navigate the space between these two extremes by modeling the signal as i.i.d. Bernoulli-Gaussian (BG) with unknown prior sparsity, mean, and variance, and the noise as zero-mean Gaussian with unknown variance, and we simultaneously reconstruct the signal while learning the prior signal and noise parameters. To accomplish this task, we embed the BG-AMP algorithm within an expectation-maximization (EM) framework. Numerical experiments confirm the excellent performance of our proposed EM-BG-AMP on a range of signal types.
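The scalar denoiser at the heart of a Bernoulli-Gaussian AMP can be sketched as the posterior mean and variance of a coefficient observed through a Gaussian-corrupted pseudo-measurement. The parameterization below (lam, theta, phi for the prior sparsity, mean, and variance) is illustrative, not the paper's notation:

```python
import numpy as np

def bg_denoise(r, tau, lam, theta, phi):
    """Posterior mean and variance of x under a Bernoulli-Gaussian prior,
    x ~ (1 - lam) * delta_0 + lam * N(theta, phi),
    observed through r = x + N(0, tau)."""
    # Evidence of r under the 'inactive' and 'active' hypotheses
    p0 = np.exp(-0.5 * r ** 2 / tau) / np.sqrt(2 * np.pi * tau)
    p1 = np.exp(-0.5 * (r - theta) ** 2 / (tau + phi)) / np.sqrt(2 * np.pi * (tau + phi))
    pi = lam * p1 / (lam * p1 + (1 - lam) * p0)   # posterior activity probability
    # Gaussian-times-Gaussian posterior given the coefficient is active
    gamma = (r * phi + theta * tau) / (tau + phi)
    nu = tau * phi / (tau + phi)
    mean = pi * gamma
    var = pi * (nu + gamma ** 2) - mean ** 2
    return mean, var
```

In the EM-BG-AMP setting, quantities like pi, mean, and var would also feed the EM updates that re-estimate lam, theta, phi, and the noise variance.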
Submodular Dictionary Selection for Sparse Representation
Cited by 27 (0 self)
We develop an efficient learning framework to construct signal dictionaries for sparse representation by selecting the dictionary columns from multiple candidate bases. By sparse, we mean that only a few dictionary elements, compared to the ambient signal dimension, can exactly represent or well-approximate the signals of interest. We formulate both the selection of the dictionary columns and the sparse representation of signals as a joint combinatorial optimization problem. The proposed combinatorial objective maximizes variance reduction over the set of training signals by constraining the size of the dictionary as well as the number of dictionary columns that can be used to represent each signal. We show that if the available dictionary column vectors are incoherent, our objective function satisfies approximate submodularity. We exploit this property to develop SDS-OMP and SDS-MA, two greedy algorithms with approximation guarantees. We also describe how our learning framework enables dictionary selection for structured sparse representations, e.g., where the sparse coefficients occur in restricted patterns. We evaluate our approach on synthetic signals and natural images for representation and inpainting problems.
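A generic greedy routine for this kind of variance-reduction objective looks as follows. This is a plain greedy-selection sketch, not the paper's SDS algorithms: the captured-energy objective below ignores the per-signal sparsity constraint that the paper's formulation also enforces:

```python
import numpy as np

def greedy_dictionary_select(C, Y, k):
    """Greedily pick k columns of the candidate matrix C (columns = atoms)
    that maximize the energy of the training signals Y captured by the
    span of the selected columns. Illustrative sketch only."""
    n = C.shape[1]
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            S = C[:, selected + [j]]
            # Captured energy ||P_S Y||_F^2 via least squares onto span(S)
            coef, *_ = np.linalg.lstsq(S, Y, rcond=None)
            gain = np.sum((S @ coef) ** 2)
            if gain > best_gain:
                best, best_gain = j, gain
        selected.append(best)
    return selected
```

Because the captured-energy objective is (approximately) submodular for incoherent candidates, this greedy strategy comes with the familiar constant-factor approximation guarantees for submodular maximization.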
Bayesian generalized double Pareto shrinkage
, 2010
Cited by 23 (4 self)
We propose a generalized double Pareto prior for shrinkage estimation in linear models. The prior can be obtained via a scale mixture of Laplace or normal distributions, while forming a bridge between the Laplace and normal-Jeffreys priors. While it has a spike at zero like the Laplace density, it also has Student-t-like tail behavior. We show strong consistency of the posterior in regression models with a diverging number of parameters, providing a template to be used for other priors in similar settings. Bayesian computation is straightforward via a simple Gibbs sampling algorithm. We also investigate the properties of the maximum a posteriori estimator and reveal connections with some well-established regularization procedures. The performance of the new prior is tested through simulations.
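The claimed Laplace scale-mixture representation can be checked numerically: mixing a Laplace density over a Gamma-distributed rate reproduces a closed-form double Pareto density. The specific parameterization below (shape alpha, scale eta, with rate-lam Laplace conditionals) is an assumption of this sketch, not necessarily the paper's:

```python
import numpy as np
from math import gamma as gamma_fn

def gdp_density(beta, alpha, eta):
    """Closed-form generalized double Pareto density under the assumed
    parameterization (shape alpha, scale eta)."""
    return 0.5 * alpha * eta ** alpha / (abs(beta) + eta) ** (alpha + 1)

# Scale-mixture check: beta | lam ~ Laplace with rate lam,
# lam ~ Gamma(shape=alpha, rate=eta); the marginal should match gdp_density.
alpha, eta, beta = 2.0, 1.0, 0.7
lam = np.linspace(1e-6, 60.0, 200_000)
laplace = 0.5 * lam * np.exp(-lam * abs(beta))
gamma_pdf = eta ** alpha * lam ** (alpha - 1) * np.exp(-eta * lam) / gamma_fn(alpha)
f = laplace * gamma_pdf
mixture = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lam)))  # trapezoid rule
```

The spike-at-zero / heavy-tail combination the abstract mentions follows directly from this mixture: small rates lam give the Laplace spike, while the Gamma tail on lam yields polynomial (Student-t-like) tails.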
Approximate message passing with consistent parameter estimation and applications to sparse learning, arXiv:1207.3859 [cs.IT]
, 2012
Cited by 21 (4 self)
We consider the estimation of an i.i.d. vector x ∈ R^n from measurements y ∈ R^m obtained by a general cascade model consisting of a known linear transform followed by a probabilistic componentwise (possibly nonlinear) measurement channel. We present a method, called adaptive generalized approximate message passing (Adaptive GAMP), that enables joint learning of the statistics of the prior and measurement channel along with estimation of the unknown vector x. Our method can be applied to a large class of learning problems, including the learning of sparse priors in compressed sensing or the identification of linear-nonlinear cascade models in dynamical systems and neural spiking processes. We prove that for large i.i.d. Gaussian transform matrices, the asymptotic componentwise behavior of the adaptive GAMP algorithm is predicted by a simple set of scalar state evolution equations. This analysis shows that adaptive GAMP can yield asymptotically consistent parameter estimates, which implies that the algorithm achieves a reconstruction quality equivalent to that of an oracle algorithm that knows the correct parameter values. The adaptive GAMP methodology thus provides a systematic, general, and computationally efficient method, applicable to a large range of complex linear-nonlinear models, with provable guarantees.
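A scalar state-evolution recursion of the kind mentioned above can be sketched generically: track the effective noise variance tau_t of the pseudo-measurement X + sqrt(tau_t) * Z through the denoiser's mean-squared error. The sampling ratio delta, the Monte Carlo MSE estimator, and the Gaussian-prior test case are illustrative assumptions, not the paper's exact equations:

```python
import numpy as np

def state_evolution(delta, sigma2, prior_sample, denoiser, iters=20, mc=50_000):
    """Generic scalar state-evolution sketch: iterate the effective noise
    variance tau of the pseudo-measurement X + sqrt(tau) * Z, estimating
    the denoiser's MSE by Monte Carlo."""
    rng = np.random.default_rng(0)
    x = prior_sample(rng, mc)                 # samples from the assumed prior
    z = rng.normal(size=mc)                   # standard Gaussian noise
    tau = sigma2 + np.mean(x ** 2) / delta    # initialization (estimate = 0)
    for _ in range(iters):
        mse = np.mean((denoiser(x + np.sqrt(tau) * z, tau) - x) ** 2)
        tau = sigma2 + mse / delta
    return tau
```

For a unit-variance Gaussian prior with the MMSE denoiser r / (1 + tau), the fixed point can be computed in closed form, which makes this recursion easy to sanity-check.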
Sparse Signal Recovery and Acquisition with Graphical Models: A review of a broad set of sparse models, analysis tools, and recovery algorithms within the graphical models formalism
, 2010
Cited by 14 (1 self)
Many applications in digital signal processing, machine learning, and communications feature a linear regression problem in which unknown data points, hidden variables, or code words are projected into a lower-dimensional space via

y = Φx + n. (1)

In the signal processing context, we refer to x ∈ R^N as the signal, y ∈ R^M as the measurements (with M < N), Φ ∈ R^{M×N} as the measurement matrix, and n ∈ R^M as the noise. The measurement matrix Φ is a matrix with random entries in data streaming, an overcomplete dictionary of features in sparse Bayesian learning, or a code matrix in communications [1]–[3]. Extracting x from y in (1) is ill posed in general: since M < N, the measurement matrix Φ has a nontrivial null space, and given any vector v in this null space, x + v defines a solution that produces the same observations y. Additional information is therefore necessary to distinguish the true x among the infinitely many possible solutions [1], [2], [4], [5]. It is now well known that sparse ...
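The ill-posedness argument in the passage above is easy to demonstrate numerically: with M < N the matrix Φ has a nontrivial null space, so perturbing x by any null-space vector leaves y unchanged. A minimal sketch (the dimensions and signal values are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 20, 50                      # fewer measurements than unknowns
Phi = rng.normal(size=(M, N))      # random measurement matrix
x = np.zeros(N)
x[:3] = [1.0, -2.0, 0.5]           # a sparse signal
y = Phi @ x                        # noiseless measurements, y = Phi x

# The last rows of Vt span the null space of Phi (which has rank M < N)
_, _, Vt = np.linalg.svd(Phi)
v = Vt[-1]
# x and x + v are different signals that produce identical measurements
print(np.allclose(Phi @ (x + v), y, atol=1e-8))   # True
```

Any prior information that separates x from x + v (sparsity being the running example in this review) is what makes recovery possible.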
A hierarchical Bayesian framework for constructing sparsity-inducing priors
, 2010
Cited by 14 (5 self)
Variable selection techniques have become increasingly popular amongst statisticians due to an increased number of regression and classification applications involving high-dimensional data where we expect some predictors to be unimportant. In this context, Bayesian variable selection techniques involving Markov chain Monte Carlo exploration of the posterior distribution over models can be prohibitively computationally expensive, and so attention has been paid to quasi-Bayesian approaches such as maximum a posteriori (MAP) estimation using priors that induce sparsity in such estimates. We focus on this latter approach, expanding on the hierarchies proposed to date to provide a Bayesian interpretation and generalization of state-of-the-art penalized optimization approaches, while simultaneously providing a natural way to include prior information about parameters within this framework. We give examples of how to use this hierarchy to compute MAP estimates for linear and logistic regression as well as sparse precision-matrix estimates in Gaussian graphical models. In addition, an adaptive group lasso method is derived using the framework.
Greedy Dictionary Selection for Sparse Representation
Cited by 14 (2 self)
We discuss how to construct a dictionary by selecting its columns from multiple candidate bases to allow sparse representation of signals. By sparse representation, we mean that only a few dictionary elements, compared to the ambient signal dimension, can be used to well-approximate the signals. We formulate both the selection of the dictionary columns and the sparse representation of signals as a joint combinatorial optimization problem. The proposed combinatorial objective maximizes variance reduction over the set of signals by constraining the size of the dictionary as well as the number of dictionary columns that can be used to represent each signal. We show that if the columns of the candidate bases are incoherent, our objective function satisfies approximate submodularity. We exploit this property to develop efficient greedy algorithms with well-characterized theoretical performance. Applications of dictionary selection include denoising, inpainting, and compressive sensing. We evaluate our approach to reconstruct dictionaries from sparse samples, and also apply it to an image inpainting problem.