Results 1–10 of 77
Message passing algorithms for compressed sensing: I. motivation and construction
Proc. ITW, 2010
Cited by 163 (19 self)
Abstract—In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements [1]. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the second of two conference papers describing the derivation of these algorithms, their connection with the related literature, extensions of the original framework, and new empirical evidence. This paper describes the state evolution formalism for analyzing these algorithms and some of the conclusions that can be drawn from it. We carried out extensive numerical simulations to confirm these predictions and present a few representative results here.
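As a rough illustration of the iterative thresholding scheme this abstract describes, the sketch below implements soft-thresholding AMP with the Onsager correction term. The threshold rule θ_t = α·RMS(z_t) and the constant α = 1.5 are assumed tunings chosen for this example, not the authors' prescription:

```python
import numpy as np

def soft_threshold(x, theta):
    """Scalar soft-thresholding denoiser eta(x; theta)."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def amp(y, A, n_iter=30, alpha=1.5):
    """Minimal soft-thresholding AMP sketch:
        x_{t+1} = eta(x_t + A^T z_t; theta_t)
        z_t     = y - A x_t + (||x_t||_0 / n) z_{t-1}   (Onsager term)
    alpha is an assumed tuning constant for the threshold rule."""
    n, N = A.shape
    delta = n / N
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        r = x + A.T @ z                       # effective (pseudo-data) estimate
        theta = alpha * np.sqrt(np.mean(z ** 2))  # threshold from residual RMS
        x_new = soft_threshold(r, theta)
        # Onsager correction: fraction of surviving coefficients over delta
        onsager = np.mean(np.abs(r) > theta) / delta
        z = y - A @ x_new + onsager * z
        x = x_new
    return x
```

On a well-conditioned noiseless instance inside the success region (e.g. δ = 0.5, sparsity k/n = 0.1), this recursion typically converges in a few tens of iterations.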
Generalized Approximate Message Passing for Estimation with Random Linear Mixing
2012
Cited by 123 (18 self)
We consider the estimation of an i.i.d. random vector observed through a linear transform followed by a componentwise, probabilistic (possibly nonlinear) measurement channel. A novel algorithm, called generalized approximate message passing (GAMP), is presented that provides computationally efficient approximate implementations of max-sum and sum-product loopy belief propagation for such problems. The algorithm extends earlier approximate message passing methods to incorporate arbitrary distributions on both the input and output of the transform, and can be applied to a wide range of problems in nonlinear compressed sensing and learning. Extending an analysis by Bayati and Montanari, we argue that the asymptotic componentwise behavior of the GAMP method under large, i.i.d. Gaussian transforms is described by a simple set of state evolution (SE) equations. From the SE equations, one can exactly predict the asymptotic value of virtually any componentwise performance metric, including mean-squared error or detection accuracy. Moreover, the analysis is valid for arbitrary input and output distributions, even when the corresponding optimization problems are non-convex. The results match predictions by Guo and Wang for relaxed belief propagation on large sparse matrices and, in certain instances, also agree with the optimal performance predicted by the replica method. The GAMP methodology thus provides a computationally efficient approach, applicable to a large class of non-Gaussian estimation problems, with precise asymptotic performance guarantees.
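To give a concrete feel for what an SE recursion looks like, the sketch below specializes to the simplest case: soft-thresholding AMP under an assumed Bernoulli-Gaussian prior, with the expectation estimated by Monte Carlo. The prior and the threshold parameter α are illustrative assumptions, not part of GAMP's general formulation:

```python
import numpy as np

def soft(x, t):
    """Scalar soft-thresholding denoiser."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def state_evolution(delta, eps, sigma2, alpha, n_iter=20, n_mc=100000, seed=0):
    """Scalar SE recursion for soft-thresholding AMP with the assumed
    prior X0 = B*G, P(B=1) = eps, G ~ N(0, 1):
        tau_{t+1}^2 = sigma2 + (1/delta) * E[(soft(X0 + tau_t Z; alpha tau_t) - X0)^2]
    Returns the predicted per-iteration MSE trajectory."""
    rng = np.random.default_rng(seed)
    x0 = rng.standard_normal(n_mc) * (rng.random(n_mc) < eps)
    z = rng.standard_normal(n_mc)
    tau2 = sigma2 + np.mean(x0 ** 2) / delta   # start from the all-zero estimate
    history = []
    for _ in range(n_iter):
        tau = np.sqrt(tau2)
        mse = np.mean((soft(x0 + tau * z, alpha * tau) - x0) ** 2)
        tau2 = sigma2 + mse / delta
        history.append(mse)
    return history
```

For parameters inside the success region (e.g. δ = 0.5, ε = 0.05, no noise), the predicted MSE contracts geometrically toward zero, which is the kind of componentwise prediction the SE equations deliver.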
A Single-letter Characterization of Optimal Noisy Compressed Sensing
"... Abstract—Compressed sensing deals with the reconstruction of a highdimensional signal from far fewer linear measurements, where the signal is known to admit a sparse representation in a certain linear space. The asymptotic scaling of the number of measurements needed for reconstruction as the dimen ..."
Abstract

Cited by 56 (16 self)
 Add to MetaCart
(Show Context)
Abstract—Compressed sensing deals with the reconstruction of a high-dimensional signal from far fewer linear measurements, where the signal is known to admit a sparse representation in a certain linear space. The asymptotic scaling of the number of measurements needed for reconstruction as the dimension of the signal increases has been studied extensively. This work takes a fundamental perspective on the problem of inferring individual elements of the sparse signal from the measurements as the dimensions of the system become increasingly large. Using the replica method, the outcome of inferring about any fixed collection of signal elements is shown to be asymptotically decoupled, i.e., those elements become independent conditioned on the measurements. Furthermore, the problem of inferring about each signal element admits a single-letter characterization in the sense that the posterior distribution of the element, which is a sufficient statistic, becomes asymptotically identical to the posterior of inferring about the same element in scalar Gaussian noise. The result leads to a simple characterization of all other elemental metrics of the compressed sensing problem, such as the mean-squared error and the error probability for reconstructing the support set of the sparse signal. Finally, the single-letter characterization is rigorously justified in the special case of sparse measurement matrices, where belief propagation becomes asymptotically optimal.
Estimation with Random Linear Mixing, Belief Propagation and Compressed Sensing
2010
Cited by 43 (10 self)
We apply Guo and Wang’s relaxed belief propagation (BP) method to the estimation of a random vector from linear measurements followed by a componentwise probabilistic measurement channel. Relaxed BP uses a Gaussian approximation in standard BP to obtain significant computational savings for dense measurement matrices. The main contribution of this paper is to extend the relaxed BP method and its analysis to general (non-AWGN) output channels. Specifically, we present detailed equations for implementing relaxed BP for general channels and show that relaxed BP exhibits the same asymptotic behavior as standard BP in the large sparse limit, as predicted by Guo and Wang’s state evolution (SE) equations. Applications are presented to compressed sensing and estimation with bounded noise.
Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising
2012
Cited by 41 (5 self)
Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to Approximate Message Passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft-thresholding denoiser. This paper gives several examples, including scalar denoisers not derived from convex penalization – the firm shrinkage nonlinearity and the minimax nonlinearity – and also non-scalar denoisers – block thresholding, monotone regression, and total variation minimization. Let the variables ε = k/N and δ = n/N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x0 according to y = Ax0. Here A is an n × N measurement matrix whose entries are i.i.d. standard Gaussian. The formula states that the phase transition curve δ = δ(ε) separating successful from unsuccessful reconstruction of x0
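Two of the scalar denoisers named above can be written down directly. Soft thresholding shrinks every surviving coefficient by λ (biased for large inputs), while the firm shrinkage nonlinearity, in the assumed two-parameter form η(x; λ, μ) with μ > λ, passes large inputs through unchanged:

```python
import numpy as np

def soft(x, lam):
    """Soft thresholding: zero below lam, shrink survivors by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def firm(x, lam, mu):
    """Firm shrinkage: zero for |x| <= lam, linear interpolation on
    lam < |x| <= mu, identity (unbiased) for |x| > mu. Requires mu > lam."""
    ax = np.abs(x)
    return np.where(ax <= lam, 0.0,
           np.where(ax <= mu, np.sign(x) * mu * (ax - lam) / (mu - lam), x))
```

For example, with λ = 1 and μ = 2, an input of 3.0 is shrunk to 2.0 by `soft` but left at 3.0 by `firm`, which is the bias reduction that motivates moving beyond the soft-thresholding denoiser.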
Graphical Models Concepts in Compressed Sensing
"... This paper surveys recent work in applying ideas from graphical models and message passing algorithms to solve large scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via ℓ1 penalized leastsquares (known as LASSO or BPDN). We discuss how to deri ..."
Abstract

Cited by 37 (2 self)
 Add to MetaCart
(Show Context)
This paper surveys recent work applying ideas from graphical models and message passing algorithms to large-scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via ℓ1-penalized least-squares (known as LASSO or BPDN). We discuss how to derive fast approximate message passing algorithms to solve this problem. Surprisingly, the analysis of such algorithms allows one to prove exact high-dimensional limit results for the LASSO risk. This paper will appear as a chapter in a book on ‘Compressed Sensing’ edited by Yonina Eldar and Gitta Kutyniok.
The noisesensitivity phase transition in compressed sensing
2010
Cited by 32 (3 self)
Consider the noisy underdetermined system of linear equations: y = Ax0 + z0, with n × N measurement matrix A, n < N, and Gaussian white noise z0 ∼ N(0, σ²I). Both y and A are known, both x0 and z0 are unknown, and we seek an approximation to x0. When x0 has few nonzeros, useful approximations are often obtained by ℓ1-penalized ℓ2 minimization, in which the reconstruction x̂1,λ solves min ‖y − Ax‖²/2 + λ‖x‖1. Evaluate performance by mean-squared error (MSE = E‖x̂1,λ − x0‖²/N). Consider matrices A with i.i.d. Gaussian entries and a large-system limit in which n, N → ∞ with n/N → δ and k/n → ρ. Call the ratio MSE/σ² the noise sensitivity. We develop formal expressions for the MSE of x̂1,λ and evaluate its worst-case formal noise sensitivity over all types of k-sparse signals. The phase space 0 ≤ δ, ρ ≤ 1 is partitioned by the curve ρ = ρMSE(δ) into two regions. Formal noise sensitivity is bounded throughout the region ρ < ρMSE(δ) and is unbounded throughout the region ρ > ρMSE(δ). The phase boundary ρ = ρMSE(δ) is identical to the previously known phase transition curve for equivalence of ℓ1 and ℓ0 minimization in the k-sparse noiseless case. Hence a single phase
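The ℓ1-penalized ℓ2 minimization above can be solved, for illustration, by plain iterative soft thresholding (ISTA). This is a generic solver sketch, not the paper's method; the step size 1/L with L = ‖A‖² (spectral norm squared) is a standard conservative choice that guarantees monotone descent of the objective:

```python
import numpy as np

def ista(y, A, lam, n_iter=500):
    """ISTA for min_x ||y - A x||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                # gradient of the quadratic term
        z = x - g / L                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox step
    return x
```

Given the solution x̂, the abstract's performance metrics follow directly: MSE = ‖x̂ − x0‖²/N and noise sensitivity MSE/σ².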
Support recovery with sparsely sampled free random matrices
in Proc. IEEE Int. Symp. Inf. Theory, Saint, 2011
Cited by 27 (1 self)
Abstract—Consider a Bernoulli-Gaussian complex n-vector whose components are Vi = XiBi, with Xi ∼ CN(0, Px) and binary Bi, mutually independent and i.i.d. across i. This random q-sparse vector is multiplied by a square random matrix U, and a randomly chosen subset, of average size np, p ∈ [0, 1], of the resulting vector components is then observed in additive Gaussian noise. We extend the scope of conventional noisy compressive sampling models, where U is typically a matrix with i.i.d. components, to allow U to satisfy a certain freeness condition. This class of matrices encompasses Haar matrices and other unitarily invariant matrices. We use the replica method and the decoupling principle of Guo and Verdú, as well as a number of information-theoretic bounds, to study the input–output mutual information and the support recovery error rate in the limit n → ∞. We also extend the scope of the large deviation approach of Rangan, Fletcher and Goyal, and characterize the performance of a class of estimators encompassing thresholded linear MMSE and ℓ1 relaxation.