Results 1–10 of 10
State Evolution for General Approximate Message Passing Algorithms, with Applications to Spatial Coupling
2012
Cited by 23 (7 self)
We consider a class of approximate message passing (AMP) algorithms and characterize their high-dimensional behavior in terms of a suitable state evolution recursion. Our proof applies to Gaussian matrices with independent but not necessarily identically distributed entries. It covers, in particular, the analysis of generalized AMP, introduced by Rangan, and of AMP reconstruction in compressed sensing with spatially coupled sensing matrices. The proof technique builds on that of [BM11], while simplifying and generalizing several steps.
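The state evolution recursion characterized in this abstract can be sketched numerically. The sketch below is a minimal illustration, not the paper's setup: it assumes a Bernoulli-Gaussian signal prior, a soft-thresholding denoiser with a fixed threshold multiplier, and Monte Carlo estimation of the expectation; all parameter values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft(x, t):
    """Soft-thresholding denoiser eta(x; t)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# State evolution for soft-thresholding AMP with a Bernoulli-Gaussian
# prior: tau_{t+1}^2 = sigma^2 + (1/delta) * E[(eta(X + tau_t Z) - X)^2],
# the expectation estimated by Monte Carlo over (X, Z) samples.
eps, delta, sigma, alpha = 0.1, 0.5, 0.01, 1.5   # illustrative parameters
X = rng.standard_normal(200_000) * (rng.random(200_000) < eps)
Z = rng.standard_normal(200_000)

tau2 = sigma ** 2 + np.mean(X ** 2) / delta      # effective noise at t = 0
for t in range(30):
    tau = np.sqrt(tau2)
    mse = np.mean((soft(X + tau * Z, alpha * tau) - X) ** 2)
    tau2 = sigma ** 2 + mse / delta              # next effective noise level

print("fixed-point effective noise tau^2:", tau2)
```

With these parameters the recursion contracts to a fixed point near the noise floor; above the algorithmic phase transition it would instead stall at a high-MSE fixed point.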
Compressed sensing of approximately-sparse signals: Phase transitions and optimal reconstruction
 in 50th Annual Allerton Conference on Communication, Control, and Computing
2012
Cited by 5 (4 self)
Abstract—Compressed sensing is designed to measure sparse signals directly in a compressed form. However, most signals of interest are only “approximately sparse”, i.e. even though the signal contains only a small fraction of relevant (large) components, the other components are not strictly equal to zero but are only close to zero. In this paper we model the approximately sparse signal with a Gaussian distribution of small components, and we study its compressed sensing with dense random matrices. We use replica calculations to determine the mean-squared error of the Bayes-optimal reconstruction for such signals, as a function of the variance of the small components, the density of large components, and the measurement rate. We then use the GAMP algorithm and quantify the region of parameters for which this algorithm achieves optimality (for large systems). Finally, we show that in the region where GAMP with homogeneous measurement matrices is not optimal, a special “seeding” design of a spatially-coupled measurement matrix allows optimality to be restored.
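The “seeding” spatially-coupled matrix mentioned above can be sketched as a band-diagonal block matrix with an over-sampled first block column. The construction below is a generic illustration of the idea; the block sizes, coupling window, and seed variance are assumptions, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def spatially_coupled_matrix(Lr, Lc, mb, nb, J1=20.0, w=1, rng=rng):
    """Block matrix whose (r, c) block has i.i.d. N(0, J[r, c]/mb) entries.

    Variances are nonzero only within a band of width w around the
    diagonal, and the first block column is boosted (the "seed"), so that
    reconstruction nucleates there and propagates as a wave.
    """
    J = np.zeros((Lr, Lc))
    for r in range(Lr):
        for c in range(Lc):
            if abs(r - c) <= w:
                J[r, c] = J1 if c == 0 else 1.0
    blocks = [[np.sqrt(J[r, c] / mb) * rng.standard_normal((mb, nb))
               for c in range(Lc)] for r in range(Lr)]
    return np.block(blocks)

A = spatially_coupled_matrix(Lr=8, Lc=8, mb=50, nb=100)
print(A.shape)  # (400, 800)
```

Blocks outside the coupling band are exactly zero, which is what makes the ensemble behave locally like a better-sampled problem near the seed.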
Replica analysis and approximate message passing decoder for superposition codes, arXiv preprint arXiv:1403.8024
2014
Cited by 1 (0 self)
Abstract—Superposition codes are efficient for the Additive White Gaussian Noise channel. We provide here a replica analysis of the performance of these codes for large signals. We also consider a Bayesian Approximate Message Passing decoder based on a belief-propagation approach, and discuss its performance using the density evolution technique. Our main findings are: 1) for the sizes we can access, the message-passing decoder outperforms other decoders studied in the literature; 2) its performance is limited by a sharp phase transition; and 3) while these codes reach capacity as B (a crucial parameter in the code) increases, the performance of the message-passing decoder worsens as the phase transition moves to lower rates. Superposition coding is a scheme for error-correction over the Additive White Gaussian Noise (AWGN) channel where a codeword Ỹ is a sparse linear superposition of a random i.i.d. …
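The codeword construction described in the last sentence can be sketched as follows. This is a generic sparse-superposition encoder, not the paper's exact code: the section count, section size B, block length, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse-superposition codeword: the message picks one of B columns in
# each of L sections, and the codeword is F @ x for a section-sparse x
# with exactly one nonzero per section.
L, B, M = 32, 4, 64
F = rng.standard_normal((M, L * B)) / np.sqrt(L)  # random i.i.d. coding matrix
msg = rng.integers(0, B, size=L)                  # one symbol per section

x = np.zeros(L * B)
x[np.arange(L) * B + msg] = 1.0                   # exactly one 1 per section
codeword = F @ x
y = codeword + 0.1 * rng.standard_normal(M)       # AWGN channel output

# Rate: L sections each carry log2(B) bits over M channel uses.
print("rate R =", L * np.log2(B) / M, "bits per channel use")
```

Decoding then amounts to a compressed sensing problem with a section-sparse prior, which is why AMP-style decoders apply directly.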
Analysis of Compressed Sensing with Spatially-Coupled Orthogonal Matrices
Abstract—Recent development in compressed sensing (CS) has revealed that the use of a specially designed measurement matrix, namely the spatially-coupled matrix, can achieve the information-theoretic limit of CS. In this paper, we consider a measurement matrix consisting of spatially-coupled orthogonal matrices. One example of such matrices is the randomly selected discrete Fourier transform (DFT) matrix. Such a selection enjoys lower memory complexity and a faster multiplication procedure. Our contribution is a replica calculation of the mean-square error (MSE) of the Bayes-optimal reconstruction for this setup. We illustrate that the reconstruction thresholds under the spatially-coupled orthogonal and Gaussian ensembles are quite different, especially in the noisy case. In particular, the spatially-coupled orthogonal matrices achieve a faster convergence rate, a lower measurement rate, and a reduced MSE.
On Sparse Vector Recovery Performance in Structurally Orthogonal Matrices via LASSO
In this paper, we consider a compressed sensing problem of reconstructing a sparse signal from an undersampled set of noisy linear measurements. The regularized least squares or least absolute shrinkage and selection operator (LASSO) formulation is used for signal estimation. The measurement matrix is assumed to be constructed by concatenating several random orthogonal bases, referred to as structurally orthogonal matrices. Such a measurement matrix is highly relevant to large-scale compressed sensing applications because it facilitates fast computation and supports parallel processing. Using the replica method from statistical physics, we derive the mean-squared-error (MSE) formula of reconstruction over the structurally orthogonal matrix in the large-system regime. Extensive numerical experiments are provided to verify the analytical result. We then use the analytical result to study the MSE behavior of LASSO over structurally orthogonal matrices, with a particular focus on performance comparisons to matrices with independent and identically distributed (i.i.d.) Gaussian entries. We demonstrate that structurally orthogonal matrices perform at least as well as their i.i.d. Gaussian counterparts; the use of structurally orthogonal matrices is therefore highly motivated in practical applications.
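The LASSO estimator analyzed in this abstract can be computed with iterative soft thresholding (ISTA). The sketch below uses a plain i.i.d. Gaussian matrix rather than the paper's structurally orthogonal construction, and the signal model, regularization weight, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def ista(A, y, lam, n_iter=500):
    """Iterative soft thresholding for the LASSO objective
    (1/2) * ||y - A x||^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)     # gradient step on the quadratic
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # prox of l1
    return x

n, m = 400, 200
x0 = rng.standard_normal(n) * (rng.random(n) < 0.05)   # 5%-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0 + 0.01 * rng.standard_normal(m)

xhat = ista(A, y, lam=0.02)
print("MSE:", np.mean((xhat - x0) ** 2))
```

Swapping in a structurally orthogonal operator only changes the matrix-vector products; the iteration itself is unchanged, which is part of the practical appeal the abstract points to.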
Ecole Normale Supérieure,
We study a compressed sensing solver called Approximate Message Passing when the i.i.d. matrices for which it was designed are replaced by structured operators that allow computationally fast matrix multiplications. We show empirically that, after proper randomization, the underlying structure of the operators does not significantly affect the performance of the solver. In particular, for some specially designed “spatially coupled” operators, this allows a computationally fast and memory-efficient reconstruction in compressed sensing up to the information-theoretic limit. Index Terms—compressed sensing, spatial coupling, message passing, state evolution, fast operators
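A typical structured operator of the kind this abstract describes combines random sign flips, an orthogonal fast transform, and row subsampling. The sketch below is one such construction (random signs, FFT, random rows); the specific composition is an assumption for illustration, not the operators used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Structured sensing operator y = S F D x: diagonal random signs D,
# unitary FFT F, row subsampling S. Each product costs O(n log n) time
# and O(n) memory, versus O(m n) for an explicit i.i.d. matrix.
n, m = 4096, 1024
signs = rng.choice([-1.0, 1.0], size=n)        # randomization step D
rows = rng.choice(n, size=m, replace=False)    # kept measurement rows S

def forward(x):
    """Apply y = S F D x without ever forming the m-by-n matrix."""
    return np.fft.fft(signs * x, norm="ortho")[rows]

def adjoint(z):
    """Apply A^H z = D F^H S^H z (complex; take .real for real signals)."""
    full = np.zeros(n, dtype=complex)
    full[rows] = z
    return signs * np.fft.ifft(full, norm="ortho")
```

Message-passing solvers only need these two callables, so the explicit matrix never has to be stored; the sign randomization is the "proper randomization" step that makes the structured ensemble behave like a random one.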
Two-Part Reconstruction with Noisy-Sudocodes
Abstract—We develop a two-part reconstruction framework for signal recovery in compressed sensing (CS), where a fast algorithm is applied to provide partial recovery in Part 1, and a CS algorithm is applied to complete the residual problem in Part 2. Partitioning the reconstruction process into two complementary parts provides a natural trade-off between runtime and reconstruction quality. To exploit the advantages of the two-part framework, we propose a Noisy-Sudocodes algorithm that performs two-part reconstruction of sparse signals in the presence of measurement noise. Specifically, we design a fast algorithm for Part 1 of Noisy-Sudocodes that identifies the zero coefficients of the input signal from its noisy measurements. Many existing CS algorithms could be applied to Part 2; we investigate approximate message passing (AMP) and binary iterative hard thresholding (BIHT). For Noisy-Sudocodes with AMP in Part 2, we provide a theoretical analysis that characterizes the trade-off between runtime and reconstruction quality. In a 1-bit CS setting where a new 1-bit quantizer is constructed for Part 1 and BIHT is applied to Part 2, numerical results show that the Noisy-Sudocodes algorithm improves over BIHT in both runtime and reconstruction quality. Index Terms—compressed sensing, two-part reconstruction, 1-bit CS.
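The two-part structure can be illustrated loosely as follows. This is not the Noisy-Sudocodes algorithm itself: Part 1 here is a simple matched-filter screen with a heuristic threshold, and Part 2 uses least squares as a stand-in for a full CS solver such as AMP; all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Setup: sparse signal with well-separated nonzeros, noisy measurements.
n, m, k, sigma = 500, 250, 10, 0.01
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = 4.0 + rng.random(k)               # nonzeros well above noise
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x + sigma * rng.standard_normal(m)

# Part 1 (fast screen): discard coefficients whose matched-filter
# statistic |A^T y| is small, i.e. likely-zero coefficients.
stat = np.abs(A.T @ y)
keep = stat > 2.5 * np.median(stat)            # heuristic threshold
# Part 2 (residual problem): reconstruct only the surviving coefficients.
xhat = np.zeros(n)
xhat[keep] = np.linalg.lstsq(A[:, keep], y, rcond=None)[0]

print("kept", int(keep.sum()), "of", n, "columns")
```

Part 1 shrinks the residual problem well below its original size, which is the source of the runtime/quality trade-off the abstract analyzes.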