Results 11–20 of 52
“Sparse estimation with the swept approximated message-passing algorithm,” arXiv preprint arXiv:1406.4311, 2014
Abstract

Cited by 5 (0 self)
Approximate Message Passing (AMP) has been shown to be a superior method for inference problems, such as the recovery of signals from sets of noisy, lower-dimensionality measurements, both in terms of reconstruction accuracy and in computational efficiency. However, AMP suffers from serious convergence issues in contexts that do not exactly match its assumptions. We propose a new approach to stabilizing AMP in these contexts by applying AMP updates to individual coefficients rather than in parallel. Our results show that this change to the AMP iteration can provide theoretically expected, but hitherto unobtainable, performance for problems on which the standard AMP iteration diverges. Additionally, we find that the computational cost of this swept coefficient update scheme is not unduly burdensome, allowing it to be applied efficiently to signals of large dimensionality.
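The swept idea, updating one coefficient at a time against the current residual instead of updating all coefficients in parallel, can be illustrated on the closely related problem of coordinate descent for the LASSO. This is a hedged sketch, not the paper's AMP iteration; the function name `lasso_cd`, the toy matrix, and all parameters are invented for illustration:

```python
# Swept (coordinate-wise) solver for the LASSO objective
#   0.5 * ||y - A x||^2 + lam * ||x||_1,
# updating one coefficient at a time against the current residual.

def soft(z, t):
    """Soft-thresholding: the scalar proximal map of t * |.|."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(A, y, lam, sweeps=100):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    r = y[:]                                   # residual y - A x (x = 0 at start)
    col_sq = [sum(A[i][j] ** 2 for i in range(m)) for j in range(n)]
    for _ in range(sweeps):
        for j in range(n):                     # sweep coefficients sequentially
            if col_sq[j] == 0.0:
                continue
            # correlation of column j with the residual, plus its own contribution
            rho = sum(A[i][j] * r[i] for i in range(m)) + col_sq[j] * x[j]
            new = soft(rho, lam) / col_sq[j]
            delta = new - x[j]
            if delta != 0.0:
                for i in range(m):             # keep the residual consistent
                    r[i] -= A[i][j] * delta
                x[j] = new
    return x

# Toy problem: y = A x_true with sparse x_true = [3, 0, -2]
A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5],
     [0.0, 0.0, 1.0]]
y = [2.0, -1.0, -2.0]
x = lasso_cd(A, y, lam=0.1)                    # close to [3, 0, -2], slightly shrunk
```

Because every coefficient update sees the freshest residual, the sweep is the coordinate-wise analogue of the stabilization the paper applies to AMP.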
Compressed sensing of approximately-sparse signals: Phase transitions and optimal reconstruction
in 50th Annual Allerton Conference on Communication, Control, and Computing, 2012
Abstract

Cited by 5 (4 self)
Compressed sensing is designed to measure sparse signals directly in a compressed form. However, most signals of interest are only “approximately sparse”, i.e. even though the signal contains only a small fraction of relevant (large) components, the other components are not strictly equal to zero, but are only close to zero. In this paper we model the approximately sparse signal with a Gaussian distribution of small components, and we study its compressed sensing with dense random matrices. We use replica calculations to determine the mean-squared error of the Bayes-optimal reconstruction for such signals, as a function of the variance of the small components, the density of large components and the measurement rate. We then use the GAMP algorithm and we quantify the region of parameters for which this algorithm achieves optimality (for large systems). Finally, we show that in the region where GAMP for homogeneous measurement matrices is not optimal, a special “seeding” design of a spatially-coupled measurement matrix allows optimality to be restored.
SHO-FA: Robust compressive sensing with order-optimal complexity, measurements, and bits
Abstract

Cited by 4 (2 self)
Suppose x is any exactly k-sparse vector in ℝ^n. We present a class of “sparse” matrices A, and a corresponding algorithm that we call SHO-FA (for Short and Fast) that, with high probability over A, can reconstruct x from Ax. The SHO-FA algorithm is related to the Invertible Bloom Lookup Tables (IBLTs) recently introduced by Goodrich et al., with two important distinctions: SHO-FA relies on linear measurements, and is robust to noise and approximate sparsity. The SHO-FA algorithm is the first to simultaneously have the following properties: (a) it requires only O(k) measurements, (b) the bit-precision of each measurement and each arithmetic operation is O(log(n) + P) (here 2^{−P} corresponds to the desired relative error in the reconstruction of x), (c) the computational complexity of decoding is O(k) arithmetic operations, and (d) if the reconstruction goal is simply to recover a single component of x instead of all of x, with high probability over A this can be done in constant time. All constants above are independent of all problem parameters other than the desired probability of success. For a wide range of parameters these properties are information-theoretically order-optimal. In addition, our SHO-FA algorithm is robust to random noise, and (random) approximate sparsity for a large range of k. In particular, suppose the measured vector equals A(x+z)+e, where z and e correspond respectively to the source tail and measurement noise. Under reasonable statistical assumptions on z and e our decoding algorithm reconstructs x with an estimation error of O(‖z‖1 + (log k)^2 ‖e‖1). The SHO-FA algorithm works with high probability over A, z, and e, and still requires only O(k) steps and O(k) measurements over O(log(n))-bit numbers. This is in contrast to most existing algorithms, which focus on the “worst-case” z model, where it is known that Ω(k log(n/k)) measurements over O(log(n))-bit numbers are necessary.
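The bucket-and-peel decoding that SHO-FA shares with IBLTs can be sketched for the noiseless, exactly k-sparse case: hash every index into a few buckets, store per bucket a plain sum and an index-weighted sum of the coefficients mapped there, detect singleton buckets from the ratio of the two, and peel recovered coefficients out. The grid-style hash functions and toy sizes below are invented for illustration and are not the paper's actual measurement design:

```python
n, B = 64, 8                      # signal length, buckets per hash function
# two illustrative hash functions: grid row / column of the index
hashes = [[i % B for i in range(n)],
          [(i // B) % B for i in range(n)]]

x = {5: 2.0, 17: -1.5, 40: 3.0}   # exactly 3-sparse signal (index -> value)

# linear measurements: per (hash, bucket), a plain sum s and an
# index-weighted sum w of the coefficients hashed there
d = len(hashes)
s = [[0.0] * B for _ in range(d)]
w = [[0.0] * B for _ in range(d)]
for i, v in x.items():
    for h in range(d):
        b = hashes[h][i]
        s[h][b] += v
        w[h][b] += i * v

# peeling: a bucket holding a single coefficient reveals its index as w/s;
# subtract the recovered coefficient from all its buckets and repeat
est = {}
progress = True
while progress:
    progress = False
    for h in range(d):
        for b in range(B):
            if abs(s[h][b]) < 1e-12:
                continue
            ratio = w[h][b] / s[h][b]
            i = int(round(ratio))
            # consistency check: a plausible index that really hashes here
            if abs(ratio - i) > 1e-9 or not 0 <= i < n or hashes[h][i] != b:
                continue
            v = s[h][b]
            est[i] = est.get(i, 0.0) + v
            for h2 in range(d):
                b2 = hashes[h2][i]
                s[h2][b2] -= v
                w[h2][b2] -= i * v
            progress = True
```

Each peel costs O(1) work per hash function, which is where the O(k) decoding complexity comes from in this style of algorithm.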
On convergence of approximate message passing
in IEEE International Symposium on Information Theory (ISIT), 2014
Abstract

Cited by 4 (2 self)
Approximate message passing is an iterative algorithm for compressed sensing and related applications. A solid theory about the performance and convergence of the algorithm exists for measurement matrices having i.i.d. entries of zero mean. However, it was observed by several authors that for more general matrices the algorithm often encounters convergence problems. In this paper we identify the reason for the non-convergence for measurement matrices with i.i.d. entries and non-zero mean, in the context of Bayes-optimal inference. Finally, we demonstrate numerically that when the iterative update is changed from parallel to sequential, convergence is restored.
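The parallel-versus-sequential phenomenon has a classical analogue in linear solvers: on the same system, the Jacobi iteration (all coordinates updated from the previous iterate) can diverge while Gauss-Seidel (each coordinate immediately using the freshest values) converges. The sketch below shows only this analogy, not the paper's AMP setting; the matrix with unit diagonal and constant positive off-diagonal entries loosely mimics a non-zero-mean ensemble:

```python
# Solve A x = b for A with unit diagonal and off-diagonal entries 2/3.
# The Jacobi iteration matrix has spectral radius 4/3 > 1 (diverges),
# while A is positive definite, so Gauss-Seidel converges.

a = 2.0 / 3.0
A = [[1.0 if i == j else a for j in range(3)] for i in range(3)]
x_true = [1.0, 2.0, 3.0]
b = [sum(A[i][j] * x_true[j] for j in range(3)) for i in range(3)]

def err(x):
    return max(abs(x[i] - x_true[i]) for i in range(3))

# parallel update: every coordinate is computed from the previous iterate
xj = [0.0, 0.0, 0.0]
for _ in range(40):
    xj = [b[i] - sum(A[i][j] * xj[j] for j in range(3) if j != i)
          for i in range(3)]

# sequential update: each coordinate immediately uses the freshest values
xg = [0.0, 0.0, 0.0]
for _ in range(40):
    for i in range(3):
        xg[i] = b[i] - sum(A[i][j] * xg[j] for j in range(3) if j != i)
```

After 40 iterations the parallel error has grown by orders of magnitude while the sequential error is negligible, mirroring the paper's numerical observation for AMP.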
Analysis of Regularized LS Reconstruction and Random Matrix Ensembles in Compressed Sensing, 2013
Abstract

Cited by 4 (1 self)
Performance of regularized least-squares estimation in noisy compressed sensing is analyzed in the limit when the dimensions of the measurement matrix grow large. The sensing matrix is considered to be from a class of random ensembles that encloses as special cases the standard Gaussian, row-orthogonal and so-called T-orthogonal constructions. Source vectors that have non-uniform sparsity are included in the system model. Regularization based on the ℓ1-norm, leading to LASSO estimation or basis pursuit denoising, is given the main emphasis in the analysis. Extensions to ℓ2-norm and “zero-norm” regularization are also briefly discussed. The analysis is carried out using the replica method in conjunction with some novel matrix integration results. Numerical experiments for LASSO are provided to verify the accuracy of the analytical results. The numerical experiments show that for noisy compressed sensing, the standard Gaussian ensemble is a suboptimal choice for the measurement matrix. Orthogonal constructions provide superior performance in all considered scenarios and are easier to implement in practical applications. It is also discovered that for non-uniform sparsity patterns the T-orthogonal matrices can further improve the mean-square-error behavior of the reconstruction when the noise level is not too high. However, as the additive noise becomes more prominent in the system, the simple row-orthogonal measurement matrix appears to be the best choice out of the considered ensembles.
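A row-orthogonal sensing matrix of the kind the abstract recommends is simple to construct in practice, for instance by orthonormalizing i.i.d. Gaussian rows. This is a generic sketch of one way to realize such an ensemble; the function name and parameters are invented, and the paper's T-orthogonal construction is more specific:

```python
import random

def row_orthogonal(m, n, seed=0):
    """Return an m x n matrix with orthonormal rows (m <= n),
    built by modified Gram-Schmidt on i.i.d. Gaussian rows."""
    rng = random.Random(seed)
    rows = []
    while len(rows) < m:
        v = [rng.gauss(0.0, 1.0) for _ in range(n)]
        for u in rows:                       # remove components along earlier rows
            dot = sum(p * q for p, q in zip(u, v))
            v = [q - dot * p for p, q in zip(u, v)]
        norm = sum(q * q for q in v) ** 0.5
        if norm > 1e-8:                      # skip (vanishingly unlikely) degenerate draws
            rows.append([q / norm for q in v])
    return rows

A = row_orthogonal(4, 16, seed=1)            # satisfies A A^T = I (4 x 4)
```

With orthonormal rows, A Aᵀ = I, which is what makes these ensembles easy to work with in reconstruction algorithms.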
Spatially coupled LDPC codes constructed from protographs
IEEE Transactions on Information Theory, 2014
“Performance improvement of iterative multiuser detection for large sparsely-spread CDMA systems by spatial coupling,” submitted to IEEE Trans. Inf. Theory, 2012, [Online]. Available
Adaptive sensing using deterministic partial Hadamard matrices
Abstract

Cited by 2 (2 self)
This paper investigates the construction of deterministic measurement matrices preserving the entropy of a random vector with a given probability distribution. In particular, it is shown that for a random vector with i.i.d. discrete components, this is achieved by selecting a subset of rows of a Hadamard matrix such that (i) the selection is deterministic and (ii) the fraction of selected rows is vanishing. In contrast, it is shown that for a random vector with i.i.d. continuous components, no entropy-preserving measurement matrix allows dimensionality reduction. These results are in agreement with the results of Wu and Verdú on almost lossless analog compression. This paper is, however, motivated by the complexity attributes of Hadamard matrices, which allow the use of efficient and stable reconstruction algorithms. The proof technique is based on a polar-code martingale argument and on a new entropy power inequality for integer-valued random variables. Index Terms—Entropy-preserving matrices, analog compression, compressed sensing, entropy power inequality.
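The complexity attribute mentioned above comes from the recursive Sylvester structure of Hadamard matrices. A minimal sketch of the construction, with a deliberately naive deterministic row selection (taking the first m rows for illustration; the paper's selection comes from its polar-code martingale argument, not this choice):

```python
def hadamard(k):
    """Sylvester construction of the 2^k x 2^k +/-1 Hadamard matrix."""
    H = [[1]]
    for _ in range(k):
        top = [row + row for row in H]
        bot = [row + [-v for v in row] for row in H]
        H = top + bot
    return H

n, m = 8, 3
H = hadamard(3)                               # rows are mutually orthogonal: H H^T = n I
A = H[:m]                                     # deterministic partial selection (illustrative only)
x = [1, 0, 0, 2, 0, 0, 0, -1]                 # discrete-valued signal to be measured
y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
```

The recursive block structure is also what enables the fast O(n log n) Hadamard transform, which is why these matrices admit efficient reconstruction algorithms.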