Message passing algorithms for compressed sensing: I. motivation and construction
Proc. ITW, 2010
"... Abstract—In a recent paper, the authors proposed a new class of lowcomplexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements [1]. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the second of tw ..."
Abstract

Cited by 170 (19 self)
 Add to MetaCart
(Show Context)
Abstract—In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements [1]. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the second of two conference papers describing the derivation of these algorithms, their connection with related literature, extensions of the original framework, and new empirical evidence. This paper describes the state evolution formalism for analyzing these algorithms and some of the conclusions that can be drawn from it. We carried out extensive numerical simulations to confirm these predictions, and we present a few representative results here.
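A minimal sketch of a soft-thresholding AMP iteration of the kind this abstract describes; the threshold rule (a multiple of the residual RMS), the parameter values, and the toy sizes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def soft_threshold(v, tau):
    """Entrywise soft thresholding eta(v; tau) = sign(v) * max(|v| - tau, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp(y, A, theta=1.4, iters=30):
    """Soft-thresholding AMP: x <- eta(x + A^T z; tau_t), with the residual
    z carrying the Onsager correction term (1/delta) * z * mean(eta')."""
    n, N = A.shape
    delta = n / N
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        tau = theta * np.sqrt(np.mean(z ** 2))    # threshold from residual energy (a common heuristic)
        x = soft_threshold(x + A.T @ z, tau)
        onsager = np.mean(np.abs(x) > 0) / delta  # average derivative of the soft threshold
        z = y - A @ x + z * onsager
    return x

# toy usage: k-sparse x0, Gaussian A with unit-norm columns in expectation
rng = np.random.default_rng(0)
n, N, k = 250, 500, 25
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x0
print("relative error:", np.linalg.norm(amp(y, A) - x0) / np.linalg.norm(x0))
```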
Compressed Sensing: Theory and Applications
2012
"... Compressed sensing is a novel research area, which was introduced in 2006, and since then has already become a key concept in various areas of applied mathematics, computer science, and electrical engineering. It surprisingly predicts that highdimensional signals, which allow a sparse representati ..."
Abstract

Cited by 119 (30 self)
 Add to MetaCart
(Show Context)
Compressed sensing is a novel research area, introduced in 2006, which has since become a key concept in various areas of applied mathematics, computer science, and electrical engineering. It surprisingly predicts that high-dimensional signals which admit a sparse representation in a suitable basis or, more generally, a frame can be recovered from what were previously considered highly incomplete linear measurements, using efficient algorithms. This article serves as an introduction to and survey of compressed sensing. Key words: dimension reduction, frames, greedy algorithms, ill-posed inverse problems, ℓ1 minimization, random matrices, sparse approximation, sparse recovery.
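To make the ℓ1 minimization in the key words concrete, a hedged sketch of basis pursuit (min ‖x‖1 subject to Ax = y) posed as a linear program via the standard split x = u − v; the problem sizes are illustrative, not from the survey:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, N, k = 40, 100, 5
A = rng.standard_normal((n, N))
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

# Basis pursuit, min ||x||_1 s.t. Ax = y, as an LP with the split x = u - v, u, v >= 0:
# minimize 1^T u + 1^T v subject to [A, -A][u; v] = y.
res = linprog(c=np.ones(2 * N),
              A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
print("recovery error:", np.linalg.norm(x_hat - x0))
```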
Structured compressed sensing: From theory to applications
IEEE Trans. Signal Process., 2011
"... Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discretetodiscrete measurement architectures using matrices of randomized nature and signal models based on standard ..."
Abstract

Cited by 98 (15 self)
 Add to MetaCart
(Show Context)
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles on CS limit their scope to standard discrete-to-discrete measurement architectures using randomized matrices and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random measurement matrix must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. The theme of our overview is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice, that is, pinpointing the potential of structured CS strategies to move from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field and as a reference that attempts to put some of the existing ideas in the perspective of practical applications.
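As one hedged illustration of a structured sensing architecture of the kind this overview contrasts with dense random matrices, a partial Fourier operator built from a subsampled unitary FFT; the class name and sizes are assumptions for illustration, not taken from the article:

```python
import numpy as np

class PartialFourier:
    """Structured sensing: y = (unitary FFT of x) restricted to n random rows.
    Stores only row indices, so applying it costs O(N log N) with no dense matrix."""
    def __init__(self, N, n, rng):
        self.N = N
        self.rows = rng.choice(N, size=n, replace=False)

    def forward(self, x):
        return np.fft.fft(x, norm="ortho")[self.rows]

    def adjoint(self, y):
        full = np.zeros(self.N, dtype=complex)
        full[self.rows] = y                     # adjoint of row restriction = zero padding
        return np.fft.ifft(full, norm="ortho")  # adjoint of a unitary FFT = inverse FFT

# sanity check: <forward(x), y> == <x, adjoint(y)>
rng = np.random.default_rng(2)
op = PartialFourier(N=1024, n=256, rng=rng)
x = rng.standard_normal(1024)
y = rng.standard_normal(256) + 1j * rng.standard_normal(256)
print(np.allclose(np.vdot(op.forward(x), y), np.vdot(x, op.adjoint(y))))
```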
Optimally tuned iterative reconstruction algorithms for compressed sensing
IEEE J. Sel. Topics Signal Process.
"... Abstract — We conducted an extensive computational experiment, lasting multiple CPUyears, to optimally select parameters for two important classes of algorithms for finding sparse solutions of underdetermined systems of linear equations. We make the optimally tuned implementations available at spar ..."
Abstract

Cited by 66 (4 self)
 Add to MetaCart
(Show Context)
Abstract — We conducted an extensive computational experiment, lasting multiple CPU-years, to optimally select parameters for two important classes of algorithms for finding sparse solutions of underdetermined systems of linear equations. We make the optimally tuned implementations available at sparselab.stanford.edu; they run 'out of the box' with no user tuning: it is not necessary to select thresholds or know the likely degree of sparsity. Our class of algorithms includes iterative hard and soft thresholding with or without relaxation, as well as CoSaMP, subspace pursuit, and some natural extensions. As a result, our optimally tuned algorithms dominate such proposals. Our notion of optimality is defined in terms of phase transitions, i.e., we maximize the number of nonzeros at which the algorithm can successfully operate. We show that the phase transition is a well-defined quantity for our suite of random underdetermined linear systems. Our tuning gives the highest transition possible within each class of algorithms. We verify by extensive computation the robustness of our recommendations to the amplitude distribution of the nonzero coefficients as well as the matrix ensemble defining the underdetermined system. Our findings include: (a) for all algorithms, the worst amplitude distribution for the nonzeros is generally the constant-amplitude random-sign distribution, where all nonzeros have the same amplitude; (b) various random matrix ensembles give the same phase transitions, while random partial isometries may give different transitions and require different tuning; (c) optimally tuned subspace pursuit dominates optimally tuned CoSaMP, particularly so when the system is almost square.
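For concreteness, a minimal sketch of one algorithm in the tuned class, iterative hard thresholding; the step-size rule and iteration count are simplifying assumptions, not the optimally tuned settings published at sparselab.stanford.edu:

```python
import numpy as np

def iht(y, A, k, iters=300):
    """Iterative hard thresholding: x <- H_k(x + step * A^T (y - A x)),
    where H_k keeps the k largest-magnitude entries and zeroes the rest."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size for stability
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + step * A.T @ (y - A @ x)     # gradient step on ||y - Ax||^2 / 2
        keep = np.argpartition(np.abs(g), -k)[-k:]
        x = np.zeros_like(g)
        x[keep] = g[keep]                    # hard threshold to the k largest entries
    return x

# usage: x_hat = iht(y, A, k=25), with y and A as in the AMP sketch above
```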
Precise Undersampling Theorems
"... Undersampling Theorems state that we may gather far fewer samples than the usual sampling theorem while exactly reconstructing the object of interest – provided the object in question obeys a sparsity condition, the samples measure appropriate linear combinations of signal values, and we reconstruc ..."
Abstract

Cited by 61 (4 self)
 Add to MetaCart
Undersampling theorems state that we may gather far fewer samples than the usual sampling theorem requires while exactly reconstructing the object of interest, provided the object in question obeys a sparsity condition, the samples measure appropriate linear combinations of signal values, and we reconstruct with a particular nonlinear procedure. While there are many ways to crudely demonstrate such undersampling phenomena, we know of only one approach which precisely quantifies the true sparsity-undersampling tradeoff curve of standard algorithms and standard compressed sensing matrices. That approach, based on combinatorial geometry, predicts the exact location in the sparsity-undersampling domain where standard algorithms exhibit phase transitions in performance. We review the phase transition approach here and describe the broad range of cases where it applies. We also mention exceptions and state challenge problems for future research. Sample result: one can efficiently reconstruct a k-sparse signal of length N from n measurements, provided n ≥ 2k · log(N/n), for (k, n, N) large, k ≪ N.
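The sample result is easy to evaluate numerically; a small sketch that solves the implicit condition n ≥ 2k · log(N/n) by scanning n (the sizes chosen are illustrative):

```python
import numpy as np

def min_measurements(k, N):
    """Smallest n satisfying n >= 2k * log(N / n), found by direct scan;
    n - 2k*log(N/n) is increasing in n, so the first crossing is the answer."""
    for n in range(1, N + 1):
        if n >= 2 * k * np.log(N / n):
            return n
    return N

for k, N in [(10, 10_000), (100, 10_000), (1_000, 100_000)]:
    print(f"k={k}, N={N}: n >= {min_measurements(k, N)}")
```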
Expectation-Maximization Gaussian-Mixture Approximate Message Passing
"... Abstract—When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal’s nonzero coefficients can have a profound affect on recovery meansquared error (MSE). If this distribution was apriori known, one could use efficient approximate message passing (AM ..."
Abstract

Cited by 41 (15 self)
 Add to MetaCart
(Show Context)
Abstract—When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal's nonzero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution were known a priori, one could use efficient approximate message passing (AMP) techniques for nearly minimum-MSE (MMSE) recovery. In practice, though, the distribution is unknown, motivating the use of robust algorithms like Lasso, which is nearly minimax optimal, at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, we propose an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal, according to the learned distribution, using AMP. In particular, we model the nonzero distribution as a Gaussian mixture and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments confirm the state-of-the-art performance of our approach on a range of signal classes.
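A hedged sketch of the expectation-maximization half of the method, fitting a scalar Gaussian mixture by EM on its own (the paper instead uses AMP to implement the expectation step inside the recovery loop); the function name and standalone setting are assumptions for illustration:

```python
import numpy as np

def gm_em(r, L=3, iters=50, seed=0):
    """Fit an L-component scalar Gaussian mixture to samples r by EM.
    Returns (weights, means, variances)."""
    rng = np.random.default_rng(seed)
    w = np.full(L, 1.0 / L)
    mu = rng.choice(r, L, replace=False)
    var = np.full(L, np.var(r))
    for _ in range(iters):
        # E-step: responsibilities gamma[i, l] ~ w_l * Normal(r_i; mu_l, var_l)
        d2 = (r[:, None] - mu[None, :]) ** 2
        logp = -0.5 * d2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(w)
        logp -= logp.max(axis=1, keepdims=True)   # stabilize before exponentiating
        gamma = np.exp(logp)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances from the responsibilities
        Nl = gamma.sum(axis=0)
        w = Nl / len(r)
        mu = (gamma * r[:, None]).sum(axis=0) / Nl
        var = (gamma * (r[:, None] - mu[None, :]) ** 2).sum(axis=0) / Nl + 1e-12
    return w, mu, var

# toy usage: a spike-plus-slab-like sample
rng = np.random.default_rng(4)
r = np.concatenate([rng.normal(0.0, 0.1, 800), rng.normal(3.0, 0.5, 200)])
print(gm_em(r, L=2))
```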
Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising
2011
"... Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse ob ..."
Abstract

Cited by 40 (4 self)
 Add to MetaCart
(Show Context)
Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to approximate message passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft-thresholding denoiser. This paper gives several examples, including scalar denoisers not derived from convex penalization (the firm shrinkage nonlinearity and the minimax nonlinearity) and also non-scalar denoisers (block thresholding, monotone regression, and total variation minimization). Let the variables ε = k/N and δ = n/N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x0 according to y = Ax0. Here A is an n × N measurement matrix whose entries are iid standard Gaussian. The formula states that the phase transition curve δ = δ(ε) separating successful from unsuccessful reconstruction of x0 is given by the minimax MSE of the corresponding denoising problem.
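One of the non-convex scalar denoisers named above, the firm shrinkage nonlinearity, in its commonly cited piecewise form (an assumption; the paper's exact parameterization may differ):

```python
import numpy as np

def firm_shrinkage(v, lam1, lam2):
    """Firm shrinkage (Gao-Bruce): zero below lam1, identity above lam2,
    and a steeper-than-soft linear section in between (requires lam2 > lam1)."""
    a = np.abs(v)
    mid = np.sign(v) * lam2 * (a - lam1) / (lam2 - lam1)
    return np.where(a <= lam1, 0.0, np.where(a >= lam2, v, mid))
```

Substituting such a denoiser for the soft threshold inside an AMP loop (as in the first sketch above) is the kind of generalization the formula covers.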
Living on the edge: A geometric theory of phase transitions in convex optimization
2013
"... Recent empirical research indicates that many convex optimization problems with random constraints exhibit a phase transition as the number of constraints increases. For example, this phenomenon emerges in the `1 minimization method for identifying a sparse vector from random linear samples. Indee ..."
Abstract

Cited by 35 (4 self)
 Add to MetaCart
(Show Context)
Recent empirical research indicates that many convex optimization problems with random constraints exhibit a phase transition as the number of constraints increases. For example, this phenomenon emerges in the ℓ1 minimization method for identifying a sparse vector from random linear samples. Indeed, this approach succeeds with high probability when the number of samples exceeds a threshold that depends on the sparsity level; otherwise, it fails with high probability. This paper provides the first rigorous analysis that explains why phase transitions are ubiquitous in random convex optimization problems. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems with random measurements, to demixing problems under a random incoherence model, and also to cone programs with random affine constraints. These applications depend on foundational research in conic geometry. This paper introduces a new summary parameter, called the statistical dimension, that canonically extends the dimension of a linear subspace to the class of convex cones. The main technical result demonstrates that the sequence of conic intrinsic volumes of a convex cone concentrates sharply near the statistical dimension. This fact leads to an approximate version of the conic kinematic formula that gives bounds on the probability that a randomly oriented cone shares a ray with a fixed cone.
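A hedged numerical sketch of the statistical dimension for one running example, the descent cone of the ℓ1 norm at a ρ-sparse point, assuming the commonly cited variational expression for this cone; the closed-form tail expectation is standard Gaussian calculus:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def stat_dim_l1(rho):
    """Normalized statistical dimension of the l1 descent cone at a rho-sparse
    point: inf over tau of rho*(1 + tau^2) + (1 - rho)*E[(|g| - tau)_+^2],
    using E[(|g| - tau)_+^2] = 2*((1 + tau^2)*Phi(-tau) - tau*phi(tau))."""
    def objective(tau):
        tail = (1.0 + tau ** 2) * norm.sf(tau) - tau * norm.pdf(tau)
        return rho * (1.0 + tau ** 2) + (1.0 - rho) * 2.0 * tail
    return minimize_scalar(objective, bounds=(0.0, 20.0), method="bounded").fun

# predicted transition: l1 minimization succeeds once n/N exceeds this value
for rho in (0.05, 0.1, 0.2):
    print(f"sparsity fraction {rho}: n/N above ~{stat_dim_l1(rho):.3f}")
```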
Various thresholds for ℓ1-optimization in compressed sensing
2009
"... Recently, [14, 28] theoretically analyzed the success of a polynomial ℓ1optimization algorithm in solving an underdetermined system of linear equations. In a large dimensional and statistical context [14, 28] proved that if the number of equations (measurements in the compressed sensing terminolog ..."
Abstract

Cited by 33 (17 self)
 Add to MetaCart
Recently, [14, 28] theoretically analyzed the success of a polynomial-time ℓ1-optimization algorithm in solving an underdetermined system of linear equations. In a large-dimensional and statistical context, [14, 28] proved that if the number of equations (measurements, in the compressed sensing terminology) in the system is proportional to the length of the unknown vector, then there is a sparsity (number of nonzero elements of the unknown vector), also proportional to the length of the unknown vector, such that ℓ1-optimization succeeds in solving the system. In this paper, we provide an alternative performance analysis of ℓ1-optimization and obtain proportionality constants that in certain cases match or improve on the best currently known ones from [28, 29].
The noise-sensitivity phase transition in compressed sensing, arXiv:1004.1218
2010
"... Consider the noisy underdetermined system of linear equations: y = Ax0 + z0, with n × N measurement matrix A, n < N, and Gaussian white noise z0 ∼ N(0, σ2I). Both y and A are known, both x0 and z0 are unknown, and we seek an approximation to x0. When x0 has few nonzeros, useful approximations are ..."
Abstract

Cited by 30 (2 self)
 Add to MetaCart
(Show Context)
Consider the noisy underdetermined system of linear equations y = Ax0 + z0, with n × N measurement matrix A, n < N, and Gaussian white noise z0 ∼ N(0, σ²I). Both y and A are known; both x0 and z0 are unknown, and we seek an approximation to x0. When x0 has few nonzeros, useful approximations are often obtained by ℓ1-penalized ℓ2 minimization, in which the reconstruction x̂₁,λ solves min ‖y − Ax‖₂²/2 + λ‖x‖₁. Evaluate performance by mean-squared error (MSE = E‖x̂₁,λ − x0‖₂²/N). Consider matrices A with iid Gaussian entries and a large-system limit in which n, N → ∞ with n/N → δ and k/n → ρ. Call the ratio MSE/σ² the noise sensitivity. We develop formal expressions for the MSE of x̂₁,λ and evaluate its worst-case formal noise sensitivity over all types of k-sparse signals. The phase space 0 ≤ δ, ρ ≤ 1 is partitioned by the curve ρ = ρMSE(δ) into two regions. Formal noise sensitivity is bounded throughout the region ρ < ρMSE(δ) and is unbounded throughout the region ρ > ρMSE(δ). The phase boundary ρ = ρMSE(δ) is identical to the previously known phase transition curve for equivalence of ℓ1 and ℓ0 minimization in the k-sparse noiseless case. Hence a single phase transition curve describes both of these apparently different phenomena.
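A minimal sketch of the reconstruction being analyzed, solving min ‖y − Ax‖₂²/2 + λ‖x‖₁ by ISTA (a generic proximal-gradient solver, not the paper's analysis machinery), with an empirical noise-sensitivity readout; λ and the sizes are illustrative assumptions:

```python
import numpy as np

def ista(y, A, lam, iters=500):
    """Proximal gradient (ISTA) for min_x ||y - Ax||_2^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold (prox of l1)
    return x

# empirical noise-sensitivity readout on a toy instance
rng = np.random.default_rng(3)
n, N, k, sigma = 250, 500, 25, 0.1
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + sigma * rng.standard_normal(n)
mse = np.mean((ista(y, A, lam=0.05) - x0) ** 2)
print("empirical MSE / sigma^2:", mse / sigma ** 2)
```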