Results 1–10 of 31
Message passing algorithms for compressed sensing: I. motivation and construction
 Proc. ITW
, 2010
"... Abstract—In a recent paper, the authors proposed a new class of lowcomplexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements [1]. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the second of tw ..."
Abstract

Cited by 67 (9 self)
 Add to MetaCart
In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements [1]. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the second of two conference papers describing the derivation of these algorithms, their connection with related literature, extensions of the original framework, and new empirical evidence. This paper describes the state evolution formalism for analyzing these algorithms and some of the conclusions that can be drawn from it. We carried out extensive numerical simulations to confirm these predictions; we present a few representative results here.
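The AMP recursion behind this abstract is compact enough to sketch. Below is a minimal Python illustration with the scalar soft-thresholding denoiser and a fixed threshold theta (an assumption for brevity; the papers set the threshold via state evolution). The distinctive ingredient relative to plain iterative thresholding is the Onsager correction added to the residual.

import numpy as np

def soft(u, theta):
    # Scalar soft-thresholding denoiser eta(u; theta).
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def amp(y, A, theta, iters=30):
    # y: measurements (n,); A: sensing matrix (n, N); theta: fixed threshold.
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        x_new = soft(x + A.T @ z, theta)
        # Onsager correction: (1/delta) * <eta'> * z, where <eta'> for soft
        # thresholding equals the fraction of coordinates above threshold.
        z = y - A @ x_new + z * np.count_nonzero(x_new) / n
        x = x_new
    return x

State evolution, as described in the abstract, then tracks the effective noise level of the vector x + A.T @ z across iterations.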
Optimally tuned iterative reconstruction algorithms for compressed sensing
 IEEE J. Sel. Topics Signal Process.
"... Abstract — We conducted an extensive computational experiment, lasting multiple CPUyears, to optimally select parameters for two important classes of algorithms for finding sparse solutions of underdetermined systems of linear equations. We make the optimally tuned implementations available at spar ..."
Abstract

Cited by 17 (4 self)
 Add to MetaCart
We conducted an extensive computational experiment, lasting multiple CPU-years, to optimally select parameters for two important classes of algorithms for finding sparse solutions of underdetermined systems of linear equations. We make the optimally tuned implementations available at sparselab.stanford.edu; they run ‘out of the box’ with no user tuning: it is not necessary to select thresholds or know the likely degree of sparsity. Our class of algorithms includes iterative hard and soft thresholding with or without relaxation, as well as CoSaMP, subspace pursuit, and some natural extensions; with this tuning, our implementations dominate the originally proposed versions of these algorithms. Our notion of optimality is defined in terms of phase transitions, i.e. we maximize the number of nonzeros at which the algorithm can successfully operate. We show that the phase transition is a well-defined quantity with our suite of random underdetermined linear systems, and our tuning gives the highest transition possible within each class of algorithms. We verify by extensive computation the robustness of our recommendations to the amplitude distribution of the nonzero coefficients as well as to the matrix ensemble defining the underdetermined system. Our findings include: (a) for all algorithms, the worst amplitude distribution for nonzeros is generally the constant-amplitude random-sign distribution, where all nonzeros have the same amplitude; (b) various random matrix ensembles give the same phase transitions, while random partial isometries may give different transitions and require different tuning; (c) optimally tuned subspace pursuit dominates optimally tuned CoSaMP, particularly when the system is almost square.
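As a concrete instance of one algorithm class tuned in this study, here is a minimal Python sketch of plain iterative hard thresholding. The baseline below takes the sparsity k and a step size as inputs, whereas the tuned implementations described in the abstract choose such parameters automatically; treat these parameters as assumptions for illustration.

import numpy as np

def hard_threshold(u, k):
    # Keep the k largest-magnitude entries and zero out the rest.
    out = np.zeros_like(u)
    keep = np.argsort(np.abs(u))[-k:]
    out[keep] = u[keep]
    return out

def iht(y, A, k, step=1.0, iters=200):
    # Plain iterative hard thresholding: gradient step, then projection.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), k)
    return x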
Precise Undersampling Theorems
"... Undersampling Theorems state that we may gather far fewer samples than the usual sampling theorem while exactly reconstructing the object of interest – provided the object in question obeys a sparsity condition, the samples measure appropriate linear combinations of signal values, and we reconstruc ..."
Abstract

Cited by 17 (2 self)
 Add to MetaCart
Undersampling theorems state that we may gather far fewer samples than the usual sampling theorem requires while exactly reconstructing the object of interest, provided the object in question obeys a sparsity condition, the samples measure appropriate linear combinations of signal values, and we reconstruct with a particular nonlinear procedure. While there are many ways to crudely demonstrate such undersampling phenomena, we know of only one approach which precisely quantifies the true sparsity–undersampling tradeoff curve of standard algorithms and standard compressed sensing matrices. That approach, based on combinatorial geometry, predicts the exact location in the sparsity–undersampling domain where standard algorithms exhibit phase transitions in performance. We review the phase transition approach here and describe the broad range of cases where it applies. We also mention exceptions and state challenge problems for future research. Sample result: one can efficiently reconstruct a k-sparse signal of length N from n measurements, provided n ≥ 2k · log(N/n), for (k, n, N) large and k ≪ N.
Structured compressed sensing: From theory to applications
 IEEE Trans. Signal Process
, 2011
"... Abstract—Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discretetodiscrete measurement architectures using matrices of randomized nature and signal models based on ..."
Abstract

Cited by 17 (6 self)
 Add to MetaCart
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of a randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field and as a reference for researchers, one that attempts to put some of the existing ideas in the perspective of practical applications.
Index Terms—Approximation algorithms, compressed sensing, compression algorithms, data acquisition, data compression, sampling methods.
Asymptotic analysis of complex LASSO via complex approximate message passing
 IEEE Trans. Inf. Theory
, 2011
"... Recovering a sparse signal from an undersampled set of random linear measurements is the main problem of interest in compressed sensing. In this paper, we consider the case where both the signal and the measurements are complexvalued. We study the popular reconstruction method of ℓ1regularized lea ..."
Abstract

Cited by 10 (3 self)
 Add to MetaCart
Recovering a sparse signal from an undersampled set of random linear measurements is the main problem of interest in compressed sensing. In this paper, we consider the case where both the signal and the measurements are complex-valued. We study the popular reconstruction method of ℓ1-regularized least squares, or LASSO. While several studies have shown that the LASSO algorithm offers desirable solutions under certain conditions, the precise asymptotic performance of this algorithm in the complex setting is not yet known. In this paper, we extend the approximate message passing (AMP) algorithm to complex-valued signals and measurements to obtain the complex approximate message passing algorithm (CAMP). We then generalize the state evolution framework, recently introduced for the analysis of AMP, to the complex setting. Using state evolution, we derive accurate formulas for the phase transition and noise sensitivity of both LASSO and CAMP. Our results are proved theoretically for i.i.d. Gaussian sensing matrices, but we confirm through simulations that they hold for a larger class of random matrices.
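The scalar nonlinearity that changes in the complex setting is soft thresholding, which now shrinks the magnitude while preserving the phase. A minimal Python sketch of that denoiser follows (the full CAMP iteration and its threshold schedule are omitted here):

import numpy as np

def complex_soft(u, theta):
    # Shrink |u| by theta and preserve the phase of u; u may be complex.
    mag = np.abs(u)
    scale = np.maximum(1.0 - theta / np.maximum(mag, 1e-12), 0.0)
    return scale * u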
Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising
, 2011
"... Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse ob ..."
Abstract

Cited by 8 (2 self)
 Add to MetaCart
Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to Approximate Message Passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft-thresholding denoiser. This paper gives several examples, including scalar denoisers not derived from convex penalization (the firm shrinkage nonlinearity and the minimax nonlinearity) and also nonscalar denoisers (block thresholding, monotone regression, and total variation minimization). Let the variables ε = k/N and δ = n/N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x0 according to y = Ax0, where A is an n × N measurement matrix whose entries are i.i.d. standard Gaussian. The formula states that the phase transition curve δ = δ(ε) separating successful from unsuccessful reconstruction of x0 …
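A minimal Python sketch of the generalized recursion, with the denoiser passed in as a function. The Onsager term needs the average derivative of the denoiser; the random-probe divergence estimate below is one common workaround for denoisers without closed-form derivatives, and is an assumption here rather than the paper's recipe.

import numpy as np

def amp_general(y, A, eta, iters=30, seed=0):
    # AMP with a pluggable denoiser eta: R^N -> R^N.
    rng = np.random.default_rng(seed)
    n, N = A.shape
    x, z = np.zeros(N), y.copy()
    for _ in range(iters):
        r = x + A.T @ z
        x = eta(r)
        # Estimate the mean derivative <eta'(r)> with a random probe.
        eps = 1e-3 * np.linalg.norm(r) / np.sqrt(N) + 1e-12
        probe = rng.standard_normal(N)
        div = probe @ (eta(r + eps * probe) - eta(r)) / (eps * N)
        z = y - A @ x + (N / n) * div * z   # Onsager-corrected residual
    return x

Any of the denoisers named in the abstract (firm shrinkage, block thresholding, and so on) can be dropped in as eta.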
Statistical challenges of high-dimensional data
 Phil. Trans. R. Soc. A
, 2009
"... Modern applications of statistical theory and methods can involve extremely large datasets, often with huge numbers of measurements on each of a comparatively small number of experimental units. New methodology and accompanying theory have emerged in response: the goal of this theme issue is to ill ..."
Abstract

Cited by 8 (0 self)
 Add to MetaCart
Modern applications of statistical theory and methods can involve extremely large datasets, often with huge numbers of measurements on each of a comparatively small number of experimental units. New methodology and accompanying theory have emerged in response: the goal of this theme issue is to illustrate a number of these recent developments. This overview article introduces the difficulties that arise with high-dimensional data in the context of the very familiar linear statistical model: we give a taste of what can nevertheless be achieved when the parameter vector of interest is sparse, that is, contains many zero elements. We describe other ways of identifying low-dimensional subspaces of the data space that contain all useful information. The topic of classification is then reviewed, along with the problem of identifying, from within a very large set, the variables that help to classify observations. Brief mention is made of the visualization of high-dimensional data, and ways to handle computational problems in Bayesian analysis are described. At appropriate points, reference is made to the other papers in the issue.
The noise-sensitivity phase transition in compressed sensing, arXiv:1004.1218
, 2010
"... Consider the noisy underdetermined system of linear equations: y = Ax0 + z0, with n × N measurement matrix A, n < N, and Gaussian white noise z0 ∼ N(0, σ2I). Both y and A are known, both x0 and z0 are unknown, and we seek an approximation to x0. When x0 has few nonzeros, useful approximations are of ..."
Abstract

Cited by 7 (2 self)
 Add to MetaCart
Consider the noisy underdetermined system of linear equations y = Ax0 + z0, with n × N measurement matrix A, n < N, and Gaussian white noise z0 ∼ N(0, σ²I). Both y and A are known, both x0 and z0 are unknown, and we seek an approximation to x0. When x0 has few nonzeros, useful approximations are often obtained by ℓ1-penalized ℓ2 minimization, in which the reconstruction x̂1,λ solves min ‖y − Ax‖₂²/2 + λ‖x‖₁. Evaluate performance by mean-squared error, MSE = E‖x̂1,λ − x0‖₂²/N. Consider matrices A with i.i.d. Gaussian entries and a large-system limit in which n, N → ∞ with n/N → δ and k/n → ρ. Call the ratio MSE/σ² the noise sensitivity. We develop formal expressions for the MSE of x̂1,λ and evaluate its worst-case formal noise sensitivity over all types of k-sparse signals. The phase space 0 ≤ δ, ρ ≤ 1 is partitioned by the curve ρ = ρMSE(δ) into two regions. Formal noise sensitivity is bounded throughout the region ρ < ρMSE(δ) and is unbounded throughout the region ρ > ρMSE(δ). The phase boundary ρ = ρMSE(δ) is identical to the previously known phase transition curve for equivalence of ℓ1 and ℓ0 minimization in the k-sparse noiseless case. Hence a single phase …
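For reference, x̂1,λ can be computed with standard iterative soft thresholding (ISTA); the sketch below is one generic solver for the stated objective, not the construction used in the paper's analysis.

import numpy as np

def ista(y, A, lam, iters=500):
    # Solves min ||y - A x||_2^2 / 2 + lam * ||x||_1 by proximal gradient.
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        u = x - A.T @ (A @ x - y) / L          # gradient step on the quadratic
        x = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold
    return x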
Sparse Legendre expansions via ℓ1-minimization
 Journal of Approximation Theory
"... We consider the problem of recovering polynomials that are sparse with respect to the basis of Legendre polynomials from a small number of random samples. In particular, we show that a Legendre ssparse polynomial of maximal degree N can be recovered from m ≃ s log 4 (N) random samples that are chos ..."
Abstract

Cited by 6 (3 self)
 Add to MetaCart
We consider the problem of recovering polynomials that are sparse with respect to the basis of Legendre polynomials from a small number of random samples. In particular, we show that a Legendre s-sparse polynomial of maximal degree N can be recovered from m ≃ s log^4(N) random samples that are chosen independently according to the Chebyshev probability measure dν(x) = π^(−1) (1 − x²)^(−1/2) dx. As an efficient recovery method, ℓ1-minimization can be used. We establish these results by verifying the restricted isometry property of a preconditioned random Legendre matrix. We then extend these results to a large class of orthogonal polynomial systems, including the Jacobi polynomials, of which the Legendre polynomials are a special case. Finally, we transpose these results into the setting of approximate recovery for functions in certain infinite-dimensional function spaces.
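A small Python sketch of the sampling-and-preconditioning construction: draw points from the Chebyshev measure, evaluate normalized Legendre polynomials, and reweight the rows. The normalization and weight w(x) = (π/2)^(1/2) (1 − x²)^(1/4) follow one common form of this construction and should be treated as assumptions; the resulting system is then handed to any ℓ1 solver.

import numpy as np
from numpy.polynomial import legendre

def preconditioned_legendre_matrix(m, N, seed=0):
    rng = np.random.default_rng(seed)
    # Draw m points from the Chebyshev measure via x = cos(pi * U), U uniform.
    x = np.cos(np.pi * rng.random(m))
    # Column k holds the Legendre polynomial P_k evaluated at the points.
    Phi = np.stack([legendre.legval(x, np.eye(N)[k]) for k in range(N)], axis=1)
    Phi *= np.sqrt(2 * np.arange(N) + 1)       # normalize w.r.t. dx/2 on [-1, 1]
    w = np.sqrt(np.pi / 2) * (1 - x ** 2) ** 0.25   # diagonal preconditioner
    return w[:, None] * Phi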
Various thresholds for ℓ1-optimization in compressed sensing
, 2009
"... Recently, [14, 28] theoretically analyzed the success of a polynomial ℓ1optimization algorithm in solving an underdetermined system of linear equations. In a large dimensional and statistical context [14, 28] proved that if the number of equations (measurements in the compressed sensing terminolog ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
Recently, [14, 28] theoretically analyzed the success of a polynomial-time ℓ1-optimization algorithm in solving an underdetermined system of linear equations. In a large-dimensional and statistical context, [14, 28] proved that if the number of equations (measurements, in the compressed sensing terminology) in the system is proportional to the length of the unknown vector, then there is a sparsity (number of nonzero elements of the unknown vector), also proportional to the length of the unknown vector, such that ℓ1-optimization succeeds in solving the system. In this paper, we provide an alternative performance analysis of ℓ1-optimization and obtain proportionality constants that in certain cases match or improve on the best currently known ones from [28, 29].
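The ℓ1-optimization problem analyzed here is the standard basis pursuit program, min ‖x‖₁ subject to Ax = y, which can be recast as a linear program. A minimal Python sketch using the common positive/negative split follows (a generic solver, not the paper's analysis technique):

import numpy as np
from scipy.optimize import linprog

def l1_minimize(A, y):
    # min ||x||_1 s.t. A x = y, via x = xp - xm with xp, xm >= 0.
    n, N = A.shape
    c = np.ones(2 * N)                         # objective: sum(xp + xm)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None), method="highs")
    xp, xm = res.x[:N], res.x[N:]
    return xp - xm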