Results 1–10 of 17
HARD THRESHOLDING PURSUIT: AN ALGORITHM FOR COMPRESSIVE SENSING
Abstract

Cited by 22 (0 self)
We introduce a new iterative algorithm to find sparse solutions of underdetermined linear systems. The algorithm, a simple combination of the Iterative Hard Thresholding algorithm and of the Compressive Sampling Matching Pursuit or Subspace Pursuit algorithms, is called Hard Thresholding Pursuit. We study its general convergence, and notice in particular that only a finite number of iterations are required. We then show that, under a certain condition on the restricted isometry constant of the matrix of the system, the Hard Thresholding Pursuit algorithm indeed finds all s-sparse solutions. This condition, which reads δ3s < 1/√3, is heuristically better than the sufficient conditions currently available for other Compressive Sensing algorithms. It applies to fast versions of the algorithm, too, including the Iterative Hard Thresholding algorithm. Stability with respect to sparsity defect and robustness with respect to measurement error are also guaranteed under the condition δ3s < 1/√3. We conclude with some numerical experiments to demonstrate the good empirical performance and the low complexity of the Hard Thresholding Pursuit algorithm.
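The two-step structure described in the abstract — an IHT-style thresholded gradient step to pick a support, followed by a CoSaMP/SP-style least-squares solve on that support — can be sketched as follows. This is a minimal illustration assuming unit step size and a well-scaled measurement matrix, not the authors' reference implementation.

```python
import numpy as np

def hard_threshold_pursuit(A, y, s, max_iter=50):
    """Sketch of Hard Thresholding Pursuit (HTP).

    Each iteration: (1) take a gradient step and keep the s largest
    entries, as in Iterative Hard Thresholding; (2) solve a least-squares
    problem restricted to that support, as in CoSaMP/Subspace Pursuit.
    Step size 1 is assumed for simplicity.
    """
    n, N = A.shape
    x = np.zeros(N)
    support = np.array([], dtype=int)
    for _ in range(max_iter):
        # Step 1: gradient step on ||y - Ax||^2 / 2, then pick s largest entries.
        g = x + A.T @ (y - A @ x)
        new_support = np.argsort(np.abs(g))[-s:]
        if set(new_support) == set(support):
            break  # support stabilized: HTP has converged in finitely many steps
        support = new_support
        # Step 2: least squares restricted to the chosen support.
        coeffs = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        x = np.zeros(N)
        x[support] = coeffs
    return x
```

Note that convergence in a finite number of iterations, mentioned in the abstract, corresponds to the support set no longer changing between iterations.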
Asymptotic analysis of complex LASSO via complex approximate message passing
 IEEE Trans. Inf. Theory
, 2011
Abstract

Cited by 10 (3 self)
Recovering a sparse signal from an undersampled set of random linear measurements is the main problem of interest in compressed sensing. In this paper, we consider the case where both the signal and the measurements are complex-valued. We study the popular reconstruction method of ℓ1-regularized least squares, or LASSO. While several studies have shown that the LASSO algorithm offers desirable solutions under certain conditions, the precise asymptotic performance of this algorithm in the complex setting is not yet known. In this paper, we extend the approximate message passing (AMP) algorithm to complex-valued signals and measurements to obtain the complex approximate message passing algorithm (CAMP). We then generalize the state evolution framework recently introduced for the analysis of AMP to the complex setting. Using the state evolution, we derive accurate formulas for the phase transition and noise sensitivity of both LASSO and CAMP. Our results are theoretically proved for the case of i.i.d. Gaussian sensing matrices, but we confirm through simulations that our results hold for a larger class of random matrices.
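The scalar nonlinearity at the heart of complex LASSO and the CAMP iteration is soft thresholding extended to complex numbers: the magnitude of each entry is shrunk while its phase is preserved. A minimal sketch of that operator (the standard definition, not the paper's exact implementation):

```python
import numpy as np

def complex_soft_threshold(z, tau):
    """Complex soft thresholding: shrink each entry's magnitude by tau,
    keeping its phase. Entries with |z| <= tau are set to zero."""
    mag = np.abs(z)
    # Avoid division by zero: entries with mag == 0 get scale 0 anyway.
    scale = np.maximum(mag - tau, 0.0) / np.where(mag > 0, mag, 1.0)
    return scale * z
```

In the real case this reduces to the usual soft-threshold denoiser used in AMP; CAMP applies it entrywise to complex iterates.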
AN ALPS VIEW OF SPARSE RECOVERY
Abstract

Cited by 8 (5 self)
We provide two compressive sensing (CS) recovery algorithms based on iterative hard thresholding. The algorithms, collectively dubbed algebraic pursuits (ALPS), exploit the restricted isometry properties of the CS measurement matrix within the algebra of Nesterov's optimal gradient methods. We theoretically characterize the approximation guarantees of ALPS for signals that are sparse on orthobases as well as on tight frames. Simulation results demonstrate great potential for ALPS in terms of phase transition, noise robustness, and CS reconstruction.
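The combination the abstract describes — hard thresholding driven by a Nesterov-style accelerated gradient step — can be sketched as below. This is an illustrative momentum variant with unit step size, not the authors' exact ALPS scheme.

```python
import numpy as np

def hard_threshold(z, s):
    """Keep the s largest-magnitude entries of z, zero the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-s:]
    out[idx] = z[idx]
    return out

def accelerated_iht(A, y, s, max_iter=100):
    """Hard thresholding with a Nesterov-style extrapolation step, in the
    spirit of ALPS (a sketch, not the published algorithm).

    The gradient step is taken from an extrapolated point v rather than
    the previous iterate x, then projected onto s-sparse vectors."""
    N = A.shape[1]
    x_prev = np.zeros(N)
    x = np.zeros(N)
    t = 1.0
    for _ in range(max_iter):
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        v = x + ((t - 1) / t_next) * (x - x_prev)  # momentum/extrapolation
        g = v + A.T @ (y - A @ v)                  # gradient step, unit step size
        x_prev, x = x, hard_threshold(g, s)
        t = t_next
    return x
```

The extrapolation weights follow the classical Nesterov sequence; ALPS additionally exploits RIP to pick step sizes, which is omitted here.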
Phase transitions for greedy sparse approximation algorithms
 Submitted
, 2009
Abstract

Cited by 8 (5 self)
A major enterprise in compressed sensing and sparse approximation is the design and analysis of computationally tractable algorithms for recovering sparse, exact or approximate, solutions of underdetermined linear systems of equations. Many such algorithms have now been proven, using the ubiquitous Restricted Isometry Property (RIP) [9], to have optimal-order uniform recovery guarantees. However, it is unclear when the RIP-based sufficient conditions on the algorithm are satisfied. We present a framework in which this task can be achieved, translating these conditions for Gaussian measurement matrices into requirements on the signal's sparsity level, size, and number of measurements. We illustrate this approach on three of the state-of-the-art greedy algorithms: CoSaMP [27], Subspace Pursuit (SP) [11], and Iterated Hard Thresholding (IHT) [6]. Designed to allow a direct comparison of existing theory, our framework implies that IHT, the lowest of the three in computational cost, also requires fewer compressed sensing measurements than CoSaMP and SP. Key words: compressed sensing, greedy algorithms, sparse solutions to underdetermined
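Of the three greedy algorithms compared above, IHT is the simplest: one gradient step followed by hard thresholding per iteration, with no least-squares solve. A minimal sketch with a fixed step size (the cited analyses assume RIP conditions on A that this illustration does not check):

```python
import numpy as np

def iht(A, y, s, mu=1.0, max_iter=100):
    """Iterated/Iterative Hard Thresholding (sketch).

    Per iteration: one pair of matrix-vector products for the gradient
    step on ||y - Ax||^2 / 2, then projection onto s-sparse vectors by
    keeping the s largest-magnitude entries. Step size mu is assumed
    fixed; theory typically requires A to satisfy a RIP condition."""
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        g = x + mu * A.T @ (y - A @ x)
        keep = np.argsort(np.abs(g))[-s:]  # indices of s largest entries
        x = np.zeros_like(g)
        x[keep] = g[keep]
    return x
```

The low per-iteration cost visible here is what makes the paper's conclusion — that IHT also needs the fewest measurements among the three — notable.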
Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising
, 2011
Abstract

Cited by 8 (2 self)
Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to Approximate Message Passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft thresholding denoiser. This paper gives several examples, including scalar denoisers not derived from convex penalization – the firm shrinkage nonlinearity and the minimax nonlinearity – and also non-scalar denoisers – block thresholding, monotone regression, and total variation minimization. Let the variables ε = k/N and δ = n/N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x0 according to y = Ax0. Here A is an n × N measurement matrix whose entries are i.i.d. standard Gaussian. The formula states that the phase transition curve δ = δ(ε) separating successful from unsuccessful reconstruction of x0
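One of the non-convex scalar denoisers named above is firm shrinkage, which behaves like soft thresholding for small inputs, leaves large inputs untouched, and interpolates in between. A sketch of the standard definition (the paper plugs such denoisers into the AMP iteration in place of soft thresholding):

```python
import numpy as np

def firm_shrinkage(x, lam1, lam2):
    """Firm shrinkage nonlinearity with thresholds lam1 < lam2.

    |x| <= lam1          -> 0
    lam1 < |x| < lam2    -> sign(x) * lam2 * (|x| - lam1) / (lam2 - lam1)
    |x| >= lam2          -> x  (identity: large entries pass through)
    """
    ax = np.abs(x)
    return np.where(
        ax <= lam1, 0.0,
        np.where(ax >= lam2, x,
                 np.sign(x) * lam2 * (ax - lam1) / (lam2 - lam1)))
```

Unlike soft thresholding, firm shrinkage introduces no bias on entries larger than lam2, which is one reason such denoisers can shift the phase transition curve.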
On Accelerated Hard Thresholding Methods for Sparse Approximation
, 2011
Abstract

Cited by 6 (2 self)
We propose and analyze acceleration schemes for hard thresholding methods with applications to sparse approximation in linear inverse systems. Our acceleration schemes fuse combinatorial, sparse projection algorithms with convex optimization algebra to provide computationally efficient and robust sparse recovery methods. We compare and contrast the (dis)advantages of the proposed schemes with the state-of-the-art, not only within hard thresholding methods, but also within convex sparse recovery algorithms.
Optimal phase transitions in compressed sensing
 IEEE Trans. Inf. Theory
Abstract

Cited by 3 (2 self)
Abstract—Compressed sensing deals with efficient recovery of analog signals from linear encodings. This paper presents a statistical study of compressed sensing by modeling the input signal as an i.i.d. process with known distribution. Three classes of encoders are considered, namely optimal nonlinear, optimal linear, and random linear encoders. Focusing on optimal decoders, we investigate the fundamental tradeoff between measurement rate and reconstruction fidelity gauged by error probability and noise sensitivity in the absence and presence of measurement noise, respectively. The optimal phase-transition threshold is determined as a functional of the input distribution and compared to suboptimal thresholds achieved by popular reconstruction algorithms. In particular, we show that Gaussian sensing matrices incur no penalty on the phase-transition threshold with respect to optimal nonlinear encoding. Our results also provide a rigorous justification of previous results based on replica heuristics in the weak-noise regime. Index Terms—Compressed sensing, joint source-channel coding, minimum mean-square error (MMSE) dimension, phase transition, random matrix, Rényi information dimension, Shannon theory.
Fast Hard Thresholding with Nesterov’s Gradient Method
Abstract

Cited by 1 (0 self)
We provide an algorithmic framework for structured sparse recovery which unifies combinatorial optimization with the non-smooth convex optimization framework by Nesterov [1, 2]. Our algorithm, dubbed Nesterov iterative hard-thresholding (NIHT), is similar to the algebraic pursuits (ALPS) in [3] in spirit: we use the gradient information in the convex data error objective to navigate over the non-convex set of structured sparse signals. While ALPS feature a priori approximation guarantees, we were only able to provide an online approximation guarantee for NIHT (i.e., the guarantees depend on the algorithm's execution). Experiments show, however, that NIHT can empirically outperform ALPS and other state-of-the-art convex optimization-based algorithms in sparse recovery.
Virtual Probe: A Statistical Framework for Low-Cost Silicon Characterization of Nanoscale Integrated Circuits
Abstract

Cited by 1 (1 self)
Abstract—In this paper, we propose a new technique, referred to as virtual probe (VP), to efficiently measure, characterize, and monitor spatially correlated inter-die and/or intra-die variations in a nanoscale manufacturing process. VP exploits recent breakthroughs in compressed sensing to accurately predict spatial variations from an exceptionally small set of measurement data, thereby reducing the cost of silicon characterization. By exploring the underlying sparse pattern in the spatial frequency domain, VP achieves a substantially lower sampling frequency than the well-known Nyquist rate. In addition, VP is formulated as a linear programming problem and, therefore, can be solved both robustly and efficiently. Our industrial measurement data demonstrate the superior accuracy of VP over several traditional methods, including 2D interpolation, Kriging prediction, and k-LSE estimation. Index Terms—Characterization, compressed sensing, integrated circuit, process variation.
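The linear-programming formulation the abstract refers to is the standard one for ℓ1 minimization (basis pursuit): minimize ||α||₁ subject to Φα = y by splitting each |αᵢ| into an auxiliary bound variable tᵢ. A generic sketch using SciPy's LP solver — an illustration of the formulation, not the authors' solver:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min_lp(Phi, y):
    """Basis pursuit (min ||alpha||_1 s.t. Phi @ alpha = y) as a linear
    program. Variables are stacked as [alpha; t] with constraints
    -t <= alpha <= t, so minimizing sum(t) minimizes the l1 norm."""
    n, N = Phi.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])  # objective: sum of t
    I = np.eye(N)
    # alpha - t <= 0  and  -alpha - t <= 0  encode |alpha| <= t.
    A_ub = np.block([[I, -I], [-I, -I]])
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([Phi, np.zeros((n, N))])      # Phi @ alpha = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N]
```

In the VP setting, Φ would combine the sampling pattern with a sparsifying spatial-frequency basis (e.g. a DCT); here it is left as a generic matrix argument.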
Supporting Information to: Message Passing Algorithms for Compressed Sensing
, 2009
Abstract
This document presents details concerning analytical derivations and numerical experiments that support the claims made in the main text 'Message Passing Algorithms for Compressed Sensing', submitted for publication in the Proceedings of the National Academy of Sciences, USA. Hereafter 'main text'. One can find here:
• Derivations of explicit formulas for the MSE Map, and the optimal thresholds; see Section 3 below.
• Proof of Theorem 1; see Section 3 below.
• Proof of concavity of the MSE Map; see again Section 3 below.
• Explanation of the connection between Minimax Thresholding, Minimax Risk, and rigorous proof of formula [19] in the main text; see Section 4 below, Theorem 4.2.
• Formulas for the rate exponent b of Theorem 2 in the main text, expressed in terms of the minimax threshold risk; see Section 5 below.