Results 1–10 of 31
A Probabilistic and RIPless Theory of Compressed Sensing
, 2010
Abstract

Cited by 95 (3 self)
This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution F; it includes all models — e.g. Gaussian, frequency measurements — discussed in the literature, but also provides a framework for new measurement strategies as well. We prove that if the probability distribution F obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements. The novelty is that our recovery results do not require the restricted isometry property (RIP) — they make use of a much weaker notion — or a random model for the signal. As an example, the paper shows that a signal with s nonzero entries can be faithfully recovered from about s log n Fourier coefficients that are contaminated with noise.
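The isotropy property the abstract refers to, E[a a*] = I, can be checked empirically for the Fourier measurement model it mentions. The sketch below (the dimension n and the trial count are illustrative choices, not from the paper) draws random discrete Fourier rows and averages their outer products:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8          # signal dimension (illustrative)
trials = 4000  # number of sensing vectors drawn from F

# F draws a uniform frequency k and returns the Fourier row a[t] = exp(2*pi*i*k*t/n).
# Isotropy asks E[a a^*] = I; incoherence asks that the entries of a stay bounded
# (here |a[t]| = 1 for every t).
t = np.arange(n)
acc = np.zeros((n, n), dtype=complex)
for _ in range(trials):
    k = rng.integers(n)
    a = np.exp(2j * np.pi * k * t / n)
    acc += np.outer(a, a.conj())
emp = acc / trials

deviation = np.abs(emp - np.eye(n)).max()
print(f"max entrywise deviation of E[a a*] from I: {deviation:.3f}")
```

The diagonal is exactly 1 for every draw; the off-diagonal entries average roots of unity and shrink at the usual 1/sqrt(trials) Monte Carlo rate.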
Compressive Sensing
, 2010
Abstract

Cited by 50 (12 self)
Compressive sensing is a new type of sampling theory, which predicts that sparse signals and images can be reconstructed from what was previously believed to be incomplete information. As a main feature, efficient algorithms such as ℓ1-minimization can be used for recovery. The theory has many potential applications in signal processing and imaging. This chapter gives an introduction and overview on both theoretical and numerical aspects of compressive sensing.
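As a minimal illustration of ℓ1-based recovery, the sketch below runs iterative soft thresholding (ISTA), one standard solver for the ℓ1-penalized least-squares problem; the problem sizes, seed, and regularization weight are arbitrary illustrative choices, not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, s = 80, 200, 5

# Ground-truth s-sparse signal and Gaussian measurements y = A x0.
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
support = rng.choice(n, s, replace=False)
x0[support] = rng.choice([-1.0, 1.0], s)
y = A @ x0

# ISTA: proximal gradient descent on  0.5*||A x - y||^2 + lam*||x||_1.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L the gradient Lipschitz constant
x = np.zeros(n)
for _ in range(1500):
    g = x - step * (A.T @ (A @ x - y))                        # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft thresholding

rel_err = np.linalg.norm(x - x0) / np.linalg.norm(x0)
print(f"relative recovery error: {rel_err:.3f}")
```

With many more unknowns than measurements (200 vs. 80), the ℓ1 penalty still drives the iterate to the sparse ground truth, up to the small bias introduced by the penalty weight.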
The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing
 IEEE Trans. Inf. Theory
, 2014
Phase transitions for greedy sparse approximation algorithms. Available at arXiv
, 2009
Abstract

Cited by 23 (10 self)
Abstract This paper applies the phase transition framework to three greedy algorithms for sparse approximation in order to determine which performs best. Index Terms Compressed sensing, greedy algorithms, CoSaMP, iterative hard thresholding, subspace pursuit, sparsity, sparse approximation, sparse solutions to underdetermined systems, restricted isometry property, restricted isometry constants, phase transitions, convex relaxation, random matrices, Gaussian matrices, Wishart matrices, singular values of random matrices, eigenvalues of random matrices
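One of the greedy algorithms studied, iterative hard thresholding, fits in a few lines; the dimensions, seed, and step-size choice below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, s = 100, 256, 5

A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
support = rng.choice(n, s, replace=False)
x0[support] = rng.choice([-1.0, 1.0], s)
y = A @ x0

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

# Iterative hard thresholding: x <- H_s( x + mu * A^T (y - A x) ),
# with a conservative step mu = 1/||A||_2^2 so each gradient step is non-expansive.
mu = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    x = hard_threshold(x + mu * A.T @ (y - A @ x), s)

rel_err = np.linalg.norm(x - x0) / np.linalg.norm(x0)
print(f"relative recovery error: {rel_err:.3f}")
```

The phase transition framework of the paper asks, over many such random problem instances, for which (m/n, s/m) pairs iterations like this one succeed with high probability.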
The restricted isometry property for random block diagonal matrices, Applied and Computational Harmonic Analysis
, 2014
Abstract

Cited by 8 (3 self)
In Compressive Sensing, the Restricted Isometry Property (RIP) ensures that robust recovery of sparse vectors is possible from noisy, undersampled measurements via computationally tractable algorithms. It is by now well-known that Gaussian (or, more generally, sub-Gaussian) random matrices satisfy the RIP under certain conditions on the number of measurements. Their use can be limited in practice, however, due to storage limitations, computational considerations, or the mismatch of such matrices with certain measurement architectures. These issues have recently motivated considerable effort towards studying the RIP for structured random matrices. In this paper, we study the RIP for block diagonal measurement matrices where each block on the main diagonal is itself a sub-Gaussian random matrix. Our main result states that such matrices can indeed satisfy the RIP but that the requisite number of measurements depends on certain properties of the basis in which the signals are sparse. In the best case, these matrices perform nearly as well as dense Gaussian random matrices, despite having many fewer nonzero entries.
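A minimal empirical sketch of the setting (block sizes and trial counts are illustrative, not from the paper): build a block diagonal matrix with Gaussian blocks, scaled so it preserves energy in expectation, and check the energy ratio on random sparse vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
J, mj, nj, s = 16, 8, 32, 4          # 16 diagonal blocks, each 8 x 32
m, n = J * mj, J * nj                # overall 128 x 512 measurement matrix

# Assemble the block diagonal matrix; entries are i.i.d. N(0, 1/mj) per block,
# so that E ||Phi x||^2 = ||x||^2 for every x.
Phi = np.zeros((m, n))
for j in range(J):
    Phi[j*mj:(j+1)*mj, j*nj:(j+1)*nj] = rng.standard_normal((mj, nj)) / np.sqrt(mj)

# Energy ratios ||Phi x||^2 / ||x||^2 over random s-sparse unit vectors.
ratios = []
for _ in range(300):
    x = np.zeros(n)
    x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    x /= np.linalg.norm(x)
    ratios.append(np.linalg.norm(Phi @ x) ** 2)
ratios = np.array(ratios)
print(f"mean energy ratio: {ratios.mean():.3f}")
```

With such small blocks the per-vector concentration is visibly weaker than for a dense 128 x 512 Gaussian matrix, which is consistent with the abstract's point that the achievable RIP depends on how the sparse signal interacts with the block structure.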
Compressive CFAR radar detection
 In Proc. IEEE Radar Conference (RADAR)
, 2012
Abstract

Cited by 7 (4 self)
Abstract—In this paper we develop the first Compressive Sensing (CS) adaptive radar detector. We propose three novel architectures and demonstrate how a classical Constant False Alarm Rate (CFAR) detector can be combined with ℓ1-norm minimization. Using asymptotic arguments and the Complex Approximate Message Passing (CAMP) algorithm, we characterize the statistics of the ℓ1-norm reconstruction error and derive closed-form expressions for both the detection and false alarm probabilities. We support our theoretical findings with a range of experiments showing that our theoretical conclusions hold even in the non-asymptotic setting. We also report the results of a radar measurement campaign in which we designed ad hoc transmitted waveforms to obtain a set of CS frequency measurements. We compare the performance of our new detection schemes using Receiver Operating Characteristic (ROC) curves.
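As a sketch of the classical ingredient being combined with CS here, a cell-averaging CFAR detector on square-law-detected data can be written as follows; all parameters are illustrative assumptions, and this is the textbook CA-CFAR, not the paper's compressive detector:

```python
import numpy as np

rng = np.random.default_rng(4)
ncells, guard, train = 400, 2, 16    # range cells; guard / training cells per side
pfa = 1e-3                           # target false alarm probability

# Square-law-detected noise power is exponential; plant one strong target.
power = rng.exponential(1.0, ncells)
target_bin = 200
power[target_bin] += 80.0

# CA-CFAR scaling factor for exponential noise: alpha = N * (Pfa^(-1/N) - 1),
# where N is the total number of training cells.
N = 2 * train
alpha = N * (pfa ** (-1.0 / N) - 1.0)

detections = []
for i in range(guard + train, ncells - guard - train):
    left = power[i - guard - train : i - guard]          # leading training cells
    right = power[i + guard + 1 : i + guard + 1 + train]  # lagging training cells
    noise_est = (left.sum() + right.sum()) / N            # cell-averaged noise level
    if power[i] > alpha * noise_est:
        detections.append(i)

print("detections:", detections)
```

Because the threshold adapts to the locally estimated noise level, the false alarm rate stays near the design Pfa regardless of the absolute noise power; the paper's contribution is making this kind of guarantee survive ℓ1 reconstruction.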
Proof of Convergence and Performance Analysis for Sparse Recovery via Zero-point Attracting Projection
, 2011
Abstract

Cited by 6 (5 self)
A recursive algorithm named Zero-point Attracting Projection (ZAP) was recently proposed for sparse signal reconstruction. Compared with reference algorithms, ZAP demonstrates rather good performance in recovery precision and robustness. However, no theoretical analysis of the algorithm, not even a proof of its convergence, has been available. In this work, a rigorous proof of the convergence of ZAP is provided and the condition for convergence is put forward. Based on the theoretical analysis, it is further proved that ZAP is unbiased and can approach the sparse solution to any extent with the proper choice of step size. Furthermore, the case of inaccurate measurements in the noisy scenario is also discussed. It is proved that the disturbance power linearly reduces the recovery precision, which is predictable but not preventable. The reconstruction deviation of p-compressible signals is also provided. Finally, numerical simulations are performed to verify the theoretical analysis.
Alternating Maximization: Unifying Framework for 8 Sparse PCA Formulations and Efficient Parallel Codes
, 2012
Abstract

Cited by 5 (0 self)
Given a multivariate data set, sparse principal component analysis (SPCA) aims to extract several linear combinations of the variables that together explain the variance in the data as much as possible, while controlling the number of nonzero loadings in these combinations. In this paper we consider 8 different optimization formulations for computing a single sparse loading vector; these are obtained by combining the following factors: we employ two norms for measuring variance (L2, L1) and two sparsity-inducing norms (L0, L1), which are used in two different ways (constraint, penalty). Three of our formulations, notably the one with the L0 constraint and L1 variance, have not been considered in the literature. We give a unifying reformulation which we propose to solve via a natural alternating maximization (AM) method. We show that the AM method is nontrivially equivalent to GPower (Journée et al.; JMLR 11:517–553, 2010) for all our formulations. Besides this, we provide 24 efficient parallel SPCA implementations: 3 codes (multicore, GPU and cluster) for each of the 8 problems. Parallelism in the methods is aimed at i) speeding up computations (our GPU code can be 100 times faster than an efficient serial code written in C++), ii) obtaining solutions explaining more variance, and iii) dealing with big data problems (our cluster code is able to solve a 357 GB problem in about a minute).
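For one of the eight formulations (L2 variance with an L0 constraint), the alternating step has a particularly simple truncated-power form: multiply by the covariance, keep the k largest-magnitude coordinates, renormalize. The sketch below is a simplified single-vector illustration on a planted sparse spike, not the paper's parallel implementation:

```python
import numpy as np

rng = np.random.default_rng(5)
p, k = 50, 5

# Covariance with a planted k-sparse principal direction v.
v = np.zeros(p)
v[:k] = 1.0 / np.sqrt(k)
Sigma = np.eye(p) + 10.0 * np.outer(v, v)

# Maximize x^T Sigma x  s.t.  ||x||_2 = 1, ||x||_0 <= k, by alternating between
# the covariance multiply and a projection onto the sparsity constraint.
x = Sigma[:, np.argmax(np.diag(Sigma))]   # initialize from the largest-variance column
x /= np.linalg.norm(x)
for _ in range(100):
    g = Sigma @ x
    out = np.zeros(p)
    idx = np.argsort(np.abs(g))[-k:]      # keep the k largest-magnitude loadings
    out[idx] = g[idx]
    x = out / np.linalg.norm(out)

alignment = abs(x @ v)
print(f"alignment with planted direction: {alignment:.3f}")
```

On the fixed support the iteration is exactly power iteration, so with a strong spike it recovers both the support and the direction of the planted loading vector.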
Restricted isometry property in quantized network coding of sparse messages, arXiv preprint arXiv:1203.1892
, 2012
Abstract

Cited by 5 (4 self)
Abstract—In this paper, we study joint network coding and distributed source coding of inter-node dependent messages from the perspective of compressed sensing. Specifically, the theoretical guarantees for robust ℓ1-min recovery of an underdetermined set of linear network coded sparse messages are investigated. We discuss the guarantees for ℓ1-min decoding of quantized network coded messages based on the Restricted Isometry Property (RIP) of the resulting measurement matrix. This is done by deriving the relation between the tail probability of ℓ2-norms and the satisfaction of the RIP. The obtained relation is then used to compare our designed measurement matrix with an i.i.d. Gaussian measurement matrix in terms of RIP satisfaction. Finally, we present our numerical evaluations, which show that the proposed design of network coding coefficients results in a measurement matrix with RIP behavior similar to that of an i.i.d. Gaussian matrix. Index Terms—Compressed sensing, linear network coding, restricted isometry property, ℓ1-min decoding, Gaussian ensembles.
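The i.i.d. Gaussian baseline used for the comparison can be probed directly: the sketch below Monte Carlo estimates the tail probability of ℓ2-norm deviations for sparse vectors under that ensemble (the dimensions, sparsity, and δ are illustrative choices, and the paper's own network-coding matrix design is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, s, delta = 100, 256, 8, 0.5
trials = 500

# Monte Carlo estimate of P( | ||A x||_2^2 - ||x||_2^2 | > delta ) for unit-norm
# s-sparse x and A with i.i.d. N(0, 1/m) entries, the baseline Gaussian ensemble.
exceed = 0
for _ in range(trials):
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x = np.zeros(n)
    x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    x /= np.linalg.norm(x)
    if abs(np.linalg.norm(A @ x) ** 2 - 1.0) > delta:
        exceed += 1

tail_prob = exceed / trials
print(f"estimated tail probability at delta={delta}: {tail_prob:.3f}")
```

By rotation invariance, each ||A x||^2 here is a scaled chi-squared variable with m degrees of freedom, so the tail probability decays exponentially in m; a candidate measurement matrix whose empirical tail matches this curve is the kind of evidence the abstract's comparison relies on.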