Results 1–10 of 72
Computational methods for sparse solution of linear inverse problems
2009
Cited by 60 (0 self)
Abstract:
The goal of sparse approximation problems is to represent a target signal approximately as a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a wealth of applications.
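One family of practical algorithms the survey covers is greedy pursuit. As an illustration, here is a minimal orthogonal matching pursuit (OMP) sketch in NumPy; this is a toy, not code from the paper, and the dictionary, dimensions, and seed are invented for the demo:

```python
import numpy as np

def omp(A, y, k):
    """Greedy Orthogonal Matching Pursuit: pick k columns of A one at a
    time by correlation with the residual, re-fitting by least squares."""
    n = A.shape[1]
    support, x = [], np.zeros(n)
    residual = y.astype(float).copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # project y off the support
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16))
A /= np.linalg.norm(A, axis=0)                       # unit-norm dictionary atoms
x_true = np.zeros(16); x_true[[2, 7]] = [1.5, -2.0]  # 2-sparse target
x_hat = omp(A, A @ x_true, k=2)
```

The output is at most k-sparse by construction; whether it matches the target exactly depends on the dictionary's coherence, which is exactly the kind of condition the survey's theoretical guarantees address.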
Compressive Sensing and Structured Random Matrices
 RADON SERIES COMP. APPL. MATH XX, 1–95 © DE GRUYTER 20YY
Cited by 59 (13 self)
Abstract:
These notes give a mathematical introduction to compressive sensing, focusing on recovery using ℓ1-minimization and structured random matrices. An emphasis is put on techniques for proving probabilistic estimates for condition numbers of structured random matrices. Estimates of this type are key to providing conditions that ensure exact or approximate recovery of sparse vectors using ℓ1-minimization.
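The ℓ1-minimization (basis pursuit) problem min ‖x‖₁ s.t. Ax = y can be posed as a linear program via the standard splitting |x| ≤ t. The following sketch is my own illustration of that reduction using `scipy.optimize.linprog`, not code from these notes; the test instance is invented:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 s.t. Ax = y, as an LP in (x, t):
    min 1^T t  s.t.  x - t <= 0, -x - t <= 0, Ax = y."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])     # encodes -t <= x <= t
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])  # equality acts on x only
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n,
                  method="highs")
    return res.x[:n]

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 16))
x_true = np.zeros(16); x_true[[3, 11]] = [1.0, -1.0]  # 2-sparse target
x_hat = basis_pursuit(A, A @ x_true)
```

The LP returns a feasible point of minimal ℓ1 norm; the probabilistic estimates in these notes give conditions under which that minimizer coincides with the sparse target.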
ModifiedCS: Modifying compressive sensing for problems with partially known support
 in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2009
Cited by 42 (14 self)
Abstract:
We study the problem of reconstructing a sparse signal from a limited number of its linear projections when a part of its support is known, although the known part may contain some errors. The known part of the support may be available from prior knowledge. Alternatively, when recursively reconstructing a time sequence of sparse spatial signals, one may use the support estimate from the previous time instant as the known part. The idea of our proposed solution (modifiedCS) is to solve a convex relaxation of the following problem: find the signal that satisfies the data constraint and is sparsest outside the known part of the support. We obtain sufficient conditions for exact reconstruction using modifiedCS. These are much weaker than those needed for compressive sensing (CS) when the sizes of the unknown part of the support and of the errors in the known part are small compared to the support size. An important extension, called regularized modifiedCS (RegModCS), is developed which also uses prior signal estimate knowledge. Simulation comparisons for both sparse and compressible signals are shown. Index Terms—Compressive sensing, modifiedCS, partially known support, prior knowledge, sparse reconstruction.
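The convex relaxation described above, minimizing the ℓ1 norm only over entries outside the known support, can be sketched as a weighted-ℓ1 linear program. This is my own toy illustration (invented dimensions; the known index set is passed as `T`), not the paper's implementation:

```python
import numpy as np
from scipy.optimize import linprog

def modified_cs(A, y, T):
    """min ||x restricted to T^c||_1  s.t.  Ax = y:
    a weighted l1 LP with zero weight on the known support T."""
    m, n = A.shape
    w = np.ones(n); w[list(T)] = 0.0         # entries on T are unpenalized
    c = np.concatenate([np.zeros(n), w])
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])     # encodes -t <= x <= t
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n,
                  method="highs")
    return res.x[:n]

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 20))
x_true = np.zeros(20); x_true[[1, 5, 12]] = [2.0, -1.0, 0.5]
x_hat = modified_cs(A, A @ x_true, T={1, 5})  # part of the support known
```

Setting the weight to zero on T is what makes the objective "sparsest outside the known part"; when T is empty this reduces to ordinary basis pursuit.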
Circulant and Toeplitz Matrices in Compressed Sensing
Cited by 32 (9 self)
Abstract:
Compressed sensing seeks to recover a sparse vector from a small number of linear and non-adaptive measurements. While most work so far focuses on Gaussian or Bernoulli random measurements, we investigate the use of partial random circulant and Toeplitz matrices in connection with recovery by ℓ1-minimization. In contrast to recent work in this direction, we allow the use of an arbitrary subset of rows of a circulant or Toeplitz matrix. Our recovery result predicts that the number of measurements needed to ensure sparse reconstruction by ℓ1-minimization with random partial circulant or Toeplitz matrices scales linearly in the sparsity, up to a log-factor in the ambient dimension. This represents a significant improvement over previous recovery results for such matrices. As the main tool for the proofs we use a new version of the non-commutative Khintchine inequality.
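A partial random circulant matrix is simple to construct, and its practical appeal is that applying it reduces to one FFT-based circular correlation. A toy sketch (my own, not the paper's code; the FFT identity used is the standard correlation theorem for real generators):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
c = rng.choice([-1.0, 1.0], size=n)              # Rademacher generator
C = np.stack([np.roll(c, i) for i in range(n)])  # full circulant: row i = c shifted by i
rows = [0, 2, 3, 7]                              # arbitrary subset of rows
Phi = C[rows]                                    # partial circulant measurement matrix

x = rng.standard_normal(n)
y = Phi @ x                                      # measurements by matrix multiply

# The same measurements via a single FFT-based circular correlation,
# which is what makes circulant sensing fast at large n.
y_fft = np.fft.ifft(np.conj(np.fft.fft(c)) * np.fft.fft(x)).real[rows]
```

Selecting `rows` arbitrarily, rather than as a contiguous block, is exactly the extra freedom the abstract highlights.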
Shifting Inequality and Recovery of Sparse Signals
 IEEE Transactions on Signal Processing
Cited by 28 (5 self)
Abstract:
In this paper, we present a concise and coherent analysis of the constrained ℓ1 minimization method for stable recovery of high-dimensional sparse signals in both the noiseless and noisy cases. The analysis is surprisingly simple and elementary, yet leads to strong results. In particular, it is shown that the sparse recovery problem can be solved via ℓ1 minimization under weaker conditions than what is known in the literature. A key technical tool is an elementary inequality, called the Shifting Inequality, which, for a given nonnegative decreasing sequence, bounds the ℓ2 norm of a subsequence in terms of the ℓ1 norm of another subsequence by shifting the elements to the upper end. Index Terms—ℓ1 minimization, restricted isometry property, shifting inequality, sparse recovery.
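The flavor of such bounds can be seen in a simpler classical inequality of the same type, which CS proofs use routinely: for a nonnegative nonincreasing sequence, the ℓ2 norm of the k terms after the first k is at most the ℓ1 norm of the first k terms divided by √k (each trailing term is at most the average of the leading block). A quick numeric check of that simpler bound, not of the paper's Shifting Inequality itself:

```python
import numpy as np

rng = np.random.default_rng(4)
k = 5
a = np.sort(rng.random(2 * k))[::-1]   # nonnegative, nonincreasing sequence
lhs = np.linalg.norm(a[k:])            # l2 norm of the trailing k-block
rhs = np.sum(a[:k]) / np.sqrt(k)       # l1 norm of the leading k-block / sqrt(k)
```

The Shifting Inequality sharpens this kind of block comparison by letting the two subsequences overlap after a shift, which is what yields the weaker recovery conditions claimed above.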
Sparse Recovery from Combined Fusion Frame Measurements
 IEEE Trans. Inform. Theory
Cited by 23 (10 self)
Abstract:
Sparse representations have emerged as a powerful tool in signal and information processing, culminating in the success of new acquisition and processing techniques such as Compressed Sensing (CS). Fusion frames are rich new signal representation methods that use collections of subspaces instead of vectors to represent signals. This work combines these fields to introduce a new sparsity model for fusion frames. Signals that are sparse under the new model can be compressively sampled and uniquely reconstructed in ways similar to sparse signals using standard CS. The combination provides a promising new set of mathematical tools and signal models useful in a variety of applications. With the new model, a sparse signal has energy in very few of the subspaces of the fusion frame, although it does not need to be sparse within each of the subspaces it occupies. This sparsity model is captured using a mixed ℓ1/ℓ2 norm for fusion frames. A signal sparse in a fusion frame can be sampled using very few random projections and exactly reconstructed using a convex optimization that minimizes this mixed ℓ1/ℓ2 norm. The provided sampling conditions generalize the coherence and RIP conditions used in standard CS theory and are shown to be sufficient to guarantee sparse recovery of any signal sparse in our model. Moreover, an average-case analysis is provided using a probability model on the sparse signal, showing that under very mild conditions the probability of recovery failure decays exponentially with increasing dimension of the subspaces.
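The mixed ℓ1/ℓ2 norm referred to above is ℓ1 across blocks (subspaces) and ℓ2 within each block, so it penalizes the number of active subspaces rather than the number of nonzero coefficients. A minimal sketch with an invented block structure (not the paper's fusion-frame machinery):

```python
import numpy as np

def mixed_l1_l2(x, blocks):
    """Mixed l1/l2 norm: sum over blocks of the l2 norm of the
    coefficients in each block (l1 across blocks, l2 within)."""
    return sum(np.linalg.norm(x[list(b)]) for b in blocks)

x = np.zeros(12)
x[3:6] = [1.0, 2.0, 2.0]                     # all energy in one block,
                                             # dense within that block
blocks = [range(0, 3), range(3, 6), range(6, 9), range(9, 12)]
val = mixed_l1_l2(x, blocks)                 # sqrt(1 + 4 + 4) = 3
```

Note that `x` here is 3-sparse in the usual sense but only 1-sparse at the block level, which is exactly the distinction the fusion-frame sparsity model makes.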
Precise Undersampling Theorems
Cited by 17 (2 self)
Abstract:
Undersampling theorems state that we may gather far fewer samples than the usual sampling theorem requires while still exactly reconstructing the object of interest, provided the object obeys a sparsity condition, the samples measure appropriate linear combinations of signal values, and we reconstruct with a particular nonlinear procedure. While there are many ways to crudely demonstrate such undersampling phenomena, we know of only one approach which precisely quantifies the true sparsity–undersampling tradeoff curve of standard algorithms and standard compressed sensing matrices. That approach, based on combinatorial geometry, predicts the exact location in the sparsity–undersampling domain where standard algorithms exhibit phase transitions in performance. We review the phase transition approach here and describe the broad range of cases where it applies. We also mention exceptions and state challenge problems for future research. Sample result: one can efficiently reconstruct a k-sparse signal of length N from n measurements, provided n ≥ 2k · log(N/n), for (k, n, N) large and k ≪ N.
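The sample result's bound is implicit in n (it appears on both sides), so a concrete measurement count has to be solved for numerically. A small script doing that for the quoted formula (my own illustration, not the paper's code):

```python
import math

def measurements_needed(k, N):
    """Smallest n satisfying n >= 2*k*log(N/n), found by direct scan;
    the left side grows and the right side shrinks in n, so the first
    hit is the minimum."""
    for n in range(1, N + 1):
        if n >= 2 * k * math.log(N / n):
            return n
    return N

n = measurements_needed(k=10, N=10000)  # e.g. a 10-sparse signal of length 10^4
```

For these values the scan returns n = 94, i.e. roughly one hundred measurements suffice for a 10-sparse signal of length 10,000 in the asymptotic regime the theorem describes.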
Compressive Estimation of Doubly Selective Channels: Exploiting Channel Sparsity to Improve Spectral Efficiency in Multicarrier Transmissions
Cited by 17 (1 self)
Abstract:
We consider the estimation of doubly selective wireless channels within pulse-shaping multicarrier systems (which include OFDM systems as a special case). A pilot-assisted channel estimation technique using the methodology of compressed sensing (CS) is proposed. By exploiting a channel's delay–Doppler sparsity, CS-based channel estimation allows an increase in spectral efficiency through a reduction of the number of pilot symbols that have to be transmitted. We also present an extension of our basic channel estimator that employs a sparsity-improving basis expansion. We propose a framework for optimizing the basis and an iterative approximate basis optimization algorithm. Simulation results using three different CS recovery algorithms demonstrate significant performance gains (in terms of improved estimation accuracy or a reduced number of pilots) relative to conventional least-squares estimation, as well as substantial advantages of using an optimized basis.
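Stripped of pulse shaping and Doppler spread, the core of the pilot-based measurement model is a partial DFT acting on a delay-sparse channel: observing the frequency response on a few pilot tones gives an underdetermined CS system. A toy sketch with invented dimensions (not the paper's system model):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64                                          # subcarriers = delay taps
h = np.zeros(n, dtype=complex)
h[[2, 9]] = [1.0, 0.5j]                         # delay-sparse channel: 2 taps
F = np.fft.fft(np.eye(n))                       # DFT matrix (row j of F@h is fft(h)[j])
pilots = rng.choice(n, size=16, replace=False)  # few random pilot tones
y = F[pilots] @ h                               # pilot-tone observations
```

Recovering `h` from `y` is then a standard sparse-recovery problem over the partial DFT matrix `F[pilots]`, to which any of the CS algorithms compared in the paper could be applied; fewer pilots means more subcarriers left free for data, which is the spectral-efficiency gain described above.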
Sparse unmixing of hyperspectral data
 IEEE Transactions on Geoscience and Remote Sensing
Cited by 15 (5 self)
Abstract:
Linear spectral unmixing is a popular tool in remotely sensed hyperspectral data interpretation. It aims at estimating the fractional abundances of pure spectral signatures (also called endmembers) in each mixed pixel collected by an imaging spectrometer. In many situations, the identification of the endmember signatures in the original data set may be challenging due to insufficient spatial resolution, mixtures happening at different scales, and the unavailability of completely pure spectral signatures in the scene. However, the unmixing problem can also be approached in a semi-supervised fashion, i.e., by assuming that the observed image signatures can be expressed as linear combinations of a number of pure spectral signatures known in advance (e.g., spectra collected on the ground by a field spectroradiometer). Unmixing then amounts to finding the optimal subset of signatures in a (potentially very large) spectral library that can best model …
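With a known spectral library and nonnegative abundances, the simplest semi-supervised unmixing step is a nonnegative least-squares fit of each pixel against the library. This is a synthetic toy (invented library and pixel), not the paper's sparse-unmixing algorithm, which additionally promotes sparsity over the library:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)
bands, lib_size = 50, 12
library = np.abs(rng.standard_normal((bands, lib_size)))  # synthetic spectral library
abund_true = np.zeros(lib_size)
abund_true[[4, 9]] = [0.7, 0.3]                # pixel mixes 2 library signatures
pixel = library @ abund_true                   # observed mixed-pixel spectrum

# Nonnegative least squares: min ||library @ a - pixel||_2  s.t.  a >= 0
abund, residual = nnls(library, pixel)
```

In this noiseless toy the fit recovers the two true abundances exactly; sparse unmixing becomes necessary when the library is much larger than the number of bands and NNLS alone no longer selects the right few signatures.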
New bounds for restricted isometry constants
 LINGCHEN KONG, LEVENT TUNÇEL, NAIHUA XIU
2010
Cited by 13 (2 self)
Abstract:
This paper discusses new bounds for restricted isometry constants in compressed sensing. Let Φ be an n × p real matrix and k a positive integer with k ≤ n. One of the main results of this paper shows that if the restricted isometry constant δk of Φ satisfies δk < 0.307, then k-sparse signals are guaranteed to be recovered exactly via ℓ1 minimization when no noise is present, and k-sparse signals can be estimated stably in the noisy case. It is also shown that the bound cannot be substantially improved. An explicit example is constructed in which δk = (k − 1)/(2k − 1) < 0.5, but it is impossible to recover certain k-sparse signals. Index Terms—Compressed sensing, ℓ1 minimization, restricted isometry property, sparse signal recovery.
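For tiny matrices the restricted isometry constant δk can be computed by brute force over all k-column submatrices, which makes the definition concrete: δk is the smallest δ with (1 − δ)‖x‖² ≤ ‖Φx‖² ≤ (1 + δ)‖x‖² for all k-sparse x. An illustrative sketch of my own (not from the paper; brute force is exponential in p and only viable at toy scale):

```python
import numpy as np
from itertools import combinations

def ric(A, k):
    """Restricted isometry constant delta_k by brute force: max deviation
    of the eigenvalues of the k-column Gram matrices from 1."""
    p = A.shape[1]
    delta = 0.0
    for S in combinations(range(p), k):
        cols = list(S)
        ev = np.linalg.eigvalsh(A[:, cols].T @ A[:, cols])  # ascending
        delta = max(delta, 1.0 - ev[0], ev[-1] - 1.0)
    return delta

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 8)) / np.sqrt(6)  # column-normalized-in-expectation Gaussian
d2 = ric(A, 2)                                 # delta_2 of this small matrix
```

Checking `d2 < 0.307` for k = 2 would verify the paper's sufficient condition on this particular matrix; a matrix with orthonormal columns has δk = 0 for every k ≤ n.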