Results 1–10 of 29
Stable recovery of sparse overcomplete representations in the presence of noise
IEEE Trans. Inform. Theory, 2006
Cited by 291 (20 self)
Abstract:
Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis pursuit and matching pursuit algorithms. Furthermore, it is shown that these methods result in sparse approximation of the noisy data that contains only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.
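To make the abstract's claim concrete, here is the general shape of the setup and the stability bound, written in standard notation (the symbols Φ, x₀, ε, and C are illustrative choices, not taken from the paper):

```latex
% Sketch of the stable-recovery setup; all symbols are illustrative.
% Noisy observation of a signal with a sparse representation x_0:
\[
  y = \Phi x_0 + z, \qquad \|z\|_2 \le \epsilon .
\]
% Optimal-sparsity (combinatorial) recovery from the noisy data:
\[
  \hat{x} \in \operatorname*{arg\,min}_{x} \|x\|_0
  \quad \text{subject to} \quad \|y - \Phi x\|_2 \le \epsilon .
\]
% Stability: for x_0 sufficiently sparse relative to the coherence of Phi,
\[
  \|\hat{x} - x_0\|_2 \le C\,\epsilon ,
\]
% with C depending on the dictionary and sparsity level, not on the noise.
```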
Basis Pursuit
1994
Cited by 119 (15 self)
Abstract:
The Time-Frequency and Time-Scale communities have recently developed an enormous number of overcomplete signal dictionaries, with wavelets, wavelet packets, cosine packets, Wilson bases, chirplets, warped bases, and hyperbolic cross bases being a few examples. Basis Pursuit is a technique for decomposing a signal into an "optimal" superposition of dictionary elements. The optimization criterion is the ℓ1 norm of the coefficients. The method has several advantages over Matching Pursuit and Best Ortho Basis, including superresolution and stability.

1 Introduction
Over the last five years or so, there has been an explosion of awareness of alternatives to traditional signal representations. Instead of just representing objects as superpositions of sinusoids (the traditional Fourier representation), we now have available alternate dictionaries, that is, signal representation schemes, of which the wavelet dictionary is only the most well-known. Wavelet dictionaries, Gabor dictionaries, Multiscale...
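Because the Basis Pursuit criterion is exactly an ℓ1 minimization subject to linear constraints, it can be posed as a linear program. A minimal sketch (not the authors' code; the dictionary Phi, the splitting a = u − v, and the toy dimensions are illustrative choices):

```python
# Basis pursuit sketch: solve  min ||a||_1  s.t.  Phi @ a == s
# as a linear program by splitting a = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, s):
    n, m = Phi.shape
    c = np.ones(2 * m)                    # sum(u) + sum(v) equals ||a||_1
    A_eq = np.hstack([Phi, -Phi])         # enforces Phi @ (u - v) == s
    res = linprog(c, A_eq=A_eq, b_eq=s, bounds=[(0, None)] * (2 * m))
    u, v = res.x[:m], res.x[m:]
    return u - v

# Toy example: a 2-sparse coefficient vector in a random 20x60 dictionary.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 60))
a_true = np.zeros(60)
a_true[[5, 40]] = [1.0, -2.0]
a_hat = basis_pursuit(Phi, Phi @ a_true)
print(np.max(np.abs(a_hat - a_true)))     # near zero: exact recovery
```

The ℓ1 objective is what distinguishes Basis Pursuit from the greedy Matching Pursuit mentioned above: it selects the decomposition by global optimization rather than sequential atom picking.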
A Sparse Signal Reconstruction Perspective for Source Localization With Sensor Arrays
M.S. thesis, Mass. Inst. Technol., 2003
Cited by 112 (4 self)
Abstract:
We present a source localization method based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold. We enforce sparsity by imposing penalties based on the ℓ1 norm. A number of recent theoretical results on the sparsifying properties of ℓ1 penalties justify this choice. Explicitly enforcing the sparsity of the representation is motivated by a desire to obtain a sharp estimate of the spatial spectrum that exhibits superresolution. We propose to use the singular value decomposition (SVD) of the data matrix to summarize multiple time or frequency samples. Our formulation leads to an optimization problem, which we solve efficiently in a second-order cone (SOC) programming framework by an interior point implementation. We propose a grid refinement method to mitigate the effects of limiting estimates to a grid of spatial locations, and introduce an automatic selection criterion for the regularization parameter involved in our approach. We demonstrate the effectiveness of the method on simulated data by plots of spatial spectra and by comparing the estimator variance to the Cramér–Rao bound (CRB). We observe that our approach has a number of advantages over other source localization techniques, including increased resolution; improved robustness to noise, to limitations in data quantity, and to correlation of the sources; and not requiring an accurate initialization.
Index Terms: direction-of-arrival estimation, overcomplete representation, sensor array processing, source localization, sparse representation, superresolution.
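The core construction, stripped of the SVD summary and the SOC solver, is an ℓ1-penalized fit against a steering-vector dictionary sampled on an angular grid. The sketch below substitutes a plain complex-valued iterative soft-thresholding loop for the paper's second-order cone program; the uniform-line-array geometry, the grid, and all parameter values are illustrative assumptions:

```python
# Sparse spatial-spectrum sketch (illustrative; the paper uses an SVD
# summary plus SOC programming, replaced here by complex-valued ISTA).
import numpy as np

def steering_matrix(n_sensors, grid_deg, spacing=0.5):
    """Uniform line array, element spacing in wavelengths (assumed geometry)."""
    theta = np.deg2rad(np.asarray(grid_deg))
    k = np.arange(n_sensors)[:, None]
    return np.exp(-2j * np.pi * spacing * k * np.sin(theta)[None, :])

def ista(A, y, lam, n_iter=2000):
    """Minimize ||y - A s||_2^2 + lam * ||s||_1 over complex s."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = s - A.conj().T @ (A @ s - y) / L
        mag = np.abs(g)
        s = np.where(mag > lam / L, (1 - lam / (L * mag + 1e-12)) * g, 0)
    return s

grid = np.arange(-90.0, 91.0, 1.0)          # candidate DOAs, 1-degree grid
A = steering_matrix(8, grid)
y = A[:, 60] + 0.5 * A[:, 120]              # two on-grid sources (-30, +30 deg)
s_hat = ista(A, y, lam=1.0)
print(grid[np.argsort(-np.abs(s_hat))[:2]])  # the two strongest directions
```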
Recovery algorithms for vector valued data with joint sparsity constraints
2006
Cited by 71 (21 self)
Abstract:
Vector-valued data appearing in concrete applications often possess sparse expansions with respect to a preassigned frame for each vector component individually. Additionally, different components may also exhibit common sparsity patterns. Recently, sparsity measures have been introduced that take such joint sparsity patterns into account, promoting coupling of non-vanishing components. These measures are typically constructed as weighted ℓ1 norms of componentwise ℓq norms of frame coefficients. We show how to compute solutions of linear inverse problems with such joint sparsity regularization constraints by fast thresholded Landweber algorithms. Next we discuss the adaptive choice of suitable weights appearing in the definition of the sparsity measures. The weights are interpreted as indicators of the sparsity pattern and are iteratively updated after each application of the thresholded Landweber algorithm. The resulting two-step algorithm is interpreted as a double-minimization scheme for a suitable target functional. We show its ℓ2-norm convergence. An implementable version of the algorithm is also formulated, and its norm convergence is proven. Numerical experiments in color image restoration are presented.
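For the common q = 2 case, the coupling penalty amounts to soft-thresholding the joint ℓ2 norm across channels. A minimal sketch of one thresholded Landweber scheme under these assumptions (names K, Y, tau are illustrative; the adaptive weight update described above is omitted):

```python
# Thresholded Landweber sketch with a joint-sparsity (ell_1 of ell_2)
# penalty; assumes the operator K is rescaled so that ||K|| < 1.
import numpy as np

def joint_soft_threshold(X, tau):
    """Rows of X hold one coefficient across all M channels; each row is
    shrunk jointly in its ell_2 norm, so channels keep a common support."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return np.maximum(1 - tau / np.maximum(norms, 1e-12), 0) * X

def thresholded_landweber(K, Y, tau, n_iter=200):
    """Approach a minimizer of ||K X - Y||_F^2 + 2 tau sum_i ||X[i, :]||_2."""
    X = np.zeros((K.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        X = joint_soft_threshold(X + K.T @ (Y - K @ X), tau)
    return X
```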
Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints
The Journal of Fourier Analysis and Applications, 2004
Cited by 58 (10 self)
Abstract:
Regularization of ill-posed linear inverse problems via ℓ1 penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an ℓ1-penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implementation of ℓ1 constraints, using a gradient method with projection onto ℓ1 balls. The corresponding algorithm again uses iterative soft-thresholding, now with a variable thresholding parameter. We also propose accelerated versions of this iterative method, using ingredients of the (linear) steepest descent method. We prove convergence in norm for one of these projected gradient methods, both with and without acceleration.
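A minimal sketch of the projection step and the resulting projected gradient iteration (the sort-based ℓ1-ball projection is a standard construction; the step size and names are illustrative, and the paper's accelerated variants are not reproduced):

```python
# Projected gradient sketch for  min ||K x - y||^2  s.t.  ||x||_1 <= R.
# The projection is itself a soft-thresholding with a data-dependent
# threshold theta, matching the "variable thresholding parameter" above.
import numpy as np

def project_l1_ball(x, R):
    """Euclidean projection of x onto the ell_1 ball of radius R."""
    if np.abs(x).sum() <= R:
        return x
    u = np.sort(np.abs(x))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u > (css - R) / np.arange(1, x.size + 1))[0][-1]
    theta = (css[rho] - R) / (rho + 1)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0)

def projected_gradient(K, y, R, step, n_iter=300):
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        x = project_l1_ball(x - step * K.T @ (K @ x - y), R)
    return x
```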
Iterative thresholding algorithms
Preprint, 2007. [Online]. Available: http://www.dsp.ece.rice.edu/cs
Cited by 26 (6 self)
Abstract:
This article provides a variational formulation for hard and firm thresholding. A related functional can be used to regularize inverse problems by sparsity constraints. We show that a damped hard- or firm-thresholded Landweber iteration converges to its minimizer. This provides an alternative to an algorithm recently studied by the authors. We prove stability of minimizers with respect to the parameters of the functional, and we establish its regularization properties by means of Γ-convergence. All investigations are done in the general setting of vector-valued (multichannel) data.
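For reference, the two nonlinearities in question (a minimal sketch; lam and mu are illustrative parameter names, with firm thresholding interpolating between soft and hard):

```python
# Hard and firm thresholding nonlinearities (sketch; lam < mu assumed).
import numpy as np

def hard_threshold(x, lam):
    """Keep entries with |x| > lam, zero the rest."""
    return np.where(np.abs(x) > lam, x, 0.0)

def firm_threshold(x, lam, mu):
    """Zero below lam, identity above mu, linear ramp in between; as
    mu -> lam it approaches hard thresholding, as mu -> inf, soft."""
    ramp = np.sign(x) * mu * (np.abs(x) - lam) / (mu - lam)
    return np.where(np.abs(x) >= mu, x, np.where(np.abs(x) <= lam, 0.0, ramp))
```

A damped iteration then interleaves such a nonlinearity with a Landweber step, e.g. x ← (1 − β) x + β · firm_threshold(x + K.T @ (y − K @ x), lam, mu); the paper's exact damping is not reproduced here.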
Regularization properties of Tikhonov regularization with sparsity constraints
Electron. Trans. Numer. Anal.
Cited by 12 (6 self)
Abstract:
In this paper, we investigate the regularization properties of Tikhonov regularization with a sparsity (or Besov) penalty for the inversion of nonlinear operator equations. We propose an a posteriori parameter choice rule that ensures convergence, in the norm used, as the data error goes to zero. We show that the method of surrogate functionals will at least reconstruct a critical point of the Tikhonov functional. Finally, we present some numerical results for a nonlinear Hammerstein equation.
Key words: inverse problems, sparsity.
AMS subject classifications: 65J15, 65J20, 65J22.
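The functional being analyzed has the following general shape (the symbols F, y^δ, α, w_λ, and p here are illustrative, not the paper's notation):

```latex
% Sparsity-penalized Tikhonov functional for a nonlinear operator F,
% with noisy data y^delta satisfying ||y - y^delta|| <= delta:
\[
  J_\alpha(x) \;=\; \bigl\|F(x) - y^{\delta}\bigr\|^2
  \;+\; \alpha \sum_{\lambda} w_\lambda
        \bigl|\langle x, \varphi_\lambda \rangle\bigr|^{p},
  \qquad 1 \le p \le 2 .
\]
% An a posteriori rule picks alpha = alpha(delta, y^delta) so that
% minimizers converge to a solution as delta -> 0.
```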
Simultaneously Sparse Solutions to Linear Inverse Problems with Multiple System Matrices and a Single Observation Vector
SIAM Journal on Scientific Computing, 2008
Cited by 6 (2 self)
Abstract:
A linear inverse problem is proposed that requires the determination of multiple unknown signal vectors. Each unknown vector passes through a different system matrix, and the results are added to yield a single observation vector. Given the matrices and the lone observation, the objective is to find a simultaneously sparse set of unknown vectors that solves the system. We refer to this as the multiple-system single-output (MSSO) simultaneous sparsity problem. This manuscript contrasts the MSSO problem with other simultaneous sparsity problems and conducts a thorough initial exploration of algorithms with which to solve it. Seven algorithms are formulated that approximately solve this NP-hard problem. Three greedy techniques are developed (matching pursuit, orthogonal matching pursuit, and least squares matching pursuit), along with four methods based on a convex relaxation (iteratively reweighted least squares, two forms of iterative shrinkage, and formulation as a second-order cone program). While deriving the algorithms, we prove that seeking a single sparse complex-valued vector is equivalent to seeking two simultaneously sparse real-valued vectors. In other words, single-vector sparse approximation of a complex vector readily maps to the MSSO problem, increasing the applicability of MSSO algorithms. The algorithms are evaluated across three experiments: the first and second involve sparsity...
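The complex-to-real equivalence mentioned at the end can be written out directly (the block notation here is an illustrative choice):

```latex
% A single complex system y = A x, with x in C^m, is the pair of coupled
% real systems
\[
  \begin{bmatrix} \operatorname{Re} y \\ \operatorname{Im} y \end{bmatrix}
  =
  \begin{bmatrix}
    \operatorname{Re} A & -\operatorname{Im} A \\
    \operatorname{Im} A & \operatorname{Re} A
  \end{bmatrix}
  \begin{bmatrix} \operatorname{Re} x \\ \operatorname{Im} x \end{bmatrix};
\]
% x_i = 0 exactly when (Re x_i, Im x_i) = (0, 0), so a sparse complex x
% is the same as jointly sparse real vectors Re x and Im x: an MSSO
% instance whose two unknown vectors share a common support.
```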
Nonlinear approximation theory on finite groups
1999
Cited by 3 (0 self)
Abstract:
Motivated by problems in signal recovery, we investigate the distribution of the energy of the Fourier transform of a positive function on a finite group. In particular, we are able to bound from below the fraction of energy contained in various subsets of the Fourier transform of a positive function defined on a finite group. Applications to signal recovery for positive functions, as well as to partial spectral analysis of data on finite groups, are also presented.
Iterative Methods for Image Reconstruction
2008
Cited by 3 (0 self)
Abstract:
These annotated slides were prepared by Jeff Fessler for attendees of the ISBI tutorial on statistical image reconstruction methods. The purpose of the annotation is to provide supplemental details, and particularly to provide extensive literature references for further study. For a fascinating history of tomography, see [1]. For broad coverage of image science, see [2]. For further references on image reconstruction, see review papers and chapters, e.g., [3–9].