Just Relax: Convex Programming Methods for Identifying Sparse Signals in Noise
, 2006
Abstract

Cited by 298 (1 self)
This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is little theory which guarantees that these algorithms can accurately and efficiently solve the problem. This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program. This approach is powerful because the optimization can be completed in polynomial time with standard scientific software. The paper provides general conditions which ensure that convex relaxation succeeds. As evidence of the broad impact of these results, the paper describes how convex relaxation can be used for several concrete signal recovery problems. It also describes applications to channel coding, linear regression, and numerical analysis.
Signal recovery from partial information via Orthogonal Matching Pursuit. Submitted to
 IEEE Trans. Inform. Theory
, 2005
Abstract

Cited by 149 (8 self)
This article demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results for OMP, which require O(m²) measurements. The new results for OMP are comparable with recent results for another algorithm called Basis Pursuit (BP). The OMP algorithm is much faster and much easier to implement, which makes it an attractive alternative to BP for signal recovery problems.
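The OMP loop is simple to state: correlate the columns with the current residual, select the best one, re-fit by least squares on the selected columns, and update the residual. A generic textbook rendering (not the authors' code; names and dimensions are illustrative):

```python
import numpy as np

def omp(Phi, y, m):
    """Orthogonal Matching Pursuit: greedily pick m columns of Phi to explain y."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(m):
        # Select the column most correlated with the current residual ...
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # ... then re-fit y on all selected columns by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```

Because the residual is kept orthogonal to the selected columns, the same column is never picked twice; the whole procedure is m small least-squares solves, which is the source of OMP's speed advantage over BP.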
Recovery algorithms for vector valued data with joint sparsity constraints
, 2006
Abstract

Cited by 71 (21 self)
Vector-valued data appearing in concrete applications often possess sparse expansions with respect to a preassigned frame for each vector component individually. Additionally, different components may also exhibit common sparsity patterns. Recently, sparsity measures were introduced that take such joint sparsity patterns into account, promoting coupling of nonvanishing components. These measures are typically constructed as weighted ℓ1 norms of componentwise ℓq norms of frame coefficients. We show how to compute solutions of linear inverse problems with such joint sparsity regularization constraints by fast thresholded Landweber algorithms. Next we discuss the adaptive choice of suitable weights appearing in the definition of the sparsity measures. The weights are interpreted as indicators of the sparsity pattern and are iteratively updated after each application of the thresholded Landweber algorithm. The resulting two-step algorithm is interpreted as a double-minimization scheme for a suitable target functional. We show its ℓ2-norm convergence. An implementable version of the algorithm is also formulated, and its norm convergence is proven. Numerical experiments in color image restoration are presented.
Iteratively reweighted least squares minimization for sparse recovery
 Comm. Pure Appl. Math
Abstract

Cited by 64 (5 self)
Under certain conditions (known as the Restricted Isometry Property, or RIP) on the m × N matrix Φ (where m < N), vectors x ∈ R^N that are sparse (i.e., have most of their entries equal to zero) can be recovered exactly from y := Φx, even though Φ⁻¹(y) is typically an (N − m)-dimensional hyperplane; in addition, x is then equal to the element of Φ⁻¹(y) of minimal ℓ1-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining x, as the limit of an Iteratively Reweighted Least Squares (IRLS) algorithm. The main step of this IRLS finds, for a given weight vector w, the element in Φ⁻¹(y) with smallest ℓ2(w)-norm. If x^(n) is the solution at iteration step n, then the new weight w^(n) is defined by w_i^(n) := …
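A minimal IRLS sketch under the setup above: each iteration solves the weighted least-squares problem on the affine space Φx = y in closed form, with weights built from the current iterate. The ε smoothing and its geometric decay schedule below are illustrative simplifications, not the paper's rule (which the truncated abstract does not fully specify):

```python
import numpy as np

def irls(Phi, y, n_iter=80):
    """IRLS: repeatedly solve a weighted least-squares problem on Phi x = y."""
    x = np.linalg.pinv(Phi) @ y          # start from the minimal-l2 solution
    eps = 1.0
    for _ in range(n_iter):
        d = np.sqrt(x**2 + eps**2)       # d_i = 1/w_i with w_i = (x_i^2 + eps^2)^(-1/2)
        # Closed-form minimizer of sum_i w_i x_i^2 subject to Phi x = y:
        #   x = D Phi^T (Phi D Phi^T)^(-1) y, where D = diag(d).
        x = d * (Phi.T @ np.linalg.solve((Phi * d) @ Phi.T, y))
        eps = max(0.8 * eps, 1e-9)       # simplified decreasing schedule
    return x
```

As ε shrinks, small entries receive ever larger weights and are driven to zero, so the weighted-ℓ2 iterates approach the minimal-ℓ1 element of Φ⁻¹(y) without ever running a linear program.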
Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints
 THE JOURNAL OF FOURIER ANALYSIS AND APPLICATIONS
, 2004
Abstract

Cited by 58 (10 self)
Regularization of ill-posed linear inverse problems via ℓ1 penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an ℓ1-penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implementation of ℓ1 constraints, using a gradient method with projection onto ℓ1-balls. The corresponding algorithm again uses iterative soft-thresholding, now with a variable thresholding parameter. We also propose accelerated versions of this iterative method, using ingredients of the (linear) steepest descent method. We prove convergence in norm for one of these projected gradient methods, with and without acceleration.
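Projection onto an ℓ1-ball is itself a soft-thresholding with a data-dependent threshold, which is why the projected gradient iteration again looks like iterative soft-thresholding with a variable parameter. A sketch using the standard sort-based projection, with illustrative problem sizes and radius `R`:

```python
import numpy as np

def project_l1_ball(v, R):
    """Euclidean projection onto {x : ||x||_1 <= R}: soft-thresholding with a
    threshold theta chosen (via sorting) so the result has l1-norm exactly R."""
    if np.abs(v).sum() <= R:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]         # magnitudes in decreasing order
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * idx > css - R)[0][-1]
    theta = (css[rho] - R) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def pg_l1(Phi, y, R, n_iter=1000):
    """Projected gradient for min 0.5*||Phi x - y||^2 subject to ||x||_1 <= R."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = project_l1_ball(x - step * Phi.T @ (Phi @ x - y), R)
    return x
```

Note the threshold theta changes from iteration to iteration as the gradient step lands at different points, in contrast to the fixed threshold of plain iterative soft-thresholding for the penalized formulation.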
Sparse recovery using sparse random matrices
, 2008
Abstract

Cited by 41 (4 self)
We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x from its lower-dimensional sketch Ax. A popular way of performing this recovery is by finding x# such that Ax = Ax# and ‖x#‖₁ is minimal. It is known that this approach “works” if A is a random dense matrix chosen from a proper distribution. In this paper, we investigate this procedure for the case where A is binary and very sparse. We show that, both in theory and in practice, sparse matrices are essentially as “good” as dense ones. At the same time, sparse binary matrices provide additional benefits, such as reduced encoding and decoding time.
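To illustrate the setup (not the paper's specific construction or decoder), one can draw a binary matrix with a fixed number of ones per column and run plain ℓ1 minimization as a linear program; `scipy.optimize.linprog` serves as the LP solver here, and all sizes are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def sparse_binary_matrix(m, n, d, rng):
    """Binary sketch matrix: each column has exactly d ones in random rows."""
    A = np.zeros((m, n))
    for j in range(n):
        A[rng.choice(m, size=d, replace=False), j] = 1.0
    return A

def l1_min(A, b):
    """min ||x||_1 subject to A x = b, as an LP via the split x = u - v, u, v >= 0."""
    n = A.shape[1]
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]
```

Because each column has only d nonzeros, computing the sketch Ax costs O(d) per signal entry rather than O(m), which is the encoding-time benefit the abstract refers to.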
Algorithmic linear dimension reduction in the ℓ1 norm for sparse vectors. Submitted for publication
, 2006
Abstract

Cited by 27 (8 self)
We can approximately recover a sparse signal with limited noise, i.e., a vector of length d with at least d − m zeros or near-zeros, using little more than m log(d) nonadaptive linear measurements rather than the d measurements needed to recover an arbitrary signal of length d. Several research communities are interested in techniques for measuring and recovering such signals, and a variety of approaches have been proposed. We focus on two important properties of such algorithms.
• Uniformity. A single measurement matrix should work simultaneously for all signals.
• Computational efficiency. The time to recover such an m-sparse signal should be close to the obvious lower bound, m log(d/m).
To date, algorithms for signal recovery that provide a uniform measurement matrix with approximately the optimal number of measurements, such as those first proposed by Donoho and his collaborators and, separately, by Candès and Tao, are based on linear programming and require time poly(d) instead of m polylog(d). On the other hand, fast decoding algorithms to date from the Theoretical Computer Science and Database communities fail with probability at least 1/poly(d), whereas we need failure probability no more than around 1/d^m to achieve a uniform failure guarantee. This paper develops a new method for recovering m-sparse signals that is simultaneously uniform …
A Simple Proof for Recoverability of ℓ1-Minimization: Go Over or Under
, 2005
Abstract

Cited by 22 (4 self)
It is well known by now that ℓ1 minimization can help recover sparse solutions to underdetermined linear equations or sparsely corrupted solutions to overdetermined equations, and that the two problems are equivalent under appropriate conditions. So far, almost all theoretical results have been obtained by studying the “underdetermined side” of the problem. In this note, we take a different approach from the “overdetermined side” and show that a recoverability result (with the best available order) follows almost immediately from an inequality of Garnaev and Gluskin. We also connect the dots with recoverability conditions obtained from different spaces.
Domain decomposition methods for linear inverse problems with sparsity constraints
, 2007
Abstract

Cited by 20 (6 self)
Quantities of interest appearing in concrete applications often possess sparse expansions with respect to a preassigned frame. Recently, sparsity measures were introduced that are typically constructed on the basis of weighted ℓ1 norms of frame coefficients. One can model the reconstruction of a sparse vector from noisy linear measurements as the minimization of a functional defined as the sum of the discrepancy with respect to the data and the weighted ℓ1-norm of suitable frame coefficients. Thresholded Landweber iterations have been proposed for the solution of this variational problem. Despite its simplicity, which makes it very attractive to users, this algorithm converges slowly. In this paper we investigate methods to significantly accelerate its convergence. We introduce and analyze sequential and parallel iterative algorithms based on alternating subspace corrections for the solution of the linear inverse problem with sparsity constraints. We prove their norm convergence to minimizers of the functional. We compare the computational cost and the behavior of these new algorithms with those of the thresholded Landweber iterations.
A negative result concerning explicit matrices with the restricted isometry property
, 2008
Abstract

Cited by 15 (0 self)
In this note, we prove that matrices whose entries are all 0 or 1 cannot achieve good performance with respect to the Restricted Isometry Property (RIP). Most currently known deterministic constructions of matrices satisfying the RIP fall into this category, and hence these constructions suffer from inherent limitations. In particular, we show that DeVore’s construction of matrices satisfying the RIP is close to optimal once we add the constraint that all entries of the matrix are 0 or 1.