Results 1–10 of 232
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
, 2007
"... A full-rank matrix A ∈ IR n×m with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combin ..."
Abstract
-
Cited by 427 (36 self)
- Add to MetaCart
(Show Context)
A full-rank matrix A ∈ ℝ^{n×m} with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems, to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical …
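As a concrete illustration of the ℓ1 relaxation this survey covers, the combinatorial sparsest-solution search can be relaxed to basis pursuit: minimize the ℓ1 norm of x subject to Ax = b, which is a linear program. A minimal sketch (assuming NumPy and SciPy; the split x = u − v is the standard LP reformulation, not code from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. Ax = b as a linear program.

    Split x = u - v with u, v >= 0, so ||x||_1 = sum(u) + sum(v)."""
    n, m = A.shape
    c = np.ones(2 * m)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])          # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
    u, v = res.x[:m], res.x[m:]
    return u - v

# Toy underdetermined system with a 2-sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[7, 31]] = [1.5, -2.0]
b = A @ x_true
x_hat = basis_pursuit(A, b)
print(np.flatnonzero(np.abs(x_hat) > 1e-6))  # expect [7, 31]
```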
Sparse Reconstruction by Separable Approximation
, 2007
"... Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing ..."
Abstract
-
Cited by 373 (38 self)
- Add to MetaCart
Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex, sparsity-inducing function. We propose iterative methods in which each step is an optimization subproblem involving a separable quadratic term (diagonal Hessian) plus the original sparsity-inducing term. Our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. In addition to solving the standard ℓ2 − ℓ1 case, our approach handles other problems, e.g., ℓp regularizers with p ≠ 1, or group-separable (GS) regularizers. Experiments with CS problems show that our approach provides state-of-the-art speed for the standard ℓ2 − ℓ1 problem, and is also efficient on problems with GS regularizers. Index Terms — sparse approximation, compressed sensing, optimization, reconstruction.
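In the standard ℓ2 − ℓ1 case with a diagonal Hessian αI, the separable subproblem has a closed form: component-wise soft thresholding. A minimal iterative-shrinkage sketch of that subproblem structure (plain ISTA with a fixed step, not the authors' SpaRSA algorithm with its adaptive step choices; NumPy assumed):

```python
import numpy as np

def soft_threshold(z, t):
    """Prox of t*||.||_1: the separable subproblem's closed-form solution."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, tau, n_iter=500):
    """Minimize 0.5*||Ax - b||_2^2 + tau*||x||_1 by iterative shrinkage.

    Each step solves a separable quadratic (alpha*I Hessian) plus the
    l1 term, which reduces to component-wise soft thresholding."""
    alpha = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / alpha, tau / alpha)
    return x
```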
Algorithms for simultaneous sparse approximation. Part II: Convex relaxation
, 2004
"... Abstract. A simultaneous sparse approximation problem requests a good approximation of several input signals at once using different linear combinations of the same elementary signals. At the same time, the problem balances the error in approximation against the total number of elementary signals th ..."
Abstract
-
Cited by 366 (5 self)
- Add to MetaCart
A simultaneous sparse approximation problem requests a good approximation of several input signals at once using different linear combinations of the same elementary signals. At the same time, the problem balances the error in approximation against the total number of elementary signals that participate. These elementary signals typically model coherent structures in the input signals, and they are chosen from a large, linearly dependent collection. The first part of this paper proposes a greedy pursuit algorithm, called Simultaneous Orthogonal Matching Pursuit (S-OMP), for simultaneous sparse approximation. Then it presents some numerical experiments that demonstrate how a sparse model for the input signals can be identified more reliably given several input signals. Afterward, the paper proves that the S-OMP algorithm can compute provably good solutions to several simultaneous sparse approximation problems. The second part of the paper develops another algorithmic approach called convex relaxation, and it provides theoretical results on the performance of convex relaxation for simultaneous sparse approximation. Key words and phrases: greedy algorithms, Orthogonal Matching Pursuit, multiple measurement vectors, simultaneous sparse approximation.
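A sketch of the greedy selection rule from the paper's first part: at each step, pick the atom with the largest total correlation against all residuals, then re-fit every signal on the selected support (NumPy assumed; the exact scoring norm and stopping rule in the paper may differ):

```python
import numpy as np

def somp(Phi, Y, k):
    """Simultaneous OMP sketch: greedily pick k atoms shared by all signals.

    Phi: (n, m) dictionary, Y: (n, L) input signals, k: number of atoms."""
    support, R = [], Y.copy()
    for _ in range(k):
        # Score each atom by its total correlation with all residuals.
        scores = np.abs(Phi.T @ R).sum(axis=1)
        scores[support] = -np.inf        # never reselect an atom
        support.append(int(np.argmax(scores)))
        # Re-fit all signals on the current support, then update residuals.
        X_s, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        R = Y - Phi[:, support] @ X_s
    return support, X_s
```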
Robust Recovery of Signals From a Structured Union of Subspaces
, 2008
"... Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structu ..."
Abstract
-
Cited by 221 (47 self)
- Add to MetaCart
(Show Context)
Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this paper we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modelled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose non-zero elements appear in fixed blocks. We then propose a mixed ℓ2/ℓ1 program for block sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on RIP we also prove stability of our approach in the presence of noise and modeling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.
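The mixed ℓ2/ℓ1 penalty is separable over the fixed blocks, so its proximal operator is block-wise soft thresholding. A penalized proximal-gradient sketch of that structure (a hedged stand-in, not the paper's exact constrained program; NumPy assumed):

```python
import numpy as np

def block_soft_threshold(z, blocks, t):
    """Prox of t * sum_b ||z[b]||_2, the mixed l2/l1 penalty on fixed blocks.

    blocks: list of index arrays partitioning the coefficients."""
    out = np.zeros_like(z)
    for b in blocks:
        nrm = np.linalg.norm(z[b])
        if nrm > t:                       # shrink the whole block jointly;
            out[b] = (1 - t / nrm) * z[b] # blocks with norm below t are zeroed
    return out

def block_ista(A, y, blocks, tau, n_iter=500):
    """Proximal-gradient sketch for 0.5*||Ax - y||^2 + tau * sum_b ||x_b||_2."""
    alpha = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = block_soft_threshold(x - grad / alpha, blocks, tau / alpha)
    return x
```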
Theoretical results on sparse representations of multiple-measurement vectors
- IEEE Trans. Signal Process.
, 2006
"... Abstract — Multiple measurement vector (MMV) is a relatively new problem in sparse representations. Efficient methods have been proposed. Considering many theoretical results that are available in a simple case – single measure vector (SMV) – the theoretical analysis regarding MMV is lacking. In th ..."
Abstract
-
Cited by 147 (2 self)
- Add to MetaCart
(Show Context)
Multiple measurement vector (MMV) recovery is a relatively new problem in sparse representations, and efficient methods have been proposed for it. Yet, compared with the many theoretical results available in the simpler single measurement vector (SMV) case, theoretical analysis of MMV is lacking. In this paper, some known results for SMV are generalized to MMV. Some of these new results take advantage of the additional information available in the MMV formulation. We consider uniqueness under both an ℓ0-norm-like criterion and an ℓ1-norm-like criterion. The consequent equivalence between the ℓ0-norm approach and the ℓ1-norm approach indicates a computationally efficient way of finding the sparsest representation in an overcomplete dictionary. For greedy algorithms, it is proven that under certain conditions, orthogonal matching pursuit (OMP) can find the sparsest representation of an MMV with computational efficiency, just as in SMV. Simulations show that the predictions made by the proved theorems tend to be very conservative; this is consistent with some recent theoretical advances in probability, and the connections will be discussed.
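For reference, the two row-wise criteria mentioned above can be computed directly: the ℓ0-like criterion counts the nonzero rows of the solution matrix, and a common ℓ1-like surrogate sums the ℓ2 norms of the rows (the paper's exact inner norm may differ; NumPy assumed):

```python
import numpy as np

def row_l0(X, tol=1e-10):
    """l0-like MMV criterion: number of rows of X with any nonzero entry."""
    return int((np.linalg.norm(X, axis=1) > tol).sum())

def row_l1(X):
    """l1-like MMV criterion: sum of row norms, the convex surrogate."""
    return np.linalg.norm(X, axis=1).sum()
```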
Structured compressed sensing: From theory to applications
- IEEE Trans. Signal Process.
, 2011
"... Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard ..."
Abstract
-
Cited by 104 (16 self)
- Add to MetaCart
(Show Context)
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field, and as a reference that attempts to put some of the existing ideas in the perspective of practical applications.
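One example of the structured measurement operators such overviews discuss is subsampled random convolution, which can be applied in O(m log m) via the FFT rather than stored as a dense random matrix. A generic sketch (an illustration of the idea, not a specific architecture from the paper; NumPy assumed):

```python
import numpy as np

def random_convolution_operator(n, m, rng):
    """A structured CS operator sketch: subsampled random convolution.

    Circular convolution with a random filter, then keep n of m samples.
    Applying it costs O(m log m) via the FFT instead of a dense matrix."""
    h_hat = np.fft.fft(rng.standard_normal(m))   # random filter (freq domain)
    keep = rng.choice(m, size=n, replace=False)  # random subsampling pattern
    def apply(x):
        return np.real(np.fft.ifft(h_hat * np.fft.fft(x)))[keep]
    return apply
```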
An empirical Bayesian strategy for solving the simultaneous sparse approximation problem
- IEEE Trans. Signal Process.
, 2007
"... Abstract—Given a large overcomplete dictionary of basis vectors, the goal is to simultaneously represent 1 signal vectors using coefficient expansions marked by a common sparsity profile. This generalizes the standard sparse representation problem to the case where multiple responses exist that were ..."
Abstract
-
Cited by 91 (16 self)
- Add to MetaCart
(Show Context)
Given a large overcomplete dictionary of basis vectors, the goal is to simultaneously represent L > 1 signal vectors using coefficient expansions marked by a common sparsity profile. This generalizes the standard sparse representation problem to the case where multiple responses exist that were putatively generated by the same small subset of features. Ideally, the associated sparse generating weights should be recovered, which can have physical significance in many applications (e.g., source localization). The generic solution to this problem is intractable and, therefore, approximate procedures are sought. Based on the concept of automatic relevance determination, this paper uses an empirical Bayesian prior to estimate a convenient posterior distribution over candidate basis vectors. This particular approximation enforces a common sparsity profile and consistently places its prominent posterior mass on the appropriate region of weight-space necessary for simultaneous sparse recovery. The resultant algorithm is then compared with multiple-response extensions of matching pursuit, basis pursuit, FOCUSS, and Jeffreys prior-based Bayesian methods, finding that it often outperforms the others. Additional motivation for this particular choice of cost function is also provided, including the analysis of global and local minima and a variational derivation that highlights the similarities and differences between the proposed algorithm and previous approaches. Index Terms—Automatic relevance determination, empirical Bayes, multiple response models, simultaneous sparse approximation, sparse Bayesian learning, variable selection.
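The automatic relevance determination idea attaches one prior variance γ_i to each basis vector, shared across all responses, and iterates posterior and variance updates until most γ_i shrink to zero, which enforces the common sparsity profile. An EM-style sketch in that spirit (a simplified stand-in, not a faithful reimplementation of the paper's algorithm; NumPy assumed):

```python
import numpy as np

def msbl(Phi, Y, lam, n_iter=100):
    """EM sketch of sparse Bayesian learning with one variance per atom.

    Phi: (n, m) dictionary, Y: (n, L) responses, lam: noise variance.
    Atoms whose gamma_i shrink to zero are pruned from the model."""
    n, m = Phi.shape
    gamma = np.ones(m)
    for _ in range(n_iter):
        G = Phi * gamma                        # Phi @ diag(gamma), shape (n, m)
        Sigma_y = lam * np.eye(n) + G @ Phi.T  # marginal covariance per column
        K = np.linalg.solve(Sigma_y, G).T      # gamma_i * phi_i^T Sigma_y^{-1}
        M = K @ Y                              # posterior means of X, (m, L)
        # Posterior variances: gamma_i - gamma_i^2 phi_i^T Sigma_y^{-1} phi_i
        post_var = gamma - np.sum(K.T * G, axis=0)
        gamma = (M ** 2).mean(axis=1) + post_var
    return M, gamma
```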
A new view of automatic relevance determination
- In NIPS 20
, 2008
"... Automatic relevance determination (ARD) and the closely-related sparse Bayesian learning (SBL) framework are effective tools for pruning large numbers of irrelevant features leading to a sparse explanatory subset. However, popular update rules used for ARD are either difficult to extend to more gene ..."
Abstract
-
Cited by 70 (9 self)
- Add to MetaCart
(Show Context)
Automatic relevance determination (ARD) and the closely related sparse Bayesian learning (SBL) framework are effective tools for pruning large numbers of irrelevant features, leading to a sparse explanatory subset. However, popular update rules used for ARD are either difficult to extend to more general problems of interest or are characterized by non-ideal convergence properties. Moreover, it remains unclear exactly how ARD relates to more traditional MAP estimation-based methods for learning sparse representations (e.g., the Lasso). This paper furnishes an alternative means of expressing the ARD cost function using auxiliary functions that naturally addresses both of these issues. First, the proposed reformulation of ARD can naturally be optimized by solving a series of re-weighted ℓ1 problems. The result is an efficient, extensible algorithm that can be implemented using standard convex programming toolboxes and is guaranteed to converge to a local minimum (or saddle point). Second, the analysis reveals that ARD is exactly equivalent to performing standard MAP estimation in weight space using a particular feature- and noise-dependent, non-factorial weight prior. We then demonstrate that this implicit prior maintains several desirable advantages over conventional priors with respect to feature selection. Overall, these results suggest alternative cost functions and update procedures for selecting features and promoting sparse solutions in a variety of general situations. In particular, the methodology readily extends to handle problems such as non-negative sparse coding and covariance component estimation.
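The practical upshot of the reformulation, solving a series of re-weighted ℓ1 problems, can be sketched as an outer loop that re-weights a lasso solve. The 1/(|x_i| + ε) weight rule below is a common generic choice, not the ARD-derived weighting of the paper (NumPy assumed):

```python
import numpy as np

def reweighted_l1(A, y, tau, eps=1e-3, n_outer=5, n_inner=300):
    """Sketch of solving a series of re-weighted l1 problems.

    Each outer pass solves a weighted lasso by iterative shrinkage; the
    weights then shrink toward atoms active in the previous solution."""
    m = A.shape[1]
    alpha = np.linalg.norm(A, 2) ** 2
    w = np.ones(m)
    x = np.zeros(m)
    for _ in range(n_outer):
        for _ in range(n_inner):          # weighted ISTA inner loop
            z = x - A.T @ (A @ x - y) / alpha
            x = np.sign(z) * np.maximum(np.abs(z) - tau * w / alpha, 0.0)
        w = 1.0 / (np.abs(x) + eps)       # re-weight for the next pass
    return x
```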
Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning
- IEEE J. Sel. Topics Signal Process.
, 2011
"... Abstract — We address the sparse signal recovery problem in the context of multiple measurement vectors (MMV) when elements in each nonzero row of the solution matrix are temporally correlated. Existing algorithms do not consider such temporal correlation and thus their performance degrades signific ..."
Abstract
-
Cited by 59 (15 self)
- Add to MetaCart
(Show Context)
We address the sparse signal recovery problem in the context of multiple measurement vectors (MMV) when the elements in each nonzero row of the solution matrix are temporally correlated. Existing algorithms do not consider such temporal correlation, and thus their performance degrades significantly in its presence. In this work, we propose a block sparse Bayesian learning framework which models the temporal correlation. We derive two sparse Bayesian learning (SBL) algorithms, which have superior recovery performance compared to existing algorithms, especially in the presence of high temporal correlation. Furthermore, our algorithms are better at handling highly underdetermined problems and require less row-sparsity of the solution matrix. We also provide an analysis of the global and local minima of their cost function, and show that the SBL cost function has the very desirable property that its global minimum is at the sparsest solution to the MMV problem. Extensive experiments also provide some interesting results that motivate future theoretical research on the MMV model.
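To make the setting concrete, the sketch below generates an MMV problem whose nonzero rows follow an AR(1) temporal correlation with coefficient β; β near 1 is the high-correlation regime in which the abstract says existing algorithms degrade. This illustrates the problem setup under assumed toy parameters, not the proposed SBL algorithms (NumPy assumed):

```python
import numpy as np

def make_correlated_mmv(n=30, m=100, k=5, L=4, beta=0.9, rng=None):
    """Generate an MMV problem with temporally correlated nonzero rows.

    Each nonzero row of X is drawn from an AR(1) Gaussian with correlation
    beta between consecutive measurement vectors."""
    rng = rng or np.random.default_rng(0)
    # Toeplitz AR(1) covariance: B[s, t] = beta**|s - t|
    B = beta ** np.abs(np.subtract.outer(np.arange(L), np.arange(L)))
    chol = np.linalg.cholesky(B)
    X = np.zeros((m, L))
    rows = rng.choice(m, size=k, replace=False)
    X[rows] = rng.standard_normal((k, L)) @ chol.T   # correlated row entries
    Phi = rng.standard_normal((n, m)) / np.sqrt(n)
    return Phi, Phi @ X, X
```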
Denoising by Sparse Approximation: Error Bounds Based on Rate-Distortion Theory
, 2006
"... If a signal x is known to have a sparse representation with respect to a frame, it can be estimated from a noise-corrupted observation y by finding the best sparse approximation to y. Removing noise in this manner depends on the frame efficiently representing the signal while it inefficiently repres ..."
Abstract
-
Cited by 44 (7 self)
- Add to MetaCart
If a signal x is known to have a sparse representation with respect to a frame, it can be estimated from a noise-corrupted observation y by finding the best sparse approximation to y. Removing noise in this manner depends on the frame efficiently representing the signal while it inefficiently represents the noise. The mean-squared error (MSE) of this denoising scheme and the probability that the estimate has the same sparsity pattern as the original signal are analyzed. First, an MSE bound is given that depends on a new bound on approximating a Gaussian signal as a linear combination of elements of an overcomplete dictionary. Further analyses cover dictionaries generated randomly according to a spherically symmetric distribution and signals expressible with single dictionary elements. Easily computed approximations for the probability of selecting the correct dictionary element and for the MSE are given. Asymptotic expressions reveal a critical input signal-to-noise ratio for signal recovery.
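For the single-dictionary-element case analyzed in the abstract, the estimator reduces to picking the most correlated unit-norm atom and projecting onto it. The toy Monte-Carlo below estimates the probability of selecting the correct atom and the MSE (illustrative numbers under assumed parameters, not the paper's asymptotic regime; NumPy assumed):

```python
import numpy as np

def one_term_denoise(D, y):
    """Denoise y = x + noise when x is a (scaled) single dictionary element.

    Select the unit-norm atom most correlated with y and project onto it."""
    j = int(np.argmax(np.abs(D.T @ y)))   # most correlated atom
    return (D[:, j] @ y) * D[:, j], j

# Small Monte-Carlo check of the MSE and the probability of picking the
# correct atom (toy dimensions only).
rng = np.random.default_rng(1)
n, m, sigma, trials = 32, 128, 0.1, 2000
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
hits, mse = 0, 0.0
for _ in range(trials):
    x = D[:, 0]                           # true signal: the first atom
    y = x + sigma * rng.standard_normal(n)
    x_hat, j = one_term_denoise(D, y)
    hits += (j == 0)
    mse += np.sum((x_hat - x) ** 2) / trials
print(f"P(correct atom) ~ {hits / trials:.3f}, MSE ~ {mse:.4f}")
```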