Results 1–10 of 67
Sparse signal reconstruction from limited data using FOCUSS: A reweighted minimum norm algorithm
IEEE Trans. Signal Processing, 1997
"... Abstract—We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), t ..."
Abstract

Cited by 215 (12 self)
Abstract—We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable, with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.
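The reweighted iteration the abstract describes is compact enough to sketch. Below is a minimal FOCUSS-style loop in Python/NumPy, assuming a generic underdetermined system Ax = y; the weight choice, iteration count, and pruning threshold are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

def focuss(A, y, n_iter=20, eps=1e-8):
    """Minimal FOCUSS-style sketch: reweighted minimum-norm iterations.

    Each step solves a weighted minimum-norm problem whose weights come
    from the preceding iterate, concentrating energy on a few entries.
    Illustrative parameters; not the authors' code.
    """
    x = np.linalg.pinv(A) @ y                 # low-resolution initial estimate
    for _ in range(n_iter):
        W = np.diag(np.abs(x))                # weights from the preceding solution
        x = W @ np.linalg.pinv(A @ W) @ y     # weighted minimum-norm solution
        x[np.abs(x) < eps] = 0.0              # prune negligible entries
    return x
```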
Algorithms for simultaneous sparse approximation. Part II: Convex relaxation
2004
"... Abstract. A simultaneous sparse approximation problem requests a good approximation of several input signals at once using different linear combinations of the same elementary signals. At the same time, the problem balances the error in approximation against the total number of elementary signals th ..."
Abstract

Cited by 205 (3 self)
Abstract. A simultaneous sparse approximation problem requests a good approximation of several input signals at once using different linear combinations of the same elementary signals. At the same time, the problem balances the error in approximation against the total number of elementary signals that participate. These elementary signals typically model coherent structures in the input signals, and they are chosen from a large, linearly dependent collection. The first part of this paper proposes a greedy pursuit algorithm, called Simultaneous Orthogonal Matching Pursuit (SOMP), for simultaneous sparse approximation. Then it presents some numerical experiments that demonstrate how a sparse model for the input signals can be identified more reliably given several input signals. Afterward, the paper proves that the SOMP algorithm can compute provably good solutions to several simultaneous sparse approximation problems. The second part of the paper develops another algorithmic approach called convex relaxation, and it provides theoretical results on the performance of convex relaxation for simultaneous sparse approximation. Key words and phrases: greedy algorithms, Orthogonal Matching Pursuit, multiple measurement vectors, simultaneous sparse approximation.
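The greedy rule is the natural multi-signal analogue of OMP and is easy to sketch. A hedged Python/NumPy version follows, assuming a dictionary Phi with unit-norm columns, a signal matrix Y with one signal per column, and a fixed number of atoms k; the selection score (sum of absolute correlations) and the joint least-squares refit follow the usual SOMP template rather than the paper's exact pseudocode.

```python
import numpy as np

def somp(Phi, Y, k):
    """Sketch of Simultaneous Orthogonal Matching Pursuit (SOMP).

    Phi: (n, m) dictionary with unit-norm columns; Y: (n, s) matrix of
    input signals, one per column; k: number of atoms to select.
    """
    R = Y.copy()                              # residuals, one column per signal
    support = []
    for _ in range(k):
        # Pick the atom with the largest total correlation across signals.
        scores = np.sum(np.abs(Phi.T @ R), axis=1)
        scores[support] = -np.inf             # never reselect an atom
        support.append(int(np.argmax(scores)))
        # Jointly re-fit all signals on the current support.
        X_s, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        R = Y - Phi[:, support] @ X_s
    X = np.zeros((Phi.shape[1], Y.shape[1]))
    X[support] = X_s
    return X, support
```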
Iterative hard thresholding for compressed sensing
Appl. Comp. Harm. Anal., 2009
"... Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery probl ..."
Abstract

Cited by 136 (13 self)
Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near-optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper):
• It gives near-optimal error guarantees.
• It is robust to observation noise.
• It succeeds with a minimum number of observations.
• It can be used with any sampling operator for which the operator and its adjoint can be computed.
• The memory requirement is linear in the problem size.
• Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint.
• It requires a fixed number of iterations depending only on the logarithm of a form of signal-to-noise ratio of the signal.
• Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.
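The algorithm itself is a one-line iteration, which makes the properties above easy to connect to code. A minimal sketch, assuming a sampling matrix Phi scaled so its spectral norm is at most one (needed for the unit step size to be stable) and a known sparsity level s; the iteration count is an illustrative assumption.

```python
import numpy as np

def iht(Phi, y, s, n_iter=100):
    """Sketch of iterative hard thresholding for CS recovery.

    Iterates x <- H_s(x + Phi^T (y - Phi x)), where H_s keeps the s
    largest-magnitude entries and zeroes the rest.
    """
    m = Phi.shape[1]
    x = np.zeros(m)
    for _ in range(n_iter):
        g = x + Phi.T @ (y - Phi @ x)         # gradient step on ||y - Phi x||^2
        idx = np.argsort(np.abs(g))[-s:]      # indices of the s largest entries
        x = np.zeros(m)
        x[idx] = g[idx]                       # hard threshold H_s
    return x
```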
Sparse solutions to linear inverse problems with multiple measurement vectors
IEEE Trans. Signal Processing, 2005
"... Abstract—We address the problem of finding sparse solutions to an underdetermined system of equations when there are multiple measurement vectors having the same, but unknown, sparsity structure. The single measurement sparse solution problem has been extensively studied in the past. Although known ..."
Abstract

Cited by 129 (10 self)
Abstract—We address the problem of finding sparse solutions to an underdetermined system of equations when there are multiple measurement vectors having the same, but unknown, sparsity structure. The single measurement sparse solution problem has been extensively studied in the past. Although known to be NP-hard, many single-measurement suboptimal algorithms have been formulated that have found utility in many different applications. Here, we consider in depth the extension of two classes of algorithms, Matching Pursuit (MP) and the FOCal Underdetermined System Solver (FOCUSS), to the multiple measurement case so that they may be used in applications such as neuromagnetic imaging, where multiple measurement vectors are available and solutions with a common sparsity structure must be computed. Cost functions appropriate to the multiple measurement problem are developed, and algorithms are derived based on their minimization. A simulation study is conducted on a test-case dictionary to show how the utilization of more than one measurement vector improves the performance of the MP and FOCUSS classes of algorithms, and their performances are compared.
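One common way to extend FOCUSS to multiple measurement vectors is to build the weights from row norms of the solution matrix, so that all columns share one sparsity pattern. A sketch along those lines, in the M-FOCUSS style; the exponent p, iteration count, and pruning threshold are illustrative assumptions rather than the paper's exact cost-function parameters.

```python
import numpy as np

def m_focuss(A, Y, n_iter=20, p=0.8, eps=1e-8):
    """Sketch of an MMV extension of FOCUSS (M-FOCUSS style).

    Weights come from the 2-norms of the rows of the current solution
    matrix X, so every measurement vector shares one sparsity pattern.
    """
    X = np.linalg.pinv(A) @ Y                        # minimum-norm initialization
    for _ in range(n_iter):
        row_norms = np.linalg.norm(X, axis=1)
        W = np.diag(row_norms ** (1 - p / 2))        # shared row weights
        X = W @ np.linalg.pinv(A @ W) @ Y            # weighted minimum-norm step
        X[np.linalg.norm(X, axis=1) < eps] = 0.0     # prune negligible rows
    return X
```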
An affine scaling methodology for best basis selection
IEEE Trans. Signal Processing, 1999
"... Abstract — A methodology is developed to derive algorithms for optimal basis selection by minimizing diversity measures proposed by Wickerhauser and Donoho. These measures include the pnormlike (`(p 1)) diversity measures and the Gaussian and Shannon entropies. The algorithm development methodolog ..."
Abstract

Cited by 78 (11 self)
Abstract—A methodology is developed to derive algorithms for optimal basis selection by minimizing diversity measures proposed by Wickerhauser and Donoho. These measures include the p-norm-like (ℓ(p≤1)) diversity measures and the Gaussian and Shannon entropies. The algorithm development methodology uses a factored representation for the gradient and involves successive relaxation of the Lagrangian necessary condition. This yields algorithms that are intimately related to the Affine Scaling Transformation (AST) based methods commonly employed by the interior point approach to nonlinear optimization. The algorithms minimizing the ℓ(p≤1) diversity measures are equivalent to a recently developed class of algorithms called the FOCal Underdetermined System Solver (FOCUSS). The general nature of the methodology provides a systematic approach for deriving this class of algorithms and a natural mechanism for extending them. It also facilitates a better understanding of the convergence behavior and a strengthening of the convergence results. The Gaussian entropy minimization algorithm is shown to be equivalent to a well-behaved p = 0 norm-like optimization algorithm. Computer experiments demonstrate that the p-norm-like and the Gaussian entropy algorithms perform well, converging to sparse solutions. The Shannon entropy algorithm produces solutions that are concentrated but are shown to not converge to a fully sparse solution.
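The factored-gradient construction leads to a very small update rule. Here is a sketch of one affine-scaling-style step for minimizing the ℓ(p≤1) diversity measure subject to Ax = y, using the weights W = diag(|x_i|^(1−p/2)) that this construction suggests; iterating the step reproduces a FOCUSS-class algorithm. The function names and the stand-alone diversity evaluator are illustrative.

```python
import numpy as np

def diversity_p(x, p):
    """p-norm-like diversity measure E_p(x) = sum_i |x_i|^p for 0 < p <= 1."""
    return np.sum(np.abs(x) ** p)

def ast_step(A, x, y, p):
    """One affine-scaling-style step toward min E_p(x) s.t. Ax = y.

    Uses the factored-gradient weights W = diag(|x_i|^(1 - p/2));
    p = 0 recovers the basic FOCUSS weighting W = diag(|x_i|).
    """
    W = np.diag(np.abs(x) ** (1 - p / 2))
    return W @ np.linalg.pinv(A @ W) @ y
```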
Sparse Bayesian learning for basis selection
IEEE Transactions on Signal Processing, 2004
"... Abstract—Sparse Bayesian learning (SBL) and specifically relevance vector machines have received much attention in the machine learning literature as a means of achieving parsimonious representations in the context of regression and classification. The methodology relies on a parameterized prior tha ..."
Abstract

Cited by 75 (5 self)
Abstract—Sparse Bayesian learning (SBL) and specifically relevance vector machines have received much attention in the machine learning literature as a means of achieving parsimonious representations in the context of regression and classification. The methodology relies on a parameterized prior that encourages models with few nonzero weights. In this paper, we adapt SBL to the signal processing problem of basis selection from overcomplete dictionaries, proving several results about the SBL cost function that elucidate its general behavior and provide solid theoretical justification for this application. Specifically, we show that SBL retains a desirable property of the ℓ0-norm diversity measure (i.e., the global minimum is achieved at the maximally sparse solution) while often possessing a more limited constellation of local minima. We also demonstrate that the local minima that do exist are achieved at sparse solutions. Later, we provide a novel interpretation of SBL that gives us valuable insight into why it is successful in producing sparse representations. Finally, we include simulation studies comparing sparse Bayesian learning with Basis Pursuit and the more recent FOCal Underdetermined System Solver (FOCUSS) class of basis selection algorithms. These results indicate that our theoretical insights translate directly into improved performance. Index Terms—Basis selection, diversity measures, linear inverse problems, sparse Bayesian learning, sparse representations.
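A standard way to realize SBL is evidence maximization via EM over per-weight prior variances. A minimal sketch under that interpretation, assuming a fixed noise variance sigma2 and a Gaussian prior w ~ N(0, diag(gamma)); the update gamma_i <- mu_i^2 + Sigma_ii is the textbook EM step, not necessarily the exact variant used in the paper.

```python
import numpy as np

def sbl(Phi, y, sigma2=1e-3, n_iter=50):
    """Sketch of sparse Bayesian learning by EM for basis selection.

    As hyperparameters gamma_i are driven toward zero, the posterior
    mean of the corresponding weights vanishes and those basis vectors
    drop out, yielding a sparse representation.
    """
    n, m = Phi.shape
    gamma = np.ones(m)
    for _ in range(n_iter):
        # Marginal covariance of y: sigma^2 I + Phi Gamma Phi^T.
        Sy = sigma2 * np.eye(n) + (Phi * gamma) @ Phi.T
        Syi = np.linalg.inv(Sy)
        mu = gamma * (Phi.T @ Syi @ y)                   # posterior mean of w
        # Diagonal of the posterior covariance via the Woodbury identity.
        Sigma_diag = gamma - gamma**2 * np.einsum('ij,jk,ki->i', Phi.T, Syi, Phi)
        gamma = mu**2 + Sigma_diag                       # EM hyperparameter update
    return mu, gamma
```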
Theoretical results on sparse representations of multiple-measurement vectors
IEEE Trans. Signal Process., 2006
"... Abstract — Multiple measurement vector (MMV) is a relatively new problem in sparse representations. Efficient methods have been proposed. Considering many theoretical results that are available in a simple case – single measure vector (SMV) – the theoretical analysis regarding MMV is lacking. In th ..."
Abstract

Cited by 67 (2 self)
Abstract—The multiple measurement vector (MMV) problem is a relatively new one in sparse representations, and efficient methods for it have been proposed. Yet in contrast with the many theoretical results available for the simpler single measurement vector (SMV) case, theoretical analysis of MMV is lacking. In this paper, some known results for SMV are generalized to MMV, and some of the new results take advantage of the additional information present in the MMV formulation. We consider uniqueness under both an ℓ0-norm-like criterion and an ℓ1-norm-like criterion. The consequent equivalence between the ℓ0-norm approach and the ℓ1-norm approach indicates a computationally efficient way of finding the sparsest representation in an overcomplete dictionary. For greedy algorithms, it is proven that under certain conditions, orthogonal matching pursuit (OMP) can find the sparsest representation of an MMV with computational efficiency, just as in SMV. Simulations show that the predictions made by the proved theorems tend to be very conservative; this is consistent with some recent theoretical advances in probability. The connections will be discussed.
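For intuition about the uniqueness results, a coherence-based check is easy to code. The sketch below lower-bounds spark(A) by 1 + 1/mu(A) and evaluates a row-sparsity level of the form (spark(A) − 1 + rank(Y))/2, below which an MMV solution is the unique sparsest one in results of this kind; the exact constants and conditions here are an assumption, so consult the paper for the precise statement.

```python
import numpy as np

def mmv_uniqueness_level(A, Y):
    """Sketch: a sufficient row-sparsity level for MMV uniqueness.

    spark(A) is NP-hard to compute, so we lower-bound it via the
    mutual coherence mu(A): spark(A) >= 1 + 1/mu(A).
    """
    An = A / np.linalg.norm(A, axis=0)       # unit-norm columns
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    mu = G.max()                             # mutual coherence
    spark_lb = 1.0 + 1.0 / mu                # coherence-based spark bound
    r = np.linalg.matrix_rank(Y)             # rank of the measurements helps
    return (spark_lb - 1.0 + r) / 2.0        # guaranteed-unique row sparsity
```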
Reduce and Boost: Recovering Arbitrary Sets of Jointly Sparse Vectors
2008
"... The rapid developing area of compressed sensing suggests that a sparse vector lying in a high dimensional space can be accurately and efficiently recovered from only a small set of nonadaptive linear measurements, under appropriate conditions on the measurement matrix. The vector model has been ext ..."
Abstract

Cited by 60 (35 self)
The rapidly developing area of compressed sensing suggests that a sparse vector lying in a high-dimensional space can be accurately and efficiently recovered from only a small set of non-adaptive linear measurements, under appropriate conditions on the measurement matrix. The vector model has been extended both theoretically and practically to a finite set of sparse vectors sharing a common sparsity pattern. In this paper, we treat a broader framework in which the goal is to recover a possibly infinite set of jointly sparse vectors. Extending existing algorithms to this model is difficult due to the infinite structure of the sparse vector set. Instead, we prove that the entire infinite set of sparse vectors can be recovered by solving a single, reduced-size finite-dimensional problem, corresponding to recovery of a finite set of sparse vectors. We then show that the problem can be further reduced to the basic model of a single sparse vector by randomly combining the measurements. Our approach is exact for both countable and uncountable sets, as it does not rely on discretization or heuristic techniques. To efficiently find the single sparse vector produced by the last reduction step, we suggest an empirical boosting strategy that improves the recovery ability of any given suboptimal method for recovering a sparse vector. Numerical experiments on random data demonstrate that when applied to infinite sets our strategy outperforms discretization techniques in terms of both run time and empirical recovery rate. In the finite model, our boosting algorithm has fast run time and a much higher recovery rate than known popular methods.
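The reduce-and-boost idea is simple to sketch for the finite case: collapse the MMV problem to a single SMV problem with a random combination of the measurement columns, solve it with any suboptimal SMV method, and accept the recovered support only if it explains all of Y; fresh random draws provide the boosting. The solver interface, acceptance test, and draw count below are illustrative assumptions.

```python
import numpy as np

def reduce_and_boost(A, Y, smv_solver, n_draws=20, tol=1e-6, seed=None):
    """Sketch of reduce-and-boost for jointly sparse recovery.

    smv_solver(A, y) is any user-supplied routine returning a sparse
    vector x with A x ~= y (e.g., OMP or basis pursuit).
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_draws):
        a = rng.standard_normal(Y.shape[1])
        x = smv_solver(A, Y @ a)                     # one reduced SMV problem
        support = np.flatnonzero(np.abs(x) > tol)
        # Accept the support only if it explains every column of Y.
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        if np.linalg.norm(Y - A[:, support] @ X_s) <= tol * np.linalg.norm(Y):
            X = np.zeros((A.shape[1], Y.shape[1]))
            X[support] = X_s
            return X, support
    return None, None                                # no consistent support found
```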
Compressed Sensing of Analog Signals in Shift-Invariant Spaces
2009
"... A traditional assumption underlying most data converters is that the signal should be sampled at a rate exceeding twice the highest frequency. This statement is based on a worstcase scenario in which the signal occupies the entire available bandwidth. In practice, many signals are sparse so that on ..."
Abstract

Cited by 47 (30 self)
A traditional assumption underlying most data converters is that the signal should be sampled at a rate exceeding twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals are sparse, so that only part of the bandwidth is used. In this paper, we develop methods for low-rate sampling of continuous-time sparse signals in shift-invariant (SI) spaces, generated by m kernels with period T. We model sparsity by treating the case in which only k out of the m generators are active; however, we do not know which k are chosen. We show how to sample such signals at a rate much lower than m/T, which is the minimal sampling rate without exploiting sparsity. Our approach combines ideas from analog sampling in a subspace with a recently developed block diagram that converts an infinite set of sparse equations to a finite counterpart. Using these two components, we formulate our problem within the framework of finite compressed sensing (CS) and then rely on algorithms developed in that context. The distinguishing feature of our results is that in contrast to standard CS, which treats finite-length vectors, we consider sampling of analog signals for which no underlying finite-dimensional model exists. The proposed framework allows us to extend much of the recent literature on CS to the analog domain.
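The sparse SI signal model the abstract refers to can be written down directly. A sketch in LaTeX, using only quantities named in the abstract (m generators, period T, k active coefficient sequences); the symbols a_l and d_l are our notation, not necessarily the paper's.

```latex
% Sparse shift-invariant model: x lies in an SI space spanned by m
% kernels a_l with period T, but only k (unknown) generators carry
% nonzero coefficient sequences d_l.
x(t) = \sum_{l=1}^{m} \sum_{n \in \mathbb{Z}} d_l[n]\, a_l(t - nT),
\qquad \left|\{\, l : d_l \not\equiv 0 \,\}\right| = k \ll m .
```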
Subset selection in noise based on diversity measure minimization
IEEE Trans. Signal Processing, 2003
"... Abstract—In this paper, we develop robust methods for subset selection based on the minimization of diversity measures. A Bayesian framework is used to account for noise in the data and a maximum a posteriori (MAP) estimation procedure leads to an iterative procedure which is a regularized version o ..."
Abstract

Cited by 40 (10 self)
Abstract—In this paper, we develop robust methods for subset selection based on the minimization of diversity measures. A Bayesian framework is used to account for noise in the data, and a maximum a posteriori (MAP) estimation procedure leads to an iterative procedure which is a regularized version of the FOCal Underdetermined System Solver (FOCUSS) algorithm. The convergence of the regularized FOCUSS algorithm is established, and it is shown that the stable fixed points of the algorithm are sparse. We investigate three different criteria for choosing the regularization parameter: quality of fit, a sparsity criterion, and the L-curve. The L-curve method, as applied to the problem of subset selection, is found not to be robust, and we propose a novel modified L-curve procedure that solves this problem. Each of the regularized FOCUSS algorithms is evaluated through simulation of a detection problem, and the results are compared with those obtained using a sequential forward selection algorithm termed orthogonal matching pursuit (OMP). In each case, the regularized FOCUSS algorithm is shown to be superior to OMP in noisy environments. Index Terms—Diversity measures, linear inverse problems, matching pursuit, regularization, sparsity, subset selection, underdetermined systems.
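The MAP derivation replaces the exact constraint Ax = y with a damped data term, which changes the FOCUSS update only slightly. A sketch with a fixed regularization parameter lambda, standing in for the quality-of-fit, sparsity-criterion, or modified L-curve choices studied in the paper; the weight exponent p and iteration count are likewise illustrative assumptions.

```python
import numpy as np

def regularized_focuss(A, y, lam=1e-2, n_iter=20, p=1.0):
    """Sketch of regularized FOCUSS for noisy data.

    Each step solves the damped weighted problem
        min ||y - A W q||^2 + lambda ||q||^2,  x = W q,
    whose closed form is x <- W (AW)^T ((AW)(AW)^T + lambda I)^{-1} y.
    """
    n = A.shape[0]
    x = np.linalg.pinv(A) @ y                        # minimum-norm start
    for _ in range(n_iter):
        W = np.diag(np.abs(x) ** (1 - p / 2))        # diversity-measure weights
        AW = A @ W
        x = W @ AW.T @ np.linalg.solve(AW @ AW.T + lam * np.eye(n), y)
    return x
```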