Results 1–10 of 19
Sparse signal reconstruction from limited data using FOCUSS: A reweighted minimum norm algorithm
IEEE Trans. Signal Processing, 1997
Cited by 218 (12 self)
Abstract—We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable, with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.
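The two-part scheme this abstract describes (a pseudoinverse initial estimate refined by reweighted minimum-norm steps) can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' code; the iteration count and `eps` floor are illustrative choices:

```python
import numpy as np

def focuss(A, b, iters=30, eps=1e-8):
    """Reweighted minimum-norm iteration: x_{k+1} = W_k (A W_k)^+ b,
    where W_k = diag(|x_k| + eps) is built from the preceding solution."""
    x = np.linalg.pinv(A) @ b          # low-resolution initial estimate
    for _ in range(iters):
        W = np.diag(np.abs(x) + eps)   # weights from the previous iterate
        x = W @ np.linalg.pinv(A @ W) @ b
    return x
```

Entries that shrink between iterations receive ever smaller weights and are progressively suppressed, which is what drives the iterate toward a localized (sparse) support.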
Sparse solutions to linear inverse problems with multiple measurement vectors
IEEE Trans. Signal Processing, 2005
Cited by 131 (10 self)
Abstract—We address the problem of finding sparse solutions to an underdetermined system of equations when there are multiple measurement vectors having the same, but unknown, sparsity structure. The single-measurement sparse solution problem has been extensively studied in the past. Although it is known to be NP-hard, many single-measurement suboptimal algorithms have been formulated that have found utility in many different applications. Here, we consider in depth the extension of two classes of algorithms, Matching Pursuit (MP) and the FOCal Underdetermined System Solver (FOCUSS), to the multiple measurement case so that they may be used in applications such as neuromagnetic imaging, where multiple measurement vectors are available and solutions with a common sparsity structure must be computed. Cost functions appropriate to the multiple measurement problem are developed, and algorithms are derived based on their minimization. A simulation study is conducted on a test-case dictionary to show how the utilization of more than one measurement vector improves the performance of the MP and FOCUSS classes of algorithms, and their performances are compared.
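A minimal sketch of how the reweighting extends to multiple measurement vectors, assuming each row of the solution matrix receives one shared weight taken from its ℓ2 norm (one plausible choice; the function name and defaults are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def mfocuss(A, B, iters=30, eps=1e-8):
    """MMV variant of the reweighted step: every column of B shares one
    weight per row of X, taken from that row's l2 norm, so all measurement
    vectors are pushed toward a common sparsity structure."""
    X = np.linalg.pinv(A) @ B                  # joint initial estimate
    for _ in range(iters):
        w = np.linalg.norm(X, axis=1) + eps    # one weight per row of X
        W = np.diag(w)
        X = W @ np.linalg.pinv(A @ W) @ B
    return X
```

Because the weight is a row norm, a row is only kept alive if it is useful across all measurement vectors, which enforces the common support.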
An affine scaling methodology for best basis selection
IEEE Trans. Signal Processing, 1999
Cited by 79 (11 self)
Abstract—A methodology is developed to derive algorithms for optimal basis selection by minimizing diversity measures proposed by Wickerhauser and Donoho. These measures include the p-norm-like (p ≤ 1) diversity measures and the Gaussian and Shannon entropies. The algorithm development methodology uses a factored representation for the gradient and involves successive relaxation of the Lagrangian necessary condition. This yields algorithms that are intimately related to the Affine Scaling Transformation (AST) based methods commonly employed by the interior point approach to nonlinear optimization. The algorithms minimizing the p-norm-like (p ≤ 1) diversity measures are equivalent to a recently developed class of algorithms called the FOCal Underdetermined System Solver (FOCUSS). The general nature of the methodology provides a systematic approach for deriving this class of algorithms and a natural mechanism for extending them. It also facilitates a better understanding of the convergence behavior and a strengthening of the convergence results. The Gaussian entropy minimization algorithm is shown to be equivalent to a well-behaved p = 0 norm-like optimization algorithm. Computer experiments demonstrate that the p-norm-like and the Gaussian entropy algorithms perform well, converging to sparse solutions. The Shannon entropy algorithm produces solutions that are concentrated but is shown not to converge to a fully sparse solution.
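The AST-derived algorithms for the p-norm-like diversity measures reduce to a FOCUSS-style reweighted step whose weights are |x|^(1 - p/2); a minimal NumPy sketch (the defaults are illustrative choices, and the `eps` floor is a numerical convenience not discussed in the excerpt):

```python
import numpy as np

def focuss_p(A, b, p=0.5, iters=30, eps=1e-8):
    """Reweighted minimum-norm step for the p-norm-like (p <= 1)
    diversity measures: x_{k+1} = W_k (A W_k)^+ b with
    W_k = diag(|x_k|^(1 - p/2)); p = 2 would leave the weights
    constant, while smaller p suppresses small entries harder."""
    x = np.linalg.pinv(A) @ b
    for _ in range(iters):
        W = np.diag((np.abs(x) + eps) ** (1.0 - p / 2.0))
        x = W @ np.linalg.pinv(A @ W) @ b
    return x
```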
Sparse Bayesian learning for basis selection
IEEE Trans. Signal Processing, 2004
Cited by 75 (5 self)
Abstract—Sparse Bayesian learning (SBL) and specifically relevance vector machines have received much attention in the machine learning literature as a means of achieving parsimonious representations in the context of regression and classification. The methodology relies on a parameterized prior that encourages models with few nonzero weights. In this paper, we adapt SBL to the signal processing problem of basis selection from overcomplete dictionaries, proving several results about the SBL cost function that elucidate its general behavior and provide solid theoretical justification for this application. Specifically, we have shown that SBL retains a desirable property of the ℓ0-norm diversity measure (i.e., the global minimum is achieved at the maximally sparse solution) while often possessing a more limited constellation of local minima. We have also demonstrated that the local minima that do exist are achieved at sparse solutions. Later, we provide a novel interpretation of SBL that gives us valuable insight into why it is successful in producing sparse representations. Finally, we include simulation studies comparing sparse Bayesian learning with Basis Pursuit and the more recent FOCal Underdetermined System Solver (FOCUSS) class of basis selection algorithms. These results indicate that our theoretical insights translate directly into improved performance. Index Terms—Basis selection, diversity measures, linear inverse problems, sparse Bayesian learning, sparse representations.
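The parameterized-prior mechanism can be sketched with the standard EM updates for SBL: each weight gets a prior variance hyperparameter, and the hyperparameters of irrelevant columns are driven toward zero. This is a generic textbook-style sketch, not this paper's implementation, and the fixed noise variance `sigma2` is an illustrative simplification:

```python
import numpy as np

def sbl(A, b, sigma2=1e-6, iters=100):
    """EM updates for sparse Bayesian learning. gamma[i] is the prior
    variance of weight i; columns with gamma -> 0 are effectively pruned."""
    m, n = A.shape
    gamma = np.ones(n)
    for _ in range(iters):
        G = np.diag(gamma)
        C = sigma2 * np.eye(m) + A @ G @ A.T   # marginal covariance of b
        K = G @ A.T @ np.linalg.inv(C)
        mu = K @ b                             # posterior mean of weights
        Sigma = G - K @ A @ G                  # posterior covariance
        gamma = mu**2 + np.diag(Sigma)         # EM hyperparameter update
    return mu, gamma
```

The sparsity arises indirectly: no explicit penalty appears, but the evidence maximization over `gamma` prefers explanations of `b` that use few columns.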
Interpolation and extrapolation using a high-resolution discrete Fourier transform
IEEE Trans. Signal Processing, 1998
Cited by 35 (5 self)
Abstract—We present an iterative nonparametric approach to spectral estimation that is particularly suitable for estimation of line spectra. This approach minimizes a cost function derived from Bayes’ theorem. The method is suitable for line spectra since a “long-tailed” distribution is used to model the prior distribution of spectral amplitudes. An important aspect of this method is that since the data themselves are used as constraints, phase information can also be recovered and used to extend the data outside the original window. The objective function is formulated in terms of hyperparameters that control the degree of fit and spectral resolution. Noise rejection can also be achieved by truncating the number of iterations. Spectral resolution and extrapolation length are controlled by a single parameter. When this parameter is large compared with the spectral powers, the algorithm leads to zero extrapolation of the data, and the estimated Fourier transform yields the periodogram. When the data are sampled at a constant rate, the algorithm uses one Levinson recursion per iteration. For irregular sampling (unevenly sampled and/or gapped data), the algorithm uses one Cholesky decomposition per iteration. The performance of the algorithm is illustrated with three different problems that frequently arise in geophysical data processing: 1) harmonic retrieval from a time series contaminated with noise; 2) linear event detection from a finite aperture array of receivers [which, in fact, is an extension of 1)]; and 3) interpolation/extrapolation of gapped data. The performance of the algorithm as a spectral estimator is tested with the Kay and Marple data set. It is shown that the achieved resolution is comparable with parametric methods but with more accurate representation of the relative power in the spectral lines.
Index Terms—Bayes procedures, discrete Fourier transforms, interpolation, inverse problems, iterative methods, signal restoration, signal sampling/reconstruction, spectral analysis.
Interpolation and the Discrete Papoulis-Gerchberg Algorithm
IEEE Trans. Signal Processing, 1994
Cited by 32 (20 self)
In this paper, we analyze the performance of an iterative algorithm, similar to the discrete Papoulis-Gerchberg algorithm, that can be used to recover missing samples in finite-length records of band-limited data. No assumptions are made regarding the distribution of the missing samples, in contrast with the often-studied extrapolation problem, in which the known samples are grouped together. Indeed, it is possible to regard the observed signal as a sampled version of the original one, and to interpret the reconstruction result studied herein as a sampling result. We show that the iterative algorithm converges if the density of the sampling set exceeds a certain minimum value, which naturally increases with the bandwidth of the data. We give upper and lower bounds for the error as a function of the number of iterations, together with the signals for which the bounds are attained. Also, we analyze the effect of a relaxation constant present in the algorithm on the spectral radius of the iteration matrix. From this analysis we infer the optimum value of the relaxation constant. We also point out, among all sampling sets with the same density, those for which the convergence rate of the recovery algorithm is maximum or minimum. For low-pass signals it turns out that the best convergence rates result when the distances among the missing samples are a multiple of a certain integer. The worst convergence rates generally occur when the missing samples are contiguous.
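The iteration under analysis can be sketched as alternating band-limiting with re-imposition of the known samples, assuming a real band-limited signal and a boolean mask of observed positions (the DFT mask construction and defaults are illustrative choices, not the paper's exact setup):

```python
import numpy as np

def pg_recover(x_obs, known, keep, iters=1500, relax=1.0):
    """Discrete Papoulis-Gerchberg-type recovery of missing samples.
    known: boolean mask of observed positions; keep: number of low DFT
    bins retained on each side; relax: the relaxation constant."""
    n = len(x_obs)
    y = np.where(known, x_obs, 0.0)    # unknown samples start at zero
    band = np.zeros(n, dtype=bool)
    band[:keep] = True                 # bins 0 .. keep-1
    band[n - keep + 1:] = True         # conjugate-symmetric counterpart
    for _ in range(iters):
        z = np.real(np.fft.ifft(np.fft.fft(y) * band))  # band-limit
        y = y + relax * (z - y)        # relaxed update
        y[known] = x_obs[known]        # re-impose the observed samples
    return y
```

Consistent with the abstract, convergence requires the known-sample density to exceed a threshold tied to the bandwidth, and `relax` trades off against the spectral radius of the iteration matrix.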
Comparison of Basis Selection Methods
1996
Cited by 16 (3 self)
In this paper, we describe and evaluate three forward sequential basis selection methods, Basic Matching Pursuit (BMP), Order Recursive Matching Pursuit (ORMP), and Modified Matching Pursuit (MMP), and a parallel basis selection method, the FOCal Underdetermined System Solver (FOCUSS) algorithm. Computer simulations show that the ORMP method is superior to the BMP method in terms of its ability to select a compact basis set; however, it is computationally more complex. The MMP algorithm, developed here, is of intermediate computational complexity and has performance comparable to the ORMP method. All the sequential selection methods are shown to have difficulty in environments where the basis set contains highly correlated vectors. The drawback can be traced to the sequential nature of these methods, suggesting the need for a parallel basis selection method like FOCUSS. Simulations demonstrate that the FOCUSS algorithm does indeed perform well in such correlated environments. However,...
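Of the methods compared, Basic Matching Pursuit is the simplest to sketch; this minimal version assumes unit-norm dictionary columns (a common convention, not spelled out in the excerpt):

```python
import numpy as np

def bmp(A, b, iters):
    """Basic Matching Pursuit: greedily pick the column most correlated
    with the residual and subtract its contribution (unit-norm columns)."""
    n = A.shape[1]
    x = np.zeros(n)
    r = b.astype(float).copy()
    for _ in range(iters):
        c = A.T @ r                  # correlations with the residual
        j = np.argmax(np.abs(c))     # best-matching column
        x[j] += c[j]
        r -= c[j] * A[:, j]          # deflate the residual
    return x, r
```

The greedy, one-column-at-a-time structure is exactly what causes the difficulty with highly correlated dictionaries noted above: an early wrong pick is never revisited.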
A Method for Extrapolation of Missing Digital Audio Data
J. Audio Eng. Soc., 1994
Cited by 14 (1 self)
This preprint has been reproduced from the author's advance manuscript, without editing, corrections, or consideration by the Review Board. The AES takes no responsibility for the contents. Additional preprints may be obtained by sending request and remittance to the Audio Engineering Society, 60 East 42nd St.,
A sparsity-based method for the estimation of spectral lines from irregularly sampled data
IEEE Journal of Selected Topics in Signal Processing, 2007
Cited by 14 (0 self)
Abstract—We address the problem of estimating spectral lines from irregularly sampled data within the framework of sparse representations. Spectral analysis is formulated as a linear inverse problem, which is solved by minimizing an ℓ1-norm penalized cost function. This approach can be viewed as a Basis Pursuit DeNoising (BPDN) problem using a dictionary of cisoids with high frequency resolution. In the studied case, however, the usual BPDN characterizations of uniqueness and sparsity do not apply. This paper deals with the ℓ1-norm penalization of complex-valued variables, which brings satisfactory prior modeling for the estimation of spectral lines. An analytical characterization of the minimizer of the criterion is given, and geometrical properties are derived about the uniqueness and the sparsity of the solution. An efficient optimization strategy is proposed. Convergence properties of the Iterative Coordinate Descent (ICD) and Iterative Reweighted Least-Squares (IRLS) algorithms are first examined. Then, both strategies are merged in a convergent procedure that takes advantage of the specificities of ICD and IRLS, considerably improving the convergence speed. The computation of the resulting spectrum estimator can be implemented efficiently for any sampling scheme. Algorithm performance and estimation quality are illustrated throughout the paper using an artificial data set, typical of some astrophysical problems, where sampling irregularities are caused by day/night alternation. We show that accurate frequency location is achieved with high resolution. In particular, compared with sequential Matching Pursuit methods, the proposed approach is shown to achieve more robustness regarding sampling artifacts. Index Terms—Algorithms, estimation, inverse problems, optimization methods, sparse representations, spectral analysis, time series.
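The IRLS ingredient of the merged strategy can be sketched for the real-valued case (the paper works with complex-valued variables; this simplification, the smoothing floor `eps`, and the defaults are illustrative):

```python
import numpy as np

def irls_l1(A, b, lam, iters=100, eps=1e-8):
    """IRLS for the criterion ||Ax - b||^2 + lam * ||x||_1: the l1 term
    is replaced by a quadratic reweighted with the current magnitudes,
    so each iteration reduces to one linear solve."""
    x = np.linalg.pinv(A) @ b                  # start from the min-norm fit
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        w = lam / (2.0 * (np.abs(x) + eps))    # reweighting of the penalty
        x = np.linalg.solve(AtA + np.diag(w), Atb)
    return x
```

Each solve handles all coordinates jointly, whereas ICD updates one coordinate at a time; the paper's point is that combining the two gives a convergent procedure faster than either alone.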
Regularized Estimation of Mixed Spectra Using a Circular GibbsMarkov Model
IEEE Trans. Signal Processing, 2001
Cited by 9 (3 self)
Formulated as a linear inverse problem, spectral estimation is particularly underdetermined when only short data sets are available. Regularization by penalization is an appealing nonparametric approach to solving such ill-posed problems. Following Sacchi et al., we first address the recovery of line spectra in this framework. Then, we extend the methodology to situations of increasing difficulty: the case of smooth spectra and the case of mixed spectra, i.e., peaks embedded in smooth spectral contributions.