## Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm (1997)


Venue: IEEE Trans. Signal Processing

Citations: 242 (13 self)

### BibTeX

```bibtex
@ARTICLE{Gorodnitsky97sparsesignal,
  author  = {Irina F. Gorodnitsky and Bhaskar D. Rao},
  title   = {Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm},
  journal = {IEEE Trans. Signal Processing},
  year    = {1997},
  volume  = {45},
  number  = {3},
  pages   = {600--616}
}
```


### Abstract

We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable, with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method, which combines desirable characteristics of both classical optimization and learning-based algorithms, is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.
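The core iteration the abstract describes (a weighted minimum norm solve whose weights come from the preceding iterate) can be sketched in a few lines of NumPy. This is a minimal, noise-free illustration, not the paper's full algorithm; the problem sizes, stopping rule, and test signal are invented for the example:

```python
import numpy as np

def focuss(A, b, n_iter=100, tol=1e-10):
    """Basic FOCUSS sketch: start from the minimum 2-norm estimate,
    then repeatedly solve a weighted minimum norm problem with the
    weights taken from the preceding solution,
        x_{k+1} = W_k (A W_k)^+ b,  with  W_k = diag(x_k).
    Small entries shrink toward zero, focusing the energy."""
    x = np.linalg.pinv(A) @ b          # low-resolution initial estimate
    for _ in range(n_iter):
        W = np.diag(x)                 # weights from the previous iterate
        x_new = W @ (np.linalg.pinv(A @ W) @ b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Underdetermined system (4 equations, 10 unknowns) generated by a
# 1-sparse signal; FOCUSS should return a localized-energy solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 10))
x_true = np.zeros(10)
x_true[3] = 2.0
b = A @ x_true
x_hat = focuss(A, b)
```

In practice the iterate collapses onto a basic solution with at most as many nonzero entries as there are equations; the paper's analysis supplies the convergence guarantees and rate.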

### Citations

4100 | Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images - Geman, Geman - 1984

Citation context: "...rse signals include exhaustive searches (e.g., greedy algorithms [1], [2]), evolutionary searches (e.g., genetic algorithms with a sparsity constraint [3]), and Bayesian restoration with Gibbs priors [4]. These algorithms do not utilize any additional information about the solution except its sparsity. Thus, their results are not well constrained, and the bases they select are essentially arbitrary w..."

1198 | Linear and Nonlinear Programming - Luenberger - 1984

Citation context: "...inition because they provide valid minimum support representations. Sparse solutions also arise in LP and -norm minimization problems, and we borrow some useful terminology from that area. Definition [30]: Given a set of simultaneous linear equations in unknowns (1), let be any nonsingular submatrix made up of columns of . Then, if all components of not associated with the columns of are set equal to ..."

1166 | Matching pursuits with time-frequency dictionaries - Mallat, Zhang - 1993

Citation context: "...single vector sample, such as a single time series or a single snapshot from a sensor array. Some common techniques used to compute sparse signals include exhaustive searches (e.g., greedy algorithms [1], [2]), evolutionary searches (e.g., genetic algorithms with a sparsity constraint [3]), and Bayesian restoration with Gibbs priors [4]. These algorithms do not utilize any additional information abou..."

500 | Introduction to Applied Nonlinear Dynamical Systems and Chaos - Wiggins - 1990

Citation context: "...ferent convergence points. We first provide some background in nonlinear systems to motivate our analysis steps. This material is a compilation from several sources. For references, see, for example, [33] and the references therein. A phase space is a collection of trajectories that trace the temporal evolution of a nonlinear algor..."

339 | Sparse Approximate Solutions To Linear Systems - Natarajan - 1995

Citation context: "...e vector sample, such as a single time series or a single snapshot from a sensor array. Some common techniques used to compute sparse signals include exhaustive searches (e.g., greedy algorithms [1], [2]), evolutionary searches (e.g., genetic algorithms with a sparsity constraint [3]), and Bayesian restoration with Gibbs priors [4]. These algorithms do not utilize any additional information about the..."

234 | Generalized Inverses of Linear Transformations - Campbell, Meyer - 1991

Citation context: "...iterion on the solution. (Footnote: unless explicitly stated, all ‖·‖ norms in this paper refer to the 2-norm.) This solution is unique and is computed as x = A⁺b, where A⁺ denotes the Moore–Penrose inverse [29]. The solution has a number of computational advantages, but it does not provide sparse solutions. Rather, it has the tendency to spread the energy among a large number of entries of x instead of puttin..."
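The energy-spreading tendency mentioned in this snippet is easy to demonstrate with NumPy (a hedged illustration; the matrix sizes and test signal are invented for the example):

```python
import numpy as np

# Data generated by a 1-sparse signal through an underdetermined system.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 8))
x_sparse = np.zeros(8)
x_sparse[5] = 1.0
b = A @ x_sparse

# Minimum 2-norm solution x = A^+ b via the Moore-Penrose pseudoinverse:
# it reproduces the data exactly and has the smallest 2-norm of any
# solution, but it is typically dense rather than sparse.
x_mn = np.linalg.pinv(A) @ b
print(np.count_nonzero(np.abs(x_mn) > 1e-6))   # typically most entries nonzero
```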

226 | Analysis of discrete ill-posed problems by means of the L-curve - Hansen - 1992

Citation context: "...first left and right singular vectors, with the diagonal matrix containing the corresponding singular values. The TSVD FOCUSS iteration is then given in (14). The parameter can be found using the L-curve criterion [35], for example. The performance of both regularized versions of FOCUSS was studied in the context of the neuromagnetic imaging problem in [8]. The cost of inverse operations and efficient algorithms fo..."
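For context, a generic truncated-SVD solve looks like the sketch below. This is not the paper's exact TSVD-FOCUSS update (14), whose details are in the text; the truncation level `k` plays the role the snippet assigns to the L-curve-selected parameter, and the matrix sizes are invented:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD pseudoinverse solve: keep only the k largest
    singular values, discarding small-singular-value directions that
    mainly amplify noise in ill-posed problems."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 10))
b = rng.standard_normal(6)
x_full = tsvd_solve(A, b, 6)    # k = rank: identical to pinv(A) @ b
x_trunc = tsvd_solve(A, b, 3)   # k = 3: smaller-norm, stabler solution
```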

208 | Genetic algorithms - Holland - 1992

Citation context: "...array. Some common techniques used to compute sparse signals include exhaustive searches (e.g., greedy algorithms [1], [2]), evolutionary searches (e.g., genetic algorithms with a sparsity constraint [3]), and Bayesian restoration with Gibbs priors [4]. These algorithms do not utilize any additional information about the solution except its sparsity. Thus, their results are not well constrained, and ..."

110 | A new algorithm in spectral analysis and band-limited extrapolation - Papoulis - 1975

Citation context: "...ion of bandlimited signals has been vigorously studied in the past but mostly in the context of spectral estimation, and many works pertain to the problem where signal bandwidth is known. Papoulis in [10] and Gerchberg in [11] proposed what is known as the Papoulis–Gerchberg (PG) algorithm which, given a continuous signal of known bandwidth on a finite interval of time, iteratively recovered the entir..."

99 | Super-resolution through error energy reduction - Gerchberg - 1974

Citation context: "...nals has been vigorously studied in the past but mostly in the context of spectral estimation, and many works pertain to the problem where signal bandwidth is known. Papoulis in [10] and Gerchberg in [11] proposed what is known as the Papoulis–Gerchberg (PG) algorithm which, given a continuous signal of known bandwidth on a finite interval of time, iteratively recovered the entire signal. A one-step e..."
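The alternating-projection idea behind the Papoulis–Gerchberg algorithm described in these snippets can be sketched in a discrete (DFT) setting. The signal, bin set, and sample layout below are invented for illustration; the continuous-time algorithm in [10] and [11] differs in detail:

```python
import numpy as np

def pg_extrapolate(y_known, known_idx, n, band_bins, n_iter=1000):
    """Discrete Papoulis-Gerchberg sketch: alternately project onto
    (a) signals bandlimited to the known DFT bins, and
    (b) signals agreeing with the observed samples."""
    x = np.zeros(n, dtype=complex)
    x[known_idx] = y_known
    mask = np.zeros(n)
    mask[list(band_bins)] = 1.0
    for _ in range(n_iter):
        x = np.fft.ifft(np.fft.fft(x) * mask)   # enforce the known bandwidth
        x[known_idx] = y_known                  # re-impose observed samples
    return x.real

# Bandlimited signal observed only on its first 40 of 64 samples.
n = 64
t = np.arange(n)
sig = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)
band_bins = {3, 5, n - 3, n - 5}       # DFT bins occupied by the signal
known_idx = np.arange(40)
rec = pg_extrapolate(sig[known_idx], known_idx, n, band_bins)
```

Because the bandlimited subspace here is low-dimensional and the known-sample set is large, the iteration recovers the missing 24 samples to high accuracy.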

84 | Neuromagnetic source imaging with FOCUSS: a recursive weighted minimum norm algorithm - Gorodnitsky, George, et al. - 1995

32 | Continuous probabilistic solutions to the biomagnetic inverse problem, Inverse Problems - Ioannides, Bolton, et al. - 1990

Citation context: "...is recursive weighting to enhance resolution in harmonic retrieval was studied in [18], [19], and the references therein. A similar iterative procedure was independently proposed in neuroimaging [8], [20]–[22], although the implementation of the recursive constraints was not explicitly exposed in [20]. In [22], Srebro developed an interesting and slightly different implementation of the recursive weig..."

27 | Extrapolation algorithms for discrete signals with application in spectrum estimation - Jain, Ranganath - 1981

Citation context: "...n a continuous signal of known bandwidth on a finite interval of time, iteratively recovered the entire signal. A one-step extrapolation algorithm for this procedure was later suggested in [12]. Jain [13] unified many of the existing bandlimited extrapolation algorithms under the criterion of minimum norm least squares extrapolation and suggested another recursive least squares algorithm. A similar al..."

26 | A generalisation of the sampling theorem - Linden, Abramson - 1960

Citation context: "...oof: The result follows readily from Theorem 1 and Corollary 1. Corollary 2 is a generalization of the Bandpass Filtering Theorem used in spectral estimation that is derived from the Sampling Theorem [32]. The Bandpass Filtering Theorem states that the length of a sampling region twice the bandwidth of a real signal is sufficient to recover this signal. This is different from the condition on the dens..."

21 | A new algorithm for computing sparse solutions to linear inverse problems - Harikumar, Bresler - 1996

Citation context: "...not known, and these techniques also return an essentially arbitrary solution with respect to the real signal. Another approach used to find sparse solutions is to compute maximally sparse solutions [6]. In Section III, we show that in general, maximum sparsity is not a suitable constraint for finding sparse signals and derive the conditions under which its use is appropriate. Some of the techniques..."

21 | Extrapolation and spectral estimation with iterative weighted norm modification - Cabrera, Parks - 1991

Citation context: "...the estimate at each iteration to reduce spectral support of the solution in the subsequent iteration. The first use of what is equivalent to the AST was proposed in a spectral estimation context in [16] and [17]. The authors modified the Papoulis–Chamzas algorithm to use the entire solution from a preceding iteration as the weight for the next iteration. The use of this recursive weighting to enhanc..."

11 | Detection of hidden periodicities by adaptive extrapolation - Papoulis, Chamzas - 1979

Citation context: "...squares extrapolation and suggested another recursive least squares algorithm. A similar algorithm, with no restrictions on the shape of the sampled region or the bandwidth, was presented in [14]. In [15], Papoulis and Chamzas modified the PG algorithm by truncating the spectrum of the estimate at each iteration to reduce spectral support of the solution in the subsequent iteration. The first use of w..."

11 | An application of the Wiener-Kolmogorov smoothing theory to matrix inversion - Foster - 1961

Citation context: "...changes in the solution in response to even small noise in the data. Here, we suggest two regularized versions of FOCUSS based on the two most common regularization techniques [8]. One is Tikhonov regularization [34] used at each iteration. The second is truncated singular value decomposition (TSVD), which is also used at each iteration. Tikhonov Regularization: In this method, the optimization objective is modif..."
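A minimal sketch of the Tikhonov-regularized solve this snippet refers to (the generic form, not the paper's exact per-iteration update; the value of λ and the matrix sizes are invented):

```python
import numpy as np

def tikhonov_min_norm(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 for underdetermined A,
    via the dual form x = A^T (A A^T + lam I)^(-1) b. As lam -> 0 this
    tends to the minimum 2-norm solution A^+ b; lam > 0 damps the large
    solution changes that small noise in b would otherwise cause."""
    m = A.shape[0]
    return A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), b)

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 12))
b = rng.standard_normal(5)
x0 = tikhonov_min_norm(A, b, 0.0)   # equals the pseudoinverse solution
x1 = tikhonov_min_norm(A, b, 0.1)   # regularized: smaller-norm solution
```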

10 | Recovery of a sparse spike time series by L1 norm deconvolution - O’Brien, Sinclair, et al. - 1994

Citation context: "...o the real signal. Alternatively, -norm and -norm minimization and Linear Programming (LP) methods which produce a solution by optimizing some cost function are also used for sparse signal estimation [5]. Unfortunately, in most signal processing problems, the relationship of the real signal to the cost functions is not known, and these techniques also return an essentially arbitrary solution with res..."

9 | Computational experience with discrete lp-approximation - Merle, Späth - 1974

Citation context: "...a different optimization objective, in the design of fast interior point methods in LP, including the Karmarkar algorithm, and in minimizing the -norm of the residual error in overdetermined problems [9]. A posteriori constrained extrapolation and interpolation of bandlimited signals has been vigorously studied in the past but mostly in the context of spectral estimation, and many works pertain to th..."

8 | An extrapolation procedure for band-limited signals - Cadzow - 1979

Citation context: "...which, given a continuous signal of known bandwidth on a finite interval of time, iteratively recovered the entire signal. A one-step extrapolation algorithm for this procedure was later suggested in [12]. Jain [13] unified many of the existing bandlimited extrapolation algorithms under the criterion of minimum norm least squares extrapolation and suggested another recursive least squares algorithm. A..."

8 | Improvement of discrete band-limited signal extrapolation by iterative subspace modification - Lee, Sullivan, et al. - 1987

7 | Discrete and continuous band-limited signal extrapolation - Sanz, Huang - 1983

Citation context: "...rm least squares extrapolation and suggested another recursive least squares algorithm. A similar algorithm, with no restrictions on the shape of the sampled region or the bandwidth, was presented in [14]. In [15], Papoulis and Chamzas modified the PG algorithm by truncating the spectrum of the estimate at each iteration to reduce spectral support of the solution in the subsequent iteration. The first..."

5 | Iterative refinement of the minimum norm solution of the bioelectric inverse problem - Srebro - 1996

Citation context: "...cursive weighting to enhance resolution in harmonic retrieval was studied in [18], [19], and the references therein. A similar iterative procedure was independently proposed in neuroimaging [8], [20]–[22], although the implementation of the recursive constraints was not explicitly exposed in [20]. In [22], Srebro developed an interesting and slightly different implementation of the recursive weighting..."

5 | A new iterative weighted norm minimization algorithm and its applications - Gorodnitsky, Rao - 1992

Citation context: "...l optimization and learning-based neural networks. Applications of FOCUSS to DOA and neuromagnetic imaging problems are presented in Section VIII. Several other applications of FOCUSS can be found in [23], [25], and [26]. The paper focuses on the theoretical foundation of the a posteriori constrained algorithm in which we restrict ourselves to a noise-free environment. Issues pertaining to noisy data,..."

4 | Über einige Gesetze der Vertheilung elektrischer Ströme in körperlichen Leitern, mit Anwendung auf die thierisch-elektrischen Versuche [On some laws of the distribution of electric currents in material conductors, with application to experiments on animal electricity], Annalen der Physik und Chemie 7:211–233 - Helmholtz - 1853

Citation context: "...romagnetic fields. Even when the sampling set is completely dense, i.e., the field is completely known everywhere outside the conducting volume, the current inside the volume cannot be uniquely found [31]. Given such intrinsic ill-posedness, sparse solutions, including the maximally sparse solutions, are never unique. However, depending on the physics, the net effect of the intrinsic ill-posedness on ..."

3 | Affine scaling transformation based methods for computing low complexity sparse solutions - Rao, Gorodnitsky - 1996

2 | Can compact neural currents be uniquely determined - Gorodnitsky - 1996

Citation context: "...high computational cost and, in some cases, compromised convergence, are serious drawbacks of these methods. For completeness, we want to mention parametric methods for estimating sparse signals. In [7], we show that sparse solutions can be significantly better constrained by multiple data samples, such as multiple snapshots from a sensor array; therefore, parametric techniques based on such data ca..."

2 | Source localization in magnetoencephalography using an iterative weighted minimum norm algorithm - Gorodnitsky, Rao, et al. - 1992

Citation context: "...nd the suggested algorithms simply amounted to refinement of a minimum 2-norm type initial estimate. The use of different initializations and generalizations of the basic iterations were suggested in [21] and [24]. The use of a more general, non-AST objective function at each iterative step was suggested in [6]. The contributions of this paper are as follows. We present the development of the re-weigh..."

2 | A novel class of recursively constrained algorithms for localized energy solutions: Theory and application to magnetoencephalography and signal processing - Gorodnitsky - 1995

Citation context: "...nd learning-based neural networks. Applications of FOCUSS to DOA and neuromagnetic imaging problems are presented in Section VIII. Several other applications of FOCUSS can be found in [23], [25], and [26]. The paper focuses on the theoretical foundation of the a posteriori constrained algorithm in which we restrict ourselves to a noise-free environment. Issues pertaining to noisy data, such as perform..."

1 | Estimation of sinusoids by adaptive minimum norm extrapolation - Cabrera, Yang, et al. - 1990

Citation context: "...as algorithm to use the entire solution from a preceding iteration as the weight for the next iteration. The use of this recursive weighting to enhance resolution in harmonic retrieval was studied in [18], [19], and the references therein. A similar iterative procedure was independently proposed in neuroimaging [8], [20]–[22], although the implementation of the recursive constraints was not explicitly..."

1 | Resolution enhancement in time-frequency distributions based on adaptive time extrapolations - Thomas, Cabrera - 1994

Citation context: "...orithm to use the entire solution from a preceding iteration as the weight for the next iteration. The use of this recursive weighting to enhance resolution in harmonic retrieval was studied in [18], [19], and the references therein. A similar iterative procedure was independently proposed in neuroimaging [8], [20]–[22], although the implementation of the recursive constraints was not explicitly expos..."

1 | A recursive weighted minimum norm algorithm: Analysis and applications - 1993

Citation context: "...ggested algorithms simply amounted to refinement of a minimum 2-norm type initial estimate. The use of different initializations and generalizations of the basic iterations were suggested in [21] and [24]. The use of a more general, non-AST objective function at each iterative step was suggested in [6]. The contributions of this paper are as follows. We present the development of the re-weighted minim..."

1 | A novel recurrent network for signal processing - Rao, Gorodnitsky - 1993

Citation context: "...mization and learning-based neural networks. Applications of FOCUSS to DOA and neuromagnetic imaging problems are presented in Section VIII. Several other applications of FOCUSS can be found in [23], [25], and [26]. The paper focuses on the theoretical foundation of the a posteriori constrained algorithm in which we restrict ourselves to a noise-free environment. Issues pertaining to noisy data, such ..."

1 | Fast algorithms for biomedical tomography problems - Gorodnitsky, Beransky - 1996

Citation context: "...blem. We also give an example with noisy data in Section VIII. The computational requirements of inverse algorithms and efficient computational algorithms for large-scale problems are investigated in [28]. II. NONPARAMETRIC FORMULATION AND MINIMUM NORM OPTIMIZATION We review the nonparametric formulation of a signal estimation problem and the common minimum norm solutions. We work in complex space wit..."