Results 1–10 of 23
Sparse signal reconstruction from limited data using FOCUSS: A reweighted minimum norm algorithm
 IEEE Trans. Signal Processing
, 1997
Abstract

Cited by 236 (13 self)
Abstract—We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.
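The reweighted minimum-norm iteration described in this abstract can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: it uses the simplest weighting W_k = diag(x_{k-1}) and a fixed iteration count.

```python
import numpy as np

def focuss(A, b, iters=15):
    """Basic FOCUSS: reweighted minimum-norm iterations for Ax = b.

    Starts from the minimum 2-norm solution and reweights with the
    previous iterate, so large entries are reinforced and small
    entries are driven to zero.
    """
    # low-resolution initial estimate: the minimum 2-norm solution
    x = A.T @ np.linalg.solve(A @ A.T, b)
    for _ in range(iters):
        W = np.diag(x)                        # weights from previous iterate
        x = W @ np.linalg.pinv(A @ W) @ b     # weighted minimum-norm step
    return x

# 1-sparse signal, 2 measurements, 4 unknowns (underdetermined)
A = np.array([[1., 0., 1., 1.],
              [0., 1., 1., -1.]])
b = A @ np.array([0., 0., 2., 0.])
print(focuss(A, b))   # -> approximately [0, 0, 2, 0]
```

Each step minimizes the weighted norm ‖W⁻¹x‖ over all solutions of Ax = b, which is exactly the reweighted minimum-norm step the abstract describes.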
An affine scaling methodology for best basis selection
 IEEE Trans. Signal Processing
, 1999
Abstract

Cited by 88 (14 self)
Abstract—A methodology is developed to derive algorithms for optimal basis selection by minimizing diversity measures proposed by Wickerhauser and Donoho. These measures include the p-norm-like (ℓ(p≤1)) diversity measures and the Gaussian and Shannon entropies. The algorithm development methodology uses a factored representation for the gradient and involves successive relaxation of the Lagrangian necessary condition. This yields algorithms that are intimately related to the Affine Scaling Transformation (AST) based methods commonly employed by the interior point approach to nonlinear optimization. The algorithms minimizing the ℓ(p≤1) diversity measures are equivalent to a recently developed class of algorithms called FOcal Underdetermined System Solver (FOCUSS). The general nature of the methodology provides a systematic approach for deriving this class of algorithms and a natural mechanism for extending them. It also facilitates a better understanding of the convergence behavior and a strengthening of the convergence results. The Gaussian entropy minimization algorithm is shown to be equivalent to a well-behaved p = 0 norm-like optimization algorithm. Computer experiments demonstrate that the p-norm-like and the Gaussian entropy algorithms perform well, converging to sparse solutions. The Shannon entropy algorithm produces solutions that are concentrated but are shown to not converge to a fully sparse solution.
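The ℓ(p≤1)-diversity algorithms this abstract refers to are FOCUSS-type reweighted minimum-norm iterations. A hedged NumPy sketch, assuming the standard weighting W_k = diag(|x_{k-1}|^(1−p/2)); p = 2 recovers the plain minimum-norm solution, while p ≤ 1 drives the iterates toward sparse solutions:

```python
import numpy as np

def focuss_p(A, b, p=1.0, iters=60):
    """FOCUSS-type iteration minimizing an l(p<=1) diversity measure.

    Each step solves a weighted minimum-norm problem with weights
    W = diag(|x|^(1 - p/2)).
    """
    x = A.T @ np.linalg.solve(A @ A.T, b)     # minimum 2-norm start
    for _ in range(iters):
        W = np.diag(np.abs(x) ** (1.0 - p / 2.0))
        x = W @ np.linalg.pinv(A @ W) @ b
    return x

A = np.array([[1., 0., 1., 1.],
              [0., 1., 1., -1.]])
b = A @ np.array([0., 0., 2., 0.])
print(focuss_p(A, b, p=1.0))   # -> approximately [0, 0, 2, 0]
```

For p = 1 the convergence toward the sparse solution is roughly linear on this example, so more iterations are used than in the basic p = 0 variant; the parameter values here are illustrative choices.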
Enhancing Sparsity by Reweighted ℓ1 Minimization
, 2007
Abstract

Cited by 81 (4 self)
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained ℓ1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms ℓ1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted ℓ1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations—not by reweighting the ℓ1 norm of the coefficient sequence as is common, but by reweighting the ℓ1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.
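A minimal sketch of the reweighted ℓ1 scheme, assuming SciPy's `linprog` as the inner solver (each weighted ℓ1 problem is cast as a linear program in (x, t) with |x_i| ≤ t_i); the `eps` smoothing constant and round count are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, b, rounds=4, eps=1e-3):
    """Iteratively reweighted l1 minimization (sketch).

    Each round solves  min sum_i w_i |x_i|  s.t.  Ax = b,
    then updates w_i = 1 / (|x_i| + eps).
    """
    m, n = A.shape
    w = np.ones(n)
    I = np.eye(n)
    # inequality constraints:  x - t <= 0  and  -x - t <= 0
    A_ub = np.block([[I, -I], [-I, -I]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    bounds = [(None, None)] * n + [(0, None)] * n
    x = np.zeros(n)
    for _ in range(rounds):
        c = np.concatenate([np.zeros(n), w])   # minimize sum_i w_i * t_i
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                      bounds=bounds)
        x = res.x[:n]
        w = 1.0 / (np.abs(x) + eps)            # reweight toward the support
    return x

A = np.array([[1., 0., 1., 1.],
              [0., 1., 1., -1.]])
b = np.array([2., 2.])
print(reweighted_l1(A, b))   # -> approximately [0, 0, 2, 0]
```

Small entries of the current solution receive large weights in the next round, so the sequence of LPs progressively concentrates the solution on a small support.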
Theoretical results on sparse representations of multiple-measurement vectors
 IEEE Trans. Signal Process
, 2006
Abstract

Cited by 73 (2 self)
Abstract—Multiple measurement vector (MMV) is a relatively new problem in sparse representations, for which efficient methods have been proposed. While many theoretical results are available for the simpler single measurement vector (SMV) case, theoretical analysis of MMV is lacking. In this paper, some known results for SMV are generalized to MMV. Some of these new results take advantage of the additional information in the MMV formulation. We consider uniqueness under both an ℓ0-norm-like criterion and an ℓ1-norm-like criterion. The consequent equivalence between the ℓ0-norm approach and the ℓ1-norm approach indicates a computationally efficient way of finding the sparsest representation in an overcomplete dictionary. For greedy algorithms, it is proven that under certain conditions orthogonal matching pursuit (OMP) can find the sparsest representation of an MMV with computational efficiency, just as in SMV. Simulations show that the predictions made by the proved theorems tend to be very conservative; this is consistent with some recent theoretical advances in probability. The connections will be discussed.
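The OMP-for-MMV idea can be illustrated with a short NumPy sketch (an assumption-laden reconstruction, not the paper's algorithmic detail): atoms are scored by the row norms of AᵀR, so all measurement vectors jointly vote on the shared support.

```python
import numpy as np

def omp_mmv(A, B, k):
    """Orthogonal matching pursuit for multiple measurement vectors.

    B holds several measurement vectors as columns that share one
    sparsity support. Each step adds the atom with the largest
    combined correlation (row norm of A^T R), then refits all
    coefficients by least squares on the current support.
    """
    n = A.shape[1]
    support = []
    R = B.copy()
    for _ in range(k):
        scores = np.linalg.norm(A.T @ R, axis=1)   # one score per atom
        scores[support] = -np.inf                  # never repick an atom
        support.append(int(np.argmax(scores)))
        C, *_ = np.linalg.lstsq(A[:, support], B, rcond=None)
        R = B - A[:, support] @ C                  # updated residual
    X = np.zeros((n, B.shape[1]))
    X[support] = C
    return X, sorted(support)

# toy dictionary: 4 unit atoms plus 2 "mixed" atoms
s = 2 ** -0.5
A = np.array([[1., 0., 0., 0., s, 0.],
              [0., 1., 0., 0., s, 0.],
              [0., 0., 1., 0., 0., s],
              [0., 0., 0., 1., 0., s]])
# two measurement vectors sharing support {0, 2}
B = np.array([[3., 2.],
              [0., 0.],
              [2., -3.],
              [0., 0.]])
X, support = omp_mmv(A, B, k=2)
print(support)   # -> [0, 2]
```

With a single measurement vector this reduces to ordinary OMP; pooling the columns of B makes the atom selection more reliable, which is the intuition behind the MMV generalizations discussed above.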
Enhancing sparsity by reweighted ℓ1 minimization
 Journal of Fourier Analysis and Applications
, 2008
Abstract

Cited by 34 (1 self)
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained ℓ1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms ℓ1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted ℓ1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations—not by reweighting the ℓ1 norm of the coefficient sequence as is common, but by reweighting the ℓ1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.
On The Optimality Of The Backward Greedy Algorithm For The Subset Selection Problem
, 1998
Abstract

Cited by 32 (1 self)
The following linear inverse problem is considered: given a full column rank m × n data matrix A and a length-m observation vector b, find the best least squares solution to Ax = b with at most r < n nonzero components. The backward greedy algorithm computes a sparse solution to Ax = b by greedily removing columns from A until r columns are left. A simple implementation based on a QR downdating scheme by Givens rotations is described. The backward greedy algorithm is shown to be optimal for this problem in the sense that it selects the "correct" subset of columns from A if the perturbation of the data vector b is small enough.
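A plain-NumPy sketch of the backward greedy selection (the paper's efficient QR downdating by Givens rotations is replaced here by repeated least-squares solves, purely for clarity):

```python
import numpy as np

def backward_greedy(A, b, r):
    """Backward greedy subset selection (illustrative sketch).

    Starts from all columns of A and repeatedly removes the column
    whose deletion increases the least-squares residual the least,
    until r columns remain.
    """
    def residual(cols):
        x, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
        return np.linalg.norm(A[:, cols] @ x - b)

    cols = list(range(A.shape[1]))
    while len(cols) > r:
        # drop the column whose removal hurts the fit the least
        worst = min(cols, key=lambda j: residual([c for c in cols if c != j]))
        cols.remove(worst)
    return sorted(cols)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))        # full column rank (generically)
b = A @ np.array([2., 0., -3., 0.])    # true support {0, 2}, no noise
print(backward_greedy(A, b, r=2))      # -> [0, 2]
```

With noiseless data, deleting a column outside the true support leaves a zero residual while deleting a support column does not, which is why the backward pass recovers the correct subset here; the optimality result in the abstract extends this to small perturbations of b.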
Low-authority controller design via convex optimization
 AIAA Journal of Guidance, Control, and Dynamics
, 1999
Abstract

Cited by 31 (14 self)
In this paper we address the problem of low-authority controller (LAC) design. The premise is that the actuators have limited authority, and hence cannot significantly shift the eigenvalues of the system. As a result, the closed-loop eigenvalues can be well approximated analytically using perturbation theory. These analytical approximations may suffice to predict the behavior of the closed-loop system in practical cases, and will provide at least a very strong rationale for the first step in the design iteration loop. We will show that LAC design can be cast as convex optimization problems that can be solved efficiently in practice using interior-point methods. Also, we will show that by optimizing the ℓ1 norm of the feedback gains, we can arrive at sparse designs, i.e., designs in which only a small number of the control gains are nonzero. Thus, in effect, we can also solve actuator/sensor placement or controller architecture design problems. Keywords: Low-authority control, actuator/sensor placement, linear operator perturbation theory, convex optimization, second-order cone programming, semidefinite programming, linear matrix inequality.
On the application of the global matched filter to DOA estimation with uniform circular arrays
 IEEE Trans. Signal Processing
, 2001
Fast optimal and suboptimal algorithms for sparse solutions to linear inverse problems
 in Proc. of ICASSP
, 1998
Abstract

Cited by 10 (0 self)
We present two “fast” approaches to the NP-hard problem of computing a maximally sparse approximate solution to linear inverse problems, also known as best subset selection. The first approach, a heuristic, is an iterative algorithm globally convergent to sparse elements of any given convex, compact set S ⊂ ℝⁿ. We demonstrate its effectiveness in bandlimited extrapolation and in sparse filter design. The second approach is a polynomial-time greedy sequential backward elimination algorithm. We show that if A has full column rank and ε is small enough, then the algorithm will find the sparsest x satisfying ‖Ax − b‖ ≤ ε, if such an x exists.
Sparse basis selection, ICA, and majorization: towards a unified perspective
 in Proceedings of ICASSP’99
, 1999
Abstract

Cited by 7 (3 self)
Sparse solutions to the linear inverse problem Ax = y and the determination of an environmentally adapted overcomplete dictionary (the columns of A) depend upon the choice of a “regularizing function” d(x) in several recently proposed procedures. We discuss the interpretation of d(x) within a Bayesian framework, and the desirable properties that “good” (i.e., sparsity-ensuring) regularizing functions d(x) might have. These properties are: Schur-concavity (d(x) is consistent with majorization); concavity (d(x) has sparse minima); parameterizability (d(x) is drawn from a large, parameterizable class); and factorizability of the gradient of d(x) in a certain manner. The last property (which naturally leads one to consider separable regularizing functions) allows d(x) to be efficiently minimized subject to Ax = y using an Affine Scaling Transformation (AST)-like algorithm “adapted” to …