Results 1–9 of 9
An affine scaling methodology for best basis selection
IEEE Trans. Signal Processing, 1999
Abstract

Cited by 109 (20 self)
Abstract — A methodology is developed to derive algorithms for optimal basis selection by minimizing diversity measures proposed by Wickerhauser and Donoho. These measures include the p-norm-like (ℓ(p≤1)) diversity measures and the Gaussian and Shannon entropies. The algorithm development methodology uses a factored representation for the gradient and involves successive relaxation of the Lagrangian necessary condition. This yields algorithms that are intimately related to the Affine Scaling Transformation (AST)-based methods commonly employed by the interior-point approach to nonlinear optimization. The algorithms minimizing the ℓ(p≤1) diversity measures are equivalent to a recently developed class of algorithms called FOCal Underdetermined System Solver (FOCUSS). The general nature of the methodology provides a systematic approach for deriving this class of algorithms and a natural mechanism for extending them. It also facilitates a better understanding of the convergence behavior and a strengthening of the convergence results. The Gaussian entropy minimization algorithm is shown to be equivalent to a well-behaved p = 0 norm-like optimization algorithm. Computer experiments demonstrate that the p-norm-like and the Gaussian entropy algorithms perform well, converging to sparse solutions. The Shannon entropy algorithm produces solutions that are concentrated but are shown to not converge to a fully sparse solution.
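The AST/FOCUSS iteration described above can be sketched in a few lines. The following is an illustrative NumPy implementation, not the paper's own code (the function name `focuss` and the iteration count are assumptions): each step solves a weighted minimum-norm problem with affine-scaling weights, and p = 0 corresponds to the Gaussian-entropy case.

```python
import numpy as np

def focuss(A, b, p=0.0, n_iter=30):
    """Sketch of the basic FOCUSS iteration for underdetermined A x = b.

    Each step re-solves a weighted minimum-norm problem,
        x_{k+1} = W_k (A W_k)^+ b,  W_k = diag(|x_k|^(1 - p/2)),
    which progressively concentrates energy on a few coefficients.
    """
    x = np.linalg.pinv(A) @ b                 # minimum 2-norm starting point
    for _ in range(n_iter):
        w = np.abs(x) ** (1.0 - p / 2.0)      # affine-scaling weights
        x = w * (np.linalg.pinv(A * w) @ b)   # A * w scales column j by w[j]
    return x
```

For p = 0 the weights are simply |x_k|, and the iterate typically converges to a basic (at most m-sparse) solution of A x = b, consistent with the abstract's claim that the Gaussian-entropy algorithm behaves like a p = 0 norm-like method.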
An improved FOCUSS-based learning algorithm for solving sparse linear inverse problems
in Conf. Record of the Thirty-Fifth Asilomar Conf. on Signals, Systems and Computers, 2001
Abstract

Cited by 28 (5 self)
We develop an improved algorithm for solving blind sparse linear inverse problems where both the dictionary (possibly overcomplete) and the sources are unknown. The algorithm is derived in the Bayesian framework by the maximum a posteriori method, with the choice of prior distribution restricted to the class of concave/Schur-concave functions, which has been shown previously to be a sufficient condition for sparse solutions. This formulation leads to a constrained and regularized minimization problem which can be solved in part using the FOCUSS (Focal Underdetermined System Solver) algorithm for vector selection. We introduce three key improvements in the algorithm: an efficient way of adjusting the regularization parameter, column normalization that restricts the learned dictionary, and reinitialization to escape from local optima. Experiments were performed using synthetic data with matrix sizes up to 64×128, and the algorithm is shown to solve the blind identification problem, recovering both the dictionary and the sparse sources. The improved algorithm is shown to be much more accurate than the original FOCUSS dictionary-learning algorithm when using large matrices. We also test our algorithm on natural images, and show that a learned overcomplete representation can encode the data more efficiently than a complete basis at the same level of accuracy.
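The alternating structure behind this kind of dictionary learning — sparse-code the sources, refit the dictionary, normalize its columns — can be sketched as below. This is an illustrative scheme under assumed conventions, not the authors' exact algorithm (the regularization-parameter schedule and the reinitialization step are omitted, and the function names are ours). Note that column normalization must absorb the scale into the sources so the product A·X is unchanged.

```python
import numpy as np

def normalize_columns(A, X):
    """Column normalization: force unit-norm dictionary atoms and absorb
    the scale into the sources so the product A @ X is unchanged."""
    norms = np.maximum(np.linalg.norm(A, axis=0), 1e-12)
    return A / norms, X * norms[:, None]

def dict_learn(Y, n_atoms, n_outer=30, n_inner=5, seed=0):
    """Illustrative alternating scheme (not the paper's exact algorithm):
    FOCUSS-like sparse coding of each column of Y, then a least-squares
    dictionary update followed by column normalization."""
    rng = np.random.default_rng(seed)
    m, N = Y.shape
    A = rng.standard_normal((m, n_atoms))
    A /= np.linalg.norm(A, axis=0)
    X = np.linalg.pinv(A) @ Y
    for _ in range(n_outer):
        for _ in range(n_inner):                  # p = 0 FOCUSS-style coding
            for j in range(N):
                w = np.abs(X[:, j])
                X[:, j] = w * (np.linalg.pinv(A * w) @ Y[:, j])
        A = Y @ np.linalg.pinv(X)                 # least-squares dictionary fit
        A, X = normalize_columns(A, X)
    return A, X
```

Without the normalization step, the scale ambiguity between dictionary and sources lets atom norms drift arbitrarily, which is exactly what the column-normalization improvement prevents.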
FOCUSS-based dictionary learning algorithms
in Proceedings of the SPIE Volume 4119: Wavelet Applications in Signal and Image Processing VIII, 2000
Abstract

Cited by 13 (2 self)
Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log-priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as ‘concepts,’ ‘features’ or ‘words’ capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial ‘25 words or less’), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS, an affine scaling transformation (AST)-like sparse signal representation algorithm recently developed at UCSD, and an update of the dictionary using these sparse representations.
Backward sequential elimination for sparse vector subset selection. Signal Processing 81, 1849
Bayesian Modelling of Music: Algorithmic Advances and . . . , 2005
Abstract
In order to perform many signal processing tasks such as classification, pattern recognition and coding, it is helpful to specify a signal model in terms of meaningful signal structures. In general, designing such a model is complicated, and for many signals it is not feasible to specify the appropriate structure. Adaptive models overcome this problem by learning structures from a set of signals. Such adaptive models need to be general enough that they can represent relevant structures. However, more general models often require additional constraints to guide the learning procedure. In this thesis
Méthodes de Séparation de Sources dans le Cas (Source Separation Methods in the Case)
Abstract
In this contribution, the underdetermined blind source separation problem is addressed. We recall some known identifiability results, and present various methods for the identification of the mixture matrix and the extraction of the sources. Finally, computer simulations illustrate the identification algorithms.
Incremental Multi-Source Recognition with Non-Negative Matrix Factorization. Master’s Thesis, 2009
Abstract
Revised on February 09, 2010 to correct some errors and typos. This master’s thesis is dedicated to incremental multi-source recognition using non-negative matrix factorization. Particular attention is paid to providing a mathematical framework for sparse coding schemes in this context. The applications of non-negative matrix factorization to sound recognition are discussed to give the outline, position and contributions of the present work with respect to the literature. The problem of incremental recognition is addressed within the framework of non-negative decomposition, a modified non-negative matrix factorization scheme where the incoming signal is projected onto a basis of templates learned offline prior to the decomposition. As sparsity appears to be one of the main issues in this context, a theoretical approach is followed to overcome the problem. The main contribution of the present work is the formulation of a sparse non-negative matrix factorization framework. This formulation is motivated and illustrated with a synthetic experiment, and then addressed with convex optimization techniques such as gradient optimization, convex quadratic programming and second-order cone programming. Several algorithms are proposed to address the question of sparsity. To provide results and validation, some of these algorithms are applied to preliminary evaluations, notably incremental multiple-pitch and multiple-instrument recognition, and incremental analysis of complex auditory scenes.
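The non-negative decomposition step described above — templates fixed after offline learning, only the activations updated under a sparsity penalty — can be sketched with a standard multiplicative update (here for the Euclidean objective with an l1 penalty, in the style of non-negative sparse coding; the function name and parameter values are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

def nn_decompose(V, W, n_iter=500, lam=0.01, eps=1e-12):
    """Non-negative decomposition: the template matrix W is learned offline
    and kept fixed; only the activations H are updated.

    Multiplicative update for 0.5 * ||V - W H||_F^2 + lam * sum(H);
    the l1 term promotes sparse activations, and the multiplicative form
    keeps H entrywise non-negative throughout.
    """
    H = np.ones((W.shape[1], V.shape[1]))
    for _ in range(n_iter):
        H = H * (W.T @ V) / (W.T @ W @ H + lam + eps)
    return H
```

In an incremental setting the templates W would come from an offline learning stage (e.g., ordinary NMF on isolated sources), so that at recognition time only this comparatively cheap activation update runs on the incoming signal.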
Learning Non-Negative Sparse Image Codes by Convex Programming
Abstract
Example-based learning of codes that statistically encode general image classes is of vital importance for computational vision. Recently, non-negative matrix factorization (NMF) was suggested to provide image codes that are both sparse and localized, in contrast to established non-local methods like PCA. In this paper we adopt and generalize this approach to develop a novel learning framework that allows efficient computation of sparsity-controlled invariant image codes by a well-defined sequence of convex conic programs. Applying the corresponding parameter-free algorithm to various image classes results in semantically relevant and transformation-invariant image representations that are remarkably robust against noise and quantization.
Dictionary Learning Algorithms for Sparse Representation (letter communicated by Hagai Attias)
Abstract
Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of