Results 1 - 6 of 6
Dictionaries for Sparse Representation Modeling
Abstract

Cited by 44 (3 self)
Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a prespecified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, a proper dictionary can be chosen in one of two ways: (i) building a sparsifying dictionary based on a mathematical model of the data, or (ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as wavelets, wavelet packets, contourlets, and curvelets, all aiming to exploit 1D and 2D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, attaching the dictionary to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD, the K-SVD, the Generalized PCA, and others, this paper surveys the various options such training has to offer, up to the most recent contributions and structures.
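The learning paradigm surveyed above alternates between a sparse coding step and a dictionary update step. A minimal sketch of a MOD-style update on toy data might look like the following; the dimensions and the crude hard-thresholding coder (standing in for a proper pursuit such as OMP) are illustrative assumptions, not the survey's own experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(D, X, k):
    """Crude stand-in for a pursuit step: keep the k largest-magnitude
    correlations per signal and zero the rest."""
    C = D.T @ X
    drop = np.argsort(np.abs(C), axis=0)[:-k, :]
    np.put_along_axis(C, drop, 0.0, axis=0)
    return C

def mod_update(X, C):
    """MOD dictionary update: least-squares fit D = X C^+, then rescale
    so atoms stay unit norm (compensating in the coefficients)."""
    D = X @ np.linalg.pinv(C)
    norms = np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return D / norms, C * norms.T

# toy problem: 20-dim signals, 50 training examples, 30 atoms, sparsity 3
X = rng.standard_normal((20, 50))
D = rng.standard_normal((20, 30))
D /= np.linalg.norm(D, axis=0, keepdims=True)

for _ in range(10):
    C = sparse_code(D, X, k=3)
    D, C = mod_update(X, C)

err = np.linalg.norm(X - D @ C) / np.linalg.norm(X)
print(f"relative residual after training: {err:.3f}")
```

K-SVD differs from this sketch mainly in the update step: it refines atoms one at a time via a rank-one SVD of the residual, rather than solving one global least-squares problem.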
Sparsity constraints for hyperspectral data analysis: linear mixture model and beyond, 2013
DICTIONARY LEARNING WITH SPATIO-SPECTRAL SPARSITY CONSTRAINTS
Author manuscript, published in "SPARS'09 Signal Processing with Adaptive Sparse Structured Representations (2009)"
Abstract
where the entries of the sparse matrix of coefficients ν representing X in the multichannel dictionary Ω = A ⊗ Φ are denoted ν_{k′k}, and N ∈ R^{m×t} is included to account for Gaussian instrumental noise or modeling errors. GMCA further assumes that the dictionary of spatial waveforms Φ is known beforehand, while the spectral components A, also called the mixing matrix in blind source separation (BSS) applications, are learned from the data. The image from the p-th channel is represented here as the p-th row of X, x_p. The successful use of GMCA in a variety of multichannel data processing applications, such as BSS [2] and color image restoration and inpainting [1], has motivated research to extend its applicability. In particular, there are instances where additional prior knowledge urges one to further constrain the dictionary space. For instance, one may want to enforce equality constraints on some atoms, or the positivity or sparsity of the learned dictionary atoms. Building on GMCA, the purpose of this contribution is to describe a new dictionary learning algorithm for so-called hyperspectral data processing. Hyperspectral imaging systems collect data in a large number (up to several hundreds) of contiguous regions of the spectrum, so it makes sense to assume, for instance, that some physical property will show regularity from one channel to the next. In fact, the proposed algorithm, referred to as hypGMCA, assumes that the multichannel atoms to be learned from the collected data exhibit diversely sparse spatial morphologies as well as diversely sparse spectral signatures in specified dictionaries Φ ∈ R^{t×t′} and Ψ ∈ R^{m×m′} of spatial and spectral waveforms, respectively. The proposed algorithm is used to learn from the data rank-one multichannel atoms which are diversely sparse [2] in a given larger multichannel dictionary.
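As a toy illustration of the data model described above (not of the hypGMCA algorithm itself), the sketch below builds a small multichannel data matrix as a sum of rank-one atoms a_j φ_jᵀ, each pairing a sparse spectral signature with a sparse spatial waveform, plus Gaussian noise. All dimensions and sparsity levels are made up, and sparsity is taken in the canonical basis purely for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)

m, t = 8, 64      # spectral channels x spatial samples
K = 3             # number of rank-one multichannel atoms

# each atom pairs a sparse spectral signature (column of A) with a
# sparse spatial waveform (row of Phi)
A = np.zeros((m, K))
Phi = np.zeros((K, t))
for j in range(K):
    A[rng.choice(m, 2, replace=False), j] = rng.standard_normal(2)
    Phi[j, rng.choice(t, 5, replace=False)] = rng.standard_normal(5)

N = 0.01 * rng.standard_normal((m, t))   # instrumental noise / model error
X = A @ Phi + N                          # sum of rank-one terms a_j phi_j^T

print("rank of the noiseless part:", np.linalg.matrix_rank(A @ Phi))
```

The learning problem the paper addresses runs in the opposite direction: recover the sparse factors A and Φ (in given spectral and spatial dictionaries Ψ and Φ) from the observed X alone.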
PARAMETRIC DICTIONARY LEARNING USING STEEPEST DESCENT
Abstract
In this paper, we suggest using a steepest descent algorithm for learning a parametric dictionary in which the structure of the atom functions is known in advance. The structure of the atoms allows us to find a steepest descent direction over the parameters instead of the steepest descent direction of the dictionary itself. We also use a thresholded version of the Smoothed-ℓ0 (SL0) algorithm for the sparse representation step in our proposed method. Our simulation results show that using an atom structure similar to the Gabor functions and learning the parameters of these Gabor-like atoms yield better representations of our noisy speech signal than non-parametric dictionary learning methods like K-SVD, in terms of mean square error of the sparse representations. Index Terms — Dictionary learning, sparse representation, parametric dictionary, sparse component analysis.
Audio Source Separation using Sparse Representations
Abstract
We address the problem of audio source separation, namely, the recovery of audio signals from recordings of mixtures of those signals. The sparse component analysis framework is a powerful method for achieving this. Sparse orthogonal transforms, in which only a few transform coefficients differ significantly from zero, are developed; once the signal has been transformed, energy is apportioned from each transform coefficient to each estimated source, and, finally, the signal is reconstructed using the inverse transform. The overriding aim of this chapter is to demonstrate how this framework, as exemplified here by two different decomposition methods which adapt to the signal to represent it sparsely, can be used to solve different problems in different mixing scenarios. To address the instantaneous (neither delays nor echoes) and underdetermined (more sources than mixtures) mixing model, a lapped orthogonal transform is adapted to the signal by selecting a basis from a library of predetermined bases. This method is closely related to the windowing methods used in the MPEG audio coding framework. In the anechoic (delays but no echoes) and determined (equal numbers of sources and mixtures) mixing case, a greedy adaptive transform is used, based on orthogonal basis functions that are learned from the observed data instead of being selected from a predetermined library of bases. This is found to encode the signal characteristics by introducing a feedback system between the bases and the observed data. Experiments on mixtures of speech and music signals demonstrate that these methods give good signal approximations and separation performance, and indicate promising directions for future research.
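The "apportion energy per transform coefficient" step can be sketched for the simplest possible setting: a determined, instantaneous 2x2 mixture of two sources with disjoint sparse supports in an orthonormal DCT domain, with the mixing matrix assumed known. Each transform coefficient is assigned wholly to the mixing direction it correlates with best (binary masking). This idealized setup, including the fixed DCT and the toy supports, is an assumption for illustration; the chapter's adaptive transforms are not reproduced:

```python
import numpy as np

n = 256
t = np.arange(n)
k = np.arange(n)

# orthonormal DCT-II analysis matrix (row k = k-th basis function)
C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * t[None, :] + 1) * k[:, None] / (2 * n))
C[0] /= np.sqrt(2.0)

# two sources with disjoint sparse supports in the DCT domain
c1 = np.zeros(n); c1[[10, 40]] = [1.0, 0.7]
c2 = np.zeros(n); c2[[25, 90]] = [0.9, 0.5]
s1, s2 = C.T @ c1, C.T @ c2          # synthesize time-domain sources

A = np.array([[1.0, 0.6],
              [0.4, 1.0]])           # instantaneous 2x2 mixing matrix
X = A @ np.vstack([s1, s2])          # two observed mixtures

# transform the mixtures, then assign each coefficient's energy to the
# mixing direction it correlates with best (binary masking)
Y = X @ C.T                          # rows: mixtures in the DCT domain
An = A / np.linalg.norm(A, axis=0)   # unit-norm mixing directions
proj = An.T @ Y                      # projection onto each direction
mask = np.abs(proj[0]) >= np.abs(proj[1])
est = np.zeros((2, n))
est[0, mask] = proj[0, mask]
est[1, ~mask] = proj[1, ~mask]
s1_hat, s2_hat = est @ C             # inverse transform (recovery up to gain)
```

With exactly disjoint supports and no noise, the masking recovers each source perfectly up to a gain factor; real mixtures only approximate this, which is why the chapter adapts the transform to the signal to make the supports as disjoint as possible.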