Results 1 - 10 of 13
Local Identification of Overcomplete Dictionaries, 2015
Abstract (Cited by 5, 0 self)
This paper presents the first theoretical results showing that stable identification of overcomplete µ-coherent dictionaries Φ ∈ R^{d×K} is locally possible from training signals with sparsity levels S up to the order O(µ^{-2}) and signal-to-noise ratios up to O(√d). In particular the dictionary is recoverable as the local maximum of a new maximization criterion that generalizes the K-means criterion. For this maximization criterion, results for asymptotic exact recovery for sparsity levels up to O(µ^{-1}) and stable recovery for sparsity levels up to O(µ^{-2}), as well as signal-to-noise ratios up to O(√d), are provided. These asymptotic results translate to finite sample size recovery results with high probability as long as the sample size N scales as O(K^3 d S ε̃^{-2}), where the recovery precision ε̃ can go down to the asymptotically achievable precision. Further, to actually find the local maxima of the new criterion, a very simple Iterative Thresholding and K (signed) Means algorithm (ITKM), which has complexity O(dKN) in each iteration, is presented and its local efficiency is demonstrated in several experiments.
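The ITKM scheme named in the abstract alternates a thresholding step (keep each signal's S largest dictionary correlations) with a signed-means atom update. A minimal sketch of such an iteration, assuming a d×N signal matrix and random unit-norm initialization (the function name and parameters are illustrative, not taken from the paper's code):

```python
import numpy as np

def itkm(Y, K, S, n_iter=20, seed=0):
    """Sketch of Iterative Thresholding & K (signed) Means.

    Y: d x N matrix of training signals; K atoms; sparsity level S.
    The per-iteration cost is O(dKN), dominated by the correlation D.T @ Y.
    """
    d, N = Y.shape
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((d, K))
    D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
    for _ in range(n_iter):
        C = D.T @ Y                          # K x N correlations
        # thresholding: keep the S largest |correlations| per signal
        idx = np.argsort(-np.abs(C), axis=0)[:S, :]
        D_new = np.zeros_like(D)
        for n in range(N):
            for k in idx[:, n]:
                # signed mean: add each signal with the sign of its correlation
                D_new[:, k] += np.sign(C[k, n]) * Y[:, n]
        norms = np.linalg.norm(D_new, axis=0)
        nz = norms > 0
        D[:, nz] = D_new[:, nz] / norms[nz]  # renormalise the updated atoms
    return D
```

Atoms that no signal selects in an iteration are simply left unchanged here; how unused atoms are handled is a design choice the abstract does not specify.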
Noisy Matrix Completion under Sparse Factor Models, IEEE Transactions on Information Theory, 2014
Sparse and spurious: dictionary learning with noise and outliers, 2014
Abstract (Cited by 2, 0 self)
A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary. While this paradigm has led to numerous empirical successes in various fields ranging from image to audio processing, there have only been a few theoretical arguments supporting this evidence. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not been fully analyzed yet. In this paper, we consider a probabilistic model of sparse signals, and show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals. Our study takes into account the case of over-complete dictionaries, noisy signals, and possible outliers, thus extending previous work limited to noiseless settings and/or under-complete dictionaries. The analysis we conduct is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the level of noise, can scale with respect to the dimension of the signals, the number of atoms, the sparsity and the number of observations.
On the Identifiability of Overcomplete Dictionaries via the Minimisation Principle Underlying K-SVD, 2013
Abstract
This article gives theoretical insights into the performance of K-SVD, a dictionary learning algorithm that has gained significant popularity in practical applications. The particular question studied here is when a dictionary Φ ∈ R^{d×K} can be recovered as a local minimum of the minimisation criterion underlying K-SVD from a set of N training signals y_n = Φx_n. A theoretical analysis of the problem leads to two types of identifiability results, assuming the training signals are generated from a tight frame with coefficients drawn from a random symmetric distribution. First, asymptotic results show that in expectation the generating dictionary can be recovered exactly as a local minimum of the K-SVD criterion if the coefficient distribution exhibits sufficient decay. This decay can be characterised by the coherence of the dictionary and the ℓ1-norm of the coefficients. Based on the asymptotic results it is further demonstrated that, given a finite number of training samples N such that N/log N = O(K^3 d), except with probability O(N^{-Kd}) there is a local minimum of the K-SVD criterion within distance O(K N^{-1/4}) of the generating dictionary. Index Terms: dictionary learning, sparse coding, K-SVD, finite sample size, sampling complexity, dictionary identification, minimisation criterion, sparse representation
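The K-SVD criterion the abstract analyses is min ||Y - DX||_F over dictionaries D and column-sparse coefficients X, minimised one atom at a time via a rank-1 SVD of the residual. A minimal sketch of that dictionary-update step, assuming D, X are already given (variable names are illustrative):

```python
import numpy as np

def ksvd_atom_update(Y, D, X):
    """One sweep of the K-SVD dictionary update.

    For each atom k, the residual restricted to the signals whose code uses
    atom k is approximated by its best rank-1 factorisation (leading SVD
    pair), updating both the atom and its coefficients.
    """
    for k in range(D.shape[1]):
        users = np.nonzero(X[k, :])[0]       # signals whose code uses atom k
        if users.size == 0:
            continue                         # skip atoms no signal uses
        # residual without atom k's contribution, on those signals only
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                    # new unit-norm atom
        X[k, users] = s[0] * Vt[0, :]        # matching coefficients
    return D, X
```

Because each rank-1 approximation is optimal on its restricted residual, the objective ||Y - DX||_F is non-increasing over a sweep; the sparse-coding step that re-selects the supports of X is separate and not sketched here.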