On the Identifiability of Overcomplete Dictionaries via the Minimisation Principle Underlying K-SVD (2013)

by Karin Schnass
Results 1 - 10 of 13

Sample Complexity of Dictionary Learning and other Matrix Factorizations

by Remi Gribonval, Rodolphe Jenatton, Francis Bach, Martin Kleinsteuber, Matthias Seibert, 2014

Cited by 7 (4 self). Abstract not found.

Local Identification of Overcomplete Dictionaries

by Karin Schnass, 2015

Cited by 5 (0 self)

This paper presents the first theoretical results showing that stable identification of overcomplete µ-coherent dictionaries Φ ∈ R^{d×K} is locally possible from training signals with sparsity levels S up to the order O(µ^{-2}) and signal-to-noise ratios up to O(√d). In particular, the dictionary is recoverable as the local maximum of a new maximization criterion that generalizes the K-means criterion. For this maximization criterion, asymptotic exact recovery is established for sparsity levels up to O(µ^{-1}), and stable recovery for sparsity levels up to O(µ^{-2}) as well as signal-to-noise ratios up to O(√d). These asymptotic results translate to finite-sample recovery results that hold with high probability as long as the sample size N scales as O(K^3 d S ε̃^{-2}), where the recovery precision ε̃ can go down to the asymptotically achievable precision. Further, to actually find the local maxima of the new criterion, a very simple Iterative Thresholding and K (signed) Means algorithm (ITKM), which has complexity O(dKN) in each iteration, is presented and its local efficiency is demonstrated in several experiments.
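
The ITKM update outlined in this abstract is simple enough to write down. Below is a minimal NumPy sketch of one iteration, assuming the thresholding-plus-signed-means form the abstract describes; the function name, interface, and handling of unused atoms are illustrative assumptions, not taken from the paper.

import numpy as np

def itkm_step(Y, Psi, S):
    # One ITKM iteration (illustrative sketch, not the paper's reference code).
    # Y   : d x N matrix of training signals
    # Psi : d x K current dictionary estimate with unit-norm columns
    # S   : sparsity level used for thresholding
    G = Psi.T @ Y  # K x N inner products; computing these dominates the O(dKN) cost
    # Thresholding: for each signal keep the S atoms with largest |<psi_k, y_n>|.
    idx = np.argpartition(-np.abs(G), S - 1, axis=0)[:S]
    Psi_new = np.zeros_like(Psi)
    for n in range(Y.shape[1]):
        for k in idx[:, n]:
            # K signed means: accumulate each selected signal,
            # oriented by the sign of its inner product with the atom.
            Psi_new[:, k] += np.sign(G[k, n]) * Y[:, n]
    norms = np.linalg.norm(Psi_new, axis=0)
    used = norms > 0
    Psi_new[:, used] /= norms[used]    # renormalise the updated atoms
    Psi_new[:, ~used] = Psi[:, ~used]  # keep atoms that no signal selected
    return Psi_new

Each iteration is dominated by the K x N inner-product matrix, matching the O(dKN) per-iteration complexity quoted above.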

Noisy Matrix Completion under Sparse Factor Models

by Akshay Soni, Swayambhoo Jain, Jarvis Haupt, Stefano Gonella - IEEE TRANSACTIONS ON INFORMATION THEORY, 2014

Cited by 2 (1 self). Abstract not found.

Sparse and spurious: dictionary learning with noise and outliers

by Rodolphe Jenatton, Francis Bach, 2014

Cited by 2 (0 self)

A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary. While this paradigm has led to numerous empirical successes in various fields ranging from image to audio processing, there have only been a few theoretical arguments supporting this evidence. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not been fully analyzed yet. In this paper, we consider a probabilistic model of sparse signals, and show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals. Our study takes into account the case of overcomplete dictionaries, noisy signals, and possible outliers, thus extending previous work limited to noiseless settings and/or undercomplete dictionaries. The analysis we conduct is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the level of noise, can scale with respect to the dimension of the signals, the number of atoms, the sparsity and the number of observations.
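
For reference, the non-convex program whose local minima are at issue can be written in the standard ℓ1-regularised form used in this literature (the notation below is assumed, since the abstract does not fix it): with training signals y_1, ..., y_N and dictionaries constrained to unit-norm columns,

\min_{D \in \mathcal{D}} \; \frac{1}{N} \sum_{n=1}^{N} \min_{\alpha_n \in \mathbb{R}^K} \Big( \tfrac{1}{2} \| y_n - D \alpha_n \|_2^2 + \lambda \| \alpha_n \|_1 \Big),
\qquad \mathcal{D} = \{ D \in \mathbb{R}^{d \times K} : \| d_k \|_2 = 1 \}.

The claim is then that, under the paper's probabilistic signal model, this objective admits a local minimum near the generating dictionary with high probability.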

Citation Context

...e function based on ℓ1 minimization under equality constraints [Gribonval and Schnass, 2010, Geng et al., 2011], for which there is no known efficient heuristic implementation, or on an ℓ0 criterion [Schnass, 2013] à la K-SVD [Aharon et al., 2006]. More algorithmic approaches have also recently emerged [Spielman et al., 2012, Arora et al., 2013] demonstrating the existence of provably good algorithms of polyn...
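
The two identification principles contrasted in this excerpt minimise different criteria. As a hedged paraphrase of the cited works (notation assumed), the ℓ1 principle of Gribonval and Schnass [2010] and Geng et al. [2011] seeks the dictionary minimising the total ℓ1-norm of the coefficients under exact-representation constraints,

\min_{\Phi \in \mathcal{D},\, X} \; \| X \|_1 \quad \text{subject to} \quad \Phi X = Y,

whereas the ℓ0 criterion of Schnass [2013] à la K-SVD replaces the ℓ1 penalty with a hard sparsity constraint and minimises approximation error; it is stated explicitly after the last entry below.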

PERFORMANCE LIMITS OF DICTIONARY LEARNING FOR SPARSE CODING

by Alexander Jung, Yonina C. Eldar, et al., 2014

Cited by 2 (0 self). Abstract not found.

On the Identifiability of Overcomplete Dictionaries via the Minimisation Principle Underlying K-SVD

by Karin Schnass, 2013

This article gives theoretical insights into the performance of K-SVD, a dictionary learning algorithm that has gained significant popularity in practical applications. The particular question studied here is when a dictionary Φ ∈ R^{d×K} can be recovered as a local minimum of the minimisation criterion underlying K-SVD from a set of N training signals y_n = Φx_n. A theoretical analysis of the problem leads to two types of identifiability results, assuming the training signals are generated from a tight frame with coefficients drawn from a random symmetric distribution. First, asymptotic results show that, in expectation, the generating dictionary can be recovered exactly as a local minimum of the K-SVD criterion if the coefficient distribution exhibits sufficient decay. This decay can be characterised by the coherence of the dictionary and the ℓ1-norm of the coefficients. Based on the asymptotic results it is further demonstrated that, given a finite number of training samples N such that N/log N = O(K^3 d), except with probability O(N^{-Kd}) there is a local minimum of the K-SVD criterion within distance O(K N^{-1/4}) of the generating dictionary.

Index Terms: dictionary learning, sparse coding, K-SVD, finite sample size, sampling complexity, dictionary identification, minimisation criterion, sparse representation
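
In this notation, the minimisation criterion underlying K-SVD to which the identifiability results refer can be sketched as follows (the constraint-set notation is an assumption; the abstract itself does not spell it out):

\min_{\Phi \in \mathcal{D},\, X} \; \sum_{n=1}^{N} \| y_n - \Phi x_n \|_2^2
\quad \text{subject to} \quad \| x_n \|_0 \le S \ \text{for all } n,
\qquad \mathcal{D} = \{ \Phi \in \mathbb{R}^{d \times K} : \| \phi_k \|_2 = 1 \}.

The finite-sample result above then locates a local minimum of this criterion within distance O(K N^{-1/4}) of the generating dictionary, except with probability O(N^{-Kd}).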