Fast and robust recursive algorithms for separable nonnegative matrix factorization. arXiv preprint arXiv:1208.1237, 2012.
R.: Robust near-separable nonnegative matrix factorization using linear optimization. Journal of Machine Learning Research, 2014.
The why and how of nonnegative matrix factorization. Regularization, Optimization, Kernels, and Support Vector Machines, Chapman & Hall/CRC, 2014.
Hierarchical Clustering of Hyperspectral Images Using Rank-Two Nonnegative Matrix Factorization. IEEE Transactions on Geoscience and Remote Sensing, 2015.
Abstract

Cited by 3 (2 self)
In this paper, we design a hierarchical clustering algorithm for high-resolution hyperspectral images. At the core of the algorithm, a new rank-two nonnegative matrix factorization (NMF) algorithm, motivated by convex geometry concepts, is used to split the clusters. The method starts with a single cluster containing all pixels, and, at each step, (i) selects a cluster in such a way that the error at the next step is minimized, and (ii) splits the selected cluster into two disjoint clusters using rank-two NMF in such a way that the clusters are well balanced and stable. The proposed method can also be used as an endmember extraction algorithm in the presence of pure pixels. The effectiveness of this approach is illustrated on several synthetic and real-world hyperspectral images, and is shown to outperform standard clustering techniques such as k-means, spherical k-means and standard NMF.
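The splitting step described in this abstract can be sketched as follows. This is a minimal illustration, assuming plain multiplicative-update NMF and a split-the-largest-cluster rule instead of the paper's error-minimizing selection and balancing criteria; all function names are ours:

```python
import numpy as np

def rank2_nmf(M, n_iter=200, seed=0):
    """Rank-two NMF via multiplicative updates: M (m x n) ~ W (m x 2) @ H (2 x n)."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    W = rng.random((m, 2)) + 1e-3
    H = rng.random((2, n)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ M) / (W.T @ W @ H + 1e-12)
        W *= (M @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

def hierarchical_rank2_clustering(M, n_clusters):
    """Recursively split clusters of columns (pixels) of M with rank-two NMF.
    Simplification: always split the largest cluster, not the one that
    minimizes the error at the next step as in the paper."""
    clusters = [np.arange(M.shape[1])]
    while len(clusters) < n_clusters:
        idx = max(range(len(clusters)), key=lambda i: clusters[i].size)
        cols = clusters.pop(idx)
        _, H = rank2_nmf(M[:, cols])
        assign = np.argmax(H, axis=0)   # each pixel goes to its dominant factor
        clusters += [cols[assign == 0], cols[assign == 1]]
    return clusters
```

Each split only factors the selected cluster's columns, so the cost per level stays proportional to the number of pixels.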
A Vavasis, “Semidefinite programming based preconditioning for more robust near-separable nonnegative matrix factorization,” arXiv preprint arXiv:1310.2273, 2013.
Ellipsoidal Rounding for Nonnegative Matrix Factorization Under Noisy Separability, 2013.
Abstract

Cited by 3 (0 self)
We present a numerical algorithm for nonnegative matrix factorization (NMF) problems under noisy separability. An NMF problem under separability can be stated as one of finding all vertices of the convex hull of data points. The research interest of this paper is to find vectors as close to the vertices as possible when noise is added to the data points. Our algorithm is designed to capture the shape of the convex hull of the data points by using its enclosing ellipsoid. We show that the algorithm has correctness and robustness properties from theoretical and practical perspectives; correctness here means that if the data points do not contain any noise, the algorithm can find the vertices of their convex hull; robustness means that if the data points contain noise, the algorithm can find near-vertices. Finally, we apply the algorithm to document clustering and report the experimental results.
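The enclosing-ellipsoid idea can be illustrated with Khachiyan's classical minimum-volume enclosing ellipsoid (MVEE) algorithm. This is a generic sketch, not the paper's algorithm; the "pick the k points closest to the boundary" step and both function names are our simplifications:

```python
import numpy as np

def mvee(P, tol=1e-4):
    """Minimum-volume enclosing ellipsoid of points P (n x d), Khachiyan's
    algorithm. Returns (A, c) with ellipsoid {x : (x-c)^T A (x-c) <= 1}."""
    n, d = P.shape
    Q = np.hstack([P, np.ones((n, 1))]).T          # (d+1) x n lifted points
    u = np.full(n, 1.0 / n)                        # weights on the points
    err = tol + 1.0
    while err > tol:
        X = Q @ np.diag(u) @ Q.T
        # leverage of each lifted point w.r.t. the weighted scatter matrix
        lev = np.einsum('in,ij,jn->n', Q, np.linalg.inv(X), Q)
        j = np.argmax(lev)
        step = (lev[j] - d - 1.0) / ((d + 1.0) * (lev[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = P.T @ u
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
    return A, c

def near_vertices(P, k):
    """Crude proxy for vertex identification: take the k points whose
    ellipsoidal norm (x-c)^T A (x-c) is largest, i.e. closest to the boundary."""
    A, c = mvee(P)
    D = P - c
    vals = np.einsum('ni,ij,nj->n', D, A, D)
    return np.argsort(vals)[-k:]
```

Interior points get a small ellipsoidal norm, so they are filtered out even when moderate noise perturbs the hull.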
Random projections for nonnegative matrix factorization. arXiv preprint arXiv:1405.4275, 2014.
Abstract

Cited by 3 (0 self)
Nonnegative matrix factorization (NMF) is a widely used tool for exploratory data analysis in many disciplines. In this paper, we describe an approach to NMF based on random projections and give a geometric analysis of a prototypical algorithm. Our main result shows the proto-algorithm requires κ̄k log k optimizations to find all the extreme columns of the matrix, where k is the number of extreme columns and κ̄ is a geometric condition number. We show empirically that the proto-algorithm is robust to noise and well-suited to modern distributed computing architectures.
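The geometric fact underlying this approach can be sketched directly: for any random direction g, the column maximizing g·x is a vertex of the convex hull of the columns, so repeated random projections collect the extreme columns. This is a minimal illustration of that fact, not the paper's proto-algorithm or its κ̄k log k analysis; the function name is ours:

```python
import numpy as np

def extreme_columns(M, n_proj=200, seed=0):
    """Collect extreme columns of M (m x n) by random projections:
    each Gaussian direction's maximizing column is a convex-hull vertex
    (ties have probability zero for continuous directions)."""
    rng = np.random.default_rng(seed)
    m, _ = M.shape
    G = rng.standard_normal((n_proj, m))   # one random direction per row
    hits = np.argmax(G @ M, axis=1)        # maximizer per direction
    return sorted(set(hits.tolist()))
```

Because each projection is an independent matrix-vector product, the work parallelizes trivially, which is the distributed-computing appeal the abstract mentions.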
Provable Algorithms for Machine Learning Problems, 2013.
Abstract

Cited by 2 (0 self)
Modern machine learning algorithms can extract useful information from text, images and videos. All these applications involve solving NP-hard problems in the average case using heuristics. What properties of the input allow it to be solved efficiently? Theoretically analyzing the heuristics is often very challenging, and few results were known. This thesis takes a different approach: we identify natural properties of the input, then design new algorithms that provably work assuming the input has these properties. We are able to give new, provable and sometimes practical algorithms for learning tasks related to text corpora, images and social networks. The first part of the thesis presents new algorithms for learning the thematic structure in documents. We show that, under a reasonable assumption, it is possible to provably learn many topic models, including the famous Latent Dirichlet Allocation. Ours is the first provable algorithm for topic modeling. An implementation runs 50 times faster than the latest MCMC implementation and produces comparable results. The second part of the thesis provides ideas for provably learning deep, sparse representations. We start with sparse linear representations and give the first algorithm for the dictionary learning problem with provable guarantees. We then apply similar ideas to deep learning: under reasonable assumptions, our algorithms can learn a deep network built from denoising autoencoders. The final part of the thesis develops a framework for learning latent variable models. We demonstrate how various latent variable models can be reduced to orthogonal tensor decomposition and then solved using the tensor power method. We give a tight perturbation analysis for the tensor power method, which reduces the number of samples required to learn many latent variable models.
In theory, the assumptions in this thesis help us understand why intractable problems in machine learning can often be solved; in practice, the results suggest inherently new approaches for machine learning. We hope the assumptions and algorithms inspire new research problems and learning algorithms.
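The tensor power method mentioned in the final part can be sketched for a symmetric, orthogonally decomposable 3-tensor T = Σᵢ λᵢ vᵢ⊗vᵢ⊗vᵢ: iterate u ← T(I, u, u) / ‖T(I, u, u)‖. This is the generic power iteration with random restarts, not the thesis's perturbation-robust variant; the function name is ours:

```python
import numpy as np

def tensor_power_method(T, n_restarts=10, n_iter=100, seed=0):
    """Recover one (eigenvalue, eigenvector) pair of a symmetric
    orthogonally decomposable 3-tensor by power iteration with restarts,
    keeping the restart that attains the largest eigenvalue."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_restarts):
        u = rng.standard_normal(T.shape[0])
        u /= np.linalg.norm(u)
        for _ in range(n_iter):
            u = np.einsum('ijk,j,k->i', T, u, u)   # u <- T(I, u, u)
            u /= np.linalg.norm(u)
        lam = np.einsum('ijk,i,j,k->', T, u, u, u)  # Rayleigh-quotient analogue
        if best is None or lam > best[0]:
            best = (lam, u)
    return best
```

Once a component is found, it is deflated (T ← T − λ v⊗v⊗v) and the iteration is repeated to extract the remaining components.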
Self-Dictionary Sparse Regression for Hyperspectral Unmixing: Greedy Pursuit and Pure Pixel Search Are Related
Abstract

Cited by 1 (1 self)
Abstract—This paper considers a recently emerged hyperspectral unmixing formulation based on sparse regression of a self-dictionary multiple measurement vector (SD-MMV) model, wherein the measured hyperspectral pixels are used as the dictionary. Operating under the pure pixel assumption, this SD-MMV formalism is special in that it allows simultaneous identification of the endmember spectral signatures and the number of endmembers. Previous SD-MMV studies mainly focus on convex relaxations. In this study, we explore the alternative of greedy pursuit, which generally provides efficient and simple algorithms. In particular, we design a greedy SD-MMV algorithm using simultaneous orthogonal matching pursuit. Intriguingly, the proposed greedy algorithm is shown to be closely related to some existing pure pixel search algorithms, especially the successive projection algorithm (SPA). Thus, a link between SD-MMV and pure pixel search is revealed. We then perform exact recovery analyses, and prove that the proposed greedy algorithm is robust to noise, including in its identification of the (unknown) number of endmembers, under a sufficiently low noise level. The identification performance of the proposed greedy algorithm is demonstrated through both synthetic and real-data experiments. Index Terms—Greedy pursuit, hyperspectral unmixing, number-of-endmembers estimation, self-dictionary sparse regression.
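The successive projection algorithm (SPA) to which this abstract relates greedy pursuit is simple enough to sketch: repeatedly pick the column of largest residual norm, then project every column onto the orthogonal complement of the pick. This is the textbook SPA shown to illustrate the pure pixel search family, not the paper's greedy SD-MMV algorithm via simultaneous orthogonal matching pursuit:

```python
import numpy as np

def spa(M, k):
    """Successive projection algorithm on the columns of M (m x n).
    Under the pure pixel (separability) assumption, the k returned
    indices correspond to endmember columns."""
    R = M.astype(float).copy()          # residual matrix
    picked = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        picked.append(j)
        v = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(v, v @ R)         # project out the selected direction
    return picked
```

Each step costs one matrix-vector product and a rank-one update, which is why SPA-style pure pixel search scales to full hyperspectral scenes.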