Results 1–10 of 66
Sparse nonnegative matrix factorizations via alternating nonnegativity-constrained least squares for microarray data analysis
 Bioinformatics, Vol. 23, No. 12, pages 1495–1502
, 2007
Nonnegative Matrix Factorization Based on Alternating Nonnegativity Constrained Least Squares and Active Set Method
Abstract

Cited by 86 (7 self)
The nonnegative matrix factorization (NMF) determines a lower rank approximation of a matrix A ∈ R^{m×n}, where an integer k < min(m, n) is given and nonnegativity is imposed on all components of the factors W ∈ R^{m×k} and H ∈ R^{k×n}. The NMF has attracted much attention for over a decade and has been successfully applied to numerous data analysis problems. In applications where the components of the data are necessarily nonnegative, such as chemical concentrations in experimental results or pixels in digital images, the NMF provides a more relevant interpretation of the results since it gives nonsubtractive combinations of nonnegative basis vectors. In this paper, we introduce an algorithm for the NMF based on alternating nonnegativity-constrained least squares (NMF/ANLS) and the active-set-based fast algorithm for nonnegativity-constrained least squares with multiple right-hand-side vectors, and discuss its convergence properties and a rigorous convergence criterion based on the Karush-Kuhn-Tucker (KKT) conditions. In addition, we also describe algorithms for sparse NMFs and regularized NMF. We show how we impose a sparsity constraint on one of the factors by L1-norm minimization and discuss its convergence properties. Our algorithms are compared to other commonly used NMF algorithms in the literature on several test data sets in terms of their convergence behavior.
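The alternating scheme described in this abstract can be sketched in a few lines of Python. This is a minimal illustration assuming NumPy and SciPy are available; it solves each nonnegativity-constrained subproblem with SciPy's generic `nnls` rather than the paper's block active-set solver, so it shows the structure of ANLS, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(A, k, n_iter=20, seed=0):
    """NMF via alternating nonnegativity-constrained least squares.

    Approximately minimizes ||A - W H||_F subject to W >= 0, H >= 0,
    by alternating two exact nonnegative least squares solves.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        # Fix W: solve min_h ||W h - a_j||_2, h >= 0, for each column of A.
        for j in range(n):
            H[:, j], _ = nnls(W, A[:, j])
        # Fix H: solve the transposed problem for each row of W.
        for i in range(m):
            W[i, :], _ = nnls(H.T, A[i, :])
    return W, H
```

Because each block subproblem is solved exactly, the objective is nonincreasing across iterations, which is what makes the KKT-based stopping criterion mentioned in the abstract meaningful.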
Nonnegative Matrix Factorization with Constrained Second Order Optimization
, 2007
Abstract

Cited by 25 (8 self)
Nonnegative Matrix Factorization (NMF) solves the following problem: find nonnegative matrices A ∈ R_+^{M×R} and X ∈ R_+^{R×T} such that Y ≅ AX, given only Y ∈ R^{M×T} and the assigned index R. This method has found a wide spectrum of applications in signal and image processing, such as blind source separation, spectra recovering, pattern recognition, segmentation, or clustering. Such a factorization is usually performed with an alternating gradient descent technique that is applied to the squared Euclidean distance or Kullback-Leibler divergence. This approach has been used in the widely known Lee-Seung NMF algorithms, which belong to a class of multiplicative iterative algorithms. It is well known that these algorithms, in spite of their low complexity, are slowly convergent, give only a positive solution (not nonnegative), and can easily fall into local minima of a nonconvex cost function. In this paper, we propose to take advantage of the second-order terms of a cost function to overcome the disadvantages of gradient (multiplicative) algorithms. First, a projected quasi-Newton method is presented, where a regularized Hessian with the Levenberg-Marquardt approach is inverted with the Q-less QR decomposition. Since the matrices A and/or X are usually sparse, a more sophisticated hybrid approach based on the Gradient Projection Conjugate Gradient (GPCG) algorithm, invented by Moré and Toraldo, is adapted for NMF. The Gradient Projection (GP) method is exploited to find zero-valued components (active), and then Newton steps are taken only to compute the positive components (inactive) with the Conjugate Gradient (CG) method. As a cost function, we use the α-divergence, which unifies many well-known cost functions. We applied our new NMF method to a Blind Source Separation (BSS) problem with mixed signals and images. The results demonstrate the high robustness of our method.
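The multiplicative Lee-Seung updates that this abstract argues against can be sketched in plain NumPy. This is a hedged illustration (the small `eps` guard and initialization are mine, not from any paper); it shows why the iterates stay strictly positive, matching the abstract's remark that these updates yield a positive rather than truly nonnegative solution.

```python
import numpy as np

def nmf_multiplicative(Y, R, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for min ||Y - A X||_F^2.

    Each update multiplies the current factor elementwise by a ratio of
    nonnegative terms, so strictly positive iterates remain strictly
    positive: exact zeros are never produced.
    """
    rng = np.random.default_rng(seed)
    M, T = Y.shape
    A = rng.random((M, R)) + 0.1
    X = rng.random((R, T)) + 0.1
    for _ in range(n_iter):
        X *= (A.T @ Y) / (A.T @ A @ X + eps)   # update X with A fixed
        A *= (Y @ X.T) / (A @ X @ X.T + eps)   # update A with X fixed
    return A, X
```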
A novel discriminant nonnegative matrix factorization algorithm with applications to facial image characterization problems
 IEEE Transactions on Information Forensics and Security
Abstract

Cited by 18 (5 self)
Abstract—The methods introduced so far regarding discriminant nonnegative matrix factorization (DNMF) do not guarantee convergence to a stationary limit point. In order to remedy this limitation, a novel DNMF method that uses projected gradients is presented. The proposed algorithm employs some extra modifications that make the method more suitable for classification tasks. The usefulness of the proposed technique for frontal face verification and facial expression recognition problems is demonstrated. Index Terms—Facial expression recognition, frontal face verification, linear discriminant analysis, nonnegative matrix factorization (NMF), projected gradients.
Nonnegative Matrix Factorization: A Comprehensive Review
 IEEE TRANS. KNOWLEDGE AND DATA ENG
, 2013
Abstract

Cited by 17 (2 self)
Nonnegative Matrix Factorization (NMF), a relatively novel paradigm for dimensionality reduction, has been in the ascendant since its inception. It incorporates the nonnegativity constraint and thus obtains a parts-based representation while correspondingly enhancing the interpretability of the problem. This survey paper mainly focuses on the theoretical research into NMF over the last five years, where the principles, basic models, properties, and algorithms of NMF, along with its various modifications, extensions, and generalizations, are summarized systematically. The existing NMF algorithms are divided into four categories: Basic NMF (BNMF), ...
Multilayer Nonnegative Matrix Factorization Using Projected Gradient Approaches
, 2007
Abstract

Cited by 14 (5 self)
The most popular algorithms for Nonnegative Matrix Factorization (NMF) belong to the class of multiplicative Lee-Seung algorithms, which usually have relatively low complexity but are characterized by slow convergence and the risk of getting stuck in local minima. In this paper, we present and compare the performance of additive algorithms based on three different variations of a projected gradient approach. Additionally, we discuss a novel multilayer approach to NMF algorithms, combined with a multi-start initialization procedure, which in general considerably improves the performance of all the NMF algorithms. We demonstrate that this approach (the multilayer system with projected gradient algorithms) can usually give much better performance than standard multiplicative algorithms, especially if the data are ill-conditioned, badly scaled, and/or the number of observations is only slightly greater than the number of nonnegative hidden components. Our new implementations of NMF are demonstrated with simulations performed for Blind Source Separation (BSS) data.
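The additive projected gradient approach that this abstract compares against multiplicative updates can be sketched as a single step in NumPy. This is an illustrative fragment with an assumed fixed step size `eta`, not the paper's line-search or multilayer variants: move each factor along the negative gradient of the squared Euclidean cost, then project back onto the nonnegative orthant.

```python
import numpy as np

def pg_step(Y, A, X, eta=1e-3):
    """One projected-gradient step on each factor of ||Y - A X||_F^2.

    The projection onto the nonnegative orthant is just an elementwise
    maximum with zero, so (unlike multiplicative updates) components can
    reach exact zeros.
    """
    GX = A.T @ (A @ X - Y)            # gradient w.r.t. X
    X = np.maximum(X - eta * GX, 0.0)  # step, then project
    GA = (A @ X - Y) @ X.T            # gradient w.r.t. A (with updated X)
    A = np.maximum(A - eta * GA, 0.0)
    return A, X
```

With a sufficiently small step size this decreases the cost at each step; the methods surveyed in the paper replace the fixed `eta` with smarter step-size rules.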
Hyperspectral Unmixing Via L1/2 Sparsity-constrained Nonnegative Matrix Factorization
Abstract

Cited by 12 (2 self)
Hyperspectral unmixing is a crucial preprocessing step for material classification and recognition. In the last decade, nonnegative matrix factorization (NMF) and its extensions have been intensively studied to unmix hyperspectral imagery and recover the material endmembers. As an important constraint for NMF, sparsity has been modeled using the L1 regularizer. Unfortunately, the L1 regularizer cannot enforce further sparsity when the full additivity constraint on material abundances is used, limiting the practical efficacy of NMF methods in hyperspectral unmixing. In this paper, we extend the NMF method by incorporating the L1/2 sparsity constraint, which we name L1/2-NMF. The L1/2 regularizer not only induces sparsity but is also a better choice among Lq (0 < q < 1) regularizers. We propose an iterative estimation algorithm for L1/2-NMF, which provides sparser and more accurate results than those delivered using the L1 norm. We illustrate the utility of our method on synthetic and real hyperspectral data and compare our results to those yielded by other state-of-the-art methods.
Discriminant Nonnegative Tensor Factorization Algorithms
 IEEE Trans. Neural Networks
, 2009
Abstract

Cited by 12 (2 self)
Abstract—Nonnegative matrix factorization (NMF) has proven to be very successful for image analysis, especially for object representation and recognition. NMF requires the object tensor (with valence more than one) to be vectorized. This procedure may result in information loss since the local object structure is lost due to vectorization. Recently, in order to remedy this disadvantage of NMF methods, nonnegative tensor factorization (NTF) algorithms that can be applied directly to the tensor representation of object collections have been introduced. In this paper, we propose a series of unsupervised and supervised NTF methods. That is, we extend several NMF methods to arbitrary valence tensors. Moreover, by incorporating discriminant constraints inside the NTF decompositions, we present a series of discriminant NTF methods. The proposed approaches are tested for face verification and facial expression recognition, where it is shown that they outperform other popular subspace approaches. Index Terms—Face verification, facial expression recognition, linear discriminant analysis, nonnegative matrix factorization (NMF), nonnegative tensor factorization (NTF), subspace techniques.
Nonnegative Matrix Factorization with Quadratic Programming
, 2006
Abstract

Cited by 9 (2 self)
Nonnegative Matrix Factorization (NMF) solves the following problem: find nonnegative matrices A ∈ R_+^{I×J} and X ∈ R_+^{J×K} such that Y ≅ AX, given only Y ∈ R^{I×K} and the assigned index J (K ≫ I ≥ J). Basically, the factorization is achieved by alternating minimization of a given cost function subject to nonnegativity constraints. In the paper, we propose to use Quadratic Programming (QP) to solve these minimization problems. The Tikhonov-regularized squared Euclidean cost function is extended with a logarithmic barrier function (which enforces the nonnegativity constraints), and then, using a second-order Taylor expansion, a QP problem is formulated. This problem is solved with a trust-region subproblem algorithm. Numerical tests are performed on blind source separation problems.
Fast Nonnegative Matrix Factorization Algorithms Using Projected Gradient Approaches for LargeScale Problems
, 2008
Abstract

Cited by 8 (0 self)
Recently, a considerable growth of interest in projected gradient (PG) methods has been observed due to their high efficiency in solving large-scale convex minimization problems subject to linear constraints. Since the minimization problems underlying nonnegative matrix factorization (NMF) of large matrices match this class of problems well, we investigate and test some recent PG methods in the context of their applicability to NMF. In particular, the paper focuses on the following modified methods: projected Landweber, Barzilai-Borwein gradient projection, projected sequential subspace optimization (PSESOP), interior-point Newton (IPN), and sequential coordinate-wise. The proposed and implemented NMF PG algorithms are compared with respect to their performance in terms of signal-to-interference ratio (SIR) and elapsed time, using a simple benchmark of mixed, partially dependent nonnegative signals.