Results 1 - 10 of 66
Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis
- Bioinformatics, Vol. 23, No. 12, 2007, pages 1495–1502
, 2007
Abstract - Cited by 86 (7 self)
The non-negative matrix factorization (NMF) determines a lower rank approximation of a matrix A ∈ R^{m×n} ≈ WH, where an integer k ≪ min(m, n) is given and nonnegativity is imposed on all components of the factors W ∈ R^{m×k} and H ∈ R^{k×n}. The NMF has attracted much attention for over a decade and has been successfully applied to numerous data analysis problems. In applications where the components of the data are necessarily nonnegative, such as chemical concentrations in experimental results or pixels in digital images, the NMF provides a more relevant interpretation of the results since it gives non-subtractive combinations of non-negative basis vectors. In this paper, we introduce an algorithm for the NMF based on alternating non-negativity constrained least squares (NMF/ANLS) and the active set based fast algorithm for non-negativity constrained least squares with multiple right-hand-side vectors, and discuss its convergence properties and a rigorous convergence criterion based on the Karush-Kuhn-Tucker (KKT) conditions. In addition, we also describe algorithms for sparse NMFs and regularized NMF. We show how we impose a sparsity constraint on one of the factors by L1-norm minimization and discuss its convergence properties. Our algorithms are compared to other commonly used NMF algorithms in the literature on several test data sets in terms of their convergence behavior.
Nonnegative Matrix Factorization with Constrained Second Order Optimization
, 2007
Abstract - Cited by 25 (8 self)
Nonnegative Matrix Factorization (NMF) solves the following problem: find nonnegative matrices A ∈ R_+^{M×R} and X ∈ R_+^{R×T} such that Y ≅ AX, given only Y ∈ R^{M×T} and the assigned index R. This method has found a wide spectrum of applications in signal and image processing, such as blind source separation, spectra recovery, pattern recognition, segmentation or clustering. Such a factorization is usually performed with an alternating gradient descent technique that is applied to the squared Euclidean distance or Kullback-Leibler divergence. This approach has been used in the widely known Lee-Seung NMF algorithms that belong to a class of multiplicative iterative algorithms. It is well known that these algorithms, in spite of their low complexity, are slowly convergent, give only a positive solution (not nonnegative), and can easily fall into local minima of a non-convex cost function. In this paper, we propose to take advantage of the second order terms of a cost function to overcome the disadvantages of gradient (multiplicative) algorithms. First, a projected quasi-Newton method is presented, where a regularized Hessian with the Levenberg-Marquardt approach is inverted with the Q-less QR decomposition. Since the matrices A and/or X are usually sparse, a more sophisticated hybrid approach based on the Gradient Projection Conjugate Gradient (GPCG) algorithm, which was invented by Moré and Toraldo, is adapted for NMF. The Gradient Projection (GP) method is exploited to find zero-value components (active), and then the Newton steps are taken only to compute positive components (inactive) with the Conjugate Gradient (CG) method. As a cost function, we used the α-divergence that unifies many well-known cost functions. We applied our new NMF method to a Blind Source Separation (BSS) problem with mixed signals and images. The results demonstrate the high robustness of our method.
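For contrast with the second-order approach this abstract proposes, the Lee-Seung multiplicative updates it criticizes can be sketched in a few lines. This is an illustrative Python sketch of the baseline only (names and defaults are assumptions; the paper's quasi-Newton and GPCG machinery is not shown):

```python
import numpy as np

def nmf_multiplicative(Y, R, n_iter=200, eps=1e-9, seed=0):
    """Baseline Lee-Seung multiplicative updates for ||Y - A X||_F^2.
    This is the slowly convergent class of algorithms the paper
    improves on with second-order methods; sketch only."""
    rng = np.random.default_rng(seed)
    M, T = Y.shape
    A = rng.random((M, R)) + eps
    X = rng.random((R, T)) + eps
    for _ in range(n_iter):
        X *= (A.T @ Y) / (A.T @ A @ X + eps)   # update X with A fixed
        A *= (Y @ X.T) / (A @ X @ X.T + eps)   # update A with X fixed
    return A, X
```

Because every update is a multiplication by a positive ratio, the iterates stay strictly positive, which is exactly the "positive solution (not nonnegative)" limitation the abstract points out.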
A novel discriminant non-negative matrix factorization algorithm with applications to facial image characterization problems
- IEEE Transactions on Information Forensics and Security
Abstract - Cited by 18 (5 self)
Abstract—The methods introduced so far regarding discriminant non-negative matrix factorization (DNMF) do not guarantee convergence to a stationary limit point. In order to remedy this limitation, a novel DNMF method is presented that uses projected gradients. The proposed algorithm employs some extra modifications that make the method more suitable for classification tasks. The usefulness of the proposed technique for frontal face verification and facial expression recognition problems is demonstrated. Index Terms—Facial expression recognition, frontal face verification, linear discriminant analysis, non-negative matrix factorization (NMF), projected gradients.
Nonnegative Matrix Factorization: A Comprehensive Review
- IEEE Trans. Knowledge and Data Eng.
, 2013
Abstract - Cited by 17 (2 self)
Nonnegative Matrix Factorization (NMF), a relatively novel paradigm for dimensionality reduction, has been in the ascendant since its inception. It incorporates the nonnegativity constraint and thus obtains the parts-based representation as well as enhancing the interpretability of the issue correspondingly. This survey paper mainly focuses on the theoretical research into NMF over the last 5 years, where the principles, basic models, properties, and algorithms of NMF along with its various modifications, extensions, and generalizations are summarized systematically. The existing NMF algorithms are divided into four categories: Basic NMF (BNMF), …
MULTILAYER NONNEGATIVE MATRIX FACTORIZATION USING PROJECTED GRADIENT APPROACHES
, 2007
Abstract - Cited by 14 (5 self)
The most popular algorithms for Nonnegative Matrix Factorization (NMF) belong to a class of multiplicative Lee-Seung algorithms which usually have relatively low complexity but are characterized by slow convergence and the risk of getting stuck in local minima. In this paper, we present and compare the performance of additive algorithms based on three different variations of a projected gradient approach. Additionally, we discuss a novel multilayer approach to NMF algorithms combined with a multistart initialization procedure, which, in general, considerably improves the performance of all the NMF algorithms. We demonstrate that this approach (the multilayer system with projected gradient algorithms) can usually give much better performance than standard multiplicative algorithms, especially if the data are ill-conditioned, badly scaled, and/or the number of observations is only slightly greater than the number of nonnegative hidden components. Our new implementations of NMF are demonstrated with the simulations performed for Blind Source Separation (BSS) data.
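The multilayer idea can be sketched independently of the inner solver: each layer factorizes the previous layer's coding matrix, so Y ≈ A1 A2 … AL X. The Python sketch below uses plain multiplicative updates as the inner NMF step for brevity, whereas the paper pairs the multilayer scheme with projected gradient algorithms; all names and defaults are assumptions:

```python
import numpy as np

def multilayer_nmf(Y, R, layers=3, n_iter=100, eps=1e-9, seed=0):
    """Sketch of multilayer NMF: refactor the coding matrix layer by
    layer, accumulating the mixing matrices, so Y ~= (A1 A2 ... AL) X.
    Inner solver: simple multiplicative updates (illustrative only)."""
    def nmf(Yl, rng):
        M, T = Yl.shape
        A = rng.random((M, R)) + eps
        X = rng.random((R, T)) + eps
        for _ in range(n_iter):
            X *= (A.T @ Yl) / (A.T @ A @ X + eps)
            A *= (Yl @ X.T) / (A @ X @ X.T + eps)
        return A, X

    rng = np.random.default_rng(seed)
    A_total, X = None, Y
    for _ in range(layers):
        A, X = nmf(X, rng)                 # factor the current coding matrix
        A_total = A if A_total is None else A_total @ A
    return A_total, X
```

Each extra layer gives the inner algorithm a fresh (multistart-style) initialization on an easier, lower-dimensional subproblem, which is one intuition for why the cascade improves conditioning-sensitive solvers.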
Hyperspectral Unmixing Via L1/2 Sparsity-Constrained Nonnegative Matrix Factorization
Abstract - Cited by 12 (2 self)
Hyperspectral unmixing is a crucial preprocessing step for material classification and recognition. In the last decade, nonnegative matrix factorization (NMF) and its extensions have been intensively studied to unmix hyperspectral imagery and recover the material end-members. As an important constraint for NMF, sparsity has been modeled using the L1 regularizer. Unfortunately, the L1 regularizer cannot enforce further sparsity when the full additivity constraint of material abundances is used, hence limiting the practical efficacy of NMF methods in hyperspectral unmixing. In this paper, we extend the NMF method by incorporating the L1/2 sparsity constraint, which we name L1/2-NMF. The L1/2 regularizer not only induces sparsity, but is also a better choice among Lq (0 < q < 1) regularizers. We propose an iterative estimation algorithm for L1/2-NMF, which provides sparser and more accurate results than those delivered using the L1 norm. We illustrate the utility of our method on synthetic and real hyperspectral data and compare our results to those yielded by other state-of-the-art methods.
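A common way to realize an L1/2 penalty in NMF is to fold its gradient term into a multiplicative update for the abundance factor. The Python sketch below illustrates that scheme for min ||V − WS||_F^2 + λ Σ S^{1/2}; the update form, names, and defaults are assumptions for illustration, not the authors' exact algorithm:

```python
import numpy as np

def l_half_nmf(V, r, lam=0.1, n_iter=200, eps=1e-9, seed=0):
    """Illustrative multiplicative scheme for NMF with an L1/2 sparsity
    penalty on the abundance factor S:
        min_{W,S >= 0}  ||V - W S||_F^2 + lam * sum(sqrt(S)).
    Sketch only; parameters are assumptions."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    S = rng.random((r, n)) + eps
    for _ in range(n_iter):
        W *= (V @ S.T) / (W @ S @ S.T + eps)
        # The L1/2 term contributes (lam/2) * S^(-1/2) to the gradient,
        # so it enters the denominator of the multiplicative ratio.
        S *= (W.T @ V) / (W.T @ W @ S + 0.5 * lam * S ** -0.5 + eps)
    return W, S
```

Because S^{-1/2} blows up as entries approach zero, small abundances are driven toward zero faster than under an L1 penalty, which is the sparsity-inducing behavior the abstract highlights.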
Discriminant Nonnegative Tensor Factorization Algorithms
- IEEE Trans. Neural Networks
, 2009
Abstract - Cited by 12 (2 self)
Abstract—Nonnegative matrix factorization (NMF) has proven to be very successful for image analysis, especially for object representation and recognition. NMF requires the object tensor (with valence more than one) to be vectorized. This procedure may result in information loss since the local object structure is lost due to vectorization. Recently, in order to remedy this disadvantage of NMF methods, nonnegative tensor factorization (NTF) algorithms that can be applied directly to the tensor representation of object collections have been introduced. In this paper, we propose a series of unsupervised and supervised NTF methods. That is, we extend several NMF methods using arbitrary valence tensors. Moreover, by incorporating discriminant constraints inside the NTF decompositions, we present a series of discriminant NTF methods. The proposed approaches are tested for face verification and facial expression recognition, where it is shown that they outperform other popular subspace approaches. Index Terms—Face verification, facial expression recognition, linear discriminant analysis, nonnegative matrix factorization (NMF), nonnegative tensor factorization (NTF), subspace techniques.
Nonnegative Matrix Factorization with Quadratic Programming
, 2006
Abstract - Cited by 9 (2 self)
Nonnegative Matrix Factorization (NMF) solves the following problem: find nonnegative matrices A ∈ R_+^{I×J} and X ∈ R_+^{J×K} such that Y ≅ AX, given only Y ∈ R^{I×K} and the assigned index J (K >> I ≥ J). Basically, the factorization is achieved by alternating minimization of a given cost function subject to nonnegativity constraints. In the paper, we propose to use Quadratic Programming (QP) to solve the minimization problems. The Tikhonov regularized squared Euclidean cost function is extended with a logarithmic barrier function (which enforces the nonnegativity constraints), and then, using a second-order Taylor expansion, a QP problem is formulated. This problem is solved with a trust-region subproblem algorithm. The numerical tests are performed on blind source separation problems.
Fast Nonnegative Matrix Factorization Algorithms Using Projected Gradient Approaches for Large-Scale Problems
, 2008
Abstract - Cited by 8 (0 self)
Recently, a considerable growth of interest in projected gradient (PG) methods has been observed due to their high efficiency in solving large-scale convex minimization problems subject to linear constraints. Since the minimization problems underlying nonnegative matrix factorization (NMF) of large matrices match this class of minimization problems well, we investigate and test some recent PG methods in the context of their applicability to NMF. In particular, the paper focuses on the following modified methods: projected Landweber, Barzilai-Borwein gradient projection, projected sequential subspace optimization (PSESOP), interior-point Newton (IPN), and sequential coordinate-wise. The proposed and implemented NMF PG algorithms are compared with respect to their performance in terms of signal-to-interference ratio (SIR) and elapsed time, using a simple benchmark of mixed partially dependent nonnegative signals.
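Of the methods listed, projected Landweber is the simplest to sketch: a fixed-step gradient step with the step size bounded by the spectral norm of the Gram matrix, followed by projection onto the nonnegative orthant. A hedged Python illustration, alternating over both factors (names, the step-size choice, and the fixed iteration count are assumptions):

```python
import numpy as np

def projected_landweber_nmf(Y, J, n_iter=300, seed=0):
    """Sketch of projected Landweber iterations for NMF, alternating
    over the factors.  The step size uses the classical bound
    eta < 2 / ||A^T A||_2; illustrative only."""
    rng = np.random.default_rng(seed)
    I, K = Y.shape
    A = rng.random((I, J))
    X = rng.random((J, K))
    for _ in range(n_iter):
        eta = 1.0 / np.linalg.norm(A.T @ A, 2)       # safe Landweber step
        X = np.maximum(0.0, X - eta * (A.T @ (A @ X - Y)))
        eta = 1.0 / np.linalg.norm(X @ X.T, 2)
        A = np.maximum(0.0, A - eta * ((A @ X - Y) @ X.T))
    return A, X
```

Unlike multiplicative updates, the projection step can set components exactly to zero, so the iterates are genuinely nonnegative rather than strictly positive.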