Results 1–10 of 43
Fast Local Algorithms for Large Scale Nonnegative Matrix and Tensor Factorizations
, 2008
"... Nonnegative matrix factorization (NMF) and its extensions such as Nonnegative Tensor Factorization (NTF) have become prominent techniques for blind sources separation (BSS), analysis of image databases, data mining and other information retrieval and clustering applications. In this paper we propose ..."
Abstract

Cited by 49 (13 self)
Nonnegative matrix factorization (NMF) and its extensions such as Nonnegative Tensor Factorization (NTF) have become prominent techniques for blind source separation (BSS), analysis of image databases, data mining and other information retrieval and clustering applications. In this paper we propose a family of efficient algorithms for NMF/NTF, as well as sparse nonnegative coding and representation, that has many potential applications in computational neuroscience, multisensory processing, compressed sensing and multidimensional data analysis. We have developed a class of optimized local algorithms which are referred to as Hierarchical Alternating Least Squares (HALS) algorithms. For these purposes, we have performed sequential constrained minimization on a set of squared Euclidean distances. We then extend this approach to robust cost functions using the Alpha and Beta divergences and derive flexible update rules. Our algorithms are locally stable and work well for NMF-based blind source separation not only in the overdetermined case but also in the underdetermined (overcomplete) case (i.e., for a system which has fewer sensors than sources), provided the data are sufficiently sparse. The NMF learning rules are extended and generalized to N-th order nonnegative tensor factorization. Moreover, these algorithms can be tuned to different noise statistics by adjusting a single parameter. Extensive experimental results confirm the accuracy and computational performance of the developed algorithms, especially when combined with the multilayer hierarchical NMF approach [3].
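To illustrate the HALS idea described in this abstract, here is a minimal NumPy sketch for the squared-Euclidean cost. It is a generic textbook-style version, not the authors' implementation; the function name, initialization, and iteration counts are illustrative choices. Each component is updated in turn against the residual with component j removed, which needs only closed-form vector operations and no matrix inversion:

```python
import numpy as np

def hals_nmf(V, rank, n_iter=200, eps=1e-12, seed=0):
    """Sketch of HALS-style NMF: V ~ W @ H with W, H >= 0.

    Updates one component (column of W, row of H) at a time using the
    residual that excludes that component, so each update is a simple
    nonnegative rank-one least-squares problem solved in closed form.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        for j in range(rank):
            # Residual with component j removed; note it does not depend
            # on the current values of W[:, j] or H[j, :].
            Rj = V - W @ H + np.outer(W[:, j], H[j, :])
            # Closed-form nonnegative least-squares updates for component j.
            H[j, :] = np.maximum(0.0, W[:, j] @ Rj) / (W[:, j] @ W[:, j] + eps)
            W[:, j] = np.maximum(0.0, Rj @ H[j, :]) / (H[j, :] @ H[j, :] + eps)
    return W, H
```

Because the residual `Rj` excludes component j, the same `Rj` is valid for both the `H[j, :]` and `W[:, j]` updates within one sweep.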
Fast nonnegative matrix factorization: An active-set-like method and comparisons
 SIAM Journal on Scientific Computing
, 2011
"... Abstract. Nonnegative matrix factorization (NMF) is a dimension reduction method that has been widelyused fornumerousapplications including text mining, computer vision, pattern discovery, and bioinformatics. A mathematical formulation for NMF appears as a nonconvex optimization problem, and variou ..."
Abstract

Cited by 35 (6 self)
Abstract. Nonnegative matrix factorization (NMF) is a dimension reduction method that has been widely used for numerous applications including text mining, computer vision, pattern discovery, and bioinformatics. A mathematical formulation for NMF appears as a nonconvex optimization problem, and various types of algorithms have been devised to solve the problem. The alternating nonnegative least squares (ANLS) framework is a block coordinate descent approach for solving NMF, which was recently shown to be theoretically sound and empirically efficient. In this paper, we present a novel algorithm for NMF based on the ANLS framework. Our new algorithm builds upon the block principal pivoting method for the nonnegativity-constrained least squares problem that overcomes a limitation of the active set method. We introduce ideas that efficiently extend the block principal pivoting method within the context of NMF computation. Our algorithm inherits the convergence property of the ANLS framework and can easily be extended to other constrained NMF formulations. Extensive computational comparisons using data sets that are from real life applications as well as those artificially generated show that the proposed algorithm provides state-of-the-art performance in terms of computational speed.
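The ANLS framework this abstract builds on can be sketched as follows. This sketch uses SciPy's classical active-set `nnls` solver per column for clarity; the paper's block principal pivoting method plays the same role inside the loop but handles many right-hand sides more efficiently. Function name and sizes are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

def anls_nmf(V, rank, n_iter=30, seed=0):
    """Sketch of ANLS block coordinate descent for NMF: V ~ W @ H.

    Alternates exact nonnegative least-squares solves: fix W and solve
    for each column of H, then fix H and solve for each row of W.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = np.zeros((rank, n))
    for _ in range(n_iter):
        for i in range(n):                  # H-block: min ||V[:,i] - W h||, h >= 0
            H[:, i], _ = nnls(W, V[:, i])
        for i in range(m):                  # W-block: min ||V[i,:] - H.T w||, w >= 0
            W[i, :], _ = nnls(H.T, V[i, :])
    return W, H
```

Each block subproblem is convex and solved exactly, which is what gives the ANLS framework its convergence guarantee.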
Nonnegativity Constraints in Numerical Analysis
"... A survey of the development of algorithms for enforcing nonnegativity constraints in scientific computation is given. Special emphasis is placed on such constraints in least squares computations in numerical linear algebra and in nonlinear optimization. Techniques involving nonnegative lowrank matr ..."
Abstract

Cited by 20 (2 self)
A survey of the development of algorithms for enforcing nonnegativity constraints in scientific computation is given. Special emphasis is placed on such constraints in least squares computations in numerical linear algebra and in nonlinear optimization. Techniques involving nonnegative low-rank matrix and tensor factorizations are also emphasized. Details are provided for some important classical and modern applications in science and engineering. For completeness, this report also includes an effort toward a literature survey of the various algorithms and applications of nonnegativity constraints in numerical analysis. Key Words: nonnegativity constraints, nonnegative least squares, matrix and tensor factorizations, image processing, optimization.
Algorithms for nonnegative matrix and tensor factorizations: a unified view based on block coordinate descent framework
 J GLOB OPTIM
, 2013
"... ..."
Nonnegative factorization and the maximum edge biclique problem
, 2008
"... Nonnegative Matrix Factorization (NMF) is a data analysis technique which allows compression and interpretation of nonnegative data. NMF became widely studied after the publication of the seminal paper by Lee and Seung (Learning the Parts of Objects by Nonnegative Matrix Factorization, Nature, 1999, ..."
Abstract

Cited by 18 (7 self)
Nonnegative Matrix Factorization (NMF) is a data analysis technique which allows compression and interpretation of nonnegative data. NMF became widely studied after the publication of the seminal paper by Lee and Seung (Learning the Parts of Objects by Nonnegative Matrix Factorization, Nature, 1999, vol. 401, pp. 788–791), which introduced an algorithm based on Multiplicative Updates (MU). More recently, another class of methods called Hierarchical Alternating Least Squares (HALS) was introduced that seems to be much more efficient in practice. In this paper, we consider the problem of approximating a not necessarily nonnegative matrix with the product of two nonnegative matrices, which we refer to as Nonnegative Factorization (NF); this is the subproblem that HALS methods implicitly try to solve at each iteration. We prove that NF is NP-hard for any fixed factorization rank, via a reduction from the maximum edge biclique problem. We also generalize the multiplicative updates to NF, which allows us to shed some light on the differences between the MU and HALS algorithms for NMF and give an explanation for the better performance of HALS. Finally, we link stationary points of NF with feasible solutions of the biclique problem to obtain a new type of biclique-finding algorithm (based on MU) whose iterations have an algorithmic complexity proportional to the number of edges in the graph, and show that it performs better than comparable existing methods.
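The multiplicative updates mentioned above are Lee and Seung's rules for the Frobenius cost. The following is the generic textbook form for standard NMF (nonnegative data), not the NF generalization derived in the paper; function name and the small `eps` safeguard are my own additions:

```python
import numpy as np

def mu_nmf(V, rank, n_iter=500, eps=1e-12, seed=0):
    """Sketch of Lee-Seung multiplicative updates for min ||V - W H||_F^2.

    Each factor is multiplied elementwise by a nonnegative ratio, so
    nonnegativity is preserved automatically at every iteration.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # eps in the denominators guards against division by zero
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Comparing these ratio updates with the closed-form per-component HALS updates makes the efficiency gap discussed in the abstract concrete: MU rescales every entry by a bounded factor per sweep, while HALS solves each component's subproblem exactly.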
Tensor Methods for Hyperspectral Data Analysis: A Space Object Material Identification Study
"... An important and well studied problem in hyperspectral image data applications is to identify materials present in the object or scene being imaged and to quantify their abundance in the mixture. Due to the increasing quantity of data usually encountered in hyperspectral datasets, effective data com ..."
Abstract

Cited by 16 (4 self)
An important and well studied problem in hyperspectral image data applications is to identify materials present in the object or scene being imaged and to quantify their abundance in the mixture. Due to the increasing quantity of data usually encountered in hyperspectral datasets, effective data compression is also an important consideration. In this paper, we develop novel methods based on tensor analysis that focus on all three of these goals: material identification, material abundance estimation, and data compression. Test results are reported from all three perspectives.
Using underapproximations for sparse nonnegative matrix factorization
 Pattern Recognition
, 2010
"... Nonnegative Matrix Factorization (NMF) has gathered a lot of attention in the last decade and has been successfully applied in numerous applications. It consists in the factorization of a nonnegative matrix by the product of two lowrank nonnegative matrices: M ≈ V W. In this paper, we attempt to so ..."
Abstract

Cited by 16 (5 self)
Nonnegative Matrix Factorization (NMF) has gathered a lot of attention in the last decade and has been successfully applied in numerous applications. It consists of factorizing a nonnegative matrix as the product of two low-rank nonnegative matrices: M ≈ V W. In this paper, we attempt to solve NMF problems in a recursive way. In order to do that, we introduce a new variant called Nonnegative Matrix Underapproximation (NMU) by adding the upper bound constraint V W ≤ M. Besides enabling a recursive procedure for NMF, these inequalities make NMU particularly well-suited to achieving a sparse representation, improving the part-based decomposition. Although NMU is NP-hard (which we prove using its equivalence with the maximum edge biclique problem in bipartite graphs), we present two approaches to solve it: a method based on convex reformulations and a method based on Lagrangian relaxation. Finally, we provide some encouraging numerical results for image processing applications.
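The Lagrangian relaxation approach mentioned above can be illustrated for the rank-one case roughly as follows. This is a simplified guess at such a scheme, not the paper's exact algorithm: multipliers `L >= 0` penalize the overshoot where `u v^T` exceeds `M`, the factors get closed-form nonnegative updates against the penalized matrix, and the multipliers take a diminishing subgradient step:

```python
import numpy as np

def rank_one_nmu(M, n_iter=200, step=1.0, eps=1e-12, seed=0):
    """Hypothetical sketch of rank-one Nonnegative Matrix
    Underapproximation (u v^T <= M) via Lagrangian relaxation."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    u = rng.random(m)
    v = rng.random(n)
    L = np.zeros((m, n))            # multipliers for the constraint u v^T <= M
    for k in range(1, n_iter + 1):
        A = M - L                   # data matrix penalized by the multipliers
        u = np.maximum(0.0, A @ v) / (v @ v + eps)
        v = np.maximum(0.0, A.T @ u) / (u @ u + eps)
        # Diminishing subgradient step on the multipliers, kept nonnegative:
        # increase L wherever u v^T currently overshoots M.
        L = np.maximum(0.0, L + (step / k) * (np.outer(u, v) - M))
    return u, v
```

Subtracting `np.outer(u, v)` from `M` and repeating on the residual gives the recursive procedure the abstract describes.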
Descent methods for Nonnegative Matrix Factorization
, 2008
"... In this paper, we present several descent methods that can be applied to nonnegative matrix factorization and we analyze a recently developed fast block coordinate method. We also give a comparison of these different methods and show that the new block coordinate method has better properties in term ..."
Abstract

Cited by 13 (0 self)
In this paper, we present several descent methods that can be applied to nonnegative matrix factorization and we analyze a recently developed fast block coordinate method. We also give a comparison of these different methods and show that the new block coordinate method has better properties in terms of approximation error and complexity. By interpreting this method as a rank-one approximation of the residue matrix, we also extend it to nonnegative tensor factorization and introduce some variants of the method by imposing additional controllable constraints such as sparsity, discreteness, and smoothness.
Extended HALS algorithm for nonnegative Tucker decomposition and its applications for multiway analysis and classification
"... Analysis of high dimensional data in modern applications, such as neuroscience, text mining, spectral analysis or chemometrices naturally requires tensor decomposition methods. The Tucker decompositions allow us to extract hidden factors (component matrices) with a different dimension in each mode a ..."
Abstract

Cited by 11 (5 self)
Analysis of high-dimensional data in modern applications, such as neuroscience, text mining, spectral analysis or chemometrics, naturally requires tensor decomposition methods. Tucker decompositions allow us to extract hidden factors (component matrices) with a different dimension in each mode and to investigate interactions among the various modes. Alternating Least Squares (ALS) algorithms have been confirmed to be effective and efficient for most tensor decompositions, especially Tucker decomposition with orthogonality constraints. However, for nonnegative Tucker decomposition (NTD), standard ALS algorithms suffer from unstable convergence, demand a high computational cost for large-scale problems due to matrix inversion, and often return suboptimal solutions. Moreover, they are quite sensitive to noise and can be relatively slow in the special case when the data are nearly collinear. In this paper, we propose a new algorithm for nonnegative Tucker decomposition based on constrained minimization of a set of local cost functions and Hierarchical Alternating Least Squares (HALS). The developed HALS NTD algorithm sequentially updates components, hence avoids matrix inversion, and is suitable for large-scale problems. The proposed algorithm is also regularized with additional constraint terms such as sparseness, orthogonality, smoothness, and especially discriminant constraints for classification problems. Extensive experiments confirm the validity and higher performance of the developed algorithm in comparison with other existing algorithms.