Results 1–10 of 83
Algorithms and applications for approximate nonnegative matrix factorization
 Computational Statistics and Data Analysis
, 2006
Cited by 169 (7 self)

Abstract:
In this paper we discuss the development and use of low-rank approximate nonnegative matrix factorization (NMF) algorithms for feature extraction and identification in the fields of text mining and spectral data analysis. The evolution and convergence properties of hybrid methods based on both sparsity and smoothness constraints for the resulting nonnegative matrix factors are discussed. The interpretability of NMF outputs in specific contexts is illustrated, along with opportunities for future work on modifying NMF algorithms for large-scale and time-varying datasets. Key words: nonnegative matrix factorization, text mining, spectral data analysis, email surveillance, conjugate gradient, constrained least squares.
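Most of the papers in these results build on the classical multiplicative-update NMF of Lee and Seung, which minimizes ||X − WH||_F² under nonnegativity. As a shared point of reference, here is a minimal NumPy sketch; the function name `nmf_multiplicative` and its defaults are illustrative choices, not code from the paper:

```python
import numpy as np

def nmf_multiplicative(X, rank, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||X - W H||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # Each factor is multiplied by a ratio of nonnegative matrices,
        # so nonnegativity is preserved automatically.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because the updates are purely multiplicative, entries that start positive stay positive; the small `eps` only guards against division by zero.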
Convex and Semi-Nonnegative Matrix Factorizations
, 2008
Cited by 91 (8 self)

Abstract:
We present several new variations on the theme of nonnegative matrix factorization (NMF). Considering factorizations of the form X = FG^T, we focus on algorithms in which G is restricted to contain nonnegative entries, but allow the data matrix X to have mixed signs, thus extending the applicable range of NMF methods. We also consider algorithms in which the basis vectors of F are constrained to be convex combinations of the data points. This is used for a kernel extension of NMF. We provide algorithms for computing these new factorizations and we provide supporting theoretical analysis. We also analyze the relationships between our algorithms and clustering algorithms, and consider the implications for sparseness of solutions. Finally, we present experimental results that explore the properties of these new methods.
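The convex variant constrains each basis vector to be a nonnegative combination of the data columns, i.e. X ≈ (XA)G^T with A, G ≥ 0. The paper derives multiplicative update rules with supporting theory; the projected-gradient steps below are only a simplified stand-in that illustrates the parameterization (the function name, learning rate, and iteration count are all assumptions):

```python
import numpy as np

def convex_nmf(X, rank, n_iter=2000, lr=0.01, seed=0):
    """Projected-gradient sketch of convex NMF: X ~ (X A) G^T, A, G >= 0,
    so each basis vector X A[:, k] is a nonnegative combination of data columns.
    Illustrative only; not the paper's multiplicative rules."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    A = rng.random((n, rank)) / n     # mixing weights over data points
    G = rng.random((n, rank)) / n     # encoding matrix
    for _ in range(n_iter):
        R = X - X @ A @ G.T           # current residual
        A = np.maximum(0.0, A + lr * (X.T @ R @ G))   # gradient step + projection
        R = X - X @ A @ G.T
        G = np.maximum(0.0, G + lr * (R.T @ X @ A))
    return X @ A, G                   # basis F = X A, encoding G
```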
SVD based initialization: A head start for nonnegative matrix factorization
 Pattern Recognition
, 2007
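The head start in the title comes from seeding W and H with the nonnegative parts of the leading singular vectors instead of random values. A rough sketch in that spirit (the actual NNDSVD algorithm and its variants differ in details, e.g. how zero entries are subsequently filled):

```python
import numpy as np

def nndsvd_init(X, rank):
    """SVD-based NMF initialization sketch: build nonnegative W, H
    from the sign-split leading singular vectors of X."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    W = np.zeros((X.shape[0], rank))
    H = np.zeros((rank, X.shape[1]))
    # Leading singular pair of a nonnegative matrix is nonnegative
    # (Perron-Frobenius), so only its sign needs normalizing.
    W[:, 0] = np.sqrt(S[0]) * np.abs(U[:, 0])
    H[0, :] = np.sqrt(S[0]) * np.abs(Vt[0, :])
    for j in range(1, rank):
        u, v = U[:, j], Vt[j, :]
        up, un = np.maximum(u, 0), np.maximum(-u, 0)
        vp, vn = np.maximum(v, 0), np.maximum(-v, 0)
        # Keep whichever sign-split pair carries more energy.
        if np.linalg.norm(up) * np.linalg.norm(vp) >= np.linalg.norm(un) * np.linalg.norm(vn):
            u_, v_ = up, vp
        else:
            u_, v_ = un, vn
        nu, nv = np.linalg.norm(u_), np.linalg.norm(v_)
        if nu > 0 and nv > 0:
            scale = np.sqrt(S[j] * nu * nv)
            W[:, j] = scale * u_ / nu
            H[j, :] = scale * v_ / nv
    return W, H
```

The resulting (W, H) is then handed to any iterative NMF solver in place of a random start.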
Hierarchical ALS Algorithms for Nonnegative Matrix and 3D Tensor Factorization
 In: Independent Component Analysis, ICA07
Cited by 33 (6 self)

Abstract. In this paper we present new Alternating Least Squares (ALS) algorithms for Nonnegative Matrix Factorization (NMF) and their extensions to 3D Nonnegative Tensor Factorization (NTF) that are robust in the presence of noise and have many potential applications, including multiway Blind Source Separation (BSS), multisensory or multidimensional data analysis, and nonnegative neural sparse coding. We propose to use local cost functions whose simultaneous or sequential (one-by-one) minimization leads to a very simple ALS algorithm which works under some sparsity constraints both for underdetermined (fewer sensors than sources) and overdetermined models. Extensive experimental results confirm the validity and high performance of the developed algorithms, especially with the multilayer hierarchical NMF.
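The hierarchical ALS idea can be illustrated in the matrix (2D) case: each column of W and each row of H has a closed-form least-squares update against the current residual, followed by projection onto the nonnegative orthant. A minimal sketch under those assumptions (the paper's tensor extension, noise robustness, and sparsity constraints are omitted):

```python
import numpy as np

def hals_nmf(X, rank, n_iter=200, eps=1e-9, seed=0):
    """Hierarchical ALS: update one factor column/row at a time,
    projecting each onto the nonnegative orthant."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], rank))
    H = rng.random((rank, X.shape[1]))
    for _ in range(n_iter):
        XHt, HHt = X @ H.T, H @ H.T
        for j in range(rank):
            # Exact minimizer over column j of W, then projection.
            W[:, j] = np.maximum(eps, W[:, j] + (XHt[:, j] - W @ HHt[:, j]) / (HHt[j, j] + eps))
        WtX, WtW = W.T @ X, W.T @ W
        for j in range(rank):
            # Symmetric update for row j of H.
            H[j, :] = np.maximum(eps, H[j, :] + (WtX[j, :] - WtW[j, :] @ H) / (WtW[j, j] + eps))
    return W, H
```

Because each sub-update is an exact 1-D least-squares solve, HALS typically converges in far fewer iterations than the multiplicative rules.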
Nonnegative tensor factorization using alpha and beta divergences
 In: Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2007)
, 2007
Cited by 31 (12 self)

Abstract:
In this paper we propose new algorithms for 3D tensor decomposition/factorization with many potential applications, especially in multiway Blind Source Separation (BSS), multidimensional data analysis, and sparse signal/image representations. We derive and compare three classes of algorithms: Multiplicative, Fixed-Point Alternating Least Squares (FP-ALS) and Alternating Interior-Point Gradient (AIPG) algorithms. Some of the proposed algorithms are characterized by improved robustness, efficiency and convergence rates, and can be applied for various distributions of data and additive noise.
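For the matrix special case, the beta-divergence family admits a single multiplicative update that covers the Frobenius (β = 2), Kullback-Leibler (β = 1) and Itakura-Saito (β = 0) costs. A sketch of that matrix-level rule, as an entry point to the tensor algorithms the paper actually develops:

```python
import numpy as np

def nmf_beta(X, rank, beta=1.0, n_iter=500, eps=1e-9, seed=0):
    """Multiplicative updates for the beta-divergence (matrix case):
    beta=2 -> Frobenius, beta=1 -> Kullback-Leibler, beta=0 -> Itakura-Saito."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], rank)) + eps
    H = rng.random((rank, X.shape[1])) + eps
    for _ in range(n_iter):
        V = W @ H + eps                                   # current model
        H *= (W.T @ (V ** (beta - 2) * X)) / (W.T @ (V ** (beta - 1)) + eps)
        V = W @ H + eps
        W *= ((V ** (beta - 2) * X) @ H.T) / ((V ** (beta - 1)) @ H.T + eps)
    return W, H
```

Choosing β lets the same code adapt to different noise statistics, which is the motivation behind divergence-family algorithms.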
Nonnegative matrix factorization with quasi-Newton optimization
 in Proceedings of the 8th International Conference on Artificial Intelligence and Soft Computing (ICAISC)
, 2006
Cited by 30 (7 self)

Abstract. Nonnegative matrix factorization (NMF) is an emerging method with a wide spectrum of potential applications in data analysis, feature extraction and blind source separation. Currently, most applications use relatively simple multiplicative NMF learning algorithms, proposed by Lee and Seung, which are based on minimization of the Kullback-Leibler divergence and the Frobenius norm. Unfortunately, these algorithms are relatively slow and often need a few thousand iterations to reach a local minimum. In order to increase the convergence rate and improve the performance of NMF, we propose to use a more general cost function: the so-called Amari alpha divergence. Taking into account the special structure of the Hessian of this cost function, we derive a relatively simple second-order quasi-Newton method for NMF. The validity and performance of the proposed algorithm have been extensively tested on blind source separation problems, for both signals and images. The performance of the developed NMF algorithm is illustrated for the separation of statistically dependent signals and images from their linear mixtures.
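The paper's contribution is a second-order quasi-Newton scheme. As a simpler illustration of the same Amari alpha-divergence objective, here is the standard first-order multiplicative rule (α = 1 recovers the Kullback-Leibler update; this is explicitly not the quasi-Newton method itself):

```python
import numpy as np

def nmf_alpha(X, rank, alpha=1.0, n_iter=500, eps=1e-9, seed=0):
    """First-order multiplicative rule for the Amari alpha-divergence.
    alpha=1 reduces to the Kullback-Leibler update."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], rank)) + eps
    H = rng.random((rank, X.shape[1])) + eps
    one = np.ones_like(X)
    for _ in range(n_iter):
        R = (X / (W @ H + eps)) ** alpha      # elementwise ratio, raised to alpha
        H *= ((W.T @ R) / (W.T @ one + eps)) ** (1.0 / alpha)
        R = (X / (W @ H + eps)) ** alpha
        W *= ((R @ H.T) / (one @ H.T + eps)) ** (1.0 / alpha)
    return W, H
```

A quasi-Newton scheme replaces these fixed multiplicative steps with curvature-scaled updates, which is what buys the faster convergence reported in the abstract.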
Fast nonnegative matrix factorization: An active-set-like method and comparisons
 SIAM Journal on Scientific Computing
, 2011
Cited by 26 (6 self)

Abstract. Nonnegative matrix factorization (NMF) is a dimension reduction method that has been widely used for numerous applications including text mining, computer vision, pattern discovery, and bioinformatics. A mathematical formulation for NMF appears as a nonconvex optimization problem, and various types of algorithms have been devised to solve the problem. The alternating nonnegative least squares (ANLS) framework is a block coordinate descent approach for solving NMF, which was recently shown to be theoretically sound and empirically efficient. In this paper, we present a novel algorithm for NMF based on the ANLS framework. Our new algorithm builds upon the block principal pivoting method for the nonnegativity-constrained least squares problem, which overcomes a limitation of the active set method. We introduce ideas that efficiently extend the block principal pivoting method within the context of NMF computation. Our algorithm inherits the convergence property of the ANLS framework and can easily be extended to other constrained NMF formulations. Extensive computational comparisons using data sets from real-life applications as well as artificially generated ones show that the proposed algorithm provides state-of-the-art performance in terms of computational speed.
Multipitch Analysis with Harmonic Nonnegative Matrix Approximation
 in ISMIR 2007, 8th International Conference on Music Information Retrieval
, 2007
Cited by 25 (4 self)

Abstract:
This paper presents a new approach to multipitch analysis by utilizing the Harmonic Nonnegative Matrix Approximation, a harmonically constrained and penalized version of the Nonnegative Matrix Approximation (NNMA) method. It also includes a description of a note onset, offset and amplitude retrieval procedure based on that technique. Compared with previous NNMA approaches, a specific initialization of the basis matrix is employed: the basis matrix is initialized with zeros everywhere except at positions corresponding to the harmonic frequencies of consecutive notes of the equal temperament scale. This results in the basis containing nothing but harmonically structured vectors, even after the learning process, and in the rows of the activity matrix containing peaks corresponding to note onset times and amplitudes. Furthermore, additional penalties of mutual uncorrelation and row sparseness are placed upon the activity matrix. The proposed method is able to uncover the underlying musical structure better than previous NNMA approaches and makes the note detection process very straightforward.
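The described basis initialization can be sketched directly: zeros everywhere except at the spectrogram bins corresponding to the harmonics of each candidate note. The bin mapping below assumes a plain linear-frequency magnitude spectrogram with `n_bins = n_fft / 2`, and the 1/h amplitude decay is an illustrative choice, not the paper's:

```python
import numpy as np

def harmonic_basis(n_bins, fs, f0s, n_harmonics=10):
    """Build a harmonically structured NMF basis: column k is zero except
    at bins near the harmonics of fundamental frequency f0s[k]."""
    B = np.zeros((n_bins, len(f0s)))
    for k, f0 in enumerate(f0s):
        for h in range(1, n_harmonics + 1):
            b = int(round(h * f0 * n_bins * 2 / fs))  # frequency -> FFT bin
            if b < n_bins:
                B[b, k] = 1.0 / h  # roughly decaying harmonic amplitudes
    return B
```

Seeding NMF with such a basis keeps the learned components harmonic, so each activity row directly tracks one note's onsets and amplitudes.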
Nonnegative matrix approximation: algorithms and applications
, 2006
Cited by 24 (4 self)

Abstract:
Low-dimensional data representations are crucial to numerous applications in machine learning, statistics, and signal processing. Nonnegative matrix approximation (NNMA) is a method for dimensionality reduction that respects the nonnegativity of the input data while constructing a low-dimensional approximation. NNMA has been used in a multitude of applications, though without commensurate theoretical development. In this report we describe generic methods for minimizing generalized divergences between the input and its low-rank approximant. Some of our general methods are even extensible to arbitrary convex penalties. Our methods yield efficient multiplicative iterative schemes for solving the proposed problems. We also consider interesting extensions such as the use of penalty functions, nonlinear relationships via “link” functions, weighted errors, and multifactor approximations. We present some experiments as an illustration of our algorithms. For completeness, the report also includes a brief literature survey of the various algorithms and the applications of NNMA. Keywords: Nonnegative matrix factorization, weighted approximation, Bregman divergence, multiplicative
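Among the extensions listed, weighted errors have a particularly compact multiplicative form: each residual entry is scaled by a nonnegative weight, for example a 0/1 mask over observed entries. A sketch of that weighted Frobenius rule (illustrative only, not the report's general Bregman-divergence derivation):

```python
import numpy as np

def weighted_nmf(X, M, rank, n_iter=500, eps=1e-9, seed=0):
    """Multiplicative updates for weighted Frobenius NMF:
    minimize sum_ij M_ij * (X - W H)_ij^2, with elementwise weights M >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], rank))
    H = rng.random((rank, X.shape[1]))
    for _ in range(n_iter):
        # Weights enter both numerator and denominator elementwise.
        H *= (W.T @ (M * X)) / (W.T @ (M * (W @ H)) + eps)
        W *= ((M * X) @ H.T) / ((M * (W @ H)) @ H.T + eps)
    return W, H
```

With M as a binary mask this doubles as a simple nonnegative matrix-completion scheme; with M = 1 everywhere it reduces to the plain Frobenius updates.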