Results 1–10 of 70
Conditions for nonnegative independent component analysis
 IEEE Signal Processing Letters
, 2002
Abstract

Cited by 89 (11 self)
We consider the noiseless linear independent component analysis problem in the case where the hidden sources s are nonnegative. We assume that the random variables s_i are well-grounded, in that they have a nonvanishing pdf in the (positive) neighbourhood of zero. For an orthonormal rotation y = Wx of prewhitened observations x = QAs, we show under certain reasonable conditions that y is a permutation of s (apart from a scaling factor) if and only if y is nonnegative with probability 1. We suggest that this may enable the construction of practical learning algorithms, particularly for sparse nonnegative sources.
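The if-and-only-if condition can be checked numerically. A minimal sketch (my own toy setup, not from the paper; the exponential sources, the random mixing matrix, and the use of the exact population covariance for whitening are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent nonnegative, well-grounded, unit-variance sources:
# exponential(1) variables have a nonvanishing pdf just above zero.
s = rng.exponential(scale=1.0, size=(2, 20_000))
A = rng.normal(size=(2, 2))            # "unknown" mixing matrix
x = A @ s                              # observations x = A s

# Whiten with Q built from the population covariance A A^T (valid here
# because the sources have unit variance); then Q A is orthonormal and
# z = Q x is a rotation of the sources.
d, E = np.linalg.eigh(A @ A.T)
Q = E @ np.diag(d ** -0.5) @ E.T
z = Q @ x

def neg_fraction(W, z):
    """Fraction of samples where some output component is negative."""
    return float(np.mean(((W @ z) < -1e-9).any(axis=0)))

W_true = (Q @ A).T                     # undoes the rotation: y = s
W_rand, _ = np.linalg.qr(rng.normal(size=(2, 2)))

print(neg_fraction(W_true, z))         # ~0: the separating rotation
print(neg_fraction(W_rand, z))         # > 0 for a generic rotation
```

Only the rotation that recovers (a permutation of) the sources keeps every output nonnegative, which is exactly the criterion the paper proposes to exploit.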
Learning in Linear Neural Networks: a Survey
 IEEE Transactions on neural networks
, 1995
Abstract

Cited by 60 (4 self)
Networks of linear units are the simplest kind of networks, where the basic questions related to learning, generalization, and self-organisation can sometimes be answered analytically. We survey most of the known results on linear networks, including: (1) backpropagation learning and the structure of the error function landscape; (2) the temporal evolution of generalization; and (3) unsupervised learning algorithms and their properties. The connections to classical statistical ideas, such as principal component analysis (PCA), are emphasized, as are several simple but challenging open questions. A few new results are also spread across the paper, including an analysis of the effect of noise on backpropagation networks and a unified view of all unsupervised algorithms.
Keywords: linear networks, supervised and unsupervised learning, Hebbian learning, principal components, generalization, local minima, self-organisation
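One of the classic unsupervised rules such a survey analyzes can be stated in a few lines. A sketch of Oja's single-unit Hebbian rule (the toy covariance, learning rate, and iteration count are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Oja's rule for a single linear unit y = w.x: a Hebbian growth term
# plus an implicit weight-decay term. The weight vector converges to
# the principal eigenvector of the data covariance, with unit norm.
C = np.array([[3.0, 1.0],
              [1.0, 2.0]])                  # assumed data covariance
L = np.linalg.cholesky(C)
w = rng.normal(size=2)
eta = 0.01                                  # learning rate (my choice)
for _ in range(20_000):
    x = L @ rng.normal(size=2)              # zero-mean sample with cov C
    y = w @ x
    w += eta * y * (x - y * w)              # Hebbian term + decay

top = np.linalg.eigh(C)[1][:, -1]           # principal eigenvector of C
print(abs(w @ top), np.linalg.norm(w))      # both close to 1
```

This is the kind of linear-network learning rule whose fixed points and stability the survey's analytical results characterize.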
A ‘nonnegative PCA’ algorithm for independent component analysis, 2002, submitted for publication
Abstract

Cited by 37 (3 self)
We consider the task of independent component analysis when the independent sources are known to be nonnegative and well-grounded, so that they have a nonzero probability density function (pdf) in the region of zero. We propose the use of a "nonnegative principal component analysis (nonnegative PCA)" algorithm, which is a special case of the nonlinear PCA algorithm but with a rectification nonlinearity, and we conjecture that this algorithm will find such nonnegative well-grounded independent sources under reasonable initial conditions. While the algorithm has proved difficult to analyze in the general case, we give some analytical results that are consistent with this conjecture and some numerical simulations that illustrate its operation.
Index Terms: independent component analysis, learning (artificial intelligence), matrix decomposition, principal component analysis, nonlinear principal component analysis, nonnegative PCA algorithm, nonnegative matrix factorization, nonzero probability density function, rectification nonlinearity, subspace learning rule
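The rectified update can be sketched concretely. The rule below is my reading of "the nonlinear PCA algorithm with a rectification nonlinearity"; the sample-by-sample form, the step size, and the demo setup are my own choices, and whether it actually separates depends on the conditions the paper studies:

```python
import numpy as np

def relu(y):
    """Rectification nonlinearity g(y) = max(y, 0)."""
    return np.maximum(y, 0.0)

def nnpca_step(W, x, eta):
    """One stochastic update of the rectified nonlinear-PCA subspace rule:
        W <- W + eta * g(y) (x - W^T g(y))^T,  with y = W x.
    """
    g = relu(W @ x)
    return W + eta * np.outer(g, x - W.T @ g)

# Tiny demo: samples that are an orthonormal rotation of two nonnegative,
# well-grounded (exponential) sources, i.e. the prewhitened setting.
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.normal(size=(2, 2)))   # orthonormal "mixing"
W = np.eye(2)
for _ in range(5000):
    z = U @ rng.exponential(size=2)
    W = nnpca_step(W, z, 0.02)
```

The conjecture in the paper is that, from reasonable initial conditions, the product W U approaches a nonnegative permutation matrix, i.e. the rule recovers the sources.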
BYY Harmony Learning, Independent State Space, and Generalized APT Financial Analyses
, 2001
Abstract

Cited by 24 (21 self)
First, the relationship between factor analysis (FA) and the well-known arbitrage pricing theory (APT) for financial markets is discussed comparatively, with a number of to-be-improved problems listed. An overview is made, from a unified perspective, of the related studies in the literatures of statistics, control theory, signal processing, and neural networks. Second, we introduce the fundamentals of the Bayesian Ying Yang (BYY) system and the harmony learning principle, which have been systematically developed in the past several years as a unified statistical framework for parameter learning, regularization, and model selection in both nontemporal and temporal stochastic environments. We further show that a specific case of the framework, called the BYY independent state space (ISS) system, provides a general guide for systematically tackling various FA-related learning tasks and the above to-be-improved problems for APT analyses. Third, on various specific cases of the BYY ISS s...
The Nonlinear PCA Criterion in Blind Source Separation: Relations with Other Approaches
 Neurocomputing
, 1998
Abstract

Cited by 21 (3 self)
We present new results on the nonlinear PCA (Principal Component Analysis) criterion in blind source separation (BSS). We derive the criterion in a form that allows easy comparisons with other BSS and Independent Component Analysis (ICA) contrast functions such as cumulants, Bussgang criteria, and information-theoretic contrasts. This clarifies how the nonlinearity should be chosen optimally. We also discuss the connections of the nonlinear PCA learning rule with the Bell-Sejnowski algorithm and the adaptive EASI algorithm. Furthermore, we show that a nonlinear PCA criterion can be minimized using least-squares approaches, leading to computationally efficient and fast-converging algorithms. The paper shows that nonlinear PCA is a versatile starting point for deriving different kinds of algorithms for blind signal processing problems.
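The least-squares form mentioned above can be written down directly. A sketch of one common statement of the criterion, J(W) = E ||x − Wᵀ g(Wx)||², estimated by a sample average (the exact convention for W and the choice of nonlinearity g vary; both are illustrative here):

```python
import numpy as np

def nlpca_cost(W, X, g):
    """Sample estimate of the least-squares nonlinear PCA criterion
    J(W) = E ||x - W^T g(W x)||^2, with X holding samples as columns."""
    Y = g(W @ X)                      # nonlinear outputs y = g(Wx)
    R = X - W.T @ Y                   # reconstruction residual
    return float(np.mean(np.sum(R * R, axis=0)))

rng = np.random.default_rng(4)
X = rng.normal(size=(2, 1000))

# With g = identity and a full orthonormal W the reconstruction is exact,
# recovering the linear PCA subspace criterion's minimum of zero; a
# nonlinear g makes the residual, and hence the criterion, nonzero.
print(nlpca_cost(np.eye(2), X, lambda y: y))
print(nlpca_cost(np.eye(2), X, np.tanh))
```

Minimizing J over (approximately orthonormal) W with a suitably chosen g is what yields the efficient least-squares BSS algorithms the abstract refers to.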
Bayesian Ying Yang system, best harmony learning, and Gaussian manifold based family
 Computational Intelligence: Research Frontiers, WCCI2008 Plenary/Invited Lectures. Lecture Notes in Computer Science
A new look at the power method for fast subspace tracking
 Digital Signal Processing
, 1999
Abstract

Cited by 16 (4 self)
A class of fast subspace tracking methods, such as the Oja method, the projection approximation subspace tracking (PAST) method, and the novel information criterion (NIC) method, can be viewed as power-based methods. Unlike many non-power-based methods, such as the Givens-rotation-based URV updating method and the operator restriction algorithm, the power-based methods with arbitrary initial conditions converge to the principal subspace of a vector sequence under a mild assumption. This paper elaborates on a natural version of the power method. The natural power method is shown to have the fastest convergence rate among the power-based methods. Three types of implementations of the natural power method are presented in detail, which require respectively O(n^2 p), O(np^2), and O(np) flops of computation at each iteration (update), where n is the dimension of the vector sequence and p is the dimension of the principal subspace. All three implementations are shown to be globally convergent under a mild assumption. The O(np) implementation of the natural power method is shown to be superior to the O(np) equivalents of the Oja, PAST, and NIC methods. Like all power-based methods, the natural power method can be easily modified via subspace deflation to track the principal components and, hence, the rank of the principal subspace. ©1999 Academic Press
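The common core of the power-based family is easy to illustrate. A simplified sketch in that spirit (plain orthogonal iteration with QR re-orthonormalization, not the paper's exact natural power method; the matrix sizes and iteration count are my own choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Power-based subspace iteration: multiply by the covariance matrix and
# re-orthonormalize. The C @ W product dominates the cost at O(n^2 p)
# per iteration, matching the paper's first implementation class.
n, p = 6, 2
B = rng.normal(size=(n, n))
C = B @ B.T                                 # covariance-like matrix
W = rng.normal(size=(n, p))
for _ in range(1000):
    W, _ = np.linalg.qr(C @ W)              # power step + orthonormalization

# The columns of W now span the principal subspace of C: the singular
# values of W^T U_p (cosines of the principal angles) approach 1.
top = np.linalg.eigh(C)[1][:, -p:]
overlap = np.linalg.svd(W.T @ top, compute_uv=False)
print(overlap)
```

Convergence from an arbitrary initialization to the principal subspace, governed by the eigenvalue gap, is exactly the global-convergence property the abstract attributes to power-based methods.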
An Experimental Comparison of Neural Algorithms for Independent Component Analysis and Blind Separation
, 1999
Abstract

Cited by 16 (5 self)
In this paper, we compare the performance of five prominent neural or adaptive algorithms designed for Independent Component Analysis (ICA) and blind source separation (BSS). In the first part of the study, we use artificial data to compare the accuracy, convergence speed, computational load, and other relevant properties of the algorithms. In the second part, the algorithms are applied to three different real-world data sets. The task is either blind source separation or finding interesting directions in the data for visualisation purposes. We develop criteria for selecting the most meaningful basis vectors of ICA and measuring the quality of the results. The comparison reveals characteristic differences between the studied ICA algorithms. The most important conclusions of our comparison are the robustness of the ICA algorithms with respect to modest modeling imperfections, and the superiority of fixed-point algorithms with respect to computational load.
Fast subspace tracking and neural network learning by a novel information criterion
 IEEE Trans. Signal Processing
, 1998
Abstract

Cited by 16 (2 self)
Abstract — We introduce a novel information criterion (NIC) for searching for the optimum weights of a two-layer linear neural network (NN). The NIC exhibits a single global maximum, attained if and only if the weights span the (desired) principal subspace of a covariance matrix. The other stationary points of the NIC are (unstable) saddle points. We develop an adaptive algorithm based on the NIC for estimating and tracking the principal subspace of a vector sequence. The NIC algorithm provides fast online learning of the optimum weights for the two-layer linear NN. We establish the connections between the NIC algorithm and conventional mean-square-error (MSE) based algorithms such as Oja's algorithm, LMSER, PAST, APEX, and GHA. The NIC algorithm has several key advantages, such as faster convergence, which is illustrated through analysis and simulation.
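A batch gradient-ascent sketch of an NIC-style objective. As I recall the criterion, it has the form J(W) = tr ln(WᵀCW) − tr(WᵀW), whose single global maximum is at orthonormal W spanning the principal subspace; the batch setting, step size, and iteration count below are my own choices, not the paper's adaptive algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

# Ascend J(W) = tr ln(W^T C W) - tr(W^T W); half its gradient is
# C W (W^T C W)^{-1} - W. Stationary points other than the global
# maximum are saddles, so generic ascent escapes them.
n, p = 5, 2
B = rng.normal(size=(n, n))
C = B @ B.T                                  # covariance matrix
W = 0.5 * rng.normal(size=(n, p))
eta = 0.1                                    # step size (my choice)
for _ in range(2000):
    grad = C @ W @ np.linalg.inv(W.T @ C @ W) - W
    W += eta * grad

# At the maximum, W is orthonormal and spans the top-p eigenspace of C.
top = np.linalg.eigh(C)[1][:, -p:]
overlap = np.linalg.svd(W.T @ top, compute_uv=False)
print(np.linalg.norm(W.T @ W - np.eye(p)), overlap)
```

The single-global-maximum structure is what lets such an ascent converge from generic initializations, in contrast to MSE-based criteria with flatter landscapes.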