Results 1–10 of 44
Conditions for nonnegative independent component analysis
IEEE Signal Processing Letters, 2002
"... We consider the noiseless linear independent component analysis problem, in the case where the hidden sources s are nonnegative. We assume that the random variables s i s are wellgrounded in that they have a nonvanishing pdf in the (positive) neighbourhood of zero. For an orthonormal rotation y = ..."
Abstract

Cited by 63 (11 self)
 Add to MetaCart
We consider the noiseless linear independent component analysis problem in the case where the hidden sources s are nonnegative. We assume that the random variables s_i are well-grounded, in that they have a non-vanishing pdf in the (positive) neighbourhood of zero. For an orthonormal rotation y = Wx of prewhitened observations x = QAs, we show that under certain reasonable conditions y is a permutation of s (apart from a scaling factor) if and only if y is nonnegative with probability 1. We suggest that this may enable the construction of practical learning algorithms, particularly for sparse nonnegative sources.
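The paper's nonnegativity condition can be illustrated numerically. The sketch below is an illustrative assumption, not code from the paper: it mixes two exponential sources (whose pdfs are nonzero just above zero, i.e. well-grounded), whitens the observations, and scans orthonormal rotations y = Wz, checking that y is (almost) free of negative entries only near a separating rotation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two nonnegative, well-grounded sources (illustrative choice).
s = rng.exponential(size=(2, 20000))
A = np.array([[1.0, 0.4],
              [0.3, 1.0]])          # hypothetical mixing matrix
x = A @ s

# Whiten using the covariance of x; applying Q to the raw (uncentred)
# data keeps the nonnegativity of the sources recoverable.
d, E = np.linalg.eigh(np.cov(x))
Q = E @ np.diag(d ** -0.5) @ E.T
z = Q @ x                            # whitened observations

def neg_fraction(theta):
    """Fraction of negative entries in y = W(theta) z for a 2-D rotation."""
    W = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return np.mean(W @ z < 0)

thetas = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
fracs = [neg_fraction(t) for t in thetas]

# Only rotations close to a separating permutation leave y (almost)
# nonnegative; generic rotations produce many negative samples.
print(f"min negative fraction: {min(fracs):.4f}")
print(f"max negative fraction: {max(fracs):.4f}")
```

The minimum over the scan is close to zero (sampling error only), while rotations away from a permutation of the sources produce a large fraction of negative outputs, in line with the if-and-only-if condition.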
Learning in Linear Neural Networks: a Survey
IEEE Transactions on Neural Networks, 1995
"... Networks of linear units are the simplest kind of networks, where the basic questions related to learning, generalization, and selforganisation can sometimes be answered analytically. We survey most of the known results on linear networks, including: (1) backpropagation learning and the structure ..."
Abstract

Cited by 56 (4 self)
 Add to MetaCart
Networks of linear units are the simplest kind of networks, where the basic questions related to learning, generalization, and self-organisation can sometimes be answered analytically. We survey most of the known results on linear networks, including: (1) backpropagation learning and the structure of the error function landscape; (2) the temporal evolution of generalization; (3) unsupervised learning algorithms and their properties. The connections to classical statistical ideas, such as principal component analysis (PCA), are emphasized, as well as several simple but challenging open questions. A few new results are also spread across the paper, including an analysis of the effect of noise on backpropagation networks and a unified view of all unsupervised algorithms. Keywords: linear networks, supervised and unsupervised learning, Hebbian learning, principal components, generalization, local minima, self-organisation. I. Introduction: This paper addresses the problems of supervise...
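The Hebbian/PCA territory this survey covers can be sketched with Oja's classic single-unit rule, a standard result rather than anything specific to the survey (the data, step size, and iteration counts below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic zero-mean data with a clear principal direction (illustrative).
C = np.array([[3.0, 1.0],
              [1.0, 1.0]])          # true covariance
L = np.linalg.cholesky(C)
X = L @ rng.standard_normal((2, 5000))

# Oja's rule: w <- w + eta * y * (x - y * w), with y = w . x.
# The -eta * y^2 * w term normalises the plain Hebbian update, so w
# converges (in direction) to the leading eigenvector of the covariance.
w = np.array([1.0, 0.0])
eta = 0.01
for _ in range(10):                  # several passes over the data
    for x in X.T:
        y = w @ x
        w += eta * y * (x - y * w)

# Compare with the eigenvector obtained by plain linear algebra.
_, V = np.linalg.eigh(C)
v_top = V[:, -1]                     # leading eigenvector of C
cosine = abs(w @ v_top) / np.linalg.norm(w)
print(f"|cos(angle to top eigenvector)| = {cosine:.3f}")
```

After a few passes the learned weight vector aligns with the top principal component and its norm settles near 1, which is the analytically known fixed point of the rule.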
A ‘nonnegative PCA’ algorithm for independent component analysis, 2002, submitted for publication
"... We consider the task of independent component analysis when the independent sources are known to be nonnegative and wellgrounded, so that they have a nonzero probability density function (pdf) in the region of zero. We propose the use of a "nonnegative principal component analysis (nonnegative PCA) ..."
Abstract

Cited by 26 (2 self)
 Add to MetaCart
We consider the task of independent component analysis when the independent sources are known to be nonnegative and well-grounded, so that they have a nonzero probability density function (pdf) in the region of zero. We propose the use of a "nonnegative principal component analysis (nonnegative PCA)" algorithm, which is a special case of the nonlinear PCA algorithm but with a rectification nonlinearity, and we conjecture that this algorithm will find such nonnegative well-grounded independent sources under reasonable initial conditions. While the algorithm has proved difficult to analyze in the general case, we give some analytical results that are consistent with this conjecture and some numerical simulations that illustrate its operation. Index Terms: independent component analysis; learning (artificial intelligence); matrix decomposition; principal component analysis; nonlinear principal component analysis; nonnegative PCA algorithm; nonnegative matrix factorization; nonzero probability density function; rectification nonlinearity; subspace learning rule.
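A minimal sketch of the idea, under stated assumptions rather than the authors' own code: treat nonnegative PCA as minimizing the rectified reconstruction error J(W) = E‖z − Wᵀg(Wz)‖² with g(y) = max(y, 0) over a whitened, uncentred mixture, here by plain batch gradient descent (the sources, mixing matrix, and step size are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Nonnegative well-grounded sources and a hypothetical mixing matrix.
S = rng.exponential(size=(2, 4000))
A = np.array([[1.0, 0.5],
              [0.2, 1.0]])
X = A @ S

# Whiten without centring so the sources stay nonnegative in principle.
d, E = np.linalg.eigh(np.cov(X))
Q = E @ np.diag(d ** -0.5) @ E.T
Z = Q @ X

def loss_and_grad(W, Z):
    """J(W) = mean ||z - W^T g(Wz)||^2 with g = rectification (ReLU)."""
    Y = W @ Z
    R = np.maximum(Y, 0.0)                    # g(Wz)
    Err = Z - W.T @ R                         # reconstruction error
    n = Z.shape[1]
    J = np.mean(np.sum(Err ** 2, axis=0))
    # Gradient of J w.r.t. W: -2/n * (g(Wz) e^T + (g'(Wz) * (W e)) z^T)
    G = -2.0 / n * (R @ Err.T + ((Y > 0) * (W @ Err)) @ Z.T)
    return J, G

W = np.linalg.qr(rng.standard_normal((2, 2)))[0]   # random initial rotation
eta = 0.02
history = []
for _ in range(800):
    J, G = loss_and_grad(W, Z)
    history.append(J)
    W -= eta * G

print(f"J: {history[0]:.4f} -> {history[-1]:.4f}")
```

The objective decreases as W rotates toward an orientation whose rectified outputs reconstruct the whitened data, which is the behaviour the conjecture describes for nonnegative well-grounded sources.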
BYY Harmony Learning, Independent State Space, and Generalized APT Financial Analyses, 2001
"... First, the relationship between factor analysis (FA) and the wellknown arbitrage pricing theory (APT) for financial market has been discussed comparatively, with a number of tobeimproved problems listed. An overview has been made from a unified perspective on the related studies in the literature ..."
Abstract

Cited by 23 (20 self)
 Add to MetaCart
First, the relationship between factor analysis (FA) and the well-known arbitrage pricing theory (APT) for financial markets has been discussed comparatively, with a number of to-be-improved problems listed. An overview has been made from a unified perspective on the related studies in the literature of statistics, control theory, signal processing, and neural networks. Second, we introduce the fundamentals of the Bayesian Ying Yang (BYY) system and the harmony learning principle, which has been systematically developed in the past several years as a unified statistical framework for parameter learning, regularization, and model selection, in both non-temporal and temporal stochastic environments. We further show that a specific case of the framework, called the BYY independent state space (ISS) system, provides a general guide for systematically tackling various FA-related learning tasks and the above to-be-improved problems in the APT analyses. Third, on various specific cases of the BYY ISS s...
The Nonlinear PCA Criterion in Blind Source Separation: Relations with Other Approaches
Neurocomputing, 1998
"... We present new results on the nonlinear PCA (Principal Component Analysis) criterion in blind source separation (BSS). We derive the criterion in a form that allows easy comparisons with other BSS and Independent Component Analysis (ICA) contrast functions like cumulants, Bussgang criteria, and info ..."
Abstract

Cited by 19 (3 self)
 Add to MetaCart
We present new results on the nonlinear PCA (Principal Component Analysis) criterion in blind source separation (BSS). We derive the criterion in a form that allows easy comparison with other BSS and Independent Component Analysis (ICA) contrast functions such as cumulants, Bussgang criteria, and information-theoretic contrasts. This clarifies how the nonlinearity should be chosen optimally. We also discuss the connections of the nonlinear PCA learning rule with the Bell-Sejnowski algorithm and the adaptive EASI algorithm. Furthermore, we show that a nonlinear PCA criterion can be minimized using least-squares approaches, leading to computationally efficient and fast-converging algorithms. The paper shows that nonlinear PCA is a versatile starting point for deriving different kinds of algorithms for blind signal processing problems.
An Experimental Comparison of Neural Algorithms for Independent Component Analysis and Blind Separation, 1999
"... In this paper, we compare the performance of five prominent neural or adaptive algorithms designed for Independent Component Analysis (ICA) and blind source separation (BSS). In the first part of the study, we use artificial data for comparing the accuracy, convergence speed, computational load, and ..."
Abstract

Cited by 13 (4 self)
 Add to MetaCart
In this paper, we compare the performance of five prominent neural or adaptive algorithms designed for Independent Component Analysis (ICA) and blind source separation (BSS). In the first part of the study, we use artificial data to compare the accuracy, convergence speed, computational load, and other relevant properties of the algorithms. In the second part, the algorithms are applied to three different real-world data sets. The task is either blind source separation or finding interesting directions in the data for visualisation purposes. We develop criteria for selecting the most meaningful basis vectors of ICA and for measuring the quality of the results. The comparison reveals characteristic differences between the studied ICA algorithms. The most important conclusions of our comparison are the robustness of the ICA algorithms with respect to modest modeling imperfections, and the superiority of fixed-point algorithms with respect to computational load.
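One widely used separation-quality criterion in such comparisons is the Amari performance index of the combined mixing/separating matrix P = W·A; it is a standard measure in the BSS literature, though not necessarily the exact criterion this paper develops. It is zero precisely when P is a scaled permutation, i.e. when separation is perfect up to order and scale:

```python
import numpy as np

def amari_index(P):
    """Amari performance index of P = W @ A.
    0 iff P is a scaled permutation matrix; larger means worse separation."""
    P = np.abs(np.asarray(P, dtype=float))
    n = P.shape[0]
    # Row term: how far each output is from containing a single source.
    rows = np.sum(P / P.max(axis=1, keepdims=True), axis=1) - 1.0
    # Column term: how far each source is from appearing in a single output.
    cols = np.sum(P / P.max(axis=0, keepdims=True), axis=0) - 1.0
    return (rows.sum() + cols.sum()) / (2.0 * n * (n - 1))

# Perfect separation: P is a scaled permutation -> index 0.
perfect = np.array([[0.0, 2.0],
                    [-1.5, 0.0]])
print(amari_index(perfect))        # -> 0.0

# Poor separation: every output still mixes both sources.
poor = np.array([[1.0, 0.9],
                 [0.8, 1.0]])
print(amari_index(poor))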
Bayesian Ying Yang system, best harmony learning, and Gaussian manifold based family
Computational Intelligence: Research Frontiers, WCCI 2008 Plenary/Invited Lectures. Lecture Notes in Computer Science
"... five action circling ..."
Blind Separation Of Positive Sources Using Non-Negative PCA
In 4th International Symposium on Independent Component Analysis and Blind Signal Separation, 2003
"... The instantaneous noisefree linear mixing model in independent component analysis is largely a solved problem under the usual assumption of independent nongaussian sources and full rank mixing matrix. However, with some prior information on the sources, like positivity, new analysis and perhaps si ..."
Abstract

Cited by 11 (0 self)
 Add to MetaCart
The instantaneous noise-free linear mixing model in independent component analysis is largely a solved problem under the usual assumptions of independent non-Gaussian sources and a full-rank mixing matrix. However, with some prior information on the sources, such as positivity, new analysis and perhaps simplified solution methods may yet become possible. In this paper, we consider the task of independent component analysis when the independent sources are known to be nonnegative and well-grounded, which means that they have a nonzero pdf in the region of zero. We propose the use of a 'Non-Negative PCA' algorithm, which is a special case of the nonlinear PCA algorithm but with a rectification nonlinearity, and we show that this algorithm will find such nonnegative well-grounded independent sources. Although the algorithm has proved difficult to analyze in the general case, we give an analytical convergence result here, complemented by a numerical simulation which illustrates its operation.
Bayesian Kullback Ying-Yang dependence reduction theory
Neurocomputing, 1998
"... Bayesian Kullback YingYang dependence reduction system and theory is presented. Via stochastic approximation, implementable algorithms and criteria are given for parameter learning and model selection, respectively. Three typical architectures are further studied on several special cases. The for ..."
Abstract

Cited by 10 (9 self)
 Add to MetaCart
A Bayesian Kullback Ying-Yang dependence reduction system and theory is presented. Via stochastic approximation, implementable algorithms and criteria are given for parameter learning and model selection, respectively. Three typical architectures are further studied in several special cases. The forward one is a general information-theoretic dependence reduction model that maps an observation x into a representation y of k independent components, with k detectable by criteria. For the special case of an invertible map x → y, a general adaptive algorithm is obtained, which not only is applicable to nonlinear or post-nonlinear mixtures, but also provides an adaptive EM algorithm that implements the previously proposed learned parametric mixture method for independent component analysis (ICA) on linear mixtures. The backward architecture provides a maximum likelihood independent factor model for modeling observations from an unknown number of independent factors via a linear or nonlinear system in noisy situations. For the special cases of a linear or post-nonlinear mixture under Gaussian noise, the simplified adaptive algorithm and the criterion for detecting k are given, with an approximately optimal linear mapping x → y suggested. Moreover, if the independent factors are assumed to be standard Gaussians, we are further led to conventional factor analysis, but with a new adaptive algorithm for its estimation and a criterion for deciding the number of factors. The bidirectional architecture combines the advantages of the backward and forward ones. A mean field approximation is presented, with a simplified adaptive parameter learning algorithm and an approximate k-selection criterion. Moreover, its special cases lead to the existing least mean square error reconstruction learning and...