Results 1–10 of 16
Independent Component Analysis
Neural Computing Surveys, 2001
Cited by 1492 (93 self)
Abstract:
A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research is finding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the original data. Well-known linear transformation methods include, for example, principal component analysis, factor analysis, and projection pursuit. A recently developed linear transformation method is independent component analysis (ICA), in which the desired representation is the one that minimizes the statistical dependence of the components of the representation. Such a representation seems to capture the essential structure of the data in many applications. In this paper, we survey the existing theory and methods for ICA.
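The linear-representation setting described in this abstract can be sketched numerically. The following minimal illustration (all names and values are made up for the example) generates data as x = As from independent sources and applies PCA whitening, a standard preprocessing step for the methods listed above: it removes all second-order dependence, leaving only the higher-order structure that ICA then targets.

```python
import numpy as np

# Observed data follow the linear model x = A s, and we look for a linear
# transform giving back the components. PCA whitening makes the data
# uncorrelated (second-order independent); ICA removes the remaining
# higher-order dependence.
rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, size=(2, 10000))         # independent sources
A = np.array([[2.0, 1.0], [1.0, 1.0]])          # unknown mixing matrix
x = A @ s                                       # observed mixtures

# Whitening transform V from the eigendecomposition of the covariance
C = np.cov(x)
d, E = np.linalg.eigh(C)
V = E @ np.diag(d ** -0.5) @ E.T
z = V @ x                                       # whitened data: cov(z) ~ I
print(np.round(np.cov(z), 2))
```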
Fast and robust fixed-point algorithms for independent component analysis
IEEE Trans. Neural Networks, 1999
Cited by 511 (34 self)
Abstract:
Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible. In this paper, we use a combination of two different approaches for linear ICA: Comon’s information-theoretic approach and the projection pursuit approach. Using maximum entropy approximations of differential entropy, we introduce a family of new contrast (objective) functions for ICA. These contrast functions enable both the estimation of the whole decomposition by minimizing mutual information, and estimation of individual independent components as projection pursuit directions. The statistical properties of the estimators based on such contrast functions are analyzed under the assumption of the linear mixture model, and it is shown how to choose contrast functions that are robust and/or of minimum variance. Finally, we introduce simple fixed-point algorithms for practical optimization of the contrast functions. These algorithms optimize the contrast functions very fast and reliably.
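The one-unit fixed-point iteration the abstract refers to can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact algorithm: it uses the common tanh contrast, the update w ← E[z·g(wᵀz)] − E[g′(wᵀz)]·w on whitened data z, and a toy two-source mixture.

```python
import numpy as np

# One-unit fixed-point iteration with the tanh nonlinearity:
#   w <- E[z g(w^T z)] - E[g'(w^T z)] w,  then renormalize w.
rng = np.random.default_rng(1)
s = np.vstack([rng.laplace(size=20000), rng.uniform(-1, 1, 20000)])
x = np.array([[1.0, 0.5], [0.3, 1.0]]) @ s      # mixed observations

# whiten the observations first
d, E = np.linalg.eigh(np.cov(x))
z = (E @ np.diag(d ** -0.5) @ E.T) @ x

w = np.array([1.0, 1.0]) / np.sqrt(2)
for _ in range(100):
    y = w @ z
    w = (z * np.tanh(y)).mean(axis=1) - (1 - np.tanh(y) ** 2).mean() * w
    w /= np.linalg.norm(w)

y = w @ z
corr = [abs(np.corrcoef(y, si)[0, 1]) for si in s]
print(max(corr))   # the estimate closely matches one of the sources
```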
Emergence of Phase- and Shift-Invariant Features by Decomposition of Natural Images into Independent Feature Subspaces
2000
Cited by 169 (33 self)
Abstract:
In this article, we show that the same principle of independence maximization can explain the emergence of phase- and shift-invariant features, similar to those found in complex cells. This new kind of emergence is obtained by maximizing the independence between norms of projections on linear subspaces (instead of the independence of simple linear filter outputs). The norms of the projections on such "independent feature subspaces" then indicate the values of invariant features.
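The invariance of a subspace norm can be demonstrated with a toy quadrature pair: two filters (cosine and sine of the same frequency) span a 2-D subspace, and the norm of the projection coefficients is the local energy, which does not change when the input's phase shifts. This is only an illustrative construction, not the paper's learned subspaces.

```python
import numpy as np

# Feature = NORM of the projection onto a small subspace, not a single
# linear filter output. For a quadrature pair the norm is phase-invariant.
n = 128
t = np.arange(n)
omega = 2 * np.pi * 4 / n                       # 4 full periods per window
w1, w2 = np.cos(omega * t), np.sin(omega * t)   # basis of the 2-D subspace

def subspace_norm(x):
    return np.hypot(w1 @ x, w2 @ x)             # norm of projection coeffs

f0 = subspace_norm(np.cos(omega * t))           # original phase
f1 = subspace_norm(np.cos(omega * t + 1.2))     # shifted phase
print(f0, f1)                                   # equal up to rounding
```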
Independent component analysis in the presence of gaussian noise by maximizing joint likelihood
Neurocomputing, 1998
Cited by 32 (3 self)
Abstract:
We consider the estimation of the data model of independent component analysis when gaussian noise is present. We show that the joint maximum likelihood estimation of the independent components and the mixing matrix leads to an objective function already proposed by Olshausen and Field using a different derivation. Due to the complicated nature of the objective function, we introduce approximations that greatly simplify the optimization problem. We show that the presence of noise implies that the relation between the observed data and the estimates of the independent components is nonlinear, and show how to approximate this nonlinearity. In particular, the nonlinearity may be approximated by a simple shrinkage operation in the case of supergaussian (sparse) data. Using these approximations, we propose an efficient algorithm for approximate maximization of the likelihood. In the case of supergaussian components, this may be approximated by simple competitive learning, and in the case of subgaussian components, by anti-competitive learning. Key words: Independent component analysis, blind source separation, maximum likelihood, competitive learning, neural networks.
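The "simple shrinkage operation" mentioned for supergaussian (sparse) data can be sketched as soft-thresholding: a noisy estimate is pulled toward zero by a threshold that depends on the noise level. This is an illustrative form; the exact operator in the paper depends on the source density.

```python
import numpy as np

# Soft-thresholding shrinkage: small values (likely pure noise) are set
# to exactly zero, large values are shrunk toward zero by t.
def shrink(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

u = np.array([-2.0, -0.3, 0.1, 1.5])
print(shrink(u, 0.5))     # small inputs go to exactly zero
```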
Simple Neuron Models for Independent Component Analysis
Int. Journal of Neural Systems, 1997
Cited by 22 (3 self)
Abstract:
Recently, several neural algorithms have been introduced for Independent Component Analysis. Here we approach the problem from the point of view of a single neuron. First, simple Hebbian-like learning rules are introduced for estimating one of the independent components from sphered data. Some of the learning rules can be used to estimate an independent component which has a negative kurtosis, and the others estimate a component of positive kurtosis. Next, a two-unit system is introduced to estimate an independent component of any kurtosis. The results are then generalized to estimate independent components from non-sphered (raw) mixtures. To separate several independent components, a system of several neurons with linear negative feedback is used. The convergence of the learning rules is rigorously proven without any unnecessary hypotheses on the distributions of the independent components.
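A Hebbian-like rule of the kind described here can be sketched as follows, as a loose illustration rather than the paper's exact rule: for sphered data z, the single neuron's weight is updated by Δw ∝ z·g(wᵀz) with a cubic nonlinearity, with the sign of the update selecting positive- vs negative-kurtosis components, and an explicit normalization keeping w on the unit sphere.

```python
import numpy as np

# Single-neuron Hebbian-like update dw ~ z * y^3 (positive-kurtosis
# variant) on roughly sphered data, with renormalization after each step.
rng = np.random.default_rng(2)
z = rng.laplace(size=(2, 5000))
z = z - z.mean(axis=1, keepdims=True)
z /= z.std(axis=1, keepdims=True)          # crude sphering (independent data)

w = np.array([1.0, 0.0])
eta = 0.01
for x in z.T:
    y = w @ x
    w = w + eta * x * (y ** 3)             # Hebbian-like learning step
    w /= np.linalg.norm(w)                 # keep the weight vector unit-norm

print(np.linalg.norm(w))                   # stays on the unit sphere
```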
Gaussian Moments for Noisy Independent Component Analysis
1999
Cited by 18 (2 self)
Abstract:
A novel approach for the problem of estimating the data model of independent component analysis (or blind source separation) in the presence of gaussian noise is introduced. We define the gaussian moments of a random variable as the expectations of the gaussian function (and some related functions) with different scale parameters, and show how the gaussian moments of a random variable can be estimated from noisy observations. This enables us to use gaussian moments as one-unit contrast functions that have no asymptotic bias even in the presence of noise, and that are robust against outliers. To implement the maximization of the contrast functions based on gaussian moments, a modification of the fixed-point (FastICA) algorithm is introduced.
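The reason gaussian moments can be estimated without bias from noisy data is that a gaussian kernel convolved with gaussian noise is again a gaussian kernel with the variances added. A sketch of this identity (scale names d, sigma are illustrative, not the paper's notation): with φ_c(u) = exp(−u²/(2c²)) and x = s + n, n ~ N(0, σ²), one gets E[φ_d(s)] = (d/√(d²−σ²)) · E[φ_√(d²−σ²)(x)].

```python
import numpy as np

# Estimate a gaussian moment of the clean signal s using only the noisy
# observations x, via the gaussian-convolution identity above.
rng = np.random.default_rng(3)
s = rng.laplace(size=400000)
sigma = 0.5
x = s + rng.normal(scale=sigma, size=s.size)    # noisy observations

def phi(u, c):
    return np.exp(-u ** 2 / (2 * c ** 2))       # gaussian kernel, scale c

d = 1.0
c = np.sqrt(d ** 2 - sigma ** 2)                # reduced scale for noisy data
direct = phi(s, d).mean()                       # needs the clean signal
from_noisy = (d / c) * phi(x, c).mean()         # uses only noisy data
print(direct, from_noisy)                       # the two agree closely
```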
A Fast Algorithm for Estimating Overcomplete ICA Bases for Image Windows
In Proc. Int. Joint Conf. on Neural Networks, 1999
Cited by 15 (2 self)
Abstract:
We introduce a very fast method for estimating overcomplete bases of independent components from image data. This is based on the concept of quasi-orthogonality, which means that in a very high-dimensional space, there can be a large, overcomplete set of vectors that are almost orthogonal to each other. Thus we may estimate an overcomplete basis by using one-unit ICA algorithms and forcing only partial decorrelation between the different independent components. The method can be implemented using a modification of the FastICA algorithm, which leads to a computationally highly efficient method.
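The quasi-orthogonality this abstract relies on is easy to verify numerically: in a high-dimensional space, even a large set of random unit vectors is already nearly orthogonal, so an overcomplete basis only needs partial decorrelation rather than exact orthogonality. A quick illustration (dimensions chosen arbitrarily):

```python
import numpy as np

# 300 random unit vectors in R^1000: the largest pairwise |cosine|
# stays small, i.e. the set is quasi-orthogonal despite being large.
rng = np.random.default_rng(4)
dim, n = 1000, 300
W = rng.normal(size=(n, dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)

G = W @ W.T                              # matrix of pairwise cosines
off = np.abs(G[~np.eye(n, dtype=bool)])  # off-diagonal entries only
print(off.max())                         # largest |cosine| between pairs
```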
Independent Component Analysis by Minimization of Mutual Information
1997
Cited by 15 (0 self)
Abstract:
Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible. In this paper, the linear version of the ICA problem is approached from an information-theoretic viewpoint, using Comon's framework of minimizing mutual information of the components. Using maximum entropy approximations of differential entropy, we introduce a family of new contrast (objective) functions for ICA, which can also be considered 1-D projection pursuit indexes. The statistical properties of the estimators based on such contrast functions are analyzed under the assumption of the linear mixture model. It is shown how to choose optimal contrast functions according to different criteria. Novel algorithms for maximizing the contrast functions are then introduced. Hebbian-like learning rules are shown to result from gradient descent methods. Finally, in order to speed up the conv...
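A maximum-entropy contrast of the family described here can be sketched as J(y) ≈ (E[G(y)] − E[G(ν)])², where G is a nonquadratic function (here G(u) = log cosh u, a common choice) and ν is standard gaussian. J is approximately zero for gaussian data and grows with non-gaussianity, so maximizing it over projections pushes mutual information down. The specific G and reference-sample construction below are illustrative.

```python
import numpy as np

# Negentropy-style contrast: J(y) = (E[G(y)] - E[G(nu)])^2 with
# G = log cosh and nu a standard gaussian reference sample.
rng = np.random.default_rng(5)

def J(y, n_ref=1000000):
    nu = rng.normal(size=n_ref)                  # gaussian reference
    G = lambda u: np.log(np.cosh(u))
    return (G(y).mean() - G(nu).mean()) ** 2

gauss = rng.normal(size=200000)                  # gaussian sample
lap = rng.laplace(size=200000) / np.sqrt(2)      # unit-variance laplacian
print(J(gauss), J(lap))                          # laplacian scores higher
```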
A Simple Threshold Nonlinearity For Blind Signal Separation
In Proc. ISCAS, 2000
Cited by 8 (8 self)
Abstract:
A computationally simple nonlinearity in the form of a threshold device is shown to serve as a contrast function in blind signal separation. Convergence is shown to be robust, fast, and comparable with that of more complex polynomial nonlinearities. Together with the known signum nonlinearity for super-Gaussian distributions, which is basically a threshold device with the threshold set to zero, the general threshold nonlinearity (with an appropriate threshold) can separate any non-Gaussian signals.

1. INTRODUCTION
Blind signal separation using higher-order statistics either explicitly or implicitly has attracted many researchers whose main goal is to separate a set of mixed signals as fast as possible with the smallest residual mixing. Throughout this paper we assume a linear mixing and separation process as depicted in Fig. 1.

[Figure 1: Blind source separation model — sources s pass through the mixing process A to the sensors x, and through the separation process W to the separated sources u.]

The measured signals x = [x1, . . .
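One plausible form of the threshold device described above is a signum with a dead zone; this is an illustrative reconstruction, and the paper's exact device may differ. With threshold zero it reduces to the plain signum nonlinearity mentioned for super-Gaussian distributions.

```python
import numpy as np

# Threshold nonlinearity: outputs are restricted to {-1, 0, +1}, so the
# device is cheap to evaluate compared with polynomial nonlinearities.
def threshold_nl(u, theta):
    return np.sign(u) * (np.abs(u) > theta)

u = np.array([-1.2, -0.1, 0.0, 0.4, 2.0])
print(threshold_nl(u, 0.5))      # dead zone around zero
print(threshold_nl(u, 0.0))      # reduces to the signum (sign(0) = 0)
```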
ICA of complex-valued signals: a fast and robust deflationary algorithm
Proc. of IJCNN, 2000
Cited by 6 (0 self)
Abstract:
Separation of complex-valued signals is a frequently arising problem in signal processing.