Results 1-10 of 91
Independent Component Analysis
Neural Computing Surveys, 2001
"... A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is nding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the ..."
Abstract

Cited by 1492 (93 self)
 Add to MetaCart
A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research is finding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the original data. Well-known linear transformation methods include, for example, principal component analysis, factor analysis, and projection pursuit. A recently developed linear transformation method is independent component analysis (ICA), in which the desired representation is the one that minimizes the statistical dependence of the components of the representation. Such a representation seems to capture the essential structure of the data in many applications. In this paper, we survey the existing theory and methods for ICA.
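As a concrete illustration of the linear-transformation view surveyed above (a minimal sketch, not taken from the paper; scikit-learn's FastICA stands in for the general ICA estimators it reviews):

# Separate two synthetic independent sources from their linear mixtures.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
s = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]  # two independent sources
A = np.array([[1.0, 0.5], [0.5, 2.0]])            # unknown mixing matrix
x = s @ A.T                                       # observed data: linear mixtures

ica = FastICA(n_components=2, random_state=0)
s_hat = ica.fit_transform(x)  # recovered sources, up to order and scale
A_hat = ica.mixing_           # estimated mixing matrix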
Fast and robust fixed-point algorithms for independent component analysis
IEEE Trans. Neural Netw., 1999
"... Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible. In this paper, we use a combination of two different approaches for linear ICA: Comon’s informat ..."
Abstract

Cited by 511 (34 self)
 Add to MetaCart
Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible. In this paper, we use a combination of two different approaches for linear ICA: Comon’s information-theoretic approach and the projection pursuit approach. Using maximum entropy approximations of differential entropy, we introduce a family of new contrast (objective) functions for ICA. These contrast functions enable both the estimation of the whole decomposition by minimizing mutual information, and estimation of individual independent components as projection pursuit directions. The statistical properties of the estimators based on such contrast functions are analyzed under the assumption of the linear mixture model, and it is shown how to choose contrast functions that are robust and/or of minimum variance. Finally, we introduce simple fixed-point algorithms for practical optimization of the contrast functions. These algorithms optimize the contrast functions very fast and reliably.
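A rough NumPy sketch of the one-unit fixed-point iteration this abstract refers to, using the tanh contrast function; it assumes the data have already been centered and whitened, and estimating several components would additionally need a decorrelation (deflation) step, omitted here:

import numpy as np

def fastica_one_unit(z, n_iter=200, seed=0):
    # z: whitened data, shape (dims, samples)
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wz = w @ z                                         # projections w^T z
        g, g_prime = np.tanh(wz), 1.0 - np.tanh(wz) ** 2   # contrast and derivative
        w_new = (z * g).mean(axis=1) - g_prime.mean() * w  # fixed-point step
        w_new /= np.linalg.norm(w_new)                     # back to the unit sphere
        if abs(w_new @ w) > 1.0 - 1e-9:                    # converged (up to sign)
            return w_new
        w = w_new
    return w  # one component direction: s_hat = w @ z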
Independent Component Analysis Using an Extended Infomax Algorithm for Mixed Sub-Gaussian and Super-Gaussian Sources
1999
"... An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able to blindly separate mixed signals with sub and superGaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a pro ..."
Abstract

Cited by 202 (21 self)
 Add to MetaCart
An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able to blindly separate mixed signals with sub- and super-Gaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability distributions that have sub- and super-Gaussian regimes were used to derive a general learning rule that preserves the simple architecture proposed by Bell and Sejnowski (1995), is optimized using the natural gradient of Amari (1998), and uses the stability analysis of Cardoso and Laheld (1996) to switch between sub- and super-Gaussian regimes. We demonstrate that the extended infomax algorithm is able to easily separate 20 sources with a variety of source distributions. Applied to high-dimensional data from electroencephalographic (EEG) recordings, it is effective at separating artifacts such as eye blinks and line noise from weaker electrical ...
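A hedged sketch of one batch step of the learning rule described above (the natural-gradient update with a sign-switching diagonal matrix K; it assumes centered data X of shape (channels, samples) and is an illustration, not the authors' reference code):

import numpy as np

def extended_infomax_step(W, X, lr=1e-3):
    n, T = X.shape
    U = W @ X  # current source estimates u = W x
    # Stability-based switching: k_i = +1 for super-Gaussian components,
    # -1 for sub-Gaussian ones.
    k = np.sign(np.mean(1.0 / np.cosh(U) ** 2, axis=1) * np.mean(U ** 2, axis=1)
                - np.mean(U * np.tanh(U), axis=1))
    K = np.diag(k)
    # Natural-gradient update: dW = (I - K tanh(U) U^T/T - U U^T/T) W
    dW = (np.eye(n) - K @ np.tanh(U) @ U.T / T - U @ U.T / T) @ W
    return W + lr * dW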
Independent component analysis of electroencephalographic data
Adv. Neural Inform. Process. Syst., 1996
"... The electroencephalogram (EEG) is a noninvasive measure of brain electrical activity recorded as changes in potential difference between points on the human scalp. Because of volume conduction through cerebrospinal fluid, skull and scalp, EEG data collected from any point on the scalp includes acti ..."
Abstract

Cited by 194 (53 self)
 Add to MetaCart
The electroencephalogram (EEG) is a noninvasive measure of brain electrical activity recorded as changes in potential difference between points on the human scalp. Because of volume conduction through cerebrospinal fluid, skull and scalp, EEG data collected from any point on the scalp includes activity from processes occurring within a large brain volume.
Face recognition by independent component analysis
IEEE Transactions on Neural Networks, 2002
"... Abstract—A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such ..."
Abstract

Cited by 189 (4 self)
 Add to MetaCart
A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise relationships between pixels in the image database. In a task such as face recognition, in which important information may be contained in the high-order relationships among pixels, it seems reasonable to expect that better basis images may be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method. We used a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET database under two different architectures, one which treated the images as random variables and the pixels as outcomes, and a second which treated the pixels as random variables and the images as outcomes. The first architecture found spatially local basis images for the faces. The second architecture produced a factorial face code. Both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression. A classifier that combined the two ICA representations gave the best performance. Index Terms: eigenfaces, face recognition, independent component analysis (ICA), principal component analysis (PCA), unsupervised learning.
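A schematic sketch of the two architectures described above (assumptions: a data matrix faces of shape (n_images, n_pixels); scikit-learn's FastICA stands in for the infomax ICA used in the paper, and the PCA dimensionality reduction the authors apply first is omitted):

import numpy as np
from sklearn.decomposition import FastICA

faces = np.random.default_rng(0).standard_normal((100, 4096))  # placeholder images

# Architecture I: images are random variables, pixels are outcomes.
# Run ICA on the transposed matrix; components are spatially local basis images.
basis_images = FastICA(n_components=20, random_state=0).fit_transform(faces.T).T

# Architecture II: pixels are random variables, images are outcomes.
# Run ICA on the matrix directly; each face receives a factorial code.
factorial_codes = FastICA(n_components=20, random_state=0).fit_transform(faces)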
Emergence of Phase- and Shift-Invariant Features by Decomposition of Natural Images into Independent Feature Subspaces
2000
"... this article, we show that the same principle of independence maximization can explain the emergence of phase and shiftinvariant features, similar to those found in complex cells. This new kind of emergence is obtained by maximizing the independence between norms of projections on linear subspaces ..."
Abstract

Cited by 169 (33 self)
 Add to MetaCart
In this article, we show that the same principle of independence maximization can explain the emergence of phase- and shift-invariant features, similar to those found in complex cells. This new kind of emergence is obtained by maximizing the independence between norms of projections on linear subspaces (instead of the independence of simple linear filter outputs). The norms of the projections on such "independent feature subspaces" then indicate the values of invariant features.
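A minimal sketch of the invariant-feature computation this abstract describes: filter outputs are grouped into subspaces, and the invariant feature is the norm of the projection onto each subspace (the filter matrix W and the grouping are assumed to have been learned already):

import numpy as np

def subspace_features(W, x, group_size=4):
    # W: (n_filters, n_inputs) learned filters; consecutive rows are grouped
    # in blocks of group_size, each block spanning one feature subspace.
    # x: (n_inputs,) a single input vector, e.g. an image patch.
    u = W @ x                             # simple linear filter outputs
    u = u.reshape(-1, group_size)         # one row per subspace
    return np.sqrt((u ** 2).sum(axis=1))  # norm of projection = invariant feature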
Blind source separation of more sources than mixtures using overcomplete representations
IEEE Sig. Proc. Lett., 1999
"... Abstract—Empirical results were obtained for the blind source separation of more sources than mixtures using a recently proposed framework for learning overcomplete representations. This technique assumes a linear mixing model with additive noise and involves two steps: 1) learning an overcomplete r ..."
Abstract

Cited by 100 (2 self)
 Add to MetaCart
Empirical results were obtained for the blind source separation of more sources than mixtures using a recently proposed framework for learning overcomplete representations. This technique assumes a linear mixing model with additive noise and involves two steps: 1) learning an overcomplete representation for the observed data and 2) inferring sources given a sparse prior on the coefficients. We demonstrate that three speech signals can be separated with good fidelity given only two mixtures of the three signals. Similar results were obtained with mixtures of two speech signals and one music signal. Index Terms: blind source separation, independent component analysis, overcomplete dictionary, overcomplete representation, speech signal separation.
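A hedged sketch of step 2 above, source inference under a sparse prior, assuming the overcomplete mixing matrix A has already been learned; L1-penalized least squares stands in for MAP inference with a Laplacian prior on the coefficients:

import numpy as np
from sklearn.linear_model import Lasso

def infer_sources(A, X, alpha=0.01):
    # A: (n_mixtures, n_sources) with n_sources > n_mixtures, e.g. (2, 3).
    # X: (n_mixtures, n_timepoints) observed mixtures.
    S = np.zeros((A.shape[1], X.shape[1]))
    solver = Lasso(alpha=alpha, fit_intercept=False)
    for t in range(X.shape[1]):
        S[:, t] = solver.fit(A, X[:, t]).coef_  # sparse s with A s ~ x_t
    return S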
Conditions for nonnegative independent component analysis
IEEE Signal Processing Letters, 2002
"... We consider the noiseless linear independent component analysis problem, in the case where the hidden sources s are nonnegative. We assume that the random variables s i s are wellgrounded in that they have a nonvanishing pdf in the (positive) neighbourhood of zero. For an orthonormal rotation y = ..."
Abstract

Cited by 63 (11 self)
 Add to MetaCart
We consider the noiseless linear independent component analysis problem, in the case where the hidden sources s are nonnegative. We assume that the random variables s_i are well-grounded, in that they have a nonvanishing pdf in the (positive) neighbourhood of zero. For an orthonormal rotation y = Wx of prewhitened observations x = QAs, under certain reasonable conditions we show that y is a permutation of the s (apart from a scaling factor) if and only if y is nonnegative with probability 1. We suggest that this may enable the construction of practical learning algorithms, particularly for sparse nonnegative sources.
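An illustrative two-dimensional sketch of the criterion above (not the paper's algorithm): scan orthonormal rotations of the prewhitened data and keep the one whose outputs have the smallest negative part, which by the stated result should recover a permutation of the sources:

import numpy as np

def nonneg_ica_2d(x):
    # x: (2, n_samples) prewhitened observations x = QAs (whitened without
    # centering, so that nonnegativity of the sources is not destroyed).
    best_theta, best_cost = 0.0, np.inf
    for theta in np.linspace(0.0, 2.0 * np.pi, 2000):
        c, s = np.cos(theta), np.sin(theta)
        y = np.array([[c, -s], [s, c]]) @ x      # orthonormal rotation y = W x
        cost = np.mean(np.minimum(y, 0.0) ** 2)  # penalize negative outputs
        if cost < best_cost:
            best_theta, best_cost = theta, cost
    c, s = np.cos(best_theta), np.sin(best_theta)
    return np.array([[c, -s], [s, c]])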
Independent component approach to the analysis of EEG and MEG recordings
IEEE Transactions on Biomedical Engineering, 2000
"... This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of Helsinki University of Technology's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this m ..."
Abstract

Cited by 57 (8 self)
 Add to MetaCart