Results 1–10 of 306
Independent component analysis: algorithms and applications
Neural Networks, 2000
Blind Signal Separation: Statistical Principles
2003
Cited by 529 (4 self)
Abstract: Blind signal separation (BSS) and independent component analysis (ICA) are emerging techniques of array processing and data analysis, aiming at recovering unobserved signals or 'sources' from observed mixtures (typically, the output of an array of sensors), exploiting only the assumption of mutual independence between the signals. The weakness of the assumptions makes it a powerful approach but requires venturing beyond familiar second-order statistics. The objective of this paper is to review some of the approaches that have been recently developed to address this exciting problem, to show how they stem from basic principles, and how they relate to each other.
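The abstract's point about going "beyond second-order statistics" can be illustrated with a minimal numpy sketch: whitening removes all second-order structure, after which a higher-order statistic (here, kurtosis) must pick out the separating rotation. This is an illustrative toy, not any specific algorithm from the paper; all variable names and parameter values are our own choices.

```python
# Toy blind source separation: whiten two mixtures, then search over
# rotations for the one maximizing total |excess kurtosis|.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# Two independent non-Gaussian sources: uniform (sub-Gaussian), Laplacian (super-Gaussian).
S = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0, 1, n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # "unknown" mixing matrix
X = A @ S                                 # observed sensor mixtures

# Second-order step: centre and whiten (decorrelate, unit variance).
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = (E / np.sqrt(d)) @ E.T @ Xc           # ZCA-whitened data

def rotation(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

def total_kurt(t):
    # Components of rotation(t) @ Z stay unit-variance, so mean(y^4) - 3
    # is the excess kurtosis; independence maximizes the total magnitude.
    Y = rotation(t) @ Z
    return np.abs(np.mean(Y**4, axis=1) - 3.0).sum()

# Higher-order step: whitening leaves a rotation undetermined, so scan it.
best = max(np.linspace(0, np.pi / 2, 400), key=total_kurt)
Y = rotation(best) @ Z                    # recovered sources (up to sign/order/scale)
C = np.corrcoef(np.vstack([Y, S]))[:2, 2:]  # correlation with the true sources
```

Each recovered component should correlate almost perfectly with one true source; the residual sign, ordering, and scale ambiguities are inherent to the BSS problem.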
Face recognition by independent component analysis
IEEE Transactions on Neural Networks, 2002
Cited by 348 (5 self)
Abstract: A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise relationships between pixels in the image database. In a task such as face recognition, in which important information may be contained in the high-order relationships among pixels, it seems reasonable to expect that better basis images may be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method. We used a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET database under two different architectures, one which treated the images as random variables and the pixels as outcomes, and a second which treated the pixels as random variables and the images as outcomes. The first architecture found spatially local basis images for the faces. The second architecture produced a factorial face code. Both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression. A classifier that combined the two ICA representations gave the best performance.
Index Terms: Eigenfaces, face recognition, independent component analysis (ICA), principal component analysis (PCA), unsupervised learning.
Independent Component Analysis Using an Extended Infomax Algorithm for Mixed Subgaussian and Supergaussian Sources
1999
Cited by 314 (22 self)
Abstract: An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able blindly to separate mixed signals with sub- and super-Gaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability distributions that have sub- and super-Gaussian regimes were used to derive a general learning rule that preserves the simple architecture proposed by Bell and Sejnowski (1995), is optimized using the natural gradient by Amari (1998), and uses the stability analysis of Cardoso and Laheld (1996) to switch between sub- and super-Gaussian regimes. We demonstrate that the extended infomax algorithm is able to separate 20 sources with a variety of source distributions easily. Applied to high-dimensional data from electroencephalographic recordings, it is effective at separating artifacts such as eye blinks and line noise from weaker electrical signals that arise from sources in the brain.
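The learning rule the abstract describes can be sketched in a simplified batch form: a natural-gradient infomax update whose nonlinearity sign switches per component between the sub- and super-Gaussian regimes. The switching here uses sample excess kurtosis rather than the paper's full stability criterion, and the mixing matrix and learning-rate values are arbitrary choices for the demo.

```python
# Simplified batch sketch of an extended-infomax-style update:
#   dW = lr * (I - diag(k) tanh(U) U^T / n - U U^T / n) W,  k_i = +/-1
import numpy as np

rng = np.random.default_rng(2)
n = 5000
# One sub-Gaussian (uniform) and one super-Gaussian (Laplacian) source, unit variance.
S = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0, 1, n)])
S = (S - S.mean(axis=1, keepdims=True)) / S.std(axis=1, keepdims=True)
A = np.array([[1.0, 0.5], [0.3, 1.0]])   # toy mixing matrix (our choice)
X = A @ S

W = np.eye(2)                             # unmixing matrix estimate
lr = 0.05
for _ in range(1000):
    U = W @ X
    # Regime switch: k_i = +1 for super-Gaussian, -1 for sub-Gaussian
    # components, estimated from the sample excess kurtosis.
    k = np.sign(np.mean(U**4, axis=1) - 3.0 * np.mean(U**2, axis=1) ** 2)
    # Natural-gradient update (the trailing @ W is the natural-gradient factor).
    grad = np.eye(2) - (np.diag(k) @ np.tanh(U) @ U.T + U @ U.T) / n
    W = W + lr * grad @ W

P = W @ A   # performance matrix: near a scaled permutation if separation worked
```

The natural-gradient form (postmultiplying by W) avoids the matrix inversion of the raw gradient and keeps the update equivariant to the mixing.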
Removing Electroencephalographic Artifacts: Comparison between ICA and PCA
1998
Cited by 240 (22 self)
Abstract: Pervasive electroencephalographic (EEG) artifacts associated with blinks, eye movements, muscle noise, cardiac signals, and line noise pose a major challenge for EEG interpretation and analysis. Here, we propose a generally applicable method for removing a wide variety of artifacts from EEG records based on an extended version of an Independent Component Analysis (ICA) algorithm [2, 12] for performing blind source separation on linear mixtures of independent source signals. Our results show that ICA can effectively separate and remove contamination from a wide variety of artifactual sources in EEG records, with results comparing favorably to those obtained using Principal Component Analysis.
1 Introduction: Since the landmark development of electroencephalography (EEG) in 1928 by Berger, scalp EEG has been used as a clinical tool for the diagnosis and treatment of brain diseases, and as a noninvasive approach for research in the quantitative study of human neurophysiology. Ironic...
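The removal step this abstract refers to is a back-projection: once the data are decomposed as X = A S, artifact components are zeroed and the remaining components are projected back to the channels. The sketch below shows only that step; the mixing matrix, component labels, and signal shapes are toy stand-ins, not a fitted ICA model or the paper's data.

```python
# Back-projection step of ICA artifact rejection on a toy two-channel record.
import numpy as np

t = np.linspace(0, 2, 1000)
brain = np.sin(2 * np.pi * 10 * t)        # 10 Hz "brain" rhythm
blink = np.zeros(t.size)
blink[::100] = 50.0                        # sparse, high-amplitude "blink" spikes
S = np.vstack([brain, blink])              # component activations (rows)
A = np.array([[1.0, 0.8], [0.7, 0.2]])     # toy channel mixing matrix
X = A @ S                                  # two-channel "EEG", blink-contaminated

artifact_idx = [1]                         # components judged artifactual
keep = [i for i in range(S.shape[0]) if i not in artifact_idx]
X_clean = A[:, keep] @ S[keep, :]          # back-project only the kept components
```

In practice A and S come from the ICA fit itself and component labeling is the hard part; the algebra of the cleanup, however, is exactly this rank-reduced reconstruction.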
Independent Component Representations for Face Recognition
Cited by 136 (9 self)
Abstract: In a task such as face recognition, much of the important information may be contained in the high-order relationships among the image pixels. A number of face recognition algorithms employ principal component analysis (PCA), which is based on the second-order statistics of the image set and does not address high-order statistical dependencies such as the relationships among three or more pixels. Independent component analysis (ICA) is a generalization of PCA which separates the high-order moments of the input in addition to the second-order moments. ICA was performed on a set of face images by an unsupervised learning algorithm derived from the principle of optimal information transfer through sigmoidal neurons. The algorithm maximizes the mutual information between the input and the output, which produces statistically independent outputs under certain conditions. ICA was performed on the face images under two different architectures. The first architecture provided a statistica...
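The second-order versus high-order distinction drawn in this abstract can be made concrete with a toy numpy check (our own example, not from the paper): two signals can be exactly uncorrelated, and hence invisible to PCA's pairwise statistics, while remaining strongly dependent through a fourth-order relationship.

```python
# Uncorrelated but dependent: second-order stats see nothing,
# a fourth-order cross-statistic sees the dependence clearly.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)
y = x**2 - 1.0                      # fully determined by x, yet E[xy] = 0

corr = np.corrcoef(x, y)[0, 1]      # second-order view: ~0
# Fourth-order cross-statistic cov(x^2, y): nonzero, exposing the dependence.
dep = np.mean(x**2 * y) - np.mean(x**2) * np.mean(y)
```

For a zero-mean Gaussian x, cov(x^2, x^2 - 1) = var(x^2) = 2, so `dep` lands near 2 while `corr` stays near 0, which is exactly the kind of structure second-order methods cannot exploit.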
Mining event-related brain dynamics
Trends in Cognitive Sciences, 2004
Cited by 130 (21 self)
Abstract: This article provides a new, more comprehensive view of event-related brain dynamics founded on an information-based approach to modeling electroencephalographic (EEG) dynamics. Most EEG research focuses either on peaks 'evoked' in average event-related potentials (ERPs) or on changes 'induced' in the EEG power spectrum by experimental events. Although these measures are nearly complementary, they do not fully model the event-related dynamics in the data, and cannot isolate the signals of the contributing cortical areas. We propose that many ERPs and other EEG features are better viewed as time/frequency perturbations of underlying field potential processes. The new approach combines independent component analysis (ICA), time/frequency analysis, and trial-by-trial visualization that measures EEG source dynamics without requiring an explicit head model. Scalp EEG signals are produced by partial synchronization of neuronal-scale field potentials across areas of cortex of centimetre-squared scale. Although once viewed by some as a form of brain 'noise', it appears increasingly probable that this synchronization optimizes relations between spike-mediated 'top-down' and 'bottom-up' communication, both within and between brain areas. This optimization might have particular importance during motivated anticipation of, and attention to, meaningful events and associations and in response to their anticipated consequences [1-3]. This new view of cortical and scalp-recorded field dynamics requires a new data analysis approach. Here, we suggest how a combination of signal processing and visualization methods can give a more adequate model of the spatially distributed event-related EEG dynamics that support cognitive events. Traditional analysis of event-related EEG data proceeds in one of two directions. In the time-domain approach, researchers average a set of data trials or epochs time-locked to some class of events, yielding an ERP waveform at each data channel. The frequency-domain approach averages changes in the frequency power spectrum of the whole EEG data time-locked to the same events, producing a two-dimensional image that we call the event-related spectral perturbation (ERSP; see Box 1). Neither ERP nor ERSP measures of event-related data fully model their dynamics. Imagine, by analogy, a snapshot of a seashore view created by averaging together a large number of snapshots taken at different times. This average snapshot would not show the waves! Similarly, ERP averaging filters out most of the EEG data, leaving only a small portion phase-locked to the time-locking events (see Box 1). The ERP and ERSP are nearly complementary. Oscillatory (ERSP) changes 'induced' by experimental events can be poorly represented in, or completely absent from, the time-domain features of the ERP 'evoked' by the same events.
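The ERP-versus-ERSP distinction above can be demonstrated with a toy simulation (mean spectral power standing in for the full time/frequency ERSP; all waveform shapes and parameters are arbitrary demo choices): an 'induced' oscillation with random phase across trials cancels in the time-domain average but survives in average power.

```python
# Toy 'evoked vs induced' demo: ERP average keeps the phase-locked peak,
# while the random-phase oscillation only shows up in mean spectral power.
import numpy as np

rng = np.random.default_rng(4)
fs, n_trials = 250, 200
t = np.arange(0, 1, 1 / fs)                    # 1 s epochs, 1 Hz frequency bins
evoked = np.exp(-((t - 0.3) ** 2) / 0.002)     # phase-locked 'ERP' peak at 0.3 s

trials = np.empty((n_trials, t.size))
for k in range(n_trials):
    phase = rng.uniform(0, 2 * np.pi)          # random phase -> 'induced' activity
    induced = np.sin(2 * np.pi * 20 * t + phase)
    trials[k] = evoked + induced + 0.1 * rng.standard_normal(t.size)

erp = trials.mean(axis=0)                      # time-domain average across trials
mean_power = (np.abs(np.fft.rfft(trials, axis=1)) ** 2).mean(axis=0)
erp_power_20hz = np.abs(np.fft.rfft(erp)[20]) ** 2   # 20 Hz content left in the ERP
```

The 20 Hz bin of `mean_power` dwarfs both the neighboring bins and the 20 Hz power remaining in the ERP, while the evoked peak near 0.3 s survives the averaging; this is the "average snapshot would not show the waves" point in miniature.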
A Unifying Information-theoretic Framework for Independent Component Analysis
1999
Cited by 109 (8 self)
Abstract: We show that different theories recently proposed for Independent Component Analysis (ICA) lead to the same iterative learning algorithm for blind separation of mixed independent sources. We review those theories and suggest that information theory can be used to unify several lines of research. Pearlmutter and Parra (1996) and Cardoso (1997) showed that the infomax approach of Bell and Sejnowski (1995) and the maximum likelihood estimation approach are equivalent. We show that negentropy maximization also has equivalent properties, and therefore all three approaches yield the same learning rule for a fixed nonlinearity. Girolami and Fyfe (1997a) have shown that the nonlinear Principal Component Analysis (PCA) algorithm of Karhunen and Joutsensalo (1994) and Oja (1997) can also be viewed from information-theoretic principles, since it minimizes the sum of squares of the fourth-order marginal cumulants and therefore approximately minimizes the mutual information (Comon, 1994). Lambert (19...
A Context-Sensitive Generalization of ICA
1996
Cited by 104 (10 self)
Abstract: Source separation arises in a surprising number of signal processing applications, from speech recognition to EEG analysis. In the square linear blind source separation problem without time delays, one must find an unmixing matrix which can detangle the result of mixing n unknown independent sources through an unknown n × n mixing matrix. The recently introduced ICA blind source separation algorithm (Baram and Roth 1994; Bell and Sejnowski 1995) is a powerful and surprisingly simple technique for solving this problem. ICA is all the more remarkable for performing so well despite making absolutely no use of the temporal structure of its input! This paper presents a new algorithm, contextual ICA, which derives from a maximum likelihood density estimation formulation of the problem. cICA can incorporate arbitrarily complex adaptive history-sensitive source models, and thereby make use of the temporal structure of its input. This allows it to separate in a number of situations where s...