Results 1–10 of 22
An information-maximization approach to blind separation and blind deconvolution
 NEURAL COMPUTATION
, 1995
"... ..."
Independent Component Analysis Using an Extended Infomax Algorithm for Mixed Sub-Gaussian and Super-Gaussian Sources
, 1999
"... An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able to blindly separate mixed signals with sub and superGaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a pro ..."
Abstract

Cited by 202 (21 self)
An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able to blindly separate mixed signals with sub- and super-Gaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability distributions that have sub- and super-Gaussian regimes were used to derive a general learning rule that preserves the simple architecture proposed by Bell and Sejnowski (1995), is optimized using the natural gradient by Amari (1998), and uses the stability analysis of Cardoso and Laheld (1996) to switch between sub- and super-Gaussian regimes. We demonstrate that the extended infomax algorithm is able to easily separate 20 sources with a variety of source distributions. Applied to high-dimensional data from electroencephalographic (EEG) recordings, it is effective at separating artifacts such as eye blinks and line noise from weaker electrical ...
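The sub/super-Gaussian switching rule described in this abstract can be sketched in a few lines. The following is a hedged illustration, not the authors' reference implementation: the tanh nonlinearity, natural-gradient form, and switching statistic follow the form commonly quoted for extended infomax, and all function and variable names are ours.

```python
import numpy as np

def extended_infomax_step(W, x, lr=0.05):
    """One natural-gradient step of an extended-infomax-style update (sketch).

    x: (n_sources, n_samples) whitened mixtures; W: current unmixing matrix.
    The diagonal switching matrix K picks the sub- or super-Gaussian
    nonlinearity per output via a stability-based criterion.
    """
    n, T = x.shape
    u = W @ x  # current source estimates
    # Switching statistic: +1 -> treat as super-Gaussian, -1 -> sub-Gaussian.
    k = np.sign(np.mean(1.0 / np.cosh(u) ** 2, axis=1) * np.mean(u ** 2, axis=1)
                - np.mean(u * np.tanh(u), axis=1))
    K = np.diag(k)
    # Natural-gradient update: dW = (I - K tanh(u) u^T / T - u u^T / T) W
    dW = (np.eye(n) - K @ (np.tanh(u) @ u.T) / T - (u @ u.T) / T) @ W
    return W + lr * dW
```

Iterating this step on whitened mixtures rotates W toward an unmixing solution; the sign vector k flips the nonlinearity per source, which is what lets one batch handle a mix of sub- and super-Gaussian distributions.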
Removing Electroencephalographic Artifacts: Comparison between ICA and PCA
, 1998
"... Pervasive electroencephalographic (EEG) artifacts associated with blinks, eyemovements, muscle noise, cardiac signals, and line noise poses a major challenge for EEG interpretation and analysis. Here, we propose a generally applicable method for removing a wide variety of artifacts from EEG records ..."
Abstract

Cited by 124 (20 self)
Pervasive electroencephalographic (EEG) artifacts associated with blinks, eye movements, muscle noise, cardiac signals, and line noise pose a major challenge for EEG interpretation and analysis. Here, we propose a generally applicable method for removing a wide variety of artifacts from EEG records based on an extended version of an Independent Component Analysis (ICA) algorithm [2, 12] for performing blind source separation on linear mixtures of independent source signals. Our results show that ICA can effectively separate and remove contamination from a wide variety of artifactual sources in EEG records, with results comparing favorably to those obtained using Principal Component Analysis. 1 INTRODUCTION Since the landmark development of electroencephalography (EEG) in 1928 by Berger, scalp EEG has been used as a clinical tool for the diagnosis and treatment of brain diseases, and as a non-invasive approach for research in the quantitative study of human neurophysiology. Ironic...
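The removal step itself is simple once an unmixing matrix has been estimated by any ICA algorithm: project the recordings onto components, zero the artifactual ones, and back-project. A minimal sketch, with names that are ours rather than the paper's:

```python
import numpy as np

def remove_components(x, W, artifact_idx):
    """Back-project data with selected ICA components zeroed out (sketch).

    x: (channels, samples) recordings; W: square unmixing matrix from any
    ICA algorithm; artifact_idx: indices of components judged artifactual
    (e.g. by inspecting their scalp maps and time courses).
    """
    u = W @ x                        # estimated component activations
    u[list(artifact_idx), :] = 0.0   # zero the artifact components
    A = np.linalg.inv(W)             # mixing matrix (columns = scalp maps)
    return A @ u                     # cleaned channel data
```

Because the back-projection uses the inverse of the same unmixing matrix, the contributions of the retained components reach the channels unchanged; only the zeroed components' contributions are subtracted.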
A Unifying Information-theoretic Framework for Independent Component Analysis
, 1999
"... We show that different theories recently proposed for Independent Component Analysis (ICA) lead to the same iterative learning algorithm for blind separation of mixed independent sources. We review those theories and suggest that information theory can be used to unify several lines of research. Pea ..."
Abstract

Cited by 82 (8 self)
We show that different theories recently proposed for Independent Component Analysis (ICA) lead to the same iterative learning algorithm for blind separation of mixed independent sources. We review those theories and suggest that information theory can be used to unify several lines of research. Pearlmutter and Parra (1996) and Cardoso (1997) showed that the infomax approach of Bell and Sejnowski (1995) and the maximum likelihood estimation approach are equivalent. We show that negentropy maximization also has equivalent properties, and therefore all three approaches yield the same learning rule for a fixed nonlinearity. Girolami and Fyfe (1997a) have shown that the nonlinear Principal Component Analysis (PCA) algorithm of Karhunen and Joutsensalo (1994) and Oja (1997) can also be viewed from information-theoretic principles, since it minimizes the sum of squares of the fourth-order marginal cumulants and therefore approximately minimizes the mutual information (Comon, 1994). Lambert (19...
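The shared learning rule that these equivalences point to can be stated compactly; the natural-gradient form below is a standard statement of the result, reproduced for orientation rather than quoted from the paper:

```latex
\Delta W \;\propto\; \bigl( I - \varphi(u)\, u^{T} \bigr)\, W,
\qquad
\varphi_i(u_i) \;=\; -\frac{\partial}{\partial u_i} \log p_i(u_i),
```

where $p_i$ is the assumed source density. Choosing the logistic density $p(u)=\sigma'(u)$ gives $\varphi(u)=2\sigma(u)-1$ and recovers the infomax rule, while matching $p_i$ to the true source densities gives the maximum likelihood rule; this is the sense in which all three approaches coincide for a fixed nonlinearity.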
Blind Separation of Delayed and Convolved Sources
, 1997
"... We address the difficult problem of separating multiple speakers with multiple microphones in a real room. We combine the work of Torkkola and Amari, Cichocki and Yang, to give Natural Gradient information maximisation rules for recurrent (IIR) networks, blindly adjusting delays, separating and deco ..."
Abstract

Cited by 63 (1 self)
We address the difficult problem of separating multiple speakers with multiple microphones in a real room. We combine the work of Torkkola and of Amari, Cichocki and Yang to give Natural Gradient information maximisation rules for recurrent (IIR) networks, blindly adjusting delays, separating and deconvolving mixed signals. While they work well on simulated data, these rules fail in real rooms, which usually involve non-minimum phase transfer functions that are not invertible using stable IIR filters. An approach that sidesteps this problem is to perform infomax on a feedforward architecture in the frequency domain (Lambert 1996). We demonstrate real-room separation of two natural signals using this approach. 1 The problem. In the linear blind signal processing problem ([3, 2] and references therein), N signals, s(t) = [s_1(t), ..., s_N(t)]^T, are transmitted through a medium so that an array of N sensors picks up a set of signals x(t) = [x_1(t), ..., x_N(t)]^T, each of which has bee...
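For orientation, the convolutive generalization of this mixing model (standard notation, not quoted from the paper) replaces the scalar mixing coefficients with transfer functions:

```latex
x_i(t) \;=\; \sum_{j=1}^{N} \sum_{\tau \ge 0} a_{ij}(\tau)\, s_j(t-\tau),
\qquad i = 1, \dots, N .
```

Separation then requires (approximately) inverting the matrix of transfer functions $A(z)$. When $A(z)$ is non-minimum phase, a stable inverse exists only as a non-causal filter, which is why the feedforward frequency-domain route sketched in the abstract is attractive.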
Blind Source Separation of Real World Signals
 Proc. ICNN
, 1997
"... We present a method to separate and deconvolve sources which have been recorded in real environments. The use of noncausal FIR filters allows us to deal with nonminimum mixing systems. The learning rules can be derived from different viewpoints such as information maximization, maximum likelihood an ..."
Abstract

Cited by 53 (8 self)
We present a method to separate and deconvolve sources which have been recorded in real environments. The use of non-causal FIR filters allows us to deal with non-minimum-phase mixing systems. The learning rules can be derived from different viewpoints such as information maximization, maximum likelihood and negentropy, which result in similar rules for the weight update. We transform the learning rule into the frequency domain, where the convolution and deconvolution property becomes a multiplication and division operation. In particular, the FIR polynomial algebra techniques as used by Lambert present an efficient tool to solve true phase inverse systems, allowing a simple implementation of non-causal filters. The significance of the methods is shown by the successful separation of two voices and separating a voice that has been recorded with loud music in the background. The recognition rate of an automatic speech recognition system is increased after separating the speech signals. 1 Introduct...
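The frequency-domain identity the abstract relies on — time-domain convolution of FIR filters becomes pointwise multiplication of their frequency responses (and deconvolution becomes division) — can be checked with a short sketch. The helper name and FFT length are our choices:

```python
import numpy as np

def fir_freq_multiply(h, g, n_fft=256):
    """Compose two FIR filters by multiplying their frequency responses (sketch).

    With n_fft >= len(h) + len(g) - 1, circular convolution equals linear
    convolution, so this reproduces np.convolve(h, g). Dividing instead of
    multiplying the responses likewise implements deconvolution, which is
    the basis of the FIR polynomial algebra mentioned above.
    """
    H = np.fft.fft(h, n_fft)
    G = np.fft.fft(g, n_fft)
    return np.real(np.fft.ifft(H * G))[: len(h) + len(g) - 1]
```

This is why moving the learning rule into the frequency domain turns filter composition and inversion into cheap elementwise operations per frequency bin.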
Survey of Sparse and Non-Sparse Methods in Source Separation
, 2005
"... Source separation arises in a variety of signal processing applications, ranging from speech processing to medical image analysis. The separation of a superposition of multiple signals is accomplished by taking into account the structure of the mixing process and by making assumptions about the sour ..."
Abstract

Cited by 35 (1 self)
Source separation arises in a variety of signal processing applications, ranging from speech processing to medical image analysis. The separation of a superposition of multiple signals is accomplished by taking into account the structure of the mixing process and by making assumptions about the sources. When the information about the mixing process and sources is limited, the problem is called ‘blind’. By assuming that the sources can be represented sparsely in a given basis, recent research has demonstrated that solutions to previously problematic blind source separation problems can be obtained. In some cases, solutions are possible to problems intractable by previous non-sparse methods. Indeed, sparse methods provide a powerful approach to the separation of linear mixtures of independent data. This paper surveys the recent arrival of sparse blind source separation methods and the previously existing non-sparse methods, providing insights and appropriate hooks into the literature along the way.
Factorial coding of natural images: how effective are linear models in removing higher-order dependencies?
 JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A
, 2006
"... The performance of unsupervised learning models for natural images is evaluated quantitatively by means of information theory. We estimate the gain in statistical independence (the multiinformation reduction) achieved with independent component analysis (ICA), principal component analysis (PCA), z ..."
Abstract

Cited by 21 (6 self)
The performance of unsupervised learning models for natural images is evaluated quantitatively by means of information theory. We estimate the gain in statistical independence (the multi-information reduction) achieved with independent component analysis (ICA), principal component analysis (PCA), zero-phase whitening, and predictive coding. Predictive coding is translated into the transform coding framework, where it can be characterized by the constraint of a triangular filter matrix. A randomly sampled whitening basis and the Haar wavelet are included in the comparison as well. The comparison of all these methods is carried out for different patch sizes, ranging from 2x2 to 16x16 pixels. In spite of large differences in the shape of the basis functions, we find only small differences in the multi-information between all decorrelation transforms (5% or less) for all patch sizes. Among the second-order methods, PCA is optimal for small patch sizes and predictive coding performs best for large patch sizes. The extra gain achieved with ICA is always less than 2%. In conclusion, the 'edge filters' found with ICA lead to only a surprisingly small improvement in terms of their actual objective.
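The quantity being compared here, stated in standard form rather than quoted from the paper, is the multi-information of a random vector:

```latex
I(\mathbf{y}) \;=\; \sum_i H(y_i) \;-\; H(\mathbf{y})
\;=\; D_{\mathrm{KL}}\!\Bigl( p(\mathbf{y}) \,\Bigm\|\, \textstyle\prod_i p(y_i) \Bigr),
```

and for an invertible linear transform $\mathbf{y} = W\mathbf{x}$, using $H(\mathbf{y}) = H(\mathbf{x}) + \log\lvert\det W\rvert$, the reduction is

```latex
I(\mathbf{x}) - I(\mathbf{y})
\;=\; \sum_i H(x_i) \;-\; \sum_i H(y_i) \;+\; \log \lvert \det W \rvert ,
```

so only marginal entropies (plus the log-determinant) need to be estimated to compare the transforms; the intractable joint entropy cancels.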
Fast Blind Separation Based on Information Theory.
 in Proc. Intern. Symp. on Nonlinear Theory and Applications (NOLTA), Las Vegas
, 1995
"... Blind separation is an information theoretic problem, and we have proposed an information theoretic `sigmoidbased' solution [2]. Here we elaborate on several aspects of that solution. Firstly, we argue that the separation matrix may be exactly found by maximising the joint entropy of the random vec ..."
Abstract

Cited by 17 (5 self)
Blind separation is an information theoretic problem, and we have proposed an information theoretic 'sigmoid-based' solution [2]. Here we elaborate on several aspects of that solution. Firstly, we argue that the separation matrix may be exactly found by maximising the joint entropy of the random vector resulting from a linear transformation of the mixtures followed by sigmoidal nonlinearities which are the cumulative distribution functions of the 'unknown' sources. Secondly, we present the learning rule for performing this maximisation. Thirdly, we discuss the role of prior knowledge of the c.d.f.'s of the sources in customising the learning rule. We argue that sigmoid-based methods are better able to make use of this prior knowledge than cumulant-based methods, because the optimal nonlinearity they should use is just an estimate of the source c.d.f. We also suggest that they may have the edge in terms of robustness and speed of convergence. Improvements in convergence speed have been fac...
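The learning rule for this joint-entropy maximisation can be sketched for the logistic-sigmoid case. This is the standard infomax gradient in our own notation and names (including the matrix-inverse term that the natural-gradient variant later removes), not the paper's code:

```python
import numpy as np

def infomax_step(W, x, lr=0.01):
    """One gradient-ascent step on the joint entropy of y = sigmoid(Wx) (sketch).

    Assumes the logistic sigmoid as the source c.d.f. The objective
    (up to the constant H(x)) is log|det W| + mean_t sum_i log sigma'(u_i),
    whose gradient is (W^T)^{-1} + E[(1 - 2y) x^T].
    """
    u = W @ x
    y = 1.0 / (1.0 + np.exp(-u))   # sigmoid outputs
    T = x.shape[1]
    dW = np.linalg.inv(W.T) + ((1.0 - 2.0 * y) @ x.T) / T
    return W + lr * dW
```

Replacing the sigmoid by an estimate of the true source c.d.f. is exactly the customisation by prior knowledge that the abstract argues for.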
Efficient Source Adaptivity in Independent Component Analysis
, 2001
"... A basic element in most ICA algorithms is the choice of a model for the score functions of the unknown sources. While this is usually based on approximations, for large data sets it is possible to achieve `source adaptivity' by directly estimating from the data the `true' score functions of the sour ..."
Abstract

Cited by 14 (1 self)
A basic element in most ICA algorithms is the choice of a model for the score functions of the unknown sources. While this is usually based on approximations, for large data sets it is possible to achieve 'source adaptivity' by directly estimating from the data the 'true' score functions of the sources. In this paper we describe an efficient scheme for achieving this by extending the fast density estimation method of Silverman (1982). We show with a real and a synthetic experiment that our method can provide more accurate solutions than state-of-the-art methods when optimization is carried out in the vicinity of the global minimum of the contrast function.
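The score function φ = -p'/p can be estimated directly from data with a plain Gaussian kernel density estimate. The sketch below is a naive O(N²) version with names and bandwidth of our choosing; the binned fast method of Silverman that the paper extends is omitted:

```python
import numpy as np

def kde_score(u, h=0.3):
    """Estimate the score function -p'(u)/p(u) of a 1-D sample at its own
    points, using a Gaussian kernel density estimate with bandwidth h (sketch).
    """
    diff = u[:, None] - u[None, :]          # pairwise differences u_i - u_j
    K = np.exp(-0.5 * (diff / h) ** 2)      # Gaussian kernel values
    # KDE density and its derivative at each sample point
    # (the 1/(h*sqrt(2*pi)) normalization cancels in the ratio below)
    p = K.mean(axis=1)
    dp = (-(diff / h ** 2) * K).mean(axis=1)
    return -dp / p
```

Plugging such a data-driven score estimate into a learning rule in place of a fixed nonlinearity is the 'source adaptivity' the abstract refers to; for a Gaussian sample the estimate should approximate the true score φ(u) = u, with some shrinkage from the kernel smoothing.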