Results 1–10 of 44
An Information-Maximization Approach to Blind Separation and Blind Deconvolution
NEURAL COMPUTATION, 1995
Blind Separation of Convolved Sources Based on Information Maximization
IN IEEE WORKSHOP ON NEURAL NETWORKS FOR SIGNAL PROCESSING, 1996
Cited by 93 (1 self)
Abstract: Blind separation of independent sources from their convolutive mixtures is a problem in many real-world multisensor applications. In this paper we present a solution to this problem based on the information-maximization principle, which was recently proposed by Bell and Sejnowski for the case of blind separation of instantaneous mixtures. We present a feedback network architecture capable of coping with convolutive mixtures, and we derive the adaptation equations for the adaptive filters in the network by maximizing the information transferred through the network. Examples using speech signals are presented to illustrate the algorithm.
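The instantaneous-mixture rule that this work builds on can be sketched as follows; the tanh nonlinearity, batch form, and natural-gradient variant are assumptions of this sketch, and the convolutive extension with adaptive filters described in the abstract is not implemented here.

```python
import numpy as np

def infomax_step(W, x, lr=0.05):
    """One Bell-Sejnowski-style infomax update for *instantaneous* mixtures,
    in natural-gradient form with a tanh nonlinearity (suited to
    super-Gaussian sources such as speech). Illustrative sketch only.

    W : (n, n) current unmixing matrix
    x : (n, T) batch of mixed observations
    """
    n, T = x.shape
    u = W @ x                 # current source estimates
    y = np.tanh(u)            # nonlinearity acting as a score function
    # natural-gradient infomax rule: dW = (I - (2/T) y u^T) W
    dW = (np.eye(n) - (2.0 / T) * (y @ u.T)) @ W
    return W + lr * dW
```

Iterated over a batch of mixtures, W moves toward a scaled permutation of the inverse mixing matrix.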
Nonlinear Independent Component Analysis: Existence and Uniqueness Results
Neural Networks, 1999
Cited by 85 (4 self)
Abstract: The question of the existence and uniqueness of solutions for nonlinear independent component analysis is addressed. It is shown that if the space of mixing functions is not limited, there always exists an infinity of solutions. In particular, it is shown how to construct parameterized families of solutions. The indeterminacies involved are not trivial, as they are in the linear case. Next, it is shown how to utilize some results of complex analysis to obtain uniqueness of solutions. We show that for two dimensions the solution is unique up to a rotation, if the mixing function is constrained to be a conformal mapping, together with some other assumptions. We also conjecture that the solution is strictly unique except in some degenerate cases, since the indeterminacy implied by the rotation is essentially similar to that of estimating the model of linear independent component analysis.
Multichannel Blind Deconvolution: FIR Matrix Algebra and Separation of Multipath Mixtures
1996
Cited by 74 (0 self)
Abstract: A general tool for multichannel and multipath problems is given in FIR matrix algebra. With finite impulse response (FIR) filters (or polynomials) assuming the role played by complex scalars in traditional matrix algebra, we adapt standard eigenvalue routines, factorizations, decompositions, and matrix algorithms for use in multichannel/multipath problems. Using abstract algebra and group-theoretic concepts, information-theoretic principles, and the Bussgang property, methods of single-channel filtering and source separation of multipath mixtures are merged into a general FIR matrix framework. Techniques developed for equalization may be applied to source separation, and vice versa. Potential applications of these results lie in neural networks with feedforward memory connections, wideband array processing, and in problems with a multi-input, multi-output network having channels between each source and sensor, such as source separation. Particular applications of FIR polynomial matrix alg...
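The central idea, FIR filters taking the place of scalar matrix entries, can be sketched with a minimal product routine; the (rows, cols, taps) storage convention is my own, not the paper's.

```python
import numpy as np

def fir_matmul(A, B):
    """Product of two 'FIR matrices': each entry is an FIR filter
    (a polynomial in z^-1) stored along the last axis, so the entrywise
    scalar products of ordinary matrix multiplication become convolutions.

    A : (m, k, La) array of filter taps
    B : (k, n, Lb) array of filter taps
    returns C : (m, n, La + Lb - 1)
    """
    m, k, La = A.shape
    k2, n, Lb = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n, La + Lb - 1))
    for i in range(m):
        for j in range(n):
            for p in range(k):
                # scalar multiply-accumulate -> convolve-accumulate
                C[i, j] += np.convolve(A[i, p], B[p, j])
    return C
```

With single-tap entries (La = Lb = 1) the routine reduces to ordinary matrix multiplication, which is a useful sanity check.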
A First Application of Independent Component Analysis to Extracting Structure from Stock Returns
INTERNATIONAL JOURNAL OF NEURAL SYSTEMS, 1997
Cited by 57 (1 self)
Abstract: This paper discusses the application of a modern signal processing technique known as independent component analysis (ICA), or blind source separation, to multivariate financial time series such as a portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). This can be viewed as a factorization of the portfolio, since joint probabilities become simple products in the coordinate system of the ICs. We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories: (i) infrequent but large shocks (responsible for the major changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly well using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar to the original one. Independent component analysis is a potentially powerful method of analyzing and understanding driving mechanisms in financial markets. There are further ...
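The reconstruction-from-shocks idea can be sketched as follows; the per-component k-standard-deviation threshold is an illustrative assumption, not necessarily the paper's exact rule, and A and S stand for an already-estimated mixing matrix and its ICs.

```python
import numpy as np

def reconstruct_from_shocks(A, S, k=2.0):
    """Rebuild a multivariate return series from only the large 'shocks'
    of its independent components, in the spirit of the thresholded
    weighted ICs described above.

    A : (n, m) estimated mixing matrix
    S : (m, T) estimated independent components over time
    k : threshold in per-component standard deviations (assumed rule)
    """
    sigma = S.std(axis=1, keepdims=True)
    S_shocks = np.where(np.abs(S) > k * sigma, S, 0.0)  # keep only big shocks
    return A @ S_shocks
```

Cumulatively summing the reconstructed returns gives the price path implied by the large shocks alone.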
Neural Approaches to Independent Component Analysis and Source Separation
1996
Cited by 56 (9 self)
Abstract: Independent Component Analysis (ICA) is a recently developed technique that in many cases characterizes the data in a natural way. The main application area of the linear ICA model is blind source separation: unknown source signals are estimated from their unknown linear mixtures using the strong assumption that the sources are mutually independent. In practice, separation can be achieved by using suitable higher-order statistics or nonlinearities. Various neural approaches have recently been proposed for blind source separation and ICA. In this paper, these approaches and the respective learning algorithms are briefly reviewed, and some extensions of the basic ICA model are discussed.

1. Introduction. A recent trend in neural network research is to study various forms of unsupervised learning beyond standard Principal Component Analysis (PCA). Such techniques are often called nonlinear PCA methods. They can be developed from various starting points, usually leading to different ...
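One concrete instance of the "suitable nonlinearities" the review mentions is the nonlinear PCA subspace rule; the tanh choice and the per-sample form below are assumptions of this sketch, and the input is assumed prewhitened.

```python
import numpy as np

def nonlinear_pca_step(W, x, lr=0.01, g=np.tanh):
    """One per-sample step of a nonlinear PCA subspace rule
    (Oja/Karhunen style): a Hebbian update driven by the reconstruction
    error, with the nonlinearity g introducing higher-order statistics.

    W : (m, n) weight matrix
    x : (n,) one whitened observation
    """
    y = g(W @ x)                    # nonlinear outputs
    e = x - W.T @ y                 # reconstruction error in input space
    return W + lr * np.outer(y, e)  # Hebbian update on the error
```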
Information-Theoretic Approach to Blind Separation of Sources in Nonlinear Mixture
1998
Cited by 43 (4 self)
Abstract: The linear mixture model is assumed in most of the papers devoted to blind separation; a more realistic mixture model, however, is nonlinear. In this paper, a two-layer perceptron is used as a demixing system to separate sources in a nonlinear mixture. The learning algorithms for the demixing system are derived by two approaches, maximum entropy and minimum mutual information, and the algorithms derived from the two approaches share a common structure. The new learning equations for the hidden layer differ from those for the output layer. The natural gradient descent method is applied in maximizing entropy and minimizing mutual information, and an information (entropy or mutual information) backpropagation method is proposed to derive the learning equations for the hidden layer.
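For reference, the natural-gradient update that both derivations share takes, for the output layer, the standard form below (notation mine; the paper's hidden-layer equations differ, as the abstract notes):

```latex
\Delta W \;=\; \eta\,\bigl(I - \varphi(\mathbf{y})\,\mathbf{y}^{\top}\bigr)\,W,
\qquad
\varphi_i(y_i) \;=\; -\,\frac{d}{dy_i}\,\log p_i(y_i),
```

where $p_i$ is the density assumed for the $i$-th source and $\mathbf{y}$ is the output vector.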
Redundancy Reduction and Independent Component Analysis: Conditions on Cumulants and Adaptive Approaches
1997
Cited by 32 (8 self)
Abstract: In the context of both sensory coding and signal processing, building factorized codes has been shown to be an efficient strategy. In a wide variety of situations, the signal to be processed is a linear mixture of statistically independent sources, and building a factorized code is then equivalent to performing blind source separation. Thanks to the linear structure of the data, this can be done, in the language of signal processing, by finding an appropriate linear filter, or equivalently, in the language of neural modeling, by using a simple feedforward neural network. In this paper we discuss several aspects of the source separation problem. We give simple conditions on the network output which, if satisfied, guarantee that source separation has been obtained. We then study adaptive approaches, in particular those based on redundancy reduction and maximisation of mutual information, and show how the resulting updating rules are related to the BCM theory of synaptic plasticity. Eventually ...
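A typical example of such an output condition in this family of results (my formulation, for illustration; not necessarily the paper's exact statement) combines second-order decorrelation with vanishing fourth-order cross-cumulants:

```latex
\mathbb{E}[y_i y_j] \;=\; \delta_{ij},
\qquad
\operatorname{cum}(y_i, y_i, y_i, y_j) \;=\; 0 \quad (i \neq j),
```

which, when at most one source is Gaussian, constrains the overall source-to-output transform to a scaled permutation.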
Blind Separation of Delayed Sources Based on Information Maximization
1996
Cited by 32 (1 self)
Abstract: Recently, Bell and Sejnowski have presented an approach to blind source separation based on the information maximization principle. We extend this approach to more general cases where the sources may have been delayed with respect to each other. We present a network architecture capable of coping with such sources, and we derive the adaptation equations for the delays and the weights in the network by maximizing the information transferred through the network. Examples using wideband sources such as speech are presented to illustrate the algorithm.
Signal Separation by Nonlinear Hebbian Learning
1995
Cited by 31 (1 self)
Abstract: In this paper, we introduce a neural network that can be used for both source separation and the estimation of the basis vectors of ICA. The remainder of the paper is organized as follows. The next section presents the necessary background on ICA and source separation. In the third section, we introduce and justify the basic neural network learning algorithms for signal separation. The fourth section provides mathematical analysis justifying the separation ability of the nonlinear PCA-type learning algorithm. The fifth section introduces the ICA neural network, a three-layer network whose layers perform input data whitening, separation, and ICA basis vector estimation, respectively. In the sixth section, we present experimental results. In the last section, the conclusions of this study are presented, and some possibilities for extending the data model are outlined.