Independent Factor Analysis
Neural Computation, 1999
"... We introduce the independent factor analysis (IFA) method for recovering independent hidden sources from their observed mixtures. IFA generalizes and unifies ordinary factor analysis (FA), principal component analysis (PCA), and independent component analysis (ICA), and can handle not only square no ..."
Abstract

Cited by 219 (9 self)
 Add to MetaCart
We introduce the independent factor analysis (IFA) method for recovering independent hidden sources from their observed mixtures. IFA generalizes and unifies ordinary factor analysis (FA), principal component analysis (PCA), and independent component analysis (ICA), and can handle not only square noiseless mixing, but also the general case where the number of mixtures differs from the number of sources and the data are noisy. IFA is a two-step procedure. In the first step, the source densities, mixing matrix and noise covariance are estimated from the observed data by maximum likelihood. For this purpose we present an expectation-maximization (EM) algorithm, which performs unsupervised learning of an associated probabilistic model of the mixing situation. Each source in our model is described by a mixture of Gaussians, thus all the probabilistic calculations can be performed analytically. In the second step, the sources are reconstructed from the observed data by an optimal nonlinear ...
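The generative model behind IFA can be sketched with toy values: each source is drawn from a mixture of Gaussians, then passed through a non-square mixing matrix with additive Gaussian sensor noise. All mixture weights, means, and dimensions below are assumed illustrative values, not parameters from the paper; the EM estimation step itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixture-of-Gaussians source prior: a weighted sum of
# Gaussians keeps every probabilistic calculation analytic, as in IFA.
weights = np.array([0.5, 0.5])   # component weights (assumed)
means = np.array([-2.0, 2.0])    # component means (assumed)
stds = np.array([0.5, 0.5])      # component std devs (assumed)

def sample_source(n):
    """Draw n samples from the mixture-of-Gaussians source prior."""
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[comp], stds[comp])

# Generative model: noisy, non-square mixing x = A s + noise.
n, n_sources, n_mixtures = 1000, 2, 3
S = np.column_stack([sample_source(n) for _ in range(n_sources)])
A = rng.normal(size=(n_mixtures, n_sources))   # mixing matrix (unknown to IFA)
noise_cov = 0.1 * np.eye(n_mixtures)           # sensor noise covariance
X = S @ A.T + rng.multivariate_normal(np.zeros(n_mixtures), noise_cov, size=n)
```

In IFA only `X` would be observed; the EM step would estimate `A`, `noise_cov`, and the per-source mixture parameters from it.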
Principal Manifolds and Bayesian Subspaces for Visual Recognition
International Conference on Computer Vision, 1999
"... We investigate the use of linear and nonlinear principal manifolds for learning lowdimensional representations for visual recognition. Three techniques: Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Nonlinear PCA (NLPCA) are examined and tested in a visual recognition ..."
Abstract

Cited by 39 (1 self)
 Add to MetaCart
We investigate the use of linear and nonlinear principal manifolds for learning low-dimensional representations for visual recognition. Three techniques, Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Nonlinear PCA (NLPCA), are examined and tested in a visual recognition experiment using a large gallery of facial images from the FERET database. We compare the recognition performance of a nearest-neighbour matching rule with each principal manifold representation to that of a maximum a posteriori (MAP) matching rule using a Bayesian similarity measure derived from probabilistic subspaces, and demonstrate the superiority of the latter.
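A minimal sketch of the baseline nearest-neighbour matching rule in a PCA subspace, using random vectors as stand-ins for gallery images (the Bayesian MAP rule the paper favours is not shown); the two-subject setup and all sizes are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "gallery": 20 flattened images of 2 subjects (stand-ins, not FERET data).
gallery = rng.normal(size=(20, 64))
labels = np.repeat([0, 1], 10)
gallery[labels == 1] += 3.0            # separate the two identity clusters

# PCA: project onto the top-k right singular vectors of the centered data.
mean = gallery.mean(axis=0)
_, _, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
k = 5
project = lambda x: (x - mean) @ Vt[:k].T

def recognize(probe):
    """Nearest-neighbour matching rule in the PCA subspace."""
    d = np.linalg.norm(project(gallery) - project(probe), axis=1)
    return labels[np.argmin(d)]

probe = gallery[15] + 0.1 * rng.normal(size=64)   # noisy view of subject 1
```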
Blind Source Separation and Deconvolution: The Dynamic Component Analysis Algorithm
Neural Computation, 1998
"... We derive a novel family of unsupervised learning algorithms for blind separation of mixed and convolved sources. Our approach is based on formulating the separation problem as a learning task of a spatiotemporal generative model, whose parameters are adapted iteratively to minimize suitable error ..."
Abstract

Cited by 39 (6 self)
 Add to MetaCart
We derive a novel family of unsupervised learning algorithms for blind separation of mixed and convolved sources. Our approach is based on formulating the separation problem as a learning task of a spatiotemporal generative model, whose parameters are adapted iteratively to minimize suitable error functions, thus ensuring stability of the algorithms. The resulting learning rules achieve separation by exploiting high-order spatiotemporal statistics of the mixture data. Different rules are obtained by learning generative models in the frequency and time domains, whereas a hybrid frequency/time model leads to the best performance. These algorithms generalize independent component analysis to the case of convolutive mixtures and exhibit superior performance on instantaneous mixtures. An extension of the relative-gradient concept to the spatiotemporal case leads to fast and efficient learning rules with equivariant properties. Our approach can incorporate information about the mixing sit...
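The convolutive setting these algorithms address can be illustrated with a toy time-domain mixture, where each sensor observes a sum of FIR-filtered sources. The filter taps below are assumed values and no separation algorithm is run; this only shows the forward model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Convolutive mixing: x_i = sum_j (a_ij * s_j), where * is convolution,
# in contrast to the instantaneous case x = A s.
n = 1000
S = np.sign(rng.normal(size=(2, n)))     # two toy binary (sub-Gaussian) sources

# Assumed FIR mixing filters a_ij (channel + cross-talk with memory).
A = [[np.array([1.0, 0.3]), np.array([0.5, 0.2])],
     [np.array([0.4, 0.1]), np.array([1.0, 0.25])]]

X = np.array([
    sum(np.convolve(S[j], A[i][j])[:n] for j in range(2))
    for i in range(2)
])
```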
Joint Bayesian Endmember Extraction and Linear Unmixing for Hyperspectral Imagery
"... Abstract—This paper studies a fully Bayesian algorithm for endmember extraction and abundance estimation for hyperspectral imagery. Each pixel of the hyperspectral image is decomposed as a linear combination of pure endmember spectra following the linear mixing model. The estimation of the unknown e ..."
Abstract

Cited by 36 (25 self)
 Add to MetaCart
Abstract—This paper studies a fully Bayesian algorithm for endmember extraction and abundance estimation for hyperspectral imagery. Each pixel of the hyperspectral image is decomposed as a linear combination of pure endmember spectra following the linear mixing model. The estimation of the unknown endmember spectra is conducted in a unified manner by generating the posterior distribution of abundances and endmember parameters under a hierarchical Bayesian model. This model assumes conjugate prior distributions for these parameters, accounts for non-negativity and full-additivity constraints, and exploits the fact that the endmember proportions lie on a lower-dimensional simplex. A Gibbs sampler is proposed to overcome the complexity of evaluating the resulting posterior distribution. This sampler generates samples distributed according to the posterior distribution and estimates the unknown parameters using these generated samples. The accuracy of the joint Bayesian estimator is illustrated by simulations conducted on synthetic and real AVIRIS images. Index Terms—Bayesian inference, endmember extraction, hyperspectral imagery, linear spectral unmixing, MCMC methods.
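The linear mixing model with the non-negativity and full-additivity constraints can be sketched as follows; the endmember spectra and dimensions are assumed toy values, and the Dirichlet draw is just a convenient way to place abundances on the simplex (the paper's hierarchical prior and Gibbs sampler are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear mixing model: each pixel y = M a + n, with abundances a >= 0
# summing to one (the simplex the hierarchical prior exploits).
n_bands, n_endmembers = 50, 3
M = np.abs(rng.normal(size=(n_bands, n_endmembers)))  # endmember spectra (toy)

def sample_abundances(n_pixels):
    """Draw abundance vectors uniformly on the simplex (Dirichlet(1,...,1))."""
    return rng.dirichlet(np.ones(n_endmembers), size=n_pixels)

A = sample_abundances(100)
noise = 0.01 * rng.normal(size=(100, n_bands))
Y = A @ M.T + noise                      # synthetic hyperspectral pixels
```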
A survey on wavelet applications in data mining
 SIGKDD Explor. Newsl
"... Recently there has been significant development in the use of wavelet methods in various data mining processes. However, there has been written no comprehensive survey available on the topic. The goal of this is paper to fill the void. First, the paper presents a highlevel datamining framework tha ..."
Abstract

Cited by 30 (3 self)
 Add to MetaCart
Recently there has been significant development in the use of wavelet methods in various data mining processes. However, no comprehensive survey of the topic has been available. The goal of this paper is to fill that void. First, the paper presents a high-level data-mining framework that reduces the overall process into smaller components. Then applications of wavelets for each component are reviewed. The paper concludes by discussing the impact of wavelets on data mining research and outlining potential future research directions and applications.
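As a concrete example of the kind of wavelet preprocessing such frameworks build on, here is one level of the Haar transform, the simplest orthonormal wavelet (an illustration, not a construction from the survey itself):

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform: split a signal of even
    length into low-pass (smooth) and high-pass (detail) coefficients."""
    x = np.asarray(x, dtype=float)
    smooth = (x[0::2] + x[1::2]) / np.sqrt(2)   # pairwise scaled averages
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # pairwise scaled differences
    return smooth, detail

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
smooth, detail = haar_step(signal)
```

Because the transform is orthonormal, it preserves the signal's energy, which is why thresholding the small detail coefficients gives a controlled approximation in mining tasks such as similarity search.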
A Geometric Algorithm for Overcomplete Linear ICA
Neurocomputing, 2003
"... Geometric algorithms for linear quadratic independent component analysis (ICA) have recently received some attention due to their pictorial description and their relative ease of implementation. The geometric approach to ICA has been proposed first by Puntonet and Prieto [1] [2] in order to separate ..."
Abstract

Cited by 23 (11 self)
 Add to MetaCart
Geometric algorithms for linear quadratic independent component analysis (ICA) have recently received some attention due to their pictorial description and their relative ease of implementation. The geometric approach to ICA was first proposed by Puntonet and Prieto [1], [2] in order to separate linear mixtures. We generalize these algorithms to overcomplete cases with more sources than sensors. With geometric ICA we get an efficient method for the matrix-recovery step in the framework of a two-step approach to the source separation problem. The second step, source recovery, uses a maximum-likelihood approach. There we prove that the shortest-path algorithm proposed by Bofill and Zibulevsky in [3] indeed solves the maximum-likelihood conditions.
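The maximum-likelihood source-recovery step can be sketched directly: under a Laplacian source prior, the ML estimate minimizes the l1 norm subject to As = x, and in two dimensions the minimizer is supported on at most two mixing directions, so enumerating column pairs solves it exactly. The mixing matrix below is an assumed toy example, not one from the paper.

```python
import numpy as np
from itertools import combinations

# Overcomplete mixing: 2 sensors, 3 sources (assumed toy matrix).
A = np.array([[1.0, 0.0, 0.7],
              [0.0, 1.0, 0.7]])

def recover_sources(x, A):
    """Minimum-l1 ("shortest-path") recovery: solve the exact 2x2 system
    for every pair of mixing columns and keep the solution of least l1
    norm; an LP basic solution uses at most 2 columns, so this is exact."""
    best, best_l1 = None, np.inf
    m = A.shape[1]
    for i, j in combinations(range(m), 2):
        sub = A[:, [i, j]]
        if abs(np.linalg.det(sub)) < 1e-12:
            continue                      # skip collinear column pairs
        coeff = np.linalg.solve(sub, x)
        s = np.zeros(m)
        s[[i, j]] = coeff
        if np.abs(s).sum() < best_l1:
            best, best_l1 = s, np.abs(s).sum()
    return best

x = A @ np.array([0.0, 0.0, 2.0])        # observation from source 3 alone
s_hat = recover_sources(x, A)
```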
Linear Geometric ICA: Fundamentals and Algorithms
, 2003
"... Geometric algorithms for linear independent component analysis (ICA) have recently received some attention due to their pictorial description and their relative ease of implementation. The geometric approach to ICA was proposed first by Puntonet and Prieto (1995). We will reconsider geometric ICA in ..."
Abstract

Cited by 19 (10 self)
 Add to MetaCart
Geometric algorithms for linear independent component analysis (ICA) have recently received some attention due to their pictorial description and their relative ease of implementation. The geometric approach to ICA was first proposed by Puntonet and Prieto (1995). We will reconsider geometric ICA in a theoretical framework showing that fixed points of geometric ICA fulfill a geometric convergence condition (GCC), which the mixed images of the unit vectors satisfy too. This leads to a conjecture claiming that in the non-Gaussian unimodal symmetric case, there is only one stable fixed point, implying the uniqueness of geometric ICA after convergence. Guided by the principles of ordinary geometric ICA, we then present a new approach to linear geometric ICA based on histograms, observing a considerable improvement in separation quality of different distributions and a sizable reduction in computational cost, by a factor of 100, compared to the ordinary geometric approach. Furthermore, we explore the accuracy of the algorithm depending on the number of samples and the choice of the mixing matrix, and compare geometric algorithms with classical ICA algorithms, namely Extended Infomax and FastICA. Finally, we discuss the problem of high-dimensional data sets within the realm of geometrical ICA algorithms.
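The geometric intuition behind histogram-based ICA (mixing directions appearing as peaks in an angular histogram of a sparse 2-D mixture) can be sketched as follows; the Laplacian sources and mixing matrix are assumed toy choices, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(6)

# Sparse (Laplacian) sources mixed by a toy 2x2 matrix: data points
# concentrate along the mixing-matrix columns.
n = 50000
S = rng.laplace(size=(n, 2))
A = np.array([[1.0, 0.2],
              [0.3, 1.0]])
X = S @ A.T

# Angular histogram: directions folded to [0, pi), so +v and -v coincide.
angles = np.mod(np.arctan2(X[:, 1], X[:, 0]), np.pi)
hist, edges = np.histogram(angles, bins=180)
peak_direction = edges[np.argmax(hist)]   # should lie near a column angle
```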
Variational learning in nonlinear Gaussian belief networks
Neural Computation, 1999
"... We view perceptual tasks such as vision and speech recognition as inference problems where the goal is to estimate the posterior distribution over latent variables (e.g., depth in stereo vision) given the sensory input. The recent flurry of research in independent component analysis exemplifies the ..."
Abstract

Cited by 17 (6 self)
 Add to MetaCart
We view perceptual tasks such as vision and speech recognition as inference problems where the goal is to estimate the posterior distribution over latent variables (e.g., depth in stereo vision) given the sensory input. The recent flurry of research in independent component analysis exemplifies the importance of inferring the continuous-valued latent variables of input data. The latent variables found by this method are linearly related to the input, but perception requires nonlinear inferences such as classification and depth estimation. In this paper, we present a unifying framework for stochastic neural networks with nonlinear latent variables. Nonlinear units are obtained by passing the outputs of linear Gaussian units through various nonlinearities. We present a general variational method that maximizes a lower bound on the likelihood of a training set and give results on two visual feature extraction problems. We also show how the variational method can be used for pattern classification and compare the performance of these nonlinear networks with other methods on the problem of handwritten digit recognition.
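A single layer of such a network can be sketched as a linear Gaussian unit followed by a fixed nonlinearity; the tanh choice, weights, and noise level below are assumed toy values, and the variational learning procedure is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

def nlgbn_layer(parents, W, b, noise_std, f=np.tanh):
    """Sample one layer of nonlinear units: each unit is a linear Gaussian
    unit (weighted sum of parents plus Gaussian noise) pushed through a
    fixed nonlinearity f. parents: (n, d_in); W: (d_in, d_out)."""
    pre = parents @ W + b + noise_std * rng.normal(size=(len(parents), W.shape[1]))
    return f(pre)

top = rng.normal(size=(100, 4))          # top-level latent variables
W = 0.5 * rng.normal(size=(4, 3))        # toy weights
h = nlgbn_layer(top, W, b=0.0, noise_std=0.1)
```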
Developments of the generative topographic mapping
Neurocomputing, 1998
"... 1 Introduction Probability theory provides a powerful, consistent framework for dealing quantitatively with uncertainty (10). It is therefore ideally suited as a theoretical foundation for pattern recognition. Recently, the selforganizing map (SOM) of 19) was reformulated within a probabilistic s ..."
Abstract

Cited by 16 (1 self)
 Add to MetaCart
Probability theory provides a powerful, consistent framework for dealing quantitatively with uncertainty [10]. It is therefore ideally suited as a theoretical foundation for pattern recognition. Recently, the self-organizing map (SOM) of [19] was reformulated within a probabilistic setting [7] to give the GTM (Generative Topographic Mapping). In going to a probabilistic formulation, several limitations of the SOM were overcome, including the absence of a cost function and the lack of a convergence proof.
Adaptive separation of mixed broadband sound sources with delays by a beamforming Hérault-Jutten network
IEEE Journal of Oceanic Engineering, 1995
"... AbstractThe HbraultJutten network has been used to separate independent sound sources that have been linearly mixed. The problem of separating a mixture of several independent signals in freefield conditions or a signal and echoes in confined spaces is compounded by propagation time delays betwee ..."
Abstract

Cited by 15 (5 self)
 Add to MetaCart
Abstract—The Hérault-Jutten network has been used to separate independent sound sources that have been linearly mixed. The problem of separating a mixture of several independent signals in free-field conditions, or a signal and echoes in confined spaces, is compounded by propagation time delays between the source(s) and the microphones, because the conventional Hérault-Jutten network cannot tolerate time delays. In this paper, we combine a symmetrically balanced beamforming array with the conventional Hérault-Jutten network. The resulting system can adaptively separate signals that include delays introduced by the propagation medium. The proposed algorithm has been simulated in digital communication multipath channels where intersymbol interference exists. The simulation results show two clear advantages of the proposed method over conventional adaptive equalization: 1) there is no penalty for very long impulse responses caused by long delays, and 2) no training signals are needed for equalization. The design of a multi-beamformer to handle the source separation of multiple broadband signals is also presented.
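The conventional (delay-free) Hérault-Jutten adaptation that the beamforming array is combined with can be sketched as follows; the cubic/linear nonlinearity pair, learning rate, and mixing matrix are assumed toy choices, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two independent sub-Gaussian sources, instantaneously mixed (no delays):
# the setting the conventional Hérault-Jutten network handles.
n = 20000
S = rng.uniform(-1.0, 1.0, size=(n, 2))
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])
X = S @ A.T

# Classic Hérault-Jutten rule: feedback network y = x - C y, i.e.
# y = (I + C)^{-1} x, with off-diagonal weights adapted by
# dC_ij ~ f(y_i) g(y_j); here f(y) = y^3 and g(y) = y (assumed choices).
C = np.zeros((2, 2))
eta = 0.01            # toy learning rate
I = np.eye(2)
for x in X:
    y = np.linalg.solve(I + C, x)
    dC = eta * np.outer(y**3, y)
    np.fill_diagonal(dC, 0.0)   # only cross-weights adapt
    C += dC

# After adaptation the outputs should be far less correlated than the mixtures.
Y = np.linalg.solve(I + C, X.T).T
```

At the fixed point E[f(y_i) g(y_j)] = 0 for i != j, which for this mixing matrix is reached at C approximately equal to the off-diagonal part of A, so (I + C)^(-1) A is nearly diagonal.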