Results 1-10 of 65
Independent Factor Analysis
Neural Computation, 1999
"... We introduce the independent factor analysis (IFA) method for recovering independent hidden sources from their observed mixtures. IFA generalizes and unifies ordinary factor analysis (FA), principal component analysis (PCA), and independent component analysis (ICA), and can handle not only square no ..."
Abstract

Cited by 274 (9 self)
 Add to MetaCart
We introduce the independent factor analysis (IFA) method for recovering independent hidden sources from their observed mixtures. IFA generalizes and unifies ordinary factor analysis (FA), principal component analysis (PCA), and independent component analysis (ICA), and can handle not only square noiseless mixing, but also the general case where the number of mixtures differs from the number of sources and the data are noisy. IFA is a two-step procedure. In the first step, the source densities, mixing matrix, and noise covariance are estimated from the observed data by maximum likelihood. For this purpose we present an expectation-maximization (EM) algorithm, which performs unsupervised learning of an associated probabilistic model of the mixing situation. Each source in our model is described by a mixture of Gaussians, thus all the probabilistic calculations can be performed analytically. In the second step, the sources are reconstructed from the observed data by an optimal nonlinear ...
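The generative model the abstract describes (each source a mixture of Gaussians, mixed linearly with additive noise, and more mixtures than sources) can be sketched as follows. All dimensions and parameter values here are hypothetical, chosen only to illustrate the model, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 2 sources, 3 observed mixtures, 500 samples.
n_src, n_obs, n_samp = 2, 3, 500

# Each source is a mixture of Gaussians (here 2 components per source).
weights = np.array([[0.3, 0.7], [0.5, 0.5]])   # component proportions
means   = np.array([[-2.0, 2.0], [0.0, 4.0]])  # component means
stds    = np.array([[0.5, 1.0], [1.0, 0.5]])   # component std devs

# Draw each source by first picking a component, then sampling from it.
S = np.empty((n_src, n_samp))
for i in range(n_src):
    comp = rng.choice(2, size=n_samp, p=weights[i])
    S[i] = rng.normal(means[i, comp], stds[i, comp])

A = rng.normal(size=(n_obs, n_src))       # mixing matrix (non-square)
noise = 0.1 * rng.normal(size=(n_obs, n_samp))
X = A @ S + noise                         # noisy, non-square mixing, as in IFA

print(X.shape)  # (3, 500)
```

Fitting this model by EM, as the paper does, would estimate `weights`, `means`, `stds`, `A`, and the noise covariance from `X` alone.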
Joint Bayesian Endmember Extraction and Linear Unmixing for Hyperspectral Imagery
"... Abstract—This paper studies a fully Bayesian algorithm for endmember extraction and abundance estimation for hyperspectral imagery. Each pixel of the hyperspectral image is decomposed as a linear combination of pure endmember spectra following the linear mixing model. The estimation of the unknown e ..."
Abstract

Cited by 77 (37 self)
 Add to MetaCart
Abstract—This paper studies a fully Bayesian algorithm for endmember extraction and abundance estimation for hyperspectral imagery. Each pixel of the hyperspectral image is decomposed as a linear combination of pure endmember spectra following the linear mixing model. The estimation of the unknown endmember spectra is conducted in a unified manner by generating the posterior distribution of abundances and endmember parameters under a hierarchical Bayesian model. This model assumes conjugate prior distributions for these parameters, accounts for nonnegativity and full-additivity constraints, and exploits the fact that the endmember proportions lie on a lower-dimensional simplex. A Gibbs sampler is proposed to overcome the complexity of evaluating the resulting posterior distribution. This sampler generates samples distributed according to the posterior distribution and estimates the unknown parameters using these generated samples. The accuracy of the joint Bayesian estimator is illustrated by simulations conducted on synthetic and real AVIRIS images. Index Terms—Bayesian inference, endmember extraction, hyperspectral imagery, linear spectral unmixing, MCMC methods.
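The linear mixing model with the abundance constraints the abstract mentions (nonnegativity and full additivity, i.e. abundances on the simplex) can be sketched by drawing abundances from a Dirichlet distribution, whose samples lie on the simplex by construction. Sizes and noise level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 3 endmembers, 50 spectral bands, 100 pixels.
n_end, n_bands, n_pix = 3, 50, 100

M = rng.uniform(0.0, 1.0, size=(n_bands, n_end))  # endmember spectra (columns)

# Dirichlet draws are nonnegative and sum to one, which is exactly the
# abundance constraint the hierarchical model enforces.
A = rng.dirichlet(np.ones(n_end), size=n_pix).T   # (n_end, n_pix)

noise = 0.01 * rng.normal(size=(n_bands, n_pix))
Y = M @ A + noise                                 # linear mixing model per pixel

assert np.all(A >= 0) and np.allclose(A.sum(axis=0), 1.0)
```

The paper's Gibbs sampler inverts this model: given `Y`, it samples `M` and `A` from their joint posterior rather than generating them.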
Blind Source Separation and Deconvolution: The Dynamic Component Analysis Algorithm
Neural Computation, 1998
"... We derive a novel family of unsupervised learning algorithms for blind separation of mixed and convolved sources. Our approach is based on formulating the separation problem as a learning task of a spatiotemporal generative model, whose parameters are adapted iteratively to minimize suitable error ..."
Abstract

Cited by 53 (6 self)
 Add to MetaCart
We derive a novel family of unsupervised learning algorithms for blind separation of mixed and convolved sources. Our approach is based on formulating the separation problem as a learning task of a spatiotemporal generative model, whose parameters are adapted iteratively to minimize suitable error functions, thus ensuring stability of the algorithms. The resulting learning rules achieve separation by exploiting high-order spatiotemporal statistics of the mixture data. Different rules are obtained by learning generative models in the frequency and time domains, whereas a hybrid frequency/time model leads to the best performance. These algorithms generalize independent component analysis to the case of convolutive mixtures and exhibit superior performance on instantaneous mixtures. An extension of the relative-gradient concept to the spatiotemporal case leads to fast and efficient learning rules with equivariant properties. Our approach can incorporate information about the mixing sit...
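A convolutive mixture, the setting this abstract generalizes ICA to, replaces the instantaneous mixing matrix with short mixing filters per source/sensor pair. A minimal sketch, with hypothetical filter taps:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two sparse (Laplacian) sources, hypothetical 2-tap mixing filters.
n_samp = 1000
s1, s2 = rng.laplace(size=n_samp), rng.laplace(size=n_samp)

h11, h12 = np.array([1.0, 0.5]), np.array([0.3, -0.2])
h21, h22 = np.array([-0.4, 0.1]), np.array([1.0, 0.6])

# Each observation is a sum of filtered sources: the spatiotemporal
# generalization of instantaneous (matrix) mixing.
x1 = np.convolve(s1, h11)[:n_samp] + np.convolve(s2, h12)[:n_samp]
x2 = np.convolve(s1, h21)[:n_samp] + np.convolve(s2, h22)[:n_samp]

assert x1.shape == (n_samp,) and x2.shape == (n_samp,)
```

Separation then requires recovering `s1`, `s2` from `x1`, `x2` without knowing the filters, which is what the paper's frequency- and time-domain learning rules do.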
Principal Manifolds and Bayesian Subspaces for Visual Recognition
International Conference on Computer Vision, 1999
"... We investigate the use of linear and nonlinear principal manifolds for learning lowdimensional representations for visual recognition. Three techniques: Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Nonlinear PCA (NLPCA) are examined and tested in a visual recognition ..."
Abstract

Cited by 51 (1 self)
 Add to MetaCart
(Show Context)
We investigate the use of linear and nonlinear principal manifolds for learning low-dimensional representations for visual recognition. Three techniques are examined and tested in a visual recognition experiment using a large gallery of facial images from the FERET database: Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Nonlinear PCA (NLPCA). We compare the recognition performance of a nearest-neighbour matching rule with each principal manifold representation to that of a maximum a posteriori (MAP) matching rule using a Bayesian similarity measure derived from probabilistic subspaces, and demonstrate the superiority of the latter.
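The first baseline the abstract describes, nearest-neighbour matching in a PCA subspace, can be sketched with random data standing in for the face gallery. The gallery sizes, number of components, and identities here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a face gallery: 40 "images" of 64 pixels, 4 identities.
gallery = rng.normal(size=(40, 64))
labels = np.repeat(np.arange(4), 10)

# PCA via SVD of the mean-centred gallery; keep the top 10 components.
mean = gallery.mean(axis=0)
U, s, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
components = Vt[:10]

def project(x):
    return (x - mean) @ components.T

def nearest_neighbour(probe):
    """1-NN matching rule in the principal subspace."""
    d = np.linalg.norm(project(gallery) - project(probe), axis=1)
    return labels[np.argmin(d)]

# A gallery image should match its own identity.
print(nearest_neighbour(gallery[0]) == labels[0])
```

The paper's point is that replacing this Euclidean 1-NN rule with a MAP rule under a probabilistic subspace model improves recognition.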
A survey on wavelet applications in data mining
SIGKDD Explor. Newsl.
"... Recently there has been significant development in the use of wavelet methods in various data mining processes. However, there has been written no comprehensive survey available on the topic. The goal of this is paper to fill the void. First, the paper presents a highlevel datamining framework tha ..."
Abstract

Cited by 37 (4 self)
 Add to MetaCart
(Show Context)
Recently there has been significant development in the use of wavelet methods in various data mining processes; however, no comprehensive survey of the topic has been available. The goal of this paper is to fill that void. First, the paper presents a high-level data-mining framework that reduces the overall process into smaller components. Then applications of wavelets for each component are reviewed. The paper concludes by discussing the impact of wavelets on data mining research and outlining potential future research directions and applications.
A Geometric Algorithm for Overcomplete Linear ICA
Neurocomputing, 2003
"... Geometric algorithms for linear quadratic independent component analysis (ICA) have recently received some attention due to their pictorial description and their relative ease of implementation. The geometric approach to ICA has been proposed first by Puntonet and Prieto [1] [2] in order to separate ..."
Abstract

Cited by 28 (11 self)
 Add to MetaCart
(Show Context)
Geometric algorithms for linear quadratic independent component analysis (ICA) have recently received some attention due to their pictorial description and their relative ease of implementation. The geometric approach to ICA was first proposed by Puntonet and Prieto [1], [2] in order to separate linear mixtures. We generalize these algorithms to overcomplete cases with more sources than sensors. With geometric ICA we get an efficient method for the matrix-recovery step in the framework of a two-step approach to the source separation problem. The second step, source recovery, uses a maximum-likelihood approach; we prove that the shortest-path algorithm proposed by Bofill and Zibulevsky [3] indeed solves the maximum-likelihood conditions.
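In the source-recovery step, maximum likelihood under a Laplacian source prior reduces to finding the minimum-l1 solution of the underdetermined system `A s = x`. For two sensors that optimum activates at most two sources, so it can be found by enumerating column pairs, a brute-force stand-in for the shortest-path algorithm (the mixing matrix and observation below are hypothetical):

```python
import numpy as np
from itertools import combinations

# Hypothetical overcomplete setup: 3 sources, 2 sensors.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

def recover_l1(x, A):
    """Maximum-likelihood recovery under a Laplacian source prior,
    i.e. the minimum-l1 solution of A s = x. In 2-D the optimum uses
    at most two active sources, so we enumerate column pairs."""
    n = A.shape[1]
    best, best_norm = None, np.inf
    for i, j in combinations(range(n), 2):
        sub = A[:, [i, j]]
        if abs(np.linalg.det(sub)) < 1e-12:
            continue
        coeff = np.linalg.solve(sub, x)
        if np.abs(coeff).sum() < best_norm:
            best = np.zeros(n)
            best[[i, j]] = coeff
            best_norm = np.abs(coeff).sum()
    return best

s = recover_l1(np.array([1.0, 1.0]), A)
print(s)  # the l1-optimal solution uses the third column alone
```

The shortest-path algorithm reaches the same optimum without exhaustive enumeration, which matters once the number of sources grows.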
Developments of the generative topographic mapping
Neurocomputing, 1998
"... 1 Introduction Probability theory provides a powerful, consistent framework for dealing quantitatively with uncertainty (10). It is therefore ideally suited as a theoretical foundation for pattern recognition. Recently, the selforganizing map (SOM) of 19) was reformulated within a probabilistic s ..."
Abstract

Cited by 25 (1 self)
 Add to MetaCart
(Show Context)
Probability theory provides a powerful, consistent framework for dealing quantitatively with uncertainty [10]. It is therefore ideally suited as a theoretical foundation for pattern recognition. Recently, the self-organizing map (SOM) of [19] was reformulated within a probabilistic setting [7] to give the GTM (Generative Topographic Mapping). In going to a probabilistic formulation, several limitations of the SOM were overcome, including the absence of a cost function and the lack of a convergence proof.
Variational learning in nonlinear Gaussian belief networks
Neural Computation, 1999
"... We view perceptual tasks such as vision and speech recognition as inference problems where the goal is to estimate the posterior distribution over latent variables (e.g., depth in stereo vision) given the sensory input. The recent flurry of research in independent component analysis exemplifies the ..."
Abstract

Cited by 21 (6 self)
 Add to MetaCart
We view perceptual tasks such as vision and speech recognition as inference problems where the goal is to estimate the posterior distribution over latent variables (e.g., depth in stereo vision) given the sensory input. The recent flurry of research in independent component analysis exemplifies the importance of inferring the continuous-valued latent variables of input data. The latent variables found by this method are linearly related to the input, but perception requires nonlinear inferences such as classification and depth estimation. In this paper, we present a unifying framework for stochastic neural networks with nonlinear latent variables. Nonlinear units are obtained by passing the outputs of linear Gaussian units through various nonlinearities. We present a general variational method that maximizes a lower bound on the likelihood of a training set and give results on two visual feature extraction problems. We also show how the variational method can be used for pattern classification and compare the performance of these nonlinear networks with other methods on the problem of handwritten digit recognition.
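The building block the abstract describes, a linear Gaussian unit whose output passes through a nonlinearity, can be sketched generatively. Layer sizes, the noise level, and the choice of tanh are hypothetical illustrations, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)

# A "nonlinear unit": a linear Gaussian unit (affine map plus Gaussian
# noise) whose output is passed through a nonlinearity, here tanh.
n_in, n_out, n_samp = 4, 3, 1000

W = rng.normal(size=(n_out, n_in))     # weights of the linear Gaussian layer
b = rng.normal(size=(n_out, 1))
x = rng.normal(size=(n_in, n_samp))

pre = W @ x + b + 0.1 * rng.normal(size=(n_out, n_samp))  # Gaussian unit
y = np.tanh(pre)                                          # nonlinear unit

assert y.shape == (3, 1000)
```

Inference in such a network is intractable exactly, which is why the paper bounds the likelihood variationally instead of computing it.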
Design and implementation of a robot audition system for automatic speech recognition of simultaneous speech
Proceedings of the Workshop on Automatic Speech Recognition and Understanding, 2007
"... This paper addresses robot audition that can cope with speech that has a low signaltonoise ratio (SNR) in real time by using robotembedded microphones. To cope with such a noise, we exploited two key ideas; Preprocessing consisting of sound source localization and separation with a microphone arr ..."
Abstract

Cited by 20 (7 self)
 Add to MetaCart
(Show Context)
This paper addresses robot audition that can cope with speech that has a low signal-to-noise ratio (SNR) in real time by using robot-embedded microphones. To cope with such noise, we exploited two key ideas: preprocessing consisting of sound source localization and separation with a microphone array, and system integration based on missing feature theory (MFT). Preprocessing improves the SNR of a target sound signal using geometric source separation with a multichannel post-filter. MFT uses only reliable acoustic features in speech recognition and masks unreliable parts caused by errors in preprocessing. MFT thus provides smooth integration between preprocessing and automatic speech recognition. A real-time robot audition system based on these two key ideas was constructed for Honda ASIMO and Humanoid SIG2 with 8-channel microphone arrays. The paper also reports the improvement of ASR performance when using two and three simultaneous speech signals. Index Terms — Robot audition, missing feature theory, geometric source separation, automatic speech recognition
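The MFT idea of scoring only reliable features can be sketched with a masked Gaussian log-likelihood. The feature dimension, reliability estimates, and threshold below are all hypothetical placeholders for what the paper's preprocessing would supply:

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch of missing feature theory (MFT): score only the features judged
# reliable and mask out those corrupted by preprocessing errors.
features = rng.normal(size=20)          # acoustic feature vector
reliability = rng.uniform(size=20)      # per-feature reliability estimate
mask = reliability > 0.5                # True = reliable, False = masked

def masked_log_likelihood(feat, mean, var, mask):
    """Gaussian log-likelihood accumulated over reliable features only."""
    ll = -0.5 * (np.log(2 * np.pi * var) + (feat - mean) ** 2 / var)
    return ll[mask].sum()

score = masked_log_likelihood(features, 0.0, 1.0, mask)
```

In a full recognizer the same masking is applied inside the acoustic model's state likelihoods, so recognition degrades gracefully as preprocessing errors grow.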
Linear Geometric ICA: Fundamentals and Algorithms
2003
"... Geometric algorithms for linear independent component analysis (ICA) have recently received some attention due to their pictorial description and their relative ease of implementation. The geometric approach to ICA was proposed first by Puntonet and Prieto (1995). We will reconsider geometric ICA in ..."
Abstract

Cited by 20 (10 self)
 Add to MetaCart
Geometric algorithms for linear independent component analysis (ICA) have recently received some attention due to their pictorial description and their relative ease of implementation. The geometric approach to ICA was first proposed by Puntonet and Prieto (1995). We reconsider geometric ICA in a theoretic framework, showing that fixed points of geometric ICA fulfill a geometric convergence condition (GCC), which the mixed images of the unit vectors satisfy too. This leads to a conjecture claiming that in the non-Gaussian, unimodal, symmetric case there is only one stable fixed point, implying the uniqueness of geometric ICA after convergence. Guided by the principles of ordinary geometric ICA, we then present a new approach to linear geometric ICA based on histograms, observing a considerable improvement in separation quality for different distributions and a sizable reduction in computational cost, by a factor of 100, compared to the ordinary geometric approach. Furthermore, we explore the accuracy of the algorithm depending on the number of samples and the choice of the mixing matrix, and compare geometric algorithms with classical ICA algorithms, namely Extended Infomax and FastICA. Finally, we discuss the problem of high-dimensional data sets within the realm of geometric ICA algorithms.
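The histogram idea behind such geometric algorithms can be sketched in 2-D: for a linear mixture of sparse sources, the angles of the data points cluster around the directions of the mixing-matrix columns, so an angle histogram reveals the mixing geometry. The mixing matrix and bin count below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# 2-D linear mixture of sparse (Laplacian) sources. The columns of A
# point at angles 0 and pi/4; the angle histogram of the mixed data
# develops peaks near those directions, which is what histogram-based
# geometric ICA exploits to recover A.
S = rng.laplace(size=(2, 5000))
A = np.array([[1.0, 0.7],
              [0.0, 0.7]])
X = A @ S

angles = np.arctan2(X[1], X[0]) % np.pi     # fold directions to [0, pi)
hist, edges = np.histogram(angles, bins=90)
```

Locating the histogram peaks (with some smoothing in practice) then estimates the column directions, replacing the slower stochastic learning of the ordinary geometric approach.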