Results 1–10 of 91
Denoising Source Separation
"... A new algorithmic framework called denoising source separation (DSS) is introduced. The main benefit of this framework is that it allows for easy development of new source separation algorithms which are optimised for specific problems. In this framework, source separation algorithms are constuct ..."
Abstract

Cited by 49 (7 self)
A new algorithmic framework called denoising source separation (DSS) is introduced. The main benefit of this framework is that it allows for easy development of new source separation algorithms which are optimised for specific problems. In this framework, source separation algorithms are constructed around denoising procedures. The resulting algorithms can range from almost blind to highly specialised source separation algorithms. Both simple linear and more complex nonlinear or adaptive denoising schemes are considered. Some existing independent component analysis algorithms are reinterpreted within the DSS framework and new, robust blind source separation algorithms are suggested. Although DSS algorithms need not be explicitly based on objective functions, there is often an implicit objective function that is optimised. The exact relation between the denoising procedure and the objective function is derived and a useful approximation of the objective function is presented. In the experimental section, various DSS schemes are applied extensively to artificial data, to real magnetoencephalograms and to simulated CDMA mobile network signals. Finally, various extensions to the proposed DSS algorithms are considered. These include nonlinear observation mappings, hierarchical models and overcomplete, nonorthogonal feature spaces. With these extensions, DSS appears to have relevance to many existing models of neural information processing.
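The core DSS iteration alternates a projection with a denoising step on sphered data. The sketch below, using a simple moving-average low-pass denoiser, is an illustrative reconstruction in Python; the function and variable names are ours, not from the paper:

```python
import numpy as np

def dss_one_source(X, denoise, n_iter=100, seed=0):
    """Extract one source by a basic DSS iteration.

    X       : (dims, samples) sphered (whitened) data
    denoise : function mapping a source estimate to its denoised version;
              the choice of denoiser defines the separation algorithm
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = w @ X               # current source estimate
        s_plus = denoise(s)     # denoising step
        w = X @ s_plus          # re-estimate the projection vector
        w /= np.linalg.norm(w)  # keep unit norm
    return w, w @ X

def lowpass(s, width=10):
    """Simple linear denoiser: moving-average low-pass filter."""
    kernel = np.ones(width) / width
    return np.convolve(s, kernel, mode="same")
```

With this low-pass denoiser the iteration converges to the direction with the most low-frequency power, so a slowly varying source is pulled out of a mixture with a fast-varying one.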
Advances in nonlinear blind source separation
In Proc. of the 4th Int. Symp. on Independent Component Analysis and Blind Signal Separation (ICA2003), 2003
"... Abstract — In this paper, we briefly review recent advances in blind source separation (BSS) for nonlinear mixing models. After a general introduction to the nonlinear BSS and ICA (independent Component Analysis) problems, we discuss in more detail uniqueness issues, presenting some new results. A f ..."
Abstract

Cited by 37 (2 self)
Abstract — In this paper, we briefly review recent advances in blind source separation (BSS) for nonlinear mixing models. After a general introduction to the nonlinear BSS and ICA (Independent Component Analysis) problems, we discuss uniqueness issues in more detail, presenting some new results. A fundamental difficulty in the nonlinear BSS problem, and even more so in the nonlinear ICA problem, is that they are nonunique without extra constraints, which are often implemented by using a suitable regularization. Post-nonlinear mixtures are an important special case, where a nonlinearity is applied to linear mixtures. For such mixtures, the ambiguities are essentially the same as for the linear ICA or BSS problems. In the later part of this paper, various separation techniques proposed for post-nonlinear mixtures and general nonlinear mixtures are reviewed.
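A post-nonlinear mixture, as described above, applies a component-wise invertible nonlinearity to an ordinary linear mixture. A minimal generative sketch (the tanh nonlinearity and the mixing matrix are chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.laplace(size=(2, 1000))         # independent non-Gaussian sources
A = np.array([[1.0, 0.5], [0.4, 1.0]])  # linear mixing matrix
z = A @ s                               # ordinary linear mixture
x = np.tanh(z)                          # post-nonlinear mixture: each channel
                                        # passes through its own invertible f_i
```

Because each f_i acts on a single channel, inverting the f_i reduces the problem to linear BSS, which is why the ambiguities match the linear case.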
Hierarchical Models of Variance Sources
SIGNAL PROCESSING, 2003
"... In many models, variances are assumed to be constant although this assumption is often unrealistic in practice. Joint modelling of means and variances is di#cult in many learning approaches, because it can lead into infinite probability densities. We show that a Bayesian variational technique which ..."
Abstract

Cited by 36 (13 self)
In many models, variances are assumed to be constant, although this assumption is often unrealistic in practice. Joint modelling of means and variances is difficult in many learning approaches, because it can lead to infinite probability densities. We show that a Bayesian variational technique which is sensitive to probability mass instead of density is able to jointly model both variances and means. We consider a model structure where a Gaussian variable, called a variance node, controls the variance of another Gaussian variable. Variance nodes make it possible to build hierarchical models for both variances and means. We report experiments with artificial data which demonstrate the ability of the learning algorithm to find variance sources that explain and characterize the variances in the multidimensional data well. Experiments with biomedical MEG data show that variance sources are present in real-world signals.
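One common way to realise a variance node (an assumption here, not necessarily the paper's exact parameterisation) is to let one Gaussian variable set the log-variance of another. Sampling such a pair shows why variance sources capture heavy-tailed behaviour:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
u = rng.normal(0.0, 1.0, size=n)      # variance node: Gaussian over log-variance
s = rng.normal(0.0, np.exp(0.5 * u))  # controlled variable: Var(s | u) = exp(u)
                                      # (scale broadcasts element-wise over u)

# Marginally s is super-Gaussian: its excess kurtosis is positive,
# even though both u and s | u are plain Gaussians.
excess_kurtosis = np.mean(s**4) / np.mean(s**2) ** 2 - 3.0
```

Analytically, with u ~ N(0, 1) the excess kurtosis of s is 3e − 3 ≈ 5.2, so the sample estimate comes out clearly positive.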
Learning appearance manifolds from video
IN COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2005
"... The appearance of dynamic scenes is often largely governed by a latent lowdimensional dynamic process. We show how to learn a mapping from video frames to this lowdimensional representation by exploiting the temporal coherence between frames and supervision from a user. This function maps the frame ..."
Abstract

Cited by 33 (2 self)
The appearance of dynamic scenes is often largely governed by a latent low-dimensional dynamic process. We show how to learn a mapping from video frames to this low-dimensional representation by exploiting the temporal coherence between frames and supervision from a user. This function maps the frames of the video to a low-dimensional sequence that evolves according to Markovian dynamics. This ensures that the recovered low-dimensional sequence represents a physically meaningful process. We relate our algorithm to manifold learning, semi-supervised learning, and system identification, and demonstrate it on the tasks of tracking 3D rigid objects, deformable bodies, and articulated bodies. We also show how to use the inverse of this mapping to manipulate video.
Unsupervised variational Bayesian learning of nonlinear models
In Advances in Neural Information Processing Systems 17, 2005
"... In this paper we present a framework for using multilayer perceptron (MLP) networks in nonlinear generative models trained by variational Bayesian learning. The nonlinearity is handled by linearizing it using a Gauss–Hermite quadrature at the hidden neurons. This yields an accurate approximation fo ..."
Abstract

Cited by 29 (11 self)
In this paper we present a framework for using multilayer perceptron (MLP) networks in nonlinear generative models trained by variational Bayesian learning. The nonlinearity is handled by linearizing it using a Gauss–Hermite quadrature at the hidden neurons. This yields an accurate approximation for cases of large posterior variance. The method can be used to derive nonlinear counterparts for linear algorithms such as factor analysis, independent component/factor analysis and state-space models. This is demonstrated with a nonlinear factor analysis experiment in which even 20 sources can be estimated from a real-world speech data set.
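Gauss–Hermite quadrature propagates a Gaussian posterior through a nonlinearity by evaluating it at a few weighted nodes. A minimal sketch for a single tanh hidden unit (the helper name is ours):

```python
import numpy as np

def gh_expectation(f, mean, var, order=10):
    """Approximate E[f(z)] for z ~ N(mean, var) by Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(order)
    z = mean + np.sqrt(2.0 * var) * nodes  # change of variables to the
                                           # e^{-x^2} weight of hermgauss
    return (weights @ f(z)) / np.sqrt(np.pi)

# Propagate a Gaussian posterior through one tanh hidden neuron:
m, v = 0.5, 2.0
mean_out = gh_expectation(np.tanh, m, v)
var_out = gh_expectation(lambda z: np.tanh(z) ** 2, m, v) - mean_out ** 2
```

The quadrature is exact for polynomial integrands up to degree 2·order − 1, and unlike a single Taylor linearisation it stays accurate when the posterior variance is large.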
Nonlinear Independent Factor Analysis by Hierarchical Models
in Proc. 4th Int. Symp. on Independent Component Analysis and Blind Signal Separation (ICA2003), 2003
"... The building blocks introduced earlier by us in [1] are used for constructing a hierarchical nonlinear model for nonlinear factor analysis. We call the resulting method hierarchical nonlinear factor analysis (HNFA). The variational Bayesian learning algorithm used in this method has a linear computa ..."
Abstract

Cited by 25 (13 self)
The building blocks we introduced earlier in [1] are used for constructing a hierarchical nonlinear model for nonlinear factor analysis. We call the resulting method hierarchical nonlinear factor analysis (HNFA). The variational Bayesian learning algorithm used in this method has linear computational complexity, and it is able to infer the structure of the model in addition to estimating the unknown parameters. We show how nonlinear mixtures can be separated by first estimating a nonlinear subspace using HNFA and then rotating the subspace using linear independent component analysis. Experimental results show that the cost function minimised during learning predicts the quality of the estimated subspace well.
Approximate Riemannian conjugate gradient learning for fixed-form variational Bayes
 Journal of Machine Learning Research
"... Variational Bayesian (VB) methods are typically only applied to models in the conjugateexponential family using the variational Bayesian expectation maximisation (VB EM) algorithm or one of its variants. In this paper we present an efficient algorithm for applying VB to more general models. The met ..."
Abstract

Cited by 23 (3 self)
Variational Bayesian (VB) methods are typically only applied to models in the conjugate-exponential family using the variational Bayesian expectation maximisation (VB EM) algorithm or one of its variants. In this paper we present an efficient algorithm for applying VB to more general models. The method is based on specifying the functional form of the approximation, such as a multivariate Gaussian. The parameters of the approximation are optimised using a conjugate gradient algorithm that utilises the Riemannian geometry of the space of the approximations. This leads to a very efficient algorithm for suitably structured approximations. It is shown empirically that the proposed method is comparable or superior in efficiency to the VB EM in a case where both are applicable. We also apply the algorithm to learning a nonlinear state-space model and a nonlinear factor analysis model for which the VB EM is not applicable. For these models, the proposed algorithm outperforms alternative gradient-based methods by a significant margin.
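The benefit of the Riemannian geometry shows up already in the simplest case: for the mean of a fixed-covariance Gaussian approximation, the Fisher information is the inverse covariance, so the natural gradient rescales the plain gradient by the covariance. A toy sketch (the target and the metric are invented for illustration):

```python
import numpy as np

# Target p = N(t, Sigma); approximation q = N(m, Sigma) with Sigma fixed.
# KL(q || p) as a function of m has gradient  Sigma^{-1} (m - t), and the
# Fisher information of q w.r.t. m is Sigma^{-1}, so the natural gradient
# is  Sigma @ grad = m - t.
t = np.array([1.0, 0.5])
Sigma = np.array([[4.0, 1.9], [1.9, 1.0]])  # badly conditioned metric
Sigma_inv = np.linalg.inv(Sigma)

def grad(m):
    return Sigma_inv @ (m - t)

m_plain = np.zeros(2)
for _ in range(20):
    m_plain -= 0.05 * grad(m_plain)  # plain gradient: step size limited by the
                                     # largest curvature, slow in flat directions

m_nat = np.zeros(2)
m_nat -= 1.0 * (Sigma @ grad(m_nat))  # natural gradient: exact in one step here
```

In this quadratic toy problem the natural gradient lands on the optimum in a single step, while plain gradient descent is still far away after 20 steps; the paper's algorithm exploits the same geometry inside a conjugate gradient method.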
On the effect of the form of the posterior approximation in variational learning of ICA models
in Proc. of the 4th Int. Symp. on Independent Component Analysis and Blind Signal Separation (ICA2003), 2003
"... Abstract. We show that the choice of posterior approximation affects the solution found in Bayesian variational learning of linear independent component analysis models. Assuming the sources to be independent a posteriori favours a solution which has orthogonal mixing vectors. Linear mixing models w ..."
Abstract

Cited by 23 (8 self)
We show that the choice of posterior approximation affects the solution found in Bayesian variational learning of linear independent component analysis models. Assuming the sources to be independent a posteriori favours a solution which has orthogonal mixing vectors. Linear mixing models with either temporally correlated sources or non-Gaussian source models are considered, but the analysis extends to nonlinear mixtures as well.
Unified inference for variational Bayesian linear Gaussian state-space models
IN PROCEEDINGS OF NIPS 2006
"... Linear Gaussian StateSpace Models are widely used and a Bayesian treatment of parameters is therefore of considerable interest. The approximate Variational Bayesian method applied to these models is an attractive approach, used successfully in applications ranging from acoustics to bioinformatics. ..."
Abstract

Cited by 19 (5 self)
Linear Gaussian State-Space Models are widely used and a Bayesian treatment of parameters is therefore of considerable interest. The approximate Variational Bayesian method applied to these models is an attractive approach, used successfully in applications ranging from acoustics to bioinformatics. The most challenging aspect of implementing the method is performing inference on the hidden state sequence of the model. We show how to convert the inference problem so that standard and stable Kalman Filtering/Smoothing recursions from the literature may be applied. This is in contrast to previously published approaches based on Belief Propagation. Our framework both simplifies and unifies the inference problem, so that future applications may be easily developed. We demonstrate the elegance of the approach on Bayesian temporal ICA, with an application to finding independent components in noisy EEG signals.
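The standard Kalman filtering/smoothing recursions referred to above can be sketched as follows for the linear Gaussian state-space model; this is a generic textbook implementation, not the paper's VB-specific variant:

```python
import numpy as np

def kalman_smoother(y, A, C, Q, R, mu0, V0):
    """Kalman filter followed by RTS smoothing for the LGSSM
        x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)
        y_t = C x_t     + v_t,  v_t ~ N(0, R)
    Returns smoothed means and covariances of the hidden states."""
    T, d = len(y), A.shape[0]
    mu_f = np.zeros((T, d)); V_f = np.zeros((T, d, d))  # filtered
    mu_p = np.zeros((T, d)); V_p = np.zeros((T, d, d))  # one-step predicted
    m, V = mu0, V0
    for t in range(T):
        # predict (the prior serves as the prediction at t = 0)
        mp = A @ m if t > 0 else mu0
        Vp = A @ V @ A.T + Q if t > 0 else V0
        mu_p[t], V_p[t] = mp, Vp
        # update with observation y[t]
        S = C @ Vp @ C.T + R
        K = Vp @ C.T @ np.linalg.inv(S)   # Kalman gain
        m = mp + K @ (y[t] - C @ mp)
        V = Vp - K @ C @ Vp
        mu_f[t], V_f[t] = m, V
    # RTS backward pass
    mu_s = mu_f.copy(); V_s = V_f.copy()
    for t in range(T - 2, -1, -1):
        J = V_f[t] @ A.T @ np.linalg.inv(V_p[t + 1])
        mu_s[t] = mu_f[t] + J @ (mu_s[t + 1] - mu_p[t + 1])
        V_s[t] = V_f[t] + J @ (V_s[t + 1] - V_p[t + 1]) @ J.T
    return mu_s, V_s
```

On a noisy random walk the smoothed state estimate has markedly lower error than the raw observations, which is the behaviour the VB inference step relies on.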
Adaptive BCI Based on Variational Bayesian Kalman Filtering: An Empirical Evaluation
IEEE Transactions on Biomedical Engineering, 2004
"... ..."