Results 1-10 of 12
Hierarchical Models of Variance Sources
 SIGNAL PROCESSING
, 2003
Cited by 32 (12 self)
In many models, variances are assumed to be constant although this assumption is often unrealistic in practice. Joint modelling of means and variances is difficult in many learning approaches, because it can lead to infinite probability densities. We show that a Bayesian variational technique which is sensitive to probability mass instead of density is able to jointly model both variances and means. We consider a model structure where a Gaussian variable, called a variance node, controls the variance of another Gaussian variable. Variance nodes make it possible to build hierarchical models for both variances and means. We report experiments with artificial data which demonstrate the ability of the learning algorithm to find variance sources explaining and characterizing well the variances in the multidimensional data. Experiments with biomedical MEG data show that variance sources are present in real-world signals.
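The variance-node construction in this abstract can be sketched generatively (a hypothetical illustration, not the authors' code): a Gaussian variable u acts as a variance source by setting the log-variance of another Gaussian variable x.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch of a variance node: a Gaussian variable u
# (the "variance source") controls the variance of another Gaussian
# variable x via x ~ N(0, exp(u)), so means and variances can both
# be modelled hierarchically.
n = 100_000
u = rng.normal(loc=0.0, scale=1.0, size=n)    # variance source
x = rng.normal(loc=0.0, scale=np.exp(u / 2))  # Var(x | u) = exp(u)

# Marginally x is heavy-tailed: its kurtosis exceeds the
# Gaussian value of 3 (here it is 3e, about 8.2, in expectation).
kurtosis = np.mean(x**4) / np.mean(x**2) ** 2
print(kurtosis > 3.0)
```

Exactly this kind of heavy-tailed marginal is what makes variance sources useful for characterising data whose variance fluctuates over time.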
Advances in nonlinear blind source separation
 In Proc. of the 4th Int. Symp. on Independent Component Analysis and Blind Signal Separation (ICA2003)
, 2003
Cited by 30 (2 self)
Abstract—In this paper, we briefly review recent advances in blind source separation (BSS) for nonlinear mixing models. After a general introduction to the nonlinear BSS and ICA (Independent Component Analysis) problems, we discuss in more detail uniqueness issues, presenting some new results. A fundamental difficulty in the nonlinear BSS problem, and even more so in the nonlinear ICA problem, is that they are non-unique without extra constraints, which are often implemented by using a suitable regularization. Post-nonlinear mixtures are an important special case, where a nonlinearity is applied to linear mixtures. For such mixtures, the ambiguities are essentially the same as for the linear ICA or BSS problems. In the later part of this paper, various separation techniques proposed for post-nonlinear mixtures and general nonlinear mixtures are reviewed.
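The post-nonlinear special case mentioned above can be written x = f(As): an invertible component-wise nonlinearity f applied to a linear mixture of independent sources. A minimal generative sketch (the mixing matrix, source distribution, and nonlinearity are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical post-nonlinear (PNL) mixture: linear mixing A @ s
# followed by an invertible component-wise nonlinearity f.
# For such mixtures the indeterminacies are essentially those of
# linear ICA (permutation and scaling of the sources).
n_sources, n_samples = 2, 1000
s = rng.laplace(size=(n_sources, n_samples))  # independent sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                    # mixing matrix
x = np.tanh(A @ s)                            # observations, f = tanh

print(x.shape)  # (2, 1000)
```

Because tanh is strictly monotonic, each observed component can in principle be un-warped before a linear ICA stage, which is why PNL mixtures retain the linear indeterminacies only.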
Blind deconvolution using a variational approach to parameter, image, and blur estimation
 IEEE Trans. on Image Processing
, 2006
Cited by 21 (8 self)
Abstract—Following the hierarchical Bayesian framework for blind deconvolution problems, in this paper we propose the use of simultaneous autoregressions as prior distributions for both the image and blur, and gamma distributions for the unknown parameters (hyperparameters) of the priors and the image formation noise. We show how the gamma distributions on the unknown hyperparameters can be used to prevent the proposed blind deconvolution method from converging to undesirable image and blur estimates, and also how these distributions can be inferred in realistic situations. We apply variational methods to approximate the posterior probability of the unknown image, blur, and hyperparameters and propose two different approximations of the posterior distribution. One of these approximations coincides with a classical blind deconvolution method. The proposed algorithms are tested experimentally and compared with existing blind deconvolution methods. Index Terms—Bayesian framework, blind deconvolution, parameter estimation, variational methods.
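The observation model underlying such blind deconvolution is y = h * x + n, with the image x, blur h, and noise n all unknown. A minimal sketch of the forward model only (the sizes, box blur, and noise level are invented, and no inference is attempted):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical forward model for blind deconvolution: the observed
# image y is the unknown image x convolved with an unknown blur h,
# plus Gaussian image-formation noise.  In the hierarchical Bayesian
# treatment, the noise precision and the prior parameters would all
# receive gamma hyperpriors.
x = rng.random((16, 16))           # unknown image
h = np.ones((3, 3)) / 9.0          # unknown blur (here: a 3x3 box)
pad = np.pad(x, 1, mode="edge")    # replicate borders for the convolution
y = np.empty_like(x)
for i in range(16):
    for j in range(16):
        y[i, j] = np.sum(h * pad[i:i + 3, j:j + 3])
y += rng.normal(scale=0.01, size=y.shape)  # image-formation noise

print(y.shape)  # (16, 16)
```

The blind problem is ill-posed because many (x, h) pairs explain the same y equally well; the gamma hyperpriors described in the abstract are one way to steer the estimates away from degenerate solutions.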
Variational Bayesian Blind Deconvolution Using a Total Variation Prior
 IEEE TRANSACTIONS ON IMAGE PROCESSING
, 2009
Cited by 19 (4 self)
In this paper we present novel algorithms for total variation (TV) based blind deconvolution and parameter estimation utilizing a variational framework. Using a hierarchical Bayesian model, the unknown image, blur, and hyperparameters for the image, blur, and noise priors are estimated simultaneously. A variational inference approach is utilized so that approximations of the posterior distributions of the unknowns are obtained, thus providing a measure of the uncertainty of the estimates. Experimental results demonstrate that the proposed approaches provide higher restoration performance than non-TV-based methods without any assumptions about the unknown hyperparameters.
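The TV prior referred to above penalises the sum of local gradient magnitudes, which smooths flat regions while preserving edges. A minimal sketch of the isotropic TV energy (an illustration of the general quantity, not the paper's implementation):

```python
import numpy as np

def total_variation(img):
    """Isotropic total variation: the sum of gradient magnitudes
    over the image.  Large for noisy or textured images, small for
    piecewise-constant ones, which is why TV priors preserve edges."""
    dx = np.diff(img, axis=1)[:-1, :]  # horizontal differences
    dy = np.diff(img, axis=0)[:, :-1]  # vertical differences
    return float(np.sum(np.sqrt(dx**2 + dy**2)))

flat = np.zeros((8, 8))        # constant image: TV = 0
edge = np.zeros((8, 8))
edge[:, 4:] = 1.0              # single sharp vertical edge
print(total_variation(flat), total_variation(edge))  # 0.0 7.0
```

A sharp edge costs the same as a gradual ramp of the same height, so TV does not blur edges the way a quadratic smoothness prior would.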
Practical Approaches to Principal Component Analysis in the Presence of Missing Values
"... Faculty of Information and Natural Sciences ..."
Building Blocks For Variational Bayesian Learning Of Latent Variable Models
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2006
Cited by 11 (8 self)
We introduce standardised building blocks designed to be used with variational Bayesian learning. The blocks include Gaussian variables, summation, multiplication, nonlinearity, and delay. A large variety of latent variable models can be constructed from these blocks, including variance models and nonlinear modelling, which are lacking from most existing variational systems. The introduced blocks are designed to fit together and to yield efficient update rules. Practical implementation of various models is easy thanks to an associated software package which derives the learning formulas automatically once a specific model structure has been fixed. Variational Bayesian learning provides a cost function which is used both for updating the variables of the model and for optimising the model structure. All the computations can be carried out locally, resulting in linear computational complexity.
Missing Values in Hierarchical Nonlinear Factor Analysis
 In Proc. of the Int. Conf. on Artificial Neural Networks and Neural Information Processing (ICANN/ICONIP 2003)
, 2003
Cited by 10 (6 self)
The properties of hierarchical nonlinear factor analysis (HNFA), recently introduced by Valpola and others [3], are studied by reconstructing missing values. The variational Bayesian learning algorithm for HNFA has linear computational complexity and is able to infer the structure of the model in addition to estimating the parameters. To compare HNFA with other methods, we continued the experiments with speech spectrograms in [1], comparing nonlinear factor analysis (NFA) with linear factor analysis (FA) and with the self-organising map. Experiments suggest that HNFA lies between FA and NFA in handling nonlinear problems. Furthermore, HNFA gives better reconstructions than FA and it is more reliable than NFA.
Variational and stochastic inference for Bayesian source separation
, 2007
Cited by 8 (3 self)
We tackle the general linear instantaneous model (possibly underdetermined and noisy) where we model the source prior with a Student t distribution. The conjugate-exponential characterisation of the t distribution as an infinite mixture of scaled Gaussians enables us to do efficient inference. We study two well-known inference methods, the Gibbs sampler and variational Bayes, for Bayesian source separation. We derive both techniques as local message passing algorithms to highlight their algorithmic similarities and to contrast their different convergence characteristics and computational requirements. Our simulation results suggest that typical posterior distributions in source separation have multiple local maxima. Therefore we propose a hybrid approach where we explore the state space with a Gibbs sampler and then switch to a deterministic algorithm. This approach seems to be able to combine the speed of the variational approach with the robustness of the Gibbs sampler.
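The scale-mixture characterisation used above can be checked numerically: drawing a precision lambda ~ Gamma(nu/2, rate nu/2) and then x | lambda ~ N(0, 1/lambda) yields a Student t marginal with nu degrees of freedom (the seed and nu below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Student t as an infinite mixture of scaled Gaussians:
#   lambda ~ Gamma(nu/2, rate = nu/2)   (NumPy parameterises by scale = 2/nu)
#   x | lambda ~ N(0, 1/lambda)
# Marginally x ~ Student-t(nu), whose variance is nu/(nu-2) for nu > 2.
nu, n = 6.0, 200_000
lam = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)  # per-sample precisions
x = rng.normal(0.0, 1.0 / np.sqrt(lam))

print(round(float(np.var(x)), 1))  # nu/(nu-2) = 1.5 up to Monte Carlo error
```

This augmentation is what makes the model conditionally Gaussian, so both the Gibbs sampler and variational Bayes can exploit standard conjugate updates for lambda and x.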
Hierarchy, Priors And Wavelets: Structure
 Signal Processing
, 2004
In many data analysis problems it is useful to consider the data as generated from a set of unknown (latent) generators or sources. The observations we make of a system are then taken to be related to these sources through some unknown function. Furthermore, the (unknown) number of underlying latent sources may be less than the number of observations. Recent developments in Independent Component Analysis (ICA) have shown that such data decomposition may be achieved in a mathematically elegant manner.