Results 1–10 of 22
Implementing approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations: A manual for the inla program
, 2008
Abstract

Cited by 293 (20 self)
Structured additive regression models are perhaps the most commonly used class of models in statistical applications. It includes, among others, (generalised) linear models, (generalised) additive models, smoothing-spline models, state-space models, semiparametric regression, spatial and spatio-temporal models, log-Gaussian Cox processes, geostatistical and geoadditive models. In this paper we consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form due to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, both in terms of convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations …
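The core building block behind these approximations is the Laplace approximation: replace an intractable posterior with a Gaussian centred at its mode, with variance taken from the curvature there. A minimal one-dimensional sketch (a toy Poisson likelihood with a Gaussian prior, chosen for illustration; this is not the paper's full nested scheme):

```python
import numpy as np

# Toy latent Gaussian model: y ~ Poisson(exp(x)), x ~ N(0, prior_var).
# Laplace approximation: Gaussian centred at the posterior mode,
# with variance given by the negative inverse curvature there.
def laplace_approx(y, prior_var=1.0, iters=50):
    x = 0.0
    for _ in range(iters):
        # gradient and Hessian of log p(x | y), up to a constant
        g = y - np.exp(x) - x / prior_var
        h = -np.exp(x) - 1.0 / prior_var
        x -= g / h                      # Newton step toward the mode
    return x, -1.0 / h                  # mode and approximate variance

mode, var = laplace_approx(y=3)
```

The returned pair defines the approximating Gaussian N(mode, var); INLA nests such approximations over the hyperparameters to obtain posterior marginals.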
A Bayesian approach to unsupervised one-shot learning of object categories
 In Proceedings of the 9th International Conference on Computer Vision
, 2003
Abstract

Cited by 208 (12 self)
Learning visual models of object categories notoriously requires thousands of training examples; this is due to the diversity and richness of object appearance, which requires models containing hundreds of parameters. We present a method for learning object categories from just a few images (1 to 5). It is based on incorporating “generic” knowledge which may be obtained from previously learnt models of unrelated categories. We operate in a variational Bayesian framework: object categories are represented by probabilistic models, and “prior” knowledge is represented as a probability density function on the parameters of these models. The “posterior” model for an object category is obtained by updating the prior in the light of one or more observations. Our ideas are demonstrated on four diverse categories (human faces, airplanes, motorcycles, spotted cats). Initially three categories are learnt from hundreds of training examples, and a “prior” is estimated from these. Then the model of the fourth category is learnt from 1 to 5 training examples, and is used for detecting new exemplars in a set of test images.
Developments in Probabilistic Modelling with Neural Networks – Ensemble Learning
Abstract

Cited by 56 (5 self)
Ensemble learning by variational free energy minimization is a framework for statistical inference in which an ensemble of parameter vectors is optimized rather than a single parameter vector. The ensemble approximates the posterior probability distribution of the parameters.
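In the simplest conjugate case the free-energy minimiser can be written down in closed form. A hedged sketch (the model here is an assumption made for illustration: unknown mean, unit-variance Gaussian noise, N(0, 1) prior), where the optimal Gaussian ensemble q(mu) coincides with the exact posterior:

```python
import numpy as np

# Ensemble (variational) learning fits a distribution q over parameters
# by minimising the free energy F = E_q[log q(mu) - log p(D, mu)].
# For a Gaussian likelihood with known unit variance and an N(0, 1)
# prior on the mean, the minimising Gaussian q is the exact posterior.
def fit_q(data):
    n = len(data)
    post_prec = 1.0 + n                  # prior precision + n data terms
    post_mean = np.sum(data) / post_prec
    return post_mean, 1.0 / post_prec    # mean and variance of q(mu)

m, v = fit_q(np.array([0.5, 1.5, 1.0]))
```

In non-conjugate models no such closed form exists, which is where iterative free-energy minimisation over the ensemble earns its keep.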
Variational Bayes for generalized autoregressive models
 IEEE Trans. Signal Processing
, 2002
Abstract

Cited by 24 (0 self)
We describe a variational Bayes (VB) learning algorithm for generalized autoregressive (GAR) models. The noise is modeled as a mixture of Gaussians rather than the usual single Gaussian. This allows different data points to be associated with different noise levels and effectively provides robust estimation of AR coefficients. The VB framework is used to prevent overfitting and provides model-order selection criteria both for AR order and noise model order. We show that for the special case of Gaussian noise and uninformative priors on the noise and weight precisions, the VB framework reduces to the Bayesian evidence framework. The algorithm is applied to synthetic and real data with encouraging results. Index Terms—Bayesian inference, generalized autoregressive models, model order selection, robust estimation.
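The robust-noise idea can be illustrated with a simple EM-style reweighting scheme for an AR(1) model. This is an illustrative sketch, not the paper's VB algorithm: the two-component mixture parameters and the generated series below are assumptions made for the example.

```python
import numpy as np

# AR(1) fit where each residual is explained by a two-component Gaussian
# mixture (narrow inlier / broad outlier component), and the coefficient
# is re-estimated with EM-style responsibility weights.
def robust_ar1(y, var_in=1.0, var_out=100.0, p_out=0.05, iters=20):
    x, t = y[:-1], y[1:]
    a = np.sum(x * t) / np.sum(x * x)        # ordinary least-squares start
    for _ in range(iters):
        r = t - a * x
        lik_in = (1 - p_out) * np.exp(-r**2 / (2 * var_in)) / np.sqrt(var_in)
        lik_out = p_out * np.exp(-r**2 / (2 * var_out)) / np.sqrt(var_out)
        w = lik_in / (lik_in + lik_out)      # responsibility of inlier part
        a = np.sum(w * x * t) / np.sum(w * x * x)  # weighted least squares
    return a

rng = np.random.default_rng(0)
y = np.zeros(200)
for i in range(1, 200):
    y[i] = 0.8 * y[i - 1] + rng.normal()
y[50] += 30.0                                # one gross outlier
a_hat = robust_ar1(y)
```

The outlier's residuals get near-zero inlier responsibility, so the coefficient estimate stays close to the true 0.8; plain least squares would be pulled away by the contaminated sample.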
Bayesian multivariate autoregressive models with structured priors
 IEE Proc
, 2002
Bayesian Methods for Autoregressive Models
 In Proc. IEEE Int. Workshop Neural Networks Signal Processing
, 2000
Joint NDT image restoration and segmentation using Gauss–Markov–Potts prior models and variational Bayesian computation
 IEEE Transactions on Image Processing
, 2010
Abstract

Cited by 8 (2 self)
In this paper, we propose a method to simultaneously restore and segment piecewise homogeneous images degraded by a known point spread function (PSF) and additive noise. For this purpose, we propose a family of non-homogeneous Gauss–Markov fields with Potts region labels as a model for images, to be used in a Bayesian estimation framework. The joint posterior law of all the unknowns (the unknown image, its segmentation (hidden variable), and all the hyperparameters) is approximated by a separable probability law via the variational Bayes technique. This approximation makes it possible to obtain a practically implementable joint restoration and segmentation algorithm. We present some preliminary results and a comparison with an MCMC Gibbs-sampling-based algorithm. We note that the prior models proposed in this work are particularly appropriate for images of scenes or objects composed of a finite set of homogeneous materials. This is the case for many images obtained in non-destructive testing (NDT) applications.
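The separable ("mean-field") approximation used here can be seen in miniature on a correlated bivariate Gaussian, where updating each factor in turn drives the factorised means to the true posterior means. The precision matrix below is an assumed toy value, not anything from the paper:

```python
import numpy as np

# Mean-field idea: approximate a correlated Gaussian posterior
# p(x1, x2) by a separable law q(x1) q(x2), updating each factor
# in turn (coordinate ascent on the free energy).
P = np.array([[2.0, 0.8],
              [0.8, 2.0]])      # posterior precision matrix (toy)
mu = np.array([1.0, -1.0])      # true posterior mean

m = np.zeros(2)                 # means of the factorised approximation
for _ in range(50):
    # standard mean-field updates for a multivariate Gaussian
    m[0] = mu[0] - P[0, 1] / P[0, 0] * (m[1] - mu[1])
    m[1] = mu[1] - P[1, 0] / P[1, 1] * (m[0] - mu[0])
```

For Gaussians the factorised means converge to the exact posterior means (while the factor variances underestimate the marginals); the paper applies the same coordinate-update principle jointly to the image, its labels, and the hyperparameters.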
Variational approach to factor analysis and related models
 Master’s thesis, Informatics and Mathematical Modelling, Technical University of Denmark, DTU, Richard Petersens Plads, Building 321, DK-2800 Kgs. Lyngby
, 2004
A measure-theoretic variational Bayesian algorithm for large dimensional problems
, 2012
Abstract

Cited by 4 (2 self)
In this paper we provide an algorithm that treats the variational Bayesian problem as a functional optimization problem. The main contribution of this paper is to transpose a classical iterative optimization algorithm into the metric space of probability densities involved in the Bayesian methodology. The main advantage of this methodology is that it makes it possible to address large dimensional inverse problems with unsupervised algorithms. The interest of our algorithm is enhanced by its application to large dimensional linear inverse problems involving sparse objects. Finally, we provide simulation results. First, we show the good numerical performance of our method by comparing it with classical ones on a small tomographic problem. Second, we treat a large dimensional dictionary learning problem and compare our method with a wavelet-based one.