Results 1–10 of 14
A Bayesian approach to unsupervised one-shot learning of object categories
 In Proceedings of the 9th International Conference on Computer Vision
, 2003
Abstract

Cited by 179 (9 self)
Learning visual models of object categories notoriously requires thousands of training examples; this is due to the diversity and richness of object appearance, which requires models containing hundreds of parameters. We present a method for learning object categories from just a few images (1 to 5). It is based on incorporating “generic” knowledge which may be obtained from previously learnt models of unrelated categories. We operate in a variational Bayesian framework: object categories are represented by probabilistic models, and “prior” knowledge is represented as a probability density function on the parameters of these models. The “posterior” model for an object category is obtained by updating the prior in the light of one or more observations. Our ideas are demonstrated on four diverse categories (human faces, airplanes, motorcycles, spotted cats). Initially three categories are learnt from hundreds of training examples, and a “prior” is estimated from these. Then the model of the fourth category is learnt from 1 to 5 training examples, and is used for detecting new exemplars in a set of test images.
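The prior-to-posterior update this abstract describes can be sketched in its simplest conjugate form: a 1-D Gaussian prior over a model parameter, updated with a handful of observations of known noise variance. This is a minimal illustration of the idea, not the paper's actual variational model; all names and numbers here are illustrative.

```python
import numpy as np

def posterior_update(prior_mean, prior_var, obs, noise_var):
    """Conjugate Gaussian update: combine a prior over a parameter
    with a few noisy observations (known noise variance)."""
    obs = np.asarray(obs, dtype=float)
    n = obs.size
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + obs.sum() / noise_var)
    return post_mean, post_var

# A "generic" prior, as if estimated from previously learnt categories ...
prior_mean, prior_var = 0.0, 4.0
# ... updated with just three observations of the new category.
mean, var = posterior_update(prior_mean, prior_var, [1.2, 0.9, 1.1], noise_var=0.5)
```

Even with three observations the posterior variance shrinks well below the prior variance, which is the effect the paper exploits when learning from one to five training images.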
Implementing approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations: A manual for the inla-program
, 2008
Abstract

Cited by 79 (16 self)
Structured additive regression models are perhaps the most commonly used class of models in statistical applications. The class includes, among others, (generalised) linear models, (generalised) additive models, smoothing-spline models, state-space models, semiparametric regression, spatial and spatio-temporal models, log-Gaussian Cox processes, and geostatistical and geoadditive models. In this paper we consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters, and with non-Gaussian response variables. The posterior marginals are not available in closed form due to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, both in terms of convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations ...
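The Laplace approximation at the heart of the INLA approach replaces an intractable posterior with a Gaussian matched at the mode. A minimal 1-D sketch (not the inla program itself; the Poisson-count toy model and all names are illustrative assumptions):

```python
import numpy as np

def laplace_approximation(log_post, x0, h=1e-5, iters=50):
    """Gaussian (Laplace) approximation to a 1-D posterior: locate the mode
    by Newton's method with numerical derivatives, then match the curvature."""
    x = x0
    for _ in range(iters):
        g = (log_post(x + h) - log_post(x - h)) / (2 * h)                  # gradient
        H = (log_post(x + h) - 2 * log_post(x) + log_post(x - h)) / h**2   # curvature
        x -= g / H
    return x, -1.0 / H  # mode, and variance from the negative inverse curvature

# Illustrative latent Gaussian toy model: N(0, 1) prior on a log-rate eta,
# with five Poisson counts summing to 12 (non-Gaussian response).
log_post = lambda eta: 12.0 * eta - 5.0 * np.exp(eta) - 0.5 * eta**2
mode, var = laplace_approximation(log_post, 0.0)
```

INLA nests such approximations over the hyperparameters to obtain posterior marginals without MCMC; this sketch shows only the innermost Gaussian step.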
Developments in Probabilistic Modelling with Neural Networks: Ensemble Learning
, 1995
Abstract

Cited by 49 (5 self)
Ensemble learning by variational free energy minimization is a framework for statistical inference in which an ensemble of parameter vectors is optimized rather than a single parameter vector. The ensemble approximates the posterior probability distribution of the parameters. In this paper I give a review of ensemble learning using a simple example.
1 Ensemble Learning by Free Energy Minimization
A new tool has recently been introduced into the field of neural networks. In traditional approaches to model fitting, a single parameter vector w is optimized by, say, maximum likelihood or penalized maximum likelihood; in the Bayesian interpretation, these optimized parameters are viewed as defining the mode of a posterior probability distribution P(w | D, H) (given data D and model assumptions H). The new concept introduced by Hinton and van Camp (1993) is to work in terms of an approximating ensemble Q(w; θ), that is, a probability distribution over the parameters, and optimize the ensemb...
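A minimal sketch of the ensemble-learning idea reviewed here: a Gaussian ensemble Q(w) = N(mu, s^2) is optimized by gradient descent on the variational free energy, which for a Gaussian target posterior equals KL(Q || P) up to the constant log evidence. The target parameters and learning rate are illustrative assumptions, not from the paper.

```python
import numpy as np

def free_energy(mu, s, m, t):
    """KL(Q || P) for ensemble Q = N(mu, s^2) and Gaussian target P = N(m, t^2);
    equal to the variational free energy up to the (constant) log evidence."""
    return np.log(t / s) + (s**2 + (mu - m)**2) / (2 * t**2) - 0.5

def fit_ensemble(m, t, lr=0.05, steps=2000):
    """Gradient descent on the free energy over the ensemble parameters (mu, s)."""
    mu, s = 0.0, 1.0
    for _ in range(steps):
        mu -= lr * (mu - m) / t**2        # dF/dmu
        s -= lr * (-1.0 / s + s / t**2)   # dF/ds
    return mu, s

mu, s = fit_ensemble(m=2.0, t=0.5)  # the ensemble converges onto the posterior
```

Because the target here is itself Gaussian, the optimized ensemble recovers it exactly; with non-Gaussian posteriors the same procedure yields the closest Gaussian in the KL sense.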
Variational Bayes for generalized autoregressive models
 IEEE Trans. Signal Processing
, 2002
Abstract

Cited by 16 (0 self)
Abstract—We describe a variational Bayes (VB) learning algorithm for generalized autoregressive (GAR) models. The noise is modeled as a mixture of Gaussians rather than the usual single Gaussian. This allows different data points to be associated with different noise levels and effectively provides robust estimation of AR coefficients. The VB framework is used to prevent overfitting and provides model-order selection criteria both for AR order and noise model order. We show that for the special case of Gaussian noise and uninformative priors on the noise and weight precisions, the VB framework reduces to the Bayesian evidence framework. The algorithm is applied to synthetic and real data with encouraging results. Index Terms—Bayesian inference, generalized autoregressive models, model order selection, robust estimation.
Variational Bayes for Generalised Autoregressive Models
 IEEE Transactions on Signal Processing
, 2002
Abstract

Cited by 12 (7 self)
We describe a Variational Bayes (VB) learning algorithm for Generalised Autoregressive (GAR) models. The noise is modelled as a Mixture of Gaussians rather than the usual single Gaussian. This allows different data points to be associated with different noise levels and effectively provides a robust estimation of AR coefficients. The VB framework is used to prevent overfitting and provides model order selection criteria both for AR order and noise model order. We show that for the special case of Gaussian noise and uninformative priors on the noise and weight precisions, the VB framework reduces to the Bayesian Evidence framework. The VB model order criterion is compared with the Minimum Description Length (MDL) approach. The algorithm is applied to synthetic and real data with encouraging results.
1 Introduction
The standard autoregressive (AR) model assumes that the noise is Gaussian and therefore that the AR coefficients can be set by minimising a least squares cost functi...
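The least-squares baseline mentioned in the closing sentence can be sketched as follows; the VB-GAR algorithm itself replaces this Gaussian-noise fit with a mixture-of-Gaussians noise model and variational updates, which are not reproduced here. The simulated AR(2) series and all names are illustrative assumptions.

```python
import numpy as np

def fit_ar_least_squares(x, p):
    """Baseline least-squares fit of AR(p) coefficients, assuming Gaussian
    noise: regress x[t] on its p most recent lagged values."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Simulate a stable AR(2) process with small Gaussian noise.
rng = np.random.default_rng(0)
true = np.array([0.6, -0.3])
x = np.zeros(500)
for t in range(2, 500):
    x[t] = true @ x[t - 2 : t][::-1] + 0.1 * rng.standard_normal()
coef = fit_ar_least_squares(x, p=2)
```

Under purely Gaussian noise this fit is adequate; the point of the GAR mixture-noise model is that outlying data points no longer dominate the cost function.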
Bayesian Multivariate Autoregressive Models with Structured Priors
 IEE Proceedings on Vision, Signal and Image Processing 149(1), 33
, 2000
Abstract

Cited by 12 (7 self)
We describe a Variational Bayesian (VB) learning algorithm for parameter estimation and model order selection in Multivariate Autoregressive (MAR) models. We explore the use of structured priors in which subsets of coefficients are grouped together and constrained to be of a similar magnitude. This allows MAR models to be more readily applied to high dimensional data and to data with greater temporal complexity. We also compare the VB model order selection criterion with the Minimum Description Length (MDL) approach. Results are presented on synthetic, physiological and electroencephalogram (EEG) data.
1 Introduction
The Multivariate Autoregressive (MAR) process is used to model multiple time series data in such fields as geophysics [16], economics [10] and biomedicine [5]. It can also be seen as a parametric multivariate spectral estimation procedure and will provide parsimonious estimation of coherences and partial coherences [14] [8]. One factor preventing its wider applicati...
Bayesian Methods For Autoregressive Models
 In IEEE International Workshop on Neural Networks for Signal Processing
, 2000
Abstract

Cited by 8 (5 self)
We describe a Variational Bayesian (VB) learning algorithm for parameter estimation and model order selection in autoregressive (AR) models. With uninformative priors on the precisions of the coefficient and noise distributions, the VB framework is shown to be identical to the Bayesian Evidence framework. The VB model order selection criterion is compared with the Minimum Description Length (MDL) criterion on synthetic data and on EEG.
INTRODUCTION
In the autoregressive modeling of stationary stochastic processes there are two basic problems: estimation of the parameters and selection of the optimum model order. These problems are usually tackled by first estimating the model parameters using, e.g., Burg, Levinson-Durbin or Maximum-Likelihood algorithms [10], and then choosing the optimal model order using a variety of criteria, e.g. Akaike's Information Criterion (AIC), Final Prediction Error (FPE), Minimum Description Length (MDL), etc. (see [7, 4] for recent approaches). In this paper we sho...
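The classical recipe described here (fit each candidate order, then score it with a criterion) can be sketched with least-squares fits and MDL. This is a stand-alone illustration, not the paper's VB criterion; the (n/2) log sigma^2 + (p/2) log n form of MDL and the simulated series are assumptions of this sketch.

```python
import numpy as np

def ar_mdl(x, max_order):
    """Fit AR(p) by least squares for p = 1..max_order and score each order
    with MDL: (n/2) * log(residual variance) + (p/2) * log(n)."""
    n = len(x)
    scores = {}
    for p in range(1, max_order + 1):
        X = np.column_stack([x[p - k - 1 : n - k - 1] for k in range(p)])
        y = x[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        sigma2 = resid @ resid / len(y)
        scores[p] = 0.5 * len(y) * np.log(sigma2) + 0.5 * p * np.log(len(y))
    return min(scores, key=scores.get), scores

# Simulate a stable AR(2) process, then let MDL pick the order.
rng = np.random.default_rng(1)
x = np.zeros(1000)
for t in range(2, 1000):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + rng.standard_normal()
best, scores = ar_mdl(x, max_order=6)
```

The penalty term grows with p while the residual variance stops improving past the true order, so the minimum of the criterion selects a parsimonious model.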
Joint NDT image restoration and segmentation using Gauss-Markov-Potts prior models and variational Bayesian computation
 IEEE Transactions on Image Processing
, 2010
Abstract

Cited by 4 (1 self)
In this paper, we propose a method to simultaneously restore and segment piecewise homogeneous images degraded by a known point spread function (PSF) and additive noise. For this purpose, we propose a family of non-homogeneous Gauss-Markov fields with Potts region labels as a model for the images, to be used in a Bayesian estimation framework. The joint posterior law of all the unknowns (the unknown image, its segmentation (hidden variable), and all the hyperparameters) is approximated by a separable probability law via the variational Bayes technique. This approximation yields a practically implementable joint restoration and segmentation algorithm. We present some preliminary results and a comparison with an MCMC Gibbs-sampling-based algorithm. We note that the prior models proposed in this work are particularly appropriate for images of scenes or objects composed of a finite set of homogeneous materials. This is the case for many images obtained in non-destructive testing (NDT) applications.
Variational approach to factor analysis and related models
 Master's thesis, Informatics and Mathematical Modelling, Technical University of Denmark, DTU, Richard Petersens Plads, Building 321, DK-2800 Kgs. Lyngby
, 2004
A measure-theoretic variational Bayesian algorithm for large dimensional problems
, 2012
Abstract

Cited by 3 (1 self)
Abstract. In this paper we provide an algorithm for solving the variational Bayesian problem as a functional optimization problem. The main contribution of this paper is to transpose a classical iterative optimization algorithm into the metric space of probability densities involved in the Bayesian methodology. The main advantage of this methodology is that it can address large dimensional inverse problems with unsupervised algorithms. The interest of our algorithm is demonstrated by its application to large dimensional linear inverse problems involving sparse objects. Finally, we provide simulation results. First, we show the good numerical performance of our method by comparing it with classical ones on a small tomographic problem. Second, we treat a large dimensional dictionary learning problem and compare our method with a wavelet-based one.