Results 1–10 of 13
An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants
Biometrika, 2006
Abstract
Cited by 57 (2 self)
Maximum likelihood parameter estimation and sampling from Bayesian posterior distributions are problematic when the probability density for the parameter of interest involves an intractable normalising constant which is also a function of that parameter. In this paper, an auxiliary variable method is presented which requires only that independent samples can be drawn from the unnormalised density at any particular parameter value. The proposal distribution is constructed so that the normalising constant cancels from the Metropolis–Hastings ratio. The method is illustrated by producing posterior samples for parameters of the Ising model given a particular lattice realisation.
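The auxiliary-variable move described above, in which the normalising constant cancels from the Metropolis–Hastings ratio, can be sketched on a toy model. The finite state space, the density q(y | theta) = exp(theta * y), and the exact-sampling-by-enumeration step below are illustrative assumptions, not the paper's Ising example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative assumption): y in {0, ..., K-1} with
# unnormalised density q(y | theta) = exp(theta * y); we pretend the
# normalising constant Z(theta) is intractable.
K = 10

def q(y, theta):
    return np.exp(theta * y)

def sample_exact(theta):
    # Stand-in for drawing an independent sample from the unnormalised
    # density at a given parameter value (here, by enumeration).
    w = q(np.arange(K), theta)
    return rng.choice(K, p=w / w.sum())

def exchange_mcmc(y_obs, n_iter=5000, step=0.5, theta0=0.0):
    """Random-walk MCMC in which Z(theta) cancels from the accept ratio."""
    theta, chain = theta0, []
    for _ in range(n_iter):
        theta_prop = theta + step * rng.normal()
        x = sample_exact(theta_prop)  # auxiliary draw at the proposed value
        # Both Z(theta) and Z(theta_prop) cancel from this ratio:
        log_ratio = (np.log(q(y_obs, theta_prop)) - np.log(q(y_obs, theta))
                     + np.log(q(x, theta)) - np.log(q(x, theta_prop)))
        if np.log(rng.uniform()) < log_ratio:
            theta = theta_prop
        chain.append(theta)
    return np.array(chain)

chain = exchange_mcmc(y_obs=8)
```

With a flat prior, an observation near the top of the range pulls the posterior for theta towards positive values. In the paper the independent draw comes from perfect sampling of the Ising model rather than enumeration.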
Smoothing algorithms for state-space models
in submission, IEEE Transactions on Signal Processing, 2004
Abstract
Cited by 34 (5 self)
A prevalent problem in statistical signal processing, applied statistics, and time series analysis is the calculation of the smoothed posterior distribution, which describes the uncertainty associated with a state, or a sequence of states, conditional on data from the past, the present, and the future. The aim of this paper is to provide a rigorous foundation for the calculation, or approximation, of such smoothed distributions, to facilitate a robust and efficient implementation. Through a cohesive and generic exposition of the scientific literature we offer several novel extensions such that one can perform smoothing in the most general case. Experimental results are provided for a jump Markov linear system, a comparison of particle smoothing methods, and parameter estimation using a particle implementation of the EM algorithm.
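One of the particle smoothing methods the paper compares, backward simulation from the filtering particles, can be sketched on an assumed linear-Gaussian state-space model; the model, its parameters, and the particle counts below are illustrative choices, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed linear-Gaussian state-space model (toy stand-in):
#   x_t = a * x_{t-1} + N(0, q),   y_t = x_t + N(0, r)
a, q, r, N, T = 0.9, 0.5, 1.0, 500, 50

x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), T)

def log_norm(z, m, v):
    return -0.5 * ((z - m) ** 2 / v + np.log(2 * np.pi * v))

# bootstrap particle filter, storing particles P and normalised weights W
P = np.zeros((T, N))
W = np.zeros((T, N))
p = rng.normal(0.0, 1.0, N)
for t in range(T):
    if t > 0:
        idx = rng.choice(N, N, p=W[t - 1])            # resample
        p = a * P[t - 1, idx] + rng.normal(0.0, np.sqrt(q), N)
    lw = log_norm(y[t], p, r)
    w = np.exp(lw - lw.max())
    P[t], W[t] = p, w / w.sum()

def backward_sample():
    # draw one trajectory from the smoothed distribution: reweight each
    # filtering particle by the transition density to the next state
    traj = np.zeros(T)
    traj[T - 1] = P[T - 1, rng.choice(N, p=W[T - 1])]
    for t in range(T - 2, -1, -1):
        lw = np.log(W[t]) + log_norm(traj[t + 1], a * P[t], q)
        w = np.exp(lw - lw.max())
        traj[t] = P[t, rng.choice(N, p=w / w.sum())]
    return traj

smoothed = np.mean([backward_sample() for _ in range(100)], axis=0)
rmse_obs = np.sqrt(np.mean((y - x) ** 2))
rmse_smooth = np.sqrt(np.mean((smoothed - x) ** 2))
```

Because the smoother conditions on past, present, and future observations, its state estimates should track the true states more closely than the raw observations do.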
Markov chain Monte Carlo methods for statistical inference
, 2004
Abstract
Cited by 8 (0 self)
These notes provide an introduction to Markov chain Monte Carlo methods and their applications to both Bayesian and frequentist statistical inference. Such methods have revolutionized what can be achieved computationally, especially in the Bayesian paradigm. The account begins by discussing ordinary Monte Carlo methods: these have the same goals as the Markov chain versions but can only rarely be implemented. Subsequent sections describe basic Markov chain Monte Carlo, based on the Hastings algorithm and including both the Metropolis method and the Gibbs sampler as special cases, and go on to discuss some more specialized developments, including adaptive slice sampling, exact goodness-of-fit tests, maximum likelihood estimation, the Langevin-Hastings algorithm, auxiliary variable techniques, perfect sampling via coupling from the past, reversible jump methods for target spaces of varying dimensions, and simulated annealing. Specimen applications are described throughout the notes.
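The basic building block the notes start from, the Metropolis special case of the Hastings algorithm, fits in a few lines; the standard normal target and the step size below are illustrative assumptions, not an example from the notes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random-walk Metropolis: propose a local move, accept with probability
# min(1, target_ratio); a symmetric proposal makes the Hastings
# correction vanish.
def metropolis(log_target, x0=0.0, n=20000, step=1.0):
    x, lp, out = x0, log_target(x0), []
    for _ in range(n):
        xp = x + step * rng.normal()
        lpp = log_target(xp)
        if np.log(rng.uniform()) < lpp - lp:   # accept/reject
            x, lp = xp, lpp
        out.append(x)
    return np.array(out)

chain = metropolis(lambda z: -0.5 * z * z)   # target N(0, 1) up to a constant
```

The target only needs to be known up to a constant, which is exactly why these methods are so widely applicable in the Bayesian paradigm.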
Bayesian Multivariate Isotonic Regression Splines: Applications to Carcinogenicity Studies
Abstract
Cited by 1 (0 self)
In many applications, interest focuses on assessing the relationship between a predictor and a multivariate outcome variable, and there may be prior knowledge about the shape of the regression curves. For example, regression functions relating dose of a possible risk factor to different adverse outcomes can often be assumed to be nondecreasing. In such cases, interest focuses on (1) assessing evidence of an overall adverse effect; (2) determining which outcomes are most affected; and (3) estimating outcome-specific regression curves. This article proposes a Bayesian approach for addressing this problem, motivated by multisite tumor data from carcinogenicity experiments. A multivariate smoothing spline model is specified, which accommodates dependency in the multiple curves through a hierarchical Markov random field prior for the basis coefficients, while also allowing for residual correlation. A Gibbs sampler is proposed for posterior computation, and the approach is applied to data on body weight and tumor occurrence.
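The core monotonicity idea, a nondecreasing dose-response curve obtained by constraining basis coefficients, can be illustrated with a much simpler frequentist stand-in: an intercept plus nonnegative step increments fitted by nonnegative least squares. The simulated data, knot placement, and the NNLS fit are assumptions for illustration, not the paper's Bayesian spline model.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Simulated dose-response data with a truly nondecreasing mean curve
dose = np.linspace(0.0, 1.0, 40)
resp = 2.0 * dose ** 2 + rng.normal(0.0, 0.1, dose.size)

# Design matrix: a column of ones, then step indicators I(dose >= knot);
# nonnegative coefficients on the steps force a nondecreasing fit.
knots = np.linspace(0.0, 1.0, 12)[1:-1]
X = np.column_stack([np.ones_like(dose)]
                    + [(dose >= k).astype(float) for k in knots])
coef, _ = nnls(X, resp)
fit = X @ coef
```

The Bayesian version in the paper replaces the hard nonnegativity constraint with a prior on the basis coefficients and explores the posterior by Gibbs sampling.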
Recursive computing and simulation-free inference
Abstract
We illustrate how the recursive algorithm of Reeves & Pettitt (2004) for general factorizable models can be extended to allow exact sampling, maximization of distributions and computation of marginal distributions. All of the methods we describe apply to discrete-valued Markov random fields with nearest-neighbour interactions defined on regular lattices; in particular we illustrate that exact inference can be performed for hidden autologistic models defined on moderately sized lattices. In this context we offer an extension of this methodology which allows approximate inference to be carried out for larger lattices without resorting to simulation techniques such as Markov chain Monte Carlo. In particular our work offers the basis for an automatic inference machine for such models.
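The exact-sampling extension can be sketched on the simplest factorisable case, a free-boundary Ising chain q(y) = exp(beta * sum_i y_i y_{i+1}) with y_i in {-1, +1}: a forward pass of partial sums, then exact backward sampling. This one-dimensional toy is an illustration of the recursion idea, not the paper's lattice algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

def exact_sample(n, beta):
    states = [-1, 1]
    # forward pass: F[i][s] is the total unnormalised weight of all
    # length-(i+1) prefixes ending in state s
    F = [{s: 1.0 for s in states}]
    for _ in range(n - 1):
        prev = F[-1]
        F.append({s: sum(prev[t] * np.exp(beta * t * s) for t in states)
                  for s in states})
    # backward pass: sample the last site from its marginal, then each
    # earlier site from its conditional given the site to its right
    y = [0] * n
    w = np.array([F[n - 1][s] for s in states])
    y[n - 1] = states[rng.choice(2, p=w / w.sum())]
    for i in range(n - 2, -1, -1):
        # P(y_i = s | y_{i+1}) proportional to F[i][s] * exp(beta*s*y_{i+1})
        w = np.array([F[i][s] * np.exp(beta * s * y[i + 1]) for s in states])
        y[i] = states[rng.choice(2, p=w / w.sum())]
    return y

samples = [exact_sample(5, 1.0) for _ in range(2000)]
```

For this chain the nearest-neighbour correlation E[y_i y_{i+1}] equals tanh(beta), so samples at beta = 1 should show a strong positive association between adjacent sites.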
Abstract
, 2007
Abstract
Structured additive regression models are perhaps the most commonly used class of models in statistical applications. The class includes, among others, (generalised) linear models, (generalised) additive models, smoothing-spline models, state-space models, semiparametric regression, spatial and spatio-temporal models, log-Gaussian Cox processes, and geostatistical models. In this paper we consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form due to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, both in terms of convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where MCMC algorithms need hours and days to run, our approximations ...
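The Laplace approximation underlying this approach can be illustrated in one dimension: a Gaussian latent variable with a Poisson observation, where the posterior is approximated by a Gaussian centred at the mode with variance given by the curvature. The model and the observed count y = 3 are assumptions for illustration; this is a single Laplace step, not the paper's nested scheme.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

# Toy latent Gaussian model: x ~ N(0, 1), y | x ~ Poisson(exp(x))
y = 3

def neg_log_post(x):
    # negative log posterior up to a constant: Gaussian prior
    # plus Poisson log-likelihood with log-link
    return 0.5 * x * x - (y * x - np.exp(x))

mode = minimize_scalar(neg_log_post).x
curvature = 1.0 + np.exp(mode)            # second derivative at the mode
laplace_mean, laplace_sd = mode, 1.0 / np.sqrt(curvature)

# compare against the exact posterior mean by numerical integration
Z = quad(lambda x: np.exp(-neg_log_post(x)), -10, 10)[0]
exact_mean = quad(lambda x: x * np.exp(-neg_log_post(x)), -10, 10)[0] / Z
```

Even this crude one-dimensional version lands close to the exact posterior mean, which hints at why nested refinements of the idea can replace long MCMC runs for latent Gaussian models.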
A Case Study: Ordinal Responses with Spatio-Temporal Dependencies
Abstract
Data structures with spatial and temporal dependencies are not uncommon in environmental and agronomic fields. We consider the modelling and estimation problem for this type of structure; in particular, we consider proportional odds models with spatio-temporal covariates, with estimation via maximum pseudo-likelihood. We conclude with a test of treatment effects on data from a field experiment on Agave tequilana.
Efficient recursions for general factorisable models
Abstract
Let n S-valued categorical variables be jointly distributed according to a distribution known only up to an unknown normalising constant. For an unnormalised joint likelihood expressible as a product of factors, we give an algebraic recursion which can be used for computing the normalising constant and other summations. A saving in computation is achieved when each factor contains a lagged subset of the components combining in the joint distribution, with maximum computational efficiency as the subsets attain their minimum size. If each subset contains at most r + 1 of the n components in the joint distribution, we term this a lag-r model, whose normalising constant can be computed using a forward recursion in O(S^{r+1}) computations, as opposed to O(S^n) for the direct computation. We show how a lag-r model represents a Markov random field and allows a neighbourhood structure to be related to the unnormalised joint likelihood. We illustrate the method by showing how the normalising constant of the Ising or autologistic model can be computed.
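The forward recursion can be sketched for the simplest lag-1 case, a free-boundary one-dimensional Ising chain; the chain model and the brute-force check are illustrative choices, not the paper's general lag-r construction.

```python
import numpy as np
from itertools import product

# Lag-1 model: q(y) = exp(beta * sum_i y_i y_{i+1}), y_i in {-1, +1}

def z_recursive(n, beta):
    states = [-1, 1]
    # z[s] accumulates the total unnormalised weight of all prefixes
    # ending in state s; one sweep per site gives O(n * S^2) work
    z = {s: 1.0 for s in states}
    for _ in range(n - 1):
        z = {s: sum(z[t] * np.exp(beta * t * s) for t in states)
             for s in states}
    return sum(z.values())

def z_brute(n, beta):
    # direct summation over all S^n configurations, for checking only
    return sum(np.exp(beta * sum(y[i] * y[i + 1] for i in range(n - 1)))
               for y in product([-1, 1], repeat=n))
```

For n = 10 the recursion touches 2 states per sweep instead of the 2^10 configurations the direct sum enumerates, which is the O(S^{r+1}) versus O(S^n) gap in miniature.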
Exact marginals and normalizing constant for Gibbs distributions
, 2013
Abstract
We present a recursive algorithm for the calculation of the marginals of a Gibbs distribution π. A direct consequence is the calculation of the normalizing constant of π. Résumé (translated from French): Recursions and normalizing constant for Gibbs models. We propose in this work a recursion on the marginal distributions of a Gibbs distribution π; a direct consequence is the exact calculation of the normalizing constant of π.
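For a chain-structured Gibbs distribution the marginal recursion reduces to a forward-backward pass; the free-boundary Ising chain below is an illustrative assumption, not the paper's general algorithm.

```python
import numpy as np

# Chain Gibbs model: q(y) = exp(beta * sum_i y_i y_{i+1}), y_i in {-1, +1}

def marginal(n, beta, k):
    """Exact marginal distribution of site k via forward-backward sums."""
    states = [-1, 1]
    # forward pass: F[i][s] sums unnormalised weights of prefixes ending in s
    F = [{s: 1.0 for s in states}]
    for _ in range(n - 1):
        prev = F[-1]
        F.append({s: sum(prev[t] * np.exp(beta * t * s) for t in states)
                  for s in states})
    # backward pass: B[i][s] sums weights of suffixes starting in s
    B = [{s: 1.0 for s in states}]
    for _ in range(n - 1):
        nxt = B[-1]
        B.append({s: sum(np.exp(beta * s * t) * nxt[t] for t in states)
                  for s in states})
    B = B[::-1]
    # combine: weight of y_k = s is (prefix weight) * (suffix weight);
    # their total over s is the normalizing constant restricted pattern
    w = {s: F[k][s] * B[k][s] for s in states}
    Z = sum(w.values())
    return {s: w[s] / Z for s in states}
```

Without an external field the chain is symmetric under flipping all spins, so every single-site marginal should come out as exactly one half.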