Results 1–10 of 21
Estimating the integrated likelihood via posterior simulation using the harmonic mean identity
Bayesian Statistics, 2007
Abstract

Cited by 24 (2 self)
The integrated likelihood (also called the marginal likelihood or the normalizing constant) is a central quantity in Bayesian model selection and model averaging. It is defined as the integral over the parameter space of the likelihood times the prior density. The Bayes factor for model comparison and Bayesian testing is a ratio of integrated likelihoods, and the model weights in Bayesian model averaging are proportional to the integrated likelihoods. We consider the estimation of the integrated likelihood from posterior simulation output, aiming at a generic method that uses only the likelihoods from the posterior simulation iterations. The key is the harmonic mean identity, which says that the reciprocal of the integrated likelihood is equal to the posterior harmonic mean of the likelihood. The simplest estimator based on the identity is thus the harmonic mean of the likelihoods. While this is an unbiased and simulation-consistent estimator, its reciprocal can have infinite variance and so it is unstable in general. We describe two methods for stabilizing the harmonic mean estimator. In the first one, the parameter space is reduced in such a way that the modified estimator involves a harmonic mean of heavier-tailed densities, thus resulting in a finite variance estimator. The resulting
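As a quick illustration of the identity itself (not of the paper's stabilized estimators), the sketch below uses a conjugate normal toy model where the integrated likelihood is known in closed form, draws from the exact posterior in place of MCMC output, and forms the harmonic mean of the likelihoods on the log scale. All names and settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model: y_i ~ N(theta, 1), prior theta ~ N(0, 1),
# so the integrated likelihood m(y) is available in closed form.
y = rng.normal(0.5, 1.0, size=3)
n = len(y)

# Exact log m(y): marginally y ~ N(0, I + 11'), with det(I + 11') = n + 1
# and (I + 11')^{-1} = I - 11'/(n + 1)  (Sherman–Morrison).
quad = np.sum(y**2) - y.sum()**2 / (n + 1)
log_m_exact = -0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1) - 0.5 * quad

# Posterior theta | y ~ N(mu_n, s2_n); exact draws stand in for MCMC output.
s2_n = 1.0 / (n + 1)
mu_n = s2_n * y.sum()
theta = rng.normal(mu_n, np.sqrt(s2_n), size=200_000)

# Log-likelihood of the full data set at each posterior draw.
loglik = (-0.5 * n * np.log(2 * np.pi)
          - 0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1))

# Harmonic mean identity: 1/m(y) = E_post[1/L(theta)].
# Averaged on the log scale via log-sum-exp; the estimator is consistent
# but, as the abstract notes, heavy-tailed and noisy in general.
a = -loglik
log_mean_inv = a.max() + np.log(np.mean(np.exp(a - a.max())))
log_m_hat = -log_mean_inv
print(log_m_exact, log_m_hat)
```

Even on this well-behaved example the harmonic mean estimate wanders around the exact value, which is exactly the instability the stabilized variants are meant to address.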
Contemplating evidence: properties, extensions of, and alternatives to nested sampling
2007
Abstract

Cited by 11 (10 self)
Nested sampling is a novel simulation method for approximating marginal likelihoods, proposed by Skilling (2007a,b). We establish that nested sampling leads to an error that vanishes at the standard Monte Carlo rate N^(-1/2), where N is a tuning parameter that is proportional to the computational effort, and that this error is asymptotically Gaussian. We show that the corresponding asymptotic variance typically grows linearly with the dimension of the parameter. We use these results to discuss the applicability and efficiency of nested sampling in realistic problems, including posterior distributions for mixtures. We propose an extension of nested sampling that makes it possible to avoid resorting to MCMC to obtain the simulated points. We study two alternative methods for computing marginal likelihood, which, in contrast with nested sampling, are based on draws from the posterior distribution, and we conduct a comparison with nested sampling on several realistic examples.
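A minimal nested sampling run on a one-dimensional toy problem with known evidence makes the ingredients concrete. This is an illustrative reconstruction of Skilling's basic scheme (deterministic volume schedule X_i = e^(-i/N), exact sampling from the constrained prior), not the extension proposed in the paper; the problem and all settings are assumptions for the sketch.

```python
import math
import random

random.seed(1)

# Toy problem: prior Uniform(0, 1) on t, likelihood L(t) = exp(-t).
# The evidence Z = ∫_0^1 e^{-t} dt = 1 - e^{-1} is known exactly.
def loglik(t):
    return -t

N = 500        # number of live points (the computational-effort knob)
iters = 5000   # number of nested-sampling shells

live = [random.random() for _ in range(N)]  # live points drawn from the prior
logZ_terms = []
logX_prev = 0.0  # log prior volume, starts at log(1)

for i in range(1, iters + 1):
    # Remove the live point with the lowest likelihood;
    # since L(t) is decreasing in t, that is the largest t.
    worst = max(live)
    logL = loglik(worst)
    logX = -i / N                   # deterministic schedule X_i = e^{-i/N}
    w = math.exp(logX_prev) - math.exp(logX)   # shell weight X_{i-1} - X_i
    logZ_terms.append(logL + math.log(w))
    logX_prev = logX
    # Replace it with a prior draw inside {L > L_worst}; for this monotone
    # likelihood the constrained prior is simply Uniform(0, worst).
    live[live.index(worst)] = random.uniform(0.0, worst)

# Accumulate the shells, plus the contribution of the remaining live points.
Z = sum(math.exp(term) for term in logZ_terms)
Z += math.exp(logX_prev) * sum(math.exp(loglik(t)) for t in live) / N
print(Z)  # should be close to 1 - e^{-1} ≈ 0.632
```

In higher dimensions the replacement step is the hard part, which is where MCMC (or the paper's proposed alternative) enters.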
Simulation of the Annual Loss Distribution in Operational Risk via Panjer Recursions and Volterra Integral Equations for Value at Risk and Expected Shortfall Estimation.
Abstract

Cited by 8 (1 self)
Following the Loss Distributional Approach (LDA), this article develops two procedures for simulation of an annual loss distribution for modeling of Operational Risk. First, we provide an overview of the typical compound-process LDA used widely in Operational Risk modeling, before expanding upon the current literature on evaluation and simulation of annual loss distributions. We present two novel Monte Carlo simulation procedures. In doing so, we make use of Panjer recursions and the Volterra integral equation of the second kind to reformulate the problem of evaluating the density of a random sum as the calculation of an expectation. We demonstrate the use of importance sampling and trans-dimensional Markov chain Monte Carlo algorithms to efficiently evaluate this expectation. We further demonstrate their use in the calculation of Value at Risk and
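The Panjer recursion underlying this kind of aggregate-loss evaluation is easy to state in the compound Poisson case. The sketch below is a generic textbook version (not the paper's importance-sampling or trans-dimensional MCMC procedures), with a discrete Value at Risk read off the aggregate pmf; function names are illustrative.

```python
import math

def panjer_compound_poisson(lam, sev, n_max):
    """Aggregate-loss pmf g[0..n_max] for S = X_1 + ... + X_N,
    with N ~ Poisson(lam) and X_i i.i.d. with pmf sev[k-1] on k = 1..len(sev)."""
    g = [0.0] * (n_max + 1)
    g[0] = math.exp(-lam)                 # P(S = 0) = P(N = 0)
    for n in range(1, n_max + 1):
        s = 0.0
        for k in range(1, min(n, len(sev)) + 1):
            s += k * sev[k - 1] * g[n - k]
        g[n] = lam * s / n                # Panjer recursion, Poisson case
    return g

def var_level(g, alpha):
    """Smallest n with P(S <= n) >= alpha: the discrete Value at Risk."""
    c = 0.0
    for n, p in enumerate(g):
        c += p
        if c >= alpha:
            return n
    return len(g) - 1

# Sanity check: degenerate severity at 1 makes S ~ Poisson(lam),
# so the recursion must reproduce the Poisson pmf exactly.
g = panjer_compound_poisson(2.0, [1.0], 10)
print(var_level(g, 0.95))
```

Expected Shortfall can be obtained from the same pmf by averaging the tail beyond the VaR level; the recursion's O(n_max²) cost is what motivates the simulation-based alternatives the paper develops.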
Hyperspectral image unmixing using a multiresolution sticky HDP
IEEE Trans. Signal Processing, 2012
Abstract

Cited by 7 (2 self)
Abstract—This paper is concerned with joint Bayesian endmember extraction and linear unmixing of hyperspectral images using a spatial prior on the abundance vectors. We propose a generative model for hyperspectral images in which the abundances are sampled from a Dirichlet distribution (DD) mixture model, whose parameters depend on a latent label process. The label process is then used to enforce a spatial prior which encourages adjacent pixels to have the same label. A Gibbs sampling framework is used to generate samples from the posterior distributions of the abundances and the parameters of the DD mixture model. The spatial prior that is used is a tree-structured sticky hierarchical Dirichlet process (SHDP) and, when used to determine the posterior endmember and abundance distributions, results in a new unmixing algorithm called spatially constrained unmixing (SCU). The directed Markov model facilitates the use of scale-recursive estimation algorithms, and is therefore more computationally efficient than standard Markov random field (MRF) models. Furthermore, the proposed SCU algorithm estimates the number of regions in the image in an unsupervised fashion. The effectiveness of the proposed SCU algorithm is illustrated using synthetic and real data. Index Terms—Bayesian inference, hidden Markov trees, hyperspectral unmixing, image segmentation, spatially constrained unmixing, sticky hierarchical Dirichlet process.
Reversible jump MCMC
2009
Abstract

Cited by 1 (0 self)
Statistical problems where ‘the number of things you don’t know is one of the things you don’t know’ are ubiquitous in statistical modelling. They arise both
On some difficulties with a posterior probability approximation technique
Abstract

Cited by 1 (1 self)
In Scott (2002) and Congdon (2006), a new method is advanced to compute posterior probabilities of models under consideration. It is based solely on MCMC outputs restricted to single models, i.e., it bypasses reversible jump and other model exploration techniques. While it is indeed possible to approximate posterior probabilities based solely on MCMC outputs from single models, as demonstrated by Gelfand and Dey (1994) and Bartolucci et al. (2006), we show that the proposals of Scott (2002) and Congdon (2006) are biased and advance several arguments in support of this claim, the primary one being the confusion between model-based posteriors and joint pseudo-posteriors.
Markov Chain Monte Carlo With Mixtures of Mutually Singular Distributions
Abstract

Cited by 1 (0 self)
Markov chain Monte Carlo (MCMC) methods for Bayesian computation are mostly used when the dominating measure is the Lebesgue measure, the counting measure, or a product of these. Many Bayesian problems give rise to distributions that are not dominated by the Lebesgue measure or the counting measure alone. In this article we introduce a simple framework for using MCMC algorithms in Bayesian computation with mixtures of mutually singular distributions. The idea is to find a common dominating measure that allows the use of traditional Metropolis–Hastings algorithms. In particular, using our formulation, the Gibbs sampler can be used whenever the full conditionals are available. We compare our formulation with the reversible jump approach and show that the two are closely related. We give results for three examples, involving testing a normal mean, variable selection in regression, and hypothesis testing for differential gene expression under multiple conditions. This allows us to compare the three methods considered: Metropolis–Hastings with mutually singular distributions, Gibbs sampler with mutually singular distributions, and reversible jump. In our examples, we found the Gibbs sampler to be more precise and to need considerably less computer time than the other methods. In addition, the full conditionals used in the Gibbs sampler can be used to further improve the estimates of the model posterior probabilities via Rao–Blackwellization, at no extra cost.
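The common-dominating-measure idea can be sketched on the normal-mean testing example: with prior 0.5·δ₀ + 0.5·N(0, 1) on θ, the measure δ₀ + Lebesgue dominates both components, and a standard Metropolis–Hastings chain on the pair (model indicator, parameter) targets the resulting density. The code below is an illustrative reconstruction under these assumptions, not the authors' implementation; the data, proposal q = N(0, 1), and tuning are made up for the sketch.

```python
import math
import random

random.seed(42)

# Data: y_i ~ N(theta, 1); test theta = 0 against theta ~ N(0, 1).
y = [0.9, 1.3, 0.2, 0.7, 1.1]
n, S = len(y), sum(y)

def loglik(th):
    return -0.5 * sum((yi - th) ** 2 for yi in y)   # up to a constant

def log_phi(th):                                     # standard normal log-density
    return -0.5 * th * th - 0.5 * math.log(2 * math.pi)

# Unnormalized target density w.r.t. the dominating measure delta_0 + Lebesgue:
#   pi(k=0)        = 0.5 * L(0)              (component w.r.t. delta_0)
#   pi(k=1, theta) = 0.5 * phi(theta) L(theta)  (component w.r.t. Lebesgue)
def log_target(k, th):
    return math.log(0.5) + loglik(th) + (log_phi(th) if k == 1 else 0.0)

k, th = 0, 0.0
hits1 = 0
iters = 200_000
for _ in range(iters):
    if random.random() < 0.5:          # propose a between-model switch
        if k == 0:
            thp = random.gauss(0.0, 1.0)           # draw theta* from q = N(0,1)
            loga = log_target(1, thp) - log_target(0, 0.0) - log_phi(thp)
            if math.log(random.random()) < loga:
                k, th = 1, thp
        else:                                      # propose jumping back to theta = 0
            loga = log_target(0, 0.0) + log_phi(th) - log_target(1, th)
            if math.log(random.random()) < loga:
                k, th = 0, 0.0
    elif k == 1:                        # within-model random-walk move
        thp = th + random.gauss(0.0, 0.5)
        if math.log(random.random()) < log_target(1, thp) - log_target(1, th):
            th = thp
    hits1 += k

p1_mcmc = hits1 / iters

# Closed-form check: BF10 = (n+1)^{-1/2} exp(S^2 / (2(n+1))).
bf = math.exp(S * S / (2 * (n + 1))) / math.sqrt(n + 1)
p1_exact = bf / (1.0 + bf)
print(p1_mcmc, p1_exact)
```

The fraction of iterations spent at k = 1 estimates the posterior probability of the alternative; the same construction with model-indexed full conditionals gives the Gibbs variant the abstract compares against reversible jump.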
Model Choice using Reversible Jump Markov Chain Monte Carlo
, 2011
Abstract

Cited by 1 (0 self)
We review the across-model simulation approach to computation for Bayesian model determination, based on the reversible jump Markov chain Monte Carlo method. Advantages, difficulties and variations of the methods are discussed. We also discuss some limitations of the ideal Bayesian view of the model determination problem, for which no computational methods can provide a cure.
unknown title
2009
Abstract
when π is the prior distribution and L(y | θ) is the likelihood. Those integrals are called evidence in the above papers. They naturally occur as marginals in Bayesian testing and model choice (Jeffreys, 1939; Robert, 2001, Chapters 5 and 7). Nested sampling has been well received in astronomy and has been applied successfully to several cosmological problems, see, for instance, Mukherjee et al. (2006), Shaw et al. (2007), and Vegetti & Koopmans (2009), among others. In addition, Murray et al. (2006) develop a nested sampling algorithm for computing the normalising constant of Potts models. The purpose of this paper is to investigate the formal properties of nested sampling. A first effort in that direction is Evans (2007), which shows that nested sampling estimates converge in probability, but calls for further work on the rate of convergence and the limiting distribution. Our main result is a central limit theorem for nested sampling estimates, which says that the approximation error is dominated by an O(N^(-1/2)) stochastic term with a limiting Gaussian distribution, where N is a tuning parameter proportional to the computational effort. We also investigate the impact of the dimension d of the problem on the performance of the algorithm. In a simple example, we show that the asymptotic variance of nested sampling estimates grows linearly with d; this means that the computational cost is O(d^3 / η^2), where η is the selected error bound.