Results 1–10 of 27
Approximate Bayesian Computational methods
Statistics and Computing, 2011
Cited by 64 (7 self)
Abstract
Also known as likelihood-free methods, approximate Bayesian computational (ABC) methods have appeared in the past ten years as the most satisfactory approach to intractable likelihood problems, first in genetics and then in a broader spectrum of applications. However, these methods suffer to some degree from calibration difficulties that make them rather volatile in their implementation and thus render them suspicious to users of more traditional Monte Carlo methods. In this survey, we study the various improvements and extensions brought to the original ABC algorithm in recent years.
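The basic ABC rejection scheme the survey builds on can be sketched in a few lines. Everything below (the Gaussian toy model, the uniform prior, the sample-mean summary statistic, and the tolerance value) is an illustrative choice, not taken from the survey:

```python
import random
import statistics

# Toy ABC rejection sampler: infer the mean of a Normal(mu, 1) model
# from observed data without ever evaluating the likelihood.
random.seed(1)
observed = [random.gauss(2.0, 1.0) for _ in range(50)]
s_obs = statistics.mean(observed)          # summary statistic of the data

def simulate(mu, n=50):
    """Generate pseudo-data from the model at parameter mu."""
    return [random.gauss(mu, 1.0) for _ in range(n)]

accepted = []
tolerance = 0.1
for _ in range(20_000):
    mu = random.uniform(-5, 5)             # draw a candidate from the prior
    s_sim = statistics.mean(simulate(mu))  # summary of the pseudo-data
    if abs(s_sim - s_obs) <= tolerance:    # keep mu if summaries are close
        accepted.append(mu)

# 'accepted' approximates draws from the ABC posterior for mu
```

The calibration difficulties the survey mentions show up directly here: the choice of summary statistic and of the tolerance both change which posterior the accepted draws approximate.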
Estimating the integrated likelihood via posterior simulation using the harmonic mean identity
Bayesian Statistics, 2007
Cited by 49 (2 self)
Abstract
The integrated likelihood (also called the marginal likelihood or the normalizing constant) is a central quantity in Bayesian model selection and model averaging. It is defined as the integral over the parameter space of the likelihood times the prior density. The Bayes factor for model comparison and Bayesian testing is a ratio of integrated likelihoods, and the model weights in Bayesian model averaging are proportional to the integrated likelihoods. We consider the estimation of the integrated likelihood from posterior simulation output, aiming at a generic method that uses only the likelihoods from the posterior simulation iterations. The key is the harmonic mean identity, which says that the reciprocal of the integrated likelihood is equal to the posterior harmonic mean of the likelihood. The simplest estimator based on the identity is thus the harmonic mean of the likelihoods. While this is an unbiased and simulation-consistent estimator, its reciprocal can have infinite variance and so it is unstable in general. We describe two methods for stabilizing the harmonic mean estimator. In the first one, the parameter space is reduced in such a way that the modified estimator involves a harmonic mean of heavier-tailed densities, thus resulting in a finite-variance estimator. The resulting …
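The identity is easy to check on a conjugate toy model where the integrated likelihood has a closed form. Everything below (the Normal–Normal model, the single observation, the number of draws) is an illustrative sketch, not the stabilized estimator the paper proposes:

```python
import math
import random

# Harmonic mean estimator on a conjugate toy model with a known answer:
#   y ~ Normal(theta, 1), theta ~ Normal(0, 1)  =>  marginally y ~ Normal(0, 2).
random.seed(0)
y = 0.7  # a single (illustrative) observation

def likelihood(theta):
    return math.exp(-0.5 * (y - theta) ** 2) / math.sqrt(2 * math.pi)

# The exact posterior here is Normal(y/2, 1/2); we draw from it directly.
# In practice these draws would come from posterior simulation (e.g. MCMC).
draws = [random.gauss(y / 2, math.sqrt(0.5)) for _ in range(200_000)]

# Harmonic mean identity: 1 / p(y) = E_posterior[ 1 / likelihood(theta) ]
mean_inv_lik = sum(1.0 / likelihood(t) for t in draws) / len(draws)
hm_estimate = 1.0 / mean_inv_lik

exact = math.exp(-y ** 2 / 4) / math.sqrt(4 * math.pi)  # Normal(0, 2) density at y
```

On this benign one-dimensional problem the estimate lands close to the exact value, but the heavy right tail of 1/likelihood is exactly what makes the raw estimator unstable in general, motivating the stabilizations the paper describes.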
Contemplating evidence: properties, extensions of, and alternatives to nested sampling, 2007
Cited by 15 (12 self)
Abstract
Nested sampling is a novel simulation method for approximating marginal likelihoods, proposed by Skilling (2007a,b). We establish that nested sampling leads to an error that vanishes at the standard Monte Carlo rate N^{-1/2}, where N is a tuning parameter proportional to the computational effort, and that this error is asymptotically Gaussian. We show that the corresponding asymptotic variance typically grows linearly with the dimension of the parameter. We use these results to discuss the applicability and efficiency of nested sampling in realistic problems, including posterior distributions for mixtures. We propose an extension of nested sampling that makes it possible to avoid resorting to MCMC to obtain the simulated points. We study two alternative methods for computing the marginal likelihood which, in contrast with nested sampling, are based on draws from the posterior distribution, and we conduct a comparison with nested sampling on several realistic examples.
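A minimal sketch of the nested sampling iteration helps fix the notation. The one-dimensional uniform-prior Gaussian-likelihood problem, the point counts, and the brute-force rejection step used to draw from the constrained prior are all illustrative simplifications; realistic implementations need MCMC for that constrained draw, which is precisely the issue the extension mentioned in the abstract addresses:

```python
import math
import random

# Nested sampling on a toy problem with a known answer:
#   prior: theta ~ Uniform(-5, 5); likelihood: standard Normal density.
# Marginal likelihood Z = (1/10) * integral of N(theta; 0, 1) over [-5, 5] ~= 0.1.
random.seed(3)

def loglik(theta):
    return -0.5 * theta ** 2 - 0.5 * math.log(2 * math.pi)

N = 100                                        # number of live points
live = [random.uniform(-5, 5) for _ in range(N)]
Z = 0.0
for i in range(1, 801):
    worst = min(live, key=loglik)
    log_L = loglik(worst)
    # deterministic shell weight: X_{i-1} - X_i with X_i = exp(-i / N)
    w = math.exp(-(i - 1) / N) - math.exp(-i / N)
    Z += w * math.exp(log_L)
    # replace the worst point by a prior draw above the likelihood threshold
    # (brute-force rejection: fine in this toy case, hopeless in general)
    while True:
        cand = random.uniform(-5, 5)
        if loglik(cand) > log_L:
            live[live.index(worst)] = cand
            break

# add the residual mass carried by the remaining live points
Z += math.exp(-800 / N) * sum(math.exp(loglik(t)) for t in live) / N
```

The N^{-1/2} error rate established in the paper is visible in this setup: increasing the number of live points N shrinks the Monte Carlo error of Z at that rate.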
Hyperspectral image unmixing using a multiresolution sticky HDP
IEEE Trans. Signal Processing, 2012
Cited by 10 (2 self)
Abstract
This paper is concerned with joint Bayesian endmember extraction and linear unmixing of hyperspectral images using a spatial prior on the abundance vectors. We propose a generative model for hyperspectral images in which the abundances are sampled from a Dirichlet distribution (DD) mixture model whose parameters depend on a latent label process. The label process is then used to enforce a spatial prior which encourages adjacent pixels to have the same label. A Gibbs sampling framework is used to generate samples from the posterior distributions of the abundances and the parameters of the DD mixture model. The spatial prior that is used is a tree-structured sticky hierarchical Dirichlet process (SHDP) and, when used to determine the posterior endmember and abundance distributions, results in a new unmixing algorithm called spatially constrained unmixing (SCU). The directed Markov model facilitates the use of scale-recursive estimation algorithms and is therefore more computationally efficient than standard Markov random field (MRF) models. Furthermore, the proposed SCU algorithm estimates the number of regions in the image in an unsupervised fashion. The effectiveness of the proposed SCU algorithm is illustrated using synthetic and real data.
Index Terms: Bayesian inference, hidden Markov trees, hyperspectral unmixing, image segmentation, spatially constrained unmixing, sticky hierarchical Dirichlet process.
Simulation of the Annual Loss Distribution in Operational Risk via Panjer Recursions and Volterra Integral Equations for Value at Risk and Expected Shortfall Estimation.
Cited by 10 (3 self)
Abstract
Following the Loss Distributional Approach (LDA), this article develops two procedures for simulation of an annual loss distribution for modeling of Operational Risk. First, we provide an overview of the typical compound-process LDA used widely in Operational Risk modeling, before expanding upon the current literature on evaluation and simulation of annual loss distributions. We present two novel Monte Carlo simulation procedures. In doing so, we make use of Panjer recursions and the Volterra integral equation of the second kind to reformulate the problem of evaluating the density of a random sum as the calculation of an expectation. We demonstrate the use of importance sampling and trans-dimensional Markov chain Monte Carlo algorithms to efficiently evaluate this expectation. We further demonstrate their use in the calculation of Value at Risk and …
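The Panjer recursion the article starts from is short enough to state directly for the compound Poisson case. The Poisson rate, the three-point severity distribution, and the quantile level below are illustrative and not taken from the article:

```python
import math

# Panjer recursion for a compound Poisson sum S = X_1 + ... + X_N with
# N ~ Poisson(lam) and integer severities X_i >= 1 with pmf f.
lam = 2.0
f = {1: 0.5, 2: 0.3, 3: 0.2}             # illustrative severity pmf, f(0) = 0

def panjer(max_s):
    """Return [P(S = 0), ..., P(S = max_s)] via Panjer's recursion."""
    g = [math.exp(-lam)]                 # P(S = 0) = P(N = 0) since f(0) = 0
    for s in range(1, max_s + 1):
        # Poisson case: g(s) = (lam / s) * sum_j j * f(j) * g(s - j)
        g.append((lam / s) * sum(j * f[j] * g[s - j] for j in f if j <= s))
    return g

g = panjer(60)

# Value at Risk at level 0.99: smallest s with P(S <= s) >= 0.99
cum = 0.0
for var_99, p in enumerate(g):
    cum += p
    if cum >= 0.99:
        break
```

Expected Shortfall at the same level follows from the tail of `g` in the same way; the Volterra-equation and importance-sampling reformulations in the article target the continuous-severity settings where this exact integer recursion no longer applies directly.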
Automatic detection of key innovations, rate shifts and diversity dependence on phylogenetic trees
PLoS ONE 9, 2013
Cited by 8 (2 self)
Abstract
A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes.
Markov chain Monte Carlo with mixtures of mutually singular distributions
J. Comput, 2008
Model Choice using Reversible Jump Markov Chain Monte Carlo, 2011
Cited by 6 (0 self)
Abstract
We review the across-model simulation approach to computation for Bayesian model determination, based on the reversible jump Markov chain Monte Carlo method. Advantages, difficulties and variations of the methods are discussed. We also discuss some limitations of the ideal Bayesian view of the model determination problem, for which no computational methods can provide a cure.
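The across-model mechanics can be illustrated with a minimal reversible-jump sampler choosing between two nested models. The two-model Normal example, the proposal choices, and the iteration count below are illustrative; because the new parameter is proposed from its prior, the jump acceptance ratio collapses to a likelihood ratio and the Jacobian is 1:

```python
import math
import random

# Reversible-jump MCMC for a two-model choice given one observation y:
#   Model 0: y ~ Normal(0, 1)                        (no free parameter)
#   Model 1: y ~ Normal(theta, 1), theta ~ Normal(0, 1); equal prior odds.
random.seed(4)
y = 1.5

def lik(theta):
    return math.exp(-0.5 * (y - theta) ** 2) / math.sqrt(2 * math.pi)

def prior(theta):
    return math.exp(-0.5 * theta ** 2) / math.sqrt(2 * math.pi)

model, theta = 0, 0.0
visits = [0, 0]
for _ in range(50_000):
    if random.random() < 0.5:                  # propose a between-model jump
        if model == 0:
            t = random.gauss(0, 1)             # propose theta from its prior
            # prior proposal cancels the prior: accept with likelihood ratio
            if random.random() < lik(t) / lik(0.0):
                model, theta = 1, t
        else:
            if random.random() < min(1.0, lik(0.0) / lik(theta)):
                model, theta = 0, 0.0
    elif model == 1:                           # within-model random walk on theta
        t = theta + random.gauss(0, 0.5)
        a = (lik(t) * prior(t)) / (lik(theta) * prior(theta))
        if random.random() < a:
            theta = t
    visits[model] += 1

p_model_1 = visits[1] / sum(visits)
# exact P(Model 1 | y) = N(y; 0, 2) / (N(y; 0, 2) + N(y; 0, 1)) ~= 0.554 here
```

The conjugate setup makes the exact posterior model probability available, so the chain's visit frequencies can be checked against it; the calibration difficulties reviewed above concern exactly how to design such jump proposals when no convenient cancellation is available.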
Reversible jump MCMC, 2009
Cited by 5 (0 self)
Abstract
Statistical problems where ‘the number of things you don’t know is one of the things you don’t know’ are ubiquitous in statistical modelling. They arise both …