Results 1–10 of 40
A.: Likelihood-Based Inference for Max-Stable Processes
 Journal of the American Statistical Association
Cited by 14 (1 self)
Abstract:
The last decade has seen max-stable processes emerge as a common tool for the statistical modelling of spatial extremes. However, their application is complicated by the unavailability of the multivariate density function, and so likelihood-based methods remain far from providing a complete and flexible framework for inference. In this article we develop inferentially practical, likelihood-based methods for fitting max-stable processes derived from a composite-likelihood approach. The procedure is sufficiently reliable and versatile to permit the simultaneous modelling of joint and marginal parameters in the spatial context at a moderate computational cost. The utility of this methodology is examined via simulation, and illustrated by the analysis of U.S. precipitation extremes. Keywords: Composite likelihood; Extreme value theory; Max-stable processes; Pseudo-likelihood; Rainfall; Spatial extremes.
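The composite-likelihood idea in the abstract above can be sketched on a toy Gaussian model: each pair of sites contributes a bivariate log-density, and the pairwise sum is maximised in place of an intractable full likelihood. This is a generic illustration with made-up data and a made-up correlation model, not the authors' max-stable fitting procedure.

```python
import numpy as np
from itertools import combinations

# Toy pairwise (composite) likelihood: estimate a shared mean from
# correlated Gaussian data observed at d "sites", using only bivariate
# densities. Data, correlation model and grid bounds are hypothetical.
rng = np.random.default_rng(0)
d, n, true_mu = 4, 400, 2.0
sites = np.arange(d)
corr = np.exp(-np.abs(sites[:, None] - sites[None, :]) / 2.0)
data = rng.multivariate_normal(np.full(d, true_mu), corr, size=n)

def neg_pairwise_loglik(mu):
    # sum of negative bivariate Gaussian log-densities over all site pairs
    total = 0.0
    for i, j in combinations(range(d), 2):
        rho = corr[i, j]
        zi, zj = data[:, i] - mu, data[:, j] - mu
        total += 0.5 * np.sum((zi**2 - 2 * rho * zi * zj + zj**2) / (1 - rho**2))
    return total

# maximise the composite likelihood by a simple grid search
mu_grid = np.linspace(0.0, 4.0, 401)
mu_hat = mu_grid[np.argmin([neg_pairwise_loglik(m) for m in mu_grid])]
```

With a correctly specified pairwise model, `mu_hat` lands close to the true mean; in the spatial-extremes setting the same construction sidesteps the missing full multivariate density.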
Time series analysis via mechanistic models. In review; prepublished at arxiv.org/abs/0802.0021
, 2008
Cited by 13 (5 self)
Abstract:
The purpose of time series analysis via mechanistic models is to reconcile the known or hypothesized structure of a dynamical system with observations collected over time. We develop a framework for constructing nonlinear mechanistic models and carrying out inference. Our framework permits the consideration of implicit dynamic models, meaning statistical models for stochastic dynamical systems which are specified by a simulation algorithm that generates sample paths. Inference procedures that operate on implicit models are said to have the plug-and-play property. Our work builds on recently developed plug-and-play inference methodology for partially observed Markov models. We introduce a class of implicitly specified Markov chains with stochastic transition rates, and we demonstrate its applicability to open problems in statistical inference for biological systems. As one example, these models are shown to give a fresh perspective on measles transmission dynamics. As a second example, we present a mechanistic analysis of cholera incidence data, involving interaction between two competing strains of the pathogen Vibrio cholerae.
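An implicit, plug-and-play model in the sense above is one that is specified only through its simulator. A minimal sketch, assuming a hypothetical discrete-time SIR chain with binomial transitions (not the measles or cholera models of the paper; all parameter values are illustrative):

```python
import numpy as np

# Plug-and-play style simulator: the model is defined by this sample-path
# generator alone, with no tractable transition density required.
def simulate_sir(beta=0.5, gamma=0.2, n=1000, i0=10, steps=100, dt=0.5, seed=1):
    rng = np.random.default_rng(seed)
    s, i, r = n - i0, i0, 0
    path = [(s, i, r)]
    for _ in range(steps):
        p_inf = 1.0 - np.exp(-beta * i / n * dt)  # per-susceptible infection prob.
        p_rec = 1.0 - np.exp(-gamma * dt)         # per-infective recovery prob.
        new_inf = rng.binomial(s, p_inf)
        new_rec = rng.binomial(i, p_rec)
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        path.append((s, i, r))
    return np.array(path)

path = simulate_sir()
```

Any inference procedure that only ever calls `simulate_sir` (rather than evaluating transition densities) has the plug-and-play property described in the abstract.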
An Adaptive Sequential Monte Carlo Method for Approximate Bayesian Computation
, 2008
Cited by 13 (1 self)
Abstract:
Approximate Bayesian computation (ABC) is a popular approach to inference problems where the likelihood function is intractable or expensive to calculate. To improve over Markov chain Monte Carlo (MCMC) implementations of ABC, the use of sequential Monte Carlo (SMC) methods has recently been suggested. Effective SMC algorithms currently available for ABC have a computational complexity that is quadratic in the number of Monte Carlo samples [4, 17, 19, 21] and require the careful choice of simulation parameters. In this article an adaptive SMC algorithm is proposed which admits a computational complexity that is linear in the number of samples and determines the simulation parameters on the fly. We demonstrate our algorithm on a toy example and a population genetics example.
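The adaptive ingredients described above (a tolerance chosen on the fly and a data-driven move kernel) can be illustrated on a toy normal-mean problem. This deliberately simplified sketch omits the importance weights of a proper SMC sampler, so it illustrates the adaptation idea only, not the paper's algorithm; all tuning constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
y_obs = rng.normal(3.0, 1.0, size=50)
s_obs = y_obs.mean()                      # observed summary statistic

def sim_summary(theta):
    # simulate a dataset under theta and return its summary
    return rng.normal(theta, 1.0, size=50).mean()

n_part = 500
theta = rng.uniform(-10.0, 10.0, n_part)  # particles drawn from the prior
dist = np.abs(np.array([sim_summary(t) for t in theta]) - s_obs)
for _ in range(5):                        # SMC generations
    eps = np.quantile(dist, 0.5)          # adaptive tolerance: median distance
    theta = theta[dist <= eps]
    scale = 0.5 * theta.std() + 1e-6      # move kernel adapted to particle spread
    theta = rng.choice(theta, n_part) + rng.normal(0.0, scale, n_part)
    dist = np.abs(np.array([sim_summary(t) for t in theta]) - s_obs)

post_mean = theta[dist <= np.quantile(dist, 0.5)].mean()
```

Each generation shrinks the tolerance automatically, which is the "on-the-fly" behaviour the abstract refers to; a real SMC sampler would also reweight the particles.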
Kernel Bayes' Rule
Cited by 8 (3 self)
Abstract:
A nonparametric kernel-based method for realizing Bayes' rule is proposed, based on kernel representations of probabilities in reproducing kernel Hilbert spaces. The prior and conditional probabilities are expressed as empirical kernel mean and covariance operators, respectively, and the kernel mean of the posterior distribution is computed in the form of a weighted sample. The kernel Bayes' rule can be applied to a wide variety of Bayesian inference problems: we demonstrate Bayesian computation without likelihood, and filtering with a nonparametric state-space model. A consistency rate for the posterior estimate is established.
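One ingredient of the approach above, the empirical conditional kernel mean, can be sketched with Gram matrices: the weights (G + nλI)⁻¹ k(s_obs) give a weighted-sample estimate of a posterior expectation. This is far simpler than the full kernel Bayes' rule, and the toy model, bandwidth, and regularisation below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
theta = rng.normal(0.0, 2.0, n)          # draws from the prior
s = theta + rng.normal(0.0, 0.5, n)      # simulated summaries
s_obs = 1.5                              # "observed" summary

def gauss_k(a, b, bw=1.0):
    # Gaussian kernel matrix between two 1-D samples
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bw ** 2))

lam = 0.01                               # illustrative regularisation constant
G = gauss_k(s, s)
# conditional kernel mean weights: (G + n*lam*I)^{-1} k(s, s_obs)
w = np.linalg.solve(G + n * lam * np.eye(n), gauss_k(s, np.array([s_obs]))[:, 0])
post_mean = theta @ w                    # weighted-sample estimate of E[theta | s_obs]
```

As in the abstract, the posterior is represented only through a weighted sample, so expectations of other functions of θ are obtained the same way (`f(theta) @ w`).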
Bayesian inference, Monte Carlo sampling and operational risk
 Journal of Operational Risk
Cited by 5 (3 self)
Abstract:
Operational risk is an important quantitative topic as a result of the Basel II regulatory requirements. Operational risk models need to incorporate internal and external loss data observations in combination with expert opinion surveyed from business specialists. Following the Loss Distributional Approach, this article considers three aspects of the Bayesian approach to the modeling of operational risk. First, we provide an overview of the Bayesian approach to operational risk, before expanding on the current literature through consideration of general families of non-conjugate severity distributions, the g-and-h and GB2 distributions. Bayesian model selection is presented as an alternative to popular frequentist tests such as Kolmogorov-Smirnov or Anderson-Darling. We present a number of examples and develop techniques for parameter estimation for general severity and frequency distribution models from a Bayesian perspective. Finally, we introduce and evaluate recently developed stochastic sampling techniques and highlight their application to operational risk through the models developed.
Approximate Bayesian computation: A nonparametric perspective
 Journal of the American Statistical Association
, 2010
Cited by 5 (1 self)
Abstract:
Approximate Bayesian Computation is a family of likelihood-free inference techniques that are well-suited to models defined in terms of a stochastic generating mechanism. In a nutshell, Approximate Bayesian Computation proceeds by computing summary statistics s_obs from the data and simulating synthetic summary statistics for different values of the parameter Θ. The posterior distribution is then approximated by an estimator of the conditional density g(Θ | s_obs). In this paper, we derive the asymptotic bias and variance of the standard estimators of the posterior distribution which are based on rejection sampling and linear adjustment. Additionally, we introduce an original estimator of the posterior distribution based on quadratic adjustment and we show that its bias contains a smaller number of terms than the estimator with linear adjustment. Although we find that the estimators with adjustment are not universally superior to the estimator based on rejection sampling, they can achieve better performance when there is a nearly homoscedastic relationship between the summary statistics and the parameter of interest. Last, we present model selection in Approximate Bayesian Computation and provide asymptotic properties of two estimators of the model probabilities. As for parameter estimation, the asymptotic results highlight the importance of the curse of dimensionality in Approximate Bayesian Computation. Numerical simulations in a simple normal model confirm that the estimators may become less efficient as the number of summary statistics increases. Supplemental materials containing the details of the proofs are available online.
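The two estimators compared above, rejection sampling and linear adjustment, can be sketched on a toy normal-mean model. The prior, tolerance quantile, and summary choice are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(4)
y_obs = rng.normal(2.0, 1.0, size=100)
s_obs = y_obs.mean()                      # observed summary statistic

n = 20000
theta = rng.uniform(-10.0, 10.0, n)       # draws from a flat prior
s = rng.normal(theta, 0.1)                # summary of 100 simulated points (sd 1/sqrt(100))

# rejection step: keep the 1% of simulations whose summary is closest to s_obs
eps = np.quantile(np.abs(s - s_obs), 0.01)
keep = np.abs(s - s_obs) <= eps
theta_acc, s_acc = theta[keep], s[keep]

# linear adjustment: theta* = theta - b * (s - s_obs), slope from a local regression
b = np.polyfit(s_acc, theta_acc, 1)[0]
theta_adj = theta_acc - b * (s_acc - s_obs)
```

`theta_acc` is the plain rejection-ABC sample and `theta_adj` the linearly adjusted one; in this nearly homoscedastic toy setting the adjustment recenters the accepted draws on `s_obs`, which matches the regime where the abstract reports the adjusted estimators perform well.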
On sequential Monte Carlo, partial rejection control and approximate Bayesian computation
, 2008
Cited by 4 (2 self)
Abstract:
We present a sequential Monte Carlo sampler variant of the partial rejection control algorithm introduced by Liu (2001), termed SMC sampler PRC, and show that this variant can be considered within the framework of the sequential Monte Carlo sampler of Del Moral et al. (2006). We make connections with existing algorithms and theoretical results, and extend some theoretical results to the SMC sampler PRC setting. We examine the properties of the SMC sampler PRC and give recommendations for user-specified quantities. We also study the special case of the SMC sampler PRC in the “likelihood-free” approximate Bayesian computation framework, as introduced by Sisson et al. (2007).
A unified multiresolution coalescent: Markov lumpings of the Kingman-Tajima n-coalescent
, 2009
HIV with contact-tracing: a case study in Approximate Bayesian Computation
, 810
Cited by 2 (2 self)
Abstract:
Statistical inference with missing data is a recurrent issue in epidemiology, where the infection process is only partially observable. In this paper, Approximate Bayesian Computation (ABC), an alternative to data imputation methods such as Markov chain Monte Carlo integration, is proposed for making inference in epidemiological models. This method of inference is not based on the likelihood function and relies exclusively on numerical simulations of the model. ABC consists of computing a distance between simulated and observed summary statistics and weighting the simulations according to this distance. We propose an original extension of ABC to path-valued summary statistics, corresponding to the cumulative number of detected individuals as a function of time. In a simple SIR model, we show that the posterior distributions obtained with ABC are similar to those obtained with MCMC. When detection times are binned or noisy, we introduce a vector of summary statistics for which several variants of ABC can be applied. In a refined SIR model well-suited to the HIV contact-tracing program in Cuba, we perform a comparison between ABC with full and with binned data. The last section deals with the analysis of the Cuban HIV/AIDS data. We evaluate the efficiency of the detection system and predict the evolution of the HIV/AIDS epidemic over the forthcoming years. We show in particular that the percentage of undetected infectious individuals among the contaminated population might be of the order of 40%.
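The path-valued idea above, comparing cumulative detection curves rather than scalar summaries, can be sketched with a hypothetical constant-rate Poisson detection process, much simpler than the SIR models of the paper; the prior and acceptance quantile are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
t_grid = np.linspace(0.0, 10.0, 50)
dt = t_grid[1] - t_grid[0]

def cum_detections(rate):
    # cumulative count path of a constant-rate Poisson detection process
    return np.cumsum(rng.poisson(rate * dt, size=t_grid.size))

obs_path = cum_detections(3.0)            # pretend these are the observed data

n = 5000
rates = rng.uniform(0.1, 10.0, n)         # prior on the detection rate
# path-valued summary: L2 distance between whole cumulative curves
dists = np.array([np.sqrt(np.mean((cum_detections(r) - obs_path) ** 2))
                  for r in rates])
keep = dists <= np.quantile(dists, 0.02)  # retain the closest 2% of paths
post_rate = rates[keep].mean()
```

Because the distance uses the entire curve, timing information is retained that a single end-of-study count would discard, which is the point of the path-valued extension described above.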
Chain Ladder Method: Bayesian Bootstrap versus Classical Bootstrap
, 2009
Cited by 1 (0 self)
Abstract:
The intention of this paper is to analyse the mean square error of prediction (MSEP) under the distribution-free chain ladder (DFCL) claims reserving method. We compare the estimate obtained from the classical bootstrap method with the one obtained from a Bayesian bootstrap. To achieve this in the DFCL model, we develop a novel approximate Bayesian computation (ABC) sampling algorithm to obtain the empirical posterior distribution. An ABC sampling algorithm is needed because we work in a distribution-free setting. The use of this ABC methodology combined with the bootstrap allows us to obtain samples from the intractable posterior distribution without requiring any distributional assumptions. This then enables us to calculate the MSEP and other risk measures such as Value-at-Risk.
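For context, the deterministic chain-ladder step that both bootstrap schemes resample around can be sketched as follows; the triangle values are invented for illustration, and no bootstrap or ABC layer is shown.

```python
import numpy as np

# cumulative claims triangle; np.nan marks future (unobserved) cells
tri = np.array([
    [100.0, 150.0, 170.0, 180.0],
    [110.0, 168.0, 190.0, np.nan],
    [ 95.0, 140.0, np.nan, np.nan],
    [120.0, np.nan, np.nan, np.nan],
])
n = tri.shape[0]

# volume-weighted development factors from adjacent observed columns
f = []
for j in range(n - 1):
    rows = ~np.isnan(tri[:, j + 1])
    f.append(tri[rows, j + 1].sum() / tri[rows, j].sum())

# complete the triangle by rolling each row forward with the factors
for i in range(n):
    for j in range(n - 1):
        if np.isnan(tri[i, j + 1]):
            tri[i, j + 1] = tri[i, j] * f[j]

ultimates = tri[:, -1]   # projected ultimate claims per origin year
```

The bootstrap (classical or Bayesian, via ABC as in the paper) would resample the individual development ratios to put a distribution around these point estimates, from which the MSEP and Value-at-Risk follow.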