Results 1–10 of 101
On Leverage in a Stochastic Volatility Model
Journal of Econometrics, 2005
Abstract

Cited by 58 (14 self)
This note is concerned with the specification of the financial leverage effect in stochastic volatility (SV) models. Two alternative specifications coexist in the literature. One is the Euler approximation to the well-known continuous-time SV model with leverage effect; the other is the discrete-time SV model of Jacquier, Polson and Rossi (2004, Journal of Econometrics, forthcoming). Using a Gaussian nonlinear state-space form with uncorrelated measurement and transition errors, I show that it is easy to interpret the leverage effect in the conventional model, whereas it is not clear how to obtain the leverage effect in the model of Jacquier et al. Empirical comparisons of these two models via Bayesian Markov chain Monte Carlo (MCMC) methods reveal that the specification of Jacquier et al. is inferior. Simulation experiments are conducted to study the sampling properties of the Bayesian MCMC method for the conventional model.
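The conventional discrete-time specification referred to above can be sketched as a short simulation (a minimal illustration with made-up parameter values, not the paper's estimation procedure): returns y_t = exp(h_t/2)·eps_t with an AR(1) log-volatility h_t, where the leverage effect enters through a negative correlation rho between the return shock eps_t and the next-period volatility shock eta_t.

```python
import numpy as np

# Sketch of the conventional discrete-time SV model with leverage
# (an Euler-type approximation; parameter values are illustrative):
#   y_t     = exp(h_t / 2) * eps_t
#   h_{t+1} = mu + phi * (h_t - mu) + sigma_eta * eta_t
# with corr(eps_t, eta_t) = rho < 0 giving the leverage effect.

def simulate_sv_leverage(T, mu=-1.0, phi=0.95, sigma_eta=0.2, rho=-0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Draw correlated (eps_t, eta_t) pairs from a bivariate normal.
    cov = np.array([[1.0, rho], [rho, 1.0]])
    shocks = rng.multivariate_normal([0.0, 0.0], cov, size=T)
    eps, eta = shocks[:, 0], shocks[:, 1]
    h = np.empty(T)
    h[0] = mu
    for t in range(T - 1):
        h[t + 1] = mu + phi * (h[t] - mu) + sigma_eta * eta[t]
    y = np.exp(h / 2.0) * eps
    return y, h

y, h = simulate_sv_leverage(T=5000)
# With rho < 0, a negative return today tends to precede higher
# volatility tomorrow, so the sample correlation between y_t and
# h_{t+1} should be negative.
print(np.corrcoef(y[:-1], h[1:])[0, 1])
```

The sign of that correlation is the observable fingerprint of leverage that distinguishes this specification from one in which the return and volatility shocks are contemporaneously uncorrelated.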
Estimating the integrated likelihood via posterior simulation using the harmonic mean identity
Bayesian Statistics, 2007
Abstract

Cited by 49 (2 self)
The integrated likelihood (also called the marginal likelihood or the normalizing constant) is a central quantity in Bayesian model selection and model averaging. It is defined as the integral over the parameter space of the likelihood times the prior density. The Bayes factor for model comparison and Bayesian testing is a ratio of integrated likelihoods, and the model weights in Bayesian model averaging are proportional to the integrated likelihoods. We consider the estimation of the integrated likelihood from posterior simulation output, aiming at a generic method that uses only the likelihoods from the posterior simulation iterations. The key is the harmonic mean identity, which says that the reciprocal of the integrated likelihood is equal to the posterior harmonic mean of the likelihood. The simplest estimator based on the identity is thus the harmonic mean of the likelihoods. While this is an unbiased and simulation-consistent estimator, its reciprocal can have infinite variance, so it is unstable in general. We describe two methods for stabilizing the harmonic mean estimator. In the first, the parameter space is reduced in such a way that the modified estimator involves a harmonic mean of heavier-tailed densities, resulting in a finite-variance estimator.
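The identity lends itself to a short numerical check. The sketch below uses a toy conjugate normal model (assumed purely for illustration), where the integrated likelihood is available in closed form, and compares it against the raw harmonic mean estimator computed stably on the log scale; the heavy-tailed behaviour the abstract warns about is exactly why the raw estimator is unreliable in general.

```python
import numpy as np

# Toy model (illustrative): y_i ~ N(theta, 1), prior theta ~ N(0, 1).
rng = np.random.default_rng(1)
n = 5
y = rng.normal(0.5, 1.0, size=n)

# Exact integrated likelihood: marginally y ~ N(0, I + 11').
Sigma = np.eye(n) + np.ones((n, n))
sign, logdet = np.linalg.slogdet(Sigma)
resid = y @ np.linalg.solve(Sigma, y)
log_ml_exact = -0.5 * (n * np.log(2 * np.pi) + logdet + resid)

# Posterior: theta | y ~ N(sum(y)/(n+1), 1/(n+1)); draw from it
# directly in place of MCMC output.
post_mean = y.sum() / (n + 1)
post_sd = 1.0 / np.sqrt(n + 1)
draws = rng.normal(post_mean, post_sd, size=200_000)

# Log-likelihood of each posterior draw.
loglik = (-0.5 * n * np.log(2 * np.pi)
          - 0.5 * ((y[None, :] - draws[:, None]) ** 2).sum(axis=1))

# Harmonic mean identity: 1 / p(y) = E_post[1 / p(y | theta)].
# Compute the harmonic mean on the log scale (log-sum-exp trick);
# with diffuse priors this estimator can still be very unstable.
neg = -loglik
m = neg.max()
log_ml_hm = -(m + np.log(np.mean(np.exp(neg - m))))

print(log_ml_exact, log_ml_hm)
```

In this small conjugate example the two numbers agree closely; as the abstract notes, the raw estimator's reciprocal can have infinite variance, which is what motivates the stabilized variants the paper develops.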
Relevant statistics for Bayesian model choice. arXiv e-prints, 2011
Abstract

Cited by 18 (5 self)
Summary. The choice of the summary statistics in Bayesian inference, and in particular in ABC algorithms, is paramount to producing a valid outcome. We derive necessary and sufficient conditions on those statistics for the corresponding Bayes factor to be convergent, namely to asymptotically select the true model. Those conditions, which amount to requiring that the expectations of the summary statistics differ asymptotically under the two models, can then be used in ABC settings to determine which summary statistics are appropriate, via a standard and quick Monte Carlo validation.
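The "quick Monte Carlo validation" can be sketched as follows, with illustrative models and statistics that are not from the paper: simulate data sets from two candidate models and compare the Monte Carlo expectations of a candidate summary statistic under each. A statistic whose expectations coincide asymptotically cannot discriminate between the models.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 200, 2000

def expect_stat(simulate, stat):
    # Average the summary statistic over many simulated data sets.
    return np.mean([stat(simulate(n)) for _ in range(reps)])

sim_normal = lambda n: rng.normal(0.0, 1.0, n)
# Laplace with scale 1/sqrt(2) has variance 1, matching the normal.
sim_laplace = lambda n: rng.laplace(0.0, 1.0 / np.sqrt(2), n)

# The sample variance has the same expectation under both models,
# so it cannot separate them.
var_gap = abs(expect_stat(sim_normal, np.var)
              - expect_stat(sim_laplace, np.var))

# The mean absolute deviation differs: E|X| = sqrt(2/pi) ~ 0.798
# for the normal versus 1/sqrt(2) ~ 0.707 for this Laplace.
mad = lambda x: np.mean(np.abs(x))
mad_gap = abs(expect_stat(sim_normal, mad)
              - expect_stat(sim_laplace, mad))

print(var_gap, mad_gap)
```

Here the variance would be a poor ABC summary for choosing between these two models, while the mean absolute deviation satisfies the paper's differing-expectations condition.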
Multilevel models with multivariate mixed response types. Statistical Modelling, 2009
Abstract

Cited by 17 (7 self)
Abstract: We build upon the existing literature to formulate a class of models for multivariate mixtures of Gaussian, ordered or unordered categorical responses and non-Gaussian continuous distributions, each of which can be defined at any level of a multilevel data hierarchy. We describe a Markov chain Monte Carlo algorithm for fitting such models. We show how this unifies a number of disparate problems, including partially observed data and missing data in generalized linear modelling. The two-level model is considered in detail, with worked examples of applications to a prediction problem and to multiple imputation for missing data. We conclude with a discussion outlining possible extensions and connections in the literature. Software for estimating the models is freely available. Key words: Box–Cox transformation; data augmentation; data coarsening; latent Gaussian model; maximum indicant model; MCMC; missing data; mixed response models; multilevel; multiple imputation; multivariate; normalising transformations; partially known values; prediction; prior-informed imputation; probit model
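The data-augmentation and latent-Gaussian ideas listed in the key words can be illustrated in their simplest single-level form: the standard Albert–Chib Gibbs sampler for a Bayesian probit regression (a minimal sketch, not the paper's multilevel algorithm; all data and parameter values below are made up). A binary response y_i is modelled via a latent Gaussian z_i = x_i'beta + e_i with y_i = 1[z_i > 0], and the sampler alternates between the latent variables and the coefficients.

```python
import numpy as np
from scipy.stats import truncnorm

# Simulate illustrative probit data.
rng = np.random.default_rng(3)
n, p = 400, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.3, 1.0])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

# Flat prior on beta for simplicity, so
# beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}).
XtX_inv = np.linalg.inv(X.T @ X)
chol = np.linalg.cholesky(XtX_inv)

beta = np.zeros(p)
draws = []
for it in range(1500):
    # 1. Sample each latent z_i from a normal truncated to (0, inf)
    #    if y_i = 1 and to (-inf, 0) if y_i = 0 (bounds are expressed
    #    for the standardized variable z_i - mu_i).
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # 2. Sample beta from its Gaussian full conditional.
    beta_hat = XtX_inv @ (X.T @ z)
    beta = beta_hat + chol @ rng.normal(size=p)
    if it >= 500:  # discard burn-in
        draws.append(beta)

post_mean = np.mean(draws, axis=0)
print(post_mean)
```

Once a categorical response is expressed through latent Gaussians like this, it can sit alongside continuous responses in a joint multivariate normal structure, which is the basic device the multilevel mixed-response models above generalise.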
Inductive game theory and the dynamics of animal conflict
PLoS Comput. Biol., 2010
Locally adaptive spatial smoothing using conditional autoregressive models. arXiv preprint, 2012
Abstract

Cited by 8 (4 self)
Conditional autoregressive (CAR) models are commonly used to capture spatial correlation in areal unit data, and are typically specified as a prior distribution for a set of random effects within a hierarchical Bayesian model. The spatial correlation structure induced by these models is determined by geographical adjacency, so that two areas have correlated random effects if they share a common border. However, this correlation structure is too simplistic for real data, which are instead likely to include sub-regions of strong correlation as well as locations at which the response exhibits a step-change. This paper therefore proposes an extension to CAR priors that can capture such localised spatial correlation. The proposed approach takes the form of an iterative algorithm, which sequentially updates the spatial correlation structure in the data while estimating the remaining model parameters. The efficacy of the approach is assessed by simulation, and its utility is illustrated in a disease mapping context, using data on respiratory disease risk in Greater
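The fixed, adjacency-driven correlation structure that the paper sets out to relax can be made concrete on a toy example (illustrative values of rho and tau2; not the paper's model). For random effects phi with binary adjacency matrix W, the proper CAR prior is phi ~ N(0, tau2 * (D - rho*W)^{-1}) with D the diagonal matrix of neighbour counts.

```python
import numpy as np

def car_covariance(W, rho=0.9, tau2=1.0):
    # Proper CAR prior: precision Q = (D - rho * W) / tau2, with
    # D = diag(row sums of W); return the implied covariance.
    D = np.diag(W.sum(axis=1))
    Q = (D - rho * W) / tau2
    return np.linalg.inv(Q)

# A one-dimensional chain of 5 areal units: unit i borders i-1, i+1.
n = 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

V = car_covariance(W)
corr = V / np.sqrt(np.outer(np.diag(V), np.diag(V)))
# Correlation decays smoothly with graph distance and never drops
# to zero between neighbours: the standard prior cannot represent
# a step-change between two adjacent areas.
print(np.round(corr[0], 3))
```

The smooth, strictly positive decay along the chain is exactly the rigidity the proposed locally adaptive extension addresses, by letting the effective adjacency structure itself be updated from the data.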