Results 1–10 of 26
Bayesian measures of model complexity and fit
Journal of the Royal Statistical Society, Series B, 2002
Cited by 132 (2 self)
Abstract:
[Read before The Royal Statistical Society at a meeting organized by the Research
Transdimensional Markov chain Monte Carlo
in Highly Structured Stochastic Systems, 2003
Cited by 56 (0 self)
Abstract:
In the context of sample-based computation of Bayesian posterior distributions in complex stochastic systems, this chapter discusses some of the uses for a Markov chain with a prescribed invariant distribution whose support is a union of Euclidean spaces of differing dimensions. This leads into a reformulation of the reversible jump MCMC framework for constructing such ‘transdimensional’ Markov chains. This framework is compared to alternative approaches for the same task, including methods that involve separate sampling within different fixed-dimension models. We consider some of the difficulties researchers have encountered in obtaining adequate performance with some of these methods, attributing some of these to misunderstandings, and offer tentative recommendations about algorithm choice for various classes of problem. The chapter concludes with a look towards desirable future developments.
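The reversible-jump construction discussed in this abstract can be illustrated on a deliberately tiny toy problem (a hypothetical sketch, not the chapter's algorithm). The chain below jumps between M0: y ~ N(0, 1), which has no parameters, and M1: y ~ N(mu, 1) with prior mu ~ N(0, 1); proposing mu from its prior makes the dimension-matching Jacobian equal to one, so the acceptance probability collapses to a likelihood ratio:

```python
import math
import random

def rjmcmc_two_models(y, n_iter=20000, seed=1):
    """Toy reversible-jump sampler choosing between
    M0: y ~ N(0, 1)  (no parameters) and
    M1: y ~ N(mu, 1) with prior mu ~ N(0, 1), equal model priors.
    Between-model moves propose mu from its prior, so the
    dimension-matching Jacobian is 1 and the acceptance
    probability reduces to a likelihood ratio."""
    rng = random.Random(seed)
    log_lik = lambda mu: -0.5 * math.log(2 * math.pi) - 0.5 * (y - mu) ** 2
    k, mu = 0, None          # start in M0
    visits_m1 = 0
    for _ in range(n_iter):
        if k == 0:           # propose a jump M0 -> M1
            mu_new = rng.gauss(0.0, 1.0)         # draw mu from its prior
            if math.log(rng.random()) < log_lik(mu_new) - log_lik(0.0):
                k, mu = 1, mu_new
        else:                # propose a jump M1 -> M0
            if math.log(rng.random()) < log_lik(0.0) - log_lik(mu):
                k, mu = 0, None
            else:            # if the jump fails, do a within-model update
                mu_prop = mu + rng.gauss(0.0, 0.5)
                log_post = lambda m: log_lik(m) - 0.5 * m ** 2
                if math.log(rng.random()) < log_post(mu_prop) - log_post(mu):
                    mu = mu_prop
        visits_m1 += k
    return visits_m1 / n_iter
```

Because M1's marginal likelihood is N(y; 0, 2) in closed form here, the chain's fraction of time in M1 can be checked against the exact posterior model probability N(y; 0, 2) / (N(y; 0, 1) + N(y; 0, 2)).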
Computing Bayes factors using thermodynamic integration
Syst Biol
Cited by 33 (5 self)
Abstract:
In the Bayesian paradigm, a common method for comparing two models is to compute the Bayes factor, defined as the ratio of their respective marginal likelihoods. In recent phylogenetic works, the numerical evaluation of marginal likelihoods has often been performed using the harmonic mean estimation procedure. In the present article, we propose to employ another method, based on an analogy with statistical physics, called thermodynamic integration. We describe the method, propose an implementation, and show on two analytical examples that this numerical method yields reliable estimates. In contrast, the harmonic mean estimator leads to a strong overestimation of the marginal likelihood, which is all the more pronounced as the model is higher dimensional. As a result, the harmonic mean estimator systematically favors more parameter-rich models, an artefact that might explain some recent puzzling observations, based on harmonic mean estimates, suggesting that Bayes factors tend to overscore complex models. Finally, we apply our method to the comparison of several alternative models of amino-acid replacement. We confirm our previous observations, indicating that modeling pattern heterogeneity across sites tends to yield better models than standard empirical matrices. [Bayes factor; harmonic mean; mixture model; path sampling; phylogeny; thermodynamic integration.] Bayesian methods have become popular in molecular phylogenetics over the recent years. The simple and intuitive interpretation of the concept of probabilities
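For intuition, thermodynamic (path-sampling) integration can be checked on a one-dimensional conjugate toy model where everything is analytic (an illustrative stand-in, not the article's phylogenetic implementation). For y ~ N(theta, 1) with prior theta ~ N(0, 1), the power posterior p(theta | y, beta) ∝ p(y | theta)^beta p(theta) is Gaussian, so the expected log-likelihood along the path is available in closed form and the integral over beta can be done with the trapezoid rule instead of MCMC:

```python
import math

def log_marginal_ti(y, n_grid=201):
    """Thermodynamic integration for the toy model
    y ~ N(theta, 1), theta ~ N(0, 1):
    log Z = integral over beta in [0, 1] of E_beta[log p(y|theta)],
    where E_beta is under the power posterior, here
    N(beta*y/(1+beta), 1/(1+beta)), so the integrand is exact."""
    def expected_loglik(beta):
        m = beta * y / (1.0 + beta)        # power-posterior mean
        v = 1.0 / (1.0 + beta)             # power-posterior variance
        return -0.5 * math.log(2 * math.pi) - 0.5 * ((y - m) ** 2 + v)
    betas = [i / (n_grid - 1) for i in range(n_grid)]
    vals = [expected_loglik(b) for b in betas]
    h = betas[1] - betas[0]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

def log_marginal_exact(y):
    """Exact answer for the check: marginally y ~ N(0, 2)."""
    return -0.5 * math.log(2 * math.pi * 2.0) - y * y / 4.0
```

In a real application the expectation at each beta would be replaced by an MCMC average over draws from the power posterior; the harmonic mean estimator, by contrast, averages inverse likelihoods over ordinary posterior draws, which is what produces the overestimation discussed above.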
Deviance Information Criterion for Comparing Stochastic Volatility Models
Journal of Business and Economic Statistics, 2002
Cited by 26 (7 self)
Abstract:
Bayesian methods have proved effective in estimating parameters of stochastic volatility models for analyzing financial time series. Recent advances have made it possible to fit stochastic volatility models of increasing complexity, including covariates, leverage effects, jump components and heavy-tailed distributions. However, a formal model comparison via Bayes factors remains difficult. The main objective of this paper is to demonstrate that model selection is more easily performed using the deviance information criterion (DIC). It combines a Bayesian measure of fit with a measure of model complexity. We illustrate the performance of DIC in discriminating between various stochastic volatility models using simulated data and daily returns data on the S&P 100 index.
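The DIC bookkeeping can be sketched on a toy conjugate model (a hypothetical sketch, not a stochastic volatility model): with exact posterior draws for a normal mean, the effective number of parameters p_D = mean deviance minus deviance at the posterior mean, and DIC = mean deviance + p_D; for this model p_D should come out close to n/(n+1):

```python
import math
import random

def dic_normal_mean(y, n_draws=50000, seed=0):
    """DIC for the toy model y_i ~ N(theta, 1), theta ~ N(0, 1),
    using exact posterior draws (the posterior is conjugate normal).
    Deviance D(theta) = -2 log p(y | theta);
    p_D = mean(D) - D(posterior mean); DIC = mean(D) + p_D."""
    rng = random.Random(seed)
    n = len(y)
    post_mean = sum(y) / (n + 1.0)
    post_sd = math.sqrt(1.0 / (n + 1.0))
    def deviance(theta):
        return n * math.log(2 * math.pi) + sum((yi - theta) ** 2 for yi in y)
    draws = [rng.gauss(post_mean, post_sd) for _ in range(n_draws)]
    d_bar = sum(deviance(t) for t in draws) / n_draws
    p_d = d_bar - deviance(post_mean)
    return d_bar + p_d, p_d
```

In a stochastic volatility application the draws would come from an MCMC sampler over parameters and latent volatilities rather than a closed-form posterior, but the DIC arithmetic is the same.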
MCMC methods for continuous-time financial econometrics
2003
Cited by 24 (1 self)
Abstract:
This chapter develops Markov Chain Monte Carlo (MCMC) methods for Bayesian inference in continuous-time asset pricing models. The Bayesian solution to the inference problem is the distribution of parameters and latent variables conditional on observed data, and MCMC methods provide a tool for exploring these high-dimensional, complex distributions. We first provide a description of the foundations and mechanics of MCMC algorithms. This includes a discussion of the Clifford-Hammersley theorem, the Gibbs sampler, the Metropolis-Hastings algorithm, and theoretical convergence properties of MCMC algorithms. We next provide a tutorial on building MCMC algorithms for a range of continuous-time asset pricing models. We include detailed examples for equity price models, option pricing models, term structure models, and regime-switching models. Finally, we discuss the issue of sequential Bayesian inference, both for parameters and state variables.
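Of the building blocks listed (the Gibbs sampler, Metropolis-Hastings), the random-walk Metropolis-Hastings step is the simplest to sketch. The generic sampler below targets any one-dimensional density known up to a normalizing constant (an illustrative sketch, not the chapter's asset-pricing samplers); the symmetric Gaussian proposal makes the Hastings correction cancel:

```python
import math
import random

def metropolis_hastings(log_target, x0, step, n_iter, seed=0):
    """Generic random-walk Metropolis-Hastings sampler.
    Proposes x' = x + N(0, step^2) and accepts with probability
    min(1, target(x') / target(x)), computed in log space."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n_iter):
        x_prop = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_target(x_prop) - log_target(x):
            x = x_prop
        out.append(x)  # on rejection the current state is repeated
    return out

# Example: sample a standard normal (log density known up to a constant).
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 1.0, 20000)
```

A Gibbs sampler would instead cycle through exact draws from each full conditional; in the continuous-time models of this chapter the two are typically combined, with Metropolis-Hastings steps used for conditionals that cannot be sampled directly.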
Transdimensional Markov Chains: A Decade of Progress and Future Perspectives
Journal of the American Statistical Association, 2005
Cited by 18 (2 self)
Abstract:
The last ten years have witnessed the development of sampling frameworks that permit the construction of Markov chains which simultaneously traverse both parameter and model space. In this time substantial methodological progress has been made. In this article we present a survey of the current state of the art and evaluate some of the most recent advances in this field. We also discuss future research perspectives in the context of the drive to develop sampling mechanisms with high degrees of both efficiency and automation.
Robust Inflation-Forecast-Based Rules to Shield Against Indeterminacy
Journal of Economic Dynamics and Control, forthcoming; IMF Discussion Paper, forthcoming; presented at the 10th International Conference on Computing in Economics and Finance, 2006
Cited by 12 (8 self)
Abstract:
We estimate several variants of a linearized form of a New Keynesian model using quarterly US data. Using these rival models and the estimated posterior probabilities we then design rules that are robust in two senses: ‘weakly robust’ rules are guaranteed to be stable and determinate in all the possible variants of the model, whereas ‘strongly robust’ rules, in addition, use the probabilities to minimize an expected loss function of the central bank subject to this model uncertainty. We find three main results. First, in our two model variants with the highest posterior model probabilities there are substantial stabilization gains from commitment. Second, an optimized inflation targeting rule feeding back on current inflation will result in a unique stable equilibrium and realize at least three-quarters of these potential gains, even if it is used in a variant of the model that is not the one for which it was designed. Third, optimized inflation targeting rules perform increasingly less well as the forward horizon increases from j = 0 to j = 1, 2 quarters. For j = 2, only a rule designed for our most indeterminacy-prone model is weakly robust and yields determinacy across all models. A strongly robust rule can be designed that sacrifices performance in the least probable models for better performance in the most probable models. JEL Classification: E52, E37, E58
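The ‘strongly robust’ design can be caricatured in a few lines: given the rival models' loss functions and their posterior model probabilities, choose the rule parameter that minimizes the probability-weighted expected loss. The quadratic losses and probabilities below are entirely hypothetical, standing in for the paper's model-specific central-bank loss functions:

```python
def strongly_robust_rule(losses, model_probs, grid):
    """Pick the rule parameter phi minimizing the expected loss
    sum_i p_i * L_i(phi) across rival models, weighting each
    model's loss by its posterior model probability.
    `losses` is a list of per-model loss functions; `grid` is the
    set of candidate values of phi to search over."""
    def expected_loss(phi):
        return sum(p * L(phi) for p, L in zip(model_probs, losses))
    return min(grid, key=expected_loss)

# Two hypothetical models whose individually optimal responses differ:
L1 = lambda phi: (phi - 1.5) ** 2          # model 1 prefers phi = 1.5
L2 = lambda phi: 2.0 * (phi - 0.5) ** 2    # model 2 prefers phi = 0.5
grid = [i / 100.0 for i in range(301)]     # candidate phi in [0, 3]
phi_star = strongly_robust_rule([L1, L2], [0.6, 0.4], grid)
```

The compromise lands between the two models' individually optimal parameters, pulled toward the more probable model; a ‘weakly robust’ rule would additionally restrict the grid to values that are determinate in every model.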
A Bayesian network classification methodology for gene expression data
Journal of Computational Biology, 2004
Cited by 7 (1 self)
Abstract:
We present new techniques for the application of a Bayesian network learning framework to the problem of classifying gene expression data. The focus on classification permits us to develop techniques that address in several ways the complexities of learning Bayesian nets. Our classification model reduces the Bayesian network learning problem to the problem of learning multiple subnetworks, each consisting of a class label node and its set of parent genes. We argue that this classification model is more appropriate for the gene expression domain than are other structurally similar Bayesian network classification models, such as Naive Bayes and Tree Augmented Naive Bayes (TAN), because our model is consistent with prior domain experience suggesting that a relatively small number of genes, taken in different combinations, is required to predict most clinical classes of interest. Within this framework, we consider two different approaches to identifying parent sets which are supported by the gene expression observations and any other currently available evidence. One approach employs a simple greedy algorithm to search the universe of all genes; the second approach develops and applies a gene selection algorithm whose results are incorporated as a prior to enable an exhaustive search for parent sets over a restricted universe of genes. Two other significant contributions are the construction of classifiers from multiple, competing Bayesian network hypotheses and algorithmic methods for normalizing and binning gene expression data in the
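The first, greedy approach to identifying parent sets can be sketched as follows. This is an illustrative stand-in: the score here is a smoothed conditional log-likelihood of the class labels given the parent genes rather than the paper's Bayesian score, and the gene names and discretized data are made up:

```python
import math
from collections import Counter

def greedy_parent_search(data, labels, max_parents=3):
    """Greedy search for the class node's parent set, in the spirit of
    a Bayesian-network classifier whose structure is a class label with
    a small set of parent genes.  `data` maps gene name -> list of
    discretized expression values; greedily add the gene that most
    improves the score until no candidate helps."""
    def score(parents):
        # count (parent configuration, label) pairs across samples
        counts = Counter()
        for i, lab in enumerate(labels):
            counts[(tuple(data[g][i] for g in parents), lab)] += 1
        config_tot = Counter()
        for (key, lab), c in counts.items():
            config_tot[key] += c
        n_classes = len(set(labels))
        ll = 0.0
        for (key, lab), c in counts.items():
            p = (c + 1.0) / (config_tot[key] + n_classes)  # add-one smoothing
            ll += c * math.log(p)
        return ll
    parents, best = [], score([])
    while len(parents) < max_parents:
        candidates = [g for g in data if g not in parents]
        s, g = max((score(parents + [g]), g) for g in candidates)
        if s <= best:
            break
        parents, best = parents + [g], s
    return parents
```

The smoothing makes large parent sets pay for fragmenting the data into small configuration groups, which is what keeps the selected parent sets small, consistent with the domain assumption stated above.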
Bayesian Input Variable Selection Using Cross-Validation Predictive Densities and Reversible Jump MCMC
2001
Cited by 2 (2 self)
Abstract:
We consider the problem of input variable selection of a Bayesian model. With suitable priors it is possible to have a large number of input variables in Bayesian models, as less relevant inputs can have a smaller effect in the model. To make the model more explainable and easier to analyse, or to reduce the cost of making measurements or the cost of computation, it may be useful to select a smaller set of input variables. Our goal is to find a model with the smallest number of input variables having statistically or practically the same expected utility as the full model. A good estimate for the expected utility, with any desired utility, can be computed using cross-validation predictive densities (Vehtari and Lampinen, 2001). In the case of input selection, there are 2^K input combinations and computing the cross-validation predictive densities for each model easily becomes computationally prohibitive. We propose to use the reversible jump Markov chain Monte Carlo (RJMCMC) method to identify potentially useful input combinations, for which the final model choice and assessment is done using the cross-validation predictive densities. The RJMCMC visits the models according to their posterior probabilities. As models with negligible probability are unlikely to be visited in finite time, the computational savings can be considerable compared to going through all possible models. The posterior probabilities of the models, given by the RJMCMC, are proportional to the product of the prior probabilities of the models and the prior predictive likelihoods of the models. The prior predictive likelihood measures the goodness of the model if no training data were used, and thus can be used to estimate the lower limit of the expected predictive likelihood. These estimates indicate ...
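The quantity being estimated here, a cross-validation predictive density, is easy to make concrete on a conjugate toy model where the held-out predictive is available in closed form (an illustrative sketch, not the paper's models). With K candidate inputs there are 2**K subsets, e.g. K = 20 gives 1,048,576 models, hence the RJMCMC search over subsets instead of exhaustive evaluation:

```python
import math

def loo_log_predictive(y):
    """Leave-one-out cross-validation log predictive density for the
    conjugate toy model y_i ~ N(theta, 1), theta ~ N(0, 1).  Holding
    out y_i, the posterior from the other n-1 points has precision
    (n-1) + 1 = n, so p(y_i | y_-i) = N(y_i; m_i, v_i + 1) with
    m_i = sum(y_-i) / n and v_i = 1 / n."""
    n = len(y)
    total = 0.0
    for i, yi in enumerate(y):
        m = (sum(y) - yi) / n      # posterior mean without point i
        pred_var = 1.0 / n + 1.0   # posterior variance + noise variance
        total += (-0.5 * math.log(2 * math.pi * pred_var)
                  - (yi - m) ** 2 / (2 * pred_var))
    return total
```

For a non-conjugate model each held-out predictive density would itself require an MCMC run, which is why repeating this for every one of the 2^K input combinations is prohibitive and the RJMCMC pre-screening pays off.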
Model Selection via Predictive Explanatory Power
Helsinki University of Technology, Laboratory of Computational Engineering, 1998
Cited by 2 (0 self)
Abstract:
We consider model selection as a decision problem from a predictive perspective. The optimal Bayesian way of handling model uncertainty is to integrate over the model space. Model selection can then be seen as point estimation in the model space. We propose a model selection method based on the Kullback-Leibler divergence from the predictive distribution of the full model to the predictive distributions of the submodels. The loss of predictive explanatory power is defined as the expectation of this predictive discrepancy. The goal is to find the simplest submodel which has a predictive distribution similar to that of the full model, that is, the simplest submodel whose loss of explanatory power is acceptable. To compute the expected predictive discrepancy between complex models, for which analytical solutions do not exist, we propose to use predictive distributions obtained via k-fold cross-validation. We compare the performance of the method to posterior probabilities (Bayes factors), the deviance information criterion (DIC) and direct maximization of the expected utility via cross-validation.
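When both predictive distributions happen to be Gaussian, the Kullback-Leibler discrepancy underlying this method has a closed form, which makes the ‘loss of predictive explanatory power’ concrete (a minimal sketch; the averaging over cross-validated prediction points is left implicit):

```python
import math

def kl_normal(m1, s1, m2, s2):
    """KL(N(m1, s1^2) || N(m2, s2^2)): the discrepancy from a full
    model's Gaussian predictive distribution to a submodel's.
    Closed form: log(s2/s1) + (s1^2 + (m1 - m2)^2) / (2 s2^2) - 1/2."""
    return math.log(s2 / s1) + (s1 * s1 + (m1 - m2) ** 2) / (2 * s2 * s2) - 0.5

# Identical predictives cost nothing; a shifted submodel predictive
# incurs a positive expected discrepancy.
```

In the general case described above, neither predictive is Gaussian and the expectation is instead estimated from k-fold cross-validation draws, but the decision rule, accept the simplest submodel whose average discrepancy is below a tolerance, is the same.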