Results 1–9 of 9
Benchmark Priors for Bayesian Model Averaging
 Forthcoming in the Journal of Econometrics, 2001
Abstract

Cited by 171 (5 self)
In contrast to a posterior analysis given a particular sampling model, posterior model probabilities in the context of model uncertainty are typically rather sensitive to the specification of the prior. In particular, “diffuse” priors on model-specific parameters can lead to quite unexpected consequences. Here we focus on the practically relevant situation where we need to entertain a (large) number of sampling models and we have (or wish to use) little or no subjective prior information. We aim at providing an “automatic” or “benchmark” prior structure that can be used in such cases. We focus on the Normal linear regression model with uncertainty in the choice of regressors. We propose a partly noninformative prior structure related to a Natural Conjugate g-prior specification, where the amount of subjective information requested from the user is limited to the choice of a single scalar hyperparameter g0j. The consequences of different choices for g0j are examined. We investigate theoretical properties, such as consistency of the implied Bayesian procedure. Links with classical information criteria are provided. More importantly, we examine the finite sample implications of several choices of g0j in a simulation study. The use of the MC3 algorithm of Madigan and York (1995), combined with efficient coding in Fortran, makes it feasible to conduct large simulations. In addition to posterior criteria, we shall also compare the predictive performance of different priors. A classic example concerning the economics of crime will also be provided and contrasted with results in the literature. The main findings of the paper will lead us to propose a “benchmark” prior specification in a linear regression context with model uncertainty.
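The g-prior model-averaging setup the abstract describes can be sketched in a few lines. The snippet below is an illustrative simplification on simulated data, not the paper's Fortran implementation: it enumerates all regressor subsets of a small problem, scores each model with a g-prior marginal likelihood (flat priors on the intercept and error variance), and normalizes to posterior model probabilities under a uniform model prior. The choice g = 1/n stands in for one possible hyperparameter setting.

```python
import numpy as np
from itertools import combinations

def log_marglik(y, X, g):
    """Log marginal likelihood of a linear model under a g-prior on the
    slopes, up to a model-independent constant (illustrative sketch)."""
    n = len(y)
    k = X.shape[1]
    Z = np.column_stack([np.ones(n), X])           # intercept + chosen regressors
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ coef) ** 2)              # residual sum of squares
    tss = np.sum((y - y.mean()) ** 2)              # total SS around the mean
    shrink = g / (1.0 + g)
    return (0.5 * k * np.log(shrink)
            - 0.5 * (n - 1) * np.log(rss / (1.0 + g) + shrink * tss))

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + 0.5 * rng.standard_normal(n)   # only column 0 matters

g = 1.0 / n                                        # one illustrative benchmark choice
models = [m for k in range(p + 1) for m in combinations(range(p), k)]
logml = np.array([log_marglik(y, X[:, list(m)], g) for m in models])
probs = np.exp(logml - logml.max())
probs /= probs.sum()                               # posterior model probabilities
best = models[int(np.argmax(probs))]
```

Because the simulated signal runs through the first regressor only, the posterior mass should concentrate on models that include column 0.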
The practical implementation of Bayesian model selection
 Institute of Mathematical Statistics, 2001
Abstract

Cited by 128 (3 self)
In principle, the Bayesian approach to model selection is straightforward. Prior probability distributions are used to describe the uncertainty surrounding all unknowns. After observing the data, the posterior distribution provides a coherent post-data summary of the remaining uncertainty which is relevant for model selection. However, the practical implementation of this approach often requires carefully tailored priors and novel posterior calculation methods. In this article, we illustrate some of the fundamental practical issues that arise for two different model selection problems: the variable selection problem for the linear model and the CART model selection problem.
Bayesian CART Model Search
 Journal of the American Statistical Association, 1998
Abstract

Cited by 72 (0 self)
In this paper we put forward a Bayesian approach for finding CART (classification and regression tree) models. The two basic components of this approach consist of prior specification and stochastic search. The basic idea is to have the prior induce a posterior …
The variable selection problem
 Journal of the American Statistical Association, 2000
Abstract

Cited by 62 (3 self)
The problem of variable selection is one of the most pervasive model selection problems in statistical applications. Often referred to as the problem of subset selection, it arises when one wants to model the relationship between a variable of interest and a subset of potential explanatory variables or predictors, but there is uncertainty about which subset to use. This vignette reviews some of the key developments that have led to the wide variety of approaches for this problem.
Spike and Slab Prior Distributions for Simultaneous Bayesian Hypothesis Testing, Model Selection, and Prediction of Nonlinear Outcomes
Abstract

Cited by 2 (0 self)
A small body of literature has used the spike and slab prior specification for model selection with strictly linear outcomes. In this setup a two-component mixture distribution is stipulated for coefficients of interest, with one part centered at zero with very high precision (the spike) and the other a distribution diffusely centered at the research hypothesis (the slab). Through this selective shrinkage, the setup incorporates the zero-coefficient contingency directly into the modeling process to produce posterior probabilities for hypothesized outcomes. We extend the model to qualitative responses by designing a hierarchy of forms over both the parameter and model spaces to achieve variable selection, model averaging, and individual coefficient hypothesis testing. To overcome the technical challenges in estimating the marginal posterior distributions, possibly with a dramatic ratio of density heights of the spike to the slab, we develop a hybrid Gibbs sampling algorithm using an adaptive rejection approach for various discrete outcome models, including dichotomous, polychotomous, and count responses. The performance of the models and methods is assessed with both Monte Carlo experiments and empirical applications in political science.
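The two-component mixture logic can be illustrated without the full Gibbs machinery. The sketch below is a hypothetical single-coefficient version for a Gaussian estimate with a known standard error, not the authors' hierarchy for discrete outcomes: the posterior probability that a coefficient comes from the slab (i.e. is nonzero) follows from comparing the marginal density of the estimate under each mixture component.

```python
import math

def inclusion_prob(beta_hat, se, v_spike=1e-4, v_slab=10.0, prior_inc=0.5):
    """Posterior probability that a coefficient belongs to the slab,
    given beta_hat ~ N(beta, se^2) and a two-component normal mixture
    prior on beta (spike variance v_spike, slab variance v_slab)."""
    def norm_pdf(x, var):
        return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    m_spike = norm_pdf(beta_hat, v_spike + se ** 2)   # marginal under the spike
    m_slab = norm_pdf(beta_hat, v_slab + se ** 2)     # marginal under the slab
    num = prior_inc * m_slab
    return num / (num + (1.0 - prior_inc) * m_spike)
```

A large standardized estimate such as `inclusion_prob(2.5, 0.5)` yields a probability near one, while an estimate close to zero such as `inclusion_prob(0.1, 0.5)` favors the spike.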
Regression of Unknown Degree
, 2003
Abstract
This article presents a comparison of four methods to compute the posterior probabilities of the possible orders in polynomial regression models. These posterior probabilities are used for forecasting by Bayesian model averaging. It is shown that Bayesian model averaging provides a closer relationship between the theoretical coverage of the high density predictive interval (HDPI) and the observed coverage than that obtained by selecting the best model. The performance of the different procedures is illustrated with simulations and some known engineering data.
Operations Management and Manufacturing by
, 2010
Abstract
This thesis focuses on the design and analysis of discrete-event stochastic simulations involving correlated inputs, input modeling for stochastic simulations, and application of OM/OR techniques to the operations of food banks. This thesis contributes to stochastic simulation theory by describing how to jointly represent stochastic and parameter uncertainties in stochastic simulations with correlated inputs, and decompose the variance of the simulation output into terms related to stochastic uncertainty and parameter uncertainty. Such a decomposition would be beneficial for developing data collection schemas to reduce the parameter uncertainty in stochastic simulations. Furthermore, this thesis contributes to vehicle routing theory by being the first work to rigorously study the 1-Commodity Pickup and Delivery Vehicle Routing Problem (1-PDVRP) that arises in the context of food rescue programs of food banks. A synopsis of the three chapters of the thesis follows. Chapter 1: “Accounting for Parameter Uncertainty in Large-Scale Stochastic Simulations with Correlated Inputs.” This chapter considers large-scale stochastic simulations with correlated inputs having Normal …
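A minimal version of the stochastic-vs-parameter variance decomposition can be sketched with the law of total variance. The example below is hypothetical (exponential service times with an uncertain rate), not the thesis's correlated-input setting: an outer loop draws the uncertain parameter, an inner loop simulates given that draw, and the total output variance splits into a within-draw (stochastic) term and a between-draw (parameter) term.

```python
import numpy as np

rng = np.random.default_rng(1)
n_outer, n_inner = 200, 500

# Hypothetical input model: exponential service times whose rate is
# itself uncertain (e.g. a posterior given limited data); mean rate ~ 1.
thetas = rng.gamma(shape=100.0, scale=0.01, size=n_outer)

cond_means = np.empty(n_outer)   # estimate of E[output | theta] per draw
cond_vars = np.empty(n_outer)    # estimate of Var[output | theta] per draw
for i, theta in enumerate(thetas):
    y = rng.exponential(scale=1.0 / theta, size=n_inner)  # stochastic noise
    cond_means[i] = y.mean()                              # output = sample mean
    cond_vars[i] = y.var(ddof=1) / n_inner                # variance of that mean

# Law of total variance: Var(Y) = E[Var(Y|theta)] + Var(E[Y|theta]).
stochastic_part = cond_vars.mean()
parameter_part = cond_means.var(ddof=1) - stochastic_part
```

In this toy setup the between-draw term dominates, which is exactly the situation where collecting more data to pin down the parameter would pay off.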
Politics, Stock Markets, and Model Uncertainty∗
, 2007
Abstract
The empirical finance literature mostly documents a weak response of stock markets to political events. We test whether this weak response may be due to model uncertainty. We do so by applying Bayesian techniques (Bayesian Model Averaging) to a novel data set of political variables for 17 democracies. Our results confirm that the relationship between political variables and the level of stock returns is weak. Stock market volatility, however, is shown to be significantly affected by a number of political variables.
Averaging
, 2003
Abstract
A Bayesian approach is used to estimate a nonparametric regression model. The main features of the procedure are, first, the functional form of the curve is approximated by a mixture of local polynomials via Bayesian Model Averaging (BMA); second, the model weights are approximated by the BIC criterion; and third, a robust estimation procedure is incorporated to improve the smoothness of the estimated curve. The models considered at each sample point are polynomial regression models of order smaller than four, and the parameters of each model are estimated in a local window. The estimated value is computed by BMA, and the posterior probability of each model is approximated by the exponential of the BIC criterion. Robustness is achieved by assuming that the noise follows a scale-contaminated normal model, so that the effect of possible outliers is downweighted. The procedure provides a smooth curve and allows straightforward prediction and quantification of the uncertainty. The method is illustrated with several examples and some Monte Carlo experiments.
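The BIC-weighted mixture of local polynomials can be sketched as follows. This is an illustrative reconstruction under stated assumptions (a simple symmetric window of half-width `h`, orders 0 to 3, and no robustness step), not the authors' exact procedure: at each evaluation point, every candidate order is fit locally, BIC is converted to an approximate posterior model probability via exp(-BIC/2), and the fitted values are averaged.

```python
import numpy as np

def bma_local_fit(x, y, x0, h=1.0, max_order=3):
    """BMA estimate of the regression function at x0: fit polynomials of
    order 0..max_order inside a local window, weight each fit by
    exp(-BIC/2), and average the fitted values at x0."""
    mask = np.abs(x - x0) <= h
    xs, ys = x[mask], y[mask]
    n = len(ys)
    fits, bics = [], []
    for k in range(max_order + 1):
        Z = np.vander(xs - x0, k + 1)          # columns (x-x0)^k, ..., 1
        coef, *_ = np.linalg.lstsq(Z, ys, rcond=None)
        rss = np.sum((ys - Z @ coef) ** 2)
        bic = n * np.log(rss / n) + (k + 1) * np.log(n)
        fits.append(coef[-1])                  # constant term = fit at x0
        bics.append(bic)
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()                               # approximate model probabilities
    return float(np.dot(w, fits))

# Usage on simulated data: recover sin(x) at x0 = 5 from noisy samples.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)
est = bma_local_fit(x, y, x0=5.0)
```

Averaging over orders rather than picking the single best order is what smooths out the abrupt changes a hard order-selection rule would produce between neighboring evaluation points.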