Results 1–8 of 8
Mixtures of g-priors for Bayesian variable selection
 Journal of the American Statistical Association, 2008
Abstract

Cited by 36 (4 self)
Zellner’s g-prior remains a popular conventional prior for use in Bayesian variable selection, despite several undesirable consistency issues. In this paper, we study mixtures of g-priors as an alternative to default g-priors that resolve many of the problems with the original formulation, while maintaining the computational tractability that has made the g-prior so popular. We present theoretical properties of the mixture g-priors and provide real and simulated examples to compare the mixture formulation with fixed g-priors, Empirical Bayes approaches and other default procedures.
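The closed form that makes the g-prior tractable is the Bayes factor of a model with p_gamma predictors against the intercept-only model, BF = (1 + g)^((n − 1 − p_gamma)/2) / (1 + g(1 − R²))^((n − 1)/2). A minimal sketch below compares a fixed g with a mixture of g-priors; the exponential mixing density is an illustrative stand-in, not one of the hyper-priors studied in the paper:

```python
import numpy as np

def g_prior_bayes_factor(r2, n, p_gamma, g):
    """Closed-form Bayes factor of a model against the intercept-only
    model under Zellner's g-prior:
    BF = (1 + g)^((n - 1 - p_gamma)/2) / (1 + g*(1 - R^2))^((n - 1)/2)."""
    return (1.0 + g) ** ((n - 1 - p_gamma) / 2) / \
           (1.0 + g * (1.0 - r2)) ** ((n - 1) / 2)

n, p_gamma, r2 = 100, 3, 0.4

# Fixed g: the common unit-information choice g = n.
bf_fixed = g_prior_bayes_factor(r2, n, p_gamma, g=n)

# Mixture of g-priors: average the Bayes factor over a prior on g.
# The exponential density here is illustrative only.
rng = np.random.default_rng(0)
g_draws = rng.exponential(scale=n, size=50_000)
bf_mixture = g_prior_bayes_factor(r2, n, p_gamma, g_draws).mean()
```

Averaging the Bayes factor over g, rather than fixing it, is what avoids the consistency issues of a single default choice while keeping the computation in closed form per draw.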
Bayesian Adaptive Sampling for Variable Selection and Model Averaging
Abstract

Cited by 9 (4 self)
For the problem of model choice in linear regression, we introduce a Bayesian adaptive sampling algorithm (BAS) that samples models without replacement from the space of models. For problems that permit enumeration of all models, BAS is guaranteed to enumerate the model space in 2^p iterations, where p is the number of potential variables under consideration. For larger problems where sampling is required, we provide conditions under which BAS provides perfect samples without replacement. When the sampling probabilities in the algorithm are the marginal variable inclusion probabilities, BAS may be viewed as sampling models “near” the median probability model of Barbieri and Berger. As marginal inclusion probabilities are not known in advance, we discuss several strategies to estimate adaptively the marginal inclusion probabilities within BAS. We illustrate the performance of the algorithm using simulated and real data and show that BAS can outperform Markov chain Monte Carlo methods. The algorithm is implemented in the R package BAS available at CRAN.
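The basic objects BAS works with can be sketched as below: models are binary inclusion vectors drawn using marginal inclusion probabilities. This sketch rejects duplicates, which only crudely mimics without-replacement sampling; BAS itself avoids duplicates by construction and adapts the probabilities as it goes:

```python
import numpy as np

def sample_distinct_models(incl_probs, n_models, seed=0):
    """Draw distinct models (binary inclusion vectors) via independent
    Bernoulli draws at the given marginal inclusion probabilities,
    rejecting duplicates.  A crude stand-in for BAS, which samples
    without replacement by construction and updates the inclusion
    probabilities adaptively."""
    rng = np.random.default_rng(seed)
    probs = np.asarray(incl_probs, dtype=float)
    p = len(probs)
    seen = set()
    target = min(n_models, 2 ** p)      # at most 2^p distinct models exist
    while len(seen) < target:
        gamma = tuple((rng.random(p) < probs).astype(int))
        seen.add(gamma)
    return sorted(seen)

# p = 4 variables, so the full space has 2^4 = 16 models.
models = sample_distinct_models([0.5, 0.2, 0.8, 0.5], n_models=10)
```

The rejection loop wastes draws precisely when a few models dominate the posterior, which is why removing duplicates by design, as BAS does, pays off in high dimensions.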
Bayesian Model Search and Multilevel Inference for SNP Association Studies
 Submitted to the Annals of Applied Statistics
Abstract
Technological advances in genotyping have given rise to hypothesis-based association studies of increasing scope. As a result, the scientific hypotheses addressed by these studies have become more complex and more difficult to address using existing analytic methodologies. Obstacles to analysis include inference in the face of multiple comparisons, complications arising from correlations among the SNPs (single nucleotide polymorphisms), choice of their genetic parametrization, and missing data. In this paper we present an efficient Bayesian model search strategy that searches over the space of genetic markers and their genetic parametrization. The resulting method for Multilevel Inference of SNP Associations, MISA, allows computation of multilevel posterior probabilities and Bayes factors at the global, gene and SNP level, with the prior distribution on SNP inclusion in the model providing an intrinsic multiplicity correction. We use simulated data sets to characterize MISA’s statistical power, and show that MISA has higher power to detect association than standard procedures. Using data from the North Carolina Ovarian Cancer Study (NCOCS), MISA identifies variants that were not identified by standard methods and have been externally “validated” in independent studies. We examine sensitivity of the NCOCS results to prior choice and method for imputing missing data. MISA is available in an R package on CRAN.
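The intrinsic multiplicity correction from a prior on inclusion can be illustrated with a Beta-Binomial prior on model size; the hyperparameters a = b = 1 below are an illustrative default, not necessarily the paper's exact choice:

```python
from math import comb, exp, lgamma

def log_beta(x, y):
    """Log of the Beta function via log-gamma."""
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def beta_binomial_log_prior(k, p, a=1.0, b=1.0):
    """Log prior probability of one particular model including k of p
    candidate variables, under a Beta(a, b) prior on the common
    inclusion probability.  With a = b = 1 this is
    p(model) = 1 / ((p + 1) * C(p, k)), which penalizes adding
    variables more heavily as p grows -- a multiplicity correction."""
    return log_beta(a + k, b + p - k) - log_beta(a, b)

# The prior is proper: summing over all C(p, k) models of each size
# k = 0, ..., p gives total probability 1.
p = 5
total = sum(comb(p, k) * exp(beta_binomial_log_prior(k, p))
            for k in range(p + 1))
```

Because each model size shares probability 1/(p + 1) regardless of how many SNPs are scanned, adding more candidate markers automatically lowers each individual model's prior weight.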
A Note on the Bias . . .
2010
Abstract
In variable selection problems that preclude enumeration of models, stochastic search algorithms, often based on Markov chain Monte Carlo, are commonly used to identify a set of models for model selection or model averaging. Because Monte Carlo frequencies of models are often zero or one in high dimensional problems, posterior probabilities calculated from the observed marginal likelihoods, renormalized over the sampled models, are often employed. Such estimates are the only recourse in the newer generation of stochastic search algorithms. In this paper, we show that the approach of estimating model probabilities based on renormalization of posterior probabilities over the set of sampled models leads to bias in many quantities of interest and may not reduce mean squared error.
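A toy simulation makes the bias concrete. With a small model space whose marginal likelihoods are known exactly (the numbers below are made up for illustration), renormalizing over only the sampled models systematically overstates the probability of the top model, because unsampled models shrink the normalizing sum:

```python
import numpy as np

# Toy model space with known marginal likelihoods, so the true
# posterior probabilities are available exactly (values illustrative).
marg_lik = np.array([8.0, 4.0, 2.0, 1.0, 0.5, 0.25, 0.125, 0.0625])
truth = marg_lik / marg_lik.sum()
rng = np.random.default_rng(1)

def renormalized_top_prob(n_draws):
    """Sample model indices i.i.d. from the true posterior, then
    renormalize the observed marginal likelihoods over the distinct
    sampled models -- the estimator discussed in the abstract.
    Returns the resulting estimate for the top model."""
    idx = np.unique(rng.choice(len(marg_lik), size=n_draws, p=truth))
    return marg_lik[0] / marg_lik[idx].sum() if 0 in idx else 0.0

# Averaged over replicates, the top model's estimated probability
# exceeds its true value: low-probability models are rarely sampled,
# so the denominator is too small on average.
mean_est = np.mean([renormalized_top_prob(20) for _ in range(2000)])
```

The same mechanism biases any model-averaged quantity whose weights come from the renormalized probabilities, which is the point the paper formalizes.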
Model Averaging in Economics: An Overview
 Enrique Moral-Benito
2010
Abstract
Standard practice in empirical research is based on two steps: first, researchers select a model from the space of all possible models; second, they proceed as if the selected model had generated the data. Therefore, uncertainty in the model selection step is typically ignored. Alternatively, model averaging accounts for this model uncertainty. In this paper, I review the literature on model averaging with special emphasis on its applications to economics. Finally, as an empirical illustration, I consider model averaging to examine the deterrent effect of capital punishment across states in the US. JEL Classification: C5, K4.
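The contrast between the two-step practice and model averaging fits in one line: instead of reporting the estimate from a single selected model, Bayesian model averaging weights each model's estimate by its posterior probability. The effect estimates and weights below are hypothetical:

```python
import numpy as np

def model_average(estimates, posterior_probs):
    """Bayesian model averaging of a scalar quantity of interest:
    a posterior-probability-weighted combination of model-specific
    estimates, so model uncertainty propagates into the answer."""
    w = np.asarray(posterior_probs, dtype=float)
    w = w / w.sum()                     # guard against unnormalized weights
    return float(np.dot(w, np.asarray(estimates, dtype=float)))

# Hypothetical deterrence-effect estimates from three specifications,
# with posterior model probabilities 0.5, 0.3 and 0.2.
bma_effect = model_average([-0.8, -0.2, 0.1], [0.5, 0.3, 0.2])
# Selecting only the most probable model would report -0.8 and discard
# the disagreement among the other specifications.
```

The averaged effect (−0.44 here) is attenuated relative to the single best model's −0.8, reflecting the uncertainty the two-step procedure ignores.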
Finite Population Estimators in Stochastic
Abstract
Monte Carlo algorithms are commonly used to identify a set of models for Bayesian model selection or model averaging. Because empirical frequencies of models are often zero or one in high dimensional problems, posterior probabilities calculated from the observed marginal likelihoods, renormalized over the sampled models, are often employed. Such estimates are the only recourse in several newer stochastic search algorithms. In this paper, we prove that renormalization of posterior probabilities over the set of sampled models generally leads to bias which may dominate mean squared error. Viewing the model space as a finite population, we propose a new estimator based on a ratio of Horvitz–Thompson estimators which incorporates observed marginal likelihoods, but is approximately unbiased. This is shown to lead to a reduction in mean squared error compared to the empirical or renormalized estimators, with little increase in computational costs.
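One plausible reading of the construction, sketched below, inverse-weights each sampled model's marginal likelihood by its sampling inclusion probability (a Horvitz–Thompson sum for the total posterior mass) and then normalizes; the paper's exact estimator and inclusion probabilities are its own:

```python
import numpy as np

def ht_ratio_probs(marg_lik, incl_prob, sampled):
    """Ratio of Horvitz-Thompson estimators, sketched: weight each
    sampled model's marginal likelihood by 1/pi (its inclusion
    probability under the sampling design), then normalize.  Only a
    plausible reading of the construction in the abstract, not the
    paper's exact estimator."""
    ml = np.asarray(marg_lik, dtype=float)
    pi = np.asarray(incl_prob, dtype=float)
    weighted = ml[sampled] / pi[sampled]   # HT terms for total mass
    est = np.zeros_like(ml)
    est[sampled] = weighted / weighted.sum()
    return est

# Sanity check: if every model is sampled with probability 1, the
# ratio estimator recovers the exact posterior probabilities.
probs = ht_ratio_probs([4.0, 2.0, 1.0, 1.0],
                       [1.0, 1.0, 1.0, 1.0],
                       [0, 1, 2, 3])
```

Unlike plain renormalization, the 1/pi weighting compensates for models that were unlikely to be sampled, which is what restores approximate unbiasedness for the normalizing sum.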