Results 1–10 of 44
Mixtures of g-priors for Bayesian variable selection
 Journal of the American Statistical Association
, 2008
Abstract

Cited by 82 (7 self)
Zellner’s g-prior remains a popular conventional prior for use in Bayesian variable selection, despite several undesirable consistency issues. In this paper, we study mixtures of g-priors as an alternative to default g-priors that resolve many of the problems with the original formulation, while maintaining the computational tractability that has made the g-prior so popular. We present theoretical properties of the mixture g-priors and provide real and simulated examples to compare the mixture formulation with fixed g-priors, Empirical Bayes approaches and other default procedures.
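For a model with p predictors (plus intercept), n observations and coefficient of determination R², the Bayes factor against the intercept-only null under a fixed g has the closed form BF = (1+g)^((n-1-p)/2) / (1+g(1-R²))^((n-1)/2) given by Liang et al., and a mixture such as the hyper-g prior can be integrated out numerically. A minimal NumPy sketch (function names and the quadrature grid are illustrative choices, not from the paper):

```python
import numpy as np

def gprior_bayes_factor(y, X, g):
    """Bayes factor of the model {intercept + X} against the intercept-only
    null under Zellner's g-prior:
    BF = (1+g)^((n-1-p)/2) / (1 + g*(1-R^2))^((n-1)/2).
    Works for scalar g or a NumPy array of g values."""
    n, p = X.shape
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    rss = np.sum((yc - Xc @ beta) ** 2)
    r2 = 1.0 - rss / np.sum(yc ** 2)
    return (1 + g) ** ((n - 1 - p) / 2) / (1 + g * (1 - r2)) ** ((n - 1) / 2)

def hyper_g_bayes_factor(y, X, a=3.0, grid=None):
    """Mix the fixed-g Bayes factor over the hyper-g prior
    p(g) = (a-2)/2 * (1+g)^(-a/2), g > 0, by trapezoidal quadrature
    on a truncated grid (a crude numerical stand-in for the paper's
    closed-form and Laplace treatments)."""
    if grid is None:
        grid = np.linspace(1e-6, 1e4, 200_000)
    dens = (a - 2) / 2 * (1 + grid) ** (-a / 2)
    f = gprior_bayes_factor(y, X, grid) * dens
    return float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(grid)))
```

The quadrature is brute force and truncates the g-tail at 10^4; it is only meant to show how a mixture of g-priors averages the fixed-g Bayes factor over a prior on g.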
From Laplace To Supernova Sn 1987a: Bayesian Inference In Astrophysics
, 1990
Abstract

Cited by 68 (2 self)
The Bayesian approach to probability theory is presented as an alternative to the currently used long-run relative frequency approach, which does not offer clear, compelling criteria for the design of statistical methods. Bayesian probability theory offers unique and demonstrably optimal solutions to well-posed statistical problems, and is historically the original approach to statistics. The reasons for earlier rejection of Bayesian methods are discussed, and it is noted that the work of Cox, Jaynes, and others answers earlier objections, giving Bayesian inference a firm logical and mathematical foundation as the correct mathematical language for quantifying uncertainty. The Bayesian approaches to parameter estimation and model comparison are outlined and illustrated by application to a simple problem based on the Gaussian distribution. As further illustrations of the Bayesian paradigm, Bayesian solutions to two interesting astrophysical problems are outlined: the measurement of wea...
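As context for the Gaussian illustration the abstract mentions, the standard conjugate result (a textbook identity, not a formula quoted from the paper) for a normal mean $\mu$ with known variance $\sigma^2$ and prior $\mu \sim N(\mu_0, \tau_0^2)$ is:

```latex
\mu \mid y_1,\dots,y_n \sim
N\!\left(
  \frac{\mu_0/\tau_0^2 + n\bar{y}/\sigma^2}{1/\tau_0^2 + n/\sigma^2},\;
  \left(\frac{1}{\tau_0^2} + \frac{n}{\sigma^2}\right)^{-1}
\right)
```

so the posterior mean is a precision-weighted average of the prior mean and the sample mean.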
On the effect of prior assumptions in Bayesian model averaging with applications to growth regression
, 2008
Abstract

Cited by 60 (5 self)
Abstract. We consider the problem of variable selection in linear regression models. Bayesian model averaging has become an important tool in empirical settings with large numbers of potential regressors and relatively limited numbers of observations. We examine the effect of a variety of prior assumptions on the inference concerning model size, posterior inclusion probabilities of regressors and on predictive performance. We illustrate these issues in the context of cross-country growth regressions using three datasets with 41 to 67 potential drivers of growth and 72 to 93 observations. Finally, we recommend priors for use in this and related contexts.
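The posterior inclusion probabilities discussed in this abstract can be sketched by exhaustive model enumeration, here using the BIC approximation exp(-BIC/2) as a stand-in for each model's marginal likelihood and an independent-Bernoulli model prior (both are common shortcuts, not the specific priors studied in the paper; names are illustrative):

```python
import itertools
import numpy as np

def bma_inclusion_probs(y, X, prior_incl=0.5):
    """Enumerate all 2^p submodels of a linear regression, score each with
    exp(-BIC/2) times an independent-Bernoulli model prior, and return the
    posterior inclusion probability of each column of X."""
    n, p = X.shape
    weights, inclusions = [], []
    for mask in itertools.product([0, 1], repeat=p):
        cols = [j for j in range(p) if mask[j]]
        Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss = np.sum((y - Z @ beta) ** 2)
        k = len(cols) + 1                        # parameters incl. intercept
        bic = n * np.log(rss / n) + k * np.log(n)
        prior = prior_incl ** len(cols) * (1 - prior_incl) ** (p - len(cols))
        weights.append(np.exp(-bic / 2) * prior)
        inclusions.append(np.array(mask, dtype=float))
    w = np.array(weights)
    w /= w.sum()                                 # normalize model posterior
    return w @ np.array(inclusions)
```

Enumeration is only feasible for small p; with the 41 to 67 candidate regressors of the growth datasets, MCMC over model space replaces the exhaustive loop.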
Objective Bayesian model selection in Gaussian graphical models
, 2007
Abstract

Cited by 36 (4 self)
This paper presents a default model-selection procedure for Gaussian graphical models that involves two new developments. First, we develop a default version of the hyper-inverse Wishart prior for restricted covariance matrices, called the hyper-inverse Wishart g-prior, and show how it corresponds to the implied fractional prior for covariance selection using fractional Bayes factors. Second, we apply a class of priors that automatically handles the problem of multiple hypothesis testing implied by covariance selection. We demonstrate our methods on a variety of simulated examples, concluding with a real example analysing covariation in mutual-fund returns. These studies reveal that the combined use of a multiplicity-correction prior on graphs and fractional Bayes factors for computing marginal likelihoods yields better performance than existing Bayesian methods. Some key words: covariance selection; hyper-inverse Wishart distribution; fractional Bayes factors; Bayesian model selection; multiple hypothesis testing.
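The fractional Bayes factors this abstract relies on follow O'Hagan's standard construction: a fraction $b \in (0,1)$ of the likelihood trains the improper prior, and the remainder discriminates between models,

```latex
q_i(b, y) \;=\;
\frac{\int p(y \mid \theta_i)\,\pi_i(\theta_i)\,d\theta_i}
     {\int p(y \mid \theta_i)^{b}\,\pi_i(\theta_i)\,d\theta_i},
\qquad
B^{F}_{ij} \;=\; \frac{q_i(b, y)}{q_j(b, y)}
```

which is well defined even when the priors $\pi_i$ are improper, since the offending normalizing constants cancel in each $q_i$.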
Empirical Bayes vs. fully Bayes variable selection
, 2008
Abstract

Cited by 32 (0 self)
For the problem of variable selection for the normal linear model, fixed penalty selection criteria such as AIC, Cp, BIC and RIC correspond to the posterior modes of a hierarchical Bayes model for various fixed hyperparameter settings. Adaptive selection criteria obtained by empirical Bayes estimation of the hyperparameters have been shown by George and Foster [2000. Calibration and Empirical Bayes variable selection. Biometrika 87(4), 731–747] to improve on these fixed selection criteria. In this paper, we study the potential of alternative fully Bayes methods, which instead marginalize out the hyperparameters with respect to prior distributions. Several structured prior formulations are considered for which fully Bayes selection and estimation methods are obtained. Analytical and simulation comparisons with empirical Bayes counterparts are studied.
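The fixed-penalty criteria named in the abstract differ only in the per-parameter penalty attached to a common goodness-of-fit term. A sketch using the familiar n·log(RSS/n) form with penalties 2 (AIC), log n (BIC) and 2·log p (one common form of RIC, with p the number of candidate predictors); these are illustrative implementations, not the paper's hierarchical-Bayes derivation:

```python
import numpy as np

def selection_criterion(y, Z, penalty):
    """Generic fixed-penalty criterion n*log(RSS/n) + penalty*k for a design
    matrix Z (intercept column supplied by the caller); smaller is better."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return n * np.log(rss / n) + penalty * Z.shape[1]

def aic(y, Z):
    return selection_criterion(y, Z, 2.0)

def bic(y, Z):
    return selection_criterion(y, Z, np.log(len(y)))

def ric(y, Z, p):
    return selection_criterion(y, Z, 2.0 * np.log(p))
```

Empirical Bayes replaces the fixed penalty with one estimated from the data; the fully Bayes methods of this paper instead integrate the corresponding hyperparameters against a prior.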
Reference analysis
 In Handbook of Statistics 25
, 2005
Abstract

Cited by 24 (3 self)
This chapter describes reference analysis, a method to produce Bayesian inferential statements which only depend on the assumed model and the available data. Statistical information theory is used to define the reference prior function as a mathematical description of that situation where data would best dominate prior knowledge about the quantity of interest. Reference priors are not descriptions of personal beliefs; they are proposed as formal consensus prior functions to be used as standards for scientific communication. Reference posteriors are obtained by formal use of Bayes' theorem with a reference prior. Reference prediction is achieved by integration with a reference posterior. Reference decisions are derived by minimizing a reference posterior expected loss. An information-theory-based loss function, the intrinsic discrepancy, may be used to derive reference procedures for conventional inference problems in scientific investigation, such as point estimation, region estimation and hypothesis testing.
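The intrinsic discrepancy mentioned at the end of the abstract is, in Bernardo's formulation, the smaller of the two directed Kullback–Leibler divergences between a pair of densities:

```latex
\delta\{p_1, p_2\} \;=\;
\min\!\left\{
  \int p_1(x)\,\log\frac{p_1(x)}{p_2(x)}\,dx,\;
  \int p_2(x)\,\log\frac{p_2(x)}{p_1(x)}\,dx
\right\}
```

which is symmetric, non-negative, and zero only when the two densities agree almost everywhere.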
Extending Conventional priors for Testing General Hypotheses
 Biometrika
, 2007
Abstract

Cited by 17 (3 self)
In this paper, we consider that observations Y come from a general normal linear model and that it is desired to test a simplifying (null) hypothesis about the parameters. We approach this problem from an objective Bayesian, model selection perspective. Crucial ingredients for this approach are ‘proper objective priors’ to be used for deriving the Bayes factors. Jeffreys-Zellner-Siow priors have been shown to have good properties for testing null hypotheses defined by specific values of the parameters in full-rank linear models. We extend these priors to deal with general hypotheses in general linear models, not necessarily of full rank. The resulting priors, which we call ‘conventional priors’, are expressed as a generalization of recently introduced ‘partially informative distributions’. The corresponding Bayes factors are fully automatic, easy to compute and very reasonable. The methodology is illustrated for two popular problems: the change-point problem and the equality of treatment effects problem. We compare the conventional priors derived for these problems with other objective Bayesian proposals such as the intrinsic priors. It is concluded that both priors behave similarly, although interesting subtle differences arise. Finally, we accommodate the conventional priors to deal with non-nested model selection as well as multiple model comparison.
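The Jeffreys-Zellner-Siow starting point the abstract refers to is a multivariate Cauchy prior on the regression coefficients, conveniently written as a g-prior with an inverse-gamma mixing distribution on g:

```latex
\beta \mid g, \sigma^2 \;\sim\;
N\!\big(0,\; g\,\sigma^2 (X^{\top}X)^{-1}\big),
\qquad
g \;\sim\; \text{Inv-Gamma}\!\left(\tfrac{1}{2}, \tfrac{n}{2}\right)
```

Integrating over g yields a Cauchy prior with scale matrix $n\,\sigma^2 (X^{\top}X)^{-1}$; the paper's contribution is extending this construction beyond the full-rank case.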
Training Samples in Objective Bayesian Model Selection
 Ann. Statist
, 2004
Abstract

Cited by 14 (2 self)
Central to several objective approaches to Bayesian model selection is the use of training samples (subsets of the data), so as to allow utilization of improper objective priors. The most common prescription for choosing training samples is to choose them to be as small as possible, subject to yielding proper posteriors; these are called minimal training samples.
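Minimal training samples are the ingredient of Berger and Pericchi's intrinsic Bayes factors: the partial Bayes factor computed from the data conditioned on a training sample $y(\ell)$ is averaged over all $L$ minimal training samples, giving for instance the arithmetic intrinsic Bayes factor

```latex
B^{AI}_{ji} \;=\; B_{ji}(y)\,\cdot\,
\frac{1}{L}\sum_{\ell=1}^{L} B_{ij}\big(y(\ell)\big)
```

where $B_{ji}(y)$ is the (possibly ill-defined up to a constant) Bayes factor under the improper priors and the training-sample average supplies the missing calibration.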
Bayesian Computational Approaches to Model Selection
, 2000
Abstract

Cited by 9 (1 self)
The aim of this paper was to provide a summary of the state-of-the-art theory on Bayesian model selection and the application of MCMC algorithms. It has been shown how applications of considerable complexity can be handled successfully within this framework. Several methods for dealing with the use of default, improper priors in the Bayesian model selection framework have been shown. Special care has been taken to pinpoint the subtleties of jumping from one parameter space to another, and in general, to show the construction of MCMC samplers in such scenarios. The focus in the paper was on the reversible jump MCMC algorithm, as this is the most widely used of all existing methods; it is easy to use, flexible and has nice properties. Many references have been cited, with the emphasis given to articles with signal processing applications.
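Schematically, a reversible jump move from state $\theta$ to a state $\theta'$ of possibly different dimension (built from $\theta$ and auxiliary variables $u$ that match dimensions, following Green's construction) is accepted with probability

```latex
\alpha \;=\; \min\!\left\{1,\;
\frac{p(y \mid \theta')\,p(\theta')\,q(u' \mid \theta')}
     {p(y \mid \theta)\,p(\theta)\,q(u \mid \theta)}
\left|\frac{\partial(\theta', u')}{\partial(\theta, u)}\right|
\right\}
```

where the Jacobian accounts for the deterministic dimension-matching transformation; this is the determinant the paper's "subtleties of jumping from one parameter space to another" refer to.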
Criteria for Bayesian model choice with application to variable selection
 In Bayesian Statistics 4
, 2012