Results 1–9 of 9
The practical implementation of Bayesian model selection
Institute of Mathematical Statistics, 2001
Abstract

Cited by 85 (3 self)
In principle, the Bayesian approach to model selection is straightforward. Prior probability distributions are used to describe the uncertainty surrounding all unknowns. After observing the data, the posterior distribution provides a coherent post-data summary of the remaining uncertainty which is relevant for model selection. However, the practical implementation of this approach often requires carefully tailored priors and novel posterior calculation methods. In this article, we illustrate some of the fundamental practical issues that arise for two different model selection problems: the variable selection problem for the linear model and the CART model selection problem.
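The general idea in this abstract can be illustrated with a toy computation. Below is a minimal sketch (my own illustration, not the authors' tailored priors or posterior calculation methods) that enumerates all predictor subsets and converts BIC values into approximate posterior model probabilities; the function names and simulated data are invented for the example.

```python
# Toy illustration: approximate posterior probabilities for variable
# selection, using the BIC approximation exp(-BIC/2) to the marginal
# likelihood and a uniform prior over subsets. (A sketch only; the paper
# develops carefully tailored priors and posterior methods instead.)
import itertools
import math

import numpy as np

def bic_linear(y, X):
    """BIC of an OLS fit with intercept; X may have zero columns."""
    n = len(y)
    Z = np.column_stack([np.ones(n), X]) if X.shape[1] else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    sigma2 = resid @ resid / n
    k = Z.shape[1] + 1                      # coefficients plus error variance
    loglik = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
    return -2 * loglik + k * math.log(n)

def model_posteriors(y, X):
    """Approximate P(subset | data) over all 2^p predictor subsets."""
    p = X.shape[1]
    subsets = [s for r in range(p + 1) for s in itertools.combinations(range(p), r)]
    bics = np.array([bic_linear(y, X[:, list(s)]) for s in subsets])
    w = np.exp(-0.5 * (bics - bics.min()))  # stabilized exp(-BIC/2) weights
    return dict(zip(subsets, w / w.sum()))

# Simulated data: only predictor 0 actually enters the regression.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] + rng.normal(size=100)
post = model_posteriors(y, X)
best = max(post, key=post.get)
incl0 = sum(prob for s, prob in post.items() if 0 in s)
```

Exhaustive enumeration is feasible only for small p; for larger problems the paper's stochastic search methods replace the full loop over subsets.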
The variable selection problem
Journal of the American Statistical Association, 2000
Abstract

Cited by 39 (2 self)
The problem of variable selection is one of the most pervasive model selection problems in statistical applications. Often referred to as the problem of subset selection, it arises when one wants to model the relationship between a variable of interest and a subset of potential explanatory variables or predictors, but there is uncertainty about which subset to use. This vignette reviews some of the key developments which have led to the wide variety of approaches for this problem.
Bayes model averaging with selection of regressors
Journal of the Royal Statistical Society. Series B, Statistical Methodology, 2002
Abstract

Cited by 33 (8 self)
Summary. When a number of distinct models contend for use in prediction, the choice of a single model can offer rather unstable predictions. In regression, stochastic search variable selection with Bayesian model averaging offers a cure for this robustness issue, but at the expense of requiring very many predictors. Here we look at Bayes model averaging incorporating variable selection for prediction. This offers similar mean-square errors of prediction but with a vastly reduced predictor space. This can greatly aid the interpretation of the model. It also reduces the cost if measured variables have costs. The development here uses decision theory in the context of the multivariate general linear model. In passing, this reduced predictor space Bayes model averaging is contrasted with single-model approximations. A fast algorithm for updating regressions in the Markov chain Monte Carlo searches for posterior inference is developed, allowing many more variables than observations to be contemplated. We discuss the merits of absolute rather than proportionate shrinkage in regression, especially when there are more variables than observations. The methodology is illustrated on a set of spectroscopic data used for measuring the amounts of different sugars in an aqueous solution.
The Curve Fitting Problem: A Bayesian Rejoinder
1998
Abstract
In the curve fitting problem, two conflicting desiderata, simplicity and goodness-of-fit, pull in opposite directions. To solve this problem, two proposals are discussed: the first based on the Bayes' theorem criterion (BTC) and the second, advocated by Forster and Sober, based on Akaike's Information Criterion (AIC). We show that AIC, which is frequentist in spirit, is logically equivalent to BTC, provided that a suitable choice of priors is made. We evaluate the charges against Bayesianism and contend that the AIC approach has shortcomings. We also discuss the relationship between Schwarz's Bayesian Information Criterion and BTC.

Overview. In the curve fitting problem, two conflicting desiderata, simplicity and goodness-of-fit, pull in opposite directions. Simplicity forces us to choose straight lines over nonlinear equations, whereas goodness-of-fit forces us to choose the latter over the former. This article discusses two proposals that attempt to strike an optimal balance between these two conflicting desiderata. A Bayesian solution to the curve fitting problem can be obtained by applying Bayes' theorem; this solution is called the Bayes' Theorem Criterion (BTC). Malcolm Forster and Elliot Sober, in contrast, propose Akaike's Information Criterion (AIC), which is frequentist in spirit. The purpose of this article is threefold. First, we address some of the objections to the Bayesian approach raised by Forster and Sober. Second, we describe some limitations in the implementation of the approach based on AIC. Finally, we show that AIC is in fact logically equivalent to BTC with a suitable choice of priors. The underlying theme of this paper is to illuminate the Bayesian/non-Bayesian debate in philosophy of science.
Regression of Unknown Degree
2003
Abstract
This article presents a comparison of four methods to compute the posterior probabilities of the possible orders in polynomial regression models. These posterior probabilities are used for forecasting by Bayesian model averaging. It is shown that Bayesian model averaging provides a closer relationship between the theoretical coverage of the high density predictive interval (HDPI) and the observed coverage than that obtained by selecting the best model. The performance of the different procedures is illustrated with simulations and some known engineering data.
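As a toy stand-in for the four posterior-probability methods the article compares, the sketch below (my own illustration, not the article's procedures) weights polynomial orders by exp(-BIC/2) and forms a model-averaged prediction; the data and function names are invented for the example.

```python
# Toy illustration: Bayesian model averaging over polynomial orders, with
# BIC-based weights standing in for the article's posterior probabilities.
import numpy as np

def fit_poly(x, y, d):
    """Degree-d polynomial fit: return (coefficients, BIC up to a constant)."""
    n = len(y)
    coefs = np.polyfit(x, y, d)
    resid = y - np.polyval(coefs, x)
    sigma2 = resid @ resid / n
    bic = n * np.log(sigma2) + (d + 2) * np.log(n)  # d+1 coefs + variance
    return coefs, bic

def bma_predict(x, y, x_new, max_degree=5):
    """Prediction averaged over orders 0..max_degree with BIC weights."""
    fits = [fit_poly(x, y, d) for d in range(max_degree + 1)]
    bics = np.array([b for _, b in fits])
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()
    preds = np.array([np.polyval(c, x_new) for c, _ in fits])
    return w @ preds, w

# Simulated data from a quadratic curve with Gaussian noise.
rng = np.random.default_rng(1)
x = np.linspace(-2.0, 2.0, 80)
y = 1.0 + 0.5 * x - 0.7 * x**2 + rng.normal(scale=0.3, size=80)
yhat, weights = bma_predict(x, y, x_new=np.array([0.0]))
```

Averaging over orders, rather than conditioning on a single estimated order, is what lets the predictive intervals account for order uncertainty, which is the coverage effect the article examines.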
Methods and Criteria for Model Selection
Abstract
Model selection is an important part of any statistical analysis, and indeed is central to the pursuit of science in general. Many authors have examined this question, from both frequentist and Bayesian perspectives, and many tools for selecting the “best model” have been suggested in the literature. This paper considers the various proposals from a Bayesian decision-theoretic perspective.
Some connections between Bayesian and non-Bayesian methods for regression model selection
2001
Abstract
In this article, we study the connections between Bayesian methods and non-Bayesian methods for variable selection in multiple linear regression. We show that each of the non-Bayesian criteria, FPE, AIC, Cp, and adjusted R^2, has its Bayesian correspondence under an appropriate prior setting. The theoretical results are ...
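For reference, the four classical criteria named in the abstract can be computed for a candidate subset as follows. This is a generic sketch of the standard definitions, not the article's Bayesian correspondences, and the helper names and simulated data are invented.

```python
# Toy illustration: classical subset-selection criteria (FPE, AIC, Mallows'
# Cp, adjusted R^2) for OLS on a chosen subset of predictor columns.
import numpy as np

def selection_criteria(y, X, cols):
    """Compute FPE, AIC, Cp, and adjusted R^2 for the subset `cols`."""
    n = len(y)
    Zf = np.column_stack([np.ones(n), X])          # full model (for Cp's sigma^2)
    Z = np.column_stack([np.ones(n), X[:, cols]])  # candidate model

    def rss(M):
        b, *_ = np.linalg.lstsq(M, y, rcond=None)
        r = y - M @ b
        return r @ r

    rss_k, rss_full = rss(Z), rss(Zf)
    k = Z.shape[1]                                 # parameters in candidate model
    sigma2_full = rss_full / (n - Zf.shape[1])     # error variance from full model
    tss = ((y - y.mean()) ** 2).sum()
    return {
        "FPE": (rss_k / n) * (n + k) / (n - k),
        "AIC": n * np.log(rss_k / n) + 2 * k,
        "Cp": rss_k / sigma2_full + 2 * k - n,
        "adjR2": 1.0 - (rss_k / (n - k)) / (tss / (n - 1)),
    }

# Simulated data: predictor 0 is real, predictor 1 is pure noise.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 2))
y = 3.0 * X[:, 0] + rng.normal(size=60)
crit_true = selection_criteria(y, X, [0])
crit_noise = selection_criteria(y, X, [1])
```

All four criteria trade residual fit against model size; the article's point is that each tradeoff can be reproduced by a Bayesian posterior under a particular prior.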