Results 1–10 of 14
Bayes Factors
, 1995
Abstract

Cited by 981 (70 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology.
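As a concrete illustration of the quantity this abstract describes, the sketch below computes a Bayes factor in closed form for a binomial point null against a uniform alternative; the example and function name are ours, not the paper's.

```python
from math import comb

def bayes_factor_binomial(k, n):
    """Bayes factor B01 for H0: p = 1/2 against H1: p ~ Uniform(0, 1),
    given k successes in n Bernoulli trials. Both marginal likelihoods
    are available in closed form for this toy example."""
    m0 = comb(n, k) * 0.5 ** n   # p(data | H0), p fixed at 1/2
    m1 = 1.0 / (n + 1)           # p(data | H1) = integral of the binomial over p
    return m0 / m1

# With prior probability one-half on the null (Jeffreys's setup), the
# prior odds are 1, so the posterior odds equal the Bayes factor.
bf = bayes_factor_binomial(60, 100)
```

Values of the factor above 1 favor the null; values below 1 favor the alternative.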
Model selection and accounting for model uncertainty in graphical models using Occam's window
, 1993
Abstract

Cited by 266 (46 self)
We consider the problem of model selection and accounting for model uncertainty in high-dimensional contingency tables, motivated by expert system applications. The approach most used currently is a stepwise strategy guided by tests based on approximate asymptotic P-values leading to the selection of a single model; inference is then conditional on the selected model. The sampling properties of such a strategy are complex, and the failure to take account of model uncertainty leads to underestimation of uncertainty about quantities of interest. In principle, a panacea is provided by the standard Bayesian formalism which averages the posterior distributions of the quantity of interest under each of the models, weighted by their posterior model probabilities. Furthermore, this approach is optimal in the sense of maximising predictive ability. However, this has not been used in practice because computing the posterior model probabilities is hard and the number of models is very large (often greater than 10^11). We argue that the standard Bayesian formalism is unsatisfactory and we propose an alternative Bayesian approach that, we contend, takes full account of the true model uncertainty by averaging over a much smaller set of models. An efficient search algorithm is developed for finding these models. We consider two classes of graphical models that arise in expert systems: the recursive causal models and the decomposable ...
Bayesian Model Averaging for Linear Regression Models
 Journal of the American Statistical Association
, 1997
Abstract

Cited by 184 (13 self)
We consider the problem of accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. A Bayesian solution to this problem involves averaging over all possible models (i.e., combinations of predictors) when making inferences about quantities of interest.
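The averaging step this abstract refers to can be sketched as a small helper that combines per-model posterior summaries for a quantity of interest; the function name and the triple format are illustrative assumptions, not from the paper.

```python
def bma_posterior(estimates):
    """Combine per-model posterior summaries of a quantity of interest
    by Bayesian model averaging. `estimates` is a list of
    (posterior_model_prob, posterior_mean, posterior_variance) triples;
    the probabilities are renormalised to sum to one."""
    total = sum(w for w, _, _ in estimates)
    mean = sum(w / total * m for w, m, _ in estimates)
    # Law of total variance: within-model spread plus between-model spread.
    var = sum(w / total * (v + m * m) for w, m, v in estimates) - mean ** 2
    return mean, var
```

The variance term shows why conditioning on a single model understates uncertainty: it drops the between-model component entirely.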
Assessment and Propagation of Model Uncertainty
, 1995
Abstract

Cited by 108 (0 self)
In this paper I discuss a Bayesian approach to solving this problem that has long been available in principle but is only now becoming routinely feasible, by virtue of recent computational advances, and examine its implementation in examples that involve forecasting the price of oil and estimating the chance of catastrophic failure of the U.S. Space Shuttle.
Bayes factors and model uncertainty
 Department of Statistics, University of Washington
, 1993
Abstract

Cited by 89 (6 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general, and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods; in "nonstandard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance ...
Model Selection and Accounting for Model Uncertainty in Linear Regression Models
, 1993
Abstract

Cited by 47 (6 self)
We consider the problems of variable selection and accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. The complete Bayesian solution to this problem involves averaging over all possible models when making inferences about quantities of interest. This approach is often not practical. In this paper we offer two alternative approaches. First we describe a Bayesian model selection algorithm called "Occam's Window", which involves averaging over a reduced set of models. Second, we describe a Markov chain Monte Carlo approach which directly approximates the exact solution. Both these model averaging procedures provide better predictive performance than any single model which might reasonably have been selected. In the extreme case where there are many candidate predictors but there is no relationship between any of them and the response, standard variable selection procedures often choose some subset of variables that yields a high R² and a highly significant overall F value. We refer to this unfortunate phenomenon as "Freedman's Paradox" (Freedman, 1983). In this situation, Occam's Window usually indicates the null model as the only one to be considered, or else a small number of models including the null model, thus largely resolving the paradox.
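A minimal sketch of the Occam's Window reduction rule described above: keep only models whose posterior probability is within a fixed factor of the best model's. The cutoff name `c` and the omission of the full algorithm's nesting-based exclusion step are simplifications of ours.

```python
def occams_window(models, c=20.0):
    """Reduce a set of candidate models to Occam's window.

    `models` maps a model name to its (possibly unnormalised) posterior
    model probability. A model survives only if its probability is at
    least 1/c times that of the best-supported model; averaging is then
    done over this much smaller set."""
    best = max(models.values())
    return {name: p for name, p in models.items() if p >= best / c}
```

For example, with `c=20`, a model 1000 times less probable than the best one is dropped from the average, while one 5 times less probable is retained.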
Model Selection for Generalized Linear Models via GLIB, with Application to Epidemiology
, 1993
Abstract

Cited by 11 (5 self)
Epidemiological studies for assessing risk factors often use logistic regression, log-linear models, or other generalized linear models. They involve many decisions, including the choice and coding of risk factors and control variables. It is common practice to select independent variables using a series of significance tests and to choose the way variables are coded somewhat arbitrarily. The overall properties of such a procedure are not well understood, and conditioning on a single model ignores model uncertainty, leading to underestimation of uncertainty about quantities of interest (QUOIs). We describe a Bayesian modeling strategy that formalizes the model selection process and propagates model uncertainty through to inference about QUOIs. Each possible combination of modeling decisions defines a different model, and the models are compared using Bayes factors. Inference about a QUOI is based on an average of its posterior distributions under the individual models, weighted by their...
Change Point and Change Curve Modeling in Stochastic Processes and Spatial Statistics
 Journal of Applied Statistical Science
, 1993
Abstract

Cited by 9 (4 self)
In simple one-dimensional stochastic processes it is feasible to model change points explicitly and to make inference about them. I have found that the Bayesian approach produces results more easily than non-Bayesian approaches. It has the advantages of relative technical simplicity, theoretical optimality, and of allowing a formal comparison between abrupt and gradual descriptions of change. When it can be assumed that there is at most one change point, this is especially simple. This is illustrated in the context of Poisson point processes. A simple approximation is introduced that is applicable to a wide range of problems in which the change point model can be written as a regression or generalized linear model. When the number of change points is unknown, the Bayesian approach proceeds most naturally by state-space modeling or "hidden Markov chains". The general ideas of this are briefly reviewed, particularly the multiprocess Kalman filter. I then describe the application of these...
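The single-change-point case the abstract calls "especially simple" can be sketched for a sequence of Poisson counts with conjugate Gamma priors, under which each segment's marginal likelihood is closed form; the hyperparameters `a`, `b` and the discrete-count framing are illustrative assumptions of ours, not the paper's setup.

```python
from math import lgamma, log, exp

def changepoint_posterior(counts, a=1.0, b=1.0):
    """Posterior over a single change point in a sequence of Poisson
    counts, with independent Gamma(a, b) priors on the rate before and
    after the change and a uniform prior over change positions."""
    def log_marg(seg):
        # log integral of prod_i Poisson(y_i | lam) * Gamma(lam | a, b) d lam
        n, s = len(seg), sum(seg)
        return (a * log(b) - lgamma(a) + lgamma(a + s)
                - (a + s) * log(b + n)
                - sum(lgamma(y + 1) for y in seg))
    # logs[t-1] = log marginal likelihood with the first t counts pre-change
    logs = [log_marg(counts[:t]) + log_marg(counts[t:])
            for t in range(1, len(counts))]
    m = max(logs)  # subtract the max before exponentiating, for stability
    w = [exp(l - m) for l in logs]
    z = sum(w)
    return [x / z for x in w]
```

Entry `t-1` of the result is the posterior probability that exactly the first `t` observations precede the change.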
Bayes Factors and BIC: Comment on Weakliem
, 1998
Abstract

Cited by 3 (0 self)
Weakliem agrees that Bayes factors are useful for model selection and hypothesis testing. He reminds us that the simple and convenient BIC approximation corresponds most closely to one particular prior on the parameter space, the unit information prior, and points out that researchers may have different prior information or opinions. Clearly a prior that represents the available information should be used, although the unit information prior often seems reasonable in the absence of strong prior information. It seems that, among the Bayes factors likely to be used in practice, BIC is conservative in the sense of tending to provide less evidence for additional parameters or "effects". Thus if a Bayes factor based on additional prior information favors an effect, but BIC does not, the prior information is playing a crucial role and this should be made clear when the research is reported. BIC may well have a role as a baseline reference analysis to be provided in routine reporting of research results, perhaps along with Bayes factors based on other priors. In Weakliem's 2 × 2 table examples, BIC and Bayes factors based on Weakliem's preferred priors lead to similar substantive conclusions, but both differ from those based on P values. When there is additional prior information, the technology now exists to express it as ...
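The BIC-to-Bayes-factor correspondence discussed in this comment can be written down directly; the function names below are ours, and the exponential relation is the standard large-sample approximation rather than an exact Bayes factor.

```python
from math import log, exp

def bic(loglik, k, n):
    """Schwarz's BIC for a model with maximised log-likelihood `loglik`,
    k free parameters, and n observations (lower is better here)."""
    return -2.0 * loglik + k * log(n)

def approx_bayes_factor(bic0, bic1):
    """Approximate Bayes factor B01 for model 0 versus model 1 implied
    by their BIC values; this approximation corresponds most closely to
    the unit information prior discussed in the text."""
    return exp(-(bic0 - bic1) / 2.0)
```

A lower BIC for model 0 yields a factor above 1, i.e., approximate evidence in its favor.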
Event History Modeling of World Fertility Survey Data
, 1993
Abstract
Event history analysis seems ideally suited for the analysis of World Fertility Survey (WFS) data, which consists of full birth histories and related information. However, it has not been much used for this purpose, and most analyses of WFS data have consisted of tabulations of standard fertility rates, and regressions with children ever born as the dependent variable, both of which have disadvantages. We suggest that this is because event history analysis has practical drawbacks for WFS data, even though, in principle, it provides a superior analytic framework. These are the many partial dates, the computational burden of discrete-time event history analysis, the need to take account of five clocks at once (age, period, cohort, time since last event, and parity), and the difficulty of interpreting the coefficients. We propose a modeling strategy for the event history analysis of WFS data which aims to overcome these problems, and we apply it to the previously unanalyzed WFS data from...