Results 1 - 7 of 7
Bayes Factors
, 1995
Abstract

Cited by 1176 (71 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology.
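As a concrete illustration of the definition above, a minimal sketch (our own hypothetical example, not taken from the paper): the Bayes factor for a point null on a binomial proportion, a case where both marginal likelihoods have closed forms.

```python
from math import comb

def bayes_factor_binomial(k, n):
    """Bayes factor B01 for H0: p = 1/2 against H1: p ~ Uniform(0, 1),
    given k successes in n trials.  Under the uniform prior the marginal
    likelihood of k is exactly 1 / (n + 1)."""
    m0 = comb(n, k) * 0.5 ** n   # P(k | H0)
    m1 = 1.0 / (n + 1)           # P(k | H1)
    return m0 / m1

# When the prior probability on the null is one-half, the posterior odds
# of H0 equal the Bayes factor itself.
print(round(bayes_factor_binomial(60, 100), 3))   # → 1.095
```

Note that 60 successes in 100 trials leave the evidence nearly even (B01 ≈ 1), whereas a two-sided P value would reject H0 at the 0.05 level; this is the kind of contrast with P values the abstract alludes to.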
Assessment and Propagation of Model Uncertainty
, 1995
Abstract

Cited by 148 (0 self)
In this paper I discuss a Bayesian approach to solving this problem that has long been available in principle but is only now becoming routinely feasible, by virtue of recent computational advances, and examine its implementation in examples that involve forecasting the price of oil and estimating the chance of catastrophic failure of the U.S. Space Shuttle.
Approximate Bayes Factors and Accounting for Model Uncertainty in Generalized Linear Models
, 1993
Abstract

Cited by 110 (28 self)
Ways of obtaining approximate Bayes factors for generalized linear models are described, based on the Laplace method for integrals. I propose a new approximation which uses only the output of standard computer programs such as GLIM; this appears to be quite accurate. A reference set of proper priors is suggested, both to represent the situation where there is not much prior information, and to assess the sensitivity of the results to the prior distribution. The methods can be used when the dispersion parameter is unknown, when there is overdispersion, to compare link functions, and to compare error distributions and variance functions. The methods can be used to implement the Bayesian approach to accounting for model uncertainty. I describe an application to inference about relative risks in the presence of control factors where model uncertainty is large and important. Software to implement the ...
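The flavour of approximation described, one needing only the quantities a standard GLM fit prints, can be sketched as a BIC-style Laplace approximation (this is a generic sketch with names of our choosing, not the paper's exact formula):

```python
import math

def approx_log_bayes_factor(deviance0, k0, deviance1, k1, n):
    """Rough BIC-style Laplace approximation to log B01 for two fitted
    GLMs, using only their residual deviances, numbers of parameters,
    and the sample size n.  Positive values favour model 0."""
    bic0 = deviance0 + k0 * math.log(n)
    bic1 = deviance1 + k1 * math.log(n)
    return 0.5 * (bic1 - bic0)

# Hypothetical fits: the extra parameter of model 1 lowers the deviance
# by only 2, so the simpler model 0 is mildly favoured.
print(round(approx_log_bayes_factor(100.0, 2, 98.0, 3, 50), 3))
```

The appeal, as in the abstract, is that no integration or special software is required: deviances and parameter counts are reported by any GLM package.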
Bayes factors and model uncertainty
 DEPARTMENT OF STATISTICS, UNIVERSITY OF WASHINGTON
, 1993
Abstract

Cited by 95 (6 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general, and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods; in "nonstandard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance ...
Discrimination With Many Variables
, 1999
Abstract

Cited by 4 (0 self)
Many statistical methods for discriminant analysis do not adapt well or easily to situations where the number of variables is large, possibly even exceeding the number of cases in the training set. We explore a variety of methods for providing robust identification of future samples in this situation. We develop a range of flexible Bayesian methods, and primarily a new hierarchical covariance compromise method, akin to regularized discriminant analysis. Although the methods are much more widely applicable, the motivating problem was that of discriminating between groups of samples on the basis of their near infrared spectra. Here the ability of the Bayesian methods to take account of continuity of the spectra may be beneficial. The spectra may consist of absorbances or reflectances at as many as 1000 wavelengths, and yet there may be only tens or hundreds of training samples where both sample spectrum and group identity are known. Such problems arise in the food and pharmaceutical ind...
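The "covariance compromise" idea mentioned above, shrinking an unstable per-group covariance estimate towards a pooled one as in regularized discriminant analysis, can be sketched in a few lines (a generic illustration with hypothetical names; plain nested lists stand in for matrices):

```python
def shrink_covariance(group_cov, pooled_cov, alpha):
    """Convex compromise between one group's covariance matrix and the
    pooled covariance, in the spirit of regularized discriminant
    analysis.  alpha = 0 keeps the group estimate; alpha = 1 pools
    fully.  With many more variables than cases, some such shrinkage is
    needed for the estimate to be invertible at all."""
    p = len(group_cov)
    return [[alpha * pooled_cov[i][j] + (1.0 - alpha) * group_cov[i][j]
             for j in range(p)] for i in range(p)]

# A group with too few samples for a stable estimate is pulled halfway
# toward the pooled estimate:
print(shrink_covariance([[1.0, 0.0], [0.0, 1.0]],
                        [[2.0, 0.0], [0.0, 2.0]], 0.5))
# → [[1.5, 0.0], [0.0, 1.5]]
```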
Approximate Bayesian . . . Weighted Likelihood Bootstrap
, 1991
Abstract

Cited by 1 (0 self)
We introduce the weighted likelihood bootstrap (WLB) as a simple way of approximately simulating from a posterior distribution. This is easy to implement, requiring only an algorithm for calculating the maximum likelihood estimator, such as the EM algorithm or iteratively reweighted least squares; it does not necessarily require actual calculation of the likelihood itself. The method is exact up to an effective prior which is generally unknown but can be identified exactly for unconstrained discrete-data models and approximately for other models. Accuracy of the WLB relies on the chosen distribution of weights. In the generic scheme, the WLB is at least first-order correct under quite general conditions. We have also been able to prove higher-order correctness in some classes of models. The method, which generalizes Rubin's Bayesian bootstrap, provides approximate posterior distributions for prediction, calibration, dependent data and partial likelihood problems, as well as more standard models. The calculation of approximate Bayes factors for model comparison is also considered. We note that, given a sample simulated from the posterior distribution, the required marginal likelihood may be simulation-consistently estimated by the harmonic mean of the associated likelihood values; a modification of this estimator that avoids instability is also noted. An alternative, prediction-based, estimator of the marginal likelihood using the WLB is also described. These methods provide simple ways of calculating approximate Bayes factors and posterior model probabilities for a very wide class of models.
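A minimal sketch of the WLB in the simplest possible case, the mean of a normal model with known variance, where maximizing the reweighted likelihood reduces to taking a weighted average (our own illustrative example; uniform Dirichlet weights are generated from exponentials):

```python
import random
import statistics

def wlb_sample(data, n_draws=2000, seed=0):
    """Weighted likelihood bootstrap for the mean of a normal model with
    known variance: each draw maximizes a randomly reweighted
    log-likelihood, which here is just the weighted sample mean."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        # Dirichlet(1, ..., 1) weights, up to a common scale factor.
        w = [rng.expovariate(1.0) for _ in data]
        s = sum(w)
        draws.append(sum(wi * xi for wi, xi in zip(w, data)) / s)
    return draws

data = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4]
post = wlb_sample(data)
print(statistics.mean(post))   # close to the sample mean, 1.15
```

Note that no likelihood value is ever computed, only a (weighted) maximum likelihood estimate per draw, which is the implementational simplicity the abstract emphasizes.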
Model Discrimination in Meta-Analysis - A Bayesian Perspective
Abstract
In wanting to summarise evidence from a number of studies, a variety of statistical methods have been proposed. Of these the most widely used is the so-called fixed effect model, in which the individual studies are estimating a single, but unknown, overall population effect. When there is `considerable' heterogeneity, in terms of the effect sizes, between the studies, the use of a random effect model has been advocated, in which each individual study is assumed to be estimating its own, unknown, true effect. Discrimination between fixed and random effect models has been advocated by means of a χ² test for heterogeneity, which is acknowledged to have low statistical power. Recent interest has been shown in the use of Bayes factors as an alternative. The use of Bayes factors is illustrated using a number of previously published meta-analyses in which there are varying degrees of heterogeneity. It is shown how the use of Bayes factors leads to a more intuitive assessment of the evidence in favo...
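The χ² heterogeneity test referred to is Cochran's Q; a minimal sketch under the usual inverse-variance weighting (function and variable names are ours):

```python
def cochran_q(effects, variances):
    """Cochran's Q statistic for heterogeneity across k study effect
    estimates, each with its own sampling variance.  Q is referred to a
    chi-squared distribution with k - 1 degrees of freedom; with few
    studies the test has low power, as the abstract notes."""
    w = [1.0 / v for v in variances]
    # Inverse-variance weighted (fixed effect) pooled estimate.
    mu = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    return sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, effects))

# Two hypothetical studies with modestly different effect estimates:
print(cochran_q([0.1, 0.3], [0.04, 0.04]))   # → 0.5, far below the
# chi-squared(1) critical value of 3.84, so heterogeneity is not detected.
```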