Results 1 - 7 of 7
Bayes Factors
, 1995
"... In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null ..."
Abstract

Cited by 981 (70 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology.
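As a concrete illustration of the quantity this abstract describes, the following sketch computes a Bayes factor for the textbook problem of testing a binomial proportion against a uniform alternative. The example and the function name are ours, not drawn from the paper:

```python
from math import comb

def bayes_factor_binomial(k: int, n: int) -> float:
    """Bayes factor B01 for H0: theta = 0.5 versus H1: theta ~ Uniform(0, 1),
    given k successes in n Bernoulli trials (illustrative example only)."""
    m0 = comb(n, k) * 0.5 ** n  # marginal likelihood under H0
    # Under H1 the beta integral gives exactly 1 / (n + 1):
    # integral of comb(n, k) * theta^k * (1 - theta)^(n - k) d(theta)
    m1 = 1.0 / (n + 1)
    return m0 / m1

# With prior probability one-half on the null (prior odds = 1), the
# posterior odds of H0 equal the Bayes factor, as in Jeffreys's setup.
b01 = bayes_factor_binomial(60, 100)
post_p_h0 = b01 / (1.0 + b01)  # posterior probability of H0
```

The closed-form marginal likelihoods make this case exact; in richer models the integrals must be approximated, which is the practical theme of the papers below.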
Bayesian Model Selection in Social Research (with Discussion by Andrew Gelman & Donald B. Rubin, and Robert M. Hauser, and a Rejoinder)
 SOCIOLOGICAL METHODOLOGY 1995, EDITED BY PETER V. MARSDEN, CAMBRIDGE, MASS.: BLACKWELL.
, 1995
"... It is argued that Pvalues and the tests based upon them give unsatisfactory results, especially in large samples. It is shown that, in regression, when there are many candidate independent variables, standard variable selection procedures can give very misleading results. Also, by selecting a singl ..."
Abstract

Cited by 253 (19 self)
It is argued that P-values and the tests based upon them give unsatisfactory results, especially in large samples. It is shown that, in regression, when there are many candidate independent variables, standard variable selection procedures can give very misleading results. Also, by selecting a single model, they ignore model uncertainty and so underestimate the uncertainty about quantities of interest. The Bayesian approach to hypothesis testing, model selection and accounting for model uncertainty is presented. Implementing this is straightforward using the simple and accurate BIC approximation, and can be done using the output from standard software. Specific results are presented for most of the types of model commonly used in sociology. It is shown that this approach overcomes the difficulties with P-values and standard model selection procedures based on them. It also allows easy comparison of non-nested models, and permits the quantification of the evidence for a null hypothesis...
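The BIC shortcut this abstract mentions can be sketched on synthetic regression data. The data, variable names, and helper function here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def bic_ols(X, y):
    """BIC (up to an additive constant) of a Gaussian linear model
    fit by ordinary least squares."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                   # an irrelevant candidate variable
y = 1.0 + 2.0 * x1 + rng.normal(size=n)   # true model uses x1 only

X0 = np.column_stack([np.ones(n), x1])        # smaller model
X1 = np.column_stack([np.ones(n), x1, x2])    # model with the extra variable

# BIC approximation: 2 log B01 is roughly BIC(model 1) - BIC(model 0),
# i.e. positive values are evidence for the smaller model.
approx_2logB01 = bic_ols(X1, y) - bic_ols(X0, y)
```

Because the comparison needs only residual sums of squares and dimensions, it can indeed be read off the output of standard regression software, as the abstract claims.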
Bayes factors and model uncertainty
 DEPARTMENT OF STATISTICS, UNIVERSITY OF WASHINGTON
, 1993
"... In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null ..."
Abstract

Cited by 89 (6 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general, and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods; in "nonstandard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance
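The asymptotic approximation from maximized log-likelihoods that this abstract mentions can be sketched as follows. The interpretation thresholds follow the scale commonly used in the Bayes-factor review literature; the function names are ours:

```python
from math import log

def approx_2logB10(loglik1: float, loglik0: float,
                   k1: int, k0: int, n: int) -> float:
    """Schwarz (BIC-type) asymptotic approximation to twice the log
    Bayes factor for model 1 over model 0, computable from the
    maximized log-likelihoods that standard packages report."""
    return 2.0 * (loglik1 - loglik0) - (k1 - k0) * log(n)

def evidence_strength(two_log_b10: float) -> str:
    """Rough interpretation scale for 2 log B10; the thresholds are
    conventional guidelines, not canonical cutoffs."""
    if two_log_b10 < 2.0:
        return "barely worth mentioning"
    if two_log_b10 < 6.0:
        return "positive"
    if two_log_b10 < 10.0:
        return "strong"
    return "very strong"
```

For example, a log-likelihood gain of 5 from one extra parameter with n = 100 gives 2 log B10 of about 5.4, which this scale reads as "positive" evidence.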
Subregion-Adaptive Integration of Functions Having a Dominant Peak
 JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS
, 1993
"... Many statistical multiple integration problems involve integrands that have a dominant peak. In applying numerical methods to solve these problems, statisticians have paid relatively little attention to existing quadrature methods and available software developed in the numerical analysis literature ..."
Abstract

Cited by 20 (5 self)
Many statistical multiple integration problems involve integrands that have a dominant peak. In applying numerical methods to solve these problems, statisticians have paid relatively little attention to existing quadrature methods and available software developed in the numerical analysis literature. One reason these methods have been largely overlooked, even though they are known to be more efficient than Monte Carlo for well-behaved problems of low dimensionality, may be that when applied naively they are poorly suited for peaked-integrand problems. In this paper we use transformations based on "split-t" distributions to allow the integrals to be efficiently computed using a subregion-adaptive numerical integration algorithm. Our split-t distributions are modifications of those suggested by Geweke (1989) and may also be used to define Monte Carlo importance functions. We then compare our approach to Monte Carlo. In the several examples we examine here, we find subregion-adaptive inte...
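A minimal sketch of why a centering-and-scaling transformation helps adaptive quadrature on a peaked integrand. A plain Gaussian standardization stands in for the paper's split-t construction, and all names here are illustrative:

```python
import math

def adaptive_simpson(f, a, b, tol=1e-10):
    """Minimal subregion-adaptive quadrature (recursive Simpson's rule),
    standing in for the library integrators discussed in the paper."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        if abs(left + right - whole) < 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, 0.5 * tol)
                + recurse(m, b, fm, frm, fb, right, 0.5 * tol))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

# A sharply peaked integrand, mimicking a posterior kernel
mode, sigma = 5.0, math.sqrt(1e-3)
f = lambda x: math.exp(-(x - mode) ** 2 / (2.0 * sigma ** 2))

# Applied naively over a wide interval, every initial sample point
# misses the narrow peak and subdivision terminates near zero.
naive = adaptive_simpson(f, -10.0, 30.0)

# Centering and scaling by mode and spread (the role played by the
# split-t transformation) puts the peak where the rule samples densely.
g = lambda u: f(mode + sigma * u) * sigma  # Jacobian: dx = sigma du
transformed = adaptive_simpson(g, -10.0, 10.0)

exact = sigma * math.sqrt(2.0 * math.pi)  # true value of the integral
```

The transformed version recovers the integral accurately while the naive call returns essentially zero, which is the failure mode the abstract attributes to naive use of quadrature software.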
Empirical Bayes and Item Clustering Effects in a Latent Variable Hierarchical Model: A case study from the National Assessment of Educational Progress
 JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
, 2002
"... Empirical Bayes regression procedures are commonly used in educational and psychological testing as extensions to latent variable models. The National Assessment of Educational Progress (NAEP) is an important national survey using such procedures. NAEP applies empirical Bayes methods to models from ..."
Abstract

Cited by 6 (4 self)
Empirical Bayes regression procedures are commonly used in educational and psychological testing as extensions to latent variable models. The National Assessment of Educational Progress (NAEP) is an important national survey using such procedures. NAEP applies empirical Bayes methods to models from item response theory in order to calibrate student responses to questions of varying difficulty. Partially due to the limited computing technology that existed when NAEP was first conceived, NAEP analyses are carried out using a two-stage estimation procedure that ignores uncertainty about some model parameters. Furthermore, the item response theory model NAEP uses ignores the effect of item clustering created by the design of a test form. Using Markov chain Monte Carlo, we simultaneously estimate all parameters of an expanded model that considers item clustering in order to investigate the impact of item clustering and ignored uncertainty about model parameters on NAEP's reported outcome me...
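A toy random-walk Metropolis sampler conveys the flavor of the simultaneous MCMC estimation described above. The paper's actual target is a far richer hierarchical IRT model; this target, data, and function are illustrative only:

```python
import math
import random

def metropolis(logpost, init, n_iter=20000, step=0.5, seed=1):
    """Minimal random-walk Metropolis sampler; a sketch of the MCMC
    machinery, not of NAEP's actual hierarchical model."""
    rng = random.Random(seed)
    x, lp = init, logpost(init)
    draws = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)            # symmetric proposal
        lp_prop = logpost(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject step
            x, lp = prop, lp_prop
        draws.append(x)
    return draws

# Toy target: posterior of a mean mu given y_i ~ N(mu, 1), flat prior
ys = [0.2, 1.1, 0.7, -0.3, 0.9]
logpost = lambda mu: -0.5 * sum((y - mu) ** 2 for y in ys)

draws = metropolis(logpost, init=0.0)
post_mean = sum(draws[5000:]) / len(draws[5000:])  # discard burn-in
```

Because the sampler explores the full posterior rather than plugging in point estimates, the spread of the retained draws reflects all parameter uncertainty at once, which is exactly what a two-stage procedure discards.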
Methods for Numerical Integration of High-Dimensional Posterior Densities with Application to Statistical Image Models
 In Proc. of the SPIE Conf. on Stochastic Methods in Signal Processing, Image Processing, and Computer Vision
, 1993
"... Numerical computation with Bayesian posterior densities has recently received much attention both in the applied statistics and image processing communities. This paper surveys previous literature and presents new, efficient methods for computing marginal density values for image models that have be ..."
Abstract

Cited by 4 (3 self)
Numerical computation with Bayesian posterior densities has recently received much attention both in the applied statistics and image processing communities. This paper surveys previous literature and presents new, efficient methods for computing marginal density values for image models that have been widely considered in computer vision and image processing. The particular models chosen are a Markov random field formulation, implicit polynomial surface models, and parametric polynomial surface models. The computations can be used to make a variety of statistically based decisions, such as assessing region homogeneity for segmentation, or performing model selection. Detailed descriptions of the methods are provided, along with demonstrative experiments on real imagery. 1 Introduction Bayesian analysis has proven to be a powerful tool in many low-level computer vision and image processing applications; however, in many instances this tool is limited by computational requirements im...
Empirical Bayes and item-clustering effects in a latent variable hierarchical model: A case study from the National Assessment of Educational Progress
 Journal of the American Statistical Association
, 2002
"... Empirical Bayes regression procedures are often used in educational and psychological testing as extensions to latent variables models. The National Assessment of Educational Progress (NAEP) is an important national survey using such procedures. The NAEP applies empirical Bayes methods to models fro ..."
Abstract

Cited by 3 (1 self)
Empirical Bayes regression procedures are often used in educational and psychological testing as extensions to latent variable models. The National Assessment of Educational Progress (NAEP) is an important national survey using such procedures. The NAEP applies empirical Bayes methods to models from item response theory to calibrate student responses to questions of varying difficulty. Due partially to the limited computing technology that existed when the NAEP was first conceived, NAEP analyses are carried out using a two-stage estimation procedure that ignores uncertainty about some model parameters. Furthermore, the item response theory model that the NAEP uses ignores the effect of item clustering created by the design of a test form. Using Markov chain Monte Carlo, we simultaneously estimate all parameters of an expanded model that considers item clustering to investigate the impact of item clustering and ignoring uncertainty about model parameters on an important outcome measure that the NAEP reports. Ignoring these two effects causes substantial underestimation of standard errors and induces a modest bias in location estimates.