Results 1–10 of 14
Bayes Factors
, 1995
Abstract

Cited by 981 (70 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology, and psychology.
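The defining identity in this abstract, that the Bayes factor equals the posterior odds of the null when the prior probability on the null is one-half, can be sketched numerically. The helper below is an illustration, not code from the paper:

```python
# Illustrative sketch: posterior odds of H0 from a Bayes factor.
# Posterior odds = Bayes factor * prior odds; when prior_null = 0.5
# the prior odds are 1, so the posterior odds equal the Bayes factor.

def posterior_odds(bayes_factor, prior_null=0.5):
    """Posterior odds of H0 given the Bayes factor B_01 and P(H0)."""
    prior_odds = prior_null / (1.0 - prior_null)
    return bayes_factor * prior_odds
```

With the default `prior_null=0.5`, the function simply returns the Bayes factor, which is exactly the relationship the abstract describes.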
Bayesian Model Selection in Social Research (with Discussion by Andrew Gelman & Donald B. Rubin, and Robert M. Hauser, and a Rejoinder)
 SOCIOLOGICAL METHODOLOGY 1995, EDITED BY PETER V. MARSDEN, CAMBRIDGE, MASS.: BLACKWELL.
, 1995
Abstract

Cited by 253 (19 self)
It is argued that P-values and the tests based upon them give unsatisfactory results, especially in large samples. It is shown that, in regression, when there are many candidate independent variables, standard variable selection procedures can give very misleading results. Also, by selecting a single model, they ignore model uncertainty and so underestimate the uncertainty about quantities of interest. The Bayesian approach to hypothesis testing, model selection, and accounting for model uncertainty is presented. Implementing this is straightforward using the simple and accurate BIC approximation, and can be done using the output from standard software. Specific results are presented for most of the types of model commonly used in sociology. It is shown that this approach overcomes the difficulties with P-values and standard model selection procedures based on them. It also allows easy comparison of non-nested models, and permits the quantification of the evidence for a null hypothesis...
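The BIC route to an approximate Bayes factor mentioned in this abstract can be sketched as follows, using BIC = k*ln(n) - 2*ln(L) and B_10 ≈ exp((BIC_0 - BIC_1)/2). The function names and the numbers are illustrative assumptions, not from the paper:

```python
import math

def bic(log_likelihood, k, n):
    """Schwarz's Bayesian Information Criterion: k*ln(n) - 2*ln(L)."""
    return k * math.log(n) - 2.0 * log_likelihood

def approx_bayes_factor(loglik0, k0, loglik1, k1, n):
    """Approximate Bayes factor B_10 via the BIC difference.

    Values > 1 favor model 1 over model 0.
    """
    return math.exp((bic(loglik0, k0, n) - bic(loglik1, k1, n)) / 2.0)

# Illustrative numbers: model 1 adds one parameter and fits slightly better,
# yet the BIC penalty for the extra parameter outweighs the gain in fit.
b10 = approx_bayes_factor(loglik0=-310.0, k0=2, loglik1=-308.5, k1=3, n=200)
```

Because both models' BICs are computed from maximized log-likelihoods, this can indeed be done from the output of standard software, as the abstract notes.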
Bayes factors and model uncertainty
 DEPARTMENT OF STATISTICS, UNIVERSITY OF WASHINGTON
, 1993
Abstract

Cited by 89 (6 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general, and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods; in "nonstandard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance ...
Model Choice: A Minimum Posterior Predictive Loss Approach
, 1998
Abstract

Cited by 59 (10 self)
Model choice is a fundamental and much discussed activity in the analysis of data sets. Hierarchical models introducing random effects cannot be handled by classical methods. Bayesian approaches using predictive distributions can, though the formal solution, which includes Bayes factors as a special case, can be criticized. We propose a predictive criterion where the goal is good prediction of a replicate of the observed data but tempered by fidelity to the observed values. We obtain this criterion by minimizing posterior loss for a given model and then, for models under consideration, select the one which minimizes this criterion. For a broad range of losses, the criterion emerges approximately as a form partitioned into a goodness-of-fit term and a penalty term. In the context of generalized linear mixed effects models we obtain a penalized deviance criterion comprised of a piece which is a Bayesian deviance measure and a piece which is a penalty for model complexity. We illustrate ...
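For squared-error loss, the goodness-of-fit plus penalty partition described in this abstract can be sketched as below. This is a hedged illustration in the spirit of the criterion, not the paper's exact formulation: G sums squared deviations of the observations from posterior-predictive means, and P sums posterior-predictive variances as the complexity penalty.

```python
from statistics import mean, pvariance

# Illustrative sketch of a partitioned predictive criterion D = G + P under
# squared-error loss. The per-observation replicate draws stand in for
# posterior-predictive samples (e.g. MCMC output); names are assumptions.

def predictive_loss(observed, replicate_draws):
    """observed: list of y_i; replicate_draws: one list of posterior-
    predictive samples per observation. Returns (G, P, D = G + P)."""
    g = sum((y - mean(draws)) ** 2
            for y, draws in zip(observed, replicate_draws))
    p = sum(pvariance(draws) for draws in replicate_draws)
    return g, p, g + p
```

A model that predicts the observed data well keeps G small, while an overly complex model inflates the predictive variances and hence P, so minimizing D trades fit against parsimony.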
Bayesian Variogram Modeling for an Isotropic Spatial Process
 Journal of Agricultural, Biological and Environmental Statistics
, 1997
Abstract

Cited by 26 (5 self)
The variogram is a basic tool in geostatistics. In the case of an assumed isotropic process, it is used to compare variability of the difference between pairs of observations as a function of their distance. Customary approaches to variogram modeling create an empirical variogram and then fit a valid parametric or nonparametric variogram model to it. Here we adopt a Bayesian approach to variogram modeling. In particular, we seek to analyze a recent data set of scallop catches. We have the results of the analysis of an earlier data set from the region to supply useful prior information. In addition, the Bayesian approach enables inference about any aspect of spatial dependence of interest rather than merely providing a fitted variogram. We utilize discrete mixtures of Bessel functions, which allow a rich and flexible class of variogram models. To differentiate between models, we introduce a utility-based model choice criterion that encourages parsimony. We conclude with a fully Bayesian ...
Hypothesis Testing and Model Selection Via Posterior Simulation
 In Practical Markov Chain
, 1995
Abstract

Cited by 24 (1 self)
Introduction: To motivate the methods described in this chapter, consider the following inference problem in astronomy (Soubiran, 1993). Until fairly recently, it has been believed that the Galaxy consists of two stellar populations, the disk and the halo. More recently, it has been hypothesized that there are in fact three stellar populations, the old (or thin) disk, the thick disk, and the halo, distinguished by their spatial distributions, their velocities, and their metallicities. These hypotheses have different implications for theories of the formation of the Galaxy. Some of the evidence for deciding whether there are two or three populations is shown in Figure 1, which shows radial and rotational velocities for n = 2,370 stars. A natural model for this situation is a mixture model with J components, namely y_i = Σ_{j=1}^J π_j ...
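The J-component mixture likelihood behind the two- versus three-population question can be sketched with univariate normal components. This is an illustrative simplification (the chapter's model is for bivariate velocities, and the mixing-weight notation π_j is an assumption), with J = 2 for {disk, halo} and J = 3 for {thin disk, thick disk, halo}:

```python
import math

# Illustrative sketch of the mixture log-likelihood
# log L = sum_i log( sum_j pi_j * N(y_i; mu_j, sigma_j^2) ).

def normal_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2) at y."""
    z = (y - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_loglik(data, weights, mus, sigmas):
    """Log-likelihood of y_i ~ sum_j pi_j N(mu_j, sigma_j^2)."""
    return sum(
        math.log(sum(w * normal_pdf(y, m, s)
                     for w, m, s in zip(weights, mus, sigmas)))
        for y in data
    )
```

Comparing the maximized (or marginal) likelihoods of the J = 2 and J = 3 fits is then the model-choice question the chapter's posterior-simulation methods address.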
Diagnostic Measures for Model Criticism
 JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
, 1996
Abstract

Cited by 13 (1 self)
... In this article we present the general outlook and discuss general families of elaborations for use in practice; the exponential connection elaboration plays a key role. We then describe model elaborations for use in diagnosing departures from normality, goodness of fit in generalized linear models, variable selection in regression, and outlier detection. We illustrate our approach with two applications.
Decisionmetrics: a decision-based approach to econometric modelling
 Journal of Econometrics
, 2007
Abstract

Cited by 9 (0 self)
In many applications it is necessary to use a simple and therefore highly misspecified econometric model as the basis for decision-making. We propose an approach to developing a possibly misspecified econometric model that will be used as the beliefs of an objective expected utility maximiser. A discrepancy between model and 'truth' is introduced that is interpretable as a measure of the model's value for this decision-maker. Our decision-based approach utilises this discrepancy in estimation, selection, inference and evaluation of parametric or semiparametric models. The methods proposed nest quasi-likelihood methods as a special case that arises when model value is measured by the Kullback-Leibler information discrepancy, and also provide an econometric approach for developing parametric decision rules (e.g. technical trading rules) with desirable properties. The approach is illustrated and applied in the context of a CARA investor's decision problem, for which analytical, simulation and empirical results suggest it is very effective.
On Bayes Factors for Nonparametric Alternatives
, 1996
Abstract

Cited by 8 (1 self)
In this paper we derive global Bayes factors for the comparison of a parametric model with a nonparametric alternative. The alternative is constructed by embedding the parametric model in a mixture of Dirichlet Processes. Results include a general explicit form for partially exchangeable sequences as well as closed-form expressions in the context of one-way analysis of variance.