Results 1–6 of 6
Bayes Factors
, 1995
Abstract

Cited by 1766 (74 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology.
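The definition in the abstract can be made concrete in the simplest conjugate setting: a point null for a binomial proportion against a Beta alternative, where both marginal likelihoods are available in closed form. A minimal sketch (the function name, the data values, and the uniform Beta(1, 1) alternative prior are illustrative assumptions, not taken from the paper):

```python
from math import exp, lgamma, log

def log_beta(a, b):
    # log of the Beta function via log-gamma, for numerical stability
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bayes_factor_binomial(y, n, a=1.0, b=1.0):
    """BF_01 for H0: theta = 0.5 vs H1: theta ~ Beta(a, b),
    given y successes in n binomial trials.
    The binomial coefficient cancels in the ratio of marginal likelihoods."""
    log_m0 = n * log(0.5)                                  # marginal likelihood under H0
    log_m1 = log_beta(a + y, b + n - y) - log_beta(a, b)   # marginal likelihood under H1
    return exp(log_m0 - log_m1)

# With prior probability one-half on the null (prior odds 1), the posterior
# odds equal BF_01 -- exactly Jeffreys's setup described in the abstract.
bf = bayes_factor_binomial(y=50, n=100)
posterior_prob_h0 = bf / (1.0 + bf)
```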
Bayes factors and model uncertainty
 DEPARTMENT OF STATISTICS, UNIVERSITY OF WASHINGTON
, 1993
Abstract

Cited by 121 (6 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general, and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods; in "nonstandard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance ...
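The "asymptotic approximations ... easy to compute using the output from standard packages" mentioned above include the Schwarz (BIC) criterion, which needs only maximized log-likelihoods, parameter counts, and the sample size. A rough sketch (the function name and example numbers are mine; the approximation carries O(1) error in the log Bayes factor, so it is a screening tool rather than an exact answer):

```python
from math import log

def approx_log_bf10(loglik1, loglik0, df1, df0, n):
    """Schwarz/BIC approximation to the log Bayes factor for M1 vs M0:
    log BF_10 ~ (l1 - l0) - 0.5 * (d1 - d0) * log(n),
    where l_i is the maximized log-likelihood and d_i the parameter count."""
    return (loglik1 - loglik0) - 0.5 * (df1 - df0) * log(n)

# Hypothetical output from a likelihood-maximizing package:
# M1 (3 parameters) attains log-likelihood -100, M0 (1 parameter) attains -105, n = 200.
log_bf = approx_log_bf10(-100.0, -105.0, 3, 1, 200)
```

Note that, unlike a likelihood-ratio statistic alone, the `log(n)` penalty lets the approximation accumulate evidence *for* the null as the sample grows.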
Markov Chain Monte Carlo Model Determination for Hierarchical and Graphical Log-linear Models
 Biometrika
, 1996
Abstract

Cited by 79 (12 self)
this paper, we will only consider undirected graphical models. For details of Bayesian model selection for directed graphical models see Madigan et al. (1995). An (undirected) graphical model is determined by a set of conditional independence constraints of the form 'γ1 is independent of γ2 conditional on all other γi ∈ C'. Graphical models are so called because they can each be represented as a graph with vertex set C and an edge between each pair γ1 and γ2 unless γ1 and γ2 are conditionally independent as described above. Darroch, Lauritzen and Speed (1980) show that each graphical log-linear model is hierarchical, with generators given by the cliques (complete subgraphs) of the graph. The total number of possible graphical models is clearly given by 2 (...
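The count truncated at the end of the snippet is presumably the number of undirected graphs on the vertex set C: one inclusion/exclusion choice per possible edge, i.e. 2 raised to the number of vertex pairs. A quick illustration (the function name is mine):

```python
def n_graphical_models(p):
    """Number of undirected graphs on p labelled vertices:
    each of the p*(p-1)/2 possible edges is either present or absent."""
    return 2 ** (p * (p - 1) // 2)

# p = 5 vertices -> 2**10 = 1024 candidate graphs, and the count grows
# super-exponentially, which is why MCMC search over model space
# becomes attractive as p grows.
print(n_graphical_models(5))
```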
Bayesian Selection of Log-Linear Models
 Canadian Journal of Statistics
, 1995
Abstract

Cited by 8 (2 self)
A general methodology is presented for finding suitable Poisson log-linear models with applications to multiway contingency tables. Mixtures of multivariate normal distributions are used to model prior opinion when a subset of the regression vector is believed to be nonzero. This prior distribution is studied for two- and three-way contingency tables, in which the regression coefficients are interpretable in terms of odds ratios in the table. Efficient and accurate schemes are proposed for calculating the posterior model probabilities. The methods are illustrated for a large number of two-way simulated tables and for two three-way tables. These methods appear to be useful in selecting the best log-linear model and in estimating parameters of interest that reflect uncertainty in the true model. Key words and phrases: Bayes factors, Laplace method, Gibbs sampling, Model selection, Odds ratios. AMS subject classifications: Primary 62H17, 62F15, 62J12.
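Whatever scheme produces the per-model marginal likelihoods (Laplace approximation, Gibbs sampling, etc.), turning them into posterior model probabilities is a normalization best done on the log scale. A small sketch (the function name and the uniform-model-prior default are illustrative assumptions):

```python
from math import exp, log

def posterior_model_probs(log_marglik, prior=None):
    """Posterior model probabilities from log marginal likelihoods,
    normalized with the log-sum-exp trick for numerical stability
    (marginal likelihoods routinely underflow if exponentiated directly)."""
    k = len(log_marglik)
    prior = prior if prior is not None else [1.0 / k] * k
    logs = [lm + log(p) for lm, p in zip(log_marglik, prior)]
    m = max(logs)                      # subtract the max before exponentiating
    w = [exp(x - m) for x in logs]
    total = sum(w)
    return [x / total for x in w]
```

Ratios of these probabilities under a uniform model prior are exactly the pairwise Bayes factors.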
Bayesian Testing and Estimation of Association in a Two-Way Contingency Table
, 1996
Abstract

Cited by 7 (0 self)
In a two-way contingency table, one is interested in checking the goodness of fit of simple models such as independence, quasi-independence, symmetry, or constant association, and estimating parameters which describe the association structure of the table. In a large table, one may be interested in detecting a few outlying cells which deviate from the main association pattern in the table. Bayesian tests of the above hypotheses are described using a prior defined on the set of interaction terms of the log-linear model. These tests and associated estimation procedures have several advantages over classical fitting/estimation procedures. First, the tests above can give measures of evidence in support of simple hypotheses. Second, the Bayes factors can be used to give estimates of association parameters of the table which allow for uncertainty that the hypothesized model is true. These methods are illustrated for a number of tables. Key words and phrases: Bayes factors, Laplace method, Gib...
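The second advantage — estimates that "allow for uncertainty that the hypothesized model is true" — is model averaging: each model's estimate of the association parameter is weighted by a posterior model probability derived from its Bayes factor. A hedged sketch (the function name and numbers are illustrative, not the paper's procedure):

```python
def bma_estimate(bf_vs_ref, estimates, prior_odds=None):
    """Model-averaged parameter estimate.
    bf_vs_ref[i] is the Bayes factor of model i against a common reference
    model; multiplied by prior odds these give posterior model weights,
    which average the per-model estimates."""
    prior_odds = prior_odds if prior_odds is not None else [1.0] * len(bf_vs_ref)
    w = [b * o for b, o in zip(bf_vs_ref, prior_odds)]
    total = sum(w)
    return sum(wi / total * est for wi, est in zip(w, estimates))

# Two models with equal prior odds: model 1 is 3x better supported (BF = 3),
# so its estimate gets weight 0.75 and model 2's gets weight 0.25.
avg = bma_estimate([3.0, 1.0], [0.0, 4.0])
```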
Testing Hardy-Weinberg Equilibrium: an Objective Bayesian Analysis
Abstract

Cited by 6 (3 self)
Summary: We analyze the general (multiallelic) Hardy-Weinberg equilibrium problem from an objective Bayesian testing standpoint. We argue that for small or moderate sample sizes the answer is rather sensitive to the prior chosen, which suggests carrying out a Bayesian sensitivity analysis with respect to the prior. This objective is achieved through the identification of a class of priors specifically designed for this testing problem. In this paper we consider the class of intrinsic priors under the full model, indexed by a tuning quantity, the training sample size. These priors are objective, satisfy Savage's continuity condition, and have proved to behave extremely well for many statistical testing problems. We compute the posterior probability of the Hardy-Weinberg equilibrium model for the class of intrinsic priors, and thus provide a range of plausible answers. If our decision to reject the null hypothesis does not change as the intrinsic prior varies over this class, we conclude that our analysis is robust. On the other hand, as the sample size grows to infinity any sensitivity to the prior disappears, because under rather general conditions the Bayes factor for intrinsic priors is consistent.
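For intuition about the testing setup, the biallelic special case has closed-form marginal likelihoods under simple conjugate priors: a uniform Beta(1, 1) prior on the allele frequency under HWE, and a Dirichlet(1, 1, 1) prior on the genotype probabilities under the full model. This is only a sketch of the structure of the test — it uses conventional conjugate priors, not the intrinsic priors the paper develops, and so illustrates exactly the prior sensitivity the authors warn about:

```python
from math import exp, lgamma, log

def log_beta(a, b):
    # log Beta function via log-gamma
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bf_hwe_biallelic(n_aa, n_ab, n_bb):
    """Bayes factor for HWE vs the saturated genotype model (biallelic case),
    given genotype counts (n_aa, n_ab, n_bb).
    Under HWE the genotype probabilities are (t^2, 2t(1-t), (1-t)^2) for
    allele frequency t ~ Beta(1, 1); under the full model the genotype
    probabilities get a Dirichlet(1, 1, 1) prior.
    The multinomial coefficient cancels in the ratio."""
    n = n_aa + n_ab + n_bb
    log_m_hwe = n_ab * log(2) + log_beta(2 * n_aa + n_ab + 1, 2 * n_bb + n_ab + 1)
    log_m_full = (lgamma(n_aa + 1) + lgamma(n_ab + 1) + lgamma(n_bb + 1)
                  + lgamma(3) - lgamma(n + 3))
    return exp(log_m_hwe - log_m_full)

# Counts consistent with HWE (allele frequency 1/2) favor the null;
# a heterozygote deficit as extreme as (50, 0, 50) crushes it.
bf_near_hwe = bf_hwe_biallelic(25, 50, 25)
bf_far_from_hwe = bf_hwe_biallelic(50, 0, 50)
```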