Results 1 – 8 of 8
Bayes Factors
, 1995
Abstract

Cited by 981 (70 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology.
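As a toy illustration of the definition above (a textbook example, not one from the paper), consider testing H0: p = 1/2 for a binomial proportion against a uniform prior on p under the alternative; both marginal likelihoods are available in closed form:

```python
import math

def bf01_binomial(k, n):
    """Bayes factor for H0: p = 1/2 against H1: p ~ Uniform(0, 1),
    given k successes in n Bernoulli trials (illustrative sketch).
    Under H1 with a uniform prior, the marginal likelihood is 1/(n+1)."""
    m0 = math.comb(n, k) * 0.5 ** n  # likelihood under the point null
    m1 = 1.0 / (n + 1)               # marginal likelihood under H1
    return m0 / m1
```

With prior probability one-half on the null, as in Jeffreys's setup, the posterior odds of H0 equal this Bayes factor; for instance, 60 successes in 100 trials give a factor near 1, i.e. almost no evidence either way.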
Bayes factors and model uncertainty
 Department of Statistics, University of Washington
, 1993
Abstract

Cited by 89 (6 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general, and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods; in "non-standard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance ...
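One widely used asymptotic approximation of the kind mentioned above is the Schwarz (BIC) approximation, which needs only the maximized log-likelihoods, parameter counts, and sample size; a minimal sketch (the function names are ours):

```python
import math

def bic(loglik, n_params, n_obs):
    # Schwarz's Bayesian information criterion for a fitted model.
    return -2.0 * loglik + n_params * math.log(n_obs)

def approx_bayes_factor(loglik0, k0, loglik1, k1, n_obs):
    """Rough Bayes factor of model 0 vs model 1 via the Schwarz/BIC
    approximation: 2*log(BF01) ~ BIC1 - BIC0, with O(1) relative error
    in the log as the sample size grows."""
    return math.exp(0.5 * (bic(loglik1, k1, n_obs) - bic(loglik0, k0, n_obs)))
```

For example, two models with equal maximized likelihoods but one extra parameter in model 1 yield, at n = 100, a factor of sqrt(100) = 10 in favor of the simpler model.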
Markov Chain Monte Carlo Model Determination for Hierarchical and Graphical Log-linear Models
 Biometrika
, 1996
Abstract

Cited by 55 (8 self)
this paper, we will only consider undirected graphical models. For details of Bayesian model selection for directed graphical models see Madigan et al (1995). An (undirected) graphical model is determined by a set of conditional independence constraints of the form 'γ1 is independent of γ2 conditional on all other γi ∈ C'. Graphical models are so called because they can each be represented as a graph with vertex set C and an edge between each pair γ1 and γ2 unless γ1 and γ2 are conditionally independent as described above. Darroch, Lauritzen and Speed (1980) show that each graphical log-linear model is hierarchical, with generators given by the cliques (complete subgraphs) of the graph. The total number of possible graphical models is clearly given by 2^(|C| choose 2).
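The count at the end of the abstract is just the number of edge subsets on the vertex set: with |C| vertices there are 2^(|C| choose 2) undirected graphs. A brute-force enumeration (illustrative only) confirms this for a small vertex set:

```python
from itertools import chain, combinations

def all_graphs(vertices):
    """Enumerate every undirected graph on the given vertex set,
    represented as a set of edges (unordered vertex pairs)."""
    possible_edges = list(combinations(vertices, 2))
    # Each subset of the possible edges is one graph.
    subsets = chain.from_iterable(
        combinations(possible_edges, r) for r in range(len(possible_edges) + 1)
    )
    return [set(s) for s in subsets]

graphs = all_graphs(["A", "B", "C", "D"])
print(len(graphs))  # 2**6 = 64 graphs on 4 vertices
```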
Bayesian Selection of Log-Linear Models
 Canadian Journal of Statistics
, 1995
Abstract

Cited by 7 (2 self)
A general methodology is presented for finding suitable Poisson log-linear models with applications to multi-way contingency tables. Mixtures of multivariate normal distributions are used to model prior opinion when a subset of the regression vector is believed to be nonzero. This prior distribution is studied for two- and three-way contingency tables, in which the regression coefficients are interpretable in terms of odds ratios in the table. Efficient and accurate schemes are proposed for calculating the posterior model probabilities. The methods are illustrated for a large number of two-way simulated tables and for two three-way tables. These methods appear to be useful in selecting the best log-linear model and in estimating parameters of interest that reflect uncertainty in the true model. Key words and phrases: Bayes factors, Laplace method, Gibbs sampling, Model selection, Odds ratios. AMS subject classifications: Primary 62H17, 62F15, 62J12.
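Given marginal likelihoods (however computed) and prior model probabilities, posterior model probabilities follow directly from Bayes' theorem; a generic, numerically stable sketch, not the paper's specific scheme:

```python
import math

def posterior_model_probs(log_marginals, prior_probs):
    """Posterior model probabilities from log marginal likelihoods and
    prior model probabilities, normalized with the log-sum-exp trick."""
    logs = [lm + math.log(p) for lm, p in zip(log_marginals, prior_probs)]
    m = max(logs)  # subtract the max before exponentiating, for stability
    total = sum(math.exp(x - m) for x in logs)
    return [math.exp(x - m) / total for x in logs]
```

For two models with equal priors, a marginal-likelihood ratio of 3 gives posterior probabilities 0.25 and 0.75.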
Objective Bayesian analysis of contingency tables
, 2002
Abstract

Cited by 6 (2 self)
The statistical analysis of contingency tables is typically carried out with a hypothesis test. In the Bayesian paradigm, default priors for hypothesis tests are typically improper and cannot be used. Although such priors are available, and proper, for testing contingency tables, we show that for testing independence they can be greatly improved on by so-called intrinsic priors. We also argue that because there is no realistic situation that corresponds to the case of conditioning on both margins of a contingency table, the proper analysis of an a × b contingency table should only condition on either the table total or on only one of the margins. The posterior probabilities from the intrinsic priors provide reasonable answers in these cases. Examples using simulated and real data are given.
Testing Hardy-Weinberg Equilibrium: an Objective Bayesian Analysis
Abstract

Cited by 3 (2 self)
Summary: We analyze the general (multiallelic) Hardy-Weinberg equilibrium problem from an objective Bayesian testing standpoint. We argue that for small or moderate sample sizes the answer is rather sensitive to the prior chosen, which suggests carrying out a Bayesian sensitivity analysis with respect to the prior. This objective is achieved through the identification of a class of priors specifically designed for this testing problem. In this paper we consider the class of intrinsic priors under the full model, indexed by a tuning quantity, the training sample size. These priors are objective, satisfy Savage's continuity condition and have proved to behave extremely well for many statistical testing problems. We compute the posterior probability of the Hardy-Weinberg equilibrium model for the class of intrinsic priors, and thus provide a range of plausible answers. If our decision to reject the null hypothesis does not change as the intrinsic prior varies over this class, we conclude that our analysis is robust. On the other hand, as the sample size grows to infinity any sensitivity to the prior disappears, because under rather general conditions the Bayes factor for intrinsic priors is consistent.
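For the biallelic case, a Bayes factor for Hardy-Weinberg equilibrium against the saturated trinomial model has a closed form under flat conjugate priors. This is only an illustrative sketch with uniform priors, not the intrinsic-prior analysis of the paper:

```python
import math

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def bf_hwe(n_aa, n_ab, n_bb):
    """Bayes factor of Hardy-Weinberg equilibrium vs the saturated
    trinomial model for biallelic genotype counts (AA, Ab, bb).
    The multinomial coefficient cancels in the ratio."""
    # Under HWE: cell probabilities (p^2, 2pq, q^2) with p ~ Uniform(0, 1).
    log_m_hwe = n_ab * math.log(2.0) + log_beta(
        2 * n_aa + n_ab + 1, n_ab + 2 * n_bb + 1
    )
    # Saturated model: (p1, p2, p3) ~ Dirichlet(1, 1, 1), density 2 on the simplex.
    n = n_aa + n_ab + n_bb
    log_m_sat = (
        math.log(2.0)
        + math.lgamma(n_aa + 1) + math.lgamma(n_ab + 1) + math.lgamma(n_bb + 1)
        - math.lgamma(n + 3)
    )
    return math.exp(log_m_hwe - log_m_sat)
```

Counts close to the (p^2, 2pq, q^2) pattern, such as (25, 50, 25), give a factor above 1 (evidence for equilibrium), while counts with no heterozygotes, such as (50, 0, 50), give an extremely small factor.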
Assessing Robustness of Intrinsic Tests of Independence in Two-way Contingency Tables
Abstract

Cited by 1 (1 self)
Abstract: A condition needed for testing nested hypotheses from a Bayesian viewpoint is that the prior for the alternative model concentrates mass around the smaller, or null, model. For testing independence in contingency tables, the intrinsic priors satisfy this requirement. Further, the degree of concentration of the priors is controlled by a discrete parameter m, the training sample size, which plays an important role in the resulting answer. In this paper we study, for small or moderate sample sizes, the robustness of tests of independence in contingency tables with respect to intrinsic priors with different degrees of concentration around the null. We compare these tests with frequentist tests and the robust Bayes tests of Good and Crook. For large sample sizes robustness is achieved, since the intrinsic Bayesian tests are consistent. We also discuss conditioning issues and sampling schemes, and argue that conditioning should be on either one margin or the table total, but not on both margins. Examples using real and simulated data are given.
Bayesian Inference for Poisson and Multinomial Log-linear Models
Abstract

Categorical data frequently arise in applications in the social sciences. In such applications, the class of log-linear models, based on either a Poisson or (product) multinomial response distribution, is a flexible model class for inference and prediction. In this paper we consider the Bayesian analysis of both Poisson and multinomial log-linear models. It is often convenient to model multinomial or product multinomial data as observations of independent Poisson variables. For multinomial data, Lindley (1964) showed that this approach leads to valid Bayesian posterior inferences when the prior density for the Poisson cell means factorises in a particular way. We develop this result to provide a general framework for the analysis of multinomial or product multinomial data using a Poisson log-linear model. Valid finite population inferences are also available, which can be particularly important in modelling social data. We then focus particular attention on multivariate normal prior distributions for the log-linear model parameters. Here, an improper prior distribution for certain Poisson model parameters is required for valid multinomial analysis, and we derive conditions under which the resulting posterior distribution is proper. We also consider the construction of prior distributions across models, and for model parameters, when uncertainty exists about the appropriate form of the model. We present classes of Poisson and multinomial models, invariant under certain natural groups of permutations of the cells. We demonstrate that, if prior belief concerning the model parameters is also invariant, as is the case in a 'reference' analysis, then choice of prior distribution is considerably restricted. The analysis of multivariate categorical data in the form of a contingency table is considered in detail. We illustrate the methods with two examples.
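The Poisson representation mentioned above rests on an exact likelihood factorisation: independent Poisson cell counts are equivalent to a multinomial split of the total, times a Poisson for the total itself. A quick numerical check, with made-up counts and means:

```python
import math

def log_poisson(k, mu):
    # Log pmf of a Poisson(mu) variable at k.
    return k * math.log(mu) - mu - math.lgamma(k + 1)

def log_multinomial(counts, probs):
    # Log pmf of a multinomial with the given cell probabilities.
    n = sum(counts)
    out = math.lgamma(n + 1)
    for k, p in zip(counts, probs):
        out += k * math.log(p) - math.lgamma(k + 1)
    return out

# Independent Poisson likelihoods for the cells factorise into a
# multinomial for the cell split times a Poisson for the table total
# (illustrative check; the counts and means are hypothetical).
counts = [12, 7, 30, 1]
means = [10.0, 8.0, 28.0, 2.0]
total_mean = sum(means)

lhs = sum(log_poisson(k, m) for k, m in zip(counts, means))
rhs = log_multinomial(counts, [m / total_mean for m in means]) \
      + log_poisson(sum(counts), total_mean)
print(abs(lhs - rhs) < 1e-9)  # True: the two parameterisations agree
```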