Results 1–8 of 8
Bayes Factors
1995
Abstract

Cited by 983 (70 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology.
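The definition in the abstract above (the Bayes factor as the posterior odds of the null when the prior probability on the null is one-half) can be made concrete with a small sketch. The binomial test and uniform prior below are illustrative assumptions, not an example from the paper:

```python
from math import comb

# Test H0: theta = 0.5 against H1: theta ~ Uniform(0, 1)
# for x successes in n Bernoulli trials (illustrative values).
n, x = 20, 14

# Marginal likelihood under H0: binomial probability at theta = 0.5.
m0 = comb(n, x) * 0.5 ** n

# Marginal likelihood under H1: the integral of C(n,x) theta^x (1-theta)^(n-x)
# over a uniform prior equals 1 / (n + 1).
m1 = 1.0 / (n + 1)

# Bayes factor in favour of H0; with prior probability 1/2 on H0
# this is also the posterior odds of H0.
bf01 = m0 / m1
print(bf01)  # about 0.78: 14 successes in 20 trials barely move the odds
```

With equal prior odds the factor below 1 mildly favours the alternative, far less dramatically than the corresponding P-value might suggest.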
Bayes factors and model uncertainty
 DEPARTMENT OF STATISTICS, UNIVERSITY OF WASHINGTON, 1993
Abstract

Cited by 89 (6 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general, and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods; in "nonstandard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance ...
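The remark above about asymptotic approximations computable from the output of standard likelihood-maximizing packages refers to the Schwarz (BIC) approximation, log BF ≈ ΔBIC/2. The sketch below applies it to a toy normal-mean comparison; the data are made up for illustration:

```python
import math

# Toy data (illustrative, not from the paper).
xs = [0.8, 1.2, 0.3, 1.5, 0.9, 1.1, 0.4, 1.0]
n = len(xs)

def normal_loglik(xs, mu):
    """Maximized normal log-likelihood, profiling out the variance at its MLE."""
    s2 = sum((x - mu) ** 2 for x in xs) / len(xs)
    return -0.5 * len(xs) * (math.log(2 * math.pi * s2) + 1)

# M0: mean fixed at 0 (one free parameter: the variance).
# M1: mean free (two free parameters).
ll0 = normal_loglik(xs, 0.0)
ll1 = normal_loglik(xs, sum(xs) / n)

def bic(loglik, k, n):
    return -2 * loglik + k * math.log(n)

# Schwarz approximation: log BF(M1 vs M0) is roughly (BIC0 - BIC1) / 2.
log_bf10 = (bic(ll0, 1, n) - bic(ll1, 2, n)) / 2
print(log_bf10)  # clearly positive: the data favour a nonzero mean
```

Only the maximized log-likelihoods and parameter counts are needed, which is why the approximation is easy to obtain from standard software output.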
Bayesian inference procedures derived via the concept of relative surprise
 Communications in Statistics, 1997
Abstract

Cited by 17 (6 self)
... of least relative surprise; model checking; change of variable problem; cross-validation. We consider the problem of deriving Bayesian inference procedures via the concept of relative surprise. The mathematical concept of surprise has been developed by I.J. Good in a long sequence of papers. We make a modification to this development that permits the avoidance of a serious defect; namely, the change of variable problem. We apply relative surprise to the development of estimation, hypothesis testing and model checking procedures. Important advantages of the relative surprise approach to inference include the lack of dependence on a particular loss function and complete freedom to the statistician in the choice of prior for hypothesis testing problems. Links are established with common Bayesian inference procedures such as highest posterior density regions, modal estimates and Bayes factors. From a practical perspective new inference ...
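The relative-surprise idea ranks parameter values by the ratio of posterior to prior density; since that ratio is proportional to the likelihood, a least-relative-surprise estimate in a simple model coincides with the MLE. The grid-search sketch below, with an assumed Beta(2, 2) prior and binomial data, is an illustration of the concept, not code from the paper:

```python
import math

# Binomial data: x successes in n trials (illustrative values).
n, x = 20, 14

def beta_pdf(t, a, b):
    """Density of a Beta(a, b) distribution at t in (0, 1)."""
    logb = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(t) + (b - 1) * math.log(1 - t) - logb)

# Prior Beta(2, 2); the conjugate posterior is Beta(x + 2, n - x + 2).
def relative_surprise(t):
    return beta_pdf(t, x + 2, n - x + 2) / beta_pdf(t, 2, 2)

# Maximize the posterior-to-prior ratio over a grid to get the
# least-relative-surprise estimate.
grid = [i / 1000 for i in range(1, 1000)]
lrse = max(grid, key=relative_surprise)
print(lrse)  # 0.7: the ratio is proportional to the likelihood, so this is the MLE
```

Because the ratio depends on the prior only through the marginal likelihood of the data, the estimate is invariant under reparameterization, which is the point of the change-of-variable fix discussed in the abstract.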
Reference analysis
 In Handbook of Statistics 25, 2005
Abstract

Cited by 13 (2 self)
This chapter describes reference analysis, a method to produce Bayesian inferential statements which only depend on the assumed model and the available data. Statistical information theory is used to define the reference prior function as a mathematical description of that situation where data would best dominate prior knowledge about the quantity of interest. Reference priors are not descriptions of personal beliefs; they are proposed as formal consensus prior functions to be used as standards for scientific communication. Reference posteriors are obtained by formal use of Bayes' theorem with a reference prior. Reference prediction is achieved by integration with a reference posterior. Reference decisions are derived by minimizing a reference posterior expected loss. An information-theory-based loss function, the intrinsic discrepancy, may be used to derive reference procedures for conventional inference problems in scientific investigation, such as point estimation, region estimation and hypothesis testing.
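For a regular one-parameter model the reference prior reduces to Jeffreys' prior; the binomial case below (reference prior Beta(1/2, 1/2), hence a conjugate reference posterior) is a standard instance of the approach the abstract describes, sketched with illustrative numbers:

```python
# Reference analysis for a binomial proportion: in this one-parameter
# model the reference prior is Jeffreys' Beta(1/2, 1/2), so the
# reference posterior after x successes in n trials is conjugate.
n, x = 20, 14

a_post, b_post = x + 0.5, n - x + 0.5   # Beta(14.5, 6.5)

post_mean = a_post / (a_post + b_post)
mle = x / n
print(post_mean, mle)  # 0.690..., 0.7: light shrinkage toward 1/2
```

The posterior depends only on the assumed model and the data, which is the sense in which reference posteriors serve as a standard for scientific communication.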
Consistency of Bayesian Procedures for Variable Selection
2008
Abstract
It has long been known that for the comparison of pairwise nested models, a decision based on the Bayes factor produces a consistent ...
Specification of prior distributions under model uncertainty
2008
Abstract
We consider the specification of prior distributions for Bayesian model comparison, focusing on regression-type models. We propose a particular joint specification of the prior distribution across models so that the sensitivity of posterior model probabilities to the dispersion of prior distributions for the parameters of individual models (Lindley's paradox) is diminished. We illustrate the behavior of inferential and predictive posterior quantities in linear and log-linear regressions under our proposed prior densities with a series of simulated and real data examples.
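The sensitivity addressed above (Lindley's paradox) is easy to exhibit numerically: for a fixed, apparently significant observation, the Bayes factor in favour of a point null grows without bound as the prior on the alternative's parameter becomes more diffuse. The numbers below are illustrative assumptions, not from the paper:

```python
import math

def normal_pdf(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

# Observed sample mean 0.5 with squared standard error 0.04
# (sigma = 1, n = 25): z = 2.5, "significant" at the 5% level.
xbar, sem2 = 0.5, 0.04

# H0: mu = 0.  H1: mu ~ N(0, tau^2), so marginally xbar ~ N(0, tau^2 + sem2).
def bf01(tau):
    return normal_pdf(xbar, sem2) / normal_pdf(xbar, tau ** 2 + sem2)

for tau in (1.0, 10.0, 100.0):
    print(tau, bf01(tau))
# As tau grows the Bayes factor increasingly favours H0,
# even though the same data look significant to a frequentist test.
```

Tying the prior dispersion across models to a common scale, as the paper proposes, is one way to keep posterior model probabilities from being driven by this arbitrary choice.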
TESTING A PRECISE NULL HYPOTHESIS USING REFERENCE POSTERIOR ODDS (Bayes-Frequentist interface/Bayes information criterion/posterior Bayes factor/realization factor/Schwarz criterion)
Abstract
Testing a precise null hypothesis against a composite alternative presents a problem for Reference Bayesian inference. When the alternative prior is improper, any finite observation ensures that the Bayes Factor will be infinite. This paradox can be avoided by using a Reference Posterior Odds (RPO) ratio rather than the Bayes Factor. The RPO is closely related to the ratio of the Bayes Factor to its repeated sampling expectation, to Aitkin's Posterior Bayes Factor, and also to the probability density or mass of the corresponding Frequentist test statistic, differing from all three principally by a factor of order n^{1/2}. When the observations are normally distributed, the logarithm of the RPO is exactly equal to the Schwarz Criterion, up to an arbitrary constant.
Combining Bayesian procedures for testing
2009
Abstract
Jeffreys and Pereira-Stern Bayesian procedures for testing provide measures of evidence in favour of the null hypothesis which can lead to different decisions. We introduce two procedures for testing based on pooling the posterior evidences in favour of the null hypothesis provided by these procedures. We prove that the proposed procedure built using the linear pool of probabilities is a Bayes test and does not lead to the Jeffreys-Lindley paradox. We apply the results to testing precise hypotheses about parameters of some asymmetric families of distributions, including the skew-normal one.