Results 1–10 of 13
Bayes Factors, 1995
"... In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null ..."
Abstract

Cited by 1766 (74 self)
 Add to MetaCart
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology.
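For readers unfamiliar with the quantity, the relationship stated in the abstract can be written out explicitly (the notation below is my own choice, not quoted from the paper): for data D and hypotheses H0 and H1,

\[
B_{01} = \frac{p(D \mid H_0)}{p(D \mid H_1)}, \qquad
\frac{p(H_0 \mid D)}{p(H_1 \mid D)} = B_{01}\,\frac{p(H_0)}{p(H_1)},
\]

so when p(H_0) = p(H_1) = 1/2 the prior odds equal 1 and the posterior odds of the null hypothesis equal the Bayes factor B_{01}.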
Approximate Bayes Factors and Accounting for Model Uncertainty in Generalized Linear Models, 1993
"... Ways of obtaining approximate Bayes factors for generalized linear models are described, based on the Laplace method for integrals. I propose a new approximation which uses only the output of standard computer programs such as GUM; this appears to be quite accurate. A reference set of proper priors ..."
Abstract

Cited by 148 (28 self)
 Add to MetaCart
Ways of obtaining approximate Bayes factors for generalized linear models are described, based on the Laplace method for integrals. I propose a new approximation which uses only the output of standard computer programs such as GLIM; this appears to be quite accurate. A reference set of proper priors is suggested, both to represent the situation where there is not much prior information, and to assess the sensitivity of the results to the prior distribution. The methods can be used when the dispersion parameter is unknown, when there is overdispersion, to compare link functions, and to compare error distributions and variance functions. The methods can be used to implement the Bayesian approach to accounting for model uncertainty. I describe an application to inference about relative risks in the presence of control factors where model uncertainty is large and important. Software to implement the methods is also available.
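The paper's approximation is based on the Laplace method applied to standard GLM output; as a rough, minimal sketch of the same idea, the snippet below computes the cruder Schwarz (BIC-type) approximation to 2 log B10 from two fitted GLMs. The simulated data, the particular model comparison, and the use of statsmodels are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's Laplace approximation): the Schwarz/BIC
# approximation to a Bayes factor, computed from two fitted logistic GLMs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.0 * x1))))  # x2 is irrelevant

X0 = sm.add_constant(np.column_stack([x1]))       # "null" model: x1 only
X1 = sm.add_constant(np.column_stack([x1, x2]))   # alternative: x1 and x2

fit0 = sm.GLM(y, X0, family=sm.families.Binomial()).fit()
fit1 = sm.GLM(y, X1, family=sm.families.Binomial()).fit()

# 2*log(B10) ~= 2*(l1 - l0) - (d1 - d0)*log(n): the Schwarz criterion difference
two_log_B10 = (2 * (fit1.llf - fit0.llf)
               - (len(fit1.params) - len(fit0.params)) * np.log(n))
print("approximate 2*log(B10):", two_log_B10)
```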
Bayes factors and model uncertainty. Department of Statistics, University of Washington, 1993
"... In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null ..."
Abstract

Cited by 121 (6 self)
 Add to MetaCart
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general, and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods; in "non-standard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance tests.
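Spelled out (again in notation of my own choosing), the Bayes factor is a ratio of marginal likelihoods, which makes two of the points above concrete: each hypothesis integrates its own likelihood against its own prior, so the models need not be nested, and external information enters through the priors π(· | H_k):

\[
B_{01} = \frac{\int p(D \mid \theta_0, H_0)\,\pi(\theta_0 \mid H_0)\,d\theta_0}
              {\int p(D \mid \theta_1, H_1)\,\pi(\theta_1 \mid H_1)\,d\theta_1}.
\]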
Reference analysis. In Handbook of Statistics 25, 2005
"... This chapter describes reference analysis, a method to produce Bayesian inferential statements which only depend on the assumed model and the available data. Statistical information theory is used to define the reference prior function as a mathematical description of that situation where data would ..."
Abstract

Cited by 24 (3 self)
 Add to MetaCart
This chapter describes reference analysis, a method to produce Bayesian inferential statements which depend only on the assumed model and the available data. Statistical information theory is used to define the reference prior function as a mathematical description of the situation where the data would best dominate prior knowledge about the quantity of interest. Reference priors are not descriptions of personal beliefs; they are proposed as formal consensus prior functions to be used as standards for scientific communication. Reference posteriors are obtained by formal use of Bayes' theorem with a reference prior. Reference prediction is achieved by integration with a reference posterior. Reference decisions are derived by minimizing a reference posterior expected loss. An information-theory-based loss function, the intrinsic discrepancy, may be used to derive reference procedures for conventional inference problems in scientific investigation, such as point estimation, region estimation and hypothesis testing.
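For concreteness, the intrinsic discrepancy mentioned at the end of the abstract is defined (in notation assumed here, not quoted from the chapter) as the smaller of the two directed Kullback–Leibler divergences between two densities p1 and p2:

\[
\delta\{p_1, p_2\} = \min\!\left\{
\int p_1(x) \log\frac{p_1(x)}{p_2(x)}\,dx,\;
\int p_2(x) \log\frac{p_2(x)}{p_1(x)}\,dx
\right\},
\]

which is symmetric, non-negative, and zero only when the two densities coincide.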
Model Validation and Spatial Interpolation by Combining Observations with Outputs from Numerical Models via Bayesian Melding, 2001
"... ..."
Two-stage Bayesian model averaging in endogenous variable models. Econometric Reviews, forthcoming, 2011
"... Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a TwoStage Bayesian Model Averaging (2SBMA) methodology that extends the TwoStage Least Squares (2SLS) estimator. By constructing a TwoStage Unit Information Pri ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model-averaged posterior predictive p-values. A simulation study showed that 2SBMA recovers structure in both the instrument and covariate sets in an automatic fashion, and substantially improves the sharpness of the resulting coefficient estimates in comparison to 2SLS using the full specification. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed.
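As context, a bare-bones version of the classical 2SLS estimator that 2SBMA extends is sketched below; the simulated data and variable names are illustrative assumptions, and the paper's actual 2SBMA procedure (model averaging over instrument and covariate sets with a two-stage unit information prior) is not reproduced here.

```python
# Minimal two-stage least squares (2SLS) sketch: the classical baseline that
# 2SBMA extends with Bayesian model averaging over instruments and covariates.
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=(n, 2))                              # instruments
u = rng.normal(size=n)                                   # unobserved confounder
x = z @ np.array([1.0, -0.5]) + u + rng.normal(size=n)   # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)                     # outcome; true effect = 2

def add_const(a):
    return np.column_stack([np.ones(len(a)), a])

# Stage 1: project the endogenous regressor onto the instruments.
Z = add_const(z)
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress the outcome on the first-stage fitted values.
beta = np.linalg.lstsq(add_const(x_hat), y, rcond=None)[0]
print("2SLS estimate of the causal effect:", beta[1])    # close to 2
```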
Approximate Bayesian . . . Weighted Likelihood Bootstrap, 1991
"... We introduce the weighted likelihood bootstrap (WLB) as a simple way of approximately simulating from a posterior distribution. This is easy to implement, requiring only an algorithm for calculating the maximum likelihood estimator, such as the EM algorithm or iteratively reweighted least squares; i ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
We introduce the weighted likelihood bootstrap (WLB) as a simple way of approximately simulating from a posterior distribution. This is easy to implement, requiring only an algorithm for calculating the maximum likelihood estimator, such as the EM algorithm or iteratively reweighted least squares; it does not necessarily require actual calculation of the likelihood itself. The method is exact up to an effective prior which is generally unknown but can be identified exactly for unconstrained discrete-data models and approximately for other models. Accuracy of the WLB relies on the chosen distribution of weights. In the generic scheme, the WLB is at least first-order correct under quite general conditions. We have also been able to prove higher-order correctness in some classes of models. The method, which generalizes Rubin's Bayesian bootstrap, provides approximate posterior distributions for prediction, calibration, dependent data and partial likelihood problems, as well as more standard models. The calculation of approximate Bayes factors for model comparison is also considered. We note that, given a sample simulated from the posterior distribution, the required marginal likelihood may be simulation-consistently estimated by the harmonic mean of the associated likelihood values; a modification of this estimator that avoids instability is also noted. An alternative, prediction-based, estimator of the marginal likelihood using the WLB is also described. These methods provide simple ways of calculating approximate Bayes factors and posterior model probabilities for a very wide class of models.
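A minimal sketch of the WLB idea for a model with a closed-form weighted maximum likelihood estimator is given below; the exponential-rate example and the i.i.d. Exp(1) weights (equivalent to uniform Dirichlet weights after normalization) are assumptions for illustration, not the paper's own examples.

```python
# Weighted likelihood bootstrap sketch for an exponential(rate) model:
# each iteration reweights the data, re-maximizes the weighted likelihood,
# and treats the resulting estimate as an approximate posterior draw.
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1 / 3.0, size=100)     # data with true rate 3

def weighted_mle(x, w):
    # maximizes sum_i w_i * (log(rate) - rate * x_i)
    return w.sum() / (w * x).sum()

draws = np.array([weighted_mle(x, rng.exponential(size=len(x)))
                  for _ in range(5000)])         # approximate posterior draws
print("posterior mean and 95% interval for the rate:",
      draws.mean(), np.quantile(draws, [0.025, 0.975]))
```

The same draws could, in principle, feed the harmonic-mean estimate of the marginal likelihood mentioned in the abstract, subject to the instability the authors note.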
A Bayesian approach to combine disparate spatial data
"... Constructing maps of pollution levels is vital for air quality management, and presents statistical problems typical of many environmental and spatial applications. Ideally, such maps would be based on a dense network of monitoring stations, but this does not exist. Instead, there are two main sourc ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
Constructing maps of pollution levels is vital for air quality management, and presents statistical problems typical of many environmental and spatial applications. Ideally, such maps would be based on a dense network of monitoring stations, but this does not exist. Instead, there are two main sources of information in the U.S.: one is pollution measurements at a sparse set of about 50 monitoring stations called CASTNet, and the other is the output of regional-scale air quality models (called Models-3). A related problem is the evaluation of these numerical models for air quality applications, which is crucial to assist in control strategy selection. Here we …
Philip J. Smith, 1988
"... How many serious nuclear reactor accidents will there be in the ten years? Some recent correspondence on this topic in the journal reviewed. The approach here uses data from operating experience, the more traditional "technical risk assessment". We derive predictive distributions for the n ..."
Abstract
 Add to MetaCart
How many serious nuclear reactor accidents will there be in the next ten years? Some recent correspondence on this topic in the journal Nature is reviewed. The approach here uses data from operating experience, rather than the more traditional "technical risk assessment". We derive predictive distributions for the number of future accidents, using a Bayesian approach. Whether or not (optimistic) prior information is incorporated, these are not reassuring. How many serious nuclear reactor accidents will there be in the next ten years? There has recently been some lively correspondence on this topic in Nature, which seems worth bringing to a wider audience. Traditionally, nuclear risk assessment has been carried out by subjectively assigning expert opinion to the component events of a nuclear accident, and then combining these using decision trees. This approach, known as "technical risk assessment", was used in the Rasmussen report (Rasmussen et al., 1975) and the German Risk Study (Federal …
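The abstract does not specify the model, so purely as an illustrative assumption, a standard Bayesian treatment of such count data would be a Poisson–gamma analysis: with x accidents observed in t reactor-years and a Gamma(a, b) prior on the accident rate λ,

\[
x \mid \lambda \sim \mathrm{Poisson}(\lambda t), \quad
\lambda \sim \mathrm{Gamma}(a, b)
\;\Rightarrow\;
\lambda \mid x \sim \mathrm{Gamma}(a + x,\; b + t),
\]

and the predictive distribution for the number of accidents y in a further s reactor-years is negative binomial:

\[
p(y \mid x) = \frac{\Gamma(y + a + x)}{y!\,\Gamma(a + x)}
\left(\frac{b + t}{b + t + s}\right)^{a + x}
\left(\frac{s}{b + t + s}\right)^{y}, \qquad y = 0, 1, 2, \ldots
\]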
unknown title
"... The classification maximum likelihood approach is sufficiently general to encompass many current clustering algorithms, including those based on the sum of squares criterion and on the criterion of Friedman and Rubin (1967). However, as currently implemented, it does not allow the specification of w ..."
Abstract
 Add to MetaCart
The classification maximum likelihood approach is sufficiently general to encompass many current clustering algorithms, including those based on the sum of squares criterion and on the criterion of Friedman and Rubin (1967). However, as currently implemented, it does not allow the specification of which features (orientation, size and shape) are to be common to all clusters and which may differ between clusters. Also, it is restricted to Gaussian distributions and it does not allow for noise. We propose ways of overcoming these limitations. A reparameterization of the covariance matrix allows us to specify that some features, but not all, be the same for all clusters. A practical framework for non-Gaussian clustering is outlined, and a means of incorporating noise in the form of a Poisson process is described. An approximate Bayesian method for choosing the number of clusters is given. The performance of the proposed methods is studied by simulation, with encouraging results. The methods are applied to the analysis of a data set arising in the study of diabetes, and the results seem better than those of previous analyses.
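The covariance reparameterization referred to above is commonly expressed through an eigendecomposition (the notation here is assumed, following the model-based clustering literature rather than quoted from this abstract): for cluster k,

\[
\Sigma_k = \lambda_k\, D_k\, A_k\, D_k^{\top},
\]

where the scalar λ_k controls the cluster's volume (size), the orthogonal matrix D_k its orientation, and the diagonal matrix A_k (normalized, e.g. to unit determinant) its shape; constraining any of these factors to be equal across clusters makes that feature common to all clusters while the others may vary.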