Results 1-10 of 13
Bayes Factors
, 1995
Abstract

Cited by 981 (70 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology.
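The definition sketched in this abstract can be written out explicitly. This is the standard formulation of the Bayes factor (not quoted from the paper itself): for hypotheses $H_0$, $H_1$ and data $D$,

```latex
B_{01} \;=\; \frac{p(D \mid H_0)}{p(D \mid H_1)},
\qquad
\underbrace{\frac{p(H_0 \mid D)}{p(H_1 \mid D)}}_{\text{posterior odds}}
\;=\; B_{01} \,\times\,
\underbrace{\frac{p(H_0)}{p(H_1)}}_{\text{prior odds}} .
```

When $p(H_0) = p(H_1) = \tfrac{1}{2}$ the prior odds are 1, so the posterior odds equal the Bayes factor, which is the sense in which the abstract describes it.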
Model selection and accounting for model uncertainty in graphical models using Occam's window
, 1993
Abstract

Cited by 266 (46 self)
We consider the problem of model selection and accounting for model uncertainty in high-dimensional contingency tables, motivated by expert system applications. The approach most used currently is a stepwise strategy guided by tests based on approximate asymptotic P-values leading to the selection of a single model; inference is then conditional on the selected model. The sampling properties of such a strategy are complex, and the failure to take account of model uncertainty leads to underestimation of uncertainty about quantities of interest. In principle, a panacea is provided by the standard Bayesian formalism which averages the posterior distributions of the quantity of interest under each of the models, weighted by their posterior model probabilities. Furthermore, this approach is optimal in the sense of maximising predictive ability. However, this has not been used in practice because computing the posterior model probabilities is hard and the number of models is very large (often greater than 10^11). We argue that the standard Bayesian formalism is unsatisfactory and we propose an alternative Bayesian approach that, we contend, takes full account of the true model uncertainty by averaging over a much smaller set of models. An efficient search algorithm is developed for finding these models. We consider two classes of graphical models that arise in expert systems: the recursive causal models and the decomposable models.
Bayesian Model Averaging for Linear Regression Models
 Journal of the American Statistical Association
, 1997
Abstract

Cited by 184 (13 self)
We consider the problem of accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. A Bayesian solution to this problem involves averaging over all possible models (i.e., combinations of predictors) when making inferences about quantities of interest.
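The averaging scheme this abstract describes can be sketched in a few lines. The following is a minimal illustration, not the paper's own implementation: it enumerates all predictor subsets, approximates each model's posterior probability with the common `exp(-BIC/2)` weight, and averages the models' predictions. The data and function names are hypothetical.

```python
# Sketch of Bayesian model averaging (BMA) over predictor subsets,
# using the BIC approximation to posterior model probabilities.
import itertools
import numpy as np

def fit_ols(X, y):
    """Least-squares fit; returns coefficients and residual sum of squares."""
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    if rss.size == 0:  # lstsq omits residuals for rank-deficient fits
        rss = np.array([np.sum((y - X @ beta) ** 2)])
    return beta, float(rss[0])

def bma_predict(X, y, X_new):
    """Average predictions over all predictor subsets, weighting each
    model by exp(-BIC/2), a standard approximation to its posterior
    model probability (up to normalisation)."""
    n, p = X.shape
    bics, preds = [], []
    for k in range(p + 1):
        for subset in itertools.combinations(range(p), k):
            cols = list(subset)
            Xm = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
            beta, rss = fit_ols(Xm, y)
            # Gaussian-likelihood BIC: n*log(rss/n) + (#params)*log(n)
            bics.append(n * np.log(rss / n) + (len(cols) + 1) * np.log(n))
            Xn = np.column_stack([np.ones(len(X_new))] + [X_new[:, j] for j in cols])
            preds.append(Xn @ beta)
    logw = -0.5 * np.array(bics)
    logw -= logw.max()            # subtract max for numerical stability
    w = np.exp(logw)
    w /= w.sum()                  # posterior model probabilities
    return np.sum(w[:, None] * np.array(preds), axis=0)

# Toy data: three candidate predictors, only the first matters.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=50)
yhat = bma_predict(X, y, X[:5])
```

With a strong signal like this, the weights concentrate on the models that include the relevant predictor, so the averaged prediction tracks `2.0 * x0` closely.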
Assessment and Propagation of Model Uncertainty
, 1995
Abstract

Cited by 108 (0 self)
In this paper I discuss a Bayesian approach to solving this problem that has long been available in principle but is only now becoming routinely feasible, by virtue of recent computational advances, and examine its implementation in examples that involve forecasting the price of oil and estimating the chance of catastrophic failure of the U.S. Space Shuttle.
Bayes factors and model uncertainty
 Department of Statistics, University of Washington
, 1993
Abstract

Cited by 89 (6 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general, and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods; in "nonstandard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance tests.
Model Selection and Accounting for Model Uncertainty in Linear Regression Models
, 1993
Abstract

Cited by 47 (6 self)
We consider the problems of variable selection and accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. The complete Bayesian solution to this problem involves averaging over all possible models when making inferences about quantities of interest. This approach is often not practical. In this paper we offer two alternative approaches. First we describe a Bayesian model selection algorithm called "Occam's Window" which involves averaging over a reduced set of models. Second, we describe a Markov chain Monte Carlo approach which directly approximates the exact solution. Both these model averaging procedures provide better predictive performance than any single model which might reasonably have been selected. In the extreme case where there are many candidate predictors but there is no relationship between any of them and the response, standard variable selection procedures often choose some subset of variables that yields a high R² and a highly significant overall F value. We refer to this unfortunate phenomenon as "Freedman's Paradox" (Freedman, 1983). In this situation, Occam's Window usually indicates the null model as the only one to be considered, or else a small number of models including the null model, thus largely resolving the paradox.
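The "averaging over a reduced set of models" idea can be illustrated with a small sketch of the Occam's window selection rule: keep only those models whose approximate posterior probability is within a factor `c` of the best model's, and average over the survivors. The BIC values below are purely illustrative, not from the paper.

```python
# Sketch of the "Occam's window" rule: discard models whose approximate
# posterior model probability falls far below the best model's.
import math

def occams_window(bics, c=20.0):
    """Return indices of models kept by Occam's window.

    bics: one BIC value per candidate model (lower is better).
    A model is kept if its posterior probability, approximated by
    exp(-BIC/2), is at least 1/c times the best model's.
    """
    best = min(bics)
    return [i for i, b in enumerate(bics)
            if math.exp(-0.5 * (b - best)) >= 1.0 / c]

# Illustrative BICs for four candidate models:
kept = occams_window([100.2, 101.0, 112.7, 130.4], c=20.0)
# Models 0 and 1 survive; models 2 and 3 are ruled out.
```

Averaging then proceeds over the kept models only, which is what makes the approach feasible when the full model space is enormous.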
Concerning Bayesian Motion Segmentation, Model Averaging, Matching and the Trifocal Tensor
 In European Conference on Computer Vision
, 1998
Abstract

Cited by 26 (2 self)
Motion segmentation involves identifying regions of the image that correspond to independently moving objects. The number of independently moving objects, and the type of motion model for each of the objects, is unknown a priori. In order to perform motion segmentation, the problems of model selection, robust estimation and clustering must all be addressed simultaneously. Here we place the three problems into a common Bayesian framework, investigating the use of model averaging (representing a motion by a combination of models) as a principled way for motion segmentation of images. The final result is a fully automatic algorithm for clustering that works in the presence of noise and outliers.
1 Introduction
Detection of independently moving objects is an essential but often neglected precursor to problems in computer vision, e.g. efficient video compression [3], video editing, surveillance, smart tracking of objects, etc. The work in this paper stems from the desire to develop a g...
Model Selection for Two View Geometry: A Review
 Microsoft Research, USA
, 1998
Abstract

Cited by 4 (0 self)
Computer vision often concerns the estimation of models of the world from visual input. Sometimes it is possible to fit several different models or hypotheses to a set of data, the choice of which is usually left to the vision practitioner. This paper explores ways of automating the model selection process, with specific emphasis on the least squares problem. The statistical literature is reviewed and it will become apparent that although no one method has yet been developed that will be generally useful for all computer vision problems, there do exist some useful partial solutions. Thus this paper is intended as a beginner's guide to model selection, highlighting the pertinent problem areas in model selection and illustrating them by the example of estimating two view geometry.
1 Introduction
Robotic vision has its basis in geometric modeling of the world, and many vision algorithms attempt to estimate these geometric models from perceived data. Usually only one model is...
Determinants of Foreign Direct Investment
, 2010
Abstract

Cited by 1 (0 self)
Abstract: Empirical studies of bilateral foreign direct investment (FDI) activity show substantial differences in specifications with little agreement on the set of covariates that are (or should be) included. We use Bayesian statistical techniques that allow one to select from a large set of candidates those variables most likely to be determinants of FDI activity. The variables with consistently high inclusion probabilities are traditional gravity variables, cultural distance factors, parent-country per capita GDP, relative labor endowments, and regional trade agreements. Variables with little support for inclusion are multilateral trade openness, host-country business costs, host-country infrastructure (including credit markets), and host-country institutions. Of particular note, our results suggest that many covariates found significant by previous studies are not robust.