Results 1–10 of 17
Induction and Deduction in Bayesian Data Analysis
, 2011
Abstract

Cited by 2 (0 self)
The classical or frequentist approach to statistics (in which inference is centered on significance testing), is associated with a philosophy in which science is deductive and follows Popper’s doctrine of falsification. In contrast, Bayesian inference is commonly associated with inductive reasoning and the idea that a model can be dethroned by a competing model but can never be directly falsified by a significance test. The purpose of this article is to break these associations, which I think are incorrect and have been detrimental to statistical practice, in that they have steered falsificationists away from the very useful tools of Bayesian inference and have discouraged Bayesians from checking the fit of their models. From my experience using and developing Bayesian methods in social and environmental science, I have found model checking and falsification to be central in the modeling process.
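The model checking the abstract describes is often carried out with posterior predictive checks: simulate replicated data from the fitted model and compare a test statistic against its observed value. A minimal sketch of the idea, with made-up data and a deliberately simple model (normal likelihood with known sd; nothing here is from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data (an assumption for illustration).
y = rng.normal(loc=1.0, scale=1.0, size=50)

# Toy posterior for the mean under a flat prior with known sd = 1:
# mu | y ~ Normal(ybar, 1/sqrt(n)).
n, ybar = len(y), y.mean()
mu_draws = rng.normal(ybar, 1 / np.sqrt(n), size=4000)

# Posterior predictive check: compare the observed maximum with
# maxima of replicated datasets drawn from the fitted model.
t_obs = y.max()
t_rep = np.array([rng.normal(m, 1.0, size=n).max() for m in mu_draws])
ppp = (t_rep >= t_obs).mean()   # posterior predictive p-value
print(round(ppp, 2))
```

An extreme posterior predictive p-value (near 0 or 1) flags a misfit between model and data in the chosen statistic, which is the falsificationist step the abstract argues for.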
“Not only defended but also applied”: The perceived absurdity of Bayesian inference
, 2011
Abstract

Cited by 2 (1 self)
The missionary zeal of many Bayesians has been matched, in the other direction, by a view among some theoreticians that Bayesian methods are absurd—not merely misguided but obviously wrong in principle. We consider several examples, beginning with Feller’s classic text on probability theory and continuing with more recent cases such as the perceived Bayesian nature of the so-called doomsday argument. We analyze in this note the intellectual background behind various misconceptions about Bayesian statistics, without aiming at a complete historical coverage of the reasons for this dismissal.
Understanding predictive information criteria for Bayesian models
, 2013
Abstract

Cited by 1 (1 self)
We review the Akaike, deviance, and Watanabe-Akaike information criteria from a Bayesian perspective, where the goal is to estimate expected out-of-sample prediction error using a bias-corrected adjustment of within-sample error. We focus on the choices involved in setting up these measures, and we compare them in three simple examples, one theoretical and two applied. The contribution of this paper is to put all these information criteria into a Bayesian predictive context and to better understand, through small examples, how these methods can apply in practice.
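To make the quantities concrete, here is a hedged sketch of AIC and WAIC for a toy normal-mean model with simulated data (the model, data, and posterior approximation are assumptions for illustration, not the paper's examples):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: y ~ Normal(mu, 1) with a flat prior on mu.
y = rng.normal(0.5, 1.0, size=100)
n, ybar = len(y), y.mean()

# AIC: within-sample log-likelihood at the MLE, penalized by the
# number of fitted parameters k.
k = 1
mle_loglik = np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (y - ybar) ** 2)
aic = -2 * mle_loglik + 2 * k

# WAIC: computed from posterior draws rather than a point estimate.
mu_draws = rng.normal(ybar, 1 / np.sqrt(n), size=2000)
# log p(y_i | mu_s) for each data point i (columns) and draw s (rows).
logp = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu_draws[:, None]) ** 2
lppd = np.sum(np.log(np.mean(np.exp(logp), axis=0)))   # log pointwise predictive density
p_waic = np.sum(np.var(logp, axis=0, ddof=1))          # effective number of parameters
waic = -2 * (lppd - p_waic)
print(round(aic, 2), round(waic, 2))
```

For a model this simple the two criteria land close together; they diverge in hierarchical models, where the effective number of parameters is no longer a simple count.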
How uncertain do we need to be?
, 2013
Abstract
Expert probability forecasts can be useful for decision making (§1). But levels of uncertainty escalate: however the forecaster expresses the uncertainty that attaches to a forecast, there are good reasons for her to express a further level of uncertainty, in the shape of either imprecision or higher-order uncertainty (§2). Bayesian epistemology provides the means to halt this escalator, by tying expressions of uncertainty to the propositions expressible in an agent’s language (§3). But Bayesian epistemology comes in three main varieties. Strictly subjective Bayesianism and empirically-based subjective Bayesianism have difficulty in justifying the use of a forecaster’s probabilities for decision making (§4). On the other hand, objective Bayesianism can justify the use of these probabilities, at least when the probabilities are consistent with the agent’s evidence (§5). Hence objective Bayesianism offers the most promise overall for explaining how testimony of uncertainty can be useful for decision making. Interestingly, the objective Bayesian analysis provided in §5 can also be used
Chapter 1 How do we choose our default methods?
"... [Chapter by Andrew Gelman for the Committee of Presidents of Statistical ..."
, 2013
Abstract
I agree with Murtaugh (and also with Greenland and Poole 2013, who make similar points from a Bayesian perspective) that, with simple inference for linear models where p-values are mathematically equivalent to confidence intervals and other data reductions, there should be no strong reason to prefer one method to another. In that sense, my problem is not with p-values but in how they are used and interpreted. Based on my own readings and experiences (not in ecology but in a range of social and environmental sciences), I feel that p-values and hypothesis testing have led to much scientific confusion by researchers treating non-significant results as zero and significant results as real. In many settings I have found estimation rather than testing to be more direct. For example, when modeling home radon levels (Lin et al. 1999), we constructed our inferences by combining direct radon measurements with geographic and geological information. This approach of modeling and estimation worked better than a series of hypothesis tests that would, for example, reject the assumption that radon levels are independent of geologic characteristics. I have, on occasion, successfully used p-values and hypothesis testing in my own work, and in other settings I have reported p-values (or, equivalently, confidence intervals) in ways that I believe have done no harm, as a way to convey uncertainty about an estimate (Gelman 2013). In many other cases, however, I believe that null hypothesis testing has led to the publication of serious mistakes, perhaps most notoriously in the paper by Bem (2011), who claimed evidence for extrasensory perception (ESP) based on a series of statistically significant results. The ESP example was widely recognized to indicate a crisis in psychology research, not because of the substance of Bem’s implausible and unreplicated claims, but
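The claimed equivalence between p-values and confidence intervals can be checked numerically in the simplest case, a normal mean with known standard deviation (the data here are simulated for illustration, not from the text):

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
rng = np.random.default_rng(2)

# Hypothetical measurements, modeled as Normal(mu, 1) with known sd,
# testing H0: mu = 0.
y = rng.normal(0.4, 1.0, size=50)

se = 1.0 / np.sqrt(len(y))
z = y.mean() / se
p = 2 * (1.0 - nd.cdf(abs(z)))   # two-sided p-value
zcrit = nd.inv_cdf(0.975)        # same cutoff used by the 95% CI
lo, hi = y.mean() - zcrit * se, y.mean() + zcrit * se

# The 95% CI excludes 0 exactly when p < 0.05: both are the same
# data reduction, read off in different ways.
agree = (p < 0.05) == (lo > 0 or hi < 0)
print(agree)  # → True
```

This agreement is exact by construction (both use the same normal quantile), which is the sense in which neither summary carries more information than the other.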
Bayesian Estimation Supersedes the t Test
Abstract
Bayesian estimation for 2 groups provides complete distributions of credible values for the effect size, group means and their difference, standard deviations and their difference, and the normality of the data. The method handles outliers. The decision rule can accept the null value (unlike traditional t tests) when certainty in the estimate is high (unlike Bayesian model comparison using Bayes factors). The method also yields precise estimates of statistical power for various research goals. The software and programs are free and run on Macintosh, Windows, and Linux platforms.
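A rough sketch of the estimation-plus-decision idea, using a normal approximation to the group-mean posteriors and a region of practical equivalence (ROPE), rather than Kruschke's actual model (which uses t-distributed likelihoods and MCMC); all data and thresholds below are made up:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scores for two groups.
a = rng.normal(100.0, 15.0, size=60)
b = rng.normal(101.0, 15.0, size=60)

def mean_draws(y, size=10000):
    # Normal approximation to the posterior of a group mean.
    return rng.normal(y.mean(), y.std(ddof=1) / np.sqrt(len(y)), size=size)

# Posterior of the difference in means, summarized by a 95% interval.
diff = mean_draws(b) - mean_draws(a)
lo, hi = np.percentile(diff, [2.5, 97.5])

rope = (-1.0, 1.0)  # region of practical equivalence (assumed units)
if rope[0] <= lo and hi <= rope[1]:
    decision = "accept null"   # interval entirely inside the ROPE
elif hi < rope[0] or lo > rope[1]:
    decision = "reject null"   # interval entirely outside the ROPE
else:
    decision = "undecided"
print(decision, round(lo, 2), round(hi, 2))
```

The ROPE rule is what lets the procedure accept the null: when the whole credible interval sits inside a band of practically negligible differences, the null value is endorsed rather than merely not rejected.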
Advanced Methods in Probabilistic Modeling
, 2013
Abstract
We will study how to use probability models to analyze data, focusing both on mathematical details of the models and the technology that implements the corresponding algorithms. We will study advanced methods, such as large-scale inference, model diagnostics and selection, and Bayesian nonparametrics. Our goals are to understand the cutting edge of modern probabilistic modeling, to begin research that makes contributions to this field, and to develop good practices for specifying and applying probabilistic models to analyze real-world data. The centerpiece of the course will be the student project. Over the course of the semester, students will develop an applied case study, ideally one that is connected to their graduate research. Each project must involve using probabilistic models to analyze real-world data.

Prerequisites: I assume you are familiar with the basic material from COS513 (Foundations of Probabilistic Modeling). For example, you should be comfortable with probabilistic graphical models, basic statistics, mixture modeling, linear regression, hidden Markov models, exponential families, and the expectation-maximization algorithm. We will study again some of the advanced material that was touched on in COS513, such as variational inference and Bayesian nonparametrics. I assume you are comfortable writing software to analyze data and learning about new tools for that purpose. For example, you should be familiar with a statistical programming language such as R and a scripting language such as Python.