Results 1–10 of 10
“Not only defended but also applied”: The perceived absurdity of Bayesian inference, 2011
Abstract. The missionary zeal of many Bayesians has been matched, in the other direction, by a view among some theoreticians that Bayesian methods are absurd—not merely misguided but obviously wrong in principle. We consider several examples, beginning with Feller’s classic text on probability theory and continuing with more recent cases such as the perceived Bayesian nature of the so-called doomsday argument. We analyze in this note the intellectual background behind various misconceptions about Bayesian statistics, without aiming at a complete historical coverage of the reasons for this dismissal.
Chapter 1 How do we choose our default methods?
"... [Chapter by Andrew Gelman for the Committee of Presidents of Statistical ..."
Hypothesis Space Checking in Intuitive Reasoning
Abstract. The process of generating a new hypothesis often begins with the recognition that all of the hypotheses currently under consideration are wrong. While this sort of falsification is straightforward when the observations are incompatible with each of the hypotheses, an interesting situation arises when the observations are implausible under the hypotheses but not incompatible with them. We propose a formal account, inspired by statistical model checking, as an explanation for how people reason about these probabilistic falsifications. We contrast this account with approaches such as Bayesian inference that account for hypothesis comparison but do not explain how a reasoner might decide that the hypothesis space needs to be expanded.
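The distinction the abstract draws, between comparing hypotheses and checking whether the whole hypothesis space is adequate, can be illustrated with a toy model check. Everything below (coin-bias hypotheses, a binomial tail check, the 0.05 threshold) is an illustrative assumption of this sketch, not the paper's actual account:

```python
# A minimal sketch of "hypothesis space checking": each candidate hypothesis
# is a coin bias p; if the observed data are implausible under EVERY candidate
# (all tail probabilities below a threshold), we flag the space for expansion
# rather than merely picking the least-bad hypothesis.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def tail_prob(k, n, p):
    """Two-sided predictive check: total probability of outcomes no more
    likely than the observed count k under Binomial(n, p)."""
    pk = binom_pmf(k, n, p)
    return sum(binom_pmf(j, n, p) for j in range(n + 1)
               if binom_pmf(j, n, p) <= pk + 1e-12)

def check_space(k, n, hypotheses, alpha=0.05):
    pvals = {p: tail_prob(k, n, p) for p in hypotheses}
    needs_expansion = all(v < alpha for v in pvals.values())
    return pvals, needs_expansion

# 90 heads in 100 flips is implausible under both a fair coin and a mildly
# biased one, so the check recommends expanding the hypothesis space;
# 55 heads is compatible with both, so it does not.
pvals, expand = check_space(90, 100, [0.5, 0.6])
```

Note the design choice: no hypothesis is compared against another here; each is checked against the data on its own, which is what lets the procedure conclude that none of them is adequate.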
How uncertain do we need to be?, 2013
Abstract. Expert probability forecasts can be useful for decision making (§1). But levels of uncertainty escalate: however the forecaster expresses the uncertainty that attaches to a forecast, there are good reasons for her to express a further level of uncertainty, in the shape of either imprecision or higher-order uncertainty (§2). Bayesian epistemology provides the means to halt this escalator, by tying expressions of uncertainty to the propositions expressible in an agent’s language (§3). But Bayesian epistemology comes in three main varieties. Strictly subjective Bayesianism and empirically-based subjective Bayesianism have difficulty in justifying the use of a forecaster’s probabilities for decision making (§4). On the other hand, objective Bayesianism can justify the use of these probabilities, at least when the probabilities are consistent with the agent’s evidence (§5). Hence objective Bayesianism offers the most promise overall for explaining how testimony of uncertainty can be useful for decision making. Interestingly, the objective Bayesian analysis provided in §5 can also be used ...
(Untitled), 2013
Abstract. I agree with Murtaugh (and also with Greenland and Poole 2013, who make similar points from a Bayesian perspective) that, for simple inference in linear models, p-values are mathematically equivalent to confidence intervals and other data reductions, so there should be no strong reason to prefer one method to another. In that sense, my problem is not with p-values but with how they are used and interpreted. Based on my own readings and experiences (not in ecology but in a range of social and environmental sciences), I feel that p-values and hypothesis testing have led to much scientific confusion, with researchers treating non-significant results as zero and significant results as real. In many settings I have found estimation, rather than testing, to be more direct. For example, when modeling home radon levels (Lin et al. 1999), we constructed our inferences by combining direct radon measurements with geographic and geological information. This approach of modeling and estimation worked better than a series of hypothesis tests that would, for example, reject the assumption that radon levels are independent of geologic characteristics. I have, on occasion, successfully used p-values and hypothesis testing in my own work, and in other settings I have reported p-values (or, equivalently, confidence intervals) in ways that I believe have done no harm, as a way to convey uncertainty about an estimate (Gelman 2013). In many other cases, however, I believe that null hypothesis testing has led to the publication of serious mistakes, perhaps most notoriously in the paper by Bem (2011), who claimed evidence for extrasensory perception (ESP) based on a series of statistically significant results. The ESP example was widely recognized to indicate a crisis in psychology research, not because of the substance of Bem’s implausible and unreplicated claims, but ...
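The mathematical equivalence this abstract mentions can be made concrete for the normal-theory case: the two-sided p-value against zero falls below 0.05 exactly when the 95% confidence interval excludes zero. The estimate and standard error below are hypothetical numbers chosen for illustration:

```python
# Illustration of the p-value / confidence-interval equivalence for a
# normal-theory estimate: both are deterministic repackagings of the
# same two numbers, the estimate and its standard error.
from math import erf, sqrt

def two_sided_p(est, se):
    """Two-sided p-value for H0: parameter = 0, i.e. 2 * (1 - Phi(|z|))."""
    z = abs(est / se)
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

def ci95(est, se):
    """Approximate 95% confidence interval, est +/- 1.96 * se."""
    return (est - 1.96 * se, est + 1.96 * se)

est, se = 0.42, 0.20          # hypothetical estimate and standard error
p = two_sided_p(est, se)       # about 0.036
lo, hi = ci95(est, se)         # about (0.03, 0.81)
# The two summaries agree: p < 0.05 iff the interval excludes zero.
assert (p < 0.05) == (lo > 0 or hi < 0)
```

The interval arguably communicates more at a glance (magnitude and precision, not just a significance verdict), which is one reading of the abstract's preference for estimation over testing.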
Breaking the sticks: a hierarchical change-point model for estimating ontogenetic shifts with stable isotope data, 2011
Villiers-en-Bois; École Doctorale Sciences pour l’Environnement Gay Lussac, Université de ...
An application of Carnapian inductive logic to philosophy of ...
Abstract. In my talk I claim that an argument in philosophy of statistics found in Gelman and Shalizi (2012) can be improved using Carnapian inductive logic. Gelman and Shalizi argue against the ‘conventional philosophy’ of Bayesian statistics, which stipulates that statistical models should be chosen only on the basis of how well they represent knowledge, and only ...
Advanced Methods in Probabilistic Modeling, 2013
Abstract. We will study how to use probability models to analyze data, focusing both on mathematical details of the models and the technology that implements the corresponding algorithms. We will study advanced methods, such as large-scale inference, model diagnostics and selection, and Bayesian nonparametrics. Our goals are to understand the cutting edge of modern probabilistic modeling, to begin research that makes contributions to this field, and to develop good practices for specifying and applying probabilistic models to analyze real-world data. The centerpiece of the course will be the student project. Over the course of the semester, students will develop an applied case study, ideally one that is connected to their graduate research. Each project must involve using probabilistic models to analyze real-world data.
Prerequisites. I assume you are familiar with the basic material from COS513 (Foundations of Probabilistic Modeling). For example, you should be comfortable with probabilistic graphical models, basic statistics, mixture modeling, linear regression, hidden Markov models, exponential families, and the expectation-maximization algorithm. We will revisit some of the advanced material that was touched on in COS513, such as variational inference and Bayesian nonparametrics. I assume you are comfortable writing software to analyze data and learning about new tools for that purpose. For example, you should be familiar with a statistical programming language such as R and a scripting language such as Python.
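One of the prerequisite topics listed above, the expectation-maximization algorithm for mixture modeling, can be sketched compactly. This is a generic textbook EM for a two-component univariate Gaussian mixture with made-up data, not material from the course itself:

```python
# EM for a two-component Gaussian mixture in pure Python.
# E-step: compute each point's responsibility under component 0.
# M-step: re-estimate the mixing weight, means, and variances.
from math import exp, pi, sqrt

def normal_pdf(x, mu, var):
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def em_two_gaussians(xs, mu=(-1.0, 1.0), var=(1.0, 1.0), w=0.5, iters=50):
    for _ in range(iters):
        # E-step: responsibility of component 0 for each point
        r = []
        for x in xs:
            a = w * normal_pdf(x, mu[0], var[0])
            b = (1 - w) * normal_pdf(x, mu[1], var[1])
            r.append(a / (a + b))
        # M-step: weighted re-estimation of all parameters
        n0 = sum(r)
        n1 = len(xs) - n0
        w = n0 / len(xs)
        mu = (sum(ri * x for ri, x in zip(r, xs)) / n0,
              sum((1 - ri) * x for ri, x in zip(r, xs)) / n1)
        var = (max(1e-6, sum(ri * (x - mu[0]) ** 2 for ri, x in zip(r, xs)) / n0),
               max(1e-6, sum((1 - ri) * (x - mu[1]) ** 2 for ri, x in zip(r, xs)) / n1))
    return w, mu, var

# Two well-separated clusters; EM should recover means near 0 and 5.
xs = [0.1, -0.2, 0.3, 0.0, -0.1, 4.9, 5.2, 5.0, 5.1, 4.8]
w, mu, var = em_two_gaussians(xs)
```

The variance floor (`1e-6`) guards against a component collapsing onto a single point, a standard failure mode of maximum-likelihood mixture fitting.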