Results 1–10 of 11
Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction
 BRITISH JOURNAL FOR THE PHILOSOPHY OF SCIENCE
, 2006
Abstract

Cited by 36 (14 self)
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and longstanding problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test’s (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a metastatistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.
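The severity criterion the abstract describes can be illustrated numerically. The sketch below is a hypothetical example (the numbers are not from the paper): for a one-sided Normal test of H0: mu <= 0 with known sigma, the severity of the post-data inference "mu > mu1" is the probability of observing a sample mean no larger than the one actually observed, were mu exactly mu1.

```python
import math

def norm_cdf(z):
    """Standard Normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical setup: H0: mu <= 0, known sigma = 1, sample size n = 25
sigma, n = 1.0, 25
se = sigma / math.sqrt(n)          # standard error = 0.2
xbar = 0.4                         # observed sample mean (illustrative)

# Severity for the inference "mu > 0.2": the probability of a result
# less extreme than the one observed, if mu were exactly 0.2
mu1 = 0.2
sev = norm_cdf((xbar - mu1) / se)  # Phi(1.0), roughly 0.84
```

A severity near 1 indicates the claim "mu > mu1" has passed a probative test with the observed data; a severity near 0.5 or below indicates the data provide poor grounds for it.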
From association to causation: Some remarks on the history of statistics
 Statist. Sci
, 1999
Abstract

Cited by 23 (6 self)
The “numerical method” in medicine goes back to Pierre Louis’ study of pneumonia (1835), and John Snow’s book on the epidemiology of cholera (1855). Snow took advantage of natural experiments and used convergent lines of evidence to demonstrate that cholera is a waterborne infectious disease. More recently, investigators in the social and life sciences have used statistical models and significance tests to deduce cause-and-effect relationships from patterns of association; an early example is Yule’s study on the causes of poverty (1899). In my view, this modeling enterprise has not been successful. Investigators tend to neglect the difficulties in establishing causal relations, and the mathematical complexities obscure rather than clarify the assumptions on which the analysis is based. Formal statistical inference is, by its nature, conditional. If maintained hypotheses A, B, C, ... hold, then H can be tested against the data. However, if A, B, C, ... remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work—a principle honored more often in the breach than the observance. Snow’s work on cholera will be contrasted with modern studies that depend on statistical models and tests of significance. The examples may help to clarify the limits of current statistical techniques for making causal inferences from patterns of association.
On specifying graphical models for causation, and the identification problem
 Evaluation Review
, 2004
Abstract

Cited by 16 (1 self)
This paper (which is mainly expository) sets up graphical models for causation, having a bit less than the usual complement of hypothetical counterfactuals. Assuming the invariance of error distributions may be essential for causal inference, but the errors themselves need not be invariant. Graphs can be interpreted using conditional distributions, so that we can better address connections between the mathematical framework and causality in the world. The identification problem is posed in terms of conditionals. As will be seen, causal relationships cannot be inferred from a data set by running regressions unless there is substantial prior knowledge about the mechanisms that generated the data. There are few successful applications of graphical models, mainly because few causal pathways can be excluded on a priori grounds. The invariance conditions themselves remain to be assessed.
What Is The Chance Of An Earthquake?
Abstract

Cited by 3 (2 self)
In this paper, we try to interpret such probabilities. Making sense of earthquake forecasts is surprisingly difficult. In part, this is because the forecasts are based on a complicated mixture of geological maps, empirical rules of thumb, expert opinion, physical models, stochastic models, numerical simulations, as well as geodetic, seismic, and paleoseismic data. Even the concept of probability is hard to define in this context. We examine the problems in applying standard definitions of probability to earthquakes, taking the USGS forecast (the product of a particularly careful and ambitious study) as our lead example. The issues are general, and concern the interpretation more than the numerical values. Despite the work involved in the USGS forecast, the probability estimate is shaky, as is the estimate of its uncertainty.
Introductory Remarks on Metastatistics for The Practically Minded Non-Bayesian Regression Runner
, 2008
Dutch Book against some `Objective' Priors
Abstract

Cited by 2 (1 self)
‘Dutch book’ and ‘strong inconsistency’ are generally equivalent: there is a system of bets that makes money for the gambler, whatever the state of nature may be. As de Finetti showed, an oddsmaker who is not a Bayesian is subject to a Dutch book, under certain highly stylized rules of play—a fact often used as an argument against frequentists. However, so-called ‘objective’ or ‘uninformative’ priors may also be subject to a Dutch book. This note explains, in a relatively simple and self-contained way, how to make a Dutch book against a frequently recommended uninformative prior for covariance matrices.
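The Dutch book mechanism the abstract invokes can be sketched in a few lines. The example below is illustrative and not from the paper: if an oddsmaker posts incoherent prices on an exhaustive partition of states (implied probabilities summing to less than 1), a gambler who buys every unit ticket locks in the same positive profit in every state.

```python
# Hypothetical incoherent book on a 3-state partition: each entry is the
# price of a ticket that pays 1 if that state occurs. Coherent prices
# would sum to exactly 1; these sum to 0.8.
implied = [0.2, 0.3, 0.3]

cost = sum(implied)   # buy one unit ticket on every state: total outlay 0.8
payoff = 1.0          # exactly one state occurs, and its ticket pays 1

# Profit is identical whichever state obtains: a sure win for the gambler
profit = payoff - cost
```

If the implied probabilities instead summed to more than 1, the gambler would take the other side (sell each ticket) for the same guaranteed profit.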
Salt and Blood Pressure: Conventional Wisdom Reconsidered
Abstract

Cited by 1 (1 self)
The "salt hypothesis" is that higher levels of salt in the diet lead to higher levels of blood pressure, with attendant risk of cardiovascular disease. Intersalt was designed to test the hypothesis, with a cross-sectional study of salt levels and blood pressures in 52 populations. The study is often cited to support the salt hypothesis, but the data are somewhat contradictory. Thus, four of the populations (Kenya, Papua, and two Indian tribes in Brazil) have very low levels of salt and blood pressure. Across the other 48 populations, however, blood pressures go down as salt levels go up—contradicting the salt hypothesis. Regressions of blood pressure on age indicate that for young people, blood pressure is inversely related to salt intake—another paradox. This paper discusses the Intersalt data and study design, looking at some of the statistical issues and identifying respects in which the study failed to follow its own protocol. Also considered are human experiments bearing on t...
Mapping of Probabilities: Theory for the Interpretation of Uncertain Physical Measurements
, 2007
Abstract

Cited by 1 (0 self)
In this book, I attempt to reach two goals. The first is purely mathematical: to clarify some of the basic concepts of probability theory. The second goal is physical: to clarify the methods to be used when handling the information brought by measurements, in order to understand how accurate the inferences they allow are. Probability theory is solidly based on Kolmogorov axioms, but the basic inference tool provided by Kolmogorov’s theory is the definition of conditional probability. While some simple problems can be solved through this notion of conditional probability, more elaborate problems—in particular, most of the inference problems that use inaccurate observations—require a more advanced probability theory. When considering sets, there are some well known notions, for instance, the intersection of two sets, or, when a mapping is considered between two sets, the notion of image of a set, or of reciprocal image of a set. I develop in this book the theory that generalizes these notions when, instead of sets,
Statistical Models for Causation: A Critical Review
Abstract
Regression models are often used to infer causation from association. For instance, Yule [79] showed – or tried to show – that welfare was a cause of poverty. Path models and structural equation models are later
Probabilistic models in human sensorimotor control
Abstract
Sensory and motor uncertainty form a fundamental constraint on human sensorimotor control. Bayesian decision theory (BDT) has emerged as a unifying framework to understand how the central nervous system performs optimal estimation and control in the face of such uncertainty. BDT has two components: Bayesian statistics and decision theory. Here we review Bayesian statistics and show how it applies to estimating the state of the world and our own body. Recent results suggest that when learning novel tasks we are able to learn the statistical properties of both the world and our own sensory apparatus so as to perform estimation using Bayesian statistics. We review studies which suggest that humans can combine multiple sources of information to form maximum likelihood estimates, can incorporate prior beliefs about possible states of the world so as to generate maximum a posteriori estimates and can use Kalman filter-based processes to estimate time-varying states. Finally, we review Bayesian decision theory in motor control and how the central nervous system processes errors to determine loss functions and select optimal actions. We review results that suggest we plan movements based on statistics of our actions that result from signal-dependent noise on our motor outputs. Taken together these studies provide a statistical framework for how the motor system performs in the presence of uncertainty.
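The maximum likelihood cue combination the abstract mentions has a simple closed form for Gaussian cues: the combined estimate weights each cue by its inverse variance, and the combined variance is smaller than either cue's alone. The numbers below are a hypothetical illustration, not data from the review.

```python
# Hypothetical cue combination: a visual and a haptic estimate of the
# same quantity, each modeled as Gaussian (mean, variance)
x_v, var_v = 10.0, 4.0   # visual estimate and its variance
x_h, var_h = 12.0, 1.0   # haptic estimate (more reliable here)

precision = 1 / var_v + 1 / var_h   # precisions (inverse variances) add
w_v = (1 / var_v) / precision       # inverse-variance weights sum to 1
w_h = (1 / var_h) / precision

x_mle = w_v * x_v + w_h * x_h       # ML estimate, pulled toward the
                                    # more reliable (haptic) cue
var_mle = 1 / precision             # combined variance beats either cue
```

With a Gaussian prior over the state, the same weighting extends to the maximum a posteriori estimate by treating the prior as one more "cue"; iterating this update over time with a dynamics model yields the Kalman filter the abstract refers to.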