Results 1–10 of 28
On regression adjustments to experimental data
In press, Advances in Applied Mathematics, 2007. http://www.stat.berkeley.edu/users/census/neyregr.pdf
Cited by 46 (3 self)
Abstract
Regression adjustments are often made to experimental data. Since randomization does not justify the models, almost anything can happen. Here, we evaluate results using Neyman’s nonparametric model, where each subject has two potential responses, one if treated and the other if untreated. Only one of the two responses is observed. Regression estimates are generally biased, but the bias is small with large samples. Adjustment may improve precision, or make precision worse; standard errors computed according to usual procedures may overstate the precision, or understate, by quite large factors. Asymptotic expansions make these ideas more precise.
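The setup this abstract describes (each subject has two potential responses, only one observed, and a difference-in-means estimate compared with a covariate-adjusted regression estimate) can be illustrated with a small simulation. This is a generic sketch with invented numbers, assuming linear potential outcomes and a constant treatment effect of 2.0; it is not a reconstruction of the paper's own examples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Neyman's setup: each subject has two potential responses,
# y1 if treated and y0 if untreated; only one is ever observed.
x = rng.normal(size=n)                    # a baseline covariate
y0 = 1.0 + 0.5 * x + rng.normal(size=n)   # response if untreated
y1 = y0 + 2.0                             # response if treated; true effect = 2.0

# Randomize half the subjects to treatment.
t = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n // 2)]).astype(bool)
y = np.where(t, y1, y0)                   # the one response we observe

# Unadjusted estimate: difference in means between the two groups.
diff_means = y[t].mean() - y[~t].mean()

# Regression-adjusted estimate: OLS of y on treatment and the covariate.
X = np.column_stack([np.ones(n), t.astype(float), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = beta[1]                        # coefficient on treatment

print(diff_means, adjusted)               # both should be near 2.0
```

With a sample this large both estimators land close to the true effect, consistent with the abstract's point that the regression bias is small in large samples; the interesting questions the paper addresses (precision and the reliability of the usual standard errors) require comparing the estimators' variability across repeated randomizations.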
From association to causation: Some remarks on the history of statistics
Statist. Sci., 1999
Cited by 33 (7 self)
Abstract
The “numerical method” in medicine goes back to Pierre Louis’ study of pneumonia (1835) and John Snow’s book on the epidemiology of cholera (1855). Snow took advantage of natural experiments and used convergent lines of evidence to demonstrate that cholera is a waterborne infectious disease. More recently, investigators in the social and life sciences have used statistical models and significance tests to deduce cause-and-effect relationships from patterns of association; an early example is Yule’s study on the causes of poverty (1899). In my view, this modeling enterprise has not been successful. Investigators tend to neglect the difficulties in establishing causal relations, and the mathematical complexities obscure rather than clarify the assumptions on which the analysis is based. Formal statistical inference is, by its nature, conditional. If maintained hypotheses A, B, C, ... hold, then H can be tested against the data. However, if A, B, C, ... remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work—a principle honored more often in the breach than the observance. Snow’s work on cholera will be contrasted with modern studies that depend on statistical models and tests of significance. The examples may help to clarify the limits of current statistical techniques for making causal inferences from patterns of association.
On regression adjustments in experiments with several treatments
Cited by 31 (1 self)
Abstract
Regression adjustments are often made to experimental data. Since randomization does not justify the models, bias is likely; nor are the usual variance calculations to be trusted. Here, we evaluate regression adjustments using Neyman’s nonparametric model. Previous results are generalized, and more intuitive proofs are given. A bias term is isolated, and conditions are given for unbiased estimation in finite samples.
On specifying graphical models for causation, and the identification problem
Evaluation Review, 2004
Cited by 26 (2 self)
Abstract
This paper (which is mainly expository) sets up graphical models for causation, having a bit less than the usual complement of hypothetical counterfactuals. Assuming the invariance of error distributions may be essential for causal inference, but the errors themselves need not be invariant. Graphs can be interpreted using conditional distributions, so that we can better address connections between the mathematical framework and causality in the world. The identification problem is posed in terms of conditionals. As will be seen, causal relationships cannot be inferred from a data set by running regressions unless there is substantial prior knowledge about the mechanisms that generated the data. There are few successful applications of graphical models, mainly because few causal pathways can be excluded on a priori grounds. The invariance conditions themselves remain to be assessed.
Randomization does not justify logistic regression
Advances in Applied Mathematics, 2008
Cited by 20 (1 self)
Abstract
Logit models are often used to analyze experimental data. However, randomization does not justify the model, and estimators may be inconsistent. Here, Neyman’s nonparametric setup is used as a benchmark. Each subject has two potential responses, one if treated and the other if untreated; only one of the two responses is observed. A consistent estimator is proposed for use with the logit model. There is a brief literature review, and some recommendations for practice.
Statistical Models for Causation: What Inferential Leverage Do They Provide?
Evaluation Review, 2006
Cited by 20 (4 self)
Single World Intervention Graphs (SWIGs): A Unification of the Counterfactual and Graphical Approaches to Causality
Cited by 15 (2 self)
Abstract
We present a simple graphical theory unifying causal directed acyclic graphs (DAGs) and potential (aka counterfactual) outcomes via a node-splitting transformation. We introduce a new graph, the Single World Intervention Graph (SWIG). The SWIG encodes the counterfactual independences associated with a specific hypothetical intervention on the set of treatment variables. The nodes on the SWIG are the corresponding counterfactual random variables. We illustrate the theory with a number of examples. Our graphical theory of SWIGs may be used to infer the counterfactual independence relations implied by the counterfactual models developed in Robins (1986, 1987). Moreover, in the absence of hidden variables, the joint distribution of the counterfactuals is identified; the identifying formula is the extended g-computation formula introduced in Robins et al. (2004). Although Robins (1986, 1987) did not use DAGs, we translate his algebraic results to facilitate understanding of this prior work. An attractive feature of Robins’ approach is that it largely avoids making counterfactual independence assumptions that are experimentally untestable. As an important illustration we revisit the critique of Robins’ g-computation given in Pearl (2009, Ch. 11.3.7); we use SWIGs to show that all of Pearl’s claims are either erroneous or based on misconceptions. We also show that simple extensions of the formalism may be used to accommodate dynamic regimes, and to formulate nonparametric structural equation models in which assumptions relating to the absence of direct effects are formulated at the population level. Finally, we show that our graphical theory also naturally arises in the context of an expanded causal Bayesian network in which we are able to observe the natural state of a ...
The swine flu vaccine and Guillain-Barré syndrome: a case study in relative risk and specific causation
Evaluation Review, 1999
Cited by 5 (1 self)
Abstract
This article discusses the role of epidemiologic evidence in toxic tort cases, focusing on relative risk. If a relative risk is above 2.0, can we infer specific causation? Relative risk compares groups in an epidemiologic study. One group is exposed to some hazard, like a toxic substance; another “control” group is not ...
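The relative-risk threshold of 2.0 mentioned in the abstract corresponds to the attributable fraction among the exposed, (RR − 1)/RR, which exceeds 1/2 exactly when RR > 2. A minimal sketch of that arithmetic, assuming no bias or confounding and that exposure can only increase risk (assumptions the article itself scrutinizes):

```python
def prob_of_causation(rr: float) -> float:
    """Attributable fraction among the exposed: (RR - 1) / RR.

    Under strong assumptions (no bias, no confounding, exposure can
    only increase risk), this bounds the probability that an exposed
    case was in fact caused by the exposure.
    """
    if rr <= 1.0:
        return 0.0
    return (rr - 1.0) / rr

# RR = 2.0 is the break-even point: probability of causation = 0.5,
# the "more likely than not" standard in tort cases.
print(prob_of_causation(2.0))   # 0.5
print(prob_of_causation(4.0))   # 0.75
```

The point of the article is that this tidy formula rests on assumptions that rarely hold exactly, so the 2.0 threshold should not be applied mechanically.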
The Salience of Ethnic Categories: Field and Natural Experimental Evidence from Indian Village Councils
2011
Cited by 3 (1 self)
Model Specification in Instrumental-Variables Regression
Cited by 3 (0 self)
Abstract
In many applications of instrumental-variables regression, researchers seek to defend the plausibility of a key assumption: the instrumental variable is independent of the error term in a linear regression model. Although fulfilling this exogeneity criterion is necessary for a valid application of the instrumental-variables approach, it is not sufficient. In the regression context, the identification of causal effects depends not just on the exogeneity of the instrument but also on the validity of the underlying model. In this article, I focus on one feature of such models: the assumption that variation in the endogenous regressor that is related to the instrumental variable has the same effect as variation that is unrelated to the instrument. In many applications, this assumption may be quite strong, but relaxing it can limit our ability to estimate parameters of interest. After discussing two substantive examples, I develop analytic results (simulations are reported elsewhere). I also present a specification test that may be useful for determining the relevance of these issues in a given application.
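The exogeneity requirement discussed in this abstract can be made concrete with the standard IV (Wald) estimator for a single endogenous regressor: OLS is biased when the regressor is correlated with the error term, while the instrument-based ratio cov(z, y)/cov(z, d) recovers the structural coefficient. This simulation is a generic sketch with invented parameters; it illustrates the textbook estimator, not the article's specification test.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical linear model with an endogenous regressor:
#   y = beta * d + u, where d is correlated with u, and z is an
#   instrument: correlated with d but independent of u.
z = rng.normal(size=n)
u = rng.normal(size=n)
d = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogenous: depends on u
beta_true = 1.5
y = beta_true * d + u

# OLS slope is biased upward because cov(d, u) > 0.
ols = np.cov(d, y)[0, 1] / np.var(d)

# The IV (Wald) estimator: cov(z, y) / cov(z, d).
iv = np.cov(z, y)[0, 1] / np.cov(z, d)[0, 1]

print(ols, iv)   # OLS overshoots beta_true; IV is near 1.5
```

Note what the sketch takes for granted: a single constant effect of d on y. The article's point is precisely that when instrument-driven and other variation in d have different effects, exogeneity alone no longer identifies a single parameter of interest.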