Results 1–10 of 14
Social Capital
In P. Aghion and S.N. Durlauf, eds., Handbook of Economic Growth, 2006
Cited by 71 (5 self)
Abstract: "... have provided excellent research assistance. I thank Stephen Machin and three referees for ..."
From association to causation: Some remarks on the history of statistics
Statist. Sci., 1999
Cited by 23 (6 self)
Abstract: The “numerical method” in medicine goes back to Pierre Louis’ study of pneumonia (1835) and John Snow’s book on the epidemiology of cholera (1855). Snow took advantage of natural experiments and used convergent lines of evidence to demonstrate that cholera is a waterborne infectious disease. More recently, investigators in the social and life sciences have used statistical models and significance tests to deduce cause-and-effect relationships from patterns of association; an early example is Yule’s study on the causes of poverty (1899). In my view, this modeling enterprise has not been successful. Investigators tend to neglect the difficulties in establishing causal relations, and the mathematical complexities obscure rather than clarify the assumptions on which the analysis is based. Formal statistical inference is, by its nature, conditional. If maintained hypotheses A, B, C, ... hold, then H can be tested against the data. However, if A, B, C, ... remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work, a principle honored more often in the breach than the observance. Snow’s work on cholera will be contrasted with modern studies that depend on statistical models and tests of significance. The examples may help to clarify the limits of current statistical techniques for making causal inferences from patterns of association.
On specifying graphical models for causation, and the identification problem
Evaluation Review, 2004
Cited by 18 (1 self)
Abstract: This paper (which is mainly expository) sets up graphical models for causation, having a bit less than the usual complement of hypothetical counterfactuals. Assuming the invariance of error distributions may be essential for causal inference, but the errors themselves need not be invariant. Graphs can be interpreted using conditional distributions, so that we can better address connections between the mathematical framework and causality in the world. The identification problem is posed in terms of conditionals. As will be seen, causal relationships cannot be inferred from a data set by running regressions unless there is substantial prior knowledge about the mechanisms that generated the data. There are few successful applications of graphical models, mainly because few causal pathways can be excluded on a priori grounds. The invariance conditions themselves remain to be assessed.
Econometric Analysis and the Study of Economic Growth: A Skeptical Perspective
In Macroeconomics and the Real World, R. Backhouse and A. Salanti, eds., 2000
Cited by 17 (7 self)
Abstract: "... this paper. Andros Kourtellos and Artur Minkin have provided excellent research assistance. All errors are mine ..."
From association to causation via regression
University of Notre Dame, Indiana, 1997
Cited by 16 (6 self)
Abstract: For nearly a century, investigators in the social sciences have used regression models to deduce cause-and-effect relationships from patterns of association. Path models and automated search procedures are more recent developments. In my view, this enterprise has not been successful. The models tend to neglect the difficulties in establishing causal relations, and the mathematical complexities tend to obscure rather than clarify the assumptions on which the analysis is based. Formal statistical inference is, by its nature, conditional. If maintained hypotheses A, B, C, ... hold, then H can be tested against the data. However, if A, B, C, ... remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work, a principle honored more often in the breach than the observance.
The phantom menace: Omitted variable bias in econometric research
Conflict Management and Peace Science
Cited by 6 (0 self)
Abstract: Quantitative political science is awash in control variables. The justification for these bloated specifications is usually the fear of omitted variable bias. A key underlying assumption is that the danger posed by omitted variable bias can be ameliorated by the inclusion of relevant control variables. Unfortunately, as this article demonstrates, there is nothing in the mathematics of regression analysis that supports this conclusion. The inclusion of additional control variables may increase or decrease the bias, and we cannot know for sure which is the case in any particular situation. A brief discussion of alternative strategies for achieving experimental control follows the main result.
Keywords: omitted variable bias, specification, control variables, research design
"... Quantitative political science is awash in control variables. It is not uncommon to see statistical models with 20 or more independent variables. An article in the August 2004 issue of the American Political Science Review, for example, reports a model with 22 independent variables (Duch & Palmer, 2004). The situation is no different if we consider ..."
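The article's central claim, that adding a control variable can increase rather than decrease bias, can be illustrated with a short simulation. This sketch is not from the article; the data-generating process and variable names (x, y, z, u) are hypothetical. Here z is a "collider" driven by both the treatment x and an unobserved factor u, so conditioning on it induces a bias that the simpler regression does not have:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical data-generating process: u is unobserved,
# x causes y with true coefficient 1.0, and z is a collider
# driven by both x and u (z itself has no effect on y).
u = rng.normal(size=n)
x = rng.normal(size=n)
z = x + u + rng.normal(size=n)
y = 1.0 * x + u + rng.normal(size=n)

def ols(y, *regressors):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones_like(y), *regressors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_short = ols(y, x)     # y on x alone
b_long = ols(y, x, z)   # y on x, "controlling" for z

print(f"coefficient on x, no control:   {b_short[1]:.3f}")  # near 1.0
print(f"coefficient on x, z controlled: {b_long[1]:.3f}")   # near 0.5
```

In this design the short regression is unbiased, since u is independent of x, while adding the extra control z pulls the x coefficient toward 0.5; the "safer-looking" longer specification is the one that gets the answer wrong.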
Statistical Models for Causation
2005
Cited by 1 (0 self)
Abstract: We review the basis for inferring causation by statistical modeling. Parameters should be stable under interventions, and so should error distributions. There are also statistical conditions on the errors. Stability is difficult to establish a priori, and the statistical conditions are equally problematic. Therefore, causal relationships are seldom to be inferred from a data set by running statistical algorithms, unless there is substantial prior knowledge about the mechanisms that generated the data. We begin with linear models (regression analysis) and then turn to graphical models, which may in principle be nonlinear.
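The warning that causal relationships cannot be read off a data set by running algorithms can be made concrete with a minimal confounding example (a hypothetical setup, not taken from the paper): x has no causal effect on y at all, yet a regression of y on x reports a sizable slope because an unobserved u drives both variables.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical mechanism: unobserved u drives both x and y;
# the true causal effect of x on y is exactly zero.
u = rng.normal(size=n)
x = u + rng.normal(size=n)
y = u + rng.normal(size=n)

# Simple regression slope of y on x: cov(x, y) / var(x).
slope = np.cov(x, y)[0, 1] / np.var(x)
print(f"regression slope: {slope:.3f}")  # near 0.5, though the causal effect is 0
```

Nothing in the fitted numbers distinguishes this world from one where x genuinely moves y; only outside knowledge of the mechanism, here the existence of u, can do that, which is the paper's point about prior knowledge.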
Toward a Unified Theory of Causality
Abstract: In comparative research, analysts conceptualize causation in contrasting ways when they pursue explanation in particular cases (case-oriented research) versus large populations (population-oriented research). With case-oriented research, they understand causation in terms of necessary, sufficient, INUS, and SUIN causes. With population-oriented research, by contrast, they understand causation as mean causal effects. This article explores whether it is possible to translate the kind of causal language that is used in case-oriented research into the kind of causal language that is used in population-oriented research (and vice versa). The article suggests that such translation is possible, because certain types of INUS causes manifest themselves as variables that exhibit partial effects when studied in population-oriented research. The article concludes that the conception of causation adopted in case-oriented research is appropriate for the population level, whereas the conception of causation used in population-oriented research is valuable for making predictions in the face of uncertainty.