Results 1–10 of 36
Instrumental Variables and the Search for Identification: From Supply and Demand to Natural Experiments
Journal of Economic Perspectives, 2001
Abstract

Cited by 360 (2 self)
The method of instrumental variables is a signature technique in the econometrics toolkit. The canonical example, and earliest applications, of instrumental variables involved attempts to estimate demand and supply curves. Economists such as P.G. Wright, Henry Schultz, Elmer Working and Ragnar Frisch were interested in estimating the elasticities of demand and supply for products ranging from herring to butter, usually with time series data. If the demand and supply curves shift over time, the observed data on quantities and prices reflect a set of equilibrium points on both curves. Consequently, an ordinary least squares regression of quantities on prices fails to identify—that is, trace out—either the supply or demand relationship. P.G. Wright (1928) confronted this issue in the seminal application of instrumental variables: estimating the elasticities of supply and demand for flaxseed, the source of linseed oil. Wright noted the difficulty of obtaining estimates of the elasticities of supply and demand from the relationship between price and quantity.
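The identification failure described in this abstract can be sketched in a short simulation. All structural parameters below are hypothetical, chosen only for illustration: a demand curve with slope -1.0 and a supply curve shifted by an observed variable z.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical structural model (illustrative parameters):
#   demand: q = 10 - 1.0*p + u_d
#   supply: q =  2 + 0.5*p + 1.0*z + u_s   (z: observed supply shifter)
z = rng.normal(size=n)
u_d = rng.normal(size=n)
u_s = rng.normal(size=n)

# Market clearing: set demand equal to supply and solve for price.
p = (10 - 2 - 1.0 * z + u_d - u_s) / (1.0 + 0.5)
q = 10 - 1.0 * p + u_d

# OLS of quantity on price averages over shifts of both curves and
# recovers neither slope (here it lands near -0.5).
ols = np.cov(p, q)[0, 1] / np.var(p, ddof=1)

# IV: z shifts supply but is excluded from demand, so the ratio
# cov(z, q) / cov(z, p) identifies the demand slope of -1.0.
iv = np.cov(z, q)[0, 1] / np.cov(z, p)[0, 1]

print(f"OLS slope: {ols:.2f}")
print(f"IV slope:  {iv:.2f}")
```

Because z moves only the supply curve, the price variation it induces traces out the demand curve; the OLS slope instead mixes movements along both curves.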
Social Capital
In P. Aghion, S.N. Durlauf, eds., Handbook of Economic Growth, 2006
Abstract

Cited by 199 (8 self)
have provided excellent research assistance. I thank Stephen Machin and three referees for
Growth empirics and reality
The World Bank Economic Review, 2001
Abstract

Cited by 75 (8 self)
This article questions current empirical practice in the study of growth. It argues that much of the modern empirical growth literature is based on assumptions about regressors, residuals, and parameters that are implausible from the perspective of both economic theory and the historical experiences of the countries under study. Many of these problems, it argues, are forms of violations of an exchangeability assumption that implicitly underlies standard growth exercises. The article shows that these implausible assumptions can be relaxed by allowing for uncertainty in model specification. Model uncertainty consists of two types: theory uncertainty, which relates to which growth determinants should be included in a model; and heterogeneity uncertainty, which relates to which observations in a data set constitute draws from the same statistical model. The article proposes ways to account for both theory and heterogeneity uncertainty. Finally, using an explicit decision-theoretic framework, the authors describe how one can engage in policy-relevant empirical analysis.

There are more things in heaven and earth, Horatio, / Than are dreamt of in your philosophy. —William Shakespeare
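The proposal to treat theory uncertainty by averaging over candidate specifications can be sketched in a few lines. This is not the authors' procedure; it is a minimal illustration that uses BIC weights as rough posterior model probabilities over three hypothetical candidate regressors, of which only x1 truly matters.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n = 200

# Hypothetical growth-style data: only x1 enters the true model.
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 0.8 * x1 + rng.normal(size=n)

candidates = {"x1": x1, "x2": x2, "x3": x3}

def bic(y, X):
    """BIC of an OLS fit with intercept; lower is better."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + X.shape[1] * np.log(len(y))

# Enumerate every subset of candidate regressors and weight each
# model by exp(-BIC/2), a standard large-sample approximation to
# its posterior probability under vague priors.
names = list(candidates)
models, scores = [], []
for r in range(len(names) + 1):
    for subset in combinations(names, r):
        models.append(subset)
        scores.append(bic(y, [candidates[v] for v in subset]))

w = np.exp(-(np.array(scores) - min(scores)) / 2)
w /= w.sum()

# Posterior inclusion probability of each candidate regressor.
pip = {v: sum(wi for wi, m in zip(w, models) if v in m) for v in names}
print(pip)  # x1 should dominate; x2 and x3 should be near zero
```

Heterogeneity uncertainty would require, in addition, averaging over partitions of the observations, which this sketch omits.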
How reliable is pooled analysis in political economy? The globalization-welfare state nexus revisited
European Journal of Political Research, 2005
Abstract

Cited by 40 (1 self)
Panel data analysis has become very popular in comparative political economy. However, in order to draw meaningful inferences from such data, one has to address specification and estimation issues carefully. This paper aims to demonstrate various pitfalls that typically occur in applied empirical work. To illustrate this, we refer to the debate on the globalization-welfare state nexus. We re-examine a model by Garrett and Mitchell (2001), a leading study in this regard. Utilizing a data set of 17 OECD countries and the time period 1961 to 1993, they find evidence that globalization and partisan composition have a significant impact on the extent of public activity. However, because they apply a dynamic specification in levels, they do not adequately take into account both the dynamic and non-spherical nature of the data. In contrast, we propose an autoregressive model in first differences that is shown to perform well in statistical terms. Further, we explicitly pay attention to the time pattern of the globalization-welfare state nexus. Substantively, we find evidence that government spending is primarily driven by the state of the domestic economy. Neither partisan effects nor the international economic environment has affected public expenditure considerably.
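A standard way to see why a specification in levels can mislead with trending data, while first differences behave well, is the spurious-regression experiment. This is a generic illustration, not the paper's model; the sample length of 33 merely echoes the 1961 to 1993 window.

```python
import numpy as np

rng = np.random.default_rng(2)
T, reps = 33, 2000

def slope_t(y, x):
    """t-statistic on the slope of an OLS fit of y on x with intercept."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

spurious_levels = spurious_diffs = 0
for _ in range(reps):
    # Two independent random walks: no true relationship at all.
    y = np.cumsum(rng.normal(size=T))
    x = np.cumsum(rng.normal(size=T))
    if abs(slope_t(y, x)) > 1.96:          # regression in levels
        spurious_levels += 1
    if abs(slope_t(np.diff(y), np.diff(x))) > 1.96:  # first differences
        spurious_diffs += 1

print(f"false positives in levels:      {spurious_levels / reps:.0%}")
print(f"false positives in differences: {spurious_diffs / reps:.0%}")
```

In levels, the nominal 5% test rejects far too often because the trending series violate the usual error assumptions; after differencing, the rejection rate returns to roughly its nominal level.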
From association to causation: Some remarks on the history of statistics
Statist. Sci., 1999
Abstract

Cited by 36 (7 self)
The “numerical method” in medicine goes back to Pierre Louis’ study of pneumonia (1835) and John Snow’s book on the epidemiology of cholera (1855). Snow took advantage of natural experiments and used convergent lines of evidence to demonstrate that cholera is a waterborne infectious disease. More recently, investigators in the social and life sciences have used statistical models and significance tests to deduce cause-and-effect relationships from patterns of association; an early example is Yule’s study on the causes of poverty (1899). In my view, this modeling enterprise has not been successful. Investigators tend to neglect the difficulties in establishing causal relations, and the mathematical complexities obscure rather than clarify the assumptions on which the analysis is based. Formal statistical inference is, by its nature, conditional. If maintained hypotheses A, B, C, ... hold, then H can be tested against the data. However, if A, B, C, ... remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work—a principle honored more often in the breach than the observance. Snow’s work on cholera will be contrasted with modern studies that depend on statistical models and tests of significance. The examples may help to clarify the limits of current statistical techniques for making causal inferences from patterns of association.
Econometric Analysis and the Study of Economic Growth: A Skeptical Perspective
in Macroeconomics and the Real World, R. Backhouse and A. Salanti, eds., 2000
Abstract

Cited by 33 (10 self)
this paper. Andros Kourtellos and Artur Minkin have provided excellent research assistance. All errors are mine
From association to causation via regression
Indiana: University of Notre Dame, 1997
Abstract

Cited by 31 (7 self)
For nearly a century, investigators in the social sciences have used regression models to deduce cause-and-effect relationships from patterns of association. Path models and automated search procedures are more recent developments. In my view, this enterprise has not been successful. The models tend to neglect the difficulties in establishing causal relations, and the mathematical complexities tend to obscure rather than clarify the assumptions on which the analysis is based. Formal statistical inference is, by its nature, conditional. If maintained hypotheses A, B, C, ... hold, then H can be tested against the data. However, if A, B, C, ... remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work—a principle honored more often in the breach than the observance.
On specifying graphical models for causation, and the identification problem
Evaluation Review, 2004
Abstract

Cited by 29 (2 self)
This paper (which is mainly expository) sets up graphical models for causation, having a bit less than the usual complement of hypothetical counterfactuals. Assuming the invariance of error distributions may be essential for causal inference, but the errors themselves need not be invariant. Graphs can be interpreted using conditional distributions, so that we can better address connections between the mathematical framework and causality in the world. The identification problem is posed in terms of conditionals. As will be seen, causal relationships cannot be inferred from a data set by running regressions unless there is substantial prior knowledge about the mechanisms that generated the data. There are few successful applications of graphical models, mainly because few causal pathways can be excluded on a priori grounds. The invariance conditions themselves remain to be assessed.
The phantom menace: Omitted variable bias in econometric research
 Conflict Management and Peace Science
Abstract

Cited by 25 (1 self)
Quantitative political science is awash in control variables. The justification for these bloated specifications is usually the fear of omitted variable bias. A key underlying assumption is that the danger posed by omitted variable bias can be ameliorated by the inclusion of relevant control variables. Unfortunately, as this article demonstrates, there is nothing in the mathematics of regression analysis that supports this conclusion. The inclusion of additional control variables may increase or decrease the bias, and we cannot know for sure which is the case in any particular situation. A brief discussion of alternative strategies for achieving experimental control follows the main result.

Keywords: omitted variable bias, specification, control variables, research design

Quantitative political science is awash in control variables. It is not uncommon to see statistical models with 20 or more independent variables. An article in the August 2004 issue of the American Political Science Review, for example, reports a model with 22 independent variables (Duch & Palmer, 2004). The situation is no different if we consider