Results 1–10 of 27
A Theory of Inferred Causation
, 1991
Abstract

Cited by 205 (35 self)
This paper concerns the empirical basis of causation, and addresses the following issues: (1) the clues that might prompt people to perceive causal relationships in uncontrolled observations; (2) the task of inferring causal models from these clues; and (3) whether the models inferred tell us anything useful about the causal mechanisms that underlie the observations. We propose a minimal-model semantics of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. We provide an effective algorithm for inferred causation and show that, for a large class of data, the algorithm can uncover the direction of causal influences as defined above. Finally, we address the issue of nontemporal causation.
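The distinction drawn in this abstract between genuine influence and spurious covariation rests on conditional-independence patterns in observational data. A minimal sketch of the key signature, a collider x → z ← y, on simulated data (variable names, coefficients, and sample size are illustrative, not taken from the paper):

```python
import numpy as np

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out a single conditioning variable c."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)  # residual of a on c
    rb = b - np.polyval(np.polyfit(c, b, 1), c)  # residual of b on c
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)                 # two independent causes
y = rng.normal(size=n)
z = x + y + 0.5 * rng.normal(size=n)   # their common effect (a collider)

marginal = np.corrcoef(x, y)[0, 1]     # near 0: x and y are unassociated
conditional = partial_corr(x, y, z)    # strongly negative: conditioning on z induces dependence

print(f"corr(x, y)     = {marginal:+.3f}")
print(f"corr(x, y | z) = {conditional:+.3f}")
```

The pattern "x and y independent, but dependent given z" cannot arise if z is a common cause of x and y, which is how constraint-based algorithms orient edges toward a collider from observational data alone.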
From association to causation via regression
 Indiana: University of Notre Dame
, 1997
Abstract

Cited by 16 (6 self)
For nearly a century, investigators in the social sciences have used regression models to deduce cause-and-effect relationships from patterns of association. Path models and automated search procedures are more recent developments. In my view, this enterprise has not been successful. The models tend to neglect the difficulties in establishing causal relations, and the mathematical complexities tend to obscure rather than clarify the assumptions on which the analysis is based. Formal statistical inference is, by its nature, conditional. If maintained hypotheses A, B, C,... hold, then H can be tested against the data. However, if A, B, C,... remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work, a principle honored more often in the breach than the observance.
On specifying graphical models for causation, and the identification problem
 Evaluation Review
, 2004
Abstract

Cited by 16 (1 self)
This paper (which is mainly expository) sets up graphical models for causation, having a bit less than the usual complement of hypothetical counterfactuals. Assuming the invariance of error distributions may be essential for causal inference, but the errors themselves need not be invariant. Graphs can be interpreted using conditional distributions, so that we can better address connections between the mathematical framework and causality in the world. The identification problem is posed in terms of conditionals. As will be seen, causal relationships cannot be inferred from a data set by running regressions unless there is substantial prior knowledge about the mechanisms that generated the data. There are few successful applications of graphical models, mainly because few causal pathways can be excluded on a priori grounds. The invariance conditions themselves remain to be assessed.
Laws and limits of econometrics
 ECONOMIC JOURNAL
, 2003
Abstract

Cited by 8 (3 self)
We start by discussing some general weaknesses and limitations of the econometric approach. A template from sociology is used to formulate six laws that characterize mainstream activities of econometrics and the scientific limits of those activities. Next, we discuss some proximity theorems that quantify by means of explicit bounds how close we can get to the generating mechanism of the data and the optimal forecasts of next period observations using a finite number of observations. The magnitude of the bound depends on the characteristics of the model and the trajectory of the observed data. The results show that trends are more elusive to model than stationary processes in the sense that the proximity bounds are larger. By contrast, the bounds are of smaller order for models that are unidentified or nearly unidentified, so that lack or near lack of identification may not be as fatal to the use of a model in practice as some recent results on inference suggest. Finally, we look at one possible future of econometrics that involves the use of advanced econometric methods interactively by way of a web browser. With these methods users may access a suite of econometric methods and data sets online. They may also upload data to remote servers and by simple web browser selections initiate the implementation of advanced econometric software algorithms, returning the results online and by file and graphics downloads.
On the Constancy of Time-Series Econometric Equations
 Economic and Social Review
, 1996
Abstract

Cited by 6 (6 self)
Parameter constancy is a fundamental requirement for empirical models to be useful for forecasting, analysing economic policy, or testing economic theories. However, there are surprises in defining a constant-parameter model, such that models with time-varying coefficients, and expansion of the parameterization over time, are both compatible with constancy, yet unbiased forecasts may not entail a sensible model choice. In-sample tests cannot determine likely post-sample predictive failure. A comparison of two models of UK money demand illustrates the analysis empirically, as one suffers considerable predictive failure yet the other does not, despite being identical in-sample.
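The abstract's point that in-sample fit cannot anticipate post-sample predictive failure can be sketched on simulated data with a single coefficient break (the break date, coefficients, and noise level below are illustrative assumptions, not the UK money-demand models of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
beta = np.where(np.arange(n) < 150, 1.0, 2.0)  # coefficient shifts at t = 150
y = beta * x + 0.1 * rng.normal(size=n)

def ols_slope(x, y):
    """Slope from a simple OLS fit of y on x (with intercept)."""
    return float(np.polyfit(x, y, 1)[0])

b_insample = ols_slope(x[:150], y[:150])  # estimation period only: looks fine
b_full     = ols_slope(x, y)              # contaminated by the break
print(f"slope, t < 150 : {b_insample:.2f}")
print(f"slope, full    : {b_full:.2f}")

# One-step-ahead errors from the in-sample fit blow up after the break,
# even though nothing in the estimation period signalled trouble.
e_pre  = y[:150] - b_insample * x[:150]
e_post = y[150:] - b_insample * x[150:]
print(f"RMSE in-sample   : {np.sqrt(np.mean(e_pre**2)):.2f}")
print(f"RMSE post-sample : {np.sqrt(np.mean(e_post**2)):.2f}")
```

The in-sample residuals are small by construction; only the held-out period reveals the failure, which is the asymmetry the abstract emphasizes.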
Automatic Model Selection: A New Instrument for Social Science
 Electoral Studies
, 2004
Abstract

Cited by 2 (0 self)
Most social science disciplines seek an interaction between theoretical ideas and empirical evidence,
STATISTICAL MODELING OF MONETARY POLICY AND ITS EFFECTS
, 2012
Abstract

Cited by 1 (0 self)
The science of economics has some constraints and tensions that set it apart from other sciences. One reflection of these constraints and tensions is that, more than in most other scientific disciplines, it is easy to find economists of high reputation who disagree strongly with one another on issues of wide public interest. This may suggest that economics, unlike most other scientific disciplines, does not really make progress. Its theories and results seem to come and go, always in hot dispute, rather than improving over time so as to build an increasing body of knowledge. There is some truth to this view; there are examples where disputes of earlier decades have been not so much resolved as replaced by new disputes. But though economics progresses unevenly, and not even monotonically, there are some examples of real scientific progress in economics. This essay describes one — the evolution since around 1950 of our understanding of how monetary policy is determined and what its effects are. The story described here is not a simple success story. It describes an ascent to higher ground, but the ground is still shaky. Part of the purpose of the essay is to remind readers of how views strongly held in earlier decades have since been shown to be mistaken. This should encourage continuing skepticism of consensus views and motivate critics to sharpen their efforts at looking at new data, or at old data in new ways, and generating improved theories in the light of what they see. We will be tracking two interrelated strands of intellectual effort: the methodology of modeling and inference for economic time series, and the
ECONOMETRICS FOR POLICY ANALYSIS: PROGRESS AND REGRESS
Abstract

Cited by 1 (1 self)
I don’t want to rehash that. (II) Time I spent last year visiting central banks and interviewing people there about what econometric models they use and how they use them. (III) Recent technical developments that have converted theoretical advantages of Bayesian over classical approaches to inference into practical reality in some applied areas. Associated applied work and methodological commentary emerging in the literature. (IV) Haavelmo’s 1944 paper/monograph “The Probability Approach in Econometrics”, and some related previous literature. We are going to begin by discussing (IV), using it as a kind of table of contents for aspects of (II) and (III).
Model Discovery and Trygve Haavelmo’s Legacy
Abstract

Cited by 1 (1 self)
Trygve Haavelmo’s Probability Approach aimed to implement economic theories, but he later recognized their incompleteness. Although he did not explicitly consider model selection, we apply it when theory-relevant variables, {x_t}, are retained without selection while selecting other candidate variables, {w_t}. Under the null that the {w_t} are irrelevant, by orthogonalizing with respect to the {x_t}, the estimator distributions of the x_t's parameters are unaffected by selection, even for more variables than observations and for endogenous variables. Under the alternative, when the joint model nests the generating process, an improved outcome results from selection. This implements Haavelmo’s program relatively costlessly.
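The selection scheme described in this abstract can be sketched in a few lines: retain the theory variable without selection, orthogonalize each candidate with respect to it, then select among the orthogonalized candidates. A minimal sketch under the null (all candidates irrelevant); the data-generating process, the t-ratio cutoff of 2, and the dimensions are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k_w = 500, 10
x = rng.normal(size=n)            # theory-relevant variable, always retained
W = rng.normal(size=(n, k_w))     # candidate variables; all irrelevant here
y = 2.0 * x + rng.normal(size=n)

# Orthogonalize each candidate with respect to (1, x), so that selecting
# among them cannot disturb the estimate of x's parameter.
X = np.column_stack([np.ones(n), x])
coef_w, *_ = np.linalg.lstsq(X, W, rcond=None)
W_orth = W - X @ coef_w

# Select candidates whose t-ratio exceeds ~2 in a joint regression.
Z = np.column_stack([X, W_orth])
b, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ b
s2 = resid @ resid / (n - Z.shape[1])
se = np.sqrt(s2 * np.diag(np.linalg.inv(Z.T @ Z)))
keep = np.abs(b[2:] / se[2:]) > 2.0

# Because the kept columns are orthogonal to X in-sample, x's coefficient
# is the same whether or not any candidate survives selection.
Z_final = np.column_stack([X, W_orth[:, keep]]) if keep.any() else X
b_final, *_ = np.linalg.lstsq(Z_final, y, rcond=None)
print(f"beta_x = {b_final[1]:.3f}, candidates kept: {int(keep.sum())}")
```

Under the null, roughly a 5% fraction of the irrelevant candidates is retained by chance, while the estimate of the theory parameter is untouched by the selection step, which is the "relatively costless" property the abstract claims.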
Statistical Models for Causation
, 2005
Abstract

Cited by 1 (0 self)
We review the basis for inferring causation by statistical modeling. Parameters should be stable under interventions, and so should error distributions. There are also statistical conditions on the errors. Stability is difficult to establish a priori, and the statistical conditions are equally problematic. Therefore, causal relationships are seldom to be inferred from a data set by running statistical algorithms, unless there is substantial prior knowledge about the mechanisms that generated the data. We begin with linear models (regression analysis) and then turn to graphical models, which may in principle be nonlinear.