Results 1–10 of 42
Instrumental Variables and the Search for Identification: From Supply and Demand to Natural Experiments
Journal of Economic Perspectives, 2001
"... The method of instrumental variables is a signature technique in the econometrics toolkit. The canonical example, and earliest applications, of instrumental variables involved attempts to estimate demand and supply curves. 1 Economists such as P.G. Wright, Henry Schultz, Elmer Working and Ragnar Fri ..."
Abstract

Cited by 145 (1 self)
The method of instrumental variables is a signature technique in the econometrics toolkit. The canonical example, and earliest applications, of instrumental variables involved attempts to estimate demand and supply curves. Economists such as P.G. Wright, Henry Schultz, Elmer Working and Ragnar Frisch were interested in estimating the elasticities of demand and supply for products ranging from herring to butter, usually with time series data. If the demand and supply curves shift over time, the observed data on quantities and prices reflect a set of equilibrium points on both curves. Consequently, an ordinary least squares regression of quantities on prices fails to identify—that is, trace out—either the supply or demand relationship. P.G. Wright (1928) confronted this issue in the seminal application of instrumental variables: estimating the elasticities of supply and demand for flaxseed, the source of linseed oil. Wright noted the difficulty of obtaining estimates of the elasticities of supply and demand from the relationship between price and quantity.
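To make the simultaneity problem concrete, the following minimal simulation sketch (with made-up parameters, not Wright's flaxseed data) shows that when both curves shift, regressing quantity on price recovers neither slope, while a variable that shifts only supply serves as an instrument that traces out the demand curve.

import numpy as np

# Hypothetical linear demand (q = beta*p + u_d) and supply (q = 0.5*p + z + u_s),
# where z is an observable supply shifter excluded from demand (the instrument).
rng = np.random.default_rng(0)
n = 5_000
z = rng.normal(size=n)                      # supply shifter / instrument
u_d = rng.normal(size=n)                    # unobserved demand shock
u_s = rng.normal(size=n)                    # unobserved supply shock
beta = -1.0                                 # true demand slope

# Equilibrium price and quantity implied by the two curves.
p = (u_d - u_s - z) / (0.5 - beta)
q = beta * p + u_d

# OLS of quantity on price is biased: price is correlated with the demand shock.
ols_slope = np.cov(q, p)[0, 1] / np.var(p, ddof=1)

# IV with a single instrument (the Wald/2SLS ratio): cov(q, z) / cov(p, z).
iv_slope = np.cov(q, z)[0, 1] / np.cov(p, z)[0, 1]

print(f"true demand slope {beta}, OLS {ols_slope:.2f}, IV {iv_slope:.2f}")

The OLS slope lands between the two structural slopes, whereas the instrument-based ratio recovers the demand slope because z moves price only through the supply curve.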
The Art of Causal Conjecture
1996
"... Causal relations are regularities in the way Nature’s predictions change. Since we usually do not stand in Nature’s shoes, we usually do not observe these dynamic regularities directly. But we sometimes observe statistical regularities that are most easily explained by hypothesizing such dynamic reg ..."
Abstract

Cited by 84 (18 self)
Causal relations are regularities in the way Nature’s predictions change. Since we usually do not stand in Nature’s shoes, we usually do not observe these dynamic regularities directly. But we sometimes observe statistical regularities that are most easily explained by hypothesizing such dynamic regularities. In this chapter, I illustrate this process of causal conjecture with a few simple examples. I first consider a negative causal relation: causal uncorrelatedness. Two variables are causally uncorrelated if there are no steps in Nature’s event tree that change them both in expected value. They have, in this sense, no common causes. This implies, as we shall see, that the two variables are uncorrelated in the classical sense in every situation in the tree. When we observe that variables are uncorrelated in many different situations, then we may conjecture that this is due to their being causally uncorrelated. I will also discuss three causal relations of a positive character. These relations assert, each in a different way, that the causes (steps in Nature’s tree) that affect a certain variable X also affect another variable Y. This implies regularities in certain classical statistical predictions. The first causal relation, which I call linear sign, implies regularity in linear regression. The second, scored sign, implies regularity in conditional
The incidental parameter problem since 1948
Journal of Econometrics 95 (2000), 391–413
"... This paper was written to mark the 50th anniversary of Neyman and Scott's Econometrica paper defining the incidental parameter problem. It surveys the history both of the paper and of the problem in the statistics and econometrics literature. ..."
Abstract

Cited by 47 (0 self)
This paper was written to mark the 50th anniversary of Neyman and Scott's Econometrica paper defining the incidental parameter problem. It surveys the history both of the paper and of the problem in the statistics and econometrics literature.
An Extended Class of Instrumental Variables for the Estimation of Causal Effects
UCSD Dept. of Economics Discussion Paper, 1996
"... This paper builds on the structural equations, treatment effect, and machine learning literatures to provide a causal framework that permits the identification and estimation of causal effects from observational studies. We begin by providing a causal interpretation for standard exogenous regresso ..."
Abstract

Cited by 30 (11 self)
This paper builds on the structural equations, treatment effect, and machine learning literatures to provide a causal framework that permits the identification and estimation of causal effects from observational studies. We begin by providing a causal interpretation for standard exogenous regressors and standard “valid” and “relevant” instrumental variables. We then build on this interpretation to characterize extended instrumental variables (EIV) methods, that is, methods that make use of variables that need not be valid instruments in the standard sense, but that are nevertheless instrumental in the recovery of causal effects of interest. After examining special cases of single and double EIV methods, we provide necessary and sufficient conditions for the identification of causal effects by means of EIV and provide consistent and asymptotically normal estimators for the effects of interest.
Indirect inference and calibration of dynamic stochastic general equilibrium models
Journal of Econometrics, 2007
"... We advocate in this paper the use of a Sequential Partial Indirect Inference (SPII) approach, in order to account for calibration practice where dynamic stochastic general equilibrium models (DGSE) are studied only through their ability to reproduce some wellchosen moments. We stress that, despite ..."
Abstract

Cited by 9 (0 self)
We advocate in this paper the use of a Sequential Partial Indirect Inference (SPII) approach, in order to account for calibration practice where dynamic stochastic general equilibrium (DSGE) models are studied only through their ability to reproduce some well-chosen moments. We stress that, despite a lack of statistical formalization, the controversial calibration methodology addresses a genuine issue about the consequences of misspecification in highly nonlinear and dynamic structural macro-models. Such likely misspecification is even more detrimental than for direct inference, since the misspecified model is used for building simulated paths. The only way to obtain robust estimators, and also to assess the model despite misspecification, is to examine the structural model through a convenient and parsimonious instrumental model, one that basically does not capture what goes wrong in the simulated paths. We argue that a well-driven SPII strategy may be seen as a rigorous calibrationist approach, capturing both the advantages of calibration (accounting for structural “astatistical” ideas) and of the inferential approach (precise appraisal of loss functions and conditions of validity). This methodology should be useful for the empirical assessment of structural models such as those stemming from Real Business Cycle theory or the asset pricing literature.
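As a rough illustration of the indirect-inference idea behind this (a minimal sketch under hypothetical assumptions, not the paper's actual SPII procedure), the snippet below takes a toy AR(1) as the structural model and a single auxiliary statistic, the first-order autocorrelation, as the instrumental model: the structural parameter is chosen so that simulated paths reproduce the auxiliary statistic computed on the observed data.

import numpy as np

# Toy structural model: AR(1) with unknown persistence rho (hypothetical example).
def simulate_ar1(rho, eps):
    y = np.zeros(len(eps))
    for t in range(1, len(eps)):
        y[t] = rho * y[t - 1] + eps[t]
    return y

# Auxiliary ("instrumental") model: a single binding statistic, the OLS slope of
# y_t on y_{t-1}, i.e. the first-order sample autocorrelation.
def auxiliary_stat(y):
    return np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])

# "Observed" data, generated here from the structural model with rho = 0.6.
data = simulate_ar1(0.6, np.random.default_rng(0).normal(size=2_000))
target = auxiliary_stat(data)

# Indirect-inference criterion: match the auxiliary statistic on long simulated
# paths, holding the simulation shocks fixed (common random numbers), and pick
# the structural parameter value that minimizes the distance.
sim_eps = np.random.default_rng(1).normal(size=20_000)
grid = np.linspace(0.0, 0.95, 96)
losses = [(auxiliary_stat(simulate_ar1(r, sim_eps)) - target) ** 2 for r in grid]
rho_hat = grid[int(np.argmin(losses))]
print(f"auxiliary statistic on data: {target:.3f}, estimated rho: {rho_hat:.2f}")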
On the Constancy of Time-Series Econometric Equations
Economic and Social Review, 1996
"... Parameter constancy is a fundamental requirement for empirical models to be useful for forecasting, analysing economic policy, or testing economic theories. However, there are surprises in defining a constantparameter model, such that models with timevarying coefficients, and expansion of the para ..."
Abstract

Cited by 6 (6 self)
Parameter constancy is a fundamental requirement for empirical models to be useful for forecasting, analysing economic policy, or testing economic theories. However, there are surprises in defining a constant-parameter model: models with time-varying coefficients and models whose parameterization expands over time are both compatible with constancy, yet unbiased forecasts may not entail a sensible model choice. In-sample tests cannot determine likely post-sample predictive failure. A comparison of two models of UK money demand illustrates the analysis empirically, as one suffers considerable predictive failure yet the other does not, despite the two being identical in-sample. Nevertheless, it remains unclear precisely what constancy entails, what aspects of models should be constant, and what features of models in-sample ...
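A minimal sketch of the kind of post-sample check at issue (simulated data and a simple Chow-type forecast ratio of my own construction, not the paper's UK money-demand comparison): a model estimated on an in-sample period forecasts a hold-out period in which the coefficient has shifted, and the ratio of mean squared forecast error to in-sample residual variance flags the predictive failure.

import numpy as np

# Simulated example: the slope on x shifts after period T, so an OLS model that
# looks constant in-sample fails badly over the forecast period.
rng = np.random.default_rng(0)
T, H = 200, 40                                        # estimation sample, hold-out horizon
x = rng.normal(size=T + H)
slope = np.where(np.arange(T + H) < T, 1.0, 1.8)      # post-sample parameter shift
y = slope * x + rng.normal(scale=0.5, size=T + H)

# Fit by OLS on the first T observations only.
X_in = np.column_stack([np.ones(T), x[:T]])
coef, *_ = np.linalg.lstsq(X_in, y[:T], rcond=None)
resid_var = np.sum((y[:T] - X_in @ coef) ** 2) / (T - 2)

# Forecast the hold-out period with the in-sample estimates.
X_out = np.column_stack([np.ones(H), x[T:]])
forecast_errors = y[T:] - X_out @ coef

# Chow-type forecast ratio (ignoring the parameter-uncertainty correction):
# values far above 1 signal post-sample predictive failure.
ratio = np.mean(forecast_errors ** 2) / resid_var
print(f"forecast-failure ratio: {ratio:.2f}")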
Explaining Cointegration Analysis: Part I
1999
"... ‘Classical econometric theory assumes that observed data come from a stationary process, where means and variances are constant over time. Graphs of economic time series, and the historical record of economic forecasting, reveal the invalidity of such an assumption. Consequently, we discuss the imp ..."
Abstract

Cited by 5 (1 self)
Classical econometric theory assumes that observed data come from a stationary process, where means and variances are constant over time. Graphs of economic time series, and the historical record of economic forecasting, reveal the invalidity of such an assumption. Consequently, we discuss the importance of stationarity for empirical modeling and inference; describe the effects of incorrectly assuming stationarity; explain the basic concepts of nonstationarity; note some sources of nonstationarity; formulate a class of nonstationary processes (autoregressions with unit roots) that seem empirically relevant for analyzing economic time series; and show when an analysis can be transformed by means of differencing and cointegrating combinations so stationarity becomes a reasonable assumption. We then describe how to test for unit roots and cointegration. Monte Carlo simulations and empirical examples illustrate the analysis.
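As a rough, numpy-only sketch of the differencing and cointegration logic (a toy simulation with made-up parameters; it reports Dickey-Fuller regression slopes rather than proper test statistics with the right critical values): two series sharing a common stochastic trend are individually nonstationary, but the residual from regressing one on the other behaves like a stationary series.

import numpy as np

rng = np.random.default_rng(0)
n = 2_000
trend = np.cumsum(rng.normal(size=n))        # common I(1) stochastic trend
x = trend + rng.normal(size=n)
y = 2.0 * trend + rng.normal(size=n)         # cointegrated with x: y - 2x is stationary

def df_slope(z):
    # Slope of a Dickey-Fuller style regression of delta z_t on z_{t-1}:
    # close to 0 suggests a unit root, clearly negative suggests stationarity.
    dz, lag = np.diff(z), z[:-1]
    return np.dot(dz, lag) / np.dot(lag, lag)

print("DF slope, y in levels:", round(df_slope(y), 3))                   # near 0

# Engle-Granger style second step: regress y on x, then examine the residual.
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b
print("DF slope, cointegrating residual:", round(df_slope(resid), 3))   # well below 0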
On the Role of Counterfactuals in Inferring Causal Effects of Treatments
"... This Discussion Paper is issued within the framework of IZA’s research area Project Evaluation. Any opinions expressed here are those of the author(s) and not those of the institute. Research disseminated by IZA may include views on policy, but the institute itself takes no institutional policy posi ..."
Abstract

Cited by 5 (2 self)
This Discussion Paper is issued within the framework of IZA’s research area Project Evaluation. Any opinions expressed here are those of the author(s) and not those of the institute. Research disseminated by IZA may include views on policy, but the institute itself takes no institutional policy positions. The Institute for the Study of Labor (IZA) in Bonn is a local and virtual international research center and a place of communication between science, politics and business. IZA is an independent, nonprofit limited liability company (Gesellschaft mit beschränkter Haftung) supported by the Deutsche Post AG. The center is associated with the University of Bonn and offers a stimulating research environment through its research networks, research support, and visitors and doctoral programs. IZA engages in (i) original and internationally competitive research in all fields of labor economics, (ii) development of policy concepts, and (iii) dissemination of research results and concepts to the interested public. The current research program deals with (1) mobility and flexibility of labor markets, (2) internationalization of labor markets and European integration, (3) the welfare state and labor markets, (4) labor markets in transition, (5) the future of work, (6) project evaluation and (7) general labor economics. IZA Discussion Papers often represent preliminary work and are circulated to encourage discussion. Citation of such a paper should account for its provisional character. IZA Discussion Paper No. 354
The Error Term in the History of Time Series Econometrics
Econometric Theory, 2001
"... We argue that many methodological confusions in timeseries econometrics may be seen as arising out of ambivalence or confusion about the error terms. Relationships between macroeconomic time series are inexact and, inevitably, the early econometricians found that any estimated relationship would on ..."
Abstract

Cited by 4 (1 self)
We argue that many methodological confusions in time-series econometrics may be seen as arising out of ambivalence or confusion about the error terms. Relationships between macroeconomic time series are inexact and, inevitably, the early econometricians found that any estimated relationship would only fit with errors. Slutsky interpreted these errors as shocks that constitute the motive force behind business cycles. Frisch tried to dissect the errors further into two parts: stimuli, which are analogous to shocks, and nuisance aberrations. However, he failed to provide a statistical framework to make this distinction operational. Haavelmo, and subsequent researchers at the Cowles Commission, saw errors in equations as providing the statistical foundations for econometric models, and required that they conform to a priori distributional assumptions specified in structural models of the general equilibrium type, later known as simultaneous-equations models (SEM). Since theoretical models were at that time mostly static, the structural modelling strategy frequently relegated the dynamics in time-series data to nuisance, atheoretical complications. Revival of the shock interpretation in theoretical models came about through the rational expectations movement and the development of the VAR (Vector AutoRegression) modelling approach. The so-called LSE (London School of Economics) dynamic specification approach decomposes the dynamics of the modelled variable into three parts: short-run shocks, disequilibrium shocks and innovative residuals, with only the first two of these sustaining an economic interpretation.
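To fix ideas, a stylized equilibrium-correction equation (my own illustration of the kind of decomposition described, not an equation from the paper) separates the three components:

\Delta y_t \;=\; \beta\,\Delta x_t \;-\; \alpha\,\bigl(y_{t-1} - \gamma\,x_{t-1}\bigr) \;+\; \varepsilon_t ,

where the term in \Delta x_t carries the short-run shocks, the equilibrium-correction term in parentheses carries the disequilibrium adjustment, and \varepsilon_t is the innovative residual.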