Results 1–10 of 165
How Much Should We Trust Differences-in-Differences Estimates?, Quarterly Journal of Economics, 2004
Abstract

Cited by 284 (0 self)
Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate of its “effect” as well as the standard error of this estimate. These conventional DD standard errors severely understate the standard deviation of the estimators: we find an “effect” significant at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking into account the autocorrelation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states, and one correction that collapses the time-series information into a “pre” and “post” period and explicitly takes into account the effective sample size works well even for small numbers of states.
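The placebo exercise this abstract describes can be sketched in a few lines. The following is a minimal simulation, assuming a synthetic AR(1) state panel in place of the CPS wage data; the state counts, AR coefficient, and replication counts are illustrative choices, not the paper's:

```python
import numpy as np

def placebo_dd_rejection_rate(n_states=20, n_years=20, rho=0.8,
                              n_sims=200, seed=0):
    """Fraction of placebo 'laws' declared significant at the 5% level
    when conventional OLS standard errors ignore serial correlation
    within states. Under the nominal size this should be ~0.05."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        # AR(1) errors within each state induce serial correlation
        e = np.zeros((n_states, n_years))
        e[:, 0] = rng.standard_normal(n_states)
        for t in range(1, n_years):
            e[:, t] = rho * e[:, t - 1] + rng.standard_normal(n_states)
        y = e.ravel()
        # placebo law: a random half of states treated after a random year
        treated = rng.permutation(n_states) < n_states // 2
        law_year = rng.integers(5, n_years - 5)
        d = np.zeros((n_states, n_years))
        d[treated, law_year:] = 1.0
        d = d.ravel()

        # exact two-way within transformation (state and year fixed effects)
        def demean(v):
            m = v.reshape(n_states, n_years)
            return (m - m.mean(1, keepdims=True)
                      - m.mean(0, keepdims=True) + m.mean()).ravel()

        yd, dd = demean(y), demean(d)
        beta = (dd @ yd) / (dd @ dd)
        resid = yd - beta * dd
        dof = len(yd) - n_states - n_years
        se = np.sqrt(resid @ resid / dof / (dd @ dd))  # conventional OLS SE
        if abs(beta / se) > 1.96:
            rejections += 1
    return rejections / n_sims
```

With serially correlated outcomes, the rejection rate comes out well above the nominal 5 percent, which is the paper's point.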
Stock Market Prices Do Not Follow Random Walks: Evidence from a Simple Specification Test, Review of Financial Studies, 1988
Abstract

Cited by 238 (14 self)
In this article we test the random walk hypothesis for weekly stock market returns by comparing variance estimators derived from data sampled at different frequencies. The random walk model is strongly rejected for the entire sample period (1962–1985) and for all subperiods for a variety of aggregate returns indexes and size-sorted portfolios. Although the rejections are due largely to the behavior of small stocks, they cannot be attributed completely to the effects of infrequent trading or time-varying volatilities. Moreover, the rejection of the random walk for weekly returns does not support a mean-reverting model of asset prices.
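The variance comparison behind the test can be sketched as follows; this is an illustrative implementation of the basic statistic only, not the paper's full heteroskedasticity-robust procedure:

```python
import numpy as np

def variance_ratio(prices, q):
    """Variance-ratio statistic: the variance of q-period log returns
    divided by q times the variance of one-period log returns.
    Under a random walk, VR(q) should be close to 1; positive return
    autocorrelation pushes it above 1, mean reversion below 1."""
    logp = np.log(np.asarray(prices, dtype=float))
    r1 = np.diff(logp)             # one-period log returns
    rq = logp[q:] - logp[:-q]      # overlapping q-period log returns
    return rq.var(ddof=1) / (q * r1.var(ddof=1))
```

For example, for returns following an AR(1) with coefficient 0.5, VR(2) is approximately 1 + 0.5 = 1.5 in population, so the statistic detects the departure from a random walk.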
Time Series Regression with a Unit Root, Econometrica, 1987
Cited by 203 (27 self)
Robust Inference with Multi-way Clustering, 2006
Abstract

Cited by 150 (4 self)
In this paper we propose a new variance estimator for OLS as well as for nonlinear estimators such as logit, probit and GMM. This variance estimator enables cluster-robust inference when there is two-way or multi-way clustering that is non-nested. The variance estimator extends the standard cluster-robust variance estimator, or sandwich estimator, for one-way clustering (e.g. Liang and Zeger (1986), Arellano (1987)) and relies on similarly weak distributional assumptions. Our method is easily implemented in statistical packages, such as Stata and SAS, that already offer cluster-robust standard errors when there is one-way clustering. The method is demonstrated by a Monte Carlo analysis for a two-way random effects model; a Monte Carlo analysis of a placebo law that extends the state-year effects example of Bertrand et al. (2004) to two dimensions; and by application to two studies in the empirical public/labor literature where two-way clustering is present.
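The inclusion-exclusion idea (add the two one-way cluster-robust matrices, then subtract the matrix clustered on their intersection) can be sketched for OLS like this; it is a minimal illustration that omits the finite-sample corrections practitioners usually apply:

```python
import numpy as np

def cluster_vcov(X, resid, groups):
    """One-way cluster-robust (sandwich) variance matrix for OLS."""
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        m = groups == g
        s = X[m].T @ resid[m]          # cluster score sum
        meat += np.outer(s, s)
    return XtX_inv @ meat @ XtX_inv

def twoway_cluster_vcov(X, resid, g1, g2):
    """Two-way clustering by inclusion-exclusion:
    V(g1) + V(g2) - V(g1 intersect g2)."""
    inter = np.array([f"{a}|{b}" for a, b in zip(g1, g2)])
    return (cluster_vcov(X, resid, g1)
            + cluster_vcov(X, resid, g2)
            - cluster_vcov(X, resid, inter))
```

When the two cluster dimensions coincide, the formula collapses back to ordinary one-way clustering, which is the sense in which it extends the standard sandwich estimator.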
Bootstrap-Based Improvements for Inference with Clustered Errors, 2006
Abstract

Cited by 103 (5 self)
Microeconometrics researchers have increasingly realized the essential need to account for any within-group dependence in estimating standard errors of regression parameter estimates. The typical preferred solution is to calculate cluster-robust or sandwich standard errors that permit quite general heteroskedasticity and within-cluster error correlation, but presume that the number of clusters is large. In applications with few (5–30) clusters, standard asymptotic tests can over-reject considerably. We investigate more accurate inference using cluster bootstrap-t procedures that provide asymptotic refinement. These procedures are evaluated using Monte Carlos, including the much-cited differences-in-differences example of Bertrand, Mullainathan and Duflo (2004). In situations where standard methods lead to rejection rates in excess of ten percent for tests of nominal size 0.05, our methods can reduce this to five percent. In principle a pairs cluster bootstrap should work well, but in practice a wild cluster bootstrap performs better.
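A minimal sketch of the wild cluster bootstrap-t favored in the abstract, with residuals from the null-restricted fit flipped by Rademacher weights drawn once per cluster. This is an illustrative implementation under simplifying assumptions (Rademacher weights only, no small-sample corrections):

```python
import numpy as np

def _cluster_se(X, resid, groups, j):
    """Cluster-robust standard error of coefficient j."""
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = sum(np.outer(X[groups == g].T @ resid[groups == g],
                        X[groups == g].T @ resid[groups == g])
               for g in np.unique(groups))
    return np.sqrt((XtX_inv @ meat @ XtX_inv)[j, j])

def wild_cluster_bootstrap_p(y, X, groups, j, reps=499, seed=0):
    """Wild cluster bootstrap-t p-value for H0: beta_j = 0."""
    rng = np.random.default_rng(seed)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    t0 = beta[j] / _cluster_se(X, y - X @ beta, groups, j)
    # restricted fit imposing beta_j = 0
    Xr = np.delete(X, j, axis=1)
    br = np.linalg.lstsq(Xr, y, rcond=None)[0]
    yhat_r, ur = Xr @ br, y - Xr @ br
    labels = np.unique(groups)
    hits = 0
    for _ in range(reps):
        w = rng.choice([-1.0, 1.0], size=len(labels))   # one weight per cluster
        ystar = yhat_r + ur * w[np.searchsorted(labels, groups)]
        bstar = np.linalg.lstsq(X, ystar, rcond=None)[0]
        tstar = bstar[j] / _cluster_se(X, ystar - X @ bstar, groups, j)
        hits += abs(tstar) >= abs(t0)
    return hits / reps
```

Because the bootstrap resamples whole clusters' residual signs, the t-statistic's null distribution reflects the within-cluster dependence that the plain asymptotic test ignores.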
Large Sample Sieve Estimation of Semi-Nonparametric Models, Handbook of Econometrics, 2007
Abstract

Cited by 93 (13 self)
Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; semi-nonparametric models are more flexible and robust, but lead to other complications such as introducing infinite-dimensional parameter spaces that may not be compact. The method of sieves provides one way to tackle such complexities by optimizing an empirical criterion function over a sequence of approximating parameter spaces, called sieves, which are significantly less complex than the original parameter space. With different choices of criteria and sieves, the method of sieves is very flexible in estimating complicated econometric models. For example, it can simultaneously estimate the parametric and nonparametric components in semi-nonparametric models with or without constraints. It can easily incorporate prior information, often derived from economic theory, such as monotonicity, convexity, additivity, multiplicity, exclusion and non-negativity. This chapter describes estimation of semi-nonparametric econometric models via the method of sieves. We present some general results on the large sample properties of the sieve estimates, including consistency of the sieve extremum estimates, convergence rates of the sieve M-estimates, pointwise normality of series estimates of regression functions, and root-n asymptotic normality and efficiency of sieve estimates of smooth functionals of infinite-dimensional parameters. Examples are used to illustrate the general results.
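A toy version of the sieve idea for a single unknown regression function: least squares over a polynomial space whose dimension grows slowly with the sample size. The n^(1/3) growth rule and the Chebyshev basis here are illustrative choices, not the chapter's recommendations:

```python
import numpy as np

def sieve_regression(x, y, n_terms=None):
    """Polynomial-sieve least squares: approximate an unknown regression
    function by minimizing squared error over a finite-dimensional
    Chebyshev polynomial space whose dimension grows slowly with n."""
    n = len(x)
    if n_terms is None:
        n_terms = max(2, int(np.ceil(n ** (1 / 3))))  # slowly growing sieve
    lo, hi = x.min(), x.max()

    def basis(xq):
        # map to [-1, 1] for a numerically stable Chebyshev basis
        z = (np.asarray(xq, dtype=float) - lo) / (hi - lo) * 2 - 1
        return np.polynomial.chebyshev.chebvander(z, n_terms - 1)

    coef = np.linalg.lstsq(basis(x), y, rcond=None)[0]
    return lambda xq: basis(xq) @ coef   # fitted function
```

Each sample size picks out a finite-dimensional approximating space, so the optimization is always over a compact, tractable set even though the target function lives in an infinite-dimensional space.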
Understanding Instrumental Variables in Models with Essential Heterogeneity, The Review of Economics and Statistics, 2006
Tests of Conditional Predictive Ability, Econometrica, 2006
Abstract

Cited by 51 (1 self)
We argue that the current framework for predictive ability testing (e.g., West, 1996) is not necessarily useful for real-time forecast selection, i.e., for assessing which of two competing forecasting methods will perform better in the future. We propose an alternative framework for out-of-sample comparison of predictive ability which delivers more practically relevant conclusions. Our approach is based on inference about conditional expectations of forecasts and forecast errors rather than the unconditional expectations that are the focus of the existing literature. We capture important determinants of forecast performance that are neglected in the existing literature by evaluating what we call the forecasting method (the model and the parameter estimation procedure), rather than just the forecasting model. Compared to previous approaches, our tests are valid under more general data assumptions (heterogeneity rather than stationarity) and estimation methods, and they can handle comparison of both nested and non-nested models, which is not currently possible. To illustrate the usefulness of the proposed tests, we compare the forecast performance of three leading parameter-reduction methods for macroeconomic forecasting using a large number of predictors: a sequential model selection approach, ...
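The flavor of a conditional predictive ability statistic can be sketched as follows, using a constant and the lagged loss differential as the conditioning instruments. This is an illustrative reduced form of the idea (test whether the loss differential is unpredictable given past information), not the authors' full framework:

```python
import numpy as np

def gw_conditional_test(loss1, loss2):
    """Conditional predictive ability sketch: with instruments
    h_t = (1, d_t), test whether E[h_t * d_{t+1}] = 0, where d is
    the loss differential. Returns a Wald-type statistic that is
    approximately chi-square(2) under the null of equal conditional
    predictive ability (95% critical value ~ 5.99)."""
    d = np.asarray(loss1, dtype=float) - np.asarray(loss2, dtype=float)
    h = np.column_stack([np.ones(len(d) - 1), d[:-1]])  # instruments h_t
    zd = h * d[1:, None]                                # h_t * d_{t+1}
    zbar = zd.mean(0)
    omega = (zd - zbar).T @ (zd - zbar) / len(zd)       # sample covariance
    return len(zd) * zbar @ np.linalg.solve(omega, zbar)
```

Including the lagged differential as an instrument is what makes the test conditional: it can detect situations where one method is better only in certain, predictable, periods.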
Asymptotic Distributions of Quasi-Maximum Likelihood Estimates for Spatial Autoregressive Models, Econometrica, 2004
Abstract

Cited by 50 (8 self)
This paper investigates asymptotic properties of the maximum likelihood estimator and the quasi-maximum likelihood estimator for the spatial autoregressive model. The rates of convergence of those estimators may depend on some general features of the spatial weights matrix of the model, so it is important to distinguish between different spatial scenarios. Under the scenario that each unit is influenced by only a few neighboring units, the estimators may have a √n rate of convergence and be asymptotically normal. When each unit can be influenced by many neighbors, irregularity of the information matrix may occur and various components of the estimators may have different rates of convergence.
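For concreteness, the quasi-maximum likelihood step for the SAR model y = λWy + Xβ + ε can be sketched by concentrating the Gaussian log-likelihood in the spatial parameter λ. The grid search below is an illustrative simplification, not the paper's procedure:

```python
import numpy as np

def sar_profile_loglik(lam, y, X, W):
    """Concentrated (quasi-)Gaussian log-likelihood of the SAR model
    in the spatial parameter lam; beta and sigma^2 are profiled out."""
    n = len(y)
    A = np.eye(n) - lam * W
    Ay = A @ y
    beta = np.linalg.lstsq(X, Ay, rcond=None)[0]
    e = Ay - X @ beta
    sig2 = e @ e / n
    _, logdet = np.linalg.slogdet(A)      # the Jacobian term log|I - lam*W|
    return logdet - n / 2 * np.log(sig2)  # additive constants dropped

def sar_qmle_lambda(y, X, W, grid=np.linspace(-0.9, 0.9, 181)):
    """Grid-search maximizer of the concentrated log-likelihood."""
    return grid[np.argmax([sar_profile_loglik(l, y, X, W) for l in grid])]
```

The log-determinant term is what distinguishes this from OLS on the spatially lagged regression; ignoring it biases the estimate of λ.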
Stochastic Permanent Breaks, Review of Economics and Statistics, 1998
Abstract

Cited by 35 (0 self)
This paper aims to bridge the gap between processes where shocks are permanent and those with transitory shocks by formulating a process in which the long-run impact of each innovation is time-varying and stochastic. Frequent transitory shocks are supplemented by occasional permanent shifts. The stochastic permanent breaks (STOPBREAK) process is based on the premise that a shock is more likely to be permanent if it is large than if it is small. This formulation is motivated by a class of processes that undergo random structural breaks. Consistency and asymptotic normality of quasi-maximum likelihood estimates are established, and locally best hypothesis tests of the null of a random walk are developed. The model is applied to relative prices of pairs of stocks, and significant test statistics result.

KEYWORDS: Structural breaks, nonlinear moving average, unit roots, quasi-maximum likelihood estimation, Neyman–Pearson testing, locally best test, temporary cointegration.

1. INTRODUCTION. Time series analysts tend to draw a sharp line between processes where shocks have a permanent effect and those where they do not. The most notable example of this is the distinction between stationary AR(1) processes, where all shocks are transitory, and the random walk. As the autoregressive root approaches one, the rate at which shocks are expected to decay decreases, but they remain transitory. This paper aims to bridge the gap between transience and permanence by formulating a process in which the long-run impact of each observation is time-varying and stochastic. At one extreme all innovations are transitory and at the other, all shocks are permanent.
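The premise that large shocks are nearly permanent while small ones are nearly transitory can be illustrated by simulating such a process. The weighting q_t = ε_t²/(γ + ε_t²) below is an illustrative choice in the spirit of the STOPBREAK formulation, not necessarily the paper's exact specification:

```python
import numpy as np

def simulate_stopbreak(n, gamma=1.0, seed=0):
    """Simulate a STOPBREAK-style process: each shock eps_t moves the
    permanent component m by a fraction q_t in (0, 1) that increases
    with the shock's magnitude, so large shocks are nearly permanent
    and small shocks nearly transitory."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    y = np.empty(n)
    m = 0.0
    for t in range(n):
        y[t] = m + eps[t]
        q = eps[t] ** 2 / (gamma + eps[t] ** 2)  # long-run impact weight
        m += q * eps[t]                          # the permanent share
    return y
```

The parameter γ interpolates between the two extremes in the abstract: as γ grows every q_t shrinks toward 0 (white noise, all shocks transitory), and as γ shrinks every q_t approaches 1 (random walk, all shocks permanent).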