Results 1–10 of 576
Bootstrap-Based Improvements for Inference with Clustered Errors
, 2006
Abstract

Cited by 236 (11 self)
Microeconometrics researchers have increasingly realized the essential need to account for any within-group dependence in estimating standard errors of regression parameter estimates. The typical preferred solution is to calculate cluster-robust or sandwich standard errors that permit quite general heteroskedasticity and within-cluster error correlation, but presume that the number of clusters is large. In applications with few (5–30) clusters, standard asymptotic tests can over-reject considerably. We investigate more accurate inference using cluster bootstrap-t procedures that provide asymptotic refinement. These procedures are evaluated using Monte Carlos, including the much-cited differences-in-differences example of Bertrand, Mullainathan and Duflo (2004). In situations where standard methods lead to rejection rates of ten percent or more for tests of nominal size 0.05, our methods can reduce this to five percent. In principle a pairs cluster bootstrap should work well, but in practice a wild cluster bootstrap performs better.
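The abstract above does not include code; the following is a minimal sketch of the wild cluster bootstrap-t idea it describes, written by us for illustration. It tests H0: slope = 0 in a one-regressor OLS model by imposing the null, flipping the sign of each cluster's restricted residuals with Rademacher weights, and comparing cluster-robust t-statistics. All function and variable names are ours, not the authors'.

```python
import numpy as np

def wild_cluster_bootstrap_p(y, x, cluster, n_boot=999, seed=0):
    """Illustrative wild cluster bootstrap-t p-value for H0: slope = 0
    in y = a + b*x + u, with errors correlated within clusters."""
    X = np.column_stack([np.ones_like(x), x])
    clusters = np.unique(cluster)
    rng = np.random.default_rng(seed)

    def cluster_robust_t(yb):
        # OLS fit, then a cluster-robust (sandwich) variance for the slope
        b = np.linalg.lstsq(X, yb, rcond=None)[0]
        u = yb - X @ b
        XtX_inv = np.linalg.inv(X.T @ X)
        meat = np.zeros((2, 2))
        for g in clusters:
            s = X[cluster == g].T @ u[cluster == g]  # cluster score
            meat += np.outer(s, s)
        V = XtX_inv @ meat @ XtX_inv
        return b[1] / np.sqrt(V[1, 1])

    t_obs = cluster_robust_t(y)
    # Restricted residuals: impose the null (intercept-only model)
    b0 = y.mean()
    u0 = y - b0
    t_boot = np.empty(n_boot)
    for i in range(n_boot):
        # One Rademacher weight per cluster, applied to all its residuals
        w = rng.choice([-1.0, 1.0], size=len(clusters))
        wmap = dict(zip(clusters, w))
        yb = b0 + u0 * np.array([wmap[g] for g in cluster])
        t_boot[i] = cluster_robust_t(yb)
    # Symmetric two-sided bootstrap p-value
    return np.mean(np.abs(t_boot) >= abs(t_obs))
```

The key choice, per the abstract, is to resample at the cluster level rather than the observation level, so that the within-cluster dependence is preserved in every bootstrap sample.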
Interval estimation for a binomial proportion
 Statist. Sci
, 2001
Abstract

Cited by 164 (2 self)
Abstract. We revisit the problem of interval estimation of a binomial proportion. The erratic behavior of the coverage probability of the standard Wald confidence interval has previously been remarked on in the literature (Blyth and Still, Agresti and Coull, Santner and others). We begin by showing that the chaotic coverage properties of the Wald interval are far more persistent than is appreciated. Furthermore, common textbook prescriptions regarding its safety are misleading and defective in several respects and cannot be trusted. This leads us to consideration of alternative intervals. A number of natural alternatives are presented, each with its motivation and context. Each interval is examined for its coverage probability and its length. Based on this analysis, we recommend the Wilson interval or the equal-tailed Jeffreys prior interval for small n and the interval suggested in Agresti and Coull for larger n. We also provide an additional frequentist justification for use of the Jeffreys interval. Key words and phrases: Bayes, binomial distribution, confidence intervals, coverage probability, Edgeworth expansion, expected length, Jeffreys prior, normal approximation, posterior.
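The Wilson interval recommended above has a simple closed form. The following is an illustrative implementation written by us (the function name is ours); it inverts the score test rather than the Wald test, which is what keeps it from collapsing at the boundaries:

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for a binomial proportion, given k successes
    out of n trials; z is the normal quantile (1.96 for 95% coverage)."""
    phat = k / n
    denom = 1 + z**2 / n
    # Center shrinks phat toward 1/2; the shrinkage vanishes as n grows
    center = (phat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2))
    return center - half, center + half
```

For example, with k = 0 and n = 20 the Wald interval degenerates to the single point 0, while the Wilson interval still gives an informative upper bound of about 0.16, illustrating the boundary behavior the abstract criticizes.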
The Dynamic Effects of Neutral and Investment-Specific Technology Shocks
 Journal of Political Economy
Abstract

Cited by 164 (2 self)
The neoclassical growth model is used to identify the short-run effects of two technology shocks. Neutral shocks affect the production of all goods homogeneously, and investment-specific shocks affect only investment goods. The paper finds that previous estimates, based on considering only neutral technical change, substantially understate the effects of technology shocks. When investment-specific technical change is taken into account, the two technology shocks combined account for 40–60% of the fluctuations in output and hours at business cycle frequencies. The two shocks also account for more than 50% of the forecast error of output and hours over an eight-year horizon. The investment-specific shocks account for the majority of these short-run effects. This paper is a substantial revision to "Technology Shocks Matter."
Error Bands for Impulse Responses
 Econometrica
, 1999
Abstract

Cited by 160 (4 self)
We show how to correctly extend known methods for generating error bands in reduced-form VARs to overidentified models. We argue that the conventional pointwise bands common in the literature should be supplemented with measures of shape uncertainty, and we show how to generate such measures. We focus on bands that characterize the shape of the likelihood. Such bands are not classical confidence regions. We explain that classical confidence regions mix information about parameter location with information about model fit, and hence can be misleading as summaries of the implications of the data for the location of parameters. Because classical confidence regions also present conceptual and computational problems in multivariate time series models, we suggest that likelihood-based bands, rather than approximate confidence bands based on asymptotic theory, be standard in reporting results for this type of model.
The bootstrap
 In Handbook of Econometrics
, 2001
Abstract

Cited by 150 (2 self)
The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one’s data. It amounts to treating the data as if they were the population for the purpose of evaluating the distribution of interest. Under mild regularity conditions, the bootstrap yields an approximation to the distribution of an estimator or test statistic that is at least as accurate as the approximation obtained from first-order asymptotic theory.
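The resampling idea in this abstract is a few lines of code. The sketch below (our own illustration, not the chapter's) estimates the standard error of an arbitrary statistic by treating the sample as the population and redrawing from it with replacement:

```python
import numpy as np

def bootstrap_se(data, stat, n_boot=2000, seed=0):
    """Nonparametric bootstrap standard error of stat(data):
    resample the data with replacement, recompute the statistic,
    and take the standard deviation of the replicates."""
    rng = np.random.default_rng(seed)
    reps = np.array([
        stat(rng.choice(data, size=len(data), replace=True))
        for _ in range(n_boot)
    ])
    return reps.std(ddof=1)
```

For the sample mean, this should closely reproduce the analytic standard error s/sqrt(n), which is a useful sanity check before applying it to statistics with no closed-form variance.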
Semiparametric Difference-in-Differences Estimators
 Review of Economic Studies
, 2005
Abstract

Cited by 137 (5 self)
The difference-in-differences (DID) estimator is one of the most popular tools in applied economics research for evaluating the effects of public interventions and other treatments of interest on relevant outcome variables. However, it is well known that the DID estimator is based on strong identifying assumptions. In particular, the conventional DID estimator requires that, in the absence of the treatment, the average outcomes for the treated and control groups would have followed parallel paths over time. This assumption may be implausible if pre-treatment characteristics that are thought to be associated with the dynamics of the outcome variable are unbalanced between the treated and the untreated. That would be the case, for example, if selection for treatment is influenced by individual transitory shocks on past outcomes (Ashenfelter’s dip). This paper considers the case in which differences in observed characteristics create non-parallel outcome dynamics between treated and controls. It is shown that, in such a case, a simple two-step strategy can be used to estimate the average effect of the treatment for the treated. In addition, the estimation framework proposed in this paper allows the use of covariates to describe how the average effect of the treatment varies with changes in observed characteristics.
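For reference, the conventional parallel-trends estimator that this paper generalizes reduces, in the 2x2 case, to a difference of two differences of group means. The sketch below (our own illustration) shows only that conventional version; the paper's semiparametric two-step correction replaces the raw control-group trend with one reweighted on covariates, which this sketch does not implement.

```python
def did_estimate(treated_post, treated_pre, control_post, control_pre):
    """Textbook 2x2 difference-in-differences estimate from four group
    means: the treated group's change minus the control group's change.
    Valid only under the parallel-trends assumption discussed above."""
    return (treated_post - treated_pre) - (control_post - control_pre)
```

For example, if treated means go from 10 to 12 while control means go from 5 to 6, the estimated treatment effect is (12 - 10) - (6 - 5) = 1.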
SiZer for exploration of structures in curves
 Journal of the American Statistical Association
, 1997
Abstract

Cited by 125 (19 self)
In the use of smoothing methods in data analysis, an important question is often: which observed features are "really there", as opposed to being spurious sampling artifacts? An approach is described, based on scale-space ideas originally developed in the computer vision literature. Assessment of SIgnificant ZERo crossings of derivatives results in the SiZer map, a graphical device for displaying the significance of features with respect to both location and scale. Here "scale" means "level of resolution", i.e. bandwidth.
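The scale-space idea underlying SiZer is to smooth the same data at a whole family of bandwidths and study how features appear and disappear across resolutions. The sketch below (ours, for illustration) computes only that family of Nadaraya-Watson kernel smooths; SiZer itself goes further and tests, at each (location, scale) pair, whether the smoothed derivative is significantly positive, negative, or indistinguishable from zero.

```python
import numpy as np

def scale_space_smooths(x, y, grid, bandwidths):
    """Gaussian-kernel (Nadaraya-Watson) smooths of (x, y) evaluated on
    grid, one smooth per bandwidth h: the scale-space family of fits."""
    out = {}
    for h in bandwidths:
        # Kernel weights: rows index grid points, columns index data points
        w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
        out[h] = (w * y).sum(axis=1) / w.sum(axis=1)
    return out
```

Small bandwidths preserve local wiggles (some of which are noise); large bandwidths show only coarse structure. SiZer's contribution is deciding which of those wiggles are statistically significant rather than sampling artifacts.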
Can mutual fund "stars" really pick stocks? New evidence from a bootstrap analysis
 Journal of Finance
, 2006
"... ..."
Confidence Estimation for Machine Translation
 In M. Rollins (Ed.), Mental Imagery
, 2004
"... ..."