Results 1–10 of 159
Large Sample Properties of Matching Estimators for Average Treatment Effects", Econometrica 74,235267
 Abadie A. and Imbens G
, 2006
"... Matching estimators for average treatment effects are widely used in evaluation research despite the fact that their large sample properties have not been established in many cases. The absence of formal results in this area may be partly due to the fact that standard asymptotic expansions do not ap ..."
Abstract

Cited by 111 (5 self)
 Add to MetaCart
Matching estimators for average treatment effects are widely used in evaluation research despite the fact that their large sample properties have not been established in many cases. The absence of formal results in this area may be partly due to the fact that standard asymptotic expansions do not apply to matching estimators with a fixed number of matches because such estimators are highly nonsmooth functionals of the data. In this article we develop new methods for analyzing the large sample properties of matching estimators and establish a number of new results. We focus on matching with replacement with a fixed number of matches. First, we show that matching estimators are not N^{1/2}-consistent in general and describe conditions under which matching estimators do attain N^{1/2}-consistency. Second, we show that even in settings where matching estimators are N^{1/2}-consistent, simple matching estimators with a fixed number of matches do not attain the semiparametric efficiency bound. Third, we provide a consistent estimator for the large sample variance that does not require consistent nonparametric estimation of unknown functions. Software for implementing these methods is available in Matlab, Stata, and R.
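The simple matching estimator the abstract studies (a fixed number M of matches, with replacement) can be sketched in a few lines. This is an illustrative numpy version with Euclidean matching and no bias correction, not the authors' released Matlab/Stata/R software:

```python
import numpy as np

def matching_ate(X, W, Y, M=1):
    """Simple M-nearest-neighbor matching estimator of the average
    treatment effect, matching with replacement on covariates X
    (Euclidean distance, no bias correction)."""
    W = np.asarray(W, int)
    Y = np.asarray(Y, float)
    X = np.asarray(X, float).reshape(len(Y), -1)
    imputed = np.empty_like(Y)
    for i in range(len(Y)):
        pool = np.where(W != W[i])[0]               # opposite treatment arm
        d = np.linalg.norm(X[pool] - X[i], axis=1)  # distances to unit i
        nn = pool[np.argsort(d)[:M]]                # its M nearest matches
        imputed[i] = Y[nn].mean()                   # impute missing outcome
    y1 = np.where(W == 1, Y, imputed)               # observed or imputed Y(1)
    y0 = np.where(W == 0, Y, imputed)               # observed or imputed Y(0)
    return (y1 - y0).mean()
```

Because each unit's missing potential outcome is filled in from a fixed, small number of matches, small perturbations of the data can change which units are matched, which is exactly the nonsmoothness the abstract points to.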
"Large Sample Sieve Estimation of Semi-Nonparametric Models" (2007). Handbook of Econometrics.
"... Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; seminonparametric models are more flexible and robust, but lead to other complications such as introducing infinite dimensional parameter spaces that may not be compact. The method o ..."
Abstract

Cited by 92 (17 self)
 Add to MetaCart
Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; semi-nonparametric models are more flexible and robust, but lead to other complications such as introducing infinite-dimensional parameter spaces that may not be compact. The method of sieves provides one way to tackle such complexities by optimizing an empirical criterion function over a sequence of approximating parameter spaces, called sieves, which are significantly less complex than the original parameter space. With different choices of criteria and sieves, the method of sieves is very flexible in estimating complicated econometric models. For example, it can simultaneously estimate the parametric and nonparametric components in semi-nonparametric models with or without constraints. It can easily incorporate prior information, often derived from economic theory, such as monotonicity, convexity, additivity, multiplicity, exclusion and nonnegativity. This chapter describes estimation of semi-nonparametric econometric models via the method of sieves. We present some general results on the large sample properties of the sieve estimates, including consistency of the sieve extremum estimates, convergence rates of the sieve M-estimates, pointwise normality of series estimates of regression functions, and root-n asymptotic normality and efficiency of sieve estimates of smooth functionals of infinite-dimensional parameters. Examples are used to illustrate the general results.
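The sieve idea, optimizing an empirical criterion over an approximating space that grows with the sample, can be illustrated with its simplest instance: least-squares regression over a polynomial sieve. The growth rule for the sieve dimension below is purely illustrative:

```python
import numpy as np

def sieve_regression(x, y, J=None):
    """Series (sieve) least-squares estimate of E[y|x]: minimize the
    empirical squared-error criterion over the span of the first J
    polynomial basis terms, a sieve whose dimension grows with n."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    if J is None:
        J = max(2, int(np.ceil(n ** (1 / 3))))    # illustrative growth rule
    basis = np.vander(x, J, increasing=True)      # [1, x, x^2, ..., x^{J-1}]
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    # Return the fitted function, evaluable at new points x0.
    return lambda x0: np.vander(np.atleast_1d(np.asarray(x0, float)),
                                J, increasing=True) @ coef
```

Other sieves (splines, wavelets, neural networks) and other criteria (likelihood, GMM) slot into the same template, which is what makes the method as flexible as the abstract claims.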
"Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference" (2007). Political Analysis.
"... Although published works rarely include causal estimates from more than a few model specifications, authors usually choose the presented estimates from numerous trial runs readers never see. Given the often large variation in estimates across choices of control variables, functional forms, and other ..."
Abstract

Cited by 86 (32 self)
 Add to MetaCart
Although published works rarely include causal estimates from more than a few model specifications, authors usually choose the presented estimates from numerous trial runs readers never see. Given the often large variation in estimates across choices of control variables, functional forms, and other modeling assumptions, how can researchers ensure that the few estimates presented are accurate or representative? How do readers know that publications are not merely demonstrations that it is possible to find a specification that fits the author’s favorite hypothesis? And how do we evaluate or even define statistical properties like unbiasedness or mean squared error when no unique model or estimator even exists? Matching methods, which offer the promise of causal inference with fewer assumptions, constitute one possible way forward, but crucial results in this fast-growing methodological ...
"Semiparametric Difference-in-Differences Estimators" (2005). Review of Economic Studies.
"... The differenceindifferences (DID) estimator is one of the most popular tools for applied research in economics to evaluate the effects of public interventions and other treatments of interest on some relevant outcome variables. However, it is wellknown that the DID estimator is based on strong id ..."
Abstract

Cited by 61 (2 self)
 Add to MetaCart
The difference-in-differences (DID) estimator is one of the most popular tools for applied research in economics to evaluate the effects of public interventions and other treatments of interest on some relevant outcome variables. However, it is well known that the DID estimator is based on strong identifying assumptions. In particular, the conventional DID estimator requires that, in the absence of the treatment, the average outcomes for the treated and control groups would have followed parallel paths over time. This assumption may be implausible if pretreatment characteristics that are thought to be associated with the dynamics of the outcome variable are unbalanced between the treated and the untreated. That would be the case, for example, if selection for treatment is influenced by individual transitory shocks on past outcomes (Ashenfelter’s Dip). This paper considers the case in which differences in observed characteristics create non-parallel outcome dynamics between treated and controls. It is shown that, in such a case, a simple two-step strategy can be used to estimate the average effect of the treatment for the treated. In addition, the estimation framework proposed in this paper allows the use of covariates to describe how the average effect of the treatment varies with changes in observed characteristics.
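The second step of a two-step strategy of this kind can be sketched as inverse-probability reweighting of the outcome *changes*, with the first-step propensity-score estimates p(X) taken as given. Variable names are illustrative; this is a sketch of the weighting form, not the paper's exact estimator:

```python
import numpy as np

def did_att(delta_y, w, pscore):
    """Second step of a semiparametric DID estimator of the ATT:
    reweight outcome changes delta_y so untreated units whose
    covariates are overrepresented among the treated count for more.
    Implements mean[ dY * (W - p(X)) / (P(W=1) * (1 - p(X))) ]."""
    delta_y, w, p = (np.asarray(a, float) for a in (delta_y, w, pscore))
    return np.mean(delta_y * (w - p) / (w.mean() * (1.0 - p)))
```

When p(X) is constant (no covariate imbalance), the formula collapses to the plain DID contrast of mean outcome changes between treated and controls.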
"The Propensity Score with Continuous Treatments" (2004). In Applied Bayesian Modeling and Causal Inference from Incomplete-Data Perspectives.
"... ..."
"Efficient Semiparametric Estimation of Quantile Treatment Effects" (2003).
"... This paper presents calculations of semiparametric efficiency bounds for quantile treatment effects parameters when selection to treatment is based on observable characteristics. The paper also presents three estimation procedures for these parameters, all of which have two steps: a nonparametric e ..."
Abstract

Cited by 46 (5 self)
 Add to MetaCart
This paper presents calculations of semiparametric efficiency bounds for quantile treatment effects parameters when selection to treatment is based on observable characteristics. The paper also presents three estimation procedures for these parameters, all of which have two steps: a nonparametric estimation and a computation of the difference between the solutions of two distinct minimization problems. Root-N consistency, asymptotic normality, and the achievement of the semiparametric efficiency bound are shown for one of the three estimators. In the final part of the paper, an empirical application to a job training program reveals the importance of heterogeneous treatment effects, showing that for this program the effects are concentrated in the upper quantiles of the earnings distribution.
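One way to realize the two-step structure the abstract describes is to difference two inverse-probability-weighted quantiles, each the solution of a weighted check-function minimization. A rough numpy sketch, taking the first-step propensity estimates as given (names and the CDF-inversion shortcut are illustrative):

```python
import numpy as np

def weighted_quantile(y, weights, tau):
    """Minimizer of sum_i w_i * rho_tau(y_i - q), the weighted check
    function, obtained by inverting the weighted empirical CDF."""
    y = np.asarray(y, float)
    w = np.asarray(weights, float)
    order = np.argsort(y)
    y, w = y[order], w[order]
    cum = np.cumsum(w) / w.sum()
    return y[np.searchsorted(cum, tau)]

def qte(y, treat, pscore, tau=0.5):
    """Quantile treatment effect at quantile tau: the difference of two
    inverse-probability-weighted quantiles, one per treatment arm.
    First-step propensity estimates (pscore) are taken as given."""
    y, d, p = (np.asarray(a, float) for a in (y, treat, pscore))
    q1 = weighted_quantile(y[d == 1], 1.0 / p[d == 1], tau)
    q0 = weighted_quantile(y[d == 0], 1.0 / (1.0 - p[d == 0]), tau)
    return q1 - q0
```

Evaluating `qte` over a grid of tau values is what reveals effect heterogeneity of the kind the empirical application finds in the upper quantiles.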
"Cross-Validation and the Estimation of Conditional Probability Densities" (2004). Journal of the American Statistical Association.
"... ABSTRACT. Many practical problems, especially some connected with forecasting, require nonparametric estimation of conditional densities from mixed data. For example, given an explanatory data vector X for a prospective customer, with components that could include the customer’s salary, occupation, ..."
Abstract

Cited by 34 (3 self)
 Add to MetaCart
Many practical problems, especially some connected with forecasting, require nonparametric estimation of conditional densities from mixed data. For example, given an explanatory data vector X for a prospective customer, with components that could include the customer’s salary, occupation, age, sex, marital status and address, a company might wish to estimate the density of the expenditure, Y, that could be made by that person, basing the inference on observations of (X, Y) for previous clients. Choosing appropriate smoothing parameters for this problem can be tricky, not least because plug-in rules take a particularly complex form in the case of mixed data. An obvious difficulty is that there exists no general formula for the optimal smoothing parameters. More insidiously, and more seriously, it can be difficult to determine which components of X are relevant to the problem of conditional inference. For example, if the jth component of X is independent of Y then that component is irrelevant to estimating the density of Y given X, and ideally should be dropped before conducting inference. In this paper we show that cross-validation overcomes these difficulties. It automatically determines which components are relevant and which are not, through assigning large smoothing parameters to the latter and consequently shrinking them towards the uniform distribution on the respective marginals. This effectively removes irrelevant components from contention, by suppressing their contribution to estimator variance; they already have very small bias, a consequence of their independence of Y. Cross-validation also gives us important information about which components are relevant: the relevant components are precisely those which cross-validation has chosen to smooth in a traditional way, by assigning them smoothing parameters of conventional size. Indeed, cross-validation produces asymptotically optimal smoothing for relevant components, while eliminating irrelevant components by oversmoothing. In the problem of nonparametric estimation of a conditional density, cross-validation comes into its own as a method with no obvious peers.
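The leave-one-out criterion that drives this bandwidth selection is easy to write down. Here is a minimal sketch for one scalar covariate with Gaussian kernels, using likelihood cross-validation as the criterion; the paper's setting, with mixed data, product kernels, and its own CV criterion, is more general:

```python
import numpy as np

def gauss(u):
    return np.exp(-0.5 * u * u) / np.sqrt(2.0 * np.pi)

def cv_loglik(x, y, hx, hy):
    """Leave-one-out log-likelihood of the kernel conditional density
    estimate f(y|x), the criterion to maximize over bandwidths. A very
    large hx drives the x-kernel toward uniform weights, i.e. smooths
    an irrelevant covariate away, as the abstract describes."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n, ll = len(x), 0.0
    for i in range(n):
        m = np.arange(n) != i                       # leave observation i out
        kx = gauss((x[i] - x[m]) / hx)              # covariate kernel weights
        ky = gauss((y[i] - y[m]) / hy) / hy         # normalized y-kernel
        ll += np.log((kx @ ky) / kx.sum())          # f_hat_{-i}(y_i | x_i)
    return ll / n

def select_bandwidths(x, y, grid):
    """Pick (hx, hy) maximizing the CV criterion over a grid."""
    return max(((hx, hy) for hx in grid for hy in grid),
               key=lambda h: cv_loglik(x, y, *h))
```

When y depends strongly on x, the criterion rewards a small hx; for an irrelevant covariate the maximizing hx diverges, which is the oversmoothing-to-elimination mechanism described above.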
"Propensity Score Estimation with Boosted Regression for Evaluating Causal Effects in Observational Studies" (2004). Psychological Methods.
"... Causal effect modeling with naturalistic rather than experimental data is challenging. In observational studies participants in different treatment conditions may also differ on pretreatment characteristics that influence outcomes. Propensity score methods can theoretically eliminate these confounds ..."
Abstract

Cited by 32 (4 self)
 Add to MetaCart
Causal effect modeling with naturalistic rather than experimental data is challenging. In observational studies participants in different treatment conditions may also differ on pretreatment characteristics that influence outcomes. Propensity score methods can theoretically eliminate these confounds for all observed covariates, but accurate estimation of propensity scores is impeded by large numbers of covariates, uncertain functional forms for their associations with treatment selection, and other problems. This paper demonstrates that boosting, a modern statistical technique, can overcome many of these obstacles. We illustrate this approach with a study of adolescent probationers in substance abuse treatment programs. Propensity score weights estimated using boosting eliminate most pretreatment group differences, and substantially alter the apparent relative effects of adolescent substance abuse treatment. Experimental studies offer the most rigorous evidence with which to establish treatment efficacy, but they are not always practical or feasible. Experimental treatment evaluations can be expensive to field and may be too slow to produce answers to pressing questions. In some cases ...
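A toy version of the approach: gradient-boosted regression stumps fit to the logistic pseudo-residuals of the treatment indicator, plus the ATT weights built from the fitted scores. The paper uses full boosted trees over many covariates, so this single-covariate sketch only illustrates the mechanics:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def boosted_pscore(x, w, n_trees=200, lr=0.1):
    """Propensity scores P(W=1|x) from gradient-boosted regression
    stumps: each round fits a depth-1 split on x to the logistic
    pseudo-residuals and takes a small step (learning rate lr)."""
    x = np.asarray(x, float)
    w = np.asarray(w, float)
    F = np.full_like(w, np.log(w.mean() / (1.0 - w.mean())))  # base-rate logit
    thresholds = np.unique(x)[:-1]
    for _ in range(n_trees):
        r = w - sigmoid(F)                       # logistic pseudo-residuals
        best = None
        for t in thresholds:                     # best single split on x
            left = x <= t
            pred = np.where(left, r[left].mean(), r[~left].mean())
            sse = ((r - pred) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, pred)
        F += lr * best[1]                        # shrunken stump update
    return sigmoid(F)

def att_weights(w, p):
    """Weights for the ATT: 1 for treated units, p/(1-p) for controls."""
    return np.where(w == 1, 1.0, p / (1.0 - p))
```

With many covariates one would search splits over every covariate and grow deeper trees; the attraction the abstract highlights is that the boosted fit needs no pre-specified functional form for treatment selection.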
"An Extended Class of Instrumental Variables for the Estimation of Causal Effects" (1996). UCSD Dept. of Economics Discussion Paper.
"... This paper builds on the structural equations, treatment effect, and machine learning literatures to provide a causal framework that permits the identification and estimation of causal effects from observational studies. We begin by providing a causal interpretation for standard exogenous regresso ..."
Abstract

Cited by 32 (13 self)
 Add to MetaCart
This paper builds on the structural equations, treatment effect, and machine learning literatures to provide a causal framework that permits the identification and estimation of causal effects from observational studies. We begin by providing a causal interpretation for standard exogenous regressors and standard "valid" and "relevant" instrumental variables. We then build on this interpretation to characterize extended instrumental variables (EIV) methods, that is, methods that make use of variables that need not be valid instruments in the standard sense, but that are nevertheless instrumental in the recovery of causal effects of interest. After examining special cases of single and double EIV methods, we provide necessary and sufficient conditions for the identification of causal effects by means of EIV and provide consistent and asymptotically normal estimators for the effects of interest.
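As a point of reference, the standard "valid and relevant" instrumental-variables case that the extended framework generalizes corresponds to linear two-stage least squares, which fits in a few lines (this is the textbook estimator, not the paper's EIV methods):

```python
import numpy as np

def two_sls(y, X, Z):
    """Two-stage least squares: beta = (X' Pz X)^{-1} X' Pz y, where
    Pz projects onto the column space of the instrument matrix Z."""
    y = np.asarray(y, float)
    X = np.asarray(X, float)
    Z = np.asarray(Z, float)
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto col(Z)
    return np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)
```

When Z = X (all regressors exogenous), Pz X = X and the estimator reduces to ordinary least squares, the other baseline case the paper's causal interpretation covers.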