Results 1–10 of 36
Large Sample Properties of Matching Estimators for Average Treatment Effects
Econometrica 74, 235–267, 2006
Cited by 318 (18 self)
Matching estimators for average treatment effects are widely used in evaluation research despite the fact that their large sample properties have not been established in many cases. The absence of formal results in this area may be partly due to the fact that standard asymptotic expansions do not apply to matching estimators with a fixed number of matches because such estimators are highly nonsmooth functionals of the data. In this article we develop new methods for analyzing the large sample properties of matching estimators and establish a number of new results. We focus on matching with replacement with a fixed number of matches. First, we show that matching estimators are not N^{1/2}-consistent in general and describe conditions under which matching estimators do attain N^{1/2}-consistency. Second, we show that even in settings where matching estimators are N^{1/2}-consistent, simple matching estimators with a fixed number of matches do not attain the semiparametric efficiency bound. Third, we provide a consistent estimator for the large sample variance that does not require consistent nonparametric estimation of unknown functions. Software for implementing these methods is available in Matlab, Stata, and R.
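To make the estimator discussed in this abstract concrete, here is a minimal sketch of m-nearest-neighbour matching with replacement (an illustrative NumPy implementation for intuition only, not the authors' Matlab/Stata/R software mentioned above):

```python
import numpy as np

def matching_ate(y, w, x, m=1):
    """m-nearest-neighbour matching estimator of the average treatment
    effect: each unit's missing potential outcome is imputed by the mean
    outcome of its m nearest covariate matches (with replacement) in the
    opposite treatment group."""
    y = np.asarray(y, float)
    w = np.asarray(w, int)
    x = np.asarray(x, float).reshape(len(y), -1)
    imputed = np.empty_like(y)
    for i in range(len(y)):
        opposite = np.flatnonzero(w != w[i])            # units in the other group
        d = np.linalg.norm(x[opposite] - x[i], axis=1)  # covariate distances
        nn = opposite[np.argsort(d)[:m]]                # indices of m closest
        imputed[i] = y[nn].mean()
    y1 = np.where(w == 1, y, imputed)   # outcome under treatment (observed/imputed)
    y0 = np.where(w == 0, y, imputed)   # outcome under control (observed/imputed)
    return float(np.mean(y1 - y0))

# Toy data with a constant treatment effect of 2.
rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=(n, 1))
w = (rng.random(n) < 0.5).astype(int)
y = x[:, 0] + 2.0 * w + 0.1 * rng.normal(size=n)
est = matching_ate(y, w, x, m=3)
```

Note that m stays fixed as n grows, which is exactly the regime in which the abstract shows the estimator can fail to be N^{1/2}-consistent and, even when it is, misses the semiparametric efficiency bound.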
Large Sample Sieve Estimation of Semi-Nonparametric Models
Handbook of Econometrics, 2007
Cited by 185 (19 self)
Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; semi-nonparametric models are more flexible and robust, but lead to other complications such as introducing infinite-dimensional parameter spaces that may not be compact. The method of sieves provides one way to tackle such complexities by optimizing an empirical criterion function over a sequence of approximating parameter spaces, called sieves, which are significantly less complex than the original parameter space. With different choices of criteria and sieves, the method of sieves is very flexible in estimating complicated econometric models. For example, it can simultaneously estimate the parametric and nonparametric components in semi-nonparametric models with or without constraints. It can easily incorporate prior information, often derived from economic theory, such as monotonicity, convexity, additivity, multiplicity, exclusion and nonnegativity. This chapter describes estimation of semi-nonparametric econometric models via the method of sieves. We present some general results on the large sample properties of the sieve estimates, including consistency of the sieve extremum estimates, convergence rates of the sieve M-estimates, pointwise normality of series estimates of regression functions, root-n asymptotic normality and efficiency of sieve estimates of smooth functionals of infinite-dimensional parameters. Examples are used to illustrate the general results.
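A toy version of the sieve idea, assuming a least-squares criterion and a polynomial sieve of dimension K (illustrative only; the chapter covers far more general criteria and sieve spaces):

```python
import numpy as np

def sieve_fit(x, y, K):
    """Series (sieve) least squares: minimise the empirical criterion over
    the K-dimensional sieve spanned by the polynomials 1, x, ..., x^(K-1)."""
    B = np.vander(np.asarray(x, float), K, increasing=True)   # n-by-K design
    coef, *_ = np.linalg.lstsq(B, np.asarray(y, float), rcond=None)
    def g_hat(x0):
        x0 = np.atleast_1d(np.asarray(x0, float))
        return np.vander(x0, K, increasing=True) @ coef
    return g_hat

# Recover g(x) = sin(x) from noisy data; in the theory K grows with n
# so the sieves approximate the full infinite-dimensional space.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 3.0, size=500)
y = np.sin(x) + 0.05 * rng.normal(size=500)
est = float(sieve_fit(x, y, K=6)(1.0)[0])
```

Each sieve is a simple finite-dimensional space, so the optimization is ordinary least squares; the infinite-dimensional difficulties are pushed into the choice of how fast K grows with the sample size.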
VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS
2008
Cited by 65 (1 self)
Summary. We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with that of the underlying model, and under which the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. Following model selection, oracle-efficient, asymptotically normal estimators of the nonzero components can be obtained by using existing methods. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. Key words and phrases. Adaptive group Lasso; component selection; high-dimensional data; nonparametric regression; selection consistency. Short title. Nonparametric component selection. AMS 2000 subject classification. Primary 62G08, 62G20; secondary 62G99.
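The selection step can be sketched as follows, assuming a proximal-gradient group-Lasso solver and a low-order polynomial basis in place of the paper's B-splines; this illustrates group-wise component selection only, not the full adaptive two-step procedure:

```python
import numpy as np

def group_lasso(X_groups, y, lam, iters=2000):
    """Proximal-gradient solver for the group Lasso:
        min_b  (1/2n) ||y - sum_j X_j b_j||^2 + lam * sum_j ||b_j||_2
    X_groups: list of n-by-d_j design blocks, one block per component."""
    X = np.hstack(X_groups)
    n = len(y)
    idx = np.cumsum([0] + [G.shape[1] for G in X_groups])
    step = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()   # 1 / Lipschitz constant
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        z = b - step * (X.T @ (X @ b - y) / n)           # gradient step
        b = z.copy()
        for j in range(len(X_groups)):                   # group soft-threshold
            g = z[idx[j]:idx[j + 1]]
            nrm = np.linalg.norm(g)
            scale = max(0.0, 1.0 - step * lam / nrm) if nrm > 0.0 else 0.0
            b[idx[j]:idx[j + 1]] = scale * g
    return [b[idx[j]:idx[j + 1]] for j in range(len(X_groups))]

# Toy data: only the first of four covariates matters (quadratically).
rng = np.random.default_rng(1)
n = 300
x = rng.uniform(-1.0, 1.0, size=(n, 4))
y = x[:, 0] ** 2
y = y - y.mean() + 0.05 * rng.normal(size=n)
groups = []
for j in range(4):
    B = np.column_stack([x[:, j] ** k for k in (1, 2, 3)])  # cubic basis
    groups.append(B - B.mean(axis=0))                       # centre columns
norms = [float(np.linalg.norm(c)) for c in group_lasso(groups, y, lam=0.05)]
```

With a suitable penalty level the irrelevant groups are set exactly to zero, which is the component-selection property the abstract describes; the paper's adaptive step then reweights the penalty using this initial fit.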
Estimating semiparametric ARCH(∞) models by kernel smoothing methods
Econometrica, 2005
Spline-backfitted kernel smoothing of nonlinear additive autoregression model
2007
Cited by 25 (14 self)
Application of nonparametric and semiparametric regression techniques to high-dimensional time series data has been hampered by the lack of effective tools to address the “curse of dimensionality.” Under rather weak conditions, we propose spline-backfitted kernel estimators of the component functions for nonlinear additive time series data that are both computationally expedient, so they are usable for analyzing very high-dimensional time series, and theoretically reliable, so inference can be made on the component functions with confidence. Simulation experiments provide strong evidence corroborating the asymptotic theory.
Rate-Optimal Estimation for a General Class of Nonparametric Regression Models with Unknown Link Functions
Annals of Statistics, 2007
Cited by 13 (2 self)
This paper discusses a nonparametric regression model that naturally generalizes neural network models. The model is based on a finite number of one-dimensional transformations and can be estimated with a one-dimensional rate of convergence. The model contains the generalized additive model with unknown link function as a special case. For this case, it is shown that the additive components and link function can be estimated with the optimal rate by a smoothing spline that is the solution of a penalized least squares criterion.
Additive isotone regression
In: Asymptotics: Particles, Processes and Inverse Problems: Festschrift for Piet Groeneboom. IMS, 2007
Cited by 11 (0 self)
This paper is dedicated to Piet Groeneboom on the occasion of his 65th birthday. This paper is about optimal estimation of the additive components of a nonparametric, additive isotone regression model. It is shown that, asymptotically and up to first order, each additive component can be estimated as well as it could be by a least squares estimator if the other components were known. The algorithm for the calculation of the estimator uses backfitting. Convergence of the algorithm is shown. Finite sample properties are also compared through simulation experiments.
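A minimal sketch of backfitting with an isotonic inner step, assuming the pool-adjacent-violators algorithm (PAVA) for each component fit; it conveys the structure of the estimator, not the paper's asymptotic refinements:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y."""
    blocks = []                       # [mean, size] of merged blocks
    for v in map(float, y):
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop()     # merge adjacent violating blocks
            m1, c1 = blocks.pop()
            blocks.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    return np.concatenate([np.full(c, m) for m, c in blocks])

def backfit_isotone(X, y, sweeps=25):
    """Backfitting for y = c + f_1(x_1) + ... + f_p(x_p) with each
    component f_j non-decreasing: cycle over components, fitting an
    isotonic regression (PAVA) to the current partial residuals."""
    n, p = X.shape
    f = np.zeros((n, p))
    c = y.mean()
    for _ in range(sweeps):
        for j in range(p):
            r = y - c - f.sum(axis=1) + f[:, j]   # partial residual for f_j
            order = np.argsort(X[:, j])
            fit = np.empty(n)
            fit[order] = pava(r[order])
            f[:, j] = fit - fit.mean()            # centre for identifiability
    return c, f

# Toy model with two increasing components.
rng = np.random.default_rng(0)
n = 400
X = rng.uniform(0.0, 1.0, size=(n, 2))
truth = X[:, 0] + X[:, 1] ** 3
y = truth + 0.1 * rng.normal(size=n)
c, f = backfit_isotone(X, y)
fitted = c + f.sum(axis=1)
rmse = float(np.sqrt(np.mean((fitted - truth) ** 2)))
mono = bool(np.all(np.diff(f[np.argsort(X[:, 0]), 0]) >= -1e-8))
```

The paper's oracle result says each fitted component behaves, to first order, as if the other components were known; the backfitting loop above is the computational device that makes that joint fit tractable.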
Partially Linear Hazard Regression for Multivariate Survival Data
Mimeo Series 2235, 2006
Cited by 9 (3 self)
This article studies estimation of partially linear hazard regression models for multivariate survival data. A profile pseudo–partial likelihood estimation method is proposed under the marginal hazard model framework. The parameters of the linear part are estimated by maximizing a pseudo–partial likelihood profiled over the nonparametric part. This enables us to obtain √n-consistent estimators of the parametric component. Asymptotic normality is obtained for the estimates of both the linear and nonlinear parts. The new technical challenge is that the nonparametric component is indirectly estimated through its integrated derivative function from a local polynomial fit. A fast algorithm implementing the proposed method is presented. Consistent standard error estimates based on sandwich-type ideas are also developed, which facilitates inference for the model. It is shown that the nonparametric component can be estimated as well as if the parametric components were known and the failure times within each subject were independent. Simulations are conducted to demonstrate the performance of the proposed method. A real dataset is analyzed to illustrate the proposed methodology. KEY WORDS: Local pseudo–partial likelihood; Marginal hazard model; Multivariate failure time; Partially linear; Profile pseudo–partial likelihood.
Smooth backfitting in generalized additive models
2007
Cited by 9 (6 self)
Generalized additive models have been popular among statisticians and data analysts in multivariate nonparametric regression with non-Gaussian responses including binary and count data. In this paper, a new likelihood approach for fitting generalized additive models is proposed. It aims to maximize a smoothed likelihood. The additive functions are estimated by solving a system of nonlinear integral equations. An iterative algorithm based on smooth backfitting is developed from the Newton–Kantorovich theorem. Asymptotic properties of the estimator and convergence of the algorithm are discussed. It is shown that our proposal based on local linear fit achieves the same bias and variance as the oracle estimator that uses knowledge of the other components. Numerical comparison with the recently proposed two-stage estimator [Ann. Statist. 32 (2004) 2412–2443] is also made.
A SIMPLE SMOOTH BACKFITTING METHOD FOR ADDITIVE MODELS
2007
Cited by 8 (5 self)
In this paper a new smooth backfitting estimate is proposed for additive regression models. The estimate has the simple structure of Nadaraya–Watson smooth backfitting but at the same time achieves the oracle property of local linear smooth backfitting. Each component is estimated with the same asymptotic accuracy as if the other components were known.
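For intuition, here is classical backfitting with Nadaraya–Watson smoothers, a simplified relative of the smooth backfitting studied in the last two entries (smooth backfitting additionally solves a system of integral equations, which this sketch omits):

```python
import numpy as np

def nw_smooth(x, y, h):
    """Nadaraya-Watson estimate of E[y|x] at every sample point
    (Gaussian kernel, bandwidth h)."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (K * y[None, :]).sum(axis=1) / K.sum(axis=1)

def backfit_nw(X, y, h=0.05, sweeps=15):
    """Classical backfitting for y = c + f_1(x_1) + ... + f_p(x_p):
    cycle over components, smoothing the partial residuals."""
    n, p = X.shape
    f = np.zeros((n, p))
    c = y.mean()
    for _ in range(sweeps):
        for j in range(p):
            r = y - c - f.sum(axis=1) + f[:, j]   # partial residual for f_j
            fj = nw_smooth(X[:, j], r, h)
            f[:, j] = fj - fj.mean()              # centre for identifiability
    return c, f

# Toy additive model: f_1(x) = sin(2*pi*x), f_2(x) = x.
rng = np.random.default_rng(0)
n = 300
X = rng.uniform(0.0, 1.0, size=(n, 2))
truth = np.sin(2.0 * np.pi * X[:, 0]) + X[:, 1]
y = truth + 0.1 * rng.normal(size=n)
c, f = backfit_nw(X, y)
fitted = c + f.sum(axis=1)
rmse = float(np.sqrt(np.mean((fitted - truth) ** 2)))
```

The oracle property claimed in these abstracts means each fitted component attains the same asymptotic accuracy as a one-dimensional smoother applied with the other components known; plain Nadaraya–Watson backfitting as above does not achieve this in general, which is precisely the gap the smooth backfitting estimators close.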