Large Sample Sieve Estimation of Semi-Nonparametric Models
 Handbook of Econometrics
, 2007
Abstract

Cited by 92 (17 self)
Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; semi-nonparametric models are more flexible and robust, but lead to other complications such as introducing infinite-dimensional parameter spaces that may not be compact. The method of sieves provides one way to tackle such complexities by optimizing an empirical criterion function over a sequence of approximating parameter spaces, called sieves, which are significantly less complex than the original parameter space. With different choices of criteria and sieves, the method of sieves is very flexible in estimating complicated econometric models. For example, it can simultaneously estimate the parametric and nonparametric components in semi-nonparametric models with or without constraints. It can easily incorporate prior information, often derived from economic theory, such as monotonicity, convexity, additivity, multiplicity, exclusion and nonnegativity. This chapter describes estimation of semi-nonparametric econometric models via the method of sieves. We present some general results on the large sample properties of the sieve estimates, including consistency of the sieve extremum estimates, convergence rates of the sieve M-estimates, pointwise normality of series estimates of regression functions, and root-n asymptotic normality and efficiency of sieve estimates of smooth functionals of infinite-dimensional parameters. Examples are used to illustrate the general results.
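As a toy illustration of the sieve idea (not the chapter's own code), the sketch below optimizes a least-squares criterion over a finite-dimensional polynomial sieve space rather than the full function space; the name `sieve_ls` and the data-generating process are illustrative assumptions.

```python
import numpy as np

def sieve_ls(x, y, k):
    # Optimize the least-squares criterion over a k-dimensional
    # polynomial sieve space instead of the full (non-compact)
    # function space.
    basis = np.vander(x, k, increasing=True)
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return lambda x0: np.vander(np.atleast_1d(x0), k, increasing=True) @ coef

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 500)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(500)
fhat = sieve_ls(x, y, k=8)  # in theory the sieve dimension k grows slowly with n
```

Letting the sieve dimension grow with the sample size is what distinguishes this from an ordinary parametric fit.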
Efficient semiparametric estimation of quantile treatment effects
, 2003
Abstract

Cited by 46 (5 self)
This paper presents calculations of semiparametric efficiency bounds for quantile treatment effects parameters when selection to treatment is based on observable characteristics. The paper also presents three estimation procedures for these parameters, all of which have two steps: a nonparametric estimation and a computation of the difference between the solutions of two distinct minimization problems. Root-N consistency, asymptotic normality, and achievement of the semiparametric efficiency bound are shown for one of the three estimators. In the final part of the paper, an empirical application to a job training program reveals the importance of heterogeneous treatment effects, showing that for this program the effects are concentrated in the upper quantiles of the earnings distribution.
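A minimal numerical sketch of the two-step logic described here (a plugged-in propensity score followed by two separate check-function minimizations); the function names and the crude grid-search minimizer are illustrative, not the paper's estimators.

```python
import numpy as np

def weighted_quantile_loss(q, y, w, tau):
    # Pinball (check-function) loss; its minimizer over q is the
    # tau-quantile of the w-weighted outcome distribution.
    u = y - q
    return np.mean(w * u * (tau - (u < 0)))

def qte_ipw(y, d, pscore, tau, grid):
    # Step 2 of a two-step procedure: with the propensity score from
    # step 1 plugged in, solve two distinct minimization problems and
    # difference the solutions.
    q1 = min(grid, key=lambda q: weighted_quantile_loss(q, y, d / pscore, tau))
    q0 = min(grid, key=lambda q: weighted_quantile_loss(q, y, (1 - d) / (1 - pscore), tau))
    return q1 - q0

rng = np.random.default_rng(1)
n = 5000
d = rng.integers(0, 2, n).astype(float)
y = d * 1.0 + rng.standard_normal(n)   # constant treatment effect of 1
grid = np.linspace(-4.0, 5.0, 1801)
qte = qte_ipw(y, d, pscore=0.5, tau=0.5, grid=grid)
```

In a real application the propensity score would itself be estimated nonparametrically in step one, which is where the efficiency-bound analysis bites.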
Adjusting for nonignorable dropout using semiparametric nonresponse models (with discussion)
 Journal of the American Statistical Association
, 1999
Abstract

Cited by 39 (10 self)
Consider a study whose design calls for the study subjects to be followed from enrollment (time t = 0) to time t = T, at which point a primary endpoint of interest Y is to be measured. The design of the study also calls for measurements on a vector V(t) of covariates to be made at one or more times t during the interval [0, T). We are interested in making inferences about the marginal mean µ0 of Y when some subjects drop out of the study at random times Q prior to the common fixed end-of-follow-up time T. The purpose of this article is to show how to make inferences about µ0 when the continuous dropout time Q is modeled semiparametrically and no restrictions are placed on the joint distribution of the outcome and other measured variables. In particular, we consider two models for the conditional hazard of dropout given (V̄(T), Y), where V̄(t) denotes the history of the process V(t) through time t, t ∈ [0, T). In the first model, we assume that λQ(t | V̄(T), Y) = λ0(t | V̄(t)) exp(α0 Y), where α0 is a scalar parameter and λ0(t | V̄(t)) is an unrestricted positive function of t and the process V̄(t). When the process V̄(t) is high-dimensional, estimation in this model is not feasible with moderate sample sizes, due to the curse of dimensionality. For such situations, we consider a second model that imposes the additional restriction that λ0(t | V̄(t)) = λ0(t) exp(γ0′ W(t)), where λ0(t) is an unspecified baseline hazard function, W(t) = w(t, V̄(t)), w(·, ·) is a known function that maps (t, V̄(t)) to R^q, and γ0 is a q × 1 unknown parameter vector. When α0 ≠ 0, dropout is nonignorable. On account of identifiability problems, joint estimation of the mean µ0 of Y and the selection-bias parameter α0 may be difficult or impossible. Therefore, we propose regarding the selection-bias parameter α0 as known, rather than estimating it from the data.
We then perform a sensitivity analysis to see how inference about µ0 changes as we vary α0 over a plausible range of values. We apply our approach to the analysis of ACTG 175, an AIDS clinical trial. KEY WORDS: Augmented inverse probability of censoring weighted estimators; Cox proportional hazards model; Identification;
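A schematic sketch of the sensitivity-analysis idea, with a crude one-period selection model exp(α·Y) standing in for the article's hazard model for continuous dropout times; all names and the selection specification are illustrative assumptions, not the article's estimator.

```python
import numpy as np

def sensitivity_curve(y, observed, base_p, alphas):
    # For each fixed value of the selection-bias parameter alpha,
    # reweight completers by 1/pi(alpha) and report the normalized
    # (Hajek-style) weighted mean of Y.
    means = []
    for a in alphas:
        # Nonignorable part exp(alpha * Y); clip to keep weights stable.
        pi = np.clip(base_p * np.exp(a * y), 1e-3, 1.0)
        means.append(float(np.sum(observed * y / pi) / np.sum(observed / pi)))
    return means

rng = np.random.default_rng(2)
y = rng.standard_normal(1000)
observed = (rng.uniform(size=1000) < 0.7).astype(float)
curve = sensitivity_curve(y, observed, base_p=0.7, alphas=[-0.5, 0.0, 0.5])
```

At α = 0 the weights are constant and the estimate reduces to the complete-case mean, matching the article's point that the data cannot identify α on their own: one reports the whole curve over a plausible range rather than a single number.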
Estimating Static Models of Strategic Interactions
, 2006
Abstract

Cited by 29 (5 self)
We propose a method for estimating static games of incomplete information. A static game is a generalization of a discrete choice model, such as a multinomial logit or probit, which allows the actions of a group of agents to be interdependent. Unlike most earlier work, the method we propose is semiparametric and does not require the covariates to lie in a discrete set. While the estimator we propose is quite flexible, we demonstrate that in most cases it can be easily implemented using standard statistical packages such as STATA. We also propose an algorithm for simulating the model which finds all equilibria to the game. As an application of our estimator, we study recommendations for high technology stocks between 1998 and 2003. We find that strategic motives, typically ignored in the empirical literature, appear to be an important consideration in the recommendations submitted by equity analysts.
Understanding Bias in Nonlinear Panel Models: Some Recent Developments
 Advances in Economics and Econometrics, Ninth World Congress
, 2007
Abstract

Cited by 26 (6 self)
The purpose of this paper is to review recently developed bias-adjusted methods of estimation of nonlinear panel data models with fixed effects. For some models, like static linear and logit regressions, there exist fixed-T consistent estimators as n → ∞. Fixed-T consistency is a desirable property because for many panels T is much smaller than n.
Semiparametric Estimation of a Simultaneous Game with Incomplete Information
 Journal of Econometrics
, 2010
Abstract

Cited by 21 (3 self)
We analyze a 2 × 2 simultaneous game. We start by showing that a likelihood function defined over the set of four observable outcomes and all possible variations of the game exists only if players have incomplete information. We assume a general incomplete information structure, where players' beliefs are conditioned on a vector of signals Z observable by the researcher but whose exact distribution is known only to the players. The resulting Bayesian-Nash equilibrium (BNE) is characterized as a vector of conditional moment restrictions. We show how to exploit the information contained in these equilibrium conditions efficiently. The proposal takes the form of a two-step estimator. The first step estimates the unknown equilibrium beliefs using semiparametric restrictions analogous to the population BNE conditions. The second step maximizes a trimmed log-likelihood function using the estimates from the first step as plug-ins for the unknown equilibrium beliefs. The trimming set is an interior subset of the support of Z where the BNE conditions have a unique solution. The resulting estimator of the vector of structural parameters θ is √N-consistent and exploits all information in the model efficiently. We allow Z to ...
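A stripped-down sketch of the two-step plug-in logic for a binary-action game of incomplete information. Everything here is an illustrative assumption: a plain Newton-Raphson logit stands in for both the paper's semiparametric first-step belief estimator and its trimmed second-step likelihood, and the payoff specification is invented for the simulation.

```python
import numpy as np

def fit_logit(X, a, iters=25):
    # Plain Newton-Raphson logit MLE (a stand-in for the paper's more
    # flexible first- and second-step procedures).
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1.0 - p)
        b = b + np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (a - p))
    return b

rng = np.random.default_rng(3)
n = 4000
z = rng.standard_normal(n)
# Data-generating beliefs: player 2 acts with probability sigma2(z).
sigma2 = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * z)))
a2 = (rng.uniform(size=n) < sigma2).astype(float)
# Player 1's choice depends on z and the belief about player 2's action.
p1 = 1.0 / (1.0 + np.exp(-(0.2 + 0.8 * z - 1.5 * sigma2)))
a1 = (rng.uniform(size=n) < p1).astype(float)

X1 = np.column_stack([np.ones(n), z])
belief_hat = 1.0 / (1.0 + np.exp(-(X1 @ fit_logit(X1, a2))))     # step 1: beliefs
theta = fit_logit(np.column_stack([np.ones(n), z, belief_hat]), a1)  # step 2: plug-in
```

The last coefficient of `theta` is the strategic-interaction parameter; the trimming of Z to a region with a unique BNE, central to the paper, is omitted here.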
Efficient Estimation of Semiparametric Conditional Moment Models with Possibly Nonsmooth Residuals
 FORTHCOMING IN JOURNAL OF ECONOMETRICS
, 2008
Abstract

Cited by 21 (4 self)
For semi/nonparametric conditional moment models containing unknown parametric components (θ) and unknown functions of endogenous variables (h), Newey and Powell (2003) and Ai and Chen (2003) propose sieve minimum distance (SMD) estimation of (θ, h) and derive the large sample properties. This paper greatly extends their results by establishing the following: (1) The penalized SMD (PSMD) estimator (θ̂, ĥ) can simultaneously achieve root-n asymptotic normality of θ̂ and the nonparametric optimal convergence rate of ĥ, allowing for models with possibly nonsmooth residuals and/or noncompact infinite-dimensional parameter spaces. (2) A simple weighted bootstrap procedure can consistently estimate the limiting distribution of the PSMD θ̂. (3) The semiparametric efficiency bound results of Ai and Chen (2003) remain valid for conditional models with nonsmooth residuals, and the optimally weighted PSMD estimator achieves the bounds. (4) The profiled optimally weighted PSMD criterion is asymptotically chi-square distributed, which implies an alternative consistent estimation of the confidence region of the efficient PSMD estimator of θ. All the theoretical results are stated in terms of any consistent nonparametric estimator of conditional mean functions. We illustrate our general theories using a partially linear quantile instrumental variables regression, a Monte Carlo study, and an ...
Semiparametric efficiency in GMM models with auxiliary data
 Ann. Statist
, 2008
Abstract

Cited by 12 (2 self)
We study semiparametric efficiency bounds and efficient estimation of parameters defined through general moment restrictions with missing data. Identification relies on auxiliary data containing information about the distribution of the missing variables conditional on proxy variables that are observed in both the primary and the auxiliary database, when such distribution is common to the two data sets. The auxiliary sample can be independent of the primary sample, or can be a subset of it. For both cases, we derive bounds when the probability of missing data given the proxy variables is unknown, or known, or belongs to a correctly specified parametric family. We find that the conditional probability is not ancillary when the two samples are independent. For all cases, we discuss efficient semiparametric estimators. An estimator based on a conditional expectation projection is shown to require milder regularity conditions than one based on inverse probability weighting.
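A linear-regression sketch of the conditional-expectation-projection idea (fit E[Y | W] on the auxiliary sample, then average the fitted values over the primary sample, where Y is missing); the linear specification, the names, and the data are illustrative, and the paper's efficiency analysis is entirely analytic.

```python
import numpy as np

def projection_estimator(w_aux, y_aux, w_primary):
    # Fit E[Y | W] on the auxiliary sample (linear here for
    # simplicity), then average the fitted values over the
    # primary sample, where Y is unobserved.
    Xa = np.column_stack([np.ones(len(w_aux)), w_aux])
    beta, *_ = np.linalg.lstsq(Xa, y_aux, rcond=None)
    Xp = np.column_stack([np.ones(len(w_primary)), w_primary])
    return float(np.mean(Xp @ beta))

rng = np.random.default_rng(4)
w_aux = rng.standard_normal(2000)
y_aux = 1.0 + 2.0 * w_aux + 0.3 * rng.standard_normal(2000)
w_primary = rng.standard_normal(3000) + 0.5   # primary sample, Y missing
mu_hat = projection_estimator(w_aux, y_aux, w_primary)
```

The alternative the abstract mentions would instead weight complete cases by the inverse of an estimated observation probability, which is where the stronger regularity conditions enter.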
Finite Sample Properties of Semiparametric Estimators of Average Treatment Effects
 Unpublished Working Paper
, 2008
Abstract

Cited by 11 (3 self)
We explore the finite sample properties of several semiparametric estimators of average treatment effects, including propensity score reweighting, matching, double robust, and control function estimators. When there is good overlap in the distribution of propensity scores for treatment and control units, reweighting estimators are preferred on bias grounds and attain the semiparametric efficiency bound even for samples of size 100. Pair matching exhibits similarly good performance in terms of bias, but has notably higher variance. Local linear and ridge matching are competitive with reweighting in terms of bias and variance, but only once n = 500. Nearest-neighbor, kernel, and blocking matching are not competitive. When overlap is close to failing, none of the estimators examined perform well and √n asymptotics may be a poor guide to finite sample performance. Trimming rules, commonly used in the face of problems with overlap, are effective only in settings with homogeneous treatment effects. JEL Classification: C14, C21, C52.
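A minimal sketch of the reweighting estimator the abstract ranks first under good overlap (normalized inverse-propensity-score weighting); the names and the simulated design are illustrative, and the propensity score is taken as given rather than estimated.

```python
import numpy as np

def ate_reweight(y, d, pscore):
    # Normalized inverse-propensity-score reweighting estimate of the
    # average treatment effect: weighted treated mean minus weighted
    # control mean.
    w1 = d / pscore
    w0 = (1.0 - d) / (1.0 - pscore)
    return float(np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0))

rng = np.random.default_rng(5)
n = 5000
x = rng.standard_normal(n)
pscore = 1.0 / (1.0 + np.exp(-x))          # good overlap by construction
d = (rng.uniform(size=n) < pscore).astype(float)
y = 2.0 * d + x + rng.standard_normal(n)   # true ATE = 2
ate = ate_reweight(y, d, pscore)
```

When overlap fails, `pscore` approaches 0 or 1 for some units and the weights explode, which is the mechanism behind the abstract's warning about all estimators performing poorly in that regime.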