Results 1–10 of 31
Information-theoretic asymptotics of Bayes methods
 IEEE Transactions on Information Theory
, 1990
"... AbstractIn the absence of knowledge of the true density function, Bayesian models take the joint density function for a sequence of n random variables to be an average of densities with respect to a prior. We examine the relative entropy distance D,, between the true density and the Bayesian densit ..."
Abstract

Cited by 107 (10 self)
In the absence of knowledge of the true density function, Bayesian models take the joint density function for a sequence of n random variables to be an average of densities with respect to a prior. We examine the relative entropy distance D_n between the true density and the Bayesian density and show that the asymptotic distance is (d/2) log n + c, where d is the dimension of the parameter vector. Therefore, the relative entropy rate D_n/n converges to zero at rate (log n)/n. The constant c, which we explicitly identify, depends only on the prior density function and the Fisher information matrix evaluated at the true parameter value. Consequences are given for density estimation, universal data compression, composite hypothesis testing, and stock-market portfolio selection.
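The (d/2) log n + c asymptotics can be checked numerically. The sketch below is my own illustration, not code from the paper: it computes D_n exactly for a Bernoulli(0.3) model with a uniform prior (so d = 1), summing over the success count. By the paper's result, D_n − (1/2) log n should settle toward a constant determined by the Fisher information and the prior.

```python
# Numerical check of the (d/2) log n + c asymptotics for a Bernoulli model
# (illustrative sketch: Bernoulli(theta0) data, uniform prior on theta, d = 1).
import numpy as np
from scipy.special import gammaln

def bayes_relative_entropy(n, theta0=0.3):
    """D_n = E[log p_theta0(X^n) / m(X^n)] for the Bernoulli model with a
    uniform prior, computed exactly by summing over the success count k."""
    k = np.arange(n + 1)
    # log P(k successes in n trials) under the true parameter
    log_binom = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    log_pk = log_binom + k * np.log(theta0) + (n - k) * np.log1p(-theta0)
    # Bayes marginal of one particular sequence with k successes:
    # m(x^n) = integral theta^k (1-theta)^(n-k) dtheta = Beta(k+1, n-k+1)
    log_m = gammaln(k + 1) + gammaln(n - k + 1) - gammaln(n + 2)
    # log-likelihood of that sequence under the true parameter
    log_p_seq = k * np.log(theta0) + (n - k) * np.log1p(-theta0)
    return np.sum(np.exp(log_pk) * (log_p_seq - log_m))

for n in (100, 400, 1600):
    print(n, bayes_relative_entropy(n) - 0.5 * np.log(n))
```

Each fourfold increase in n should add roughly (1/2) log 4 to D_n, while D_n − (1/2) log n converges to the constant c.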
Estimation When a Parameter Is on a Boundary
 Econometrica
, 1999
"... This paper establishes the asymptotic distribution of an extremum estimator when the true parameter lies on the boundary of the parameter space. The boundary may be linear, curved, and�or kinked. Typically the asymptotic distribution is a function of a multivariate normal distribution in models with ..."
Abstract

Cited by 28 (4 self)
This paper establishes the asymptotic distribution of an extremum estimator when the true parameter lies on the boundary of the parameter space. The boundary may be linear, curved, and/or kinked. Typically the asymptotic distribution is a function of a multivariate normal distribution in models without stochastic trends and a function of a multivariate Brownian motion in models with stochastic trends. The results apply to a wide variety of estimators and models. Examples treated in the paper are: (i) quasi-ML estimation of a random coefficients regression model with some coefficient variances equal to zero and (ii) LS estimation of an augmented Dickey-Fuller regression with unit root and time trend parameters on the boundary of the parameter space.
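The boundary effect is easy to see in the simplest case. The sketch below (an illustration of the phenomenon, not the paper's estimators) estimates the mean of N(theta, 1) data under the constraint theta >= 0 when the true theta is 0: the constrained MLE is max(0, sample mean), so roughly half of the replications land exactly on the boundary and the limit law is a censored normal rather than a normal.

```python
# When the true parameter sits on the boundary, the estimator's limit law
# is no longer normal. Constrained MLE of the mean of N(theta, 1) data
# with theta >= 0 and true theta = 0: about half the estimates pile up at 0.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 2000
estimates = np.maximum(0.0, rng.normal(0.0, 1.0, size=(reps, n)).mean(axis=1))
frac_on_boundary = np.mean(estimates == 0.0)
print(frac_on_boundary)  # close to 1/2
```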
Large Sample Theory for Semiparametric Regression Models with Two-Phase, Outcome Dependent Sampling
, 2000
"... Outcomedependent, twophase sampling designs can dramatically reduce the costs of observational studies by judicious selection of the most informative subjects for purposes of detailed covariate measurement. Here we derive asymptotic information bounds and the form of the efficient score and inuenc ..."
Abstract

Cited by 20 (9 self)
Outcome-dependent, two-phase sampling designs can dramatically reduce the costs of observational studies by judicious selection of the most informative subjects for purposes of detailed covariate measurement. Here we derive asymptotic information bounds and the form of the efficient score and influence functions for the semiparametric regression models studied by Lawless, Kalbfleisch, and Wild (1999) under two-phase sampling designs. We relate the efficient score to the least-favorable parametric submodel by use of formal calculations suggested by Newey (1994). We then proceed to show that the maximum likelihood estimators proposed by Lawless, Kalbfleisch, and Wild (1999) for both the parametric and nonparametric parts of the model are asymptotically normal and efficient, and that the efficient influence function for the parametric part agrees with the more general calculations of Robins, Hsieh, and Newey (1995).
Quantile regression under random censoring
 Journal of Econometrics 109
"... Censored regression models have received a great deal of attention in both the theoretical and applied econometric literature. Most of the existing estimation procedures for either cross sectional or panel data models are designed only for models with ¯xed censoring. In this paper, a new procedure f ..."
Abstract

Cited by 12 (4 self)
Censored regression models have received a great deal of attention in both the theoretical and applied econometric literature. Most of the existing estimation procedures for either cross-sectional or panel data models are designed only for models with fixed censoring. In this paper, a new procedure for adapting these estimators designed for fixed censoring to models with random censorship is proposed. This procedure is then applied to the CLAD and quantile estimators of Powell (1984, 1986a) to obtain an estimator of the regression coefficients under a mild conditional quantile restriction on the error term that is applicable to samples exhibiting fixed or random censoring. The resulting estimator is shown to have desirable asymptotic properties, and performs well in a small-scale simulation study.
Estimation, Inference, and Specification Testing for Possibly Misspecified Quantile Regression
 Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later, 107–132
, 2003
"... Abstract: To date the literature on quantile regression and least absolute deviation regression has assumed either explicitly or implicitly that the conditional quantile regression model is correctly specified. When the model is misspecified, confidence intervals and hypothesis tests based on the co ..."
Abstract

Cited by 9 (1 self)
To date the literature on quantile regression and least absolute deviation regression has assumed either explicitly or implicitly that the conditional quantile regression model is correctly specified. When the model is misspecified, confidence intervals and hypothesis tests based on the conventional covariance matrix are invalid. Although misspecification is a generic phenomenon and correct specification is rare in reality, there has to date been no theory proposed for inference when a conditional quantile model may be misspecified. In this paper, we allow for possible misspecification of a linear conditional quantile regression model. We obtain consistency of the quantile estimator for certain “pseudo-true” parameter values and asymptotic normality of the quantile estimator when the model is misspecified. In this case, the asymptotic covariance matrix has a novel form, not seen in earlier work, and we provide a consistent estimator of the asymptotic covariance matrix. We also propose a quick and simple test for conditional quantile misspecification based on the quantile residuals.
Asymptotics for Minimisers of Convex Processes
, 1993
"... . By means of two simple convexity arguments we are able to develop a general method for proving consistency and asymptotic normality of estimators that are defined by minimisation of convex criterion functions. This method is then applied to a fair range of different statistical estimation problems ..."
Abstract

Cited by 8 (0 self)
By means of two simple convexity arguments we are able to develop a general method for proving consistency and asymptotic normality of estimators that are defined by minimisation of convex criterion functions. This method is then applied to a fair range of different statistical estimation problems, including Cox regression, logistic and Poisson regression, least absolute deviation regression outside model conditions, and pseudo-likelihood estimation for Markov chains. Our paper has two aims. The first is to exposit the method itself, which in many cases, under reasonable regularity conditions, leads to new proofs that are simpler than the traditional proofs. Our second aim is to exploit the method to its limits for logistic regression and Cox regression, where we seek asymptotic results under regularity conditions that are as weak as possible. For Cox regression in particular we are able to weaken previously published regularity conditions substantially. Key words: argmin lemma approximation...
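One of the listed applications, least absolute deviation (LAD) regression, is a concrete instance of minimising a convex criterion. The sketch below (my own illustration, not code from the paper) solves the LAD problem exactly by recasting the convex objective as a linear program: minimise sum(u + v) subject to Xb + u − v = y with u, v >= 0.

```python
# Least absolute deviation regression as minimisation of a convex criterion,
# solved exactly via its standard linear-programming reformulation.
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    n, p = X.shape
    # decision variables: [b (free), u >= 0, v >= 0]
    c = np.concatenate([np.zeros(p), np.ones(2 * n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])  # X b + u - v = y
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(-2, 2, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.laplace(0.0, 1.0, n)  # heavy-tailed errors
beta_hat = lad_fit(X, y)
print(beta_hat)  # close to [1, 2]
```

The criterion sum |y_i − x_i'b| is convex but not differentiable at zero residuals, which is exactly the setting where the convexity arguments of the paper replace classical smoothness-based proofs.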
Estimating and testing the order of a model
, 2002
"... This paper deals with order identification for nested models in the i.i.d. framework. We study the asymptotic efficiency of two generalized likelihood ratio tests of the order. They are based on two estimators which are proved to be strongly consistent. A version of Stein’s lemma yields an optimal u ..."
Abstract

Cited by 7 (1 self)
This paper deals with order identification for nested models in the i.i.d. framework. We study the asymptotic efficiency of two generalized likelihood ratio tests of the order. They are based on two estimators which are proved to be strongly consistent. A version of Stein’s lemma yields an optimal underestimation error exponent. The lemma also implies that the overestimation error exponent is necessarily trivial. Our tests admit nontrivial underestimation error exponents. The optimal underestimation error exponent is achieved in some situations. The overestimation error can decay exponentially with respect to a positive power of the number of observations. These results are proved under mild assumptions by relating the underestimation (resp. overestimation) error to large (resp. moderate) deviations of the log-likelihood process. In particular, it is not necessary that the classical Cramér condition be satisfied; namely, the log-densities are not required to admit every exponential moment. Three benchmark examples with specific difficulties (location mixture of normal distributions, abrupt changes and various regressions) are detailed so as to illustrate the generality of our results.
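The generic shape of such a procedure can be sketched on a toy problem. The code below is an illustration of order selection by generalized likelihood ratio tests, not the paper's construction: nested polynomial regressions are fit by OLS, and the selected order is the first one at which the likelihood gain from adding a term falls below a slowly growing threshold (2 log n here, an arbitrary choice for the demo).

```python
# Toy order estimation for nested models via generalized likelihood ratio
# tests: stop at the first order whose extension is not worth the threshold.
import numpy as np

def select_order(x, y, max_order=4, threshold=None):
    n = len(y)
    if threshold is None:
        threshold = 2.0 * np.log(n)  # slowly growing penalty (demo choice)
    # Gaussian log-likelihood (up to constants) of order-k polynomial fits
    loglik = []
    for k in range(max_order + 1):
        X = np.vander(x, k + 1, increasing=True)
        resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        loglik.append(-0.5 * n * np.log(np.mean(resid ** 2)))
    for k in range(max_order):
        # generalized likelihood ratio statistic for order k vs k + 1
        if 2.0 * (loglik[k + 1] - loglik[k]) < threshold:
            return k
    return max_order

rng = np.random.default_rng(0)
n = 400
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x + 2.0 * x ** 2 + rng.normal(0, 1, n)  # true order 2
print(select_order(x, y))
```

The threshold governs the trade-off the abstract describes: too small and the order is overestimated, too large and the underestimation error grows.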
M-Estimators Converging to a Stable Limit
"... Introduction. We discuss the convergence of Mestimators to a stable (possibly normal) limit distribution. Huber (1964) introduced Mestimators as a way to obtain more robust estimators. Let (S; S; P ) be a probability space and let fX i g 1 i=1 be a sequence of i.i.d.r.v.'s with values in S. Le ..."
Abstract

Cited by 6 (5 self)
Introduction. We discuss the convergence of M-estimators to a stable (possibly normal) limit distribution. Huber (1964) introduced M-estimators as a way to obtain more robust estimators. Let (S, 𝒮, P) be a probability space and let {X_i}, i = 1, 2, ..., be a sequence of i.i.d. random variables with values in S. Let X be a copy of X_1. Let Θ be a subset of R^d. Let g : S × Θ → R be a function such that g(·, θ) : S → R is measurable for each θ ∈ Θ. Suppose that we want to estimate a parameter θ_0 ∈ Θ characterized by E[g(X, θ) − g(X, θ_0)] ...
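A classical instance of the setup above is Huber's robust location estimator, with g(x, θ) = ρ(x − θ) for the Huber loss ρ. The sketch below (an illustrative choice, not code from the paper) minimises the empirical criterion and shows the robustness that motivated Huber (1964): unlike the sample mean, the estimate is barely moved by gross outliers.

```python
# Huber M-estimator of location: theta_hat minimises sum_i rho(X_i - theta)
# for the Huber loss rho (quadratic near zero, linear in the tails).
import numpy as np
from scipy.optimize import minimize_scalar

def huber_rho(r, k=1.345):
    a = np.abs(r)
    return np.where(a <= k, 0.5 * r ** 2, k * a - 0.5 * k ** 2)

def huber_location(x, k=1.345):
    res = minimize_scalar(lambda t: huber_rho(x - t, k).sum(),
                          bounds=(x.min(), x.max()), method="bounded")
    return res.x

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 200),
                    np.full(20, 50.0)])  # 10% gross outliers at 50
print(huber_location(x), x.mean())  # Huber stays near 0; the mean is dragged up
```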