Results 1–10 of 11
On the uniform asymptotic validity of subsampling and the bootstrap
, 2010
Abstract

Cited by 7 (4 self)
This paper provides conditions under which subsampling and the bootstrap can be used to construct estimators of the quantiles of the distribution of a root that behave well uniformly over a large class of distributions P. These results are then applied (i) to construct confidence regions that behave well uniformly over P in the sense that the coverage probability tends to at least the nominal level uniformly over P and (ii) to construct tests that behave well uniformly over P in the sense that the size tends to no greater than the nominal level uniformly over P. Without these stronger notions of convergence, the asymptotic approximations to the coverage probability or size may be poor, even in very large samples. Specific applications include the multivariate mean, testing moment inequalities, multiple testing, the empirical process and U-statistics.
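As a rough illustration of the root-quantile construction the abstract describes, here is a minimal numpy sketch: a nonparametric bootstrap estimate of a quantile of the root sqrt(n)·|mean(x*) − mean(x)|, used to form a symmetric confidence interval for a (univariate) mean. The function name and the choice of a univariate mean are illustrative; the paper's uniform-validity conditions are not addressed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_root_quantile(x, level=0.95, n_boot=2000, rng=rng):
    """Bootstrap estimate of the `level` quantile of the root
    sqrt(n) * |mean(x*) - mean(x)| (x* resampled with replacement)."""
    n = len(x)
    mean_hat = x.mean()
    roots = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=n, replace=True)  # bootstrap resample
        roots[b] = np.sqrt(n) * abs(xb.mean() - mean_hat)
    return np.quantile(roots, level)

x = rng.normal(loc=1.0, scale=2.0, size=400)
q = bootstrap_root_quantile(x)
n = len(x)
# Symmetric confidence interval for the mean based on the root quantile
ci = (x.mean() - q / np.sqrt(n), x.mean() + q / np.sqrt(n))
```

Replacing the full-sample bootstrap resample with subsamples of size b < n gives the subsampling variant the paper studies alongside the bootstrap.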
Model Uncertainty, Learning, and the Gains from Coordination, American Economic Review
, 1991
"... Risk and resampling under model ..."
A Plug-In Averaging Estimator for Regressions with Heteroskedastic Errors
, 2011
Abstract

Cited by 3 (0 self)
This paper proposes a novel model averaging estimator for the linear regression model with heteroskedastic errors. Unlike model selection, which picks a single model among the candidate models, model averaging incorporates all the information by averaging over all potential models. The two main questions of concern are: (1) How do we assign the weights for candidate models? (2) What is the asymptotic distribution of the averaging estimator and how do we make inference? This paper seeks to tackle these two problems from a frequentist view. First, we derive the asymptotic distribution of the averaging estimator with fixed weights in a local asymptotic framework. The optimal weights are obtained by minimizing the asymptotic mean squared error (AMSE) of the averaging estimator. Second, we propose a plug-in averaging estimator which selects the weights by minimizing the sample analog of the AMSE. The asymptotic distribution of the proposed estimator is derived. Third, we show that the confidence intervals based on normal approximations suffer from size distortions. We suggest a plug-in method to construct the confidence interval which has good finite-sample coverage probability. The simulation results show that the plug-in averaging estimator performs favorably compared with other existing model selection and model averaging methods. As an empirical illustration, the proposed methodology is applied to estimate the effect of the student-teacher ratio on student achievement. We find that the insignificance of the student-teacher ratio variable in previous literature could potentially be explained by the neglect of model uncertainty.
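To make the weight-selection idea concrete, here is a stylized numpy sketch of frequentist model averaging over two nested OLS models. Instead of the paper's AMSE plug-in criterion, the sketch minimizes a Mallows-type criterion (fit plus a penalty proportional to the weighted parameter count) over a grid of weights; the data-generating process and all variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.8 * x1 + 0.1 * x2 + rng.normal(size=n)  # x2 has a weak effect

X_small = np.column_stack([np.ones(n), x1])        # restricted model (2 params)
X_full = np.column_stack([np.ones(n), x1, x2])     # full model (3 params)

b_small, *_ = np.linalg.lstsq(X_small, y, rcond=None)
b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)
fit_small = X_small @ b_small
fit_full = X_full @ b_full

resid_full = y - fit_full
sigma2 = resid_full @ resid_full / (n - X_full.shape[1])  # error variance estimate

# Mallows-type criterion over a grid of weights w on the small model:
# squared fit error plus 2 * sigma2 * (weighted number of parameters).
ws = np.linspace(0.0, 1.0, 101)
crit = [np.sum((y - (w * fit_small + (1 - w) * fit_full)) ** 2)
        + 2.0 * sigma2 * (w * 2 + (1 - w) * 3) for w in ws]
w_star = ws[int(np.argmin(crit))]
avg_fit = w_star * fit_small + (1 - w_star) * fit_full   # averaged prediction
```

The grid search stands in for the paper's closed-form or quadratic-programming weight choice; the point is that the averaging weight is data-driven rather than a 0/1 selection decision.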
Valid post-selection inference
, 2012
Abstract

Cited by 2 (0 self)
It is common practice in statistical data analysis to perform data-driven variable selection and derive statistical inference from the resulting model. Such inference enjoys none of the guarantees that classical statistical theory provides for tests and confidence intervals when the model has been chosen a priori. We propose to produce valid “post-selection inference” by reducing the problem to one of simultaneous inference and hence suitably widening conventional confidence and retention intervals. Simultaneity is required for all linear functions that arise as coefficient estimates in all submodels. By purchasing “simultaneity insurance” for all possible submodels, the resulting post-selection inference is rendered universally valid under all possible model selection procedures. This inference is therefore generally conservative for particular selection procedures, but it is always less conservative than full Scheffé protection. Importantly, it does not depend on the truth of the selected submodel, and hence it produces valid inference even in wrong models. We describe the structure of the simultaneous inference problem and give some asymptotic results.
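The widening step can be sketched numerically: enumerate every coefficient functional in every submodel, and compute by simulation the quantile of the maximum absolute (standardized) estimate under Gaussian errors. That quantile replaces the usual pointwise normal critical value. This is a minimal sketch assuming a small fixed design and known unit error variance; it is not the paper's general construction.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, p = 100, 3
X = rng.normal(size=(n, p))  # illustrative fixed design

# Collect the linear functionals mapping y to each coefficient estimate
# in each nonempty submodel, normalized to unit length.
functionals = []
for k in range(1, p + 1):
    for M in combinations(range(p), k):
        XM = X[:, list(M)]
        H = np.linalg.inv(XM.T @ XM) @ XM.T   # rows: coefficient functionals
        for row in H:
            functionals.append(row / np.linalg.norm(row))
L = np.array(functionals)

# Simultaneous constant: 0.95 quantile of max_j |l_j' eps| over all
# submodel coefficients, for standard normal errors eps.
draws = L @ rng.normal(size=(n, 5000))        # each column: one error draw
K = np.quantile(np.abs(draws).max(axis=0), 0.95)
```

Because K is a quantile of a maximum over many standardized estimates, it exceeds the pointwise value 1.96, which is exactly the widening the abstract refers to.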
Reconciling Model Selection and Prediction
, 2009
Abstract

Cited by 1 (0 self)
It is known that there is a dichotomy in the performance of model selectors. Those that are consistent (having the “oracle property”) do not achieve the asymptotic minimax rate for prediction error. We look at this phenomenon closely, and argue that the set of parameters on which this dichotomy occurs is extreme, even pathological, and should not be considered when evaluating model selectors. We characterize this set, and show that, when such parameters are dismissed from consideration, consistency and asymptotic minimaxity can be attained simultaneously.
IMS Lecture Notes–Monograph Series Time Series and Related Topics
, 2007
Abstract
The distribution of model averaging estimators and an impossibility result regarding its estimation
IMS Lecture Notes–Monograph Series
, 2006
Abstract
The distribution of a linear predictor after model selection: Unconditional finite-sample distributions and asymptotic approximations
On the Distribution of the Adaptive LASSO
, 2007
Abstract
We study the distribution of the adaptive LASSO estimator (Zou (2006)) in finite samples as well as in the large-sample limit. The large-sample distributions are derived both for the case where the adaptive LASSO estimator is tuned to perform conservative model selection and for the case where tuning results in consistent model selection. We show that the finite-sample as well as the large-sample distributions are typically highly non-normal, regardless of the choice of the tuning parameter. The uniform convergence rate is also obtained, and is shown to be slower than n^{-1/2} in case the estimator is tuned to perform consistent model selection. In particular, these results question the statistical relevance of the ‘oracle’ property of the adaptive LASSO estimator established in Zou (2006). Moreover, we also provide an impossibility result regarding the estimation of the distribution function of the adaptive LASSO estimator.
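For readers unfamiliar with the estimator under study, the adaptive LASSO has a simple closed form when the design is orthonormal (X'X = I): coordinatewise soft-thresholding with data-dependent weights w_j = |b_ols_j|^(-gamma). The sketch below implements only this special case; the function name and the numeric inputs are illustrative, and the weights assume nonzero OLS coefficients.

```python
import numpy as np

def adaptive_lasso_orthonormal(b_ols, lam, gamma=1.0):
    """Adaptive LASSO under an orthonormal design X'X = I:
    soft-threshold each OLS coefficient at lam * |b_ols_j|**(-gamma),
    so small coefficients face large thresholds and vice versa."""
    w = np.abs(b_ols) ** (-gamma)          # requires b_ols_j != 0
    return np.sign(b_ols) * np.maximum(np.abs(b_ols) - lam * w, 0.0)

b_ols = np.array([3.0, 0.1, -2.0])
b_al = adaptive_lasso_orthonormal(b_ols, lam=0.5)
# the small middle coefficient is shrunk exactly to zero,
# while the large ones are only mildly shrunk
```

The hard zero at small coefficients is the mechanism behind both the oracle property and the non-normal, non-uniform behavior the abstract highlights.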
Bonferroni-Based Size-Correction for Nonstandard Testing Problems
, 2011
Abstract
We develop powerful new size-correction procedures for nonstandard hypothesis testing environments in which the asymptotic distribution of a test statistic is discontinuous in a parameter under the null hypothesis. Examples of this form of testing problem are pervasive in econometrics and complicate inference by making size difficult to control. This paper introduces two new size-correction methods that correspond to two different general hypothesis testing frameworks. They are designed to maximize the power of the underlying test while maintaining correct asymptotic size uniformly over the parameter space specified by the null hypothesis. The new methods involve the construction of critical values that make use of reasoning derived from Bonferroni bounds. The first new method provides a complementary alternative to existing size-correction methods, entailing substantially higher power for many testing problems. The second new method provides the first available asymptotically size-correct testing methodology for the general class of testing problems to which it applies. This class includes hypothesis tests on parameters after consistent model selection and tests on superefficient/hard-thresholding estimators. We detail the construction and performance of the new tests in three specific examples: testing after conservative model selection, testing when a nuisance parameter may be on a boundary and testing after consistent model selection.
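A stylized numpy sketch of the Bonferroni reasoning, in the familiar moment-inequality setting rather than the paper's general frameworks: the null quantile of T = max(Z1, Z2 − h) depends on an unknown slackness h ≥ 0, so one spends δ of the size budget on a one-sided confidence bound for h and evaluates the critical value at the adjusted level 1 − α + δ at the least favorable h in that set. The value of h_hat is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def null_quantile(h, level, n_sim=50000):
    """Simulated `level`-quantile of T = max(Z1, Z2 - h) with Z1, Z2
    iid N(0,1); h >= 0 is the slackness of the second inequality, and
    smaller h yields a larger (less favorable) quantile."""
    z = rng.normal(size=(n_sim, 2))
    t = np.maximum(z[:, 0], z[:, 1] - h)
    return np.quantile(t, level)

alpha, delta = 0.05, 0.01
z_1md = 2.326                        # N(0,1) quantile at 1 - delta = 0.99
h_hat = 1.4                          # illustrative estimate of the slackness

# Step 1: (1 - delta) one-sided lower confidence bound for h, floored at 0.
h_lo = max(0.0, h_hat - z_1md)
# Step 2: critical value at adjusted level 1 - alpha + delta, evaluated at
# the least favorable h in the confidence set.
cv_bonf = null_quantile(h_lo, 1 - alpha + delta)
```

By the Bonferroni bound, the probability that either the confidence set misses h or T exceeds the adjusted critical value is at most δ + (α − δ) = α, which is the size-control logic the abstract describes; the paper's contribution is refining such bounds to recover power.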