Results 1-10 of 28
M-estimation of multivariate regressions
Journal of the American Statistical Association
, 1990
Nonparametric Quantile Estimations For Dynamic Smooth Coefficient Models
, 2006
Abstract

Cited by 4 (0 self)
In this paper, quantile regression methods are suggested for a class of smooth coefficient time series models. We employ a local linear fitting scheme to estimate the smooth coefficients in the quantile framework. The programming involved in the local linear quantile estimation is relatively simple, and existing programs for the linear quantile model can be modified for it with little effort. We derive the local Bahadur representation of the local linear estimator for α-mixing time series and establish the asymptotic normality of the resulting estimator. Also, a bandwidth selector based on the nonparametric version of the Akaike information criterion is proposed, together with a consistent estimate of the asymptotic covariance matrix. The asymptotic behavior of the estimator at the boundaries is examined. A comparison of the local linear quantile estimator with the local constant estimator is presented. A simulation study is carried out to illustrate the performance of the estimates. An empirical application of the model to exchange rate time series data and the well-known Boston house price data further demonstrates the potential of the proposed modeling procedures. KEY WORDS: Bandwidth selection; boundary effect; covariance estimation; kernel smoothing methods; nonlinear time series; quantile regression; value-at-risk; varying coefficients.
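The local linear fitting scheme under the check loss that this abstract describes can be sketched minimally as follows (an illustration only, assuming a Gaussian kernel, a fixed bandwidth, and a generic optimizer rather than the paper's AIC-based bandwidth selector; all function names and data here are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    # rho_tau(u) = u * (tau - 1{u < 0}), the quantile "check" loss
    return u * (tau - (u < 0))

def local_linear_quantile(x, y, x0, tau=0.5, h=0.2):
    # Locally weighted linear fit under the check loss: minimize
    # sum_i K((x_i - x0)/h) * rho_tau(y_i - a - b*(x_i - x0)) over (a, b).
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)  # Gaussian kernel weights
    def objective(theta):
        a, b = theta
        return np.sum(w * check_loss(y - a - b * (x - x0), tau))
    res = minimize(objective, [np.median(y), 0.0], method="Nelder-Mead")
    return res.x[0]  # intercept = estimated tau-th conditional quantile at x0

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = x ** 2 + 0.2 * rng.standard_normal(500)
q_hat = local_linear_quantile(x, y, x0=0.5, tau=0.5)  # true median at 0.5 is 0.25
```

Minimizing the weighted check loss at each point x0 traces out the conditional quantile curve; the paper's contribution is the asymptotic theory and the bandwidth selection for this kind of estimator, not the fit itself.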
Econometric model selection with more variables than observations. Working paper
Abstract

Cited by 4 (2 self)
Preliminary version. Several algorithms for indicator saturation are compared and found to have low power when there are multiple breaks. A new algorithm is introduced, based on repeated application of an automatic model selection procedure (Autometrics; see Doornik, 2009) built on the general-to-specific approach. The new algorithm can also be applied in the general case of more variables than observations. The performance of this new algorithm is investigated through Monte Carlo analysis. The relationship between indicator saturation and robust estimation is explored. Building on the results of Johansen and Nielsen (2009), the asymptotic distribution of multi-step indicator saturation is derived, as well as the efficiency of the two-step variance. Next, the asymptotic distribution of multi-step robust estimation using two different critical values (a low one at first) is derived. The asymptotic distribution of the fully iterated case is conjectured, as is the asymptotic distribution of reweighted least squares based on least trimmed squares (Rousseeuw, 1984), called RLTS here. This allows for a comparison of the efficiency of indicator saturation with RLTS. Finally, the performance of several robust estimators and of the new approach is studied in the presence of a structural break. When there are many irrelevant regressors in the model, the robust estimators break down, while the new algorithm is largely unaffected.
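The core impulse-indicator-saturation idea behind this abstract (add a dummy for every observation; since all dummies at once would outnumber the observations, add them in blocks) can be sketched as follows. This is a crude split-half version with a fixed t-ratio cutoff, not Autometrics; all names are hypothetical:

```python
import numpy as np

def impulse_saturation(y, X, crit=2.5):
    # Split-half impulse-indicator saturation: one dummy per observation
    # exceeds the sample size, so saturate half the sample at a time and
    # retain the indicators whose |t-ratio| exceeds `crit`.
    n, k = X.shape
    retained = []
    for block in (np.arange(n // 2), np.arange(n // 2, n)):
        D = np.zeros((n, len(block)))
        D[block, np.arange(len(block))] = 1.0  # impulse dummies for this block
        Z = np.hstack([X, D])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        sigma2 = resid @ resid / (n - Z.shape[1])
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Z.T @ Z)))
        t = beta / se
        retained += [block[i] for i in range(len(block)) if abs(t[k + i]) > crit]
    return sorted(retained)

rng = np.random.default_rng(0)
n = 40
x = rng.uniform(-1, 1, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + 0.3 * rng.standard_normal(n)
y[10] += 8.0  # a single large outlier / break at observation 10
retained = impulse_saturation(y, X)
```

A retained indicator flags an observation the linear model cannot explain, which is how saturation connects to robust estimation in the abstract: observations with retained dummies are effectively downweighted to zero.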
On Computing the Least Quantile of Squares Estimate
Abstract

Cited by 2 (0 self)
In linear regression, an important role is played by the least quantile of squares (LQS) estimate, which involves the minimization of the qth smallest squared residual for a given set of data. This function is nondifferentiable and nonconvex and may have a large number of local minima. This paper is mainly concerned with the efficient calculation of the global solution, and some different approaches are considered. Key words: linear regression, LQS estimate, Chebyshev approximation. AMS subject classifications: 62J05, 65D10. PII: S1064827595283768. 1. Introduction. The problem of fitting a linear model to data usually involves the solution of an overdetermined system of linear equations, which can be expressed as (1.1) Ax ≈ b, where x ∈ ...
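The LQS criterion named in this abstract, the qth smallest squared residual, is simple to state in code, and its nonconvexity is exactly why a global search is needed. Below is a minimal sketch using a random elemental-subset search (a PROGRESS-style heuristic, not one of the paper's algorithms; all names are hypothetical):

```python
import numpy as np

def lqs_objective(beta, X, y, q):
    # The LQS criterion: the q-th smallest squared residual.
    # Nondifferentiable and nonconvex in beta, with many local minima.
    r2 = (y - X @ beta) ** 2
    return np.partition(r2, q - 1)[q - 1]

def lqs_random_search(X, y, q, n_trials=2000, seed=0):
    # Crude global search: solve exact-fit p-point subsets at random and
    # keep the candidate with the smallest LQS objective.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best_beta, best_val = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=p, replace=False)
        try:
            beta = np.linalg.solve(X[idx], y[idx])
        except np.linalg.LinAlgError:
            continue  # singular subset, skip
        val = lqs_objective(beta, X, y, q)
        if val < best_val:
            best_beta, best_val = beta, val
    return best_beta, best_val

rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
y = 2.0 + 3.0 * X[:, 1] + 0.1 * rng.standard_normal(n)
y[:20] += 10.0  # 20% gross outliers that would ruin ordinary least squares
beta_hat, _ = lqs_random_search(X, y, q=n // 2)  # q = n/2: least median of squares
```

With q = n/2 this is the least median of squares special case; the heuristic recovers the clean fit despite 20% contamination, whereas OLS would not.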
Finite sample distributions of regression quantiles
, 2010
Abstract

Cited by 2 (2 self)
The finite sample distributions of the regression quantile and of the extreme regression quantile are derived for a broad class of distributions of the model errors, even for the non-i.i.d. case. The distributions are analogous to the corresponding distributions in the location model; this again confirms that the regression quantile is a straightforward extension of the sample quantile. As an application, the tail behavior of the regression quantile is studied.
Asymptotic distribution of regression M-estimators
, 1996
Abstract

Cited by 1 (1 self)
... As a particular case, we consider ρ(x) = |x|^p. In this case, we show that if E[‖Z‖^p + ‖Z‖²] < ∞, either p > 1/2 or m ≥ 2, and some other regularity conditions hold, then n^{1/2}(θ̂_n − θ_0) converges in distribution to a normal limit. For m = 1 and p = 1/2, n^{1/2}(log n)^{−1/2}(θ̂_n − θ_0) converges in distribution to a normal limit. For m = 1 and 1/2 > p > 0, n ...
Appendix E for "Generalized Method of Moments with Tail Trimming"
, 2010
Abstract

Cited by 1 (1 self)
We develop a GMM estimator for stationary heavy-tailed data by trimming an asymptotically vanishing sample portion of the estimating equations. Trimming ensures the estimator is asymptotically normal, and self-normalization implies we do not need to know the rate of convergence. Tail-trimming, however, ensures asymmetric models are covered under rudimentary assumptions about the thresholds, and it implies possibly heterogeneous convergence rates below, at, or above √T. Further, it implies super-√T consistency is achievable, depending on regressor and error tail thickness and feedback, with a rate equivalent to the largest possible rate amongst untrimmed minimum distance estimators for linear models with i.i.d. errors, and a faster rate than QML for heavy-tailed GARCH. In the latter cases the optimal rate is achieved with the efficient GMM weight and by using simple rules of thumb for choosing the number of trimmed equations. Simulation evidence shows the new estimator dominates GMM and QML when those estimators are not, or have not been shown to be, asymptotically normal.
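A one-equation toy version of the tail-trimming idea (drop the most extreme estimating equations before solving the moment condition) can be sketched for a simple location model. This illustrates only the trimming mechanism, not the paper's GMM estimator; the fixed-point solver and all names are hypothetical:

```python
import numpy as np

def tail_trimmed_mean(y, k, n_iter=100):
    # Solve sum_i (y_i - theta) * 1{|y_i - theta| <= c} = 0, where c is
    # chosen so the k equations farthest from the current theta are
    # dropped, by fixed-point iteration started at the median.
    theta = np.median(y)
    for _ in range(n_iter):
        d = np.abs(y - theta)
        c = np.partition(d, len(y) - k - 1)[len(y) - k - 1]  # keep n-k closest
        new_theta = y[d <= c].mean()
        if abs(new_theta - theta) < 1e-10:
            break
        theta = new_theta
    return theta

rng = np.random.default_rng(0)
y = rng.standard_cauchy(1000)            # heavy tails: the raw mean is not even consistent
theta_hat = tail_trimmed_mean(y, k=100)  # trim the 100 most extreme equations
```

Trimming a vanishing fraction of the extreme equations is what restores asymptotic normality in the heavy-tailed setting the abstract describes; here it turns a useless sample mean into a stable location estimate.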
Global Validation of Linear Model Assumptions
, 2003
Abstract
A test for globally assessing the four assumptions of the linear model is proposed. The test can be viewed as a Neyman smooth test, and it relies only on the residual vector. If the global procedure indicates a breakdown in at least one of the four assumptions, the components of the global test statistic can be used to gain insight into which assumptions have been violated. The procedure can be used in conjunction with the usual graphical methods, and it is simple enough to be implemented by beginning statistics students. The procedure is demonstrated by analyzing data sets that have been used in previous work on model diagnostics, and a real data set pertaining to end-of-trading-day share values of the College Retirement and Equities Funds Growth and Stock accounts. Simulation results are presented indicating the sensitivity of the procedure in detecting model violations under a variety of situations.
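A Neyman-smooth-style decomposition on residuals can be illustrated with two classical directional components, skewness and kurtosis (a Jarque-Bera-type sketch; the paper's global statistic combines four components, including terms for linearity and heteroscedasticity, so the exact form below is an assumption for illustration, not the paper's statistic):

```python
import numpy as np

def smooth_normality_components(resid):
    # Two smooth-test directional components on standardized residuals:
    # skewness and excess kurtosis, each asymptotically chi-square(1)
    # under normality; large values flag a specific kind of violation.
    n = len(resid)
    z = (resid - resid.mean()) / resid.std()
    skew_comp = n * np.mean(z ** 3) ** 2 / 6.0
    kurt_comp = n * (np.mean(z ** 4) - 3.0) ** 2 / 24.0
    return skew_comp, kurt_comp

rng = np.random.default_rng(0)
ok = rng.standard_normal(500)          # residuals consistent with normality
bad = rng.exponential(size=500) - 1.0  # skewed residuals: a violated assumption
s_ok, k_ok = smooth_normality_components(ok)
s_bad, k_bad = smooth_normality_components(bad)
```

Summing such components gives a single global statistic, while inspecting them individually points at which assumption broke, which is the diagnostic use the abstract emphasizes.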
Institute of Statistics Mimeo Series No. 1829, July 1987
Regression Quantiles and Improved L-Estimation in Linear Models
Abstract
ABSTRACT. For the usual linear model, bearing the plausibility of a redundant subset of parameters, pretest and Stein-rule estimators based on the trimmed least squares estimation theory are considered. Compared to parallel M-estimators, the proposed L-estimators are computationally simpler and are also scale-equivariant. In the light of asymptotic distributional risks, the relative (risk-)efficiency results for these trimmed L-estimators and their improved versions are studied in detail. Positive-rule L-estimators are also considered in this context.