Results 1–10 of 10
Nonparametric analysis of a generalized regression model: the maximum rank correlation estimator
 Journal of the Royal Statistical Society, 1977
Abstract

Cited by 124 (0 self)
The paper considers estimation of a model y_i = D∘F(x_i′β₀, u_i), where the composite transformation D∘F is specified only in that D: ℝ → ℝ is nondegenerate monotonic and F: ℝ² → ℝ is strictly monotonic in each of its arguments. The paper thus generalizes standard data analysis, which assumes that the functional form of D∘F is known and additive. The estimator it proposes, the maximum rank correlation estimator, is nonparametric in the functional form of D∘F and nonparametric in the distribution of the error terms u_i. The estimator is shown to be strongly consistent for the parameters β₀ up to a scale coefficient.
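A minimal sketch of the rank-correlation objective described above: it counts the ordered pairs whose ranking by y agrees with their ranking by the linear index x′β, and β is identified only up to scale, so one coefficient is normalized to 1. The data-generating transform exp(·) and the grid search below are illustrative choices, not taken from the paper.

```python
import numpy as np

def mrc_objective(beta, X, y):
    # Fraction of ordered pairs (i, j) whose ranking by y agrees with
    # their ranking by the linear index x'beta (a rank correlation).
    idx = X @ beta
    agree = (np.subtract.outer(y, y) > 0) & (np.subtract.outer(idx, idx) > 0)
    n = len(y)
    return agree.sum() / (n * (n - 1))

# Toy data satisfying the model: y = D(F(x'beta, u)) with D = exp and
# F additive -- an illustrative special case of the general model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
beta_true = np.array([1.0, 2.0])
y = np.exp(X @ beta_true + 0.1 * rng.normal(size=200))

# beta is identified only up to scale: normalize the first coefficient
# to 1 and grid-search the second (a crude stand-in for optimization).
grid = np.linspace(0.5, 4.0, 36)
b2_hat = max(grid, key=lambda b: mrc_objective(np.array([1.0, b]), X, y))
```

With low noise, the objective is maximized near the true ratio of the coefficients, illustrating the scale-normalized consistency result.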
Optimal Comparison of Misspecified Moment Restriction Models
, 2009
Abstract

Cited by 2 (0 self)
This paper considers optimal testing of model comparison hypotheses for misspecified unconditional moment restriction models. We adopt the generalized Neyman–Pearson optimality criterion, which focuses on the convergence rates of the type I and type II error probabilities under fixed global alternatives, and derive an optimal but practically infeasible test. We then propose feasible test statistics that approximate the optimal one. For linear instrumental variable regression models, the conventional empirical likelihood ratio test statistic emerges. For general nonlinear moment restrictions, we propose a new test statistic based on an iterative algorithm. We derive asymptotic properties of these test statistics.
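As a toy instance of an unconditional moment restriction, the sketch below computes the empirical likelihood ratio statistic for a scalar mean restriction E[x] = μ; the statistic in the paper's linear-IV case is of this empirical-likelihood type, but the scalar setup and the bisection-based profiling of the Lagrange multiplier here are our own illustrative choices.

```python
import numpy as np

def el_stat(x, mu, tol=1e-10):
    # Empirical likelihood ratio statistic for E[x] = mu, asymptotically
    # chi-squared(1) when the restriction holds (Owen, 1988).
    g = x - mu
    # Profile out the Lagrange multiplier by bisection: f(lam) =
    # sum g_i / (1 + lam g_i) is decreasing on the feasible interval.
    lo = -1.0 / g.max() + 1e-8
    hi = -1.0 / g.min() - 1e-8
    lam = 0.0
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        f = np.sum(g / (1.0 + lam * g))
        if f > 0:
            lo = lam
        else:
            hi = lam
        if hi - lo < tol:
            break
    return 2.0 * np.sum(np.log1p(lam * g))

rng = np.random.default_rng(0)
x = rng.normal(size=500)
stat_at_mean = el_stat(x, x.mean())        # restriction holds exactly
stat_shifted = el_stat(x, x.mean() + 0.3)  # restriction violated
```

The statistic is essentially zero when the restriction holds in-sample and grows with the sample size under a fixed violation, which is the behavior the convergence-rate comparisons in the paper are built on.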
Use of SAMC for Bayesian Analysis of Statistical Models with Intractable Normalizing Constants
, 2010
Abstract
Bayesian analysis of models with intractable normalizing constants has attracted much attention in the recent literature. In this paper, we propose a new algorithm, the so-called Bayesian Stochastic Approximation Monte Carlo (BSAMC) algorithm, for this problem. BSAMC provides an online approximation to the normalizing constant using the stochastic approximation Monte Carlo (SAMC) algorithm. One significant advantage of BSAMC over auxiliary-variable MCMC methods is that it avoids the requirement for perfect samples, and thus it can be applied to many models for which perfect sampling is impossible or very expensive. Although a normalizing constant approximation is also involved in BSAMC, our numerical examples show that BSAMC is robust to initial guesses of the parameters, owing to the powerful ability of SAMC to explore the sample space. Under mild conditions, we show that BSAMC estimates converge almost surely to their true values. BSAMC also provides a general framework for approximate Bayesian sampling: sampling from a sequence of distributions for which the average of the log-density functions converges to the true log-density function.
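The SAMC core that BSAMC builds on can be sketched on a toy finite state space: stochastic approximation updates drive region weights θ toward the log normalizing constants of a partition, up to an additive constant. Everything below (the state space, partition, and gain sequence) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

def samc_discrete(psi, regions, iters=100000, t0=1000, seed=0):
    # Minimal SAMC on a finite state space: estimates the log total
    # weight of each region of the partition, up to a constant.
    rng = np.random.default_rng(seed)
    m = int(regions.max()) + 1
    pi = np.full(m, 1.0 / m)            # desired visiting proportions
    theta = np.zeros(m)
    x, n = 0, len(psi)
    for t in range(1, iters + 1):
        y = rng.integers(n)             # uniform proposal
        # MH acceptance for the working density psi(x) * exp(-theta[J(x)])
        a = (psi[y] / psi[x]) * np.exp(theta[regions[x]] - theta[regions[y]])
        if rng.random() < a:
            x = y
        gamma = t0 / max(t0, t)         # gain: sum infinite, sum of squares finite
        theta += gamma * ((np.arange(m) == regions[x]) - pi)
    return theta - theta[0]             # fix the additive constant

# Toy: 6 states in 3 regions with known region weights 3, 7, and 11.
psi = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
regions = np.array([0, 0, 1, 1, 2, 2])
est = samc_discrete(psi, regions)
```

BSAMC in the paper embeds this kind of online normalizing-constant estimate inside a Bayesian sampler; this toy only illustrates the SAMC ingredient.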
Inst. of Statistics Mimeo Series #1369: On Weissman's Method of Estimating Large Percentiles
Abstract
The estimator is based on the joint limiting distribution of the k largest order statistics. The present work extends the method to censored situations and investigates the consistency and asymptotic distribution of the estimator under two different limiting schemes. Mean squared error calculations indicate that it is often an improvement over the usual sample percentile estimators.
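For orientation, a Weissman-type large-percentile estimator in its simplest (uncensored, exponential-tail) form can be sketched as follows; the censored extensions studied in the paper are not handled here.

```python
import numpy as np

def weissman_quantile(x, k, q):
    # Large-quantile estimate from the k largest order statistics:
    # threshold plus mean excess times the log exceedance ratio
    # (the exponential-tail / Gumbel-domain form).
    n = len(x)
    xs = np.sort(x)
    u = xs[n - k - 1]                   # threshold: (k+1)-th largest value
    scale = np.mean(xs[n - k:] - u)     # mean excess over the threshold
    return u + scale * np.log(k / (n * (1.0 - q)))

rng = np.random.default_rng(0)
x = rng.exponential(size=5000)
est = weissman_quantile(x, k=250, q=0.999)
# For Exp(1) data the true 0.999 quantile is -log(0.001), about 6.91.
```

The estimator extrapolates beyond the sample range using only the upper tail, which is why its behavior depends on how k grows with n, the "limiting schemes" the abstract refers to.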
Quadratic Forms
Abstract
In time series analysis, tests for serial independence, symmetry, and goodness-of-fit based on divergence measures, such as the Kullback–Leibler divergence or Hellinger distance, are currently receiving much interest. We consider replacing the divergence measures in these tests by kernel-based quadratic forms. In this way we avoid the common practice of using plug-in estimators. Our approach separates the problem of consistently estimating the divergence measure from that of consistently estimating the underlying joint densities. We construct a test for serial independence on the basis of the introduced quadratic forms. Optimal bandwidth selection is a common problem in nonparametric econometrics; to confront it we use an adaptive procedure over a range of different bandwidth values. In order to produce an exact test, a permutation procedure is applied. Our results are illustrated with simulations for various data-generating processes relevant to financial econometrics. We compare the performance of our test with existing nonparametric tests for serial independence and show that for many processes our approach produces higher power than the BDS test and the test of Granger, Maasoumi, and Racine (2004). We apply our method to the return series of the S&P 500. JEL classification: C10, C12, C22
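An HSIC-style kernel quadratic form between a series and its lag, combined with a permutation test, gives the flavor of the approach; the fixed bandwidth and Gaussian kernel below are simplifying assumptions (the paper adapts the bandwidth over a range of values).

```python
import numpy as np

def kernel_stat(x, lag=1, h=1.0):
    # Kernel quadratic form between the series and its lag: large when
    # (x_t, x_{t+lag}) departs from the product of its marginals.
    a, b = x[:-lag], x[lag:]
    Ka = np.exp(-np.subtract.outer(a, a) ** 2 / (2 * h * h))
    Kb = np.exp(-np.subtract.outer(b, b) ** 2 / (2 * h * h))
    Kac = Ka - Ka.mean(0) - Ka.mean(1)[:, None] + Ka.mean()  # center Ka
    return np.sum(Kac * Kb) / len(a) ** 2

def perm_pvalue(x, n_perm=100, seed=0):
    # Permutation test: shuffling the series destroys serial dependence
    # while preserving the marginal distribution, so the test is exact.
    rng = np.random.default_rng(seed)
    t0 = kernel_stat(x)
    hits = sum(kernel_stat(rng.permutation(x)) >= t0 for _ in range(n_perm))
    return (1 + hits) / (1 + n_perm)

# Strongly dependent AR(1) data should be rejected decisively.
rng = np.random.default_rng(1)
n = 200
ar = np.empty(n); ar[0] = 0.0
for t in range(1, n):
    ar[t] = 0.8 * ar[t - 1] + rng.normal()
p_ar = perm_pvalue(ar)
```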
Risk Forecasting with GARCH, Skewed t Distributions, and Multiple Timescales
Abstract
Historical time series of asset returns are commonly used to derive forecasts of risk, such as value at risk (VaR). Provided there is enough data, this can be done successfully even though asset returns are typically heavy-tailed, heteroskedastic, and serially dependent. We describe how the historical data can first be GARCH-filtered and then used to calibrate the parameters of the heavy-tailed skewed t distribution. Sufficient recent data is available if the forecasting horizon is short enough, for example for daily VaR forecasts. When the horizon is weekly or monthly, however, a sufficiently long weekly or monthly returns series extends too far into the past to be practical. To address this we introduce a multiple-timescale approach, in which risk forecasts at a longer timescale, such as weekly or monthly, can be made with the more abundant data available at a shorter timescale, such as daily or weekly. The method is analyzed both theoretically and empirically using the last few decades of daily S&P 500 returns. Since this method is not tied ...
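A minimal sketch of the filtering step: devolatilize returns with a GARCH(1,1) recursion, then forecast one-step VaR from a quantile of the filtered residuals. The sketch uses an empirical residual quantile where the paper calibrates a skewed t distribution, and the parameter values below are illustrative, not estimated.

```python
import numpy as np

def garch_filter(r, omega, alpha, beta):
    # GARCH(1,1) recursion: sigma2_t = omega + alpha*r_{t-1}^2
    # + beta*sigma2_{t-1}; returns conditional variances and the
    # devolatilized residuals r_t / sigma_t.
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2, r / np.sqrt(sigma2)

def var_forecast(r, level=0.01, omega=1e-6, alpha=0.08, beta=0.9):
    # One-step-ahead VaR: empirical quantile of the filtered residuals,
    # rescaled by the next-step conditional volatility.
    sigma2, z = garch_filter(r, omega, alpha, beta)
    s2_next = omega + alpha * r[-1] ** 2 + beta * sigma2[-1]
    return -np.quantile(z, level) * np.sqrt(s2_next)

# Simulate a GARCH(1,1) path with unit-variance t(6) innovations
# and forecast the next day's 1% VaR.
rng = np.random.default_rng(0)
n, omega, alpha, beta = 4000, 1e-6, 0.08, 0.9
r = np.empty(n); s2 = omega / (1 - alpha - beta)
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_t(df=6) * np.sqrt(4 / 6)
    s2 = omega + alpha * r[t] ** 2 + beta * s2
var1 = var_forecast(r)
```

Filtering with the true parameters should leave residuals with roughly unit variance, which is what makes the residual series reusable for calibration at other timescales.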
FOR THE PARSIMONY (© Applied Probability Trust)
Abstract
In phylogenetic analysis it is useful to study the distribution of the parsimony length of a tree under the null model, in which the leaves are independently assigned letters according to prescribed probabilities. Except in one special case, this distribution is difficult to describe exactly. Here we analyze it by providing a recursive and readily computable description, establishing large deviation bounds for the parsimony length of a fixed tree on a single site and for the minimum-length (maximum parsimony) tree over several sites. We also show that, under very general conditions, the former distribution converges asymptotically to a normal distribution, thereby settling a recent conjecture. Furthermore, we show how the mean and variance of this distribution can be efficiently calculated. The proof of normality requires a number of new and recent results, as the parsimony length is not directly expressible as a sum of independent random variables, and so normality does not follow immediately from a standard central limit theorem.
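The parsimony length itself is computable by the Fitch recursion, and for small trees the null distribution can be enumerated exactly, in the spirit of the recursive description mentioned above. The 8-leaf tree and the two equiprobable states below are illustrative choices.

```python
import numpy as np
from itertools import product

def fitch_length(tree, leaf_states):
    # Fitch (1971) parsimony length of a single site on a rooted binary
    # tree given as nested tuples of leaf indices: the minimum number of
    # state changes needed to explain the leaf assignment.
    def rec(node):
        if isinstance(node, int):
            return {leaf_states[node]}, 0
        (sl, cl), (sr, cr) = rec(node[0]), rec(node[1])
        inter = sl & sr
        if inter:
            return inter, cl + cr
        return sl | sr, cl + cr + 1
    return rec(tree)[1]

# Null model of the abstract: leaves assigned letters independently with
# prescribed probabilities (here two equiprobable states, 8 leaves).
tree = (((0, 1), (2, 3)), ((4, 5), (6, 7)))
dist = {}
for states in product([0, 1], repeat=8):
    dist[fitch_length(tree, states)] = dist.get(fitch_length(tree, states), 0.0) + 0.5 ** 8
mean_len = sum(l * w for l, w in dist.items())
```

With two states, length 0 occurs exactly when all leaves share one state, so the enumerated distribution can be sanity-checked against that probability.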
Statistic ∗
, 2009
Abstract
In this note we establish the existence of the first two moments of the asymptotic trace statistic, which arises as the weak limit of the likelihood ratio statistic for testing the cointegration rank in a vector autoregressive model and whose moments may be used to develop panel cointegration tests. Moreover, we justify the common practice of approximating these moments by simulating a certain statistic that converges weakly to the asymptotic trace statistic. To accomplish this we show that the moments of this statistic converge to those of the asymptotic trace statistic as the time dimension tends to infinity.
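The simulation practice the note justifies can be sketched directly: build the finite-T statistic from a multivariate Gaussian random walk and its increments, and average over draws to approximate the first moment. The dimension, horizon, and number of draws below are arbitrary illustrative choices.

```python
import numpy as np

def sim_trace_stat(dim, T, rng):
    # Finite-T statistic whose weak limit (as T grows) is the asymptotic
    # trace statistic: tr[A' B^{-1} A] with A = sum W_{t-1} e_t' and
    # B = sum W_{t-1} W_{t-1}', W a dim-dimensional random walk.
    e = rng.normal(size=(T, dim))
    W = np.cumsum(e, axis=0)
    Wl = np.vstack([np.zeros(dim), W[:-1]])  # lagged levels W_{t-1}
    A = Wl.T @ e
    B = Wl.T @ Wl
    return np.trace(A.T @ np.linalg.solve(B, A))

rng = np.random.default_rng(0)
draws = np.array([sim_trace_stat(2, 500, rng) for _ in range(2000)])
mean_est = draws.mean()   # Monte Carlo approximation to the first moment
```

The note's contribution is exactly what licenses this shortcut: without moment convergence, weak convergence of the statistic alone would not guarantee that the simulated mean approximates the limiting mean.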
Empirical Likelihood Confidence Intervals for the Gini Measure of Income Inequality
Abstract
The Gini coefficient is among the most popular and widely used measures of income inequality in economic studies, with various extensions and applications in finance and other related areas. This paper studies confidence intervals for the Gini coefficient for simple random samples, using normal approximation, the bootstrap percentile and bootstrap-t methods, and the empirical likelihood method. Through both theory and simulation studies it is shown that the intervals based on normal or bootstrap approximation are less satisfactory for samples of small or moderate size than the bootstrap-calibrated empirical likelihood ratio confidence intervals, which perform well for all sample sizes. Results for stratified random sampling are also presented.
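A sketch of the simplest of the four constructions, the bootstrap percentile interval, for the Gini coefficient of a simulated lognormal income sample; the sample size and number of bootstrap replicates are illustrative.

```python
import numpy as np

def gini(x):
    # Gini coefficient via the sorted-values formula
    # G = 2 * sum_i i * x_(i) / (n * sum x) - (n + 1) / n.
    xs = np.sort(x)
    n = len(xs)
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * xs) / (n * xs.sum()) - (n + 1.0) / n

def bootstrap_ci(x, level=0.95, n_boot=2000, seed=0):
    # Bootstrap percentile interval: resample with replacement,
    # recompute the Gini, take the tail quantiles of the replicates.
    rng = np.random.default_rng(seed)
    stats = np.array([gini(rng.choice(x, size=len(x)))
                      for _ in range(n_boot)])
    a = (1 - level) / 2
    return np.quantile(stats, a), np.quantile(stats, 1 - a)

rng = np.random.default_rng(1)
income = rng.lognormal(mean=0.0, sigma=1.0, size=400)
lo, hi = bootstrap_ci(income)
```

The paper's point is that for small samples this simple interval (and the normal-approximation one) undercovers relative to the bootstrap-calibrated empirical likelihood interval; the sketch only shows the baseline being compared against.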