Results 1 - 10 of 26
A Model Selection Approach to Assessing the Information in the Term Structure Using Linear Models and Artificial Neural Networks
 Journal of Business and Economic Statistics
, 1992
Abstract

Cited by 53 (13 self)
We take a model selection approach to the question of whether forward interest rates are useful in predicting future spot rates, using a variety of out-of-sample forecast-based model selection criteria: forecast mean squared error, forecast direction accuracy, and forecast-based trading system profitability. We also examine the usefulness of a class of novel prediction models called "artificial neural networks," and investigate the issue of appropriate window sizes for rolling-window-based prediction methods. Results indicate that the premium of the forward rate over the spot rate helps to predict the sign of future changes in the interest rate. Further, model selection based on an in-sample Schwarz Information Criterion (SIC) does not appear to be a reliable guide to out-of-sample performance in the case of short-term interest rates. Thus, the in-sample SIC apparently fails to offer a convenient shortcut to true out-of-sample performance measures. Keywords: Artificial Neural Network...
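The out-of-sample criteria named in this abstract can be made concrete with a short sketch (hypothetical rate data, and a naive rolling-mean predictor standing in for the paper's linear and neural-network models; the trading-profitability criterion is omitted):

```python
def rolling_window_forecasts(y, window):
    """One-step-ahead forecasts: each is the mean of the previous
    `window` observations (a stand-in for re-estimating a model on
    each rolling window)."""
    return [sum(y[t - window:t]) / window for t in range(window, len(y))]

def forecast_mse(actual, forecast):
    """Out-of-sample forecast mean squared error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def direction_accuracy(prev, actual, forecast):
    """Fraction of periods in which the forecast change from the last
    observed value has the same sign as the realized change."""
    hits = sum(1 for p, a, f in zip(prev, actual, forecast)
               if (a - p) * (f - p) > 0)
    return hits / len(actual)

# hypothetical spot-rate series, for illustration only
rates = [5.0, 5.1, 5.3, 5.2, 5.4, 5.6, 5.5, 5.7, 5.9, 6.0]
w = 3
fcst = rolling_window_forecasts(rates, w)
actual = rates[w:]          # realized rates over the forecast period
prev = rates[w - 1:-1]      # last observed rate at each forecast origin
mse = forecast_mse(actual, fcst)
hit_rate = direction_accuracy(prev, actual, fcst)
```

Both criteria are computed only on observations never seen by the forecaster, which is the point of contrast with the in-sample SIC discussed above.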
Computer Automation of General-to-Specific Model Selection Procedures." Unpublished Paper, Nuffield
, 1999
Abstract

Cited by 36 (11 self)
That econometric methodology remains in dispute partly reflects the lack of clear evidence on alternative approaches. This paper reconsiders econometric model selection from a computer-automation perspective, focusing on general-to-specific reduction approaches, as embodied in the program PcGets (general-to-specific). Starting from a general linear, dynamic statistical model, which captures the essential data characteristics, standard testing procedures are applied to eliminate statistically insignificant variables, using diagnostic tests to check the validity of the reductions, ensuring a congruent final model. As the joint issue of variable selection and diagnostic testing eludes most attempts at theoretical analysis, a simulation-based analysis of modelling strategies is presented. The results of the Monte Carlo experiments cohere with the established theory: PcGets recovers the DGP specification with remarkable accuracy. Empirical size and power of PcGets are close to what one would expect if the DGP were known. JEL Classification: C51, C22.
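The core reduction loop of a general-to-specific search can be sketched as follows (backward elimination only; PcGets additionally runs diagnostic tests for congruence and explores multiple search paths, which this toy omits; all variable names and t-values below are hypothetical):

```python
def general_to_specific(variables, tstats, critical=1.96):
    """Backward-elimination core of a general-to-specific search:
    starting from the general model, repeatedly drop the least
    significant variable until every remaining one is significant.
    `tstats` maps the retained variable set to absolute t-statistics;
    a real implementation would re-estimate the regression (and re-run
    diagnostic tests) after every reduction."""
    keep = list(variables)
    while keep:
        stats = tstats(keep)
        worst = min(keep, key=lambda v: stats[v])
        if stats[worst] >= critical:
            break  # all remaining variables are significant
        keep.remove(worst)
    return keep

# stand-in for re-estimation: fixed |t| values (hypothetical)
fixed = {"x1": 5.2, "x2": 0.4, "x3": 2.5, "x4": 1.1}
kept = general_to_specific(list(fixed), lambda ks: {v: fixed[v] for v in ks})
# x2 and x4 are eliminated in turn; x1 and x3 survive
```

The diagnostic-testing step that the abstract emphasizes is what distinguishes this congruence-checked reduction from naive stepwise deletion.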
Reversed Score and Likelihood Ratio Tests
, 2000
Abstract

Cited by 20 (0 self)
Two extensions of a parametric model are proposed, each one involving the score function of an alternative parametric model. We show that the encompassing hypothesis is equivalent to standard conditions on the score of each of the extended models. The condition on the first extension gives rise to the standard score encompassing test, while the condition on the second extension induces a so-called reversed score encompassing test. A similar logic is applied to the likelihood ratio, generating a likelihood ratio and a reversed likelihood ratio encompassing test. The ensuing test statistics can be based on simulations if certain calculations are too difficult to carry out analytically. We study the first-order asymptotic properties of the proposed test statistics under general conditions.
Comparing density forecast models
 University of California, Riverside
, 2007
Abstract

Cited by 13 (0 self)
In this paper we discuss how to compare various (possibly misspecified) density forecast models using the Kullback-Leibler Information Criterion (KLIC) of a candidate density forecast model with respect to the true density. The KLIC differential between a pair of competing models is the (predictive) log-likelihood ratio (LR) between the two models. Even though the true density is unknown, using the LR statistic amounts to comparing models with the KLIC as a loss function and thus enables us to assess which density forecast model can approximate the true density more closely. We also discuss how this KLIC is related to the KLIC based on the probability integral transform (PIT) in the framework of Diebold et al. (1998). While they are asymptotically equivalent, the PIT-based KLIC is best suited for evaluating the adequacy of each density forecast model, and the original KLIC is best suited for comparing competing models. In an empirical study with the S&P 500 and NASDAQ daily return series, we find strong evidence for rejecting the Normal-GARCH benchmark model in favor of models that can capture skewness in the conditional distribution and asymmetry and long memory in the conditional variance.
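The key identity in this abstract, that the KLIC differential reduces to an average log-likelihood ratio in which the unknown true density cancels, is easy to sketch (a minimal illustration with hypothetical data; the paper's actual test statistic also accounts for parameter estimation):

```python
import math

def norm_logpdf(x, mu=0.0, sigma=1.0):
    """Log density of N(mu, sigma^2)."""
    z = (x - mu) / sigma
    return -0.5 * z * z - math.log(sigma) - 0.5 * math.log(2.0 * math.pi)

def t_logpdf(x, df):
    """Log density of a Student-t with df degrees of freedom."""
    return (math.lgamma((df + 1.0) / 2.0) - math.lgamma(df / 2.0)
            - 0.5 * math.log(df * math.pi)
            - (df + 1.0) / 2.0 * math.log(1.0 + x * x / df))

def klic_differential(data, logf, logg):
    """Average predictive log-likelihood ratio between densities f and g.
    By the law of large numbers this estimates KLIC(true, g) - KLIC(true, f),
    so the unknown true density drops out; positive values favour f."""
    return sum(logf(x) - logg(x) for x in data) / len(data)

# illustrative only: a few fat-tailed observations (hypothetical numbers)
sample = [-3.0, -0.2, 0.1, 0.3, 2.8]
lr = klic_differential(sample, lambda x: t_logpdf(x, 5.0), norm_logpdf)
# lr > 0 here: the tail observations favour the Student-t density
```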
A Test for Density Forecast Comparison with Applications to Risk Management
, 2004
Abstract

Cited by 11 (2 self)
In this paper we propose a testing procedure for comparing the predictive abilities of possibly misspecified density forecast models. We use the minimum Kullback-Leibler Information Criterion (KLIC) divergence measure to define the distance between the candidate density forecast model and the true model. We use the fact that the inverse-normal transform of the probability integral transforms (PITs) should be IID standard normal, as discussed in Berkowitz (2001). To compare the performance of density forecast models in the tails, we use censored likelihood functions to compute the tail minimum KLIC. The reality check test of White (2000) is then constructed using our distance measure as a loss function. To highlight the merits of our approach, we use the daily S&P 500 and NASDAQ return series to conduct an empirical density forecast comparison exercise. A large set of distributions, including some recently proposed flexible distributions that accommodate higher moments, and the ARCH-family volatility specifications are studied. Our empirical findings lend further support to the fat-tailedness and skewness of return distributions. In addition, the choice of conditional distribution specification appears to be a much more dominant factor in determining the quality of density forecasts than the choice of volatility specification.
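The inverse-normal PIT step from Berkowitz (2001) can be sketched directly (a minimal illustration using Python's statistics.NormalDist; the paper's censored-likelihood tail comparison and reality check test are not shown):

```python
from statistics import NormalDist

def inverse_normal_pit(observations, forecast_cdf):
    """Berkowitz (2001) transform: if forecast_cdf is the true conditional
    distribution, the PITs forecast_cdf(y) are IID U(0,1), so the
    returned z values should be IID standard normal."""
    std = NormalDist()
    return [std.inv_cdf(forecast_cdf(y)) for y in observations]

# sanity sketch with hypothetical data: when the forecast density is
# exactly right (here, an N(0,1) forecast applied to values on that scale),
# the transform is the identity, so z_t == y_t up to numerical error
forecast = NormalDist(0.0, 1.0)
ys = [-1.2, -0.4, 0.0, 0.3, 0.9, 1.5]
zs = inverse_normal_pit(ys, forecast.cdf)
```

Departures of the z series from IID N(0,1), in mean, variance, or autocorrelation, then signal misspecification of the forecast density.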
Bayesian Exponentially Tilted Empirical Likelihood
 Biometrika
, 2005
Abstract

Cited by 7 (0 self)
Newey and Smith (2001) have recently shown that Empirical Likelihood (EL) exhibits desirable higher-order asymptotic properties, namely, that its O(n⁻¹) bias is particularly small and that bias-corrected EL is higher-order efficient. Although EL possesses these properties when the model is correctly specified, this paper shows that the asymptotic variance of EL in the presence of model misspecification may become undefined when the functions defining the moment conditions are unbounded. In contrast, the Exponential Tilting (ET) estimator avoids this problem under mild regularity conditions. Since ET does not share the higher-order asymptotic properties of EL, there is a need for an estimator that combines the qualities of both estimators. This paper introduces a new estimator called Exponentially Tilted Empirical Likelihood (ETEL) that is shown to have the same O(n⁻¹) bias and the same O(n⁻²) variance as EL, while maintaining a well-defined asymptotic variance under model misspecification.
Conditional Distributions Of Earnings, Wages And Hours For Blacks And Whites
 Journal of Econometrics
, 1981
Abstract

Cited by 3 (1 self)
This paper provides new evidence on the conditional distributions of earnings, wages and hours for white and black males in the University of Michigan's Panel Study of Income Dynamics. Conditional hours and ln wages are approximately normal for both races. Conditional earnings are approximately normal for blacks, while earnings are well approximated for whites by the product of normal hours and lognormal wages. The distribution of this product is derived here for the first time. Treating the marginal earnings distribution as the average of the conditional distributions, we use these results to predict poverty and affluence rates in our sample.
Information and Posterior Probability Criteria for Model Selection in Local Likelihood Estimation
 J Amer. Stat. Ass
, 1998
Abstract

Cited by 2 (0 self)
In this paper we propose a modification to the methods used to motivate many information and posterior probability criteria for the weighted likelihood case. We derive weighted versions of two of the most widely known criteria, namely the AIC and BIC. Via a simple modification, the criteria are also made useful for window span selection. The usefulness of the weighted versions of these criteria is demonstrated through a simulation study and an application to three data sets. KEY WORDS: Information Criteria; Posterior Probability Criteria; Model Selection; Local Likelihood. 1. INTRODUCTION. Local regression has become a popular method for smoothing scatterplots and for nonparametric regression in general. It has proven to be a useful tool in finding structure in datasets (Cleveland and Devlin 1988). Local regression estimation is a method for smoothing scatterplots (x_i, y_i), i = 1, ..., n, in which the fitted value at x_0 is the value of a polynomial fit to the data using weighted least squares, where the weight given to (x_i, y_i) is related to the distance between x_i and x_0. Stone (1977) shows that estimates obtained using local regression methods have desirable theoretical properties. Recently, Fan (1993) has studied minimax properties of local linear regression. Tibshirani and Hastie (1987) extend the ideas of local regression to a local likelihood procedure. This procedure is designed for nonparametric regression modeling in situations where weighted least squares is inappropriate as an estimation method, for example binary data. Local regression may be viewed as a special case of local likelihood estimation. Tibshirani and Hastie (1987), Staniswalis (1989), and Loader (1999) apply local likelihood estimation to several types of data where local regressio...
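The unweighted criteria, and the general shape of a weighted variant, can be sketched as follows (a generic illustration, not the paper's exact derivation; the effective-sample-size penalty in the weighted version is an assumption made here for concreteness):

```python
import math

def aic(loglik, k):
    """Akaike Information Criterion: -2*loglik + 2k."""
    return -2.0 * loglik + 2 * k

def bic(loglik, k, n):
    """Schwarz/Bayesian Information Criterion: -2*loglik + k*log(n)."""
    return -2.0 * loglik + k * math.log(n)

def weighted_ic(weights, loglikes, k, penalty="aic"):
    """Sketch of a weighted-likelihood criterion: replace the total
    log-likelihood with a kernel-weighted sum and, for the BIC-style
    penalty, the sample size with the effective number of observations
    (the sum of the weights). The paper derives the exact penalties;
    this is only an illustrative stand-in."""
    wl = sum(w * l for w, l in zip(weights, loglikes))
    n_eff = sum(weights)
    pen = 2 * k if penalty == "aic" else k * math.log(n_eff)
    return -2.0 * wl + pen
```

With unit weights the weighted criterion collapses to the ordinary AIC/BIC of the summed log-likelihood, which is the sanity check one would want of any such modification.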
The Curve Fitting Problem: A Bayesian Rejoinder
, 1998
Abstract
In the curve fitting problem, two conflicting desiderata, simplicity and goodness-of-fit, pull in opposite directions. To solve this problem, two proposals are discussed: the first based on the Bayes' theorem criterion (BTC), and the second, advocated by Forster and Sober, based on Akaike's Information Criterion (AIC). We show that AIC, which is frequentist in spirit, is logically equivalent to BTC, provided that a suitable choice of priors is made. We evaluate the charges against Bayesianism and contend that the AIC approach has shortcomings. We also discuss the relationship between Schwarz's Bayesian Information Criterion and BTC. Overview: In the curve fitting problem, two conflicting desiderata, simplicity and goodness-of-fit, pull in opposite directions. Simplicity forces us to choose straight lines over nonlinear equations, whereas goodness-of-fit forces us to choose the latter over the former. This article discusses two proposals that attempt to strike an optimal balance between these two conflicting desiderata. A Bayesian solution to the curve fitting problem can be obtained by applying Bayes' theorem; this solution is called the Bayes' Theorem Criterion (BTC). Malcolm Forster and Elliott Sober, in contrast, propose Akaike's Information Criterion (AIC), which is frequentist in spirit. The purpose of this article is threefold. First, we address some of the objections to the Bayesian approach raised by Forster and Sober. Second, we describe some limitations in the implementation of the approach based on AIC. Finally, we show that AIC is in fact logically equivalent to BTC with a suitable choice of priors. The underlying theme of this paper is to illuminate the Bayesian/non-Bayesian debate in philosophy of science.
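For a least-squares curve fit with Gaussian errors, the simplicity/goodness-of-fit trade-off that both criteria formalize can be sketched numerically (the residual sums of squares below are hypothetical; k counts polynomial coefficients):

```python
import math

def gaussian_aic(rss, n, k):
    """AIC for an n-point least-squares fit with Gaussian errors,
    up to an additive constant: n*log(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

def gaussian_bic(rss, n, k):
    """BIC analogue: n*log(RSS/n) + k*log(n)."""
    return n * math.log(rss / n) + k * math.log(n)

# hypothetical RSS for polynomial degrees 0..3 on n = 20 points:
# the fit improves sharply up to degree 1, then only marginally
rss_by_degree = [40.0, 4.0, 3.8, 3.7]
n = 20
best_aic = min(range(4), key=lambda d: gaussian_aic(rss_by_degree[d], n, d + 1))
# the penalty terms outweigh the marginal gains of degrees 2 and 3,
# so the straight line (degree 1) is selected
```

BIC's log(n) penalty grows with the sample size while AIC's does not, which is the formal seed of the AIC/BTC comparison the article pursues.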
The GLMSELECT Procedure
Abstract
For a Web download or e-book: Your use of this publication shall be governed by the terms established by the vendor at the time you acquire this publication. U.S. Government Restricted Rights Notice: Use, duplication, or disclosure of this software and related documentation by the U.S. government is subject to the Agreement with SAS Institute and the restrictions set forth in FAR 52.227-19, Commercial Computer Software-Restricted Rights (June 1987).