Tests of conditional predictive ability
Econometrica, 2006
Cited by 46 (1 self)
We argue that the current framework for predictive ability testing (e.g., West, 1996) is not necessarily useful for real-time forecast selection, i.e., for assessing which of two competing forecasting methods will perform better in the future. We propose an alternative framework for out-of-sample comparison of predictive ability which delivers more practically relevant conclusions. Our approach is based on inference about conditional expectations of forecasts and forecast errors rather than the unconditional expectations that are the focus of the existing literature. We capture important determinants of forecast performance that are neglected in the existing literature by evaluating what we call the forecasting method (the model and the parameter estimation procedure), rather than just the forecasting model. Compared to previous approaches, our tests are valid under more general data assumptions (heterogeneity rather than stationarity) and estimation methods, and they can handle comparison of both nested and non-nested models, which is not currently possible. To illustrate the usefulness of the proposed tests, we compare the forecast performance of three leading parameter-reduction methods for macroeconomic forecasting using a large number of predictors: a sequential model selection approach, ...
A Consistent Test for Nonlinear Out of Sample Predictive Accuracy
2000
Cited by 11 (3 self)
In this paper, we draw on both the consistent specification testing and the predictive ability testing literatures and propose a test for predictive accuracy which is consistent against generic nonlinear alternatives. Broadly speaking, given a particular reference model, assume that the objective is to test whether there exists any alternative model, among an infinite number of alternatives, that has better predictive accuracy than the reference model, for a given loss function. A typical example is the case in which the reference model is a simple autoregressive model and the objective is to check whether a more accurate forecasting model can be constructed by including possibly unknown (non)linear functions of the past of the process or of the past of some other process(es). We propose a statistic which is similar in spirit to that of White (2000), although our approach differs from his as we allow for an infinite number of competing models that may be nested. In addition, we allow for non ...
Tests of equal predictive ability with real-time data. Discussion paper
2007
Cited by 8 (3 self)
This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy applied to direct, multi-step predictions from both non-nested and nested linear regression models. In contrast to earlier work, including West (1996), Clark and McCracken (2001, 2005), and McCracken (2006), our asymptotics take account of the real-time, revised nature of the data. Monte Carlo simulations indicate that our asymptotic approximations yield reasonable size and power properties in most circumstances. The paper concludes with an examination of the real-time predictive content of various measures of economic activity for inflation.
Bootstrap conditional distribution tests in the presence of dynamic misspecification
Journal of Econometrics, 2006
Cited by 7 (2 self)
In this paper, we show the first-order validity of the block bootstrap in the context of Kolmogorov-type conditional distribution tests when there is dynamic misspecification and parameter estimation error. Our approach differs from the literature to date because we construct a bootstrap statistic that allows for dynamic misspecification under both hypotheses. We consider two test statistics: one is the CK test of Andrews (1997), and the other is in the spirit of Diebold, Gunther and Tay (1998). The limiting distribution of both tests is a Gaussian process with a covariance kernel that reflects dynamic misspecification and parameter estimation error. In order to provide valid asymptotic critical values, we suggest an extension of the empirical process version of the block bootstrap to the case of non-vanishing parameter estimation error. The findings from Monte Carlo experiments show that both statistics have good finite-sample properties for samples as small as 500 observations. JEL classification: C12, C22.
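The block bootstrap this abstract builds on resamples contiguous blocks of a time series so that short-range dependence survives in the resample. A minimal moving-block resampler can be sketched as follows; the function name and block-length choice are illustrative, and this sketch omits the paper's extensions for parameter estimation error.

```python
import numpy as np

def block_bootstrap(x, block_len, rng):
    """Draw one moving-block bootstrap resample of the series x.

    Blocks of length `block_len` are drawn with replacement from all
    contiguous blocks of x and concatenated until the resample reaches
    the original length (illustrative sketch only).
    """
    x = np.asarray(x)
    n = x.size
    n_blocks = int(np.ceil(n / block_len))
    # Random starting indices for each contiguous block.
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    resample = np.concatenate([x[s : s + block_len] for s in starts])
    return resample[:n]  # trim to the original sample size
```

In a bootstrap test, the statistic is recomputed on many such resamples and the empirical quantiles of those replications serve as critical values.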
A Comparison of Alternative Causality and Predictive Accuracy Tests in the Presence of Integrated and Cointegrated Economic Variables
Texas A&M University, 2001
Cited by 5 (2 self)
A number of variants of seven procedures designed to check for the absence of causal ordering are summarized. Five are based on classical hypothesis testing principles, including: Wald F-tests designed for stationary and difference-stationary data; sequential Wald tests that account for cointegration; surplus-lag regression-type tests; and nonparametric fully modified vector autoregressive-type tests. The other two are based on model selection techniques, and include: complexity-penalized likelihood criteria; and ex-ante model selection based on predictive ability. In addition, various other approaches to checking for the causal order of economic variables are briefly discussed. A small set of Monte Carlo experiments is carried out in order to assess empirical size, and it is found that although all tests perform well in the environments where the true lag dynamics and cointegrating ranks are "accurately" estimated, simple surplus-lag-type tests of the variety discussed by Toda and Yamamoto ...
Not-for-Publication Appendix to “Tests of Equal Forecast Accuracy and Encompassing for Nested Models”
2000
Cited by 4 (1 self)
This not-for-publication appendix contains proofs of Theorems 3.1–3.3 as discussed in the text. It also contains lemmas used to prove the theorems. In addition, the appendix contains ...
Some recent developments in predictive accuracy testing with nested models and (generic) nonlinear alternatives
International Journal of Forecasting, 2004
Cited by 3 (2 self)
Forecasters and applied econometricians are often interested in comparing the predictive accuracy of nested competing models. A leading example of a context in which competing models are nested is when predictive ability is equated with “out-of-sample Granger causality”. In particular, it is often of interest to assess whether historical data from one variable are useful when constructing a forecasting model for another variable, and hence our use of terminology such as “out-of-sample Granger causality” (see e.g. Ashley, Granger and Schmalensee (1980)). In this paper we examine and discuss three key issues one is faced with when constructing predictive accuracy tests, namely: the contribution of parameter estimation error, the choice of linear versus nonlinear models, and the issue of (dynamic) misspecification, with primary focus on the latter of these issues. One of our main conclusions is that there are a number of easy-to-apply statistics constructed using out-of-sample conditional moment conditions which are robust to the presence of dynamic misspecification under both hypotheses. We provide some new Monte Carlo findings and empirical evidence based on the use of such tests. In particular, we analyze the finite-sample properties of the consistent out-of-sample test of Corradi and Swanson (2002) using data generating processes calibrated with ...
Evaluating Long-Horizon Forecasts
2003
Cited by 1 (0 self)
This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy and encompassing applied to predictions from nested long-horizon regression models. We first derive the asymptotic distributions of a set of tests of equal forecast accuracy and encompassing, showing that the tests have nonstandard distributions that depend on the parameters of the data-generating process. Using a simple model-based bootstrap for inference, we then conduct Monte Carlo simulations of a range of data-generating processes to examine the finite-sample size and power of the tests. In these simulations, the bootstrap yields tests with good finite-sample size and power properties, with the encompassing test proposed by Clark and McCracken (2001) having superior power. The paper concludes with a re-examination of the predictive content of capacity utilization for core inflation.
Asymptotics for Out of Sample Tests of Causality
1999
Cited by 1 (0 self)
This paper presents analytical and numerical evidence concerning out-of-sample tests of causality. The relevant environment is one in which the relative predictive ability of two nested parametric regression models is of interest. Results are provided for three statistics: a regression-based statistic suggested by Morgan (1939) and Granger and Newbold (1977), a t-type statistic commonly attributed to either West (1996) or Diebold and Mariano (1995), and an F-type statistic akin to Theil's U. Since the limiting distributions under the null are nonstandard, tables of asymptotically valid critical values are provided. The null distributions indicate that overfit models should predict poorly and that the Principle of Parsimony should be applied judiciously. Power calculations under a local alternative provide some guidance on the choice of test statistic and the percentage of the sample withheld for predictive evaluation. Keywords: causality, forecast evaluation, testing, hypothesis testing ...
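The t-type statistic mentioned in this abstract can be sketched as follows. This is an illustrative reconstruction in the spirit of Diebold and Mariano (1995) under squared-error loss, not code from the paper; the function name and the truncated rectangular-kernel long-run variance are this sketch's choices.

```python
import numpy as np

def dm_statistic(e1, e2, h=1):
    """Illustrative t-type statistic for equal predictive accuracy.

    e1, e2 : forecast errors from two competing models.
    h      : forecast horizon; the long-run variance sums h-1
             autocovariances (rectangular kernel), a common choice
             for h-step-ahead forecasts.
    """
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2  # loss differential
    n = d.size
    dbar = d.mean()
    # Long-run variance of d via truncated autocovariances.
    lrv = np.sum((d - dbar) ** 2) / n
    for k in range(1, h):
        gamma_k = np.sum((d[k:] - dbar) * (d[:-k] - dbar)) / n
        lrv += 2.0 * gamma_k
    return dbar / np.sqrt(lrv / n)
```

A positive value indicates that the first model's losses are larger; in the nested case discussed above, the statistic's null distribution is nonstandard, which is why the paper tabulates its own critical values rather than using the normal approximation.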