Results 1–10 of 22
Tests of conditional predictive ability
Econometrica, 2006
Cited by 97 (1 self)
Abstract: We argue that the current framework for predictive ability testing (e.g., West, 1996) is not necessarily useful for real-time forecast selection, i.e., for assessing which of two competing forecasting methods will perform better in the future. We propose an alternative framework for out-of-sample comparison of predictive ability which delivers more practically relevant conclusions. Our approach is based on inference about conditional expectations of forecasts and forecast errors rather than the unconditional expectations that are the focus of the existing literature. We capture important determinants of forecast performance that are neglected in the existing literature by evaluating what we call the forecasting method (the model and the parameter estimation procedure), rather than just the forecasting model. Compared to previous approaches, our tests are valid under more general data assumptions (heterogeneity rather than stationarity) and estimation methods, and they can handle comparison of both nested and non-nested models, which is not currently possible. To illustrate the usefulness of the proposed tests, we compare the forecast performance of three leading parameter-reduction methods for macroeconomic forecasting using a large number of predictors: a sequential model selection approach, ...
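The conditional approach the abstract describes can be illustrated with a small sketch: regress the loss differential between the two forecasting methods on a constant and its own lags, and form a Wald-type statistic that is asymptotically chi-squared under the null of equal conditional predictive ability. This is only an illustrative sketch, not the paper's exact construction: the function name is hypothetical, and the choice of instruments (lags of the loss differential) is one simple option among many.

```python
import numpy as np
from scipy import stats


def conditional_pa_test(loss1, loss2, n_lags=1):
    """Illustrative sketch of a conditional predictive ability test.

    Regress the loss differential d_t = loss1_t - loss2_t on a constant
    and n_lags of its own past values (the instruments h_{t-1}).  Under
    the null that the two forecasting methods are equally accurate
    conditionally, n times the uncentered R^2 of this regression is
    asymptotically chi-squared with q degrees of freedom, where q is the
    number of instruments.
    """
    d = np.asarray(loss1, dtype=float) - np.asarray(loss2, dtype=float)
    T = len(d)
    n = T - n_lags
    y = d[n_lags:]  # d_t for t = n_lags, ..., T-1
    # Instruments: a constant plus lags 1..n_lags of the loss differential.
    H = np.column_stack(
        [np.ones(n)] + [d[n_lags - j : T - j] for j in range(1, n_lags + 1)]
    )
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    resid = y - H @ beta
    # Uncentered R^2: the null restricts every coefficient, constant included.
    r2_uncentered = 1.0 - (resid @ resid) / (y @ y)
    q = H.shape[1]
    stat = n * r2_uncentered
    p_value = stats.chi2.sf(stat, df=q)
    return stat, p_value, q
```

A rejection says not only that one method has been more accurate on average, but that past information predicts which method will be more accurate next period.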
Predictive density evaluation
2005
Cited by 45 (6 self)
Abstract: This chapter discusses estimation, specification testing, and model selection of predictive density models. In particular, predictive density estimation is briefly discussed, and a variety of different specification and model evaluation tests due to various ...
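A standard building block for the specification tests surveyed in such chapters is the probability integral transform (PIT): if the predictive densities are correctly specified, the PITs of the realizations are i.i.d. uniform on [0, 1]. The sketch below, with a hypothetical function name, checks uniformity with a Kolmogorov-Smirnov test; this ignores parameter estimation error and serial dependence, which is precisely what the more refined tests in this literature correct for.

```python
import numpy as np
from scipy import stats


def pit_uniformity_check(realizations, cdf_forecasts):
    """Illustrative PIT check for predictive density evaluation.

    For each period t, compute z_t = F_t(y_t), where F_t is the
    one-step-ahead predictive CDF and y_t the realization.  Under
    correct specification the z_t are i.i.d. U(0,1); here we test
    uniformity with a Kolmogorov-Smirnov statistic.  (Sketch only:
    the KS critical values ignore parameter estimation error and
    any serial dependence in the PITs.)
    """
    z = np.array([F(y) for y, F in zip(realizations, cdf_forecasts)])
    ks_stat, p_value = stats.kstest(z, "uniform")
    return z, ks_stat, p_value
```

In practice one also inspects a histogram and the autocorrelations of the PITs, since non-uniformity and dependence point to different kinds of misspecification.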
Evaluating direct multi-step forecasts
Econometric Reviews, 2005
Cited by 40 (9 self)
Abstract: Todd E. Clark is a vice president and economist at the Federal Reserve Bank of Kansas City. Michael W. McCracken is an assistant professor of economics at the University of Missouri–Columbia. Earlier versions of this paper were titled “Evaluating Long-Horizon Forecasts.” The authors gratefully acknowledge the helpful comments of Lutz Kilian, David Rapach, Ken West, seminar participants at the Federal Reserve Bank of Kansas City, and participants at the 2001 MEG meetings. McCracken thanks LSU for financial support during work on a substantial portion of this paper. The views expressed herein are solely those of the authors and do not necessarily reflect the views of the Federal Reserve Bank of Kansas City or the Federal Reserve System.
Bootstrap conditional distribution tests in the presence of dynamic misspecification
Journal of Econometrics, 2006
Cited by 19 (4 self)
Abstract: In this paper, we show the first-order validity of the block bootstrap in the context of Kolmogorov-type conditional distribution tests when there is dynamic misspecification and parameter estimation error. Our approach differs from the literature to date because we construct a bootstrap statistic that allows for dynamic misspecification under both hypotheses. We consider two test statistics: one is the CK test of Andrews (1997), and the other is in the spirit of Diebold, Gunther and Tay (1998). The limiting distribution of both tests is a Gaussian process with a covariance kernel that reflects dynamic misspecification and parameter estimation error. In order to provide valid asymptotic critical values, we suggest an extension of the empirical process version of the block bootstrap to the case of non-vanishing parameter estimation error. The findings from Monte Carlo experiments show that both statistics have good finite-sample properties for samples as small as 500 observations. JEL classification: C12, C22.
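The resampling step underlying such procedures is the moving-block bootstrap: draw overlapping blocks with replacement and concatenate them, so that short-range dependence inside each block is preserved. The sketch below shows only this plain resampling step, with a hypothetical function name; the paper's actual contribution, adjusting the bootstrap statistic for non-vanishing parameter estimation error, is not reproduced here.

```python
import numpy as np


def block_bootstrap(x, block_len, n_boot, rng=None):
    """Illustrative moving-block bootstrap resampler.

    Draws overlapping blocks of length block_len uniformly with
    replacement and concatenates them until each resample matches the
    length of the original series.  (Sketch only: for asymptotic
    validity block_len must grow with the sample size, and test
    statistics built on these resamples may need further recentering.)
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    T = len(x)
    n_blocks = int(np.ceil(T / block_len))
    # Random starting index for every block of every bootstrap replication.
    starts = rng.integers(0, T - block_len + 1, size=(n_boot, n_blocks))
    resamples = np.stack([
        np.concatenate([x[s:s + block_len] for s in row])[:T]
        for row in starts
    ])
    return resamples  # shape (n_boot, T)
```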
A Consistent Test for Nonlinear Out-of-Sample Predictive Accuracy
2000
Cited by 12 (3 self)
Abstract: In this paper, we draw on both the consistent specification testing and the predictive ability testing literatures and propose a test for predictive accuracy which is consistent against generic nonlinear alternatives. Broadly speaking, given a particular reference model, assume that the objective is to test whether there exists any alternative model, among an infinite number of alternatives, that has better predictive accuracy than the reference model, for a given loss function. A typical example is the case in which the reference model is a simple autoregressive model and the objective is to check whether a more accurate forecasting model can be constructed by including possibly unknown (non)linear functions of the past of the process or of the past of some other process(es). We propose a statistic which is similar in spirit to that of White (2000), although our approach differs from his as we allow for an infinite number of competing models that may be nested. In addition, we allow for non ...
Tests of equal predictive ability with real-time data
2007
Cited by 11 (2 self)
Abstract: This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy applied to direct, multi-step predictions from both non-nested and nested linear regression models. In contrast to earlier work — including West (1996), Clark and McCracken (2001, 2005), and McCracken (2006) — our asymptotics take account of the real-time, revised nature of the data. Monte Carlo simulations indicate that our asymptotic approximations yield reasonable size and power properties in most circumstances. The paper concludes with an examination of the real-time predictive content of various measures of economic activity for inflation.
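The baseline statistic being generalized in this literature is the Diebold-Mariano-type test of equal unconditional forecast accuracy. A minimal sketch under squared-error loss follows, with a hypothetical function name; as the papers above show, nested models and real-time data revisions alter this statistic's limiting distribution, so the standard-normal critical values used here are only the non-nested, stationary-data benchmark.

```python
import numpy as np
from scipy import stats


def dm_test(e1, e2, h=1):
    """Illustrative Diebold-Mariano-type test under squared-error loss.

    e1, e2 are the h-step-ahead forecast errors of the two models.  The
    long-run variance of the loss differential uses a rectangular kernel
    with h-1 autocovariance lags, a common choice for h-step forecasts.
    Under the non-nested, stationary-data null the statistic is
    asymptotically standard normal.
    """
    d = np.asarray(e1, dtype=float) ** 2 - np.asarray(e2, dtype=float) ** 2
    T = len(d)
    dbar = d.mean()
    dc = d - dbar
    # Long-run variance: gamma_0 + 2 * sum_{j=1}^{h-1} gamma_j.
    lrv = dc @ dc / T
    for j in range(1, h):
        lrv += 2.0 * (dc[j:] @ dc[:-j]) / T
    stat = dbar / np.sqrt(lrv / T)
    p_value = 2.0 * stats.norm.sf(abs(stat))
    return stat, p_value
```

A positive statistic indicates larger average squared errors for the first model; the point of the paper above is that this asymptotic approximation must be modified once the data are subject to real-time revisions.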
Some recent developments in predictive accuracy testing with nested models and (generic) nonlinear alternatives
International Journal of Forecasting, 2002
Cited by 6 (3 self)
Abstract: Forecasters and applied econometricians are often interested in comparing the predictive accuracy of nested competing models. A leading example of a context in which competing models are nested is when predictive ability is equated with “out-of-sample Granger causality”. In particular, it is often of interest to assess whether historical data from one variable are useful when constructing a forecasting model for another variable, and hence our use of terminology such as “out-of-sample Granger causality” (see e.g. Ashley, Granger and Schmalensee (1980)). In this paper we examine and discuss three key issues one is faced with when constructing predictive accuracy tests, namely: the contribution of parameter estimation error, the choice of linear versus nonlinear models, and the issue of (dynamic) misspecification, with primary focus on the latter of these issues. One of our main conclusions is that there are a number of easy-to-apply statistics constructed using out-of-sample conditional moment conditions which are robust to the presence of dynamic misspecification under both hypotheses. We provide some new Monte Carlo findings and empirical evidence based on the use of such tests. In particular, we analyze the finite-sample properties of the consistent out-of-sample test of Corradi and Swanson (2002) using data generating processes calibrated with ...
Evaluating Long-Horizon Forecasts
2003
Cited by 6 (0 self)
Abstract: This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy and encompassing applied to predictions from nested long-horizon regression models. We first derive the asymptotic distributions of a set of tests of equal forecast accuracy and encompassing, showing that the tests have nonstandard distributions that depend on the parameters of the data-generating process. Using a simple model-based bootstrap for inference, we then conduct Monte Carlo simulations of a range of data-generating processes to examine the finite-sample size and power of the tests. In these simulations, the bootstrap yields tests with good finite-sample size and power properties, with the encompassing test proposed by Clark and McCracken (2001) having superior power. The paper concludes with a reexamination of the predictive content of capacity utilization for core inflation.