Results 1–10 of 26
Approximately normal tests for equal predictive accuracy in nested models
 Journal of Econometrics, forthcoming
, 2006
"... Forecast evaluation often compares a parsimonious null model to a larger model that nests the null model. Under the null that the parsimonious model generates the data, the larger model introduces noise into its forecasts by estimating parameters whose population values are zero. We observe that the ..."
Abstract

Cited by 141 (13 self)
Forecast evaluation often compares a parsimonious null model to a larger model that nests the null model. Under the null that the parsimonious model generates the data, the larger model introduces noise into its forecasts by estimating parameters whose population values are zero. We observe that the mean squared prediction error (MSPE) from the parsimonious model is therefore expected to be smaller than that of the larger model. We describe how to adjust MSPEs to account for this noise. We propose applying standard methods (West (1996)) to test whether the adjusted mean squared error difference is zero. We refer to nonstandard limiting distributions derived in Clark and McCracken (2001, 2005a) to argue that use of standard normal critical values will yield actual sizes close to, but a little less than, nominal size. Simulation evidence supports our recommended procedure. West thanks the National Science Foundation for financial support. We thank Pablo M. Pincheira-Brown and Taisuke Nakata for helpful comments. The views expressed herein are solely those of the authors and do not necessarily reflect the views of the Federal Reserve Bank of Kansas City or the Federal Reserve System.
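The adjustment described in this abstract can be sketched numerically. The function below is a minimal, hypothetical implementation of the MSPE-adjusted loss differential and its t-statistic, assuming one-step-ahead forecasts and omitting the HAC variance corrections a serious application would use:

```python
import numpy as np

def cw_adjusted_tstat(y, f_small, f_large):
    """MSPE-adjusted t-statistic for nested forecast comparison (a sketch).

    y        -- realized values
    f_small  -- forecasts from the parsimonious (null) model
    f_large  -- forecasts from the larger, nesting model
    Compare the returned statistic to one-sided standard normal
    critical values (e.g. 1.645 at the 5% level).
    """
    y, f_small, f_large = map(np.asarray, (y, f_small, f_large))
    e_small = y - f_small
    e_large = y - f_large
    # Adjusted loss differential: subtract the noise term (f_small - f_large)^2
    # introduced by the larger model's estimates of zero-valued parameters.
    d_adj = e_small**2 - (e_large**2 - (f_small - f_large)**2)
    n = d_adj.size
    return np.sqrt(n) * d_adj.mean() / d_adj.std(ddof=1)
```

Under the null, the raw MSPE difference is biased in favor of the small model; subtracting the squared forecast gap removes that bias before a standard West (1996)-style test is applied.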
Economic forecasting
, 2007
"... Forecasts guide decisions in all areas of economics and finance and their value can only be understood in relation to, and in the context of, such decisions. We discuss the central role of the loss function in helping determine the forecaster’s objectives. Decision theory provides a framework for bo ..."
Abstract

Cited by 63 (2 self)
Forecasts guide decisions in all areas of economics and finance, and their value can only be understood in relation to, and in the context of, such decisions. We discuss the central role of the loss function in helping determine the forecaster’s objectives. Decision theory provides a framework for both the construction and evaluation of forecasts. This framework allows an understanding of the challenges that arise from the explosion in the sheer volume of predictor variables under consideration and the forecaster’s ability to entertain an endless array of forecasting models and time-varying specifications, none of which may coincide with the ‘true’ model. We illustrate this while also reviewing methods for comparing the forecasting performance of pairs of models and for evaluating the ability of the best of many models to beat a benchmark specification.
Predictive density evaluation
, 2005
"... This chapter discusses estimation, specification testing, and model selection of predictive density models. In particular, predictive density estimation is briefly discussed, and a variety of different specification and model evaluation tests due to various ..."
Abstract

Cited by 46 (6 self)
This chapter discusses estimation, specification testing, and model selection of predictive density models. In particular, predictive density estimation is briefly discussed, and a variety of different specification and model evaluation tests due to various
Real-Time Exchange Rate Predictability with Taylor Rule Fundamentals
, 2008
"... An extensive literature that studied the performance of empirical exchange rate models following Meese and Rogoff’s (1983a) seminal paper has not convincingly found evidence of outofsample exchange rate predictability. This paper extends the conventional set of models of exchange rate determinatio ..."
Abstract

Cited by 40 (4 self)
An extensive literature that studied the performance of empirical exchange rate models following Meese and Rogoff’s (1983a) seminal paper has not convincingly found evidence of out-of-sample exchange rate predictability. This paper extends the conventional set of models of exchange rate determination by investigating predictability of models that incorporate Taylor rule fundamentals. We find evidence of short-term predictability for 11 out of 12 currencies vis-à-vis the U.S. dollar over the post-Bretton Woods float, with the strongest evidence coming from specifications that incorporate heterogeneous coefficients and interest rate smoothing. The evidence of predictability is much stronger with Taylor rule models than with conventional interest rate, purchasing power parity, or monetary models.
Biases in Macroeconomic Forecasts: Irrationality or Asymmetric Loss?
 Journal of the European Economic Association
, 2008
"... Empirical studies using survey data on expectations have frequently observed that forecasts are biased and have concluded that agents are not rational. We establish that existing rationality tests are not robust to even small deviations from symmetric loss and hence have little ability to tell wheth ..."
Abstract

Cited by 34 (2 self)
Empirical studies using survey data on expectations have frequently observed that forecasts are biased and have concluded that agents are not rational. We establish that existing rationality tests are not robust to even small deviations from symmetric loss and hence have little ability to tell whether the forecaster is irrational or the loss function is asymmetric. We quantify the exact tradeoff between forecast inefficiency and asymmetric loss leading to identical outcomes of standard rationality tests and explore new and more general methods for testing forecast rationality jointly with flexible families of loss functions that embed quadratic loss as a special case. An empirical application to survey data on forecasts of nominal output growth demonstrates the empirical significance of our results and finds that rejections of rationality may largely have been driven by the assumption of symmetric loss.
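The flexible loss family that embeds quadratic loss as a special case can be illustrated with a small sketch. The asymmetric-power form below is one common parameterization (the function and parameter names are my own, not the paper's); setting `alpha = 0.5` and `p = 2` recovers quadratic loss up to a scale factor:

```python
import numpy as np

def flexible_loss(e, alpha=0.5, p=2.0):
    """Asymmetric power loss on forecast errors e (a sketch).

    alpha in (0, 1) weights negative vs. positive errors; p controls
    curvature. alpha = 0.5, p = 2 gives symmetric quadratic loss (x 0.5).
    """
    e = np.asarray(e, dtype=float)
    # Negative errors receive weight (1 - alpha), positive errors weight alpha.
    return (alpha + (1.0 - 2.0 * alpha) * (e < 0)) * np.abs(e) ** p
```

With `alpha != 0.5`, an optimal forecast is deliberately biased, which is exactly why symmetric rationality tests can misread asymmetric-loss behavior as irrationality.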
Evaluation of Dynamic Stochastic General Equilibrium Models Based on
 Queen Mary, University of London and Rutgers University
, 2003
"... We take as a starting point the existence of a joint distribution implied by different dynamic stochastic general equilibrium (DSGE) models, all of which are potentially misspecified. Our objective is to compare “true ” joint distributions with ones generated by given DSGEs. This is accomplished vi ..."
Abstract

Cited by 22 (14 self)
We take as a starting point the existence of a joint distribution implied by different dynamic stochastic general equilibrium (DSGE) models, all of which are potentially misspecified. Our objective is to compare “true” joint distributions with ones generated by given DSGEs. This is accomplished via comparison of the empirical joint distributions (or confidence intervals) of historical and simulated time series. The tool draws on recent advances in the theory of the bootstrap, Kolmogorov-type testing, and other work on the evaluation of DSGEs aimed at comparing the second-order properties of historical and simulated time series. We begin by fixing a given model as the “benchmark” model, against which all “alternative” models are to be compared. We then test whether at least one of the alternative models provides a more “accurate” approximation to the true cumulative distribution than does the benchmark model, where accuracy is measured in terms of distributional square error. Bootstrap critical values are discussed, and an illustrative example is given in which it is shown that alternative versions of a standard DSGE model in which calibrated parameters are allowed to vary slightly perform equally well. On the other hand, there are stark differences between models when the shocks driving the models are assigned implausible variances and/or distributional assumptions. JEL classification: C12, C22.
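The core distributional-accuracy idea — squared distance between empirical CDFs of historical and simulated series — can be written compactly. The following is a simplified illustration under my own naming, with the grid choice arbitrary and the bootstrap critical values omitted; it is not the authors' exact statistic:

```python
import numpy as np

def cdf_square_error(historical, simulated, grid):
    """Average squared gap between two empirical CDFs on a grid (a sketch)."""
    historical = np.asarray(historical, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    grid = np.asarray(grid, dtype=float)
    # Empirical CDFs evaluated at each grid point via broadcasting.
    F_hist = (historical[:, None] <= grid[None, :]).mean(axis=0)
    F_sim = (simulated[:, None] <= grid[None, :]).mean(axis=0)
    return float(((F_hist - F_sim) ** 2).mean())
```

A model comparison along these lines would compute this distance for the benchmark and each alternative against the historical data, then use bootstrap resampling to judge whether any alternative's distance is significantly smaller.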
Changes in Predictive Ability with Mixed Frequency Data. Working Paper No
, 2007
"... This paper proposes a new regression model — a smooth transition mixed data sampling (STMIDAS) approach — that captures recurrent changes in the ability of a high frequency variable in predicting a variable only available at lower frequency. The model is applied to the use of …nancial variables, suc ..."
Abstract

Cited by 13 (3 self)
This paper proposes a new regression model — a smooth transition mixed data sampling (STMIDAS) approach — that captures recurrent changes in the ability of a high frequency variable in predicting a variable only available at lower frequency. The model is applied to the use of financial variables, such as the slope of the yield curve, the short-rate and stock returns, to forecast US output growth both in- and out-of-sample. I find evidence that the use of the predictor sampled weekly improves output growth forecasts, which may also be improved when changes in financial variables’ predictive power are considered.
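MIDAS regressions aggregate the high-frequency predictor with a parsimonious weight function; the exponential Almon scheme is one standard choice. The sketch below shows the weighting step only (the smooth-transition component of STMIDAS is omitted), and the parameter names are illustrative:

```python
import numpy as np

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag weights for a MIDAS regression (a sketch).

    Returns nonnegative weights over n_lags high-frequency lags that
    sum to one; theta1 and theta2 shape the decay profile.
    """
    j = np.arange(1.0, n_lags + 1.0)
    w = np.exp(theta1 * j + theta2 * j**2)
    return w / w.sum()
```

The weighted sum of, say, thirteen weekly observations of a financial variable then enters a quarterly output-growth regression as a single regressor, keeping the parameter count small.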
ROBUST BACKTESTING TESTS FOR VALUE-AT-RISK MODELS
, 2008
"... Backtesting methods are statistical tests designed to uncover excessive risktaking from financial institutions. We show in this paper that these methods are subject to the presence of model risk produced by a wrong specification of the conditional VaR model, and derive its effect on the asymptotic ..."
Abstract

Cited by 7 (1 self)
Backtesting methods are statistical tests designed to uncover excessive risk-taking by financial institutions. We show in this paper that these methods are subject to the presence of model risk produced by a wrong specification of the conditional VaR model, and we derive its effect on the asymptotic distribution of the relevant out-of-sample tests. We also show that, in the absence of estimation risk, the unconditional backtest is affected by model misspecification but the independence test is not. Our solution for the general case consists of proposing robust subsampling techniques to approximate the true sampling distribution of these tests. We carry out a Monte Carlo study to assess the importance of these effects in finite samples for location-scale models that are wrongly specified but correct on “average”. An application to the Dow Jones Index shows the impact of correcting for model risk on backtesting procedures for different dynamic VaR models measuring risk exposure.
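The unconditional backtest discussed here is, in its simplest textbook form, a likelihood-ratio test that the empirical violation rate matches the nominal VaR level (Kupiec-style). The minimal sketch below ignores both the estimation risk and the subsampling correction that motivate the paper:

```python
import numpy as np

def unconditional_coverage_lr(violations, alpha):
    """Likelihood-ratio statistic for correct unconditional VaR coverage.

    violations -- boolean array, True where the loss exceeded the VaR
    alpha      -- nominal violation probability (e.g. 0.01 for 99% VaR)
    Under correct coverage the statistic is asymptotically chi-square(1).
    """
    v = np.asarray(violations, dtype=bool)
    n = v.size
    x = int(v.sum())

    def loglik(p):
        # Bernoulli log-likelihood of x violations in n observations.
        return x * np.log(p) + (n - x) * np.log(1.0 - p)

    pi_hat = x / n
    # At the boundary (no or all violations) the MLE log-likelihood is 0.
    ll_hat = 0.0 if x in (0, n) else loglik(pi_hat)
    return -2.0 * (loglik(alpha) - ll_hat)
```

The paper's point is that this statistic's nominal chi-square distribution can be unreliable when the VaR model itself is misspecified, which is what the proposed subsampling approximation addresses.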
The Incremental Predictive Information Associated with Using New Keynesian DSGE Models vs. Simple Linear Econometric Models
 Oxford Bulletin of Economics and Statistics
, 2005
"... In this paper we construct output gap and inflation predictions using a variety of DSGE sticky price models. Predictive density accuracy tests related to the test discussed in Corradi and Swanson (2005a) as well as predictive accuracy tests due to Diebold and Mariano (1995) and West (1996) are used ..."
Abstract

Cited by 7 (1 self)
In this paper we construct output gap and inflation predictions using a variety of DSGE sticky price models. Predictive density accuracy tests related to the test discussed in Corradi and Swanson (2005a) as well as predictive accuracy tests due to Diebold and Mariano (1995) and West (1996) are used to compare the alternative models. A number of simple time series prediction models (such as autoregressive and vector autoregressive (VAR) models) are additionally used as strawman models. Given that DSGE model restrictions are routinely nested within VAR models, the addition of our strawman models allows us to indirectly assess the usefulness of imposing theoretical restrictions implied by DSGE models on unrestricted econometric models. With respect to predictive density evaluation, our results suggest that the standard sticky price model discussed in Calvo (1983) is not outperformed by the same model augmented either with information or indexation, when used to predict the output gap. On the other hand, there are clear gains to using the more recent models when predicting inflation. Results based on mean square forecast error analysis are less clear-cut, although the standard sticky price model fares best at our longest forecast horizon of 3 years, and performs relatively poorly at shorter horizons. When the strawman time series models are added to the picture, we find that the DSGE models still fare very well, often winning our forecast competitions, suggesting that theoretical macroeconomic restrictions yield useful additional information for forming macroeconomic forecasts.