Results 1–10 of 125
Comparing Predictive Accuracy
 JOURNAL OF BUSINESS AND ECONOMIC STATISTICS, 13, 253–265
, 1995
Abstract

Cited by 1132 (26 self)
We propose and evaluate explicit tests of the null hypothesis of no difference in the accuracy of two competing forecasts. In contrast to previously developed tests, a wide variety of accuracy measures can be used (in particular, the loss function need not be quadratic, and need not even be symmetric), and forecast errors can be non-Gaussian, nonzero mean, serially correlated, and contemporaneously correlated. Asymptotic and exact finite-sample tests are proposed, evaluated, and illustrated.
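As a rough illustration of the test the abstract describes, the statistic can be sketched as the mean loss differential scaled by a long-run variance estimate. The function name and the simple rectangular-kernel variance are illustrative simplifications, not the paper's exact construction:

```python
import numpy as np

def diebold_mariano(e1, e2, loss=np.square, h=1):
    """Sketch of a Diebold-Mariano-type statistic for equal accuracy.

    e1, e2 : forecast errors from the two competing forecasts
    loss   : arbitrary loss function (need not be quadratic or symmetric)
    h      : forecast horizon; the long-run variance uses a rectangular
             kernel with h-1 autocovariance lags (a common simplification)
    """
    d = loss(e1) - loss(e2)          # loss differential series
    T = d.size
    d_bar = d.mean()
    # long-run variance: gamma_0 + 2 * sum_{k=1}^{h-1} gamma_k
    gamma0 = np.mean((d - d_bar) ** 2)
    lrv = gamma0
    for k in range(1, h):
        gamma_k = np.mean((d[k:] - d_bar) * (d[:-k] - d_bar))
        lrv += 2.0 * gamma_k
    return d_bar / np.sqrt(lrv / T)  # asymptotically N(0, 1) under H0
```

Because the loss function is an argument, asymmetric losses slot in directly, which is the paper's central point of flexibility.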
Modeling and Forecasting Realized Volatility
, 2002
Abstract

Cited by 475 (49 self)
this paper is built. First, although raw returns are clearly leptokurtic, returns standardized by realized volatilities are approximately Gaussian. Second, although the distributions of realized volatilities are clearly right-skewed, the distributions of the logarithms of realized volatilities are approximately Gaussian. Third, the long-run dynamics of realized logarithmic volatilities are well approximated by a fractionally-integrated long-memory process. Motivated by the three ABDL empirical regularities, we proceed to estimate and evaluate a multivariate model for the logarithmic realized volatilities: a fractionally-integrated Gaussian vector autoregression (VAR). Importantly, our approach explicitly permits measurement errors in the realized volatilities. Comparing the resulting volatility forecasts to those obtained from currently popular daily volatility models and more complicated high-frequency models, we find that our simple Gaussian VAR forecasts generally produce superior forecasts. Furthermore, we show that, given the theoretically motivated and empirically plausible assumption of normally distributed returns conditional on the realized volatilities, the resulting lognormal-normal mixture forecast distribution provides conditionally well-calibrated density forecasts of returns, from which we obtain accurate estimates of conditional return quantiles. In the remainder of this paper, we proceed as follows. We begin in section 2 by formally developing the relevant quadratic variation theory within a standard frictionless arbitrage-free multivariate pricing environment. In section 3 we discuss the practical construction of realized volatilities from high-frequency foreign exchange returns. Next, in section 4 we summarize the salient distributional features of r...
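The basic construction the abstract relies on, realized variance as a sum of squared intraday returns, is easy to sketch. The AR(1) fit on log realized volatility below is a hypothetical univariate stand-in for the paper's fractionally-integrated Gaussian VAR, included only to show the modeling idea:

```python
import numpy as np

def realized_volatility(intraday_returns):
    """Daily realized variance: sum of squared high-frequency returns.
    intraday_returns : 2-D array, one row of intraday returns per day.
    """
    return np.sum(intraday_returns ** 2, axis=1)

def fit_ar1_log_rv(rv):
    """OLS AR(1) fit to log realized volatility -- a simplified,
    illustrative stand-in for the fractionally-integrated VAR of the
    paper (not the authors' exact model)."""
    y = np.log(rv)
    x, y_next = y[:-1], y[1:]
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y_next, rcond=None)
    return beta  # (intercept, AR coefficient)
```

Working in logs exploits the second empirical regularity above: log realized volatilities are approximately Gaussian, so Gaussian linear models become natural.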
Forecast Evaluation and Combination
 IN G.S. MADDALA AND C.R. RAO (EDS.), HANDBOOK OF STATISTICS
, 1996
Abstract

Cited by 140 (29 self)
It is obvious that forecasts are of great importance and widely used in economics and finance. Quite simply, good forecasts lead to good decisions. The importance of forecast evaluation and combination techniques follows immediately: forecast users naturally have a keen interest in monitoring and improving forecast performance. More generally, forecast evaluation figures prominently in many questions in empirical economics and finance, such as: Are expectations rational? (e.g., Keane and Runkle, 1990; Bonham and Cohen, 1995) Are financial markets efficient? (e.g., Fama, 1970, 1991) Do macroeconomic shocks cause agents to revise their forecasts at all horizons, or just at short and medium-term horizons? (e.g., Campbell and Mankiw, 1987; Cochrane, 1988) Are observed asset returns "too volatile"? (e.g., Shiller, 1979; LeRoy and Porter, 1981) Are asset returns forecastable over long horizons? (e.g., Fama and French, 1988; Mark, 1995)
Approximately Normal Tests for Equal Predictive Accuracy in Nested Models
 Journal of Econometrics, forthcoming
, 2006
Abstract

Cited by 128 (13 self)
Forecast evaluation often compares a parsimonious null model to a larger model that nests the null model. Under the null that the parsimonious model generates the data, the larger model introduces noise into its forecasts by estimating parameters whose population values are zero. We observe that the mean squared prediction error (MSPE) from the parsimonious model is therefore expected to be smaller than that of the larger model. We describe how to adjust MSPEs to account for this noise. We propose applying standard methods (West (1996)) to test whether the adjusted mean squared error difference is zero. We refer to nonstandard limiting distributions derived in Clark and McCracken (2001, 2005a) to argue that use of standard normal critical values will yield actual sizes close to, but a little less than, nominal size. Simulation evidence supports our recommended procedure. West thanks the National Science Foundation for financial support. We thank Pablo M. Pincheira-Brown and Taisuke Nakata for helpful comments. The views expressed herein are solely those of the authors and do not necessarily reflect the views of the Federal Reserve Bank of Kansas City or the Federal Reserve System.
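The MSPE adjustment the abstract describes can be sketched in a few lines. This is an illustrative simplification (function name and the plain sample-variance scaling are assumptions, not the paper's exact procedure): the term subtracted from the larger model's squared errors removes the noise introduced by estimating parameters whose true values are zero under the null.

```python
import numpy as np

def mspe_adjusted_stat(e_small, e_big, f_small, f_big):
    """Sketch of an MSPE-adjusted comparison for nested models.

    e_small, e_big : out-of-sample forecast errors of the nested
                     (parsimonious) and nesting (larger) models
    f_small, f_big : the corresponding forecasts
    """
    adj = (f_small - f_big) ** 2            # noise-correction term
    d = e_small ** 2 - (e_big ** 2 - adj)   # adjusted loss differential
    T = d.size
    # compare to standard normal critical values, as the paper argues
    return np.sqrt(T) * d.mean() / d.std(ddof=1)
```

The key point carried over from the abstract: without `adj`, the parsimonious model's MSPE is expected to be smaller even when it is the true model, biasing an unadjusted comparison.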
Forecast Combinations
 HANDBOOK OF ECONOMIC FORECASTING
, 2006
Abstract

Cited by 90 (2 self)
Forecast combinations have frequently been found in empirical studies to produce better forecasts on average than methods based on the ex-ante best individual forecasting model. Moreover, simple combinations that ignore correlations between forecast errors often dominate more refined combination schemes aimed at estimating the theoretically optimal combination weights. In this chapter we analyze theoretically the factors that determine the advantages from combining forecasts (for example, the degree of correlation between forecast errors and the relative size of the individual models' forecast error variances). Although the reasons for the success of simple combination schemes are poorly understood, we discuss several possibilities related to model misspecification, instability (nonstationarities) and estimation error in situations where the number of models is large relative to the available sample size. We discuss the role of combinations under asymmetric loss and consider combinations of point, interval and probability forecasts.
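The contrast the abstract draws, theoretically optimal weights versus the simple average, can be made concrete. A minimal sketch (names and the textbook variance-minimising formula are assumptions, not this chapter's derivation):

```python
import numpy as np

def combination_weights(errors):
    """Theoretically optimal combination weights from the forecast
    error covariance matrix: minimise combined error variance subject
    to the weights summing to one.

    errors : T x N matrix of forecast errors from N models.
    """
    sigma = np.cov(errors, rowvar=False)
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)   # proportional to Sigma^{-1} 1
    return w / w.sum()

def equal_weights(n):
    """The simple average that, empirically, often dominates."""
    return np.full(n, 1.0 / n)
```

Estimating `sigma` from short samples is exactly where the refined scheme goes wrong in practice: estimation error in the covariance matrix can swamp the theoretical gain, which is one of the explanations the chapter discusses.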
Prediction in Dynamic Models with Time-Dependent Conditional Heteroskedasticity
 Working Paper
, 1990
Abstract

Cited by 84 (13 self)
This paper considers forecasting the conditional mean and variance from a single-equation dynamic model with autocorrelated disturbances following an ARMA process, and innovations with time-dependent conditional heteroskedasticity as represented by a linear GARCH process. Expressions for the minimum MSE predictor and the conditional MSE are presented. We also derive the formula for all the theoretical moments of the prediction error distribution from a general dynamic model with GARCH(1, 1) innovations. These results are then used in the construction of ex ante prediction confidence intervals by means of the Cornish-Fisher asymptotic expansion. An empirical example relating to the uncertainty of the expected depreciation of foreign exchange rates illustrates the usefulness of the results.
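The multi-step conditional variance forecast from a GARCH(1, 1), the building block for the prediction intervals described above, follows a standard recursion: forecasts revert geometrically to the unconditional variance. A minimal sketch (function name is illustrative; this is the textbook recursion, not the paper's full ARMA-GARCH expressions):

```python
import numpy as np

def garch11_variance_forecast(omega, alpha, beta, eps_t, sigma2_t, horizon):
    """k-step-ahead conditional variance forecasts from a GARCH(1, 1):
        sigma2_{t+1} = omega + alpha * eps_t**2 + beta * sigma2_t
    For k > 1 the forecast reverts geometrically, at rate alpha + beta,
    to the unconditional variance omega / (1 - alpha - beta).
    """
    sigma2_1 = omega + alpha * eps_t ** 2 + beta * sigma2_t
    uncond = omega / (1.0 - alpha - beta)
    k = np.arange(horizon)
    return uncond + (alpha + beta) ** k * (sigma2_1 - uncond)
```

With these variance forecasts in hand, Gaussian intervals can then be corrected for the non-normality of the multi-step prediction error via a Cornish-Fisher expansion, as the paper proposes.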
Regression-Based Tests of Predictive Ability
 International Economic Review
, 1998
Abstract

Cited by 69 (11 self)
We develop regression-based tests of hypotheses about out-of-sample prediction errors. Representative tests include ones for zero mean and zero correlation between a prediction error and a vector of predictors. The relevant environments are ones in which predictions depend on estimated parameters. We show that standard regression statistics generally fail to account for error introduced by estimation of these parameters. We propose computationally convenient test statistics that properly account for such error. Simulations indicate that the procedures can work well in samples of size typically available, although there sometimes are substantial size distortions.
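The naive version of the regression the abstract starts from is easy to sketch: regress out-of-sample errors on a constant and predictors, and test whether all coefficients are zero. The point of the paper is precisely that the t-statistics below are generally invalid when forecasts depend on estimated parameters; the sketch (names are illustrative) shows only the uncorrected baseline:

```python
import numpy as np

def error_orthogonality_regression(errors, predictors):
    """Naive OLS regression of prediction errors on a constant and a
    predictor; under rational forecasts all coefficients are zero.

    WARNING: these naive standard errors ignore the noise coming from
    estimated forecasting parameters, the very problem the paper's
    corrected statistics are designed to fix.
    """
    X = np.column_stack([np.ones(len(errors)), predictors])
    beta, *_ = np.linalg.lstsq(X, errors, rcond=None)
    resid = errors - X @ beta
    sigma2 = resid @ resid / (len(errors) - X.shape[1])
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, beta / se  # coefficients and (naive) t-statistics
```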
Pooling of forecasts
 Econometrics Journal
, 2004
Abstract

Cited by 63 (9 self)
We consider forecasting using a combination, when no model coincides with a non-constant data generation process (DGP). Practical experience suggests that combining forecasts adds value, and can even dominate the best individual device. We show why this can occur when forecasting models are differentially misspecified, and is likely to occur when the DGP is subject to location shifts. Moreover, averaging may then dominate over estimated weights in the combination. Finally, it cannot be proved that only non-encompassed devices should be retained in the combination. Empirical and Monte Carlo illustrations confirm the analysis. Journal of Economic Literature classification: C32.
Correcting the Errors: Volatility Forecast Evaluation Using High-Frequency Data and Realized Volatilities
 Working Paper
, 2003
Abstract

Cited by 63 (11 self)
We develop general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit the recent nonparametric asymptotic distributional results in Barndorff-Nielsen and Shephard (2002a) along with new results explicitly allowing for leverage effects, are both easy-to-implement and highly accurate in empirically realistic situations. On properly accounting for the measurement errors in the volatility forecast evaluations reported in Andersen, Bollerslev, Diebold and Labys (2003), the adjustments result in markedly higher estimates for the true degree of return volatility predictability.
Analytic evaluation of volatility forecasts
 International Economic Review
, 2004
Abstract

Cited by 54 (10 self)
The development of estimation and forecasting procedures using empirically realistic continuous-time stochastic volatility models is severely hampered by the lack of closed-form expressions for the transition densities of the observed returns. In response to this, Andersen, Bollerslev, Diebold and Labys (2002) have recently advocated modeling and forecasting the (latent) integrated volatility, of primary import from a pricing perspective, based on simple reduced-form time series models for the observable realized volatilities, constructed from the summation of high-frequency squared returns. Building on the eigenfunction stochastic volatility class of models introduced by Meddahi (2001), we present analytical expressions for the loss in forecast efficiency associated with this easy-to-implement procedure as a function of the sampling frequency of the returns underlying the realized volatility measures. On numerically quantifying this efficiency loss for such popular continuous-time models as GARCH, multifactor affine, and lognormal diffusions, we find that the realized volatility reduced-form procedures perform remarkably well in comparison to the optimal (non-feasible) forecasts conditional on the full sample path realization of the latent instantaneous volatility process.