Modeling and Forecasting Realized Volatility
, 2002
Abstract

Cited by 265 (34 self)
this paper is built. First, although raw returns are clearly leptokurtic, returns standardized by realized volatilities are approximately Gaussian. Second, although the distributions of realized volatilities are clearly right-skewed, the distributions of the logarithms of realized volatilities are approximately Gaussian. Third, the long-run dynamics of realized logarithmic volatilities are well approximated by a fractionally-integrated long-memory process. Motivated by the three ABDL empirical regularities, we proceed to estimate and evaluate a multivariate model for the logarithmic realized volatilities: a fractionally-integrated Gaussian vector autoregression (VAR). Importantly, our approach explicitly permits measurement errors in the realized volatilities. Comparing the resulting volatility forecasts to those obtained from currently popular daily volatility models and more complicated high-frequency models, we find that our simple Gaussian VAR forecasts generally produce superior forecasts. Furthermore, we show that, given the theoretically motivated and empirically plausible assumption of normally distributed returns conditional on the realized volatilities, the resulting lognormal-normal mixture forecast distribution provides conditionally well-calibrated density forecasts of returns, from which we obtain accurate estimates of conditional return quantiles. In the remainder of this paper, we proceed as follows. We begin in section 2 by formally developing the relevant quadratic variation theory within a standard frictionless arbitrage-free multivariate pricing environment. In section 3 we discuss the practical construction of realized volatilities from high-frequency foreign exchange returns. Next, in section 4 we summarize the salient distributional features of r...
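As a minimal sketch of the realized-volatility construction this abstract describes, on simulated data (sample sizes, volatility process, and all parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 trading days of 288 five-minute log returns each, with a
# right-skewed (lognormal) daily volatility; all values are illustrative.
n_days, n_intraday = 500, 288
daily_vol = 0.01 * np.exp(0.3 * rng.standard_normal(n_days))
returns = (daily_vol[:, None] / np.sqrt(n_intraday)
           * rng.standard_normal((n_days, n_intraday)))

# Realized variance is the sum of squared intraday returns; realized
# volatility is its square root.
rv = np.sqrt((returns ** 2).sum(axis=1))

# Two of the ABDL regularities: daily returns standardized by realized
# volatility are approximately N(0, 1), and log(rv) is approximately
# Gaussian even though rv itself is right-skewed.
std_ret = returns.sum(axis=1) / rv
log_rv = np.log(rv)
```

In this simulated setting the standardized returns have a standard deviation close to one, mirroring the paper's first empirical regularity.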
Approximately normal tests for equal predictive accuracy in nested models
 Journal of Econometrics, forthcoming
, 2006
Abstract

Cited by 86 (13 self)
Forecast evaluation often compares a parsimonious null model to a larger model that nests the null model. Under the null that the parsimonious model generates the data, the larger model introduces noise into its forecasts by estimating parameters whose population values are zero. We observe that the mean squared prediction error (MSPE) from the parsimonious model is therefore expected to be smaller than that of the larger model. We describe how to adjust MSPEs to account for this noise. We propose applying standard methods (West (1996)) to test whether the adjusted mean squared error difference is zero. We refer to nonstandard limiting distributions derived in Clark and McCracken (2001, 2005a) to argue that use of standard normal critical values will yield actual sizes close to, but a little less than, nominal size. Simulation evidence supports our recommended procedure. West thanks the National Science Foundation for financial support. We thank Pablo M. Pincheira-Brown and Taisuke Nakata for helpful comments. The views expressed herein are solely those of the authors and do not necessarily reflect the views of the Federal Reserve Bank of Kansas City or the Federal Reserve System.
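The MSPE adjustment described in the abstract can be sketched as follows (the data-generating setup is illustrative; the statistic is the standard Clark-West form, compared against normal critical values as the paper recommends):

```python
import numpy as np

def cw_adjusted_stat(e_small, e_big, yhat_small, yhat_big):
    """Clark-West MSPE-adjusted statistic: the small model's squared errors
    minus the large model's, with the noise term (yhat_small - yhat_big)^2
    added back; asymptotically close to N(0, 1) under the null that the
    parsimonious nested model generates the data."""
    f = e_small**2 - (e_big**2 - (yhat_small - yhat_big)**2)
    return np.sqrt(len(f)) * f.mean() / f.std(ddof=1)

# Illustrative data in which the null (parsimonious model) is true, so the
# larger model's forecast is pure estimation noise.
rng = np.random.default_rng(1)
T = 400
y = rng.standard_normal(T)
yhat_small = np.zeros(T)                 # nested model forecasts zero
yhat_big = 0.1 * rng.standard_normal(T)  # larger model adds noise
stat = cw_adjusted_stat(y - yhat_small, y - yhat_big, yhat_small, yhat_big)
# One-sided test: reject equal adjusted MSPE if stat > 1.645 at the 5% level.
```

Without the adjustment term, the raw MSPE comparison would be biased in favor of the smaller model, which is the phenomenon the paper corrects.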
Forecast Evaluation and Combination
 in G.S. Maddala and C.R. Rao (eds.), Handbook of Statistics
, 1996
Abstract

Cited by 85 (24 self)
It is obvious that forecasts are of great importance and widely used in economics and finance. Quite simply, good forecasts lead to good decisions. The importance of forecast evaluation and combination techniques follows immediately: forecast users naturally have a keen interest in monitoring and improving forecast performance. More generally, forecast evaluation figures prominently in many questions in empirical economics and finance, such as: Are expectations rational? (e.g., Keane and Runkle, 1990; Bonham and Cohen, 1995) Are financial markets efficient? (e.g., Fama, 1970, 1991) Do macroeconomic shocks cause agents to revise their forecasts at all horizons, or just at short and medium-term horizons? (e.g., Campbell and Mankiw, 1987; Cochrane, 1988) Are observed asset returns "too volatile"? (e.g., Shiller, 1979; LeRoy and Porter, 1981) Are asset returns forecastable over long horizons? (e.g., Fama and French, 1988; Mark, 1995)
Forecast Combinations
 Handbook of Economic Forecasting
, 2006
Abstract

Cited by 50 (3 self)
Forecast combinations have frequently been found in empirical studies to produce better forecasts on average than methods based on the ex-ante best individual forecasting model. Moreover, simple combinations that ignore correlations between forecast errors often dominate more refined combination schemes aimed at estimating the theoretically optimal combination weights. In this chapter we analyze theoretically the factors that determine the advantages from combining forecasts (for example, the degree of correlation between forecast errors and the relative size of the individual models' forecast error variances). Although the reasons for the success of simple combination schemes are poorly understood, we discuss several possibilities related to model misspecification, instability (nonstationarities) and estimation error in situations where the number of models is large relative to the available sample size. We discuss the role of combinations under asymmetric loss and consider combinations of point, interval and probability forecasts. Key words: Forecast combinations; pooling and trimming; shrinkage methods; model misspecification; diversification gains
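The theoretically optimal weights the chapter refers to can be sketched for two forecasts (the error variances and correlation below are illustrative values, not from the chapter):

```python
import numpy as np

# Error standard deviations and correlation for two forecasts (illustrative).
s1, s2, rho = 1.0, 1.5, 0.5
Sigma = np.array([[s1**2, rho * s1 * s2],
                  [rho * s1 * s2, s2**2]])  # forecast-error covariance matrix

# Theoretically optimal weights: w* = Sigma^{-1} 1 / (1' Sigma^{-1} 1),
# which minimize the combined error variance subject to weights summing to 1.
ones = np.ones(2)
w_opt = np.linalg.solve(Sigma, ones)
w_opt /= w_opt.sum()

def comb_var(w):
    """Variance of the combined forecast error for weight vector w."""
    return float(w @ Sigma @ w)

w_eq = np.full(2, 0.5)  # the simple average that often wins in practice
# With Sigma known, comb_var(w_opt) <= comb_var(w_eq) by construction; with
# Sigma estimated from a short sample, that ranking can reverse, which is
# one explanation the chapter discusses for the success of simple averages.
```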
Prediction in dynamic models with time-dependent conditional heteroskedasticity, Working paper no
, 1990
Abstract

Cited by 43 (7 self)
This paper considers forecasting the conditional mean and variance from a single-equation dynamic model with autocorrelated disturbances following an ARMA process, and innovations with time-dependent conditional heteroskedasticity as represented by a linear GARCH process. Expressions for the minimum MSE predictor and the conditional MSE are presented. We also derive the formula for all the theoretical moments of the prediction error distribution from a general dynamic model with GARCH(1, 1) innovations. These results are then used in the construction of ex ante prediction confidence intervals by means of the Cornish-Fisher asymptotic expansion. An empirical example relating to the uncertainty of the expected depreciation of foreign exchange rates illustrates the usefulness of the results.
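For the GARCH(1, 1) building block in this abstract, the standard multi-step conditional variance forecast recursion can be sketched as follows (this is the textbook formula, not the paper's full ARMA-GARCH expressions; parameter values are illustrative):

```python
def garch11_variance_forecast(omega, alpha, beta, sigma2_next, horizon):
    """h-step-ahead conditional variance from a GARCH(1,1):
    sigma^2_{t+h|t} = long_run + (alpha + beta)**(h - 1) * (sigma2_next - long_run),
    where long_run = omega / (1 - alpha - beta) is the unconditional
    variance. Requires covariance stationarity: alpha + beta < 1."""
    long_run = omega / (1.0 - alpha - beta)
    return [long_run + (alpha + beta) ** (h - 1) * (sigma2_next - long_run)
            for h in range(1, horizon + 1)]

# Forecasts decay geometrically from the one-step forecast sigma2_next
# toward the long-run variance omega / (1 - alpha - beta) = 1e-4 / 0.1 = 1e-3.
fc = garch11_variance_forecast(omega=1e-4, alpha=0.05, beta=0.85,
                               sigma2_next=2e-3, horizon=10)
```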
RegressionBased Tests of Predictive Ability
 International Economic Review
, 1998
Abstract

Cited by 43 (8 self)
... helpful comments, and the National Science Foundation and the Graduate School. We develop regression-based tests of hypotheses about out-of-sample prediction errors. Representative tests include ones for zero mean and zero correlation between a prediction error and a vector of predictors. The relevant environments are ones in which predictions depend on estimated parameters. We show that standard regression statistics generally fail to account for error introduced by estimation of these parameters. We propose computationally convenient test statistics that properly account for such error. Simulations indicate that the procedures can work well in samples of size typically available, although there sometimes are substantial size distortions.
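A naive version of such a regression-based test can be sketched as below, on simulated data. Note this is exactly the uncorrected procedure whose standard errors the paper shows to be invalid when forecasts depend on estimated parameters; the paper's corrected statistics are not reproduced here:

```python
import numpy as np

# Naive regression-based check: regress out-of-sample prediction errors on
# a constant and a predictor, then t-test each coefficient against zero.
# This ignores the parameter-estimation error the paper corrects for.
rng = np.random.default_rng(2)
T = 300
x = rng.standard_normal(T)       # candidate predictor
errors = rng.standard_normal(T)  # prediction errors, generated here under the null

X = np.column_stack([np.ones(T), x])
beta, *_ = np.linalg.lstsq(X, errors, rcond=None)
resid = errors - X @ beta
s2 = resid @ resid / (T - X.shape[1])               # residual variance estimate
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))  # OLS standard errors
t_stats = beta / se  # large |t| would signal biased or predictable errors
```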
Correcting the Errors: Volatility Forecast Evaluation Using High-Frequency Data and Realized Volatilities
 working paper
, 2003
Abstract

Cited by 41 (11 self)
We develop general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit the recent nonparametric asymptotic distributional results in Barndorff-Nielsen and Shephard (2002a) along with new results explicitly allowing for leverage effects, are both easy-to-implement and highly accurate in empirically realistic situations. On properly accounting for the measurement errors in the volatility forecast evaluations reported in Andersen, Bollerslev, Diebold and Labys (2003), the adjustments result in markedly higher estimates for the true degree of return volatility predictability.
The Use and Abuse of "RealTime" Data in Economic Forecasting
, 2000
Abstract

Cited by 27 (1 self)
We distinguish between three different ways of using real-time data to estimate forecasting equations and argue that the most frequently used approach should generally be avoided. The point is illustrated with a model that uses monthly observations of industrial production, employment, and retail sales to predict real GDP growth. When the model is estimated using our preferred method, its out-of-sample forecasting performance is clearly superior to that obtained using conventional estimation, and compares favorably with that of the Blue Chip consensus. * Koenig and Dolmas: Federal Reserve Bank of Dallas; Piger: Federal Reserve Board and Federal Reserve Bank of Dallas. This paper had its origins in a forecasting project undertaken jointly with Ken Emery. Helpful comments and suggestions were offered by Nathan Balke, Dean Croushore, Preston Miller, John Robertson, and attendees of the November 1999 meeting of the Federal Reserve System Committee on Macroeconomics. Dean Croushore and ...
Economic forecasting: some lessons from recent research
, 2002
Abstract

Cited by 19 (2 self)
This paper describes some recent advances and contributions to our understanding of economic forecasting. The framework we develop helps explain the findings of forecasting competitions and the prevalence of forecast failure. It constitutes a general theoretical background against which recent results can be judged. We compare this framework to a previous formulation, which was silent on the very issues of most concern to the forecaster. We describe a number of aspects which it illuminates, and draw out the implications for model selection. Finally, we discuss the areas where research remains needed to clarify empirical findings which lack theoretical explanations.