Results 1–10 of 26
Optimal combination of density forecasts
National Institute of Economic and Social Research Discussion Paper, 2005
Abstract

Cited by 23 (9 self)
This paper brings together two important but hitherto largely unrelated areas of the forecasting literature, density forecasting and forecast combination. It proposes a simple data-driven approach to direct combination of density forecasts using optimal weights. These optimal weights are those weights that minimize the ‘distance’, as measured by the Kullback-Leibler information criterion, between the forecasted and the true but unknown density. We explain how this minimization both can and should be achieved. Comparisons with the optimal combination of point forecasts are made. An application to simple time-series density forecasts and two widely used published density forecasts for U.K. inflation, namely the Bank of England and NIESR “fan” charts, illustrates that combination can, but need not always, help.
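The optimal-weights idea can be sketched numerically: choose combination weights that maximize the average log score of the mixture density, which (up to a constant that does not depend on the weights) minimizes the Kullback-Leibler distance to the unknown true density. A minimal illustration in Python; the Gaussian components, toy data, and optimizer choice are invented for the example and are not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def optimal_density_weights(pdf_vals):
    """Weights w (w >= 0, summing to 1) that maximize the average log
    score of the mixture density -- equivalent, up to a constant, to
    minimizing the KLIC distance to the unknown true density.
    `pdf_vals` is a (T, m) array: density of model i evaluated at y_t."""
    T, m = pdf_vals.shape

    def neg_log_score(w):
        mix = pdf_vals @ w                       # mixture density at each y_t
        return -np.mean(np.log(np.maximum(mix, 1e-300)))

    res = minimize(neg_log_score, np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x

# Toy data: outcomes from N(0, 1); candidate forecasts N(0, 1) and N(3, 1).
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
pdfs = np.column_stack([norm.pdf(y, 0, 1), norm.pdf(y, 3, 1)])
w = optimal_density_weights(pdfs)                # nearly all weight on N(0, 1)
```

With one well-specified and one badly located component, the estimated weights concentrate almost entirely on the well-specified density.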
Comparing density forecast models
University of California, Riverside, 2007
Abstract

Cited by 13 (0 self)
In this paper we discuss how to compare various (possibly misspecified) density forecast models using the Kullback-Leibler Information Criterion (KLIC) of a candidate density forecast model with respect to the true density. The KLIC differential between a pair of competing models is the (predictive) log-likelihood ratio (LR) between the two models. Even though the true density is unknown, using the LR statistic amounts to comparing models with the KLIC as a loss function and thus enables us to assess which density forecast model can approximate the true density more closely. We also discuss how this KLIC is related to the KLIC based on the probability integral transform (PIT) in the framework of Diebold et al. (1998). While they are asymptotically equivalent, the PIT-based KLIC is best suited for evaluating the adequacy of each density forecast model and the original KLIC is best suited for comparing competing models. In an empirical study with the S&P 500 and NASDAQ daily return series, we find strong evidence for rejecting the Normal-GARCH benchmark model in favor of models that can capture skewness in the conditional distribution and asymmetry and long memory in the conditional variance.
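The KLIC differential is straightforward to estimate once both candidate densities are evaluated at the realized outcomes: it is just the sample mean of the log-likelihood ratio. A toy sketch (the fat-tailed "truth" and the two candidate models are invented for illustration, not taken from the paper's empirical study):

```python
import numpy as np
from scipy.stats import norm, t as student_t

# Sample KLIC differential = average predictive log-likelihood ratio.
# Positive values favor model A; here the data are fat-tailed, so the
# Student-t model should beat the variance-matched Normal.
y = student_t.rvs(df=3, size=2000, random_state=np.random.default_rng(1))

log_f_a = student_t.logpdf(y, df=3)              # model A: Student-t(3)
log_f_b = norm.logpdf(y, scale=np.sqrt(3.0))     # model B: Normal, same variance

lr = np.mean(log_f_a - log_f_b)                  # estimated KLIC differential
```

In practice one would feed `lr` (scaled by its standard error) into a predictive-ability test rather than read its sign directly.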
Combining Forecast Densities from VARs with Uncertain Instabilities
2008
Abstract

Cited by 11 (8 self)
Clark and McCracken (2008) argue that combining real-time point forecasts from VARs of output, prices and interest rates improves point forecast accuracy in the presence of uncertain model instabilities. In this paper, we generalize their approach to consider forecast density combinations and evaluations. Whereas Clark and McCracken (2008) show that the point forecast errors from particular equal-weight pairwise averages are typically comparable to or better than those of benchmark univariate time series models, we show that neither approach produces accurate real-time forecast densities for recent US data. If greater weight is given to models that allow for the shifts in volatilities associated with the Great Moderation, predictive density accuracy improves substantially.
Evaluating Density Forecasts: Forecast Combinations, Model Mixtures, Calibration and Sharpness
2008
Abstract

Cited by 11 (5 self)
In a recent article, Gneiting, Balabdaoui and Raftery (JRSSB, 2007) propose the criterion of sharpness for the evaluation of predictive distributions or density forecasts. They motivate their proposal by an example in which standard evaluation procedures based on probability integral transforms cannot distinguish between the ideal forecast and several competing forecasts. In this paper we show that their example has some unrealistic features from the perspective of the time-series forecasting literature; hence it is an insecure foundation for their argument that existing calibration procedures are inadequate in practice. We present an alternative, more realistic example in which relevant statistical methods, including information-based methods, provide the required discrimination between competing forecasts. We conclude that there is no need for a subsidiary criterion of sharpness.
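The probability integral transform underlying these calibration checks rests on the fact that, if the forecast density is correct, z_t = F_t(y_t) is i.i.d. uniform on [0, 1]. A minimal illustration; the two forecasts and the Kolmogorov-Smirnov uniformity check are chosen for the example, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm, kstest

# PIT check: under a correct forecast density, z_t = F_t(y_t) is
# i.i.d. uniform on [0, 1].  An overdispersed forecast concentrates the
# PIT values around 0.5 and fails a uniformity test.
rng = np.random.default_rng(2)
y = rng.standard_normal(1000)                 # outcomes from N(0, 1)

pit_ideal = norm.cdf(y)                       # ideal forecast N(0, 1)
pit_wide = norm.cdf(y, scale=2.0)             # overdispersed forecast N(0, 4)

p_ideal = kstest(pit_ideal, "uniform").pvalue
p_wide = kstest(pit_wide, "uniform").pvalue   # tiny: clearly non-uniform
```

In time-series applications one also checks the independence of the PIT sequence, not just its marginal uniformity.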
Quantiles as optimal point predictors
Abstract

Cited by 4 (2 self)
The loss function plays a central role in the theory and practice of forecasting. If the loss is quadratic, the mean of the predictive distribution is the unique optimal point predictor. If the loss is linear, any median is an optimal point forecast. The title of the paper refers to the simple, possibly surprising fact that quantiles arise as optimal point predictors under a general class of economically relevant loss functions, to which we refer as generalized piecewise linear (GPL). The level of the quantile depends on a generic asymmetry parameter that reflects the possibly distinct costs of underprediction and overprediction. A loss function for which quantiles are optimal point predictors is necessarily GPL, similarly to the classical fact that a loss function for which the mean is optimal is necessarily of the Bregman type. We prove general versions of these results that apply on any decision-observation domain and rest on weak assumptions. The empirical relevance of the choices in the transition from the predictive distribution to the point forecast is illustrated on the Bank of England’s density forecasts of United Kingdom inflation rates, and probabilistic predictions of wind energy resources in the Pacific Northwest.
Key words and phrases: asymmetric loss function; Bayes predictor; density forecast; mean; median; mode; optimal point predictor; quantile; statistical decision theory
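For the standard piecewise-linear ("pinball") loss, a member of the GPL class, the optimality claim can be checked numerically: the α-quantile minimizes the average loss over a sample. A small sketch of that special case (grid search, sample size, and the Gaussian data are arbitrary choices for the illustration):

```python
import numpy as np

def pinball_loss(point, y, alpha):
    """Piecewise-linear loss: underprediction costs alpha per unit,
    overprediction costs (1 - alpha) per unit."""
    u = y - point
    return np.mean(np.where(u >= 0, alpha * u, (alpha - 1.0) * u))

# The alpha-quantile of the sample minimizes the average pinball loss.
rng = np.random.default_rng(3)
y = rng.standard_normal(10_000)
alpha = 0.9                                  # asymmetry parameter

grid = np.linspace(-3.0, 3.0, 601)           # candidate point forecasts
best = grid[np.argmin([pinball_loss(c, y, alpha) for c in grid])]
q90 = np.quantile(y, alpha)                  # should nearly coincide with `best`
```

The minimizer found by grid search agrees with the empirical 90% quantile up to the grid resolution.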
Comparing Density Forecasts Using Threshold and Quantile Weighted Scoring Rules
2008
Abstract

Cited by 3 (0 self)
We propose a method for comparing density forecasts that is based on weighted versions of the continuous ranked probability score. The weighting emphasizes regions of interest, such as the tails or the center of a variable’s range, while retaining propriety, as opposed to a recently developed weighted likelihood ratio test, which can be hedged. Threshold- and quantile-based decompositions of the continuous ranked probability score can be illustrated graphically and prompt insights into the strengths and deficiencies of a forecasting method. We illustrate the use of the test and graphical tools in case studies on the Bank of England’s density forecasts of quarterly inflation rates in the United Kingdom, and probabilistic predictions of wind resources in the …
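The threshold-weighted score described here can be approximated by numerically integrating w(z)(F(z) − 1{y ≤ z})² over a grid of thresholds. A sketch under invented Gaussian forecasts; the particular weight functions and data are assumptions for the example, not the paper's code:

```python
import numpy as np
from scipy.stats import norm

def tw_crps(cdf, y, weight, zgrid):
    """Threshold-weighted CRPS, approximated on a uniform grid:
    integral of weight(z) * (F(z) - 1{y <= z})^2 dz."""
    dz = zgrid[1] - zgrid[0]
    integrand = weight(zgrid) * (cdf(zgrid) - (y <= zgrid)) ** 2
    return integrand.sum() * dz

z = np.linspace(-8.0, 8.0, 2001)
flat = lambda s: np.ones_like(s)             # unweighted -> ordinary CRPS
upper_tail = lambda s: norm.cdf(s)           # emphasizes large outcomes

rng = np.random.default_rng(4)
ys = rng.standard_normal(200)                # outcomes from N(0, 1)

f_true = lambda s: norm.cdf(s, 0, 1)         # correctly calibrated forecast
f_wide = lambda s: norm.cdf(s, 0, 2)         # overdispersed forecast

crps_true = np.mean([tw_crps(f_true, yi, flat, z) for yi in ys])
crps_wide = np.mean([tw_crps(f_wide, yi, flat, z) for yi in ys])
tail_true = np.mean([tw_crps(f_true, yi, upper_tail, z) for yi in ys])
```

Because the weight function is non-negative, the weighted score remains proper: the correctly calibrated forecast scores better than the overdispersed one on average.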
Forecast Accuracy and Economic Gains from Bayesian Model Averaging using Time Varying Weights
Abstract

Cited by 3 (0 self)
Several Bayesian model combination schemes, including some novel approaches that simultaneously allow for parameter uncertainty, model uncertainty and robust time-varying model weights, are compared in terms of forecast accuracy and economic gains using financial and macroeconomic time series. The results indicate that the proposed time-varying model weight schemes outperform other combination schemes in terms of predictive and economic gains. In an empirical application using returns on the S&P 500 index, time-varying model weights provide improved forecasts with substantial economic gains in an investment strategy including transaction costs. Another empirical example refers to forecasting US economic growth over the business cycle. It suggests that time-varying combination schemes may be very useful in business cycle analysis and forecasting, as these may provide an early indicator for recessions.
Key words: forecast combination, Bayesian model averaging, time-varying model weights, portfolio optimization, business cycle.
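One simple way to obtain time-varying combination weights is to discount past predictive likelihoods with a forgetting factor, so that recent forecast performance dominates. This is a simplified sketch in the spirit of time-varying weights, not one of the specific schemes proposed in the paper; the forgetting factor and the two-regime example are invented:

```python
import numpy as np
from scipy.stats import norm

def discounted_weights(pdf_vals, delta=0.95):
    """Combination weights from exponentially discounted predictive
    log-likelihoods (forgetting factor `delta`).  `pdf_vals` is a
    (T, m) array: density of model i evaluated at y_t."""
    T, m = pdf_vals.shape
    log_score = np.zeros(m)
    path = np.empty((T, m))
    for t in range(T):
        log_score = delta * log_score + np.log(pdf_vals[t])
        w = np.exp(log_score - log_score.max())   # stabilized softmax
        path[t] = w / w.sum()
    return path

# Two-regime example: model A fits the first half, model B the second.
rng = np.random.default_rng(5)
y = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])
pdfs = np.column_stack([norm.pdf(y, 0, 1), norm.pdf(y, 3, 1)])
w_path = discounted_weights(pdfs)             # weights track the regime shift
```

With discounting, the weights move quickly to the newly dominant model after the break; with `delta = 1` they would react far more sluggishly.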
Measuring Output Gap Uncertainty
2009
Abstract

Cited by 1 (0 self)
We propose a methodology for producing density forecasts for the output gap in real time using a large number of vector autoregressions in inflation and output gap measures. Density combination utilizes a linear mixture-of-experts framework to produce potentially non-Gaussian ensemble densities for the unobserved output gap. In our application, we show that data revisions substantially alter our probabilistic assessments of the output gap using a variety of output gap measures derived from univariate detrending filters. The resulting ensemble produces well-calibrated forecast densities for US inflation in real time, in contrast to those from simple univariate autoregressions which ignore the contribution of the output gap. Combining evidence from both linear trends and more flexible univariate detrending filters induces strong multimodality in the predictive densities for the unobserved output gap. The peaks associated with these two detrending methodologies indicate output gaps of opposite sign for some observations, reflecting the pervasive nature of model uncertainty in our US data.
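The linear mixture-of-experts ("opinion pool") combination evaluates the ensemble density as a weighted sum of component densities, which is how multimodality can arise even from single-peaked components. A minimal illustration (the fixed equal weights and the two Gaussian components are invented for the example):

```python
import numpy as np
from scipy.stats import norm

# Linear opinion pool: the combined density is a weighted sum of
# component densities, so it can be bimodal even though every
# component is Gaussian.
grid = np.linspace(-6.0, 6.0, 1201)
weights = np.array([0.5, 0.5])               # illustrative fixed weights
components = np.vstack([norm.pdf(grid, -2.0, 0.8),
                        norm.pdf(grid, 2.0, 0.8)])

ensemble = weights @ components              # mixture density on the grid
dz = grid[1] - grid[0]
mass = ensemble.sum() * dz                   # should be close to 1
```

Here the pooled density has two peaks near the component means and a trough between them, mirroring the opposite-signed output gap peaks described in the abstract.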
Macro Modelling with Many Models
2009
Abstract

Cited by 1 (1 self)
We argue that the next generation of macro modellers at Inflation Targeting central banks should adapt a methodology from the weather forecasting literature known as ‘ensemble modelling’. In this approach, uncertainty about model specifications (e.g., initial conditions, parameters, and boundary conditions) is explicitly accounted for by constructing ensemble predictive densities from a large number of component models. The components allow the modeller to explore a wide range of uncertainties, and the resulting ensemble ‘integrates out’ these uncertainties using time-varying weights on the components. We provide two examples of this modelling strategy: (i) forecasting inflation with a disaggregate ensemble; and (ii) forecasting inflation with an ensemble DSGE.
Density nowcasts and model combination: nowcasting Euro-area GDP growth over the 2008–9 recession
2010
Abstract

Cited by 1 (0 self)
Combined density nowcasts for quarterly Euro-area GDP growth are produced based on the real-time performance of component models. Components are distinguished by their use of “hard” and “soft” indicators. We consider the accuracy of the density nowcasts as within-quarter information on the monthly indicators accumulates, focusing on their ability to anticipate the recent recession probabilistically. We find that the relative utility of “soft” data increased suddenly during the recession. But since this instability was hard to detect in real time, it helps, when producing nowcasts knowing only one month’s “hard” data, to weight the different indicators equally. As more monthly “hard” data arrive, better-calibrated densities are obtained by giving a higher weight in the combination to these “hard” indicators.