Results 11–20 of 171
Forecast Combinations
 Handbook of Economic Forecasting
, 2006
Abstract

Cited by 50 (3 self)
Forecast combinations have frequently been found in empirical studies to produce better forecasts on average than methods based on the ex-ante best individual forecasting model. Moreover, simple combinations that ignore correlations between forecast errors often dominate more refined combination schemes aimed at estimating the theoretically optimal combination weights. In this chapter we analyze theoretically the factors that determine the advantages from combining forecasts (for example, the degree of correlation between forecast errors and the relative size of the individual models’ forecast error variances). Although the reasons for the success of simple combination schemes are poorly understood, we discuss several possibilities related to model misspecification, instability (nonstationarities) and estimation error in situations where the number of models is large relative to the available sample size. We discuss the role of combinations under asymmetric loss and consider combinations of point, interval and probability forecasts. Key words: forecast combinations; pooling and trimming; shrinkage methods; model misspecification; diversification gains
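The trade-off this abstract describes can be made concrete with a small sketch (an illustration under assumed numbers, not the chapter's own derivation): for unbiased forecasts with error covariance matrix Sigma, the variance-minimizing weights are w* = Sigma^{-1} 1 / (1' Sigma^{-1} 1), whereas the simple average uses 1/N regardless of Sigma.

```python
import numpy as np

# Hypothetical example: variance-minimizing combination weights for two
# unbiased forecasters with error covariance Sigma, versus the simple
# equal-weighted average the abstract says often dominates in practice.
def optimal_weights(sigma: np.ndarray) -> np.ndarray:
    """w* = Sigma^{-1} 1 / (1' Sigma^{-1} 1): minimizes combined error variance."""
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)
    return w / w.sum()

# Error variances 1 and 4, error covariance 0.5 (assumed numbers).
sigma = np.array([[1.0, 0.5],
                  [0.5, 4.0]])
w_opt = optimal_weights(sigma)   # tilts toward the low-variance model
w_eq = np.full(2, 0.5)           # simple average, ignores Sigma

var = lambda w: float(w @ sigma @ w)
print(w_opt, var(w_opt), var(w_eq))
```

With these numbers the optimal weights are (0.875, 0.125) with combined variance 0.9375, against 1.5 for the equal-weighted average; the gap shrinks as the individual variances become more similar, which is one reason simple averages hold up so well.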
Forecast uncertainties in macroeconometric modelling: an application to the UK economy
 Journal of the American Statistical Association
, 2003
Abstract

Cited by 44 (14 self)
This paper argues that probability forecasts convey information on the uncertainties that surround macroeconomic forecasts in a straightforward manner which is preferable to other alternatives, including the use of confidence intervals. Probability forecasts obtained using a small benchmark macroeconometric model as well as a number of other alternatives are presented and evaluated using recursive forecasts generated over the period 1999q1–2001q1. Out-of-sample probability forecasts of inflation and output growth are also provided over the period 2001q2–2003q1, and their implications discussed in relation to the Bank of England’s inflation target and the need to avoid recessions, both as separate events and jointly. The robustness of the results to parameter and model uncertainties is also investigated by a pragmatic implementation of the Bayesian model averaging approach.
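A minimal sketch of what a probability forecast reports (the Gaussian assumption and all numbers here are made up for illustration, not taken from the paper): the probability of an event such as "inflation at or below target," given a model's point forecast and forecast standard error.

```python
from math import erf, sqrt

# Hypothetical event-probability calculation from a Gaussian predictive
# distribution: P(outcome <= target) given a point forecast and its s.e.
def prob_at_or_below(target: float, point_forecast: float, se: float) -> float:
    z = (target - point_forecast) / se
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # Gaussian CDF evaluated at target

# e.g. assumed point forecast 2.2%, forecast s.e. 0.6, target ceiling 2.5%
p = prob_at_or_below(2.5, 2.2, 0.6)
```

Reporting `p` directly conveys the forecast uncertainty in one number, which is the paper's argument for probability forecasts over confidence intervals.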
Bayesian model averaging
 Statistical Science
, 1999
Abstract

Cited by 42 (0 self)
Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to overconfident inferences and decisions that are more risky than one thinks they are. Bayesian model averaging (BMA) provides a coherent mechanism for accounting for this model uncertainty. Several methods for implementing BMA have recently emerged. We discuss these methods and present a number of examples. In these examples, BMA provides improved out-of-sample predictive performance. We also provide a catalogue of ...
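One pragmatic implementation of BMA (a common approximation in this literature, sketched here under assumed numbers) weights each candidate model by exp(-BIC/2), which approximates its posterior probability under equal prior model probabilities, and averages predictions with those weights.

```python
import numpy as np

# BIC-based approximation to BMA posterior model weights:
# p(M_k | data) ~ exp(-BIC_k / 2), normalized across the candidate set.
def bma_weights(bics) -> np.ndarray:
    b = np.asarray(bics, dtype=float)
    logw = -0.5 * (b - b.min())   # shift by the minimum for numerical stability
    w = np.exp(logw)
    return w / w.sum()

bics = [100.0, 102.0, 110.0]            # assumed BICs for three candidate models
w = bma_weights(bics)                    # lower BIC -> larger weight
preds = np.array([1.0, 1.4, 3.0])        # each model's point prediction (assumed)
bma_pred = float(w @ preds)              # model-averaged prediction
```

The averaged prediction stays inside the range of the individual models' predictions but discounts poorly supported models, rather than committing to a single selected model.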
Adaptive Regression by Mixing
 Journal of the American Statistical Association
Abstract

Cited by 39 (7 self)
Adaptation over different procedures is of practical importance. Different procedures perform well under different conditions. In many practical situations, it is rather hard to assess which conditions are (approximately) satisfied so as to identify the best procedure for the data at hand. Thus automatic adaptation over various scenarios is desirable. A practically feasible method, named Adaptive Regression by Mixing (ARM), is proposed to convexly combine general candidate regression procedures. Under mild conditions, the resulting estimator is theoretically shown to perform optimally in rates of convergence without knowing which of the original procedures work the best. Simulations are conducted in several settings, including comparing a parametric model with nonparametric alternatives, comparing a neural network with projection pursuit in multidimensional regression, and combining bandwidths in kernel regression. The results clearly support the theoretical property of ARM. The ARM ...
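A loose sketch of the weighting idea behind ARM (the paper's actual scheme, including its data-splitting and variance estimation, is more involved; everything here is an assumed simplification): weight each candidate procedure by its exponentiated predictive log-likelihood on held-out data, then combine predictions convexly.

```python
import numpy as np

# Convexly combine candidate procedures, weighting each by its Gaussian
# predictive log-likelihood on a holdout set (a simplified ARM-style rule).
def mixing_weights(pred_sets, y_holdout, sigma=1.0) -> np.ndarray:
    logls = np.array([-0.5 * np.sum((y_holdout - p) ** 2) / sigma ** 2
                      for p in pred_sets])   # log-likelihood up to a constant
    w = np.exp(logls - logls.max())          # shift by max for stability
    return w / w.sum()

y_hold = np.array([1.0, 2.0, 3.0])           # assumed holdout targets
preds_a = np.array([1.1, 2.0, 2.9])          # an accurate candidate procedure
preds_b = np.array([0.0, 0.0, 0.0])          # a poor candidate procedure
w = mixing_weights([preds_a, preds_b], y_hold)
combined = w[0] * preds_a + w[1] * preds_b   # convex combination
```

Because the weights depend only on observed predictive performance, the mixture adapts to whichever candidate happens to suit the data, which is the adaptation property the abstract highlights.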
Improving Model Accuracy using Optimal Linear Combinations of Trained Neural Networks
 IEEE Transactions on Neural Networks
, 1992
Abstract

Cited by 38 (3 self)
Neural network (NN) based modeling often requires trying multiple networks with different architectures and training parameters in order to achieve an acceptable model accuracy. Typically, only one of the trained networks is selected as "best" and the rest are discarded. We propose using optimal linear combinations (OLCs) of the corresponding outputs of a set of NNs as an alternative to using a single network. Modeling accuracy is measured by mean squared error (MSE) with respect to the distribution of random inputs. Optimality is defined by minimizing the MSE, with the resultant combination referred to as the MSE-OLC. We formulate the MSE-OLC problem for trained NNs and derive two closed-form expressions for the optimal combination weights. An example that illustrates significant improvement in model accuracy as a result of using MSE-OLCs of the trained networks is included.
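An illustrative sketch of the idea (the toy "networks" and data below are assumptions, and the paper's own closed-form derivation differs in its details): estimate MSE-optimal combination weights for a set of trained models by least squares on their stacked outputs against held-out targets.

```python
import numpy as np

# Least-squares estimate of MSE-optimal linear combination weights for k
# models, from their outputs F (n x k) on held-out inputs with targets y.
def olc_weights(F: np.ndarray, y: np.ndarray) -> np.ndarray:
    w, *_ = np.linalg.lstsq(F, y, rcond=None)
    return w

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x)                     # assumed target function
# Stand-ins for two imperfect trained networks with opposite systematic errors:
f1 = np.sin(3 * x) + 0.3 * x          # "network" 1: error +0.3*x
f2 = np.sin(3 * x) - 0.3 * x          # "network" 2: error -0.3*x
F = np.column_stack([f1, f2])
w = olc_weights(F, y)                 # here exactly [0.5, 0.5]: errors cancel
combined = F @ w
```

Because the two models' errors are negatively related, the combination achieves a lower MSE than either model alone, which is the mechanism behind the "significant improvement" the abstract reports.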
Diversity in Neural Network Ensembles
, 2004
Abstract

Cited by 37 (4 self)
We study the issue of error diversity in ensembles of neural networks. In ensembles of regression estimators, the measurement of diversity can be formalised as the Bias-Variance-Covariance decomposition. In ensembles of classifiers, there is no neat theory in the literature to date. Our objective is to understand how to precisely define, measure, and create diverse errors for both cases. As a focal point we study one algorithm, Negative Correlation (NC) Learning, which has been claimed, with supporting empirical evidence, to enforce useful error diversity, creating neural network ensembles with very competitive performance on both classification and regression problems. With the lack of a solid understanding of its dynamics, we engage in a theoretical and empirical investigation. In an initial empirical stage, we demonstrate the application of an evolutionary search algorithm to locate the optimal value for λ, the configurable parameter in NC. We observe the behaviour of the optimal parameter under different ensemble architectures and datasets; we note a high degree of unpredictability, and embark on a more formal investigation. During the theoretical investigations, we find that NC succeeds due to exploiting the ...
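The NC Learning penalty has a standard form in the literature (the training loop and example numbers below are omitted/assumed): each member i is trained on e_i = (f_i - y)^2 / 2 + λ·p_i, where p_i = (f_i - f̄)·Σ_{j≠i}(f_j - f̄) = -(f_i - f̄)^2, so increasing λ pushes members' errors apart from the ensemble mean.

```python
import numpy as np

# Per-member NC Learning loss for one training example: squared error plus
# lam times the NC penalty p_i = -(f_i - fbar)^2 (since the deviations from
# the ensemble mean fbar sum to zero, sum_{j != i}(f_j - fbar) = -(f_i - fbar)).
def nc_member_losses(member_outputs, y: float, lam: float) -> np.ndarray:
    f = np.asarray(member_outputs, dtype=float)
    fbar = f.mean()
    penalty = -(f - fbar) ** 2
    return 0.5 * (f - y) ** 2 + lam * penalty

# Assumed member outputs spread around the target y = 1.0:
losses = nc_member_losses([0.8, 1.0, 1.2], y=1.0, lam=0.5)
```

With λ = 0 the members train independently; larger λ rewards deviation from the ensemble mean, trading individual accuracy for error diversity, and the abstract's investigation concerns where in that trade-off the optimal λ sits.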
Rule-based forecasting: Development and validation of an expert-systems approach to combining time-series extrapolations
 Management Science
, 1992
Abstract

Cited by 34 (12 self)
This paper examines the feasibility of rule-based forecasting, a procedure that applies forecasting expertise and domain knowledge to produce forecasts according to features of the data. We developed a rule base to make annual extrapolation forecasts for economic and demographic time series. The development of the rule base drew upon protocol analyses of five experts on forecasting methods. This rule base, consisting of 99 rules, combined forecasts from four extrapolation methods (the random walk, regression, Brown's linear exponential smoothing, and Holt's exponential smoothing) according to rules using 18 features of time series. For one-year-ahead ex ante forecasts of 90 annual series, the median absolute percentage error (MdAPE) for rule-based forecasting was 13% less than that from equally weighted combined forecasts. For six-year-ahead ex ante forecasts, rule-based forecasting had an MdAPE that was 42% less. The improvement in accuracy of the rule-based forecasts over equally weighted combined forecasts was statistically significant. Rule-based forecasting was more accurate than equal-weights combining in situations involving significant trends, low uncertainty, stability, and good domain expertise.
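A toy illustration of the rule-based idea (the actual 99-rule base is far richer; the single rule, threshold, and weights here are invented): shift combination weight between a random walk and a trend regression according to one series feature, the strength of the fitted trend.

```python
import numpy as np

# One hypothetical rule: if the series shows a strong linear trend (high R^2),
# weight the trend-regression forecast heavily; otherwise lean on the random walk.
def rule_based_forecast(y: np.ndarray, horizon: int = 1) -> float:
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    trend_fc = intercept + slope * (len(y) - 1 + horizon)  # regression forecast
    rw_fc = y[-1]                                          # random-walk forecast
    r2 = np.corrcoef(t, y)[0, 1] ** 2                      # trend-strength feature
    w_trend = 0.8 if r2 > 0.9 else 0.2                     # invented rule/threshold
    return w_trend * trend_fc + (1 - w_trend) * rw_fc

y = np.array([1.0, 2.1, 2.9, 4.2, 5.0])   # assumed strongly trending series
fc = rule_based_forecast(y)
```

For this trending series the rule extrapolates beyond the last observation instead of averaging the methods equally, mirroring the paper's finding that rule-based weighting helps most when trends are significant and stable.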
Selection of estimation window in the presence of breaks
 Journal of Econometrics
, 2007
Abstract

Cited by 32 (6 self)
In situations where a regression model is subject to one or more breaks it is shown that it can be optimal to use pre-break data to estimate the parameters of the model used to compute out-of-sample forecasts. The issue of how best to exploit the trade-off that might exist between bias and forecast error variance is explored and illustrated for the multivariate regression model under the assumption of strictly exogenous regressors. In practice when this assumption cannot be maintained and both the time and size of the breaks are unknown the optimal choice of the observation window will be subject to further uncertainties that make exploiting the bias-variance trade-off difficult. To that end we propose a new set of cross-validation methods for selection of a single estimation window and weighting or pooling methods for combination of forecasts based on estimation windows of different lengths. Monte Carlo simulations are used to show when these procedures work well compared with methods that ignore the presence of breaks. JEL Classifications: C22, C53.
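A pragmatic sketch of the cross-validation idea for window selection (the specific scheme below, window-mean forecasts scored by squared error on a pseudo-out-of-sample holdout, is our assumption, not the paper's method): score each candidate window length on recent one-step forecasts and pick the best.

```python
import numpy as np

# Pick an estimation window by pseudo-out-of-sample cross-validation: each of
# the last `holdout` points is forecast by the mean of the preceding m
# observations, and the window m with the lowest average squared error wins.
def cv_window(y: np.ndarray, windows, holdout: int = 20):
    scores = {}
    for m in windows:
        errs = [(y[t] - y[t - m:t].mean()) ** 2
                for t in range(len(y) - holdout, len(y))]
        scores[m] = float(np.mean(errs))
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(1)
# Assumed series with a mean break at t = 100 (mean 0 -> mean 3):
y = np.concatenate([rng.normal(0.0, 1.0, 100),
                    rng.normal(3.0, 1.0, 100)])
best, scores = cv_window(y, windows=[20, 50, 150])
```

Short windows avoid contamination from pre-break data (less bias) at the cost of noisier parameter estimates (more variance); here the 150-observation window spans the break and is penalized, illustrating the bias-variance trade-off the abstract describes.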
Extracting Collective Probabilistic Forecasts from Web Games
, 2001
Abstract

Cited by 30 (10 self)
Game sites on the World Wide Web draw people from around the world with specialized interests, skills, and knowledge.