Results 1-10 of 18
Forecast Evaluation and Combination
 In G.S. Maddala and C.R. Rao (eds.), Handbook of Statistics
, 1996
Cited by 84 (24 self)
Abstract
It is obvious that forecasts are of great importance and widely used in economics and finance. Quite simply, good forecasts lead to good decisions. The importance of forecast evaluation and combination techniques follows immediately: forecast users naturally have a keen interest in monitoring and improving forecast performance. More generally, forecast evaluation figures prominently in many questions in empirical economics and finance, such as: Are expectations rational? (e.g., Keane and Runkle, 1990; Bonham and Cohen, 1995) Are financial markets efficient? (e.g., Fama, 1970, 1991) Do macroeconomic shocks cause agents to revise their forecasts at all horizons, or just at short and medium-term horizons? (e.g., Campbell and Mankiw, 1987; Cochrane, 1988) Are observed asset returns "too volatile"? (e.g., Shiller, 1979; LeRoy and Porter, 1981) Are asset returns forecastable over long horizons? (e.g., Fama and French, 1988; Mark, 1995)
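The forecast combination idea the abstract highlights can be made concrete with a toy sketch: an equal-weight average of two forecast series, scored by mean squared error. All the numbers below are invented for illustration; the point is only that a simple combination can beat either individual forecaster when their errors partly offset.

```python
# Minimal forecast-combination sketch (hypothetical data).

def mse(forecasts, actuals):
    """Mean squared forecast error."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

def combine(f1, f2, w=0.5):
    """Convex combination w*f1 + (1-w)*f2 of two forecast series."""
    return [w * a + (1 - w) * b for a, b in zip(f1, f2)]

actuals = [1.0, 2.0, 1.5, 3.0, 2.5]
f1 = [1.2, 1.8, 1.9, 2.6, 2.1]   # forecaster 1 (hypothetical)
f2 = [0.7, 2.3, 1.1, 3.3, 2.8]   # forecaster 2 (hypothetical)
combined = combine(f1, f2)
```

Here the two forecasters' errors have opposite signs on most dates, so the equal-weight average has a much smaller MSE than either input.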
Prequential Probability: Principles and Properties
, 1997
Cited by 34 (2 self)
Abstract
In this paper we first illustrate the above considerations for a variety of appealing criteria, and then, in an attempt to understand this behaviour, introduce a new game-theoretic framework for Probability Theory, the `prequential framework', which is particularly suited for the study of such problems.
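The prequential idea is that a forecaster is assessed purely through its sequence of one-step-ahead predictive distributions. A rough sketch, with a Gaussian plug-in forecaster as a stand-in (not taken from the paper):

```python
import math

def prequential_log_loss(data, predict):
    """Accumulated -log p(x_t | x_1, ..., x_{t-1}) along the sequence."""
    total = 0.0
    for t in range(1, len(data)):
        mu, sigma = predict(data[:t])
        density = math.exp(-0.5 * ((data[t] - mu) / sigma) ** 2) / (
            sigma * math.sqrt(2.0 * math.pi))
        total += -math.log(density)
    return total

def gaussian_forecaster(history):
    """Predictive N(sample mean, 1) built from the data seen so far."""
    return sum(history) / len(history), 1.0
```

Any forecasting system that emits a predictive distribution before each observation can be scored this way, regardless of how it was constructed.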
Bayesian Modeling of Uncertainty in Ensembles of Climate Models
, 2008
Cited by 29 (6 self)
Abstract
Projections of future climate change caused by increasing greenhouse gases depend critically on numerical climate models coupling the ocean and atmosphere (GCMs). However, different models differ substantially in their projections, which raises the question of how the different models can best be combined into a probability distribution of future climate change. For this analysis, we have collected both current and future projected mean temperatures produced by nine climate models for 22 regions of the earth. We also have estimates of current mean temperatures from actual observations, together with standard errors, that can be used to calibrate the climate models. We propose a Bayesian analysis that allows us to combine the different climate models into a posterior distribution of future temperature increase, for each of the 22 regions, while allowing for the different climate models to have different variances. Two versions of the analysis are proposed, a univariate analysis in which each region is analyzed separately, and a multivariate analysis in which the 22 regions are combined into an overall statistical model. A cross-validation approach is proposed to confirm the reasonableness of our Bayesian predictive distributions. The results of this analysis allow for a quantification of the uncertainty of climate model projections as a Bayesian posterior distribution, substantially extending previous approaches to uncertainty in climate models.
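The core combination step, models with different variances pooled into one posterior, can be sketched in its simplest form: if model j reports x_j ~ N(mu, v_j) independently and mu has a flat prior, the posterior for mu is normal with a precision-weighted mean. The numbers below are invented, not actual GCM output.

```python
# Precision-weighted pooling of independent normal estimates (toy example).

def precision_weighted(estimates, variances):
    """Posterior mean and variance of mu given x_j ~ N(mu, v_j), flat prior."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, estimates)) / total
    return mean, 1.0 / total

# Three hypothetical models' projected warming (deg C) with their variances.
mean, var = precision_weighted([2.0, 3.0, 2.5], [0.5, 1.0, 0.25])
```

Note how the pooled variance is smaller than that of the best single model: combining estimates sharpens the posterior.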
Lightweight emulators for multivariate deterministic functions
 Forthcoming in the Journal of Computational and Graphical Statistics
, 2007
Cited by 21 (5 self)
Abstract
An emulator is a statistical model of a deterministic function, to be used where the function itself is too expensive to evaluate within the loop of an inferential calculation. Typically, emulators are deployed when dealing with complex functions that have large and heterogeneous input and output spaces: environmental models, for example. In this challenging situation we should be sceptical about our statistical models, no matter how sophisticated, and adopt approaches that prioritise interpretative and diagnostic information, and the flexibility to respond. This paper presents one such approach, candidly rejecting the standard Smooth Gaussian Process approach in favour of a fully Bayesian treatment of multivariate regression which, by permitting sequential updating, allows for very detailed predictive diagnostics. It is argued directly and by illustration that the incoherence of such a treatment (which does not impose continuity on the model outputs) is more than compensated for by the wealth of available information, and the possibilities for generalisation.
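The sequential-updating idea can be illustrated with a much simpler stand-in than the paper's multivariate treatment: a conjugate Bayesian update for a single regression weight, applied one observation at a time so that a predictive check is available before each point is absorbed.

```python
# One-parameter conjugate Bayesian regression, updated sequentially
# (a simplified sketch, not the paper's model).

def bayes_update(prior_mean, prior_var, x, y, noise_var):
    """Posterior over w in y = w*x + e, e ~ N(0, noise_var)."""
    post_var = 1.0 / (1.0 / prior_var + x * x / noise_var)
    post_mean = post_var * (prior_mean / prior_var + x * y / noise_var)
    return post_mean, post_var

mean, var = 0.0, 10.0                      # vague prior on w
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
    prediction = mean * x                  # one-step predictive diagnostic
    mean, var = bayes_update(mean, var, x, y, noise_var=0.1)
```

Because each posterior is computed before the next point arrives, the gap between `prediction` and the realised `y` serves as exactly the kind of running diagnostic the abstract describes.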
Evaluating the predictive accuracy of volatility models
 Journal of Forecasting
, 2001
Cited by 16 (4 self)
Abstract
Statistical loss functions that generally lack economic content are commonly used for evaluating financial volatility forecasts. In this paper, an evaluation framework based on loss functions tailored to a user’s economic interests is proposed. According to these interests, the user specifies the economic events to be forecast, the criterion with which to evaluate these forecasts, and the subsets of the forecasts of particular interest. The volatility forecasts from a model are then transformed into probability forecasts of the relevant events and evaluated using the specified criteria (i.e., a probability scoring rule and calibration tests). An empirical example using exchange rate data illustrates the framework and confirms that the choice of loss function directly affects the forecast evaluation results.
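The transformation step described above can be sketched directly: a volatility forecast sigma for a zero-mean return becomes a probability forecast of the event |r| > c, and probability forecasts are scored with the Brier (quadratic) rule. The threshold c is illustrative, not taken from the paper.

```python
import math

def prob_exceed(sigma, c):
    """P(|r| > c) when r ~ N(0, sigma^2): volatility forecast turned into
    a probability forecast of the event of interest."""
    z = c / sigma
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)

def brier(probs, outcomes):
    """Brier score: mean squared gap between probability and 0/1 outcome."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)
```

A perfectly confident and correct forecaster scores 0; a constant 0.5 forecaster scores 0.25 regardless of the outcomes, which is why the score rewards sharp, well-calibrated probabilities.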
Combining model selection procedures for online prediction
 Sankhya A
, 2001
Cited by 3 (2 self)
Abstract
SUMMARY. Here we give a technique for online prediction that uses different model selection principles (MSPs) at different times. The central idea is that each MSP is associated with a collection of models for which it is best suited. This means one can use the data to choose an MSP. Then, the MSP chosen is used with the data to choose a model, and the parameters of the model are estimated so that predictions can be made. Depending on the degree of discrepancy between the predicted values and the actual outcomes, one may update the parameters within a model, reuse the MSP to re-choose the model and estimate its parameters, or start all over again, re-choosing the MSP. Our main formal result is a theorem which gives conditions under which our technique performs better than always using the same MSP. We also discuss circumstances under which dropping data points may lead to better predictions.
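A toy version of what "different MSPs" means in practice: AIC and BIC are each a penalty rule applied to the same fitted models, here a constant versus a straight line. The data and candidate set are invented for illustration.

```python
import math

def fit_mean(xs, ys):
    """Constant model: predict the sample mean (1 parameter)."""
    m = sum(ys) / len(ys)
    return (lambda x: m), 1

def fit_line(xs, ys):
    """Least-squares straight line (2 parameters)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    a = my - b * mx
    return (lambda x: a + b * x), 2

def criterion(pred, k, xs, ys, penalty):
    """Gaussian log-likelihood term plus the MSP's complexity penalty."""
    n = len(xs)
    rss = sum((pred(x) - y) ** 2 for x, y in zip(xs, ys))
    return n * math.log(rss / n) + penalty(k, n)

def choose(msp, xs, ys):
    """Apply one model selection principle; return the winning model name."""
    fits = {"mean": fit_mean(xs, ys), "line": fit_line(xs, ys)}
    return min(fits, key=lambda name: criterion(*fits[name], xs, ys, msp))

aic = lambda k, n: 2 * k
bic = lambda k, n: k * math.log(n)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 2.0, 3.9, 6.1]    # roughly y = 2x, with small noise
```

The paper's technique would sit one level above this: using held-out predictive performance to decide whether `aic` or `bic` (or another MSP entirely) should be trusted for the data at hand.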
Predictive Inference, Rare Events And Hierarchical Models
, 1997
Cited by 3 (2 self)
Abstract
this paper have implicitly assumed a single homogeneous sample. However, they are also applicable in multisample problems, in which the parameters of the model are possibly different from one sample to another. Such problems lead to what are usually called empirical Bayes methods of analysis. In recent years it has become more common to solve such problems from a fully Bayesian point of view, using a hierarchical model structure to link together the parameters of the different subsamples. This is the point of view taken, for example, in the excellent recent monograph by Carlin and Louis (1996). Despite the very rapid growth of this field, there has been comparatively little study of the frequentist properties of Bayesian procedures in this setting. Berger and Strawderman (1996) established some admissibility results, which have the advantage of not relying on any kind of asymptotics, and which provide guidance on the choice of prior particularly where improper priors are concerned. On the other hand, the class of models to which their results apply is restrictive, and admissibility results do not necessarily help to pick out a prior distribution which has good properties under particular conditions. In contrast, the results of the present paper are asymptotic (letting sample size n → ∞ while the number of samples remains fixed) but they do allow explicit computations to be made under a variety of circumstances. In the present section, these ideas are worked out in some detail for the simplest problem in this class: the case of p normal distributions with unknown means and known common variance. In the next section, a more complicated example is considered. Suppose there are p subgroups and the data in the j'th subgroup follow a N(θ_j, 1) distribution. Here the vector ...
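The simplest case described above, p normal means with unit variance, admits a short empirical Bayes sketch: with theta_j ~ N(mu, tau^2), moment estimates of mu and tau^2 are plugged in and each observed mean is shrunk toward the grand mean. The data below are invented.

```python
# Empirical Bayes shrinkage for x_j ~ N(theta_j, 1), theta_j ~ N(mu, tau^2).

def eb_posterior_means(xs):
    p = len(xs)
    mu = sum(xs) / p
    s2 = sum((x - mu) ** 2 for x in xs) / (p - 1)  # estimates 1 + tau^2
    tau2 = max(s2 - 1.0, 0.0)
    b = tau2 / (1.0 + tau2)        # posterior weight on the observation
    return [mu + b * (x - mu) for x in xs]

shrunk = eb_posterior_means([0.0, 1.0, 2.0, 3.0, 4.0])
```

Every estimate is pulled part of the way toward the grand mean, with the amount of shrinkage governed by how much between-group spread exceeds the within-group noise.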
Does the Covariance Structure Matter in Longitudinal Modelling for the Prediction of Future CD4 Counts?
Cited by 2 (0 self)
Abstract
We investigate the importance of the assumed covariance structure for longitudinal modelling of CD4 counts. We examine how individual predictions of future CD4 counts are affected by the covariance structure. We consider four covariance structures: one based on an integrated Ornstein-Uhlenbeck stochastic process, one based on Brownian motion and two derived from standard linear and quadratic random effects models. Using data from the Multicenter AIDS Cohort Study and from a simulation study, we show that there is a noticeable deterioration in the coverage rate of confidence intervals if we assume the wrong covariance. There is also a loss in efficiency. The quadratic random effects model is found to be the best in terms of correctly calibrated prediction intervals, but is substantially less efficient than the others. Incorrectly specifying the covariance structure as linear random effects gives too narrow prediction intervals with poor coverage rates. Fitting using the model based on th...
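Two of the covariance structures compared above can be written directly as functions of the observation times. The plain (stationary) Ornstein-Uhlenbeck kernel is used here as a simpler stand-in for the integrated OU process in the paper, and the parameter values are illustrative only.

```python
import math

def ou_cov(times, sigma2, alpha):
    """Stationary OU covariance: Cov(t, s) = sigma2 * exp(-alpha * |t - s|)."""
    return [[sigma2 * math.exp(-alpha * abs(t - s)) for s in times]
            for t in times]

def bm_cov(times, sigma2):
    """Brownian motion covariance: Cov(t, s) = sigma2 * min(t, s)."""
    return [[sigma2 * min(t, s) for s in times] for t in times]

K = ou_cov([0.0, 1.0, 2.0], sigma2=1.0, alpha=1.0)
```

The qualitative difference matters for prediction: OU correlations decay with the time gap and the variance stays bounded, whereas Brownian motion variance grows linearly in time, so the two imply very different prediction intervals for the same data.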
Forecasting Binary Outcomes
, 2013
Cited by 2 (1 self)
Abstract
Binary events are involved in many economic decision problems. In recent years, considerable progress has been made in diverse disciplines in developing models for forecasting binary outcomes. We distinguish between two types of forecasts for binary events that are generally obtained as the output of regression models: probability forecasts and point forecasts. We summarize specification, estimation, and evaluation of binary response models for the purpose of forecasting in a unified framework which is characterized by the joint distribution of forecasts and actuals, and a general loss function. Analysis of both the skill and the value of probability and point forecasts can be carried out within this framework. Parametric, semiparametric, nonparametric, and Bayesian approaches are covered. The emphasis is on the basic intuitions underlying each methodology, abstracting away from the mathematical details.
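The distinction between the two forecast types can be sketched in a few lines: a logit model produces a probability forecast, and a loss function converts it into a 0/1 point forecast via the implied cutoff. The coefficients and costs below are hypothetical.

```python
import math

def prob_forecast(x, beta0, beta1):
    """Probability forecast P(y = 1 | x) from a fitted logit model."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

def point_forecast(p, cost_fp=1.0, cost_fn=1.0):
    """0/1 point forecast: predict 1 when p exceeds the loss-implied
    cutoff cost_fp / (cost_fp + cost_fn)."""
    return 1 if p > cost_fp / (cost_fp + cost_fn) else 0
```

With symmetric costs the cutoff is 0.5; if false positives are three times as costly as false negatives, the cutoff rises to 0.75 and the same probability forecast yields a different point forecast, which is why the framework treats the loss function as part of the forecasting problem.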