Results 1 - 10 of 54
Strictly Proper Scoring Rules, Prediction, and Estimation, 2007
Cited by 373 (28 self)
Scoring rules assess the quality of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is proper if the forecaster maximizes the expected score for an observation drawn from the distribution F if he or she issues the probabilistic forecast F, rather than G ≠ F. It is strictly proper if the maximum is unique. In prediction problems, proper scoring rules encourage the forecaster to make careful assessments and to be honest. In estimation problems, strictly proper scoring rules provide attractive loss and utility functions that can be tailored to the problem at hand. This article reviews and develops the theory of proper scoring rules on general probability spaces, and proposes and discusses examples thereof. Proper scoring rules derive from convex functions and relate to information measures, entropy functions, and Bregman divergences. In the case of categorical variables, we prove a rigorous version of the Savage representation. Examples of scoring rules for probabilistic forecasts in the form of predictive densities include the logarithmic, spherical, pseudospherical, and quadratic scores. The continuous ranked probability score applies to probabilistic forecasts that take the form of predictive cumulative distribution functions. It generalizes the absolute error and forms a special case of a new and very general type of score, the energy score. Like many other scoring rules, the energy score admits a kernel representation in terms of negative definite functions, with links to inequalities of Hoeffding type, in both univariate and multivariate settings. Proper scoring rules for quantile and interval forecasts are also discussed. We relate proper scoring rules to Bayes factors and to cross-validation, and propose a novel form of cross-validation known as random-fold cross-validation. A case study on probabilistic weather forecasts in the North American Pacific Northwest illustrates the importance of propriety. We note optimum score approaches to point and quantile estimation.
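As a minimal Python sketch (ours, not code from the article), the continuous ranked probability score for a Gaussian predictive distribution has a closed form, and a small simulation illustrates propriety numerically: the true forecast distribution attains a better (lower) mean CRPS than a miscalibrated alternative.

```python
import math
import random

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of the Gaussian forecast N(mu, sigma^2) at observation y.
    CRPS is negatively oriented: smaller values indicate a better forecast."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# Propriety, numerically: score competing forecasts against draws from N(0, 1).
random.seed(0)
obs = [random.gauss(0, 1) for _ in range(20000)]
true_crps = sum(crps_normal(0, 1, y) for y in obs) / len(obs)
wide_crps = sum(crps_normal(0, 2, y) for y in obs) / len(obs)
assert true_crps < wide_crps  # the true distribution N(0, 1) scores best
```

Because the CRPS generalizes the absolute error, it can be compared directly with deterministic forecast errors on the same scale as the observations.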
Probabilistic forecasts, calibration and sharpness
- Journal of the Royal Statistical Society Series B, 2007
Cited by 116 (23 self)
Summary. Probabilistic forecasts of continuous variables take the form of predictive densities or predictive cumulative distribution functions. We propose a diagnostic approach to the evaluation of predictive performance that is based on the paradigm of maximizing the sharpness of the predictive distributions subject to calibration. Calibration refers to the statistical consistency between the distributional forecasts and the observations and is a joint property of the predictions and the events that materialize. Sharpness refers to the concentration of the predictive distributions and is a property of the forecasts only. A simple theoretical framework allows us to distinguish between probabilistic calibration, exceedance calibration and marginal calibration. We propose and study tools for checking calibration and sharpness, among them the probability integral transform histogram, marginal calibration plots, the sharpness diagram and proper scoring rules. The diagnostic approach is illustrated by an assessment and ranking of probabilistic forecasts of wind speed at the Stateline wind energy centre in the US Pacific Northwest. In combination with cross-validation or in the time series context, our proposal provides very general, nonparametric alternatives to the use of information criteria for model diagnostics and model selection.
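As an illustration (our sketch, not code from the paper), the probability integral transform underlying the PIT histogram is simply the predictive CDF evaluated at the observation: a probabilistically calibrated forecaster produces uniform PIT values, while an overdispersed forecaster produces a hump-shaped histogram.

```python
import math
import random

def normal_cdf(mu, sigma):
    """Predictive CDF of a Gaussian forecast N(mu, sigma^2)."""
    return lambda y: 0.5 * (1 + math.erf((y - mu) / (sigma * math.sqrt(2))))

def pit_histogram(forecast_cdf, observations, bins=10):
    """Bin the PIT values F(y); near-uniform counts indicate probabilistic calibration."""
    counts = [0] * bins
    for y in observations:
        z = forecast_cdf(y)
        counts[min(int(z * bins), bins - 1)] += 1
    return counts

random.seed(1)
obs = [random.gauss(0, 1) for _ in range(50000)]
calibrated = pit_histogram(normal_cdf(0, 1), obs)     # roughly flat
overdispersed = pit_histogram(normal_cdf(0, 2), obs)  # peaked in the middle
```

A flat PIT histogram is necessary but not sufficient for an ideal forecaster, which is why the paper pairs calibration checks with sharpness diagnostics and proper scoring rules.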
Calibrated probabilistic forecasting at the Stateline wind energy center: The regime-switching space-time (RST) method
- Journal of the American Statistical Association, 2004
Cited by 35 (14 self)
With the global proliferation of wind power, accurate short-term forecasts of wind resources at wind energy sites are becoming paramount. Regime-switching space-time (RST) models merge meteorological and statistical expertise to obtain accurate and calibrated, fully probabilistic forecasts of wind speed and wind power. The model formulation is parsimonious, yet takes account of all the salient features of wind speed: alternating atmospheric regimes, temporal and spatial correlation, diurnal and seasonal non-stationarity, conditional heteroscedasticity, and non-Gaussianity. The RST method identifies forecast regimes at the wind energy site and fits a conditional predictive model for each regime. Geographically dispersed meteorological observations in the vicinity of the wind farm are used as off-site predictors. The RST technique was applied to 2-hour-ahead forecasts of hourly average wind speed at the Stateline wind farm in the US Pacific Northwest. In July 2003, for instance, the RST forecasts had root-mean-square error (RMSE) 28.6% less than the persistence forecasts. For each month in the test period, the RST forecasts had lower RMSE than forecasts using state-of-the-art vector time series techniques. The RST method provides probabilistic forecasts in the form of predictive cumulative distribution functions.
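For context, the persistence benchmark used in such comparisons simply carries the last observed value forward, and RMSE is computed over the resulting forecast errors. A hypothetical sketch (the wind speed values below are made up, not the Stateline data):

```python
import math

def rmse(pred, actual):
    """Root-mean-square error of paired forecasts and observations."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

def persistence_forecast(series, horizon):
    """k-step-ahead persistence: the forecast for time t + horizon is the value at t."""
    return series[:-horizon]

speeds = [5.0, 5.5, 6.1, 7.0, 6.4, 5.9, 6.8, 7.2]  # hypothetical hourly wind speeds (m/s)
h = 2  # 2-hour-ahead horizon, as in the Stateline study
error = rmse(persistence_forecast(speeds, h), speeds[h:])
```

Persistence is a deliberately naive baseline; a candidate method earns its keep by beating it month after month, as the RST forecasts did.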
Probabilistic forecasts of wind speed: ensemble model output statistics by using heteroscedastic censored regression, 2009
Wind Power Density Forecasting Using Ensemble Predictions and Time Series Models
Cited by 11 (2 self)
Wind power is an increasingly used form of renewable energy. The uncertainty in wind generation is largely due to the inherent variability in wind speed, and this needs to be understood by operators of power systems and wind farms. To assist with the management of this risk, this paper investigates methods for predicting the probability density function of generated wind power from 1 to 10 days ahead at five UK wind farm locations. These density forecasts provide a description of the expected future value and the associated uncertainty. We construct density forecasts from weather ensemble predictions, which are a relatively new type of weather forecast generated from atmospheric models. We also consider density forecasting from statistical time series models. The best results for wind power density prediction and point forecasting were produced by an approach that involves calibration and smoothing of the ensemble-based wind power density.
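One simple way to turn a discrete weather ensemble into a smooth predictive density, in the spirit of the smoothing step described above, is a Gaussian kernel density estimate over the ensemble members. This is a generic sketch with made-up values; the paper's calibration scheme is more involved.

```python
import math

def ensemble_density(members, bandwidth):
    """Gaussian kernel density estimate built from ensemble members:
    an equal-weight mixture of normals centred at each member."""
    norm = bandwidth * math.sqrt(2 * math.pi)
    def density(x):
        return sum(
            math.exp(-0.5 * ((x - m) / bandwidth) ** 2) for m in members
        ) / (len(members) * norm)
    return density

members = [3.2, 4.1, 4.4, 5.0, 6.3]  # hypothetical ensemble of wind power values (MW)
f = ensemble_density(members, bandwidth=0.8)
```

The bandwidth controls the trade-off the paper is concerned with: too small and the density is spiky and overconfident, too large and it is overdispersed, which is where calibration against past observations comes in.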
Density forecasting for weather derivative pricing
- International Journal of Forecasting, 2006
Pricing of Asian temperature risk, 2010
Cited by 4 (4 self)
Weather derivatives (WD) are different from most financial derivatives because the underlying weather cannot be traded and therefore cannot be replicated by other financial instruments. The market price of risk (MPR) is an important parameter of the associated equivalent martingale measures used to price and hedge weather futures/options in the market. The majority of papers so far have priced non-tradable assets assuming zero MPR, but this assumption underestimates WD prices. We study the MPR structure as a time-dependent object, with a focus on emerging markets in Asia. We find that Asian temperatures (Tokyo, Osaka, Beijing, Taipei) are normal in the sense that the driving stochastics are close to a Wiener process. The regression residuals of the temperature show a clear seasonal variation, and the volatility term structure of CAT temperature futures presents a modified Samuelson effect. In order to achieve normality of the standardized residuals, the seasonal variation is calibrated with a combination of a truncated Fourier series, a GARCH model, and a local linear regression. By calibrating model prices, we imply the MPR from cumulative total of 24-hour average temperature (C24AT) futures for Japanese cities, or, by knowing the formal dependence of the MPR on seasonal variation, we price derivatives for Kaohsiung, where no weather derivative market exists. The findings support theoretical results of an inverse relation between the MPR and the seasonal variation of the temperature process.
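The truncated Fourier series used for the seasonal temperature mean can be sketched as follows; the coefficients below are hypothetical, chosen only to illustrate the functional form, not fitted values from the paper.

```python
import math

def fourier_seasonality(t, a0, harmonics, period=365.25):
    """Truncated Fourier series for the seasonal temperature mean:
    a0 + sum over k of (a_k * cos(2*pi*k*t/period) + b_k * sin(2*pi*k*t/period)),
    where t is the day index and harmonics is a list of (a_k, b_k) pairs."""
    value = a0
    for k, (a, b) in enumerate(harmonics, start=1):
        w = 2 * math.pi * k * t / period
        value += a * math.cos(w) + b * math.sin(w)
    return value

# hypothetical fit: 15 degrees C annual mean with a single annual harmonic
season = lambda t: fourier_seasonality(t, 15.0, [(-10.0, 2.0)])
```

The residuals left after subtracting such a seasonal mean are what the GARCH and local linear regression components then model, as described in the abstract.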
A Semiparametric Panel Model for Unbalanced Data with an Application to Climate Change in the United Kingdom, 2010
Cited by 4 (1 self)
This paper is concerned with developing a semiparametric panel model to explain the trend in UK temperatures and other weather outcomes over the last century. We work with the monthly averaged maximum and minimum temperatures observed at the twenty-six Meteorological Office stations. The data form an unbalanced panel. We allow the trend to evolve in a nonparametric way, so that we obtain a fuller picture of the evolution of common temperature over the medium timescale. Profile likelihood estimators (PLEs) are proposed and their statistical properties are studied. The proposed PLE has improved asymptotic properties compared with the sequential two-step estimators. Finally, forecasting based on the proposed model is studied.
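A nonparametric trend of this kind is commonly estimated with a local linear smoother; the sketch below uses a Gaussian kernel and is our generic illustration, not the paper's profile likelihood procedure.

```python
import math

def local_linear(x, xs, ys, h):
    """Local linear regression estimate of the trend at x (Gaussian kernel, bandwidth h).
    Solves the kernel-weighted least squares fit y ~ a + b * (x_i - x) and returns a,
    the fitted level at x."""
    w = [math.exp(-0.5 * ((xi - x) / h) ** 2) for xi in xs]
    s0 = sum(w)
    s1 = sum(wi * (xi - x) for wi, xi in zip(w, xs))
    s2 = sum(wi * (xi - x) ** 2 for wi, xi in zip(w, xs))
    t0 = sum(wi * yi for wi, yi in zip(w, ys))
    t1 = sum(wi * (xi - x) * yi for wi, xi, yi in zip(w, xs, ys))
    return (s2 * t0 - s1 * t1) / (s0 * s2 - s1 * s1)
```

A local linear fit reproduces straight-line data exactly for any bandwidth, which makes it less biased near the sample boundaries than a simple kernel average, a useful property when the object of interest is a trend over the full observation period.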