Strictly Proper Scoring Rules, Prediction, and Estimation
, 2007
Abstract

Cited by 357 (27 self)
Scoring rules assess the quality of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is proper if the forecaster maximizes the expected score for an observation drawn from the distribution F if he or she issues the probabilistic forecast F, rather than G ≠ F. It is strictly proper if the maximum is unique. In prediction problems, proper scoring rules encourage the forecaster to make careful assessments and to be honest. In estimation problems, strictly proper scoring rules provide attractive loss and utility functions that can be tailored to the problem at hand. This article reviews and develops the theory of proper scoring rules on general probability spaces, and proposes and discusses examples thereof. Proper scoring rules derive from convex functions and relate to information measures, entropy functions, and Bregman divergences. In the case of categorical variables, we prove a rigorous version of the Savage representation. Examples of scoring rules for probabilistic forecasts in the form of predictive densities include the logarithmic, spherical, pseudospherical, and quadratic scores. The continuous ranked probability score applies to probabilistic forecasts that take the form of predictive cumulative distribution functions. It generalizes the absolute error and forms a special case of a new and very general type of score, the energy score. Like many other scoring rules, the energy score admits a kernel representation in terms of negative definite functions, with links to inequalities of Hoeffding type, in both univariate and multivariate settings. Proper scoring rules for quantile and interval forecasts are also discussed. We relate proper scoring rules to Bayes factors and to cross-validation, and propose a novel form of cross-validation known as random-fold cross-validation.
A case study on probabilistic weather forecasts in the North American Pacific Northwest illustrates the importance of propriety. We note optimum score approaches to point and quantile …
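The notion of propriety in this abstract can be checked numerically. The sketch below (not from the paper; the binary case and the grid search are illustrative choices) verifies that the quadratic (Brier) score, in its positively oriented form, is maximized in expectation by the honest forecast:

```python
# Minimal sketch: propriety of the quadratic (Brier) score for a binary
# event with true probability p = 0.7. The forecast q that maximizes the
# expected score should be q = p, i.e. honesty is optimal.

def quadratic_score(q, y):
    """Positively oriented quadratic score for a binary forecast.

    q: forecast probability of the event, y: outcome in {0, 1}.
    S(q, y) = 2*q_y - (q^2 + (1-q)^2), where q_y is the probability
    assigned to the outcome that materialized.
    """
    q_y = q if y == 1 else 1.0 - q
    return 2.0 * q_y - (q * q + (1.0 - q) * (1.0 - q))

def expected_score(q, p):
    """Expected score of forecast q when the event has true probability p."""
    return p * quadratic_score(q, 1) + (1.0 - p) * quadratic_score(q, 0)

p = 0.7
grid = [i / 1000 for i in range(1001)]
best_q = max(grid, key=lambda q: expected_score(q, p))
# Propriety: the grid maximizer of the expected score is the truth, p.
```

The same experiment with an improper rule (say, linear score 1 − |q − y|) would instead push the maximizer toward 0 or 1, which is exactly the incentive problem proper rules avoid.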
Estimation of Tail-Related Risk Measures for Heteroscedastic Financial Time Series: an Extreme Value Approach
 Journal of Empirical Finance
, 1998
Abstract

Cited by 230 (6 self)
We propose a method for estimating VaR and related risk measures describing the tail of the conditional distribution of a heteroscedastic financial return series. Our approach combines pseudo-maximum-likelihood fitting of GARCH models to estimate the current volatility and extreme value theory (EVT) for estimating the tail of the innovation distribution of the GARCH model. We use our method to estimate conditional quantiles (VaR) and conditional expected shortfalls (the expected size of a return exceeding VaR), this being an alternative measure of tail risk with better theoretical properties than the quantile. Using backtesting of historical daily return series we show that our procedure gives better one-day estimates than methods which ignore the heavy tails of the innovations or the stochastic nature of the volatility. With the help of our fitted models we adopt a Monte Carlo approach to estimating the conditional quantiles of returns over multiple-day horizons and find that t...
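The two-step logic (filter volatility, then model the tail of the standardized residuals) can be sketched compactly. The sketch below is not the paper's procedure: an EWMA recursion stands in for the pseudo-ML GARCH fit, a Hill/Weissman tail quantile stands in for the GPD fit, and all data and parameter values are simulated assumptions:

```python
# Hedged sketch of the conditional-EVT idea: volatility filter + tail
# quantile of standardized residuals -> conditional VaR forecast.
import math
import random

random.seed(0)

# Simulated heteroscedastic loss series (illustrative stand-in for data).
n = 4000
sigma, losses = 1.0, []
for _ in range(n):
    prev = losses[-1] ** 2 if losses else 1.0
    sigma = math.sqrt(0.05 + 0.90 * sigma ** 2 + 0.05 * prev)
    losses.append(sigma * random.gauss(0, 1))

# Step 1: EWMA (RiskMetrics-style) volatility filter, a crude stand-in
# for a fitted GARCH model; form standardized residuals z_t = x_t / sigma_t.
lam, s2 = 0.94, losses[0] ** 2
residuals = []
for x in losses:
    residuals.append(x / math.sqrt(s2))
    s2 = lam * s2 + (1 - lam) * x * x
sigma_next = math.sqrt(s2)  # one-step-ahead volatility forecast

# Step 2: tail of the residual distribution via the Hill estimator and a
# Weissman-type quantile, standing in for the paper's GPD fit.
z = sorted(residuals, reverse=True)  # upper tail = large standardized losses
k = 200                              # number of upper order statistics used
xi = sum(math.log(z[i] / z[k]) for i in range(k)) / k  # Hill tail index
p = 0.01                             # tail probability for a 99% VaR
z_q = z[k] * (k / (n * p)) ** xi     # EVT quantile of the residuals

# Conditional VaR: volatility forecast times the EVT residual quantile.
var_99 = sigma_next * z_q
```

The separation mirrors the abstract: the dynamic part (volatility) and the tail part (innovation distribution) are estimated with tools suited to each.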
Multivariate Density Forecast Evaluation and Calibration in Financial Risk Management: High-Frequency Returns on Foreign Exchange
 Review of Economics and Statistics
, 1999
Abstract

Cited by 130 (19 self)
We provide a framework for evaluating and improving multivariate density forecasts. Among other things, the multivariate framework lets us evaluate the adequacy of density forecasts involving cross-variable interactions, such as time-varying conditional correlations. We also provide conditions under which a technique of density forecast “calibration” can be used to improve deficient density forecasts. Finally, motivated by recent advances in financial risk management, we provide a detailed application to multivariate high-frequency exchange rate density forecasts.
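A standard building block for this kind of evaluation is the probability integral transform (PIT): if the density forecasts are correct, the values u_t = F_t(y_t) are uniform on (0, 1). The sketch below is illustrative only (simulated data, assumed parameters), showing how a miscalibrated forecast reveals itself in the PIT histogram:

```python
# Sketch: PIT values are uniform under a correct density forecast; an
# overdispersed (too wide) forecast piles PIT values up near 0.5.
import random
from statistics import NormalDist

random.seed(1)
n = 5000
mus = [random.gauss(0, 2) for _ in range(n)]  # observed conditioning signal
ys = [random.gauss(mu, 1) for mu in mus]      # realizations: y_t ~ N(mu_t, 1)

def pit_histogram(forecast_sd, bins=10):
    """Histogram counts of the PIT values of the forecast N(mu_t, sd^2)."""
    counts = [0] * bins
    for mu, y in zip(mus, ys):
        u = NormalDist(mu, forecast_sd).cdf(y)
        counts[min(int(u * bins), bins - 1)] += 1
    return counts

calibrated = pit_histogram(1.0)     # correct predictive sd -> flat histogram
overdispersed = pit_histogram(2.0)  # too-wide forecasts -> hump in the middle
```

Diagnosing the *shape* of the departure from uniformity (hump, U-shape, skew) is what points toward the kind of recalibration the paper studies.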
Probabilistic forecasts, calibration and sharpness
 Journal of the Royal Statistical Society Series B
, 2007
Abstract

Cited by 113 (22 self)
Summary. Probabilistic forecasts of continuous variables take the form of predictive densities or predictive cumulative distribution functions. We propose a diagnostic approach to the evaluation of predictive performance that is based on the paradigm of maximizing the sharpness of the predictive distributions subject to calibration. Calibration refers to the statistical consistency between the distributional forecasts and the observations and is a joint property of the predictions and the events that materialize. Sharpness refers to the concentration of the predictive distributions and is a property of the forecasts only. A simple theoretical framework allows us to distinguish between probabilistic calibration, exceedance calibration and marginal calibration. We propose and study tools for checking calibration and sharpness, among them the probability integral transform histogram, marginal calibration plots, the sharpness diagram and proper scoring rules. The diagnostic approach is illustrated by an assessment and ranking of probabilistic forecasts of wind speed at the Stateline wind energy centre in the US Pacific Northwest. In combination with cross-validation or in the time series context, our proposal provides very general, nonparametric alternatives to the use of information criteria for model diagnostics and model selection.
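The "sharpness subject to calibration" paradigm can be illustrated with two calibrated forecasters of different sharpness. In the sketch below (illustrative setup, not the paper's case study) both the conditional forecaster N(mu_t, 1) and the climatological forecaster using the correct marginal N(0, 5) are calibrated, but the sharper one earns a better mean logarithmic score:

```python
# Sketch: two calibrated forecasters for y_t ~ N(mu_t, 1) with mu_t ~ N(0, 4).
# The marginal of y_t is N(0, 5), so the climatological forecast is also
# calibrated -- but less sharp, and the log score ranks it below the
# conditional forecast.
import math
import random
from statistics import NormalDist

random.seed(2)
n = 20000
mus = [random.gauss(0, 2) for _ in range(n)]
ys = [random.gauss(mu, 1) for mu in mus]

def mean_log_score(forecasts):
    """Average log predictive density over the sample (higher is better)."""
    return sum(math.log(f.pdf(y)) for f, y in zip(forecasts, ys)) / n

sharp = mean_log_score([NormalDist(mu, 1.0) for mu in mus])
climatological = mean_log_score([NormalDist(0.0, math.sqrt(5.0)) for _ in mus])
```

A proper scoring rule thus encodes both criteria at once, which is why the paper pairs scoring rules with the separate calibration diagnostics.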
Density Forecasting: A Survey
 Journal of Forecasting
, 2000
Abstract

Cited by 104 (11 self)
A density forecast of the realization of a random variable at some future time is an estimate of the probability distribution of the possible future values of that variable. This chapter presents a selective survey of applications of density forecasting in macroeconomics and finance, and discusses some issues concerning the production, presentation, and evaluation of density forecasts. This chapter first appeared as an article with the same title in Journal of Forecasting, 19 (2000), 235–254. The helpful comments and suggestions of Frank Diebold, Stewart Hodges and two anonymous referees are gratefully acknowledged. Subsequent editorial changes have been made following suggestions from the editors of this volume. Responsibility for errors remains with the authors.
Estimation of copula-based semiparametric time series models
 J. Econometrics
, 2006
Abstract

Cited by 84 (11 self)
This paper studies the estimation of a class of copula-based semiparametric stationary Markov models. These models are characterized by nonparametric invariant (or marginal) distributions and parametric copula functions that capture the temporal dependence of the processes; the implied transition distributions are all semiparametric. Models in this class are easy to simulate, and can be expressed as semiparametric regression transformation models. One advantage of this copula approach is to separate out the temporal dependence (such as tail dependence) from the marginal behavior (such as fat-tailedness) of a time series. We present conditions under which processes generated by models in this class are β-mixing; naturally, these conditions depend only on the copula specification. Simple estimators of the marginal distribution and the copula parameter are provided, and their asymptotic properties are established under easily verifiable conditions. Estimators of important features of the transition distribution such as the (nonlinear) conditional moments and conditional quantiles are easily obtained from estimators of the marginal distribution and the copula parameter; their √n-consistency and asymptotic normality can be obtained using the Delta method. In addition, the semiparametric …
The Statistical and Economic Role of Jumps in Continuous-Time Interest Rate Models
, 2001
Abstract

Cited by 73 (0 self)
This paper provides an empirical analysis of the role of jumps in continuous-time models of the short rate. Statistically, if jumps are present diffusion models are misspecified and I develop a test to detect jump-induced misspecification. After finding evidence for jumps, I introduce a nonparametric jump-diffusion model and develop an estimation methodology. The results point toward a dominant statistical role for jumps in determining the dynamics of the short rate relative to diffusive components. Estimates of jump times and sizes indicate that jumps serve an interesting economic purpose: they provide a main conduit for information about the macroeconomy to enter the term structure. Finally, I investigate the pricing implications of jumps. While jumps do not appear to have a large impact on the cross-section of bond prices, they do have important implications for interest rate derivatives.
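The kind of process at issue can be written down in a few lines. The sketch below (illustrative only; the Vasicek drift and every parameter value are assumptions, not the paper's nonparametric estimates) simulates a short rate with compound Poisson jumps via an Euler scheme:

```python
# Sketch: Euler scheme for a jump-diffusion short rate,
#   dr = kappa*(mu - r) dt + sigma dW + J dN,
# with N a Poisson process of intensity lam and J ~ N(0, jump_sd^2).
# All parameter values are arbitrary illustrative choices.
import math
import random

random.seed(4)
kappa, mu, sigma = 0.5, 0.05, 0.01  # mean reversion, level, diffusion vol
lam, jump_sd = 5.0, 0.004           # jump intensity (per year), jump size sd
dt, n = 1.0 / 252, 2520             # daily steps over ten years

r, rates = 0.05, []
for _ in range(n):
    diffusion = kappa * (mu - r) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    # Bernoulli thinning approximates the Poisson jump indicator over dt.
    jump = random.gauss(0, jump_sd) if random.random() < lam * dt else 0.0
    r += diffusion + jump
    rates.append(r)
```

Relative to the pure diffusion (lam = 0), the jumps fatten the tails of daily rate changes, which is the kind of misspecification the paper's test is built to detect.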
Rank − 1/2: A Simple Way to Improve the OLS Estimation of Tail Exponents
 mimeo
, 2006
Abstract

Cited by 60 (8 self)
A popular way to estimate a Pareto exponent is to run an OLS regression: log(Rank) = c − b log(Size), and take b as an estimate of the Pareto exponent. Unfortunately, this procedure is strongly biased in small samples. We provide a simple practical remedy for this bias, and argue that, if one wants to use an OLS regression, one should use Rank − 1/2, and run log(Rank − 1/2) = c − b log(Size). The shift of 1/2 is optimal, and cancels the bias to a leading order. The standard error on the Pareto exponent is not the OLS standard error, but is asymptotically (2/n)^(1/2) b. To obtain this result, we provide asymptotic expansions for the OLS estimate in such log-log rank-size regressions with arbitrary shifts in the ranks.
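The proposed correction is simple enough to check by Monte Carlo. The sketch below (sample size, replication count, and data-generating choices are assumptions for illustration) runs both the naive log(Rank) regression and the shifted log(Rank − 1/2) regression on simulated Pareto(b = 1) samples and compares the average slope with the true exponent:

```python
# Sketch: small-sample bias of the naive rank-size regression versus the
# Rank - 1/2 correction, on simulated Pareto data with true exponent b = 1.
import math
import random

random.seed(5)

def ols_slope(xs, ys):
    """Slope of the OLS fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def rank_size_slope(sizes, shift):
    """b estimate from log(Rank - shift) = c - b log(Size)."""
    ordered = sorted(sizes, reverse=True)
    log_size = [math.log(s) for s in ordered]
    log_rank = [math.log(rank - shift) for rank in range(1, len(ordered) + 1)]
    return -ols_slope(log_size, log_rank)

true_b, n_sample, n_reps = 1.0, 20, 4000
naive, shifted = [], []
for _ in range(n_reps):
    # Inverse-transform sampling: U^(-1/b) is Pareto with exponent b.
    sizes = [(1.0 - random.random()) ** (-1.0 / true_b) for _ in range(n_sample)]
    naive.append(rank_size_slope(sizes, 0.0))
    shifted.append(rank_size_slope(sizes, 0.5))

bias_naive = sum(naive) / n_reps - true_b
bias_shifted = sum(shifted) / n_reps - true_b
# The 1/2 shift should leave much less small-sample bias than the naive fit.
```

The comparison of `bias_naive` and `bias_shifted` is exactly the small-sample experiment the abstract's claim invites.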
Estimation of multivariate models for time series of possibly different lengths
 Journal of Applied Econometrics
, 2006
Abstract

Cited by 55 (5 self)
We consider the problem of estimating parametric multivariate density models when unequal amounts of data are available on each variable. We focus in particular on the case that the unknown parameter vector may be partitioned into elements relating only to a marginal distribution and elements relating to the copula. In such a case we propose using a multi-stage maximum likelihood estimator (MSMLE) based on all available data rather than the usual one-stage maximum likelihood estimator (1SMLE) based only on the overlapping data. We provide conditions under which the MSMLE is not less asymptotically efficient than the 1SMLE, and we examine the small sample efficiency of the estimators via simulations. The analysis in this paper is motivated by a model of the joint distribution of daily Japanese yen–US dollar and euro–US dollar exchange rates. We find significant evidence of time variation in the conditional copula of these exchange rates, and evidence of greater dependence during extreme events than under the normal distribution.
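The multi-stage idea can be sketched in a stylized Gaussian setting (illustrative assumptions throughout; the paper's MSMLE covers general parametric copulas and marginals): marginal parameters are estimated from all available observations of each series, while the dependence parameter of a Gaussian copula is estimated only from the overlapping pairs, via normal scores of the fitted marginals:

```python
# Sketch: two-stage estimation with unequal sample lengths. Series 1 has
# 3000 observations, series 2 only 1000 overlapping ones; marginals use
# all data, the Gaussian-copula correlation uses the overlap only.
import math
import random
from statistics import NormalDist, fmean, pstdev

random.seed(6)
rho_true = 0.6
n_long, n_overlap = 3000, 1000

x1, x2 = [], []
for i in range(n_long):
    z1 = random.gauss(0, 1)
    x1.append(1.0 + 2.0 * z1)  # marginal N(1, 4), fully observed
    if i < n_overlap:          # series 2 observed only on the overlap
        z2 = rho_true * z1 + math.sqrt(1 - rho_true ** 2) * random.gauss(0, 1)
        x2.append(-0.5 + 1.5 * z2)  # marginal N(-0.5, 2.25)

# Stage 1: marginal MLEs use ALL available data for each series.
m1, s1 = fmean(x1), pstdev(x1)
m2, s2 = fmean(x2), pstdev(x2)

# Stage 2: copula parameter from the overlapping pairs only, via normal
# scores z_t = Phi^{-1}(F_hat(x_t)) of the fitted marginals.
std = NormalDist()
z1s = [std.inv_cdf(NormalDist(m1, s1).cdf(x)) for x in x1[:n_overlap]]
z2s = [std.inv_cdf(NormalDist(m2, s2).cdf(x)) for x in x2]
rho_hat = fmean(a * b for a, b in zip(z1s, z2s))
```

The gain the paper quantifies comes from stage 1: the extra non-overlapping observations tighten the marginal estimates, which feed into the copula stage.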
Making and evaluating point forecasts
 Journal of the American Statistical Association