Results 1–10 of 20
Probabilistic forecasts, calibration and sharpness
 Journal of the Royal Statistical Society, Series B
, 2007
"... Summary. Probabilistic forecasts of continuous variables take the form of predictive densities or predictive cumulative distribution functions. We propose a diagnostic approach to the evaluation of predictive performance that is based on the paradigm of maximizing the sharpness of the predictive dis ..."
Abstract

Cited by 38 (15 self)
 Add to MetaCart
Summary. Probabilistic forecasts of continuous variables take the form of predictive densities or predictive cumulative distribution functions. We propose a diagnostic approach to the evaluation of predictive performance that is based on the paradigm of maximizing the sharpness of the predictive distributions subject to calibration. Calibration refers to the statistical consistency between the distributional forecasts and the observations and is a joint property of the predictions and the events that materialize. Sharpness refers to the concentration of the predictive distributions and is a property of the forecasts only. A simple theoretical framework allows us to distinguish between probabilistic calibration, exceedance calibration and marginal calibration. We propose and study tools for checking calibration and sharpness, among them the probability integral transform histogram, marginal calibration plots, the sharpness diagram and proper scoring rules. The diagnostic approach is illustrated by an assessment and ranking of probabilistic forecasts of wind speed at the Stateline wind energy centre in the US Pacific Northwest. In combination with cross-validation or in the time series context, our proposal provides very general, nonparametric alternatives to the use of information criteria for model diagnostics and model selection.
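The probability integral transform (PIT) check described in this abstract is easy to sketch: evaluate each predictive CDF at the realized observation and histogram the resulting values, which should be approximately uniform on [0, 1] for a probabilistically calibrated forecaster. A minimal illustration, assuming (hypothetically) Gaussian predictive distributions and synthetic data; the function names and parameters are ours, not the paper's:

```python
import math
import random

def gaussian_cdf(x, mu, sigma):
    # Predictive CDF of a Gaussian forecast, evaluated at the observation.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def pit_histogram(observations, forecasts, bins=10):
    # PIT values z_t = F_t(x_t); for a probabilistically calibrated
    # forecaster these are approximately uniform on [0, 1], so the
    # histogram should be roughly flat.
    counts = [0] * bins
    for x, (mu, sigma) in zip(observations, forecasts):
        z = gaussian_cdf(x, mu, sigma)
        counts[min(int(z * bins), bins - 1)] += 1
    return counts

# Synthetic check: the "ideal" forecaster issues the true data-generating
# distribution, so its PIT histogram should come out roughly flat.
rng = random.Random(0)
obs = [rng.gauss(0.0, 1.0) for _ in range(10000)]
fcs = [(0.0, 1.0)] * len(obs)
print(pit_histogram(obs, fcs))
```

A miscalibrated forecaster (e.g. one issuing too-narrow predictive distributions) would instead produce a U-shaped histogram, with mass piling up in the outer bins.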
Prequential Probability: Principles and Properties
, 1997
"... this paper we first illustrate the above considerations for a variety of appealling criteria, and then, in an attempt to understand this behaviour, introduce a new gametheoretic framework for Probability Theory, the `prequential framework', which is particularly suited for the study of such problem ..."
Abstract

Cited by 33 (2 self)
 Add to MetaCart
In this paper we first illustrate the above considerations for a variety of appealing criteria, and then, in an attempt to understand this behaviour, introduce a new game-theoretic framework for Probability Theory, the 'prequential framework', which is particularly suited for the study of such problems.
A Nonmanipulable Test
 Annals of Statistics
, 2009
"... A test is said to control for type I error if it is unlikely to reject the datagenerating process. However, if it is possible to produce stochastic processes at random such that, for all possible future realizations of the data, the selected process is unlikely to be rejected, then the test is said ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
A test is said to control for type I error if it is unlikely to reject the data-generating process. However, if it is possible to produce stochastic processes at random such that, for all possible future realizations of the data, the selected process is unlikely to be rejected, then the test is said to be manipulable. So, a manipulable test has essentially no capacity to reject a strategic expert. Many tests proposed in the existing literature, including calibration tests, control for type I error but are manipulable. We construct a test that controls for type I error and is nonmanipulable.
On Optimal Sequential Prediction for General Processes
 IEEE Transactions on Information Theory
, 2001
"... In the stochastic sequential prediction problem, the elements of a random process X 1 , X 2 , ... 2 R are successively revealed to a forecaster. At each time t the forecaster makes a prediction F t of X t based only on X 1 , ..., X t 1 , when X t is revealed, the forecaster incurs a loss `(F t , X t ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
In the stochastic sequential prediction problem, the elements of a random process X_1, X_2, … ∈ R are successively revealed to a forecaster. At each time t the forecaster makes a prediction F_t of X_t based only on X_1, …, X_{t-1}; when X_t is revealed, the forecaster incurs a loss ℓ(F_t, X_t). This paper considers several aspects of the sequential prediction problem for unbounded, nonstationary processes under pth power loss, 1 < p < ∞. In the first part of the paper it is shown that Bayes prediction schemes are Cesàro optimal under general conditions, that Cesàro optimal prediction schemes are unique in a natural sense, and that Cesàro optimality is equivalent to a form of weak calibration. Extensions of the existence and uniqueness results to generalized prediction, and prediction from observations with additive noise, are established.
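The loss setup in this abstract can be made concrete with a small sketch: a running-mean forecaster (a natural choice under squared loss, p = 2) accumulates Cesàro-averaged loss on synthetic i.i.d. data. The predictor, function name, and data here are illustrative assumptions, not the paper's construction:

```python
import random

def cesaro_squared_loss(xs):
    # Predict F_t = mean(x_1, ..., x_{t-1}) (the Bayes act under squared
    # loss for an exchangeable model) and return the Cesaro-averaged
    # loss (1/n) * sum_t (F_t - x_t)^2.
    total, run_sum = 0.0, 0.0
    for t, x in enumerate(xs):
        f = run_sum / t if t > 0 else 0.0  # arbitrary first prediction
        total += (f - x) ** 2
        run_sum += x
    return total / len(xs)

# On i.i.d. N(2, 1) data the Cesaro-averaged loss approaches Var(X) = 1,
# the best achievable for any forecaster.
rng = random.Random(1)
xs = [rng.gauss(2.0, 1.0) for _ in range(5000)]
print(round(cesaro_squared_loss(xs), 2))
```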
Kolmogorov's Contributions to the Foundations of Probability
"... Andrei Nikolaevich Kolmogorov was the foremost contributor to the mathematical and philosophical foundations of probability in the twentieth century, and his thinking on the topic is still potent today. In this article we first review the three stages of Kolmogorov's work on the foundations of proba ..."
Abstract

Cited by 7 (2 self)
 Add to MetaCart
Andrei Nikolaevich Kolmogorov was the foremost contributor to the mathematical and philosophical foundations of probability in the twentieth century, and his thinking on the topic is still potent today. In this article we first review the three stages of Kolmogorov's work on the foundations of probability: (1) his formulation of measure-theoretic probability, 1933, (2) his frequentist theory of probability, 1963, and (3) his algorithmic theory of randomness, 1965–1987. We also discuss another approach to the foundations of probability, based on martingales, that Kolmogorov did not consider.
Prequential randomness
"... Abstract. This paper studies Dawid’s prequential framework from the point of view of the algorithmic theory of randomness. The main result is that two natural notions of randomness coincide. One notion is the prequential version of the standard definition due to MartinLöf, and the other is the preq ..."
Abstract

Cited by 4 (2 self)
 Add to MetaCart
This paper studies Dawid's prequential framework from the point of view of the algorithmic theory of randomness. The main result is that two natural notions of randomness coincide. One notion is the prequential version of the standard definition due to Martin-Löf, and the other is the prequential version of the martingale definition of randomness due to Schnorr. This is another manifestation of the close relation between the two main paradigms of randomness, typicalness and unpredictability. The algorithmic theory of randomness can be stripped of the algorithms and still give meaningful results; the typicalness paradigm then corresponds to Kolmogorov's measure-theoretic probability and the unpredictability paradigm corresponds to game-theoretic probability. It is an open problem whether the main result of this paper continues to hold in the stripped version of the theory.
On Optimal Sequential Decisions Schemes for General Processes
 IEEE Transactions on Information Theory
, 2000
"... In the stochastic sequential decision problem the elements of a random process X 1 , X 2 , ... are successively revealed to a decision scheme. At each time t 1 the scheme takes an action F t based on the observed values of X 1 , ..., X t 1 : when X t is revealed, the scheme incurs loss `(F t , X t ) ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
In the stochastic sequential decision problem the elements of a random process X_1, X_2, … are successively revealed to a decision scheme. At each time t ≥ 1 the scheme takes an action F_t based on the observed values of X_1, …, X_{t-1}; when X_t is revealed, the scheme incurs loss ℓ(F_t, X_t). The first part of the paper is devoted to some basic properties of Cesàro and strongly optimal decision schemes for general processes and strictly convex loss functions. It is shown in each case that optimal schemes are unique in a natural sense, and that optimality is equivalent to a form of calibration. For binary processes it is shown that thresholding an optimal prediction scheme for the squared loss yields an optimal binary prediction scheme for the Hamming loss. In the second part of the paper it is shown how to construct, from a countable family of candidate decision schemes, a single composite scheme whose asymptotic performance is as good as that of any member of the family. ...
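The thresholding claim for binary processes can be illustrated numerically: predict the running frequency of ones under squared loss, threshold that prediction at 1/2 to get a Hamming-loss action, and compare average losses on a hypothetical Bernoulli(0.7) process. All parameters below are illustrative, not from the paper:

```python
import random

# Hypothetical Bernoulli(p) binary process with p = 0.7.
rng = random.Random(2)
p = 0.7
xs = [1 if rng.random() < p else 0 for _ in range(20000)]

# Squared-loss scheme: predict the running frequency of ones.
# Hamming-loss scheme: threshold that prediction at 1/2.
sq_loss, ham_loss, ones = 0.0, 0, 0
for t, x in enumerate(xs):
    f = ones / t if t > 0 else 0.5   # arbitrary first prediction
    b = 1 if f > 0.5 else 0          # thresholded binary action
    sq_loss += (f - x) ** 2
    ham_loss += (b != x)
    ones += x

n = len(xs)
# Averaged squared loss tends to p*(1-p) = 0.21; averaged Hamming loss
# tends to min(p, 1-p) = 0.3, the best any binary scheme can do here.
print(round(sq_loss / n, 2), round(ham_loss / n, 2))
```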
Scoring, Nonresponse Adjustment, Imputation, Automated Model Building, National Survey of
"... (NSPY) represents a major component in the evaluation of an ongoing national media campaign designed to reduce illicit drug use among youth. Inperson surveys covering items on substance abuse, parenting practices, and awareness of antidrug media advertising are conducted with up to two youths and ..."
Abstract
 Add to MetaCart
(NSPY) represents a major component in the evaluation of an ongoing national media campaign designed to reduce illicit drug use among youth. In-person surveys covering items on substance abuse, parenting practices, and awareness of anti-drug media advertising are conducted with up to two youths and one adult per household. NSPY is organized into six-month-long data collection rounds. Semiannual reports are published after each wave (Hornik et al., 2000; Hornik et al., 2001). NSPY reports are on a very tight schedule, with just seven weeks for preparing analytic data sets. Allowing two weeks for cleaning data, only five weeks are left for repeatedly performing three types of modeling tasks: weighting, imputation and the preparation of counterfactual projections. Counterfactual projections