Results 1–10 of 30
Probabilistic forecasts, calibration and sharpness
 Journal of the Royal Statistical Society, Series B
, 2007
Abstract

Cited by 53 (16 self)
Summary. Probabilistic forecasts of continuous variables take the form of predictive densities or predictive cumulative distribution functions. We propose a diagnostic approach to the evaluation of predictive performance that is based on the paradigm of maximizing the sharpness of the predictive distributions subject to calibration. Calibration refers to the statistical consistency between the distributional forecasts and the observations and is a joint property of the predictions and the events that materialize. Sharpness refers to the concentration of the predictive distributions and is a property of the forecasts only. A simple theoretical framework allows us to distinguish between probabilistic calibration, exceedance calibration and marginal calibration. We propose and study tools for checking calibration and sharpness, among them the probability integral transform histogram, marginal calibration plots, the sharpness diagram and proper scoring rules. The diagnostic approach is illustrated by an assessment and ranking of probabilistic forecasts of wind speed at the Stateline wind energy centre in the US Pacific Northwest. In combination with cross-validation or in the time series context, our proposal provides very general, nonparametric alternatives to the use of information criteria for model diagnostics and model selection.
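The probability integral transform (PIT) check described in this abstract can be sketched in a few lines. The Gaussian forecaster and simulated observations below are illustrative assumptions, not material from the paper: for a calibrated forecaster, the PIT values F_t(x_t) should look uniform on (0, 1).

```python
# Minimal PIT-histogram sketch, assuming Gaussian predictive
# distributions; data and parameters are illustrative only.
import math
import random

def gaussian_cdf(x, mu, sigma):
    # Closed form for the normal CDF via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

random.seed(0)
# Observations drawn from N(0, 1), and a forecaster that issues the
# true predictive distribution N(0, 1) -- i.e. a calibrated forecaster.
obs = [random.gauss(0.0, 1.0) for _ in range(10000)]
pit = [gaussian_cdf(x, 0.0, 1.0) for x in obs]

# Under calibration the PIT values are uniform, so ten equal bins
# should each hold roughly 10% of the values.
bins = [0] * 10
for u in pit:
    bins[min(int(u * 10), 9)] += 1
freqs = [b / len(pit) for b in bins]
print(max(freqs) - min(freqs))  # small spread indicates rough uniformity
```

A miscalibrated forecaster (say, one issuing N(0, 2) for the same data) would instead produce a hump-shaped PIT histogram.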
Prequential Probability: Principles and Properties
, 1997
Abstract

Cited by 34 (3 self)
In this paper we first illustrate the above considerations for a variety of appealing criteria, and then, in an attempt to understand this behaviour, introduce a new game-theoretic framework for Probability Theory, the 'prequential framework', which is particularly suited for the study of such problems.
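The prequential idea — issue a forecast from past data only, observe the outcome, score it, repeat — can be sketched on a binary sequence. The data-generating probability and the Laplace-rule forecaster below are illustrative assumptions, not taken from the paper:

```python
# Prequential ("predict, then verify") evaluation sketch on binary
# data, assuming a Laplace-rule forecaster; illustrative only.
import math
import random

random.seed(2)
p_true = 0.7
bits = [1 if random.random() < p_true else 0 for _ in range(5000)]

ones = 0
log_loss = 0.0
for t, b in enumerate(bits):
    # The forecast for the next outcome uses only outcomes seen so far.
    p = (ones + 1) / (t + 2)  # Laplace's rule of succession
    log_loss += -math.log(p if b else 1.0 - p)
    ones += b

avg = log_loss / len(bits)
print(avg)  # approaches the entropy of a Bernoulli(0.7) source
```

Only the sequence of issued forecasts and realized outcomes enters the score, which is exactly the prequential principle: the forecaster is assessed without reference to its internal model.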
DEMPSTER-SHAFER INFERENCE WITH WEAK BELIEFS
Abstract

Cited by 14 (11 self)
Beliefs specified for predicting an unobserved realization of pivotal variables in the context of the fiducial and Dempster-Shafer (DS) inference can be weakened for credible inference. We consider predictive random sets for predicting an unobserved random sample from a known distribution, e.g., the uniform distribution U(0, 1). More specifically, we choose our beliefs for inference in two steps: (i) define a class of weak beliefs in terms of DS models for predicting an unobserved sample, and (ii) seek a belief within that class to balance the tradeoff between credibility and efficiency of the resulting DS inference. We call this approach the Maximal Belief (MB) method. The MB method is illustrated with two examples: (1) inference about µ based on a sample of size n from the Gaussian model N(µ, 1), and (2) inference about the number of outliers (µi ≠ 0) based on the observed data X1, ..., Xn with the model Xi ∼ N(µi, 1) independently. The first example shows that MB-DS analysis does a type of conditional inference. The second example demonstrates that MB posterior probabilities are easy to interpret for hypothesis testing.
Probability, Causality and the Empirical World: A Bayes-de Finetti-Popper-Borel Synthesis
 Statistical Science
, 2004
Abstract

Cited by 13 (0 self)
Abstract. This article expounds a philosophical approach to Probability and Causality: a synthesis of the personalist Bayesian views of de Finetti and Popper's falsificationist programme. A falsification method for probabilistic or causal theories, based on "Borel criteria," is described. It is argued that this minimalist approach, free of any distracting metaphysical inputs, provides the essential support required for the conduct and advance of Science.
On Optimal Sequential Prediction for General Processes
 IEEE Transactions on Information Theory
, 2001
Abstract

Cited by 8 (1 self)
In the stochastic sequential prediction problem, the elements of a random process X1, X2, ... ∈ R are successively revealed to a forecaster. At each time t the forecaster makes a prediction Ft of Xt based only on X1, ..., Xt−1; when Xt is revealed, the forecaster incurs a loss ℓ(Ft, Xt). This paper considers several aspects of the sequential prediction problem for unbounded, nonstationary processes under pth power loss, 1 < p < ∞. In the first part of the paper it is shown that Bayes prediction schemes are Cesàro optimal under general conditions, that Cesàro optimal prediction schemes are unique in a natural sense, and that Cesàro optimality is equivalent to a form of weak calibration. Extensions of the existence and uniqueness results to generalized prediction, and prediction from observations with additive noise, are established.
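The setting in this abstract can be sketched for the special case p = 2 (squared loss), where the Bayes predictor is the conditional mean. The i.i.d. Gaussian process and running-mean forecaster below are illustrative assumptions, not the paper's general construction:

```python
# Sequential prediction sketch under squared loss (p = 2),
# assuming an i.i.d. N(0.5, 1) process; illustrative only.
import random

random.seed(1)
mu = 0.5
xs = [random.gauss(mu, 1.0) for _ in range(20000)]

total_loss = 0.0
running_sum = 0.0
for t, x in enumerate(xs):
    # The forecast F_t uses only X_1, ..., X_{t-1}: here their sample
    # mean, which converges to the conditional (here unconditional) mean.
    forecast = running_sum / t if t > 0 else 0.0
    total_loss += (forecast - x) ** 2
    running_sum += x

cesaro = total_loss / len(xs)
print(cesaro)  # the Cesàro-average loss approaches Var(X) = 1
```

No scheme can beat the process variance in Cesàro average under squared loss for this process, which is the sense in which the mean-based predictor is Cesàro optimal here.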
A Nonmanipulable Test
 ANNALS OF STATISTICS
, 2009
Abstract

Cited by 8 (1 self)
A test is said to control for type I error if it is unlikely to reject the data-generating process. However, if it is possible to produce stochastic processes at random such that, for all possible future realizations of the data, the selected process is unlikely to be rejected, then the test is said to be manipulable. So, a manipulable test has essentially no capacity to reject a strategic expert. Many tests proposed in the existing literature, including calibration tests, control for type I error but are manipulable. We construct a test that controls for type I error and is nonmanipulable.
Prequential randomness
Abstract

Cited by 4 (2 self)
This paper studies Dawid's prequential framework from the point of view of the algorithmic theory of randomness. The main result is that two natural notions of randomness coincide. One notion is the prequential version of the standard definition due to Martin-Löf, and the other is the prequential version of the martingale definition of randomness due to Schnorr. This is another manifestation of the close relation between the two main paradigms of randomness, typicalness and unpredictability. The algorithmic theory of randomness can be stripped of the algorithms and still give meaningful results; the typicalness paradigm then corresponds to Kolmogorov's measure-theoretic probability and the unpredictability paradigm corresponds to game-theoretic probability. It is an open problem whether the main result of this paper continues to hold in the stripped version of the theory.
Merging of opinions in game-theoretic probability
, 2008
Abstract

Cited by 1 (1 self)
This paper gives game-theoretic versions of several results on "merging of opinions" obtained in measure-theoretic probability and algorithmic randomness theory. An advantage of the game-theoretic versions over the measure-theoretic results is that they are pointwise; their advantage over the algorithmic randomness results is that they are non-asymptotic; but the most important advantage over both is that they are very constructive, giving explicit and efficient strategies for players in a game of prediction.