Results 11–20 of 94
Reliability, Sufficiency, and the Decomposition of Proper Scores
, 2008
Abstract

Cited by 8 (0 self)
Scoring rules are an important tool for evaluating the performance of probabilistic forecasting schemes. In the binary case, strictly proper scoring rules allow for a decomposition into terms related to the resolution and to the reliability of the forecast. This fact is particularly well known for the Brier Score. In this paper, this result is extended to forecasts for finite-valued targets. Both resolution and reliability are shown to have a positive effect on the score. It is demonstrated that resolution and reliability are directly related to forecast attributes which are desirable on grounds independent of the notion of scores. This finding can be considered an epistemological justification of measuring forecast quality by proper scores. A link is provided to the original work of DeGroot and Fienberg (1982), extending their concepts of sufficiency and refinement. The relation to the conjectured sharpness principle of Gneiting et al. (2005a) is elucidated.
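The binary-case decomposition referred to above is the classical Murphy decomposition of the Brier score, BS = REL - RES + UNC, where REL (reliability) penalizes and RES (resolution) rewards the forecast. A minimal pure-Python sketch with made-up illustrative data; the identity is exact when forecast probabilities take finitely many values:

```python
def brier_decomposition(p, o):
    """Murphy decomposition of the Brier score: BS = REL - RES + UNC.

    p: forecast probabilities (assumed to take finitely many values)
    o: binary outcomes (0 or 1)
    """
    n = len(p)
    bs = sum((pi - oi) ** 2 for pi, oi in zip(p, o)) / n
    obar = sum(o) / n                      # base rate of the event
    rel = res = 0.0
    for pk in set(p):                      # group cases by forecast value
        group = [oi for pi, oi in zip(p, o) if pi == pk]
        nk, obar_k = len(group), sum(group) / len(group)
        rel += nk * (pk - obar_k) ** 2 / n     # reliability: p vs. conditional freq.
        res += nk * (obar_k - obar) ** 2 / n   # resolution: conditional vs. base rate
    unc = obar * (1 - obar)                # uncertainty: variance of the outcome
    return bs, rel, res, unc

p = [0.2, 0.2, 0.8, 0.8, 0.5, 0.5]
o = [0, 1, 1, 1, 0, 1]
bs, rel, res, unc = brier_decomposition(p, o)
```

A smaller score is better, so reliability terms hurt the score while resolution helps it, matching the "positive effect" described in the abstract.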
ensembleBMA: An R Package for Probabilistic Forecasting using Ensembles and Bayesian Model Averaging
, 2007
Abstract

Cited by 7 (2 self)
ensembleBMA is a contributed R package for probabilistic forecasting using ensemble postprocessing via Bayesian Model Averaging. It provides functions for modeling and forecasting with data that may include missing ensemble member forecasts. The modeling can also account for exchangeable ensemble members. The modeling functions estimate model parameters via the EM algorithm for normal mixture models (appropriate for temperature or pressure) and mixtures of gamma distributions with a point mass at 0 (appropriate for precipitation) from training data. Also included are functions for forecasting from these models, as well as functions for verification to assess forecasting performance. Thanks go to Veronica Berrocal and Patrick Tewson for lending their expertise on a number of important issues, to Michael Polakowski for his work on an earlier version of the package, and to Bobby Yuen for complementary work on ensembleMOS. We are also indebted to Cliff Mass, Jeff Baars, and Eric Grimit for many helpful discussions and for sharing data. Supported by the DoD Multidisciplinary Research Initiative
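For temperature-like variables, the BMA predictive density such postprocessing produces is a weighted mixture of normals, one component per bias-corrected ensemble member. A hedged pure-Python sketch of that mixture density only: the member forecasts, weights, bias corrections, and spread below are made-up illustrative values, not ensembleBMA's API (the package estimates these parameters by EM in R):

```python
import math

def bma_density(y, members, weights, biases, sigma):
    """Evaluate a BMA-style predictive density at y: a weighted mixture of
    normal densities centered at bias-corrected ensemble member forecasts.
    All parameter values supplied by the caller are hypothetical here."""
    def normal_pdf(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return sum(w * normal_pdf(y, f - b, sigma)
               for w, f, b in zip(weights, members, biases))

members = [21.3, 19.8, 20.5]   # raw ensemble member forecasts (e.g. deg C)
weights = [0.5, 0.2, 0.3]      # BMA weights, summing to 1
biases  = [0.4, 0.4, 0.4]      # additive bias corrections (illustrative)
density = bma_density(20.0, members, weights, biases, sigma=1.2)
```

The gamma-mixture-with-point-mass case for precipitation mentioned in the abstract follows the same mixture structure with different component distributions.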
Probabilistic quantitative precipitation forecasting using ensemble model output statistics, 2012. arXiv preprint arXiv:1302.0893
Combining Probability Forecasts
, 2008
Abstract

Cited by 6 (0 self)
Linear pooling is by far the most popular method for combining probability forecasts. However, any nontrivial weighted average of two or more distinct, calibrated probability forecasts is necessarily uncalibrated and lacks sharpness. In view of this, linear pooling requires recalibration, even in the ideal case in which the individual forecasts are calibrated. Toward this end, we propose a beta transformed linear opinion pool (BLP) for the aggregation of probability forecasts from distinct, calibrated or uncalibrated sources. The BLP method fits an optimal nonlinearly recalibrated forecast combination, by compositing a beta transform with the traditional linear opinion pool. The technique is illustrated in a simulation example and in a case study on statistical and National Weather Service probability of precipitation forecasts.
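The BLP composes a Beta CDF with the weighted average of the component probabilities: G(p_1, ..., p_k) = H_{a,b}(sum_i w_i p_i). A small pure-Python illustration for integer shape parameters, using the binomial-sum identity for the regularized incomplete beta function; the weights and Beta parameters below are hypothetical, and the maximum-likelihood fitting step from the paper is omitted:

```python
import math

def beta_cdf(x, a, b):
    """Regularized incomplete beta I_x(a, b) for integer a, b >= 1,
    via the binomial-sum identity (sufficient for this illustration)."""
    n = a + b - 1
    return sum(math.comb(n, j) * x ** j * (1 - x) ** (n - j)
               for j in range(a, n + 1))

def blp(probs, weights, a=2, b=2):
    """Beta-transformed linear pool: a Beta(a, b) CDF applied to the
    weighted average of the component probability forecasts."""
    pooled = sum(w * p for w, p in zip(weights, probs))
    return beta_cdf(pooled, a, b)

# Two forecasters; plain linear pooling of 0.9 and 0.7 gives 0.8,
# which the beta transform then recalibrates.
p_linear = 0.5 * 0.9 + 0.5 * 0.7
p_blp = blp([0.9, 0.7], [0.5, 0.5], a=2, b=2)
```

With a = b = 1 the beta transform is the identity, so the BLP reduces to the plain linear opinion pool; here Beta(2, 2) pushes the pooled 0.8 out to 0.896, illustrating the sharpening effect of recalibration.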
Approximating the conditional density given large observed values via a multivariate extremes framework, with application to environmental data
 Ann. Appl. Stat
, 2012
Uncertainty quantification in complex simulation models using ensemble copula coupling
 Statist. Sci
, 2013
Optimal Probabilistic Forecasts for Counts
, 2009
Abstract

Cited by 4 (2 self)
Optimal probabilistic forecasts of integer-valued random variables are derived. The optimality is achieved by estimating the forecast distribution nonparametrically over a given broad model class and proving asymptotic efficiency in that setting. The ideas are demonstrated within the context of the integer autoregressive class of models, which is a suitable class for any count data that can be interpreted as a queue, stock, birth-and-death process or branching process. The theoretical proofs of asymptotic optimality are supplemented by simulation results which demonstrate the overall superiority of the nonparametric method relative to a misspecified parametric maximum likelihood estimator, in large but finite samples. The method is applied to counts of wage claim benefits, stock market iceberg orders and civilian deaths in Iraq, with bootstrap methods used to quantify sampling variation in the estimated forecast distributions.
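As a toy illustration of the nonparametric idea (not the paper's actual estimator), one can forecast a count series by the empirical distribution of the next value conditional on the current value, which suits Markov-type count processes like an INAR(1):

```python
from collections import Counter, defaultdict

def empirical_transition_pmf(series):
    """One-step-ahead nonparametric forecast for a count series: the
    empirical pmf of y_{t+1} given y_t, built from observed transitions.
    A simple illustrative stand-in, not the paper's estimator."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(series, series[1:]):
        counts[prev][nxt] += 1
    def forecast(last):
        c = counts[last]
        total = sum(c.values())
        return {k: v / total for k, v in c.items()}
    return forecast

series = [2, 3, 2, 2, 1, 2, 3, 3, 2, 2, 1, 2]
forecast = empirical_transition_pmf(series)
pmf = forecast(2)   # forecast distribution for the step after a count of 2
```

In practice, bootstrap resampling of the series (as the abstract notes) can then quantify sampling variation in such estimated forecast distributions.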
Supplement to “Toxicity profiling of engineered nanomaterials via multivariate dose-response surface modeling.” DOI:10.1214/12-AOAS563SUPP
, 2012
Diagnostics of priordata agreement in applied Bayesian analysis
 J. Appl. Statist
, 2008
Abstract

Cited by 4 (2 self)
Summary. This article focuses on the definition and study of a binary Bayesian criterion that measures the statistical agreement between a subjective prior and the data information. The setting of this work is concrete Bayesian studies. It is an alternative and complementary tool to the method recently proposed by Evans and Moshonov (2006). Both methods aim to assist the Bayesian analyst in the steps preliminary to posterior computation. Our criterion is defined as a ratio of Kullback-Leibler divergences; two of its main features are that it makes checking a hierarchical prior easy and that it can serve as a default calibration tool for obtaining flat but proper priors in applications. Discrete and continuous distributions exemplify the approach, and an industrial case study in reliability, involving the Weibull distribution, is highlighted.
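The building block of such a criterion is the Kullback-Leibler divergence between two distributions. A discrete pure-Python sketch; the two distributions and the particular ratio formed below are hypothetical stand-ins for illustration, not the paper's exact criterion:

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D(p || q) for two pmfs on the
    same support; requires q > 0 wherever p > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical discretized subjective prior vs. a data-driven distribution;
# a prior-data agreement criterion compares such divergences as a ratio.
prior = [0.25, 0.25, 0.25, 0.25]
data_dist = [0.40, 0.30, 0.20, 0.10]
ratio = kl_divergence(data_dist, prior) / kl_divergence(prior, data_dist)
```

KL divergence is zero exactly when the two distributions coincide, which is what makes divergence-based ratios usable as agreement diagnostics.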