Using Bayesian model averaging to calibrate forecast ensembles
 MONTHLY WEATHER REVIEW 133
, 2005
Abstract

Cited by 139 (34 self)
Ensembles used for probabilistic weather forecasting often exhibit a spread-error correlation, but they tend to be underdispersive. This paper proposes a statistical method for postprocessing ensembles based on Bayesian model averaging (BMA), which is a standard method for combining predictive distributions from different sources. The BMA predictive probability density function (PDF) of any quantity of interest is a weighted average of PDFs centered on the individual bias-corrected forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts and reflect the models’ relative contributions to predictive skill over the training period. The BMA weights can be used to assess the usefulness of ensemble members, and this can be used as a basis for selecting ensemble members; this can be useful given the cost of running large ensembles. The BMA PDF can be represented as an unweighted ensemble of any desired size, by simulating from the BMA predictive distribution. The BMA predictive variance can be decomposed into two components, one corresponding to the between-forecast variability, and the second to the within-forecast variability. Predictive PDFs or intervals based solely on the ensemble spread incorporate the first component but not the second. Thus BMA provides a theoretical explanation of the tendency of ensembles to exhibit a spread-error correlation and yet be underdispersive ...
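The BMA mixture described in this abstract can be sketched in a few lines. The Gaussian kernels, the single shared standard deviation, and all numerical values below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np
from scipy.stats import norm

def bma_pdf(y, forecasts, weights, a, b, sigma):
    """BMA predictive density: a mixture of Gaussians centered on the
    bias-corrected member forecasts a + b*f_k, weighted by w_k."""
    components = [w * norm.pdf(y, loc=a + b * f, scale=sigma)
                  for w, f in zip(weights, forecasts)]
    return np.sum(components, axis=0)

# Illustrative 3-member ensemble with assumed weights and bias parameters
f = np.array([20.1, 21.4, 19.0])   # member forecasts (e.g. deg C)
w = np.array([0.5, 0.3, 0.2])      # posterior model weights (sum to 1)
y = np.linspace(10.0, 32.0, 400)
pdf = bma_pdf(y, f, w, a=0.2, b=1.0, sigma=1.3)
```

Simulating an unweighted ensemble of any size from this mixture, as the abstract notes, amounts to sampling a member index with probabilities `w` and then drawing from the corresponding Gaussian.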
A Hybrid Ensemble Kalman Filter / 3D-Variational Analysis Scheme
Abstract

Cited by 123 (18 self)
A hybrid 3-dimensional variational (3DVar) / ensemble Kalman filter analysis scheme is demonstrated using a quasigeostrophic model under perfect-model assumptions. Four networks with differing observational densities are tested, including one network with a data void. The hybrid scheme operates by computing a set of parallel data assimilation cycles, with each member of the set receiving unique perturbed observations. The perturbed observations are generated by adding random noise consistent with observation error statistics to the control set of observations. Background error statistics for the data assimilation are estimated from a linear combination of time-invariant 3DVar covariances and flow-dependent covariances developed from the ensemble of short-range forecasts. The hybrid scheme allows the user to weight the relative contributions of the 3DVar and ensemble-based background covariances. The analysis scheme was cycled for 90 days, with new observations assimilated every 12 h...
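The hybrid background covariance described above can be sketched as a weighted blend of a static matrix and an ensemble-derived sample covariance. The weighting parameter `beta`, the state dimension, and the sample values are illustrative assumptions:

```python
import numpy as np

def hybrid_covariance(static_B, ensemble, beta):
    """Blend a time-invariant 3DVar covariance with a flow-dependent
    covariance estimated from an ensemble of short-range forecasts.
    beta in [0, 1] weights the static component."""
    perts = ensemble - ensemble.mean(axis=0)           # (n_members, n_state)
    ens_B = perts.T @ perts / (ensemble.shape[0] - 1)  # sample covariance
    return beta * static_B + (1.0 - beta) * ens_B

# Illustrative: a 10-member ensemble in a 4-variable state space
rng = np.random.default_rng(0)
ens = rng.normal(size=(10, 4))
B_static = np.eye(4)
B = hybrid_covariance(B_static, ens, beta=0.5)
```

Setting `beta = 1` recovers pure 3DVar; `beta = 0` gives a purely ensemble-based covariance, so the user can interpolate between the two, as the abstract describes.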
Probabilistic forecasts, calibration and sharpness
 Journal of the Royal Statistical Society Series B
, 2007
Abstract

Cited by 113 (22 self)
Summary. Probabilistic forecasts of continuous variables take the form of predictive densities or predictive cumulative distribution functions. We propose a diagnostic approach to the evaluation of predictive performance that is based on the paradigm of maximizing the sharpness of the predictive distributions subject to calibration. Calibration refers to the statistical consistency between the distributional forecasts and the observations and is a joint property of the predictions and the events that materialize. Sharpness refers to the concentration of the predictive distributions and is a property of the forecasts only. A simple theoretical framework allows us to distinguish between probabilistic calibration, exceedance calibration and marginal calibration. We propose and study tools for checking calibration and sharpness, among them the probability integral transform histogram, marginal calibration plots, the sharpness diagram and proper scoring rules. The diagnostic approach is illustrated by an assessment and ranking of probabilistic forecasts of wind speed at the Stateline wind energy centre in the US Pacific Northwest. In combination with cross-validation or in the time series context, our proposal provides very general, nonparametric alternatives to the use of information criteria for model diagnostics and model selection.
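A minimal sketch of the probability integral transform (PIT) histogram check mentioned above, assuming Gaussian predictive distributions and synthetic, perfectly calibrated data (a calibrated forecast yields approximately uniform PIT values):

```python
import numpy as np
from scipy.stats import norm

def pit_values(obs, mu, sigma):
    """Probability integral transform for Gaussian predictive CDFs:
    p_i = F_i(y_i). Calibrated forecasts give ~uniform PIT values."""
    return norm.cdf(obs, loc=mu, scale=sigma)

# Illustrative: truth drawn from the forecast distribution N(mu, 1)
rng = np.random.default_rng(1)
mu = rng.normal(size=5000)
obs = mu + rng.normal(size=5000)
p = pit_values(obs, mu, sigma=1.0)
hist, _ = np.histogram(p, bins=10, range=(0.0, 1.0))
```

A U-shaped histogram would instead indicate underdispersion, and a hump-shaped one overdispersion; the PIT is the continuous analogue of the ensemble rank histogram.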
Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation
 MONTHLY WEATHER REVIEW
, 2005
Abstract

Cited by 79 (14 self)
Ensemble prediction systems typically show positive spread-error correlation, but they are subject to forecast bias and dispersion errors, and are therefore uncalibrated. This work proposes the use of ensemble model output statistics (EMOS), an easy-to-implement postprocessing technique that addresses both forecast bias and underdispersion and takes into account the spread-skill relationship. The technique is based on multiple linear regression and is akin to the superensemble approach that has traditionally been used for deterministic-style forecasts. The EMOS technique yields probabilistic forecasts that take the form of Gaussian predictive probability density functions (PDFs) for continuous weather variables and can be applied to gridded model output. The EMOS predictive mean is a bias-corrected weighted average of the ensemble member forecasts, with coefficients that can be interpreted in terms of the relative contributions of the member models to the ensemble, and provides a highly competitive deterministic-style forecast. The EMOS predictive variance is a linear function of the ensemble variance. For fitting the EMOS coefficients, the method of minimum continuous ranked probability score (CRPS) estimation is introduced. This technique finds the coefficient values that optimize the CRPS for the training data. The EMOS technique was applied to 48-h forecasts of sea level pressure and surface temperature over the North American Pacific Northwest in spring 2000, using the University of Washington mesoscale ensemble. When compared to the bias-corrected ensemble, deterministic-style EMOS forecasts of sea level pressure had root-mean-square error 9% less and mean absolute error 7% less. The EMOS predictive PDFs were sharp, and much better calibrated than the raw ensemble or the bias-corrected ensemble.
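A minimal sketch of minimum CRPS estimation for Gaussian EMOS, using the closed-form CRPS of a normal distribution. The optimizer choice, starting values, and synthetic training data are assumptions for illustration, not the paper's setup:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of N(mu, sigma^2) against observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1 / np.sqrt(np.pi))

def fit_emos(ens_mean, ens_var, obs):
    """Fit mu = a + b*ens_mean, sigma^2 = c + d*ens_var by minimizing
    the mean CRPS over the training data (minimum CRPS estimation)."""
    def cost(params):
        a, b, c, d = params
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))
        return crps_gaussian(obs, a + b * ens_mean, sigma).mean()
    res = minimize(cost, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
    return res.x

# Illustrative synthetic training set: bias 1.0, true error sd 1.2
rng = np.random.default_rng(2)
m = rng.normal(size=500)
v = np.full(500, 0.5)
y = 1.0 + m + rng.normal(scale=1.2, size=500)
a, b, c, d = fit_emos(m, v, y)
```

The fitted `a` and `b` should recover roughly the bias and unit slope of the synthetic data; optimizing the CRPS directly, rather than the likelihood, is the paper's key fitting idea.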
A comparison of probabilistic forecasts from bred, singular-vector, and perturbation observation ensembles
 MON. WEA. REV
, 2000
Abstract

Cited by 55 (7 self)
The statistical properties of analysis and forecast errors from commonly used ensemble perturbation methodologies are explored. A quasigeostrophic channel model is used, coupled with a 3D-variational data assimilation scheme. A perfect model is assumed. Three perturbation methodologies are considered. The breeding and singular-vector (SV) methods approximate the strategies currently used at operational centers in the United States and Europe, respectively. The perturbed observation (PO) methodology approximates a random sample from the analysis probability density function (pdf) and is similar to the method performed at the Canadian Meteorological Centre. Initial conditions for the PO ensemble are analyses from independent, parallel data assimilation cycles. Each assimilation cycle utilizes observations perturbed by random noise whose statistics are consistent with observational error covariances. Each member’s assimilation/forecast cycle is also started from a distinct initial condition. Relative to breeding and SV, the PO method here produced analyses and forecasts with desirable statistical characteristics. These include consistent rank histogram uniformity for all variables at all lead times, high spread/skill correlations, and calibrated, reduced-error probabilistic forecasts. It achieved these improvements primarily because 1) the ensemble mean of the PO initial conditions was more accurate than the mean of the bred or ...
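The rank histogram diagnostic used above can be sketched as follows; the ensemble size and the synthetic (statistically consistent) data are illustrative:

```python
import numpy as np

def rank_histogram(ens, obs):
    """Rank of each observation within its sorted ensemble; a flat
    histogram over the n+1 possible ranks indicates consistency."""
    # rank = number of members below the observation, in 0..n_members
    ranks = (ens < obs[:, None]).sum(axis=1)
    n_members = ens.shape[1]
    hist = np.bincount(ranks, minlength=n_members + 1)
    return ranks, hist

# Illustrative: observations drawn from the same distribution as members
rng = np.random.default_rng(3)
ens = rng.normal(size=(2000, 9))   # 9-member ensemble, 2000 cases
obs = rng.normal(size=2000)
ranks, hist = rank_histogram(ens, obs)
```

An underdispersive ensemble would pile observations into the outer ranks (a U-shaped histogram), which is the signature the abstract's uniformity claim rules out for the PO method.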
Using ensembles for short-range forecasting
 Mon. Wea. Rev
, 1999
Abstract

Cited by 51 (3 self)
Numerical forecasts from a pilot program on short-range ensemble forecasting at the National Centers for Environmental Prediction are examined. The ensemble consists of 10 forecasts made using the 80-km Eta Model and 5 forecasts from the regional spectral model. Results indicate that the accuracy of the ensemble mean is comparable to that from the 29-km Meso Eta Model for both mandatory level data and the 36-h forecast cyclone position. Calculations of spread indicate that at 36 and 48 h the spread from initial conditions created using the breeding of growing modes technique is larger than the spread from initial conditions created using different analyses. However, the accuracy of the forecast cyclone position from these two initialization techniques is nearly identical. Results further indicate that using two different numerical models assists in increasing the ensemble spread significantly. There is little correlation between the spread in the ensemble members and the accuracy of the ensemble mean for the prediction of cyclone location. Since information on forecast uncertainty is needed in many applications, and is one of the reasons to use an ensemble approach, the lack of a correlation between spread and forecast uncertainty presents a challenge to the production of short-range ensemble forecasts. Even though the ensemble dispersion is not found to be an indication of forecast uncertainty, significant spread can occur within the forecasts over a relatively short time period. Examples are shown to illustrate how small uncertainties in the model initial conditions can lead to large differences in numerical forecasts from an identical numerical model.
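The spread-skill correlation whose absence this abstract reports can be sketched as a simple correlation over forecast cases; the data-generating assumptions below are illustrative (here the ensemble does know its own uncertainty, so the correlation comes out positive):

```python
import numpy as np

def spread_skill_correlation(ens, obs):
    """Correlation between ensemble spread (std about the ensemble mean)
    and the absolute error of the ensemble mean, over forecast cases."""
    spread = ens.std(axis=1, ddof=1)
    error = np.abs(ens.mean(axis=1) - obs)
    return np.corrcoef(spread, error)[0, 1]

# Illustrative: case-to-case uncertainty varies, and the members sample it
rng = np.random.default_rng(4)
sig = rng.uniform(0.5, 3.0, size=1000)
ens = rng.normal(scale=sig[:, None], size=(1000, 10))
obs = rng.normal(scale=sig)
r = spread_skill_correlation(ens, obs)
```

Even for a statistically consistent ensemble this correlation is well below 1, because a single error realization is a noisy measure of the predictive uncertainty; a value near zero, as found in the study, is the problematic case.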
Evaluation of a short-range multimodel ensemble system
, 2001
Abstract

Cited by 43 (4 self)
Forecasts from the National Centers for Environmental Prediction’s experimental short-range ensemble system are examined and compared with a single run from a higher-resolution model using similar computational resources. The ensemble consists of five members from the Regional Spectral Model and 10 members from the 80-km Eta Model, with both in-house analyses and bred perturbations used as initial conditions. This configuration allows for a comparison of the two models and the two perturbation strategies, as well as a preliminary investigation of the relative merits of mixed-model, mixed-perturbation ensemble systems. The ensemble is also used to estimate the short-range predictability limits of forecasts of precipitation and fields relevant to the forecast of precipitation. Whereas error growth curves for the ensemble and its subgroups are in relative agreement with previous work for large-scale fields such as 500-mb heights, little or no error growth is found for fields of mesoscale interest, such as convective indices and precipitation. The difference in growth rates among the ensemble subgroups illustrates the role of both initial perturbation strategy and model formulation in creating ensemble dispersion. However, increased spread per se is not necessarily beneficial, as is indicated by the fact that the ensemble subgroup with the greatest spread is less skillful than the subgroup with the least spread. Further examination into the skill of the ensemble system for forecasts of precipitation shows the advantage gained from a mixed-model strategy, such that even the inclusion of the less skillful Regional Spectral Model members improves ensemble performance. For some aspects of forecast performance, even ensemble configurations with as few as five members are shown to significantly outperform the 29-km Meso-Eta Model.
A comparison of precipitation forecast skill between small near-convection-allowing and large convection-parameterizing ensembles
 SUBMITTED TO WEATHER AND FORECASTING
, 2009
Abstract

Cited by 40 (27 self)
An experiment is designed to evaluate and compare precipitation forecasts from a 5-member, 4-km grid-spacing (ENS4) and a 15-member, 20-km grid-spacing (ENS20) Weather Research and Forecasting (WRF) model ensemble, which cover a similar domain over the central United States. The ensemble forecasts are initialized at 2100 UTC on 23 different dates and cover forecast lead times up to 33 hours. Previous work has demonstrated that simulations using convection-allowing resolution (CAR; dx ~ 4 km) have a better representation of the spatial and temporal statistical properties of convective precipitation than coarser models using convective parameterizations. In addition, higher resolution should lead to greater ensemble spread as smaller scales of motion are resolved. Thus, CAR ensembles should provide more accurate and reliable probabilistic forecasts than parameterized-convection resolution (PCR) ensembles. Computation of various precipitation skill metrics for probabilistic and deterministic forecasts reveals that ENS4 generally provides more accurate precipitation forecasts than ENS20, with the differences tending to be statistically significant for precipitation thresholds above 0.25 inches at forecast lead times of 9 to 21 hours (0600–1800 UTC) for all accumulation intervals ...
Ability of a poor man’s ensemble to predict the probability and distribution of precipitation
 Mon. Wea. Rev
, 2001
Abstract

Cited by 39 (5 self)
A poor man’s ensemble is a set of independent numerical weather prediction (NWP) model forecasts from several operational centers. Because it samples uncertainties in both the initial conditions and model formulation through the variation of input data, analysis, and forecast methodologies of its component members, it is less prone to systematic biases and errors that cause underdispersive behavior in single-model ensemble prediction systems (EPSs). It is also essentially cost-free. Its main disadvantage is its relatively small size. This paper investigates the ability of a poor man’s ensemble to provide forecasts of the probability and distribution of rainfall in the short range, 1–2 days. The poor man’s ensemble described here consists of 24- and 48-h daily quantitative precipitation forecasts (QPFs) from seven operational NWP models. The ensemble forecasts were verified for a 28-month period over Australia using gridded daily rain gauge analyses. Forecasts of the probability of precipitation (POP) were skillful for rain rates up to 50 mm day⁻¹ for the first 24-h period, exceeding the skill of the European Centre for Medium-Range Weather Forecasts EPS. Probabilistic skill was limited to lower rain rates during the second 24 h. The skill and accuracy of the ensemble mean QPF far exceeded that of the individual models for both forecast periods when standard measures such as the root-mean-square error and equitable threat score were used.
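The equitable threat score used in this verification can be sketched as follows; the threshold and sample values are illustrative, not data from the study:

```python
import numpy as np

def equitable_threat_score(fcst, obs, threshold):
    """ETS for events exceeding a threshold: hits adjusted for the
    number expected by chance, over hits + misses + false alarms."""
    f = fcst >= threshold
    o = obs >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    n = fcst.size
    hits_random = (hits + misses) * (hits + false_alarms) / n
    denom = hits + misses + false_alarms - hits_random
    return float((hits - hits_random) / denom) if denom > 0 else 0.0

# Illustrative daily rainfall amounts (mm) at six points
obs = np.array([0.0, 2.0, 12.0, 0.5, 30.0, 1.0])
fcst = np.array([0.2, 1.5, 8.0, 2.0, 22.0, 0.0])
ets = equitable_threat_score(fcst, obs, threshold=1.0)
```

A perfect forecast gives ETS = 1, a chance-level forecast gives 0, and the chance correction is what distinguishes it from the plain threat score.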
Disentangling Uncertainty and Error: On the Predictability of Nonlinear Systems
 Nonlinear Dynamics and Statistics
, 2000
Abstract

Cited by 34 (6 self)
Chaos places no a priori restrictions on predictability: any uncertainty in the initial condition can be evolved and then quantified as a function of forecast time. If a specified accuracy at a given future time is desired, a perfect model can specify the initial accuracy required to obtain it, and accountable ensemble forecasts can be obtained for each unknown initial condition. Statistics which reflect the global properties of infinitesimals, such as Lyapunov exponents which define "chaos", limit predictability only in the simplest mathematical examples. Model error, on the other hand, makes forecasting a dubious endeavor. Forecasting with uncertain initial conditions in the perfect model scenario is contrasted with the case where a perfect model is unavailable, perhaps nonexistent. Applications to both low (2 to 400) dimensional models and high (10^7) dimensional models are discussed. For real physical systems no perfect model exists; the limitations of near-perfect models are considered...