TITLE

Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation

AUTHOR(S)
Gneiting, Tilmann; Raftery, Adrian E.; Westveld III, Anton H.; Goldman, Tom
PUB. DATE
May 2005
SOURCE
Monthly Weather Review;May2005, Vol. 133 Issue 5, p1098
SOURCE TYPE
Academic Journal
DOC. TYPE
Article
ABSTRACT
Ensemble prediction systems typically show positive spread-error correlation, but they are subject to forecast bias and dispersion errors and are therefore uncalibrated. This work proposes the use of ensemble model output statistics (EMOS), an easy-to-implement postprocessing technique that addresses both forecast bias and underdispersion and takes into account the spread-skill relationship. The technique is based on multiple linear regression and is akin to the superensemble approach that has traditionally been used for deterministic-style forecasts. The EMOS technique yields probabilistic forecasts that take the form of Gaussian predictive probability density functions (PDFs) for continuous weather variables and can be applied to gridded model output. The EMOS predictive mean is a bias-corrected weighted average of the ensemble member forecasts, with coefficients that can be interpreted in terms of the relative contributions of the member models to the ensemble, and provides a highly competitive deterministic-style forecast. The EMOS predictive variance is a linear function of the ensemble variance. For fitting the EMOS coefficients, the method of minimum continuous ranked probability score (CRPS) estimation is introduced. This technique finds the coefficient values that optimize the CRPS for the training data. The EMOS technique was applied to 48-h forecasts of sea level pressure and surface temperature over the North American Pacific Northwest in spring 2000, using the University of Washington mesoscale ensemble. Compared with the bias-corrected ensemble, deterministic-style EMOS forecasts of sea level pressure reduced root-mean-square error by 9% and mean absolute error by 7%. The EMOS predictive PDFs were sharp and much better calibrated than the raw ensemble or the bias-corrected ensemble.
ACCESSION #
17239755
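
The abstract's description of EMOS can be sketched in code: the predictive mean is a bias-corrected weighted average of the ensemble members, the predictive variance is a linear function of the ensemble variance, and the coefficients are chosen to minimize the average CRPS over training data. The closed-form CRPS for a Gaussian predictive PDF is standard; the parameter names (a, b, c, d), the toy training data, and the candidate-comparison at the end are illustrative assumptions, not the paper's actual code or data.

```python
import math

def crps_gaussian(mu, sigma, y):
    # Closed-form CRPS of a Gaussian predictive PDF N(mu, sigma^2)
    # evaluated at the verifying observation y.
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

def emos_pdf(ens, a, b, c, d):
    # EMOS predictive mean: bias-corrected weighted average of the members;
    # EMOS predictive variance: linear function of the ensemble variance.
    mu = a + sum(bk * fk for bk, fk in zip(b, ens))
    ens_mean = sum(ens) / len(ens)
    s2 = sum((f - ens_mean) ** 2 for f in ens) / len(ens)
    return mu, math.sqrt(c + d * s2)

def mean_crps(train, a, b, c, d):
    # Minimum CRPS estimation fits (a, b, c, d) by minimizing this
    # criterion: the average CRPS over (ensemble, observation) pairs.
    total = 0.0
    for ens, y in train:
        mu, sigma = emos_pdf(ens, a, b, c, d)
        total += crps_gaussian(mu, sigma, y)
    return total / len(train)

# Toy illustration: an unbiased coefficient set scores a lower mean CRPS
# than a biased one, so minimum CRPS estimation would prefer it.
train = [([1.0, 2.0, 3.0], 2.1), ([0.5, 1.5, 2.5], 1.4)]
w = [1.0 / 3.0] * 3
unbiased = mean_crps(train, 0.0, w, 0.1, 1.0)
biased = mean_crps(train, 2.0, w, 0.1, 1.0)
print(unbiased < biased)
```

In the paper the minimization is carried out numerically over the training set; the comparison above merely evaluates the criterion at two candidate coefficient settings to show what the optimizer trades off.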


Related Articles

  • Increasing the Reliability of Reliability Diagrams. Bröcker, Jochen; Smith, Leonard A. // Weather & Forecasting;Jun2007, Vol. 22 Issue 3, p651 

    The reliability diagram is a common diagnostic graph used to summarize and evaluate probabilistic forecasts. Its strengths lie in the ease with which it is produced and the transparency of its definition. While visually appealing, major long-noted shortcomings lie in the difficulty of...

  • Information-Based Skill Scores for Probabilistic Forecasts. Ahrens, Bodo; Walser, André // Monthly Weather Review;Jan2008, Vol. 136 Issue 1, p352 

    The information content, that is, the predictive capability, of a forecast system is often quantified with skill scores. This paper introduces two ranked mutual information skill (RMIS) scores, RMISO and RMISY, for the evaluation of probabilistic forecasts. These scores are based on the concept...

  • Two Extra Components in the Brier Score Decomposition. Stephenson, D. B.; Coelho, C. A. S.; Jolliffe, I. T. // Weather & Forecasting;Aug2008, Vol. 23 Issue 4, p752 

    The Brier score is widely used for the verification of probability forecasts. It also forms the basis of other frequently used probability scores such as the rank probability score. By conditioning (stratifying) on the issued forecast probabilities, the Brier score can be decomposed into the sum...

  • Verification of Categorical Probability Forecasts. Zhang, H.; Casey, T. // Weather & Forecasting;Feb2000, Vol. 15 Issue 1, p80 

    This paper compares a number of probabilistic weather forecasting verification approaches. Forecasting skill scores from linear error in probability space and relative operating characteristics are compared with results from an alternative approach that first transforms probabilistic forecasts...

  • Distributions-Oriented Verification of Probability Forecasts for Small Data Samples. Bradley, A. Allen; Hashino, Tempei; Schwartz, Stuart S. // Weather & Forecasting;Oct2003, Vol. 18 Issue 5, p903 

    The distributions-oriented approach to forecast verification uses an estimate of the joint distribution of forecasts and observations to evaluate forecast quality. However, small verification data samples can produce unreliable estimates of forecast quality due to sampling variability and...

  • On Using “Climatology” as a Reference Strategy in the Brier and Ranked Probability Skill Scores. Mason, Simon J. // Monthly Weather Review;Jul2004, Vol. 132 Issue 7, p1891 

    The Brier and ranked probability skill scores are widely used as skill metrics of probabilistic forecasts of weather and climate. As skill scores, they compare the extent to which a forecast strategy outperforms a (usually simpler) reference forecast strategy. The most widely used reference...

  • Diversity in Interpretations of Probability: Implications for Weather Forecasting. de Elía, Ramón; Laprise, René // Monthly Weather Review;May2005, Vol. 133 Issue 5, p1129 

    In recent years, probability weather forecasts have become increasingly popular due in part to the development of ensemble forecast systems. Despite its widespread use in atmospheric sciences, probability forecasting remains a subtle and ambiguous way of representing the uncertainty related...

  • Sensitivity of WRF Forecasts for South Florida to Initial Conditions. Etherton, Brian; Santos, Pablo // Weather & Forecasting;Aug2008, Vol. 23 Issue 4, p725 

    This study presents results from an experiment conducted to measure the impact of locally initializing a numerical weather prediction model on that model’s ability to predict precipitation and other surface parameters. The study consisted of quantifying the impact of initializing the...

  • Prediction of Consensus Tropical Cyclone Intensity Forecast Error. Goerss, James S.; Sampson, Charles R. // Weather & Forecasting;Jun2014, Vol. 29 Issue 3, p750 

    The extent to which the tropical cyclone (TC) intensity forecast error of IVCN and S5YY, consensus models routinely used by forecasters at the National Hurricane Center and the Joint Typhoon Warning Center, respectively, can be predicted is determined. A number of predictors of consensus...
