TITLE

MOS Uncertainty Estimates in an Ensemble Framework

AUTHOR(S)
Glahn, Bob; Peroutka, Matthew; Wiedenfeld, Jerry; Wagner, John; Zylstra, Greg; Schuknecht, Bryan; Jackson, Bryan
PUB. DATE
January 2009
SOURCE
Monthly Weather Review;Jan2009, Vol. 137 Issue 1, p246
SOURCE TYPE
Academic Journal
DOC. TYPE
Article
ABSTRACT
It is being increasingly recognized that the uncertainty in weather forecasts should be quantified and furnished to users along with the single-value forecasts usually provided. Probabilistic forecasts of “events” have been made in special cases; for instance, probabilistic forecasts of the event defined as 0.01 in. or more of precipitation at a point over a specified time period [i.e., the probability of precipitation (PoP)] have been disseminated to the public by the Weather Bureau/National Weather Service since 1966. Within the past decade, ensembles of operational numerical weather prediction models have been produced and used to some degree to provide probabilistic estimates of easily defined events, such as the occurrence of specific amounts of precipitation. In most such applications, the small number of ensemble members limits this “enumeration” method, and the ensembles are characteristically underdispersive. However, fewer attempts have been made to provide a probability density function (PDF) or cumulative distribution function (CDF) for a continuous variable. The Meteorological Development Laboratory (MDL) has used the error estimation capabilities of the linear regression framework and kernel density fitting, applied to individual and aggregate ensemble members of the Global Ensemble Forecast System of the National Centers for Environmental Prediction, to develop PDFs and CDFs. This paper describes the method and results for temperature, dewpoint, daytime maximum temperature, and nighttime minimum temperature. The method produces reliable forecasts with accuracy exceeding that of the raw ensembles. Points on the CDF for 1650 stations have been mapped to the National Digital Forecast Database 5-km grid, and an example is provided. (A minimal illustrative sketch of the kernel-density approach follows this record.)
ACCESSION #
36435560
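
The method the abstract describes lends itself to a short illustration: apply a MOS regression to each ensemble member, center a kernel on each regression forecast, take the kernel width from the regression's standard error of estimate, and average the kernels into a single distribution whose CDF can be evaluated at fixed points. The sketch below is a minimal reading of that idea, not the paper's exact formulation: the Gaussian kernel, the equal member weights, the function name ensemble_mos_cdf, and all numbers are illustrative assumptions.

    import numpy as np
    from scipy.stats import norm

    def ensemble_mos_cdf(mos_forecasts, reg_std_error, x):
        """CDF of an equally weighted Gaussian-kernel mixture: one kernel
        per MOS-corrected ensemble member, with kernel width taken from
        the regression standard error of estimate (an assumption here,
        not necessarily the paper's exact kernel or weighting)."""
        mos_forecasts = np.asarray(mos_forecasts, dtype=float)
        # Broadcast members against evaluation points, then average kernels
        return norm.cdf(np.asarray(x), loc=mos_forecasts[:, None],
                        scale=reg_std_error).mean(axis=0)

    # Hypothetical 10-member 2-m temperature forecast (deg F) at one station
    members = np.random.default_rng(0).normal(68.0, 2.5, size=10)
    points = np.linspace(55.0, 80.0, 6)
    print(ensemble_mos_cdf(members, reg_std_error=3.0, x=points))

Evaluating the mixture CDF at a fixed set of points per station is what would allow values for many stations to be mapped onto a grid, as the abstract describes for the 1650 stations and the NDFD 5-km grid.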


Related Articles

  • Time Zone Dependence of Diurnal Cycle Errors in Surface Temperature Analyses. Zou, X.; Qin, Z-K. // Monthly Weather Review;Jun2010, Vol. 138 Issue 6, p2469 

    Surface temperatures from both the NCEP analysis and ECMWF Re-Analysis (ERA-Interim) in January 2008 over the Africa–Eurasian region were compared with surface station measurements to study analysis errors in the diurnal cycle, with data sampled at 3-h time intervals. The results show the...

  • Bias Correction for Global Ensemble Forecast. Cui, Bo; Toth, Zoltan; Zhu, Yuejian; Hou, Dingchen // Weather & Forecasting;Apr2012, Vol. 27 Issue 2, p396 

    The main task of this study is to introduce a statistical postprocessing algorithm to reduce the bias in the National Centers for Environmental Prediction (NCEP) and Meteorological Service of Canada (MSC) ensemble forecasts before they are merged to form a joint ensemble within the North...

  • All About STARGAZING Forecasts. Creed, Phillip J. // Sky & Telescope;Feb2010, Vol. 119 Issue 2, p60 

The article offers tips for stargazers on how to interpret and predict weather forecasts and create one's own based on observations in the sky. It advises users to consult the Clear Sky Charts, which are based on the Canadian Meteorological Centre's (CMC) Global Environmental Multiscale (GEM) model,...

  • Statistical Analysis of Forecasting Models across the North Slope of Alaska during the Mixed-Phase Arctic Clouds Experiment. Yannuzzi, Victor T.; Clothiaux, Eugene E.; Harrington, Jerry Y.; Verlinde, Johannes // Weather & Forecasting;Dec2009, Vol. 24 Issue 6, p1644 

The National Centers for Environmental Prediction’s (NCEP) Eta Model, the European Centre for Medium-Range Weather Forecasts (ECMWF) and National Aeronautics and Space Administration’s (NASA) Global Modeling and Assimilation Office (GMAO) models, and the Regional...

  • Developments in Operational Long-Range Climate Prediction at CPC. O'Lenic, Edward A.; Unger, David A.; Halpert, Michael S.; Pelman, Kenneth S. // Weather & Forecasting;Jun2008, Vol. 23 Issue 3, p496 

The science, production methods, and format of long-range forecasts (LRFs) at the Climate Prediction Center (CPC), a part of the National Weather Service’s (NWS’s) National Centers for Environmental Prediction (NCEP), have evolved greatly since the inception of 1-month mean forecasts...

  • The Skill of Ensemble Prediction Systems. Atger, Frederic // Monthly Weather Review;Sep99, Vol. 127 Issue 9, p1 

The performance of ensemble prediction systems (EPSs) is investigated by examining the probability distribution of 500-hPa geopotential height over Europe. The probability score (or half Brier score) is used to evaluate the quality of probabilistic forecasts of a single binary event (see the sketch after this list). The skill...

  • An Improved Coupled Model for ENSO Prediction and Implications for Ocean Initialization. Part I: The Ocean Data Assimilation System. Behringer, David W.; Ji, Ming; Leetmaa, Ants // Monthly Weather Review;Apr98, Vol. 126 Issue 4, p1013 

An improved forecast system has been developed for El Niño–Southern Oscillation (ENSO) prediction at the National Centers for Environmental Prediction. Improvements have been made both to the ocean data assimilation system and to the coupled ocean–atmosphere forecast model. In Part I of a...

  • An Improved Coupled Model for ENSO Prediction and Implications for Ocean Initialization. Part II: The Coupled Model. Ji, Ming; Behringer, David W.; Leetmaa, Ants // Monthly Weather Review;Apr98, Vol. 126 Issue 4, p1022 

An improved forecast system has been developed and implemented for ENSO prediction at the National Centers for Environmental Prediction (NCEP). This system consists of a new ocean data assimilation system and an improved coupled ocean–atmosphere forecast model (CMP12) for ENSO prediction. The...

  • Evaluating Forecasters' Rules of Thumb: A Study of d(prog)/dt. Hamill, Thomas M. // Weather & Forecasting;Oct2003, Vol. 18 Issue 5, p933 

    Forecasters often develop rules of thumb for adjusting model guidance. Ideally, before use, these rules of thumb should be validated through a careful comparison of model forecasts and observations over a large sample. Practically, such evaluation studies are difficult to perform because...
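
The probability score mentioned in the Atger entry above is the standard half Brier score for a single binary event: the mean squared difference between the forecast probability and the 0/1 outcome. A minimal sketch follows; the function name and the sample values are illustrative, not drawn from that paper.

    import numpy as np

    def half_brier_score(probs, outcomes):
        """Half Brier score for one binary event: mean of (p - o)^2,
        where o is 1 if the event occurred and 0 otherwise.
        Lower is better; 0 is a perfect categorical forecast."""
        probs = np.asarray(probs, dtype=float)
        outcomes = np.asarray(outcomes, dtype=float)
        return float(np.mean((probs - outcomes) ** 2))

    # Five hypothetical forecasts of one binary event and their outcomes
    print(half_brier_score([0.9, 0.7, 0.2, 0.1, 0.8], [1, 1, 0, 0, 1]))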
