TITLE

Accounting for the Error due to Unresolved Scales in Ensemble Data Assimilation: A Comparison of Different Approaches

AUTHOR(S)
Hamill, Thomas M.; Whitaker, Jeffrey S.
PUB. DATE
November 2005
SOURCE
Monthly Weather Review;Nov2005, Vol. 133 Issue 11, p3132
SOURCE TYPE
Academic Journal
DOC. TYPE
Article
ABSTRACT
Insufficient model resolution is one source of model error in numerical weather predictions. Methods for parameterizing this error in ensemble data assimilations are explored here. Experiments were conducted with a two-layer primitive equation model, where the assumed true state was a T127 forecast simulation. Ensemble data assimilations were performed with the same model at T31 resolution, assimilating imperfect observations drawn from the T127 forecast. By design, the magnitude of errors due to model truncation was much larger than the error growth due to initial condition uncertainty, making this a stringent test of the ability of an ensemble-based data assimilation to deal with model error. Two general methods, “covariance inflation” and “additive error,” were considered for parameterizing the model error at the resolved scales (T31 and larger) due to interaction with the unresolved scales (T32 to T127). Covariance inflation expanded the background forecast members’ deviations about the ensemble mean, while additive error added specially structured noise to each ensemble member forecast before the update step. The method of parameterizing this model error had a substantial effect on the accuracy of the ensemble data assimilation. Covariance inflation produced ensembles with analysis errors that were no lower than the analysis errors from three-dimensional variational (3D-Var) assimilation, and for the method to avoid filter divergence, the assimilations had to be periodically reseeded. Covariance inflation uniformly expanded the model spread; however, the actual growth of model errors depended on the dynamics, growing proportionally more in the midlatitudes. The inappropriately uniform inflation progressively degraded the capacity of the ensemble to span the actual forecast error. The most accurate approach was an additive model-error parameterization, which reduced the error difference between 3D-Var and a near-perfect assimilation system by ∼40%. In the lowest-error simulations, additive errors were parameterized using samples of model error from a time series of differences between T63 and T31 forecasts. Scaled samples of differences between model forecast states separated by 24 h were also tested as additive error parameterizations, as were scaled samples of the T31 model state’s anomaly from the T31 model climatology. The latter two methods produced analyses that were progressively less accurate. The decrease in accuracy was likely due to their inappropriately long spatial correlation length scales.
ACCESSION #
19000532
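
The two model-error parameterizations compared in the abstract reduce to a few lines of array arithmetic. Below is a minimal NumPy sketch, not the paper's code: the ensemble is assumed to be an (n_members, n_state) array, the inflation factor r and the additive scale are hypothetical tuning parameters, and error_library stands in for a precomputed set of model-error samples such as the T63-minus-T31 forecast differences.

    import numpy as np

    rng = np.random.default_rng(42)

    def covariance_inflation(ensemble, r=1.05):
        # Expand each member's deviation about the ensemble mean by the
        # uniform factor r (> 1), leaving the mean itself unchanged.
        mean = ensemble.mean(axis=0)
        return mean + r * (ensemble - mean)

    def additive_error(ensemble, error_library, scale=1.0):
        # Add specially structured noise to each background member before
        # the update step, drawn at random from a library of model-error
        # samples (e.g., differences between T63 and T31 forecasts).
        draws = rng.integers(0, error_library.shape[0], size=ensemble.shape[0])
        return ensemble + scale * error_library[draws]

The contrast the abstract draws is visible even in this sketch: inflation rescales whatever spread the ensemble already has, uniformly in space, while the additive approach injects spread with the spatial structure of the sampled model errors. The scaled 24-h forecast differences and the scaled climatological anomalies tested in the paper would simply be alternative populations for error_library.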


Related Articles

  • Innovative HPC System Architectures Needed to Meet Earth Sciences' Demands. Nyberg, Per; Carpenter, Ilene // Scientific Computing & Instrumentation;Aug2004, Vol. 21 Issue 9, p28 

    Discusses the use of innovative high-performance computing (HPC) platforms to meet the system demands of the Earth sciences. Recognition of the role of Earth science in pushing HPC systems to the limits of the computational demands of numerical modeling; Impact of...

  • The Sensitivity of the Number of Correctly Forecasted Events to the Threat Score: A Practical Application. Hilliker, Joby L. // Weather & Forecasting;Jun2004, Vol. 19 Issue 3, p646 

    The sensitivity (S) of the number of correctly forecast events (‘hits’) to changes in threat score is explored. An emphasis is placed on the practical utility of S for end users of operational systems who may need to further assess a system's performance whenever its threat score is...
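
    For orientation, one plausible way to formalize S (an assumption on our part, not necessarily Hilliker's derivation): holding the observed-event count O and the forecast-event count P fixed, the threat score is TS = H / (O + P - H), and S follows by differentiating and inverting.

        def hits_sensitivity(hits, observed_events, forecast_events):
            # With O = hits + misses and P = hits + false alarms held fixed,
            # TS = H / (O + P - H); differentiating and inverting yields
            # S = dH/dTS = (O + P - H)**2 / (O + P).
            o, p, h = observed_events, forecast_events, hits
            return (o + p - h) ** 2 / (o + p)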

  • Model Consensus. Fritsch, J.M.; Hilliker, J.; Ross, J.; Vislocky, R.L. // Weather & Forecasting;Oct2000, Vol. 15 Issue 5, p571 

    Consensus forecasts from the control runs of several operational numerical models are compared to 1) the control-run forecasts of the individual models that compose the consensus and to 2) other consensus forecasts generated by varying the initial conditions of the various individual models. It...

  • The Correspondence Ratio in Forecast Evaluation. Stensrud, David J.; Wandishin, Matthew S. // Weather & Forecasting;Oct2000, Vol. 15 Issue 5, p593 

    The correspondence ratio is developed to evaluate output from an ensemble of numerical weather prediction models. This measure is a simple extension of the threat score, or critical success index, to more than two fields and is used to measure the divergence of the forecast fields. The ratio is...
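
    A minimal sketch of the extension being described, assuming (consistently with the blurb, though not stated there) that the ratio is the area where all fields forecast the event divided by the area where any field does; for two fields this reduces to the familiar threat score.

        import numpy as np

        def threat_score(forecast, observed):
            # Critical success index for binary event masks:
            # hits / (hits + misses + false alarms).
            hits = np.logical_and(forecast, observed).sum()
            return hits / np.logical_or(forecast, observed).sum()

        def correspondence_ratio(fields):
            # Intersection-over-union of the event area across n binary
            # fields; small values indicate diverging forecasts.
            stack = np.stack([np.asarray(f, dtype=bool) for f in fields])
            return np.logical_and.reduce(stack).sum() / np.logical_or.reduce(stack).sum()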

  • A Verification of Numerical Model Forecasts for Sounding-Derived Indices above Udine, Northeast Italy. Manzato, Agostino // Weather & Forecasting;Jun2008, Vol. 23 Issue 3, p477 

    In this work, 40 different indices derived from real soundings and the corresponding ECMWF model forecasts for the same location (near Udine, northeast Italy) are compared. This comparison is repeated for more than 500 days, from June 2004 to December 2005. The comparison is performed in terms...

  • Meteorologist or Fortune Teller? COOK, JERRI // Countryside & Small Stock Journal;Jul/Aug2013, Vol. 97 Issue 4, p34 

    The article offers information on meteorology. It states that although Aristotle is credited with originating meteorology, people had already started studying cloud formations and atmospheric changes before he wrote a book on the science. It also states that weather can be...

  • FUTURE OBSERVING SYSTEM SIMULATION EXPERIMENTS. HOFFMAN, ROSS N.; ATLAS, ROBERT // Bulletin of the American Meteorological Society;Sep2016, Vol. 97 Issue 9, p1601 

    The article reports on advances in operational numerical weather prediction (NWP) that challenge the ability to conduct realistic observing system simulation experiments (OSSEs). It mentions the role of OSSEs in making optimal use of observing systems. It also highlights the use of OSSEs for...

  • Ensemble Data Assimilation with the NCEP Global Forecast System. Whitaker, Jeffrey S.; Hamill, Thomas M.; Xue Wei; Song, Yucheng; Toth, Zoltan // Monthly Weather Review;Feb2008, Vol. 136 Issue 2, p463 

    Real-data experiments with an ensemble data assimilation system using the NCEP Global Forecast System model were performed and compared with the NCEP Global Data Assimilation System (GDAS). All observations in the operational data stream were assimilated for the period 1 January–10...

  • The Potential Impact of Using Persistence as a Reference Forecast on Perceived Forecast Skill. Mittermaier, Marion P. // Weather & Forecasting;Oct2008, Vol. 23 Issue 5, p1022 

    Skill is defined as actual forecast performance relative to the performance of a reference forecast. It is shown that the choice of reference (e.g., random or persistence) can affect the perceived performance of the forecast system. Two scores, the equitable threat score (ETS) and the odds ratio...
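
    The blurb's definition of skill can be written down directly. A minimal sketch using the standard contingency-table form of the equitable threat score; the choice of reference (random or persistence) is exactly the knob the article examines, and persistence enters only through the reference value passed in.

        def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
            # Threat score with the hits expected by random chance removed.
            n = hits + misses + false_alarms + correct_negatives
            hits_random = (hits + misses) * (hits + false_alarms) / n
            return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

        def skill_score(forecast_score, reference_score, perfect_score=1.0):
            # Generic skill: 1 is perfect, 0 is no better than the
            # reference, negative values are worse than the reference.
            return (forecast_score - reference_score) / (perfect_score - reference_score)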
