Analyzing Predictability and Communicating Uncertainty: Lessons from the Post-Groundhog Day 2009 Storm and the March 2009 "Megastorm"

NEIL A. STUART, NOAA/National Weather Service, Albany, New York
RICHARD H. GRUMM, NOAA/National Weather Service, State College, Pennsylvania
MICHAEL J. BODNER, NOAA/NWS/National Centers for Environmental Prediction, Camp Springs, Maryland

(Manuscript received 15 November 2012; review completed 26 March 2013)

Citation: Stuart, N. A., R. H. Grumm, and M. J. Bodner, 2013: Analyzing predictability and communicating uncertainty: Lessons from the post-Groundhog Day 2009 storm and the March 2009 "megastorm." J. Operational Meteor., 1 (16), 185–199, doi: http://dx.doi.org/10.15191/nwajom.2013.0116.

Corresponding author address: Neil A. Stuart, National Weather Service, 251 Fuller Rd., Suite B300, Albany, NY 12203; E-mail: [email protected]

ABSTRACT

Forecasting winter storms in the northeastern United States during the 2008–2009 season was very challenging owing to large uncertainty in the numerical weather prediction guidance prior to each storm. Forecasts for the February 2009 post-Groundhog Day event and the March 2009 "megastorm" featured significant spatial and timing errors in storm track, precipitation type, and areal extent. Each storm's impacts were communicated with considerable certainty, leading to confusion and misunderstanding of the actual uncertainty in each event. Both cases can serve as instructional examples for the forecast community to improve interpretation of levels of uncertainty, along with communication of uncertainty, during potentially high-impact events. Examples of spatial and temporal uncertainties associated with both storms are presented. These uncertainties are illustrated using ensemble data from the National Centers for Environmental Prediction Global Ensemble Forecast System, Short Range Ensemble Forecast, and higher-resolution deterministic models such as the North American Mesoscale model, Global Forecast System, and European Centre for Medium-Range Weather Forecasts. In addition to standard ensemble output, forecast anomalies are presented because highly anomalous situations frequently have large societal impacts. Techniques for analyzing ensemble probabilities for quantitative precipitation forecast (QPF) threshold values and ensemble plume QPF diagrams are demonstrated. In addition, various combinations of deterministic and ensemble means and spreads for mean sea level pressure and 500-hPa height are presented to evaluate the predictability of surface low-pressure tracks. Several experimental techniques are presented to promote better understanding of predictability and better communication of uncertainty to users.

1. Introduction

Evaluating predictability in weather forecasting has been evolving rapidly in the past ten years with the advent and maturing of ensemble forecast systems such as the Short Range Ensemble Forecast (SREF) system and the Global Ensemble Forecast System [GEFS; formerly the Medium Range Ensemble Forecast]. Events, such as the "surprise" storm of January 2000 (Kleist and Morgan 2005; Tracton 2008) that affected the eastern United States with 30–60 cm (1–2 ft) of snow, were studied using early research versions of ensemble forecast output to determine if any perturbations within the ensemble spread showed probabilities for the storms. Revelations of the success of some of the ensemble data accurately predicting the storm prompted the development and improvement of the aforementioned ensemble forecast output (SREF and GEFS). Improvements in the ensemble forecast output, including increasing resolution and number of perturbations within each dataset, are ongoing.

Since that landmark snowstorm of January 2000, techniques to analyze ensemble forecast guidance have improved the predictability of all types of high-impact weather events. Recent successes in extending warning lead times to as much as three days have proven the value of ensemble forecast guidance. Examples include snowstorms such as the Valentine's Day 2007 snowstorm (Grumm and Stuart 2007) and the active winters in the eastern United States since 2009, in addition to tropical cyclones such as Isabel (NWS 2004), Irene (NWS 2012), and Sandy (Blake et al. 2013). Forecasters continue to learn how to determine probabilities for the likelihood of various forecast scenarios by analyzing the spread in sources of deterministic and ensemble guidance.
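At a single grid point, such probabilities can be estimated simply as the fraction of ensemble members exceeding a threshold, and the plume diagrams referenced in the abstract summarize the same members with a mean and a spread. The following minimal sketch (not code from the article; the 21 member QPF values are hypothetical stand-ins for SREF or GEFS output) illustrates both ideas:

```python
# Minimal sketch: threshold exceedance probabilities and a plume-style
# summary from ensemble QPF. Member values are hypothetical stand-ins
# for SREF/GEFS output at one grid point.
import numpy as np

# Hypothetical 24-h QPF (liquid equivalent, mm) from 21 ensemble members
qpf_mm = np.array([2.0, 3.5, 4.0, 5.5, 6.0, 7.5, 8.0, 9.0, 10.5, 12.0, 13.5,
                   15.0, 16.5, 18.0, 20.0, 22.5, 25.0, 28.0, 31.0, 35.0, 40.0])

# Plume-style summary: central tendency and spread of the members
print(f"Ensemble mean:   {qpf_mm.mean():.1f} mm")
print(f"Ensemble spread: {qpf_mm.std(ddof=1):.1f} mm (sample std. dev.)")

# Exceedance probability = fraction of members above each threshold
for threshold in (5.0, 12.5, 25.0):  # roughly 0.2, 0.5, and 1.0 in
    prob = (qpf_mm > threshold).mean()
    print(f"P(24-h QPF > {threshold:4.1f} mm) = {prob:.0%}")
```

A wide gap between the lowest and highest members, or a spread comparable to the mean itself, is the kind of signal the article argues should temper the confidence expressed to users.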
However, failures in evaluating predictability can result from misuse and misunderstanding of ensemble guidance in determining probabilities for a spectrum of weather scenarios. These problems were quite apparent during less-predictable storms (i.e., the subjects of this study, the February 2009 and March 2009 snowstorms), where specific regions with the highest potential for high-impact snowfall amounts in each storm were not well-resolved in ensemble or deterministic guidance until 12–36 h prior to impact. The low predictability of these storms resulted in conflicting messages from many sources of weather information. Learning when to first notify the user community of the likelihood of a high-impact event can be related to the level of confidence determined through analysis of ensemble probabilities.

This study focuses on two challenging events that occurred during the 2008–2009 winter season in which anecdotal evidence suggests segments of the forecasting community misinterpreted and misused ensemble data, resulting in a much higher confidence of high-impact events than what was warranted based on the large spread depicted in ensemble guidance. In early 2009, two snowstorm events were promoted by the forecasting community [including broadcast and Internet media and the National Weather Service (NWS)] as high-impact events three or more days prior to onset. Both East Coast events, the non-event of 3 February 2009 (hereinafter the February '09 event) and the "megastorm" of 3 March 2009 (hereinafter the March '09 event), had significantly less impact than advertised.

The different levels of confidence expressed by most sources of weather forecasts, including the NWS and broadcast and Internet media, imply that forecasters have a wide range of understanding and ability to evaluate predictability. Predictability is defined as "the extent to which future states of a system may be predicted based on knowledge of current and past states of the system" (Glickman 2000). Effective evaluation of predictability optimizes the potential for a consistent message from the forecasting community when communicating levels of confidence or uncertainty to the user community. Conflicting messages from the forecasting community (including the private sector [1] and NWS [2]), from five days to as little as 24 h prior to storm impact, suggest that many forecasters do not always effectively use ensemble guidance to assess predictability.

[1] The Washington Post Capital Weather Gang articles on 4 February 2009 and 6 February 2009 summed up issues and names that were assigned to this storm.
[2] NWS Hazardous Weather Outlooks and Area Forecast Discussions across the northeastern and mid-Atlantic United States highlighted varying degrees of confidence for a significant storm, suggesting a broad spectrum of understanding of ensemble data.

In both events, forecast guidance provided by numerical weather prediction (NWP) models and ensemble prediction systems (EPS) showed considerable spread about the ensemble mean, indicating a high degree of uncertainty (one way to quantify such spread is sketched below). In contrast, the high degree of uncertainty was not communicated effectively to the public by some sources of weather information in the February '09 storm, as indicated by the storm being described as "The Megastorm," "Ground Hogzilla," and "Big Daddy Storm." It also was compared to the superstorm of March 1993 (Kocin et al. 1995) five days prior to the expected impact.
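As one illustration of reducing that spread to a single number, the sketch below (an assumption of this rewrite, not the authors' method) computes the root-mean-square offset of hypothetical ensemble low-center positions from their mean position, in the spirit of the mean sea level pressure mean-and-spread charts the article uses to evaluate low-pressure track predictability:

```python
# Minimal sketch: a scalar measure of ensemble spread in surface low
# position. The 15 member low-center positions below are hypothetical.
import numpy as np

lats = np.array([37.2, 37.8, 38.1, 38.5, 38.9, 39.2, 39.6, 40.0,
                 40.3, 40.7, 41.1, 41.6, 42.0, 42.4, 43.0])
lons = np.array([-75.5, -74.9, -74.2, -73.8, -73.1, -72.6, -72.0, -71.5,
                 -70.9, -70.2, -69.8, -69.1, -68.5, -67.8, -67.0])

mean_lat, mean_lon = lats.mean(), lons.mean()

# Flat-earth approximation near the mean latitude (1 deg lat ~ 111 km)
km_per_deg_lat = 111.0
km_per_deg_lon = 111.0 * np.cos(np.radians(mean_lat))

# Distance of each member's low center from the ensemble-mean position
offsets_km = np.hypot((lats - mean_lat) * km_per_deg_lat,
                      (lons - mean_lon) * km_per_deg_lon)

# Root-mean-square offset as one simple scalar measure of track spread
rms_spread_km = np.sqrt((offsets_km ** 2).mean())
print(f"Ensemble-mean low center: {mean_lat:.1f}N {abs(mean_lon):.1f}W")
print(f"RMS track spread: {rms_spread_km:.0f} km")
```

An RMS offset of several hundred kilometers, as in these hypothetical positions, would span the difference between an all-snow inland track and an offshore miss.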
The use of these terms to describe the event suggested high confidence of a high-impact event when, in actuality, conflicting NWP guidance and considerable uncertainty associated with the event were present. Thus, it is clear that NWP and EPS data were misinterpreted and/or misunderstood. NWS offices in the northeastern United States also were suggesting a "significant storm likely" and the possibility of "significant rain and/or snow" in Hazardous Weather Outlooks (HWOs) and Area Forecast Discussions (AFDs) 3–5 days prior to storm impact. Terms used in HWOs and AFDs at adjacent NWS offices included "likely" (≥55% probability), "possible" (between 25 and 54% probability), "could," and "may" ("could" and "may" are subjective terms that do not correspond to any specific probabilities but convey uncertainty). These words provided a conflicting message of confidence to users and implied more uncertainty than the superlative labels applied to the storm.

The actual impact of the February '09 storm was much less than what was advertised, with 8–15 cm (3–6 in) of snow and isolated reports of up to 20 cm (8 in) from central and eastern Pennsylvania through Long Island, New York, and Massachusetts (Fig. 1). The storm did not even attain a Northeast Snowfall Impact Scale (NESIS) ranking (Kocin and Uccellini 2004). In contrast, the March '09 storm, which also was predicted to be a megastorm by some private sector and broadcast information sources, had a more
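Because "likely" and "possible" carry the explicit probability ranges quoted above, an ensemble-derived probability can be mapped directly onto HWO/AFD wording. The sketch below is illustrative only; the function name and the fallback wording for probabilities below 25% are assumptions of this rewrite, not NWS policy:

```python
# Minimal sketch: mapping an ensemble-derived probability onto the HWO/AFD
# terms quoted above ("likely" >= 55%; "possible" 25-54%). The function name
# and the low-probability fallback are illustrative assumptions.
def hwo_term(probability: float) -> str:
    """Return a confidence word for a probability expressed in [0, 1]."""
    if probability >= 0.55:
        return "likely"
    if probability >= 0.25:
        return "possible"
    return "could/may"  # subjective wording with no fixed probability range

# Example: wording implied by three ensemble exceedance probabilities
for prob in (0.70, 0.40, 0.10):
    print(f"{prob:.0%} -> significant snow is {hwo_term(prob)}")
```

A mapping like this makes the wording traceable to the guidance: as the ensemble probability rises or falls from run to run, the outlook language changes with it rather than jumping ahead to superlatives.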
