
THE SCIENCE OF NOAA’S OPERATIONAL HYDROLOGIC ENSEMBLE FORECAST SERVICE

by Julie Demargne, Limin Wu, Satish K. Regonda, James D. Brown, Haksu Lee, Minxue He, Dong-Jun Seo, Robert Hartman, Henry D. Herr, Mark Fresch, John Schaake, and Yuejian Zhu

HEFS extends hydrologic ensemble services from 6-hour to year-ahead forecasts and includes additional weather and climate information as well as improved quantification of major uncertainties.

As no forecast is complete without a description of its uncertainty (National Research Council of the National Academies 2006), it is necessary, for both atmospheric and hydrologic predictions, to quantify and propagate uncertainty from various sources in the forecasting system. For informed risk-based decision making, such integrated uncertainty information needs to be communicated to forecasters and users effectively. In an operational environment, ensembles are an effective means of producing uncertainty-quantified forecasts. Ensemble forecasts can be ingested in a user’s downstream application (e.g., a reservoir management decision support system) and used to derive probability statements about the likelihood of specific future events (e.g., the probability of exceeding a flood threshold). Atmospheric ensemble forecasts have been routinely produced by operational Numerical Weather Prediction (NWP) centers for two decades. Hydrologic ensemble forecasts for long ranges were initially based on historical observations of precipitation and temperature as plausible future inputs (e.g., Day 1985) in an attempt to account for the uncertainty at climate time scales. Ensemble forecasts generated in this fashion were considered viable beyond 30 days, where the climatic uncertainty would dominate other uncertainty sources. More recently, as the needs for risk-based management of water resources and hazards across weather and climate scales have increased, the research and operational communities have been actively working on integration of NWP ensembles into hydrologic ensemble prediction systems and quantification of all major sources of uncertainty in such systems. In particular, the Hydrological Ensemble Prediction Experiment (HEPEX; www.hepex.org/), launched in 2004, has facilitated communications and collaborations among the atmospheric community, the hydrologic community, and the forecast users toward improving ensemble forecasts and demonstrating their utility in decision making in water management (Schaake et al. 2007b; Thielen et al. 2008; Schaake et al. 2010).

Ensemble approaches hold great potential for operational hydrologic forecasting. As demonstrated with atmospheric ensemble forecasts, the estimates of predictive uncertainty provide forecasters and users with objective guidance on the level of confidence that they may place in the forecasts. The end users can decide to take action based on their risk tolerance. Furthermore, by modeling uncertainty, hydrologic forecasters can maximize the utility of weather and climate forecasts, which are generally highly uncertain and noisy (Buizza et al. 2005). With the major uncertainties quantified and their relative importance

analyzed, ensemble forecasting helps identify areas where investments in forecast systems and processes will have the greatest benefit.

AMERICAN METEOROLOGICAL SOCIETY JANUARY 2014 | 79

Development and implementation of hydrologic ensemble prediction systems are still ongoing, and hence only limited operational experience exists. A number of case studies using experimental and (pre)operational systems, however, have demonstrated their potential benefits (see, e.g., Cloke and Pappenberger 2009 and Zappa et al. 2010 for references). Recent verification studies of hydrologic ensemble forecasts or hindcasts (i.e., forecasts that are retroactively generated using a fixed forecasting system) over long time periods include Bartholmes et al. (2009), Jaun and Ahrens (2009), Renner et al. (2009), Hopson and Webster (2010), Demargne et al. (2010), Thirel et al. (2010), Van den Bergh and Roulin (2010), Addor et al. (2011), and Zappa et al. (2012) for short- to medium-range hydrologic forecasts and Kang et al. (2010), Wood et al. (2011), Fundel et al. (2012), Singla et al. (2012), and Yuan et al. (2013) for monthly to seasonal hydrologic ensembles. Objective verification analysis of ensemble forecasts or hindcasts over multiple years should improve not only the science of hydrologic ensemble forecasting but also the utility of hydrologic ensemble forecast products in various downstream applications where decision support systems could be “trained” (van Andel et al. 2008).

In the National Oceanic and Atmospheric Administration (NOAA)’s National Weather Service (NWS), an end-to-end Hydrologic Ensemble Forecast Service (HEFS) is currently being implemented as part of the Advanced Hydrologic Prediction Service (AHPS; McEnery et al. 2005) to address a variety of water information and service needs for flood risk management, water supply management, streamflow regulations, recreation planning, ecosystem management, and others (Raff et al. 2013). Such a wide range of applications requires forcing inputs and hydrologic forecasts at multiple space–time scales and for multiple forecast horizons: from minutes for flash flood predictions in fast-responding basins to years for water supply forecasts over larger areas (see examples in McEnery et al. 2005). To account for the forcing input uncertainty, the NWS River Forecast Centers (RFCs) have been using the Ensemble Streamflow Prediction (ESP) component of the National Weather Service River Forecast System (NWSRFS; National Weather Service 2012). ESP produces seasonal probabilistic forecasts of water supply based on the historical observations of precipitation and temperature (the climate being considered stationary and repeating itself) and the current hydrologic conditions (Day 1985). HEFS enhances ESP to include short-, medium-, and long-range forcing forecasts, incorporate additional weather and climate information, and better quantify the major uncertainties in hydrologic forecasting. The HEFS provides ensemble forecast and verification products and adds a major new capability to the NWS’s baseline river forecasting system: the Community Hydrologic Prediction System (CHPS). The next section presents an overview of the HEFS and its various components. In the subsequent four sections, the individual components are described in more detail and selected illustrative verification results are presented to demonstrate the potential benefits of HEFS. Finally, future scientific and operational challenges for improving hydrologic ensemble forecasting services are discussed.

AFFILIATIONS: Demargne—Office of Hydrologic Development, National Weather Service, NOAA, Silver Spring, Maryland, and HYDRIS Hydrologie, Saint Mathieu de Tréviers, France; Wu—Office of Hydrologic Development, National Weather Service, NOAA, Silver Spring, Maryland, and LEN Technologies, Oak Hill, Virginia; Regonda and He—Office of Hydrologic Development, National Weather Service, NOAA, Silver Spring, Maryland, and Riverside Technology, Inc., Fort Collins, Colorado; Brown—Hydrologic Solutions Limited, Southampton, United Kingdom; Lee—Office of Climate, Water, and Weather Services, National Weather Service, NOAA, Silver Spring, Maryland, and LEN Technologies, Oak Hill, Virginia; Seo—Department of Civil Engineering, The University of Texas at Arlington, Arlington, Texas; Hartman—California–Nevada River Forecast Center, National Weather Service, NOAA, Sacramento, California; Herr and Fresch—Office of Hydrologic Development, National Weather Service, NOAA, Silver Spring, Maryland; Schaake—Consultant, Annapolis, Maryland; Zhu—Environmental Modeling Center, National Centers for Environmental Prediction, National Weather Service, NOAA, College Park, Maryland

CORRESPONDING AUTHOR: Dr. Julie Demargne, 5 avenue du Grand Chêne, 34270 Saint Mathieu de Tréviers, France
E-mail: [email protected]

The abstract for this article can be found in this issue, following the table of contents.
DOI:10.1175/BAMS-D-12-00081.1
In final form 6 April 2013

OVERVIEW OF THE HEFS. Uncertainty in hydrologic predictions comes from many different sources: atmospheric forcing observations and predictions; initial conditions of the hydrologic model, its parameters, and structure; and streamflow regulations, among other anthropogenic uncertainties (Gupta et al. 2005). The uncertainties in the atmospheric forcing inputs are typically referred to

as input uncertainty and those in all other sources as hydrologic uncertainty (Krzysztofowicz 1999). A hydrologic ensemble prediction system could either model the total uncertainty in the hydrologic output forecasts (e.g., Montanari and Grossi 2008; Coccia and Todini 2011; Weerts et al. 2011; Smith et al. 2012; Regonda et al. 2013) or explicitly account for the major sources of uncertainty, which is the primary approach of HEFS (Seo et al. 2006). As noted by Velázquez et al. (2011), hydrologic ensemble prediction systems presented in the literature often account for the input uncertainty only. Recently, however, a few systems have included various techniques to address specific hydrologic uncertainties, such as hydrologic data assimilation to reduce and model the initial condition uncertainty, Monte Carlo–based techniques to estimate model parameter uncertainty, and postprocessing and multimodel ensemble approaches for hydrologic structural uncertainty modeling (see references in Seo et al. 2006; Schaake et al. 2007b; Velázquez et al. 2011; Brown and Seo 2013; Liu et al. 2012).

Fig. 1. Schematic view of HEFS in the NWS’s CHPS, where dark gray boxes correspond to ensemble-specific components and light gray boxes are also used in operational single-valued forecasting. The current manual data assimilation procedures are not included in HEFS. The Hydrologic Processor includes a suite of hydrologic, hydraulic, and reservoir models—for example, the SNOW-17 model for snow ablation (Anderson 1973), the Sacramento Soil Moisture Accounting model (SAC-SMA; Burnash 1995), and the Unit Hydrograph for flow routing.

A schematic view of the HEFS is given in Fig. 1 along with the information flow. For input uncertainty modeling, the Meteorological Ensemble Forecast Processor (MEFP; Schaake et al. 2007a; Wu et al. 2011) combines weather and climate forecasts from various sources to produce bias-corrected forcing (precipitation and temperature) ensembles at the space–time scales of the hydrologic models. These ensembles have coherent space–time variability among the different forcing variables and across all forecast locations. The Hydrologic Processor ingests the forcing ensembles and runs a suite of hydrologic, hydraulic, and reservoir models to produce streamflow ensembles. The data assimilation (DA) process currently consists of manual modifications of model states and parameters by the forecasters based on their expertise; it will therefore be included in HEFS in the future, when automated DA techniques are implemented. For hydrologic uncertainty modeling, the hydrologic Ensemble Postprocessor (EnsPost; Seo et al. 2006) adjusts the streamflow ensembles to reflect the total hydrologic uncertainty in a lumped manner and produces bias-corrected streamflow ensembles. Along with the above uncertainty components, the Graphics Generator and the Ensemble Verification Service (EVS; Brown et al. 2010) enable forecasters to produce uncertainty-quantified forecast and verification information that can be tailored to user needs.

Diagnostic verification of hydrologic forecasts needs to be routinely performed by scientists and operational forecasters to improve forecast quality (Welles et al. 2007) and to provide up-to-date verification information in real time to users. Such activity requires the capability of running the hydrologic ensemble forecast system in hindcasting mode to retroactively generate ensemble forecasts for multiple years using the newly developed ensemble forecasting approaches. Verification of hindcasts may be used to evaluate the benefits of new or improved ensemble forecasting approaches, analyze the various sources of uncertainty and error in the forecasting system, and guide targeted improvements of the forecasting system (Demargne et al. 2009; Renner et al. 2009; Brown et al. 2010). Hindcast datasets may also be required by operational forecasters to identify historical analog forecasts to make informed decisions in real time and by sophisticated users to calibrate decision support systems (van Andel et al. 2008). For the above reasons, the HEFS includes, for each ensemble processor, capabilities for calibration and real-time

forecasting, as well as hindcasting to provide the large sample of events necessary to verify forecast probabilities with acceptable sampling uncertainty. Comprehensive evaluation of the individual HEFS components as well as the end-to-end system via multiyear hindcasting is underway (Brown 2013); illustrative examples of verification results are presented in this paper.

In the context of operational hydrologic forecasting in the NWS, the HEFS has been developed to improve upon operational single-valued forecasting and seasonal ESP forecasting while capturing user requirements, which include 1) supporting both real-time ensemble forecasting and hindcasting for large-sample verification and systematic evaluation, 2) maintaining interoperability with the single-valued forecasting system for the short range (given that single-valued forecasting is only a special case of ensemble forecasting), and 3) producing ensemble forecast information that is statistically consistent over a wide range of spatiotemporal scales. The operational hydrologic and water resources models used for both single-valued and probabilistic forecasting are simple conceptual models applied in a lumped fashion, with relatively few parameters estimated by manual calibration (a unique set of parameters being defined for all flow regimes, from low flow to flooding conditions). Expectedly, the hydrologic predictability could be limited in poorly monitored areas, with river gauges malfunctioning (e.g., during flood events), and during rapidly changing hydrometeorological conditions. Moreover, modeling reservoir regulations and diversions is challenging because of the lack of reliable information for the RFC forecasters and changes of reservoir operations to adjust to the current and forecast flow situation. Also, the estimation of historical forcings for model calibration and hindcasting may not be consistent with real-time meteorological model inputs, owing to changes in tools (e.g., gauges versus radar for precipitation estimation) and models, as well as estimation errors. To address these data and model challenges, the RFCs have longstanding practices of applying, in a subjective way, manual modifications of model states and parameters for single-valued forecasting (see Raff et al. 2013 for details on RFC practices)—modifications that are not currently included in HEFS.

The initial HEFS prototype system, referred to as the Experimental Ensemble Forecast System (XEFS; www.nws.noaa.gov/oh/XEFS/), began testing at selected RFCs in 2009. The ongoing HEFS implementation is based on three software development releases to five test RFCs, from spring 2012 to fall 2013. The development phase is targeted to be completed by the end of 2013, with HEFS implementation at all 13 RFCs in 2014. The project has been accelerated by an agreement with the New York City Department of Environmental Protection, which needs these new probabilistic forecast services to more efficiently and effectively manage the water supply system for New York City (Pyke and Porter 2012).

Similarly to NWS operational single-valued hydrologic forecasting, HEFS uses CHPS, an open service-oriented architecture built on the Delft-FEWS framework (Werner et al. 2004). It facilitates incorporation of new models and tools, establishes interoperability with partners, and accelerates research to operations. CHPS is critical in supporting the NOAA Integrated Water Resources Science and Services in partnership with federal agencies [e.g., U.S. Army Corps of Engineers (USACE) and U.S. Geological Survey] that have complementary operational missions in water science, observation, prediction, and management. Also, Delft-FEWS provides an open interface to various data sources and multiple hydrologic and hydraulic forecasting models (see examples of ensemble hydrologic prediction systems based on Delft-FEWS in Werner et al. 2009, Renner et al. 2009, and Schellekens et al. 2011). Therefore, HEFS will benefit from the Delft-FEWS coupling with different hydrologic and hydraulic models, as well as from enhancements made within the Delft-FEWS community for hydrologic and water resources forecast systems and services.

METEOROLOGICAL ENSEMBLE FORECAST PROCESSOR. Reliable and skillful atmospheric ensemble forecasts are necessary for hydrologic ensemble forecasting. Ensemble forecasts from NWP models are widely available from several atmospheric prediction centers. However, these ensembles are generally biased in the mean, spread, and higher moments (Buizza et al. 2005), both unconditionally and conditionally on magnitude, season, storm type, and other attributes. The conditional biases may be particularly large for heavy precipitation events that are crucial in flood forecasting (Hamill et al. 2006, 2008; Brown et al. 2012). There are several statistical techniques for estimating the conditional probability distribution of an (assumed unbiased) observed variable given a potentially biased forecast (see, e.g., references in Brown et al. 2012). These techniques vary in their assumptions about the conditional (or joint) probability distribution, the predictors used (e.g., single-valued forecast, attributes of an ensemble forecast), and the estimation of the statistical parameters

(e.g., full period, seasonal, moving window, threshold dependent). Several techniques have been compared for specific variables and modeling systems (e.g., Gneiting et al. 2005; Wilks and Hamill 2007; Hamill et al. 2008). Bias correction of precipitation ensemble forecasts is particularly challenging because precipitation amount is intermittent, depends strongly on the space–time scale, and is relatively unpredictable in many cases (e.g., convective events). For hydrologic forecasting with lumped models, the gridded NWP ensembles need to be processed at the basin scale, which requires “downscaling” (described as a change of support in geostatistics) and bias correction. This downscaling includes corrections to match the climatology of the forcings used to calibrate the hydrologic model.

The MEFP aims to generate unbiased ensembles that capture the skill of the forecasts from multiple sources for individual basins while preserving the space–time properties of hydrometeorological variables (e.g., precipitation and temperature) across all basins (Schaake et al. 2007a; Wu et al. 2011). For short-range forecasts, human forecasters generally add significant value to single-valued hydrometeorological forecasts derived from raw NWP forecasts (Charba et al. 2003). Also, postprocessing studies have repeatedly demonstrated that most information from NWP medium-term ensembles comes from the ensemble mean (e.g., Hamill et al. 2004; Wilks and Hamill 2007). Therefore, the MEFP uses the single-valued forecasts modified by human forecasters for the short-range forecast horizon (up to 7 days) and the ensemble mean forecasts from multiple NWP models for the mid- to long range to generate seamless and calibrated hydrometeorological ensembles up to a 1-yr forecast horizon.

Precipitation and temperature are processed slightly differently since precipitation is intermittent and highly skewed, whereas the temperature distribution is nearly Gaussian. MEFP uses the normal quantile transform (NQT) to transform observed and forecast precipitation variables into normal variates. The precipitation part of MEFP also includes an explicit treatment of precipitation intermittency using the mixed-type bivariate meta-Gaussian model (Herr and Krzysztofowicz 2005), parametric and nonparametric modeling of the marginal probability distributions, and a parameter optimization under the continuous ranked probability score (CRPS; Hersbach 2000) and other criteria (see Wu et al. 2011 for details). For temperature, the MEFP procedure first generates ensembles of daily maximum and minimum temperatures and then generates ensembles at subdaily time steps from the daily ensembles through a diurnal variation model. The above scheme is based on the same interpolation procedures used to calculate subdaily historical temperature time series and accounts for the diurnal cycle assumed in the operational calibration process (Anderson 1973).

For each hydrometeorological variable for a given basin (i.e., precipitation, maximum temperature, and minimum temperature), a specific forecast source, and a given forecast lead time, MEFP estimates the joint probability distribution of observations and single-valued forecasts based on a multiyear archive of the observations and forecasts. This calibration is performed for each day of the year by pooling historical observed–forecast pairs from a time window centered on that day in order to account for seasonality. In real time, given the current single-valued forecast, MEFP derives the conditional probability distribution of the observations, from which ensemble members are sampled. The ensemble members are generated for each individual time step, and then the Schaake shuffle (Clark et al. 2004) is applied to arrange the ensemble values according to the ranks of the historical observations. In this way, the produced ensemble time series preserve rank correlations for multiple lead times and basins across hydrometeorological variables (e.g., precipitation and temperature). The ensemble copula coupling approach (Schefzik et al. 2013) also aims to recover the space–time multivariate dependence structure, from the raw ensembles instead of the historical observations. Both approaches are very attractive computationally, requiring only the computation of marginal ranks, and could be applied for any dimensionality. However, they are both limited in the number of postprocessed ensembles, which is equal to the number of observed historical years for the Schaake shuffle and to the number of raw ensembles for the ensemble copula coupling (making it difficult to use multiple forecast sources with different numbers of ensembles). For extreme events, if the NWP ensembles are skillful, the multivariate dependence structure should be contained in the raw ensembles (and therefore should be realistically described with the ensemble copula coupling approach). However, it may be lacking in the observation record for the Schaake shuffle approach, owing to the likely lack of occurrences of similar events historically over the forecast horizon. If the NWP model output is strongly structured, parametric copula approaches might be used (as in Möller et al. 2013) to correct for any systematic errors in the ensemble’s representation of the conditional dependence structure. However, such parametric procedures are very expensive

computationally and could be limited in practice by the output dimensionality. We therefore suggest examining in the future alternative approaches, which use raw forecasts, observations, or some combination of the two (e.g., “analogs”) for improved space–time rank structure.

In general, the forecast uncertainty and skill are time-scale dependent. Even though the forecast skill at the individual time steps may be limited, especially for long lead times, the skill of forecasts aggregated over multiple time steps is likely to be useful and needs to be exploited for hydrologic and water resources applications. Therefore, the MEFP calibration and ensemble forecasting procedures are also applied to a set of precipitation accumulations and temperature averages defined by the user across different forecast periods from the individual time steps (e.g., n-day events and x-month events up to the maximum available forecast horizon for each forecast source). The final ensemble members at the individual time steps are sequentially produced by the Schaake shuffle for the original and aggregated temporal scales according to increasing forecast skill at the individual scales and for the different forecast sources, with the highest skill having the greatest influence on the final values (see Schaake et al. 2007a for details).

MEFP has been experimentally implemented and evaluated at several RFCs using single-valued forecasts from various sources for a number of different forecast horizons. For the short-range forecast horizon, MEFP uses RFC operational single-valued forecasts as modified by the human forecasters. Depending on forecast locations, these forecasts are available from 1 to 5 forecast lead days for precipitation and up to 7 forecast lead days for temperature. Validation results were reported by 1) Schaake et al. (2007a) for precipitation and temperature for one basin in California, 2) Demargne et al. (2007) for precipitation ensembles (and corresponding streamflow ensembles) for five basins in Missouri and Oklahoma, and 3) Wu et al. (2011) for three basins in Pennsylvania, Arkansas and Missouri, and California.

Figure 2 (from Wu et al. 2011) shows the CRPS values of MEFP-generated 6-h precipitation ensembles for the first lead day for the North Fork of the American River basin (NFDC1; 875 km²) near Sacramento, California, using three different methods. Method 1 uses an implicit treatment of precipitation intermittency (Schaake et al. 2007a); methods 2 and 3 model the precipitation intermittency explicitly, with method 3 adding parameter optimization based on the CRPS. Since, for single-valued forecasts, the CRPS collapses to the mean absolute error, the CRPS values are compared to the mean absolute error of the conditioning single-valued forecast. Figure 2 indicates that the quality of MEFP-generated precipitation ensembles has improved significantly with the explicit intermittency modeling and that the technique captures the skill in the conditioning single-valued forecast very well. In Fig. 3 (from Wu et al. 2011), the reliability diagram and the relative operating characteristic (ROC) curve for method 3 indicate that the MEFP precipitation ensembles are reliable (left plot)

Fig. 2. Mean CRPS of ensemble hindcasts of 6-h precipitation for all four 6-h periods in day 1 for the wet season (October–May) from 2000 to 2004 in dependent validation (i.e., parameter estimation) (“DV”) and independent validation (based on leave-one-year-out cross validation) (“CV”). The mean CRPS measures the integrated squared difference between the cumulative distribution functions of a forecast and the corresponding observation, averaged across a set of events. The mean CRPS is conditioned on observed mean areal precipitation (MAP) ≥ 0, 6.35, and 12.7 mm (corresponding to nonexceedance climatological probabilities of 0.73, 0.92, and 0.96, respectively, for the wet season). The vertical bars denote the 95% confidence intervals. The results are for the NFDC1 basin with upper and lower areas combined. (From Wu et al. 2011.)
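For readers who wish to reproduce such diagnostics outside of the EVS, the mean CRPS is straightforward to compute from ensemble members and verifying observations. The sketch below is a minimal illustration, not the EVS or MEFP implementation; the function names and toy values are ours. It uses the probability-weighted form CRPS = E|X − y| − 0.5·E|X − X′|, which is equivalent, for an equally weighted empirical ensemble, to the integrated squared CDF difference described in the caption.

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of one ensemble forecast against one observation,
    using CRPS = E|X - y| - 0.5 * E|X - X'| for an equally
    weighted empirical ensemble."""
    x = np.asarray(members, dtype=float)
    term_obs = np.mean(np.abs(x - obs))
    term_pairs = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term_obs - term_pairs

def mean_crps(forecasts, observations):
    """Mean CRPS over a verification set of forecast-observation pairs."""
    return float(np.mean([crps_ensemble(f, y)
                          for f, y in zip(forecasts, observations)]))
```

Note that for a single-member ensemble the pairwise term vanishes and the CRPS reduces to the absolute error, consistent with the remark in the text that the CRPS of a single-valued forecast collapses to the mean absolute error.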

and capture very well the discriminatory skill (right plot) in the single-valued predictions. Wu et al. (2011) also show that independent validation results (based on leave-one-year-out cross validation) are similar to dependent validation (i.e., parameter estimation) when using a historical archive of about 6 years. This suggests that the MEFP calibration with a similar or longer record length allows the level of performance obtained in dependent validation to be realized in real-time applications, even if some degradation may be expected for rare events.

For the medium range (up to 14 forecast lead days), the single-valued forecasts are obtained as the ensemble means from the frozen version (circa 1998) of the Global Forecast System (GFS; Hamill et al. 2006) of the National Weather Service’s National Centers for Environmental Prediction (NCEP). A 30-yr reforecast archive, at T62 resolution and stored on a 2.5° grid, is available for the MEFP calibration. Verification results of GFS-based forcing ensembles are described in Schaake et al. (2007a) and Demargne et al. (2010) for the NFDC1 basin.

Figure 4 shows the verification results from dependent validation for the 14-day GFS-based precipitation ensembles compared to the climatology-based precipitation ensembles. All ensembles were produced at a 6-h time step from 1979 to 2005, with 45 ensemble members, using method 3, and were verified with the EVS as daily totals. The mean error is reported for the ensemble mean (commonly used as a single-valued representation of the ensemble in operational forecasting), along with the continuous ranked probability skill score (CRPSS), which describes the overall quality of the probabilistic forecast in reference to the climatology-based ensembles. At short lead times, the precipitation ensembles are relatively unbiased in the unconditional sense, as evidenced by the mean error for the precipitation intermittency threshold; however, they underforecast for larger observed events. These Type-II conditional biases are common in ensemble forecasting systems, since model calibration typically favors average conditions, and such conditional biases are more difficult to remove with postprocessing (Brown et al. 2012). When compared to the climatology-based ensembles for all 14 lead days, the MEFP-generated ensembles expectedly show reduced conditional biases. The quality of the GFS-based precipitation ensembles decreases rapidly with increasing forecast lead time, as evidenced by the increased mean error and the reduction in CRPSS. However, because

Fig. 3. (left) Reliability diagrams and (right) ROC curves relative to a threshold of 6.35 mm for MEFP-generated ensemble hindcasts of 6-h precipitation for all four 6-h periods in day 1 in DV and in leave-one-year-out CV. The reliability diagram describes the agreement between the forecast probability of an event and the mean observed frequency of that event. Perfectly reliable forecasts have points that lie on the 45° diagonal line; the deviation from the diagonal line gives the conditional bias. The ROC curve plots, for a given event (e.g., flooding), the probability of detection (or hit rate) against the probability of false detection (or false alarm rate) for a range of probability levels (each one corresponding to a threshold at which a probability forecast leads to a binary decision). The false alarm rate and hit rate for the conditioning single-valued forecast are also plotted (“SVF”). The area under the curve (“AUC”) is defined as the area below the ROC curve and above the diagonal (corresponding to climatological forecasts), with a perfect score of 1. The vertical bars denote the 95% confidence intervals. The results are for the NFDC1 basin with upper and lower areas combined. (From Wu et al. 2011.)
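The reliability and ROC statistics described in the caption can be derived from exceedance probabilities estimated from the ensemble. The following is a hedged sketch under simple assumptions (equally weighted members, fixed-width probability bins); the function names and binning choices are ours, not those of the EVS.

```python
import numpy as np

def forecast_probabilities(ensembles, threshold):
    """Forecast probability of exceeding `threshold`: the fraction
    of ensemble members above it, for each forecast."""
    return np.mean(np.asarray(ensembles, dtype=float) > threshold, axis=1)

def reliability_points(probs, occurred, bins=10):
    """(mean forecast probability, observed relative frequency) per
    probability bin; reliable forecasts fall on the 45-degree line."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(probs, edges) - 1, 0, bins - 1)
    pts = []
    for b in range(bins):
        mask = idx == b
        if mask.any():
            pts.append((probs[mask].mean(), occurred[mask].mean()))
    return pts

def roc_point(probs, occurred, p_level):
    """Hit rate (POD) and false alarm rate (POFD) when a warning is
    issued whenever the forecast probability >= p_level."""
    warn = probs >= p_level
    hits = np.logical_and(warn, occurred).sum()
    misses = np.logical_and(~warn, occurred).sum()
    false_alarms = np.logical_and(warn, ~occurred).sum()
    correct_negatives = np.logical_and(~warn, ~occurred).sum()
    pod = hits / max(hits + misses, 1)
    pofd = false_alarms / max(false_alarms + correct_negatives, 1)
    return pod, pofd
```

Sweeping `p_level` over the available probability levels traces out the ROC curve, from which an AUC-style summary can be computed by numerical integration.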

of the relatively large predictability of orographic precipitation in the Sierra Nevada during the cold season in particular, GFS-based ensembles show useful skill in terms of CRPSS until 10 days.

As part of the ongoing comprehensive evaluation of HEFS ensembles, Brown (2013) analyzed verification results of GFS-based precipitation and temperature ensemble hindcasts for a 14-day forecast horizon for four pairs of headwater–downstream test basins located in California, Colorado, Kansas–Oklahoma, and Pennsylvania–New York. The GFS-based precipitation ensembles generally show skill against climatology-based ensembles for the first week but little or no skill in the second week. However, results vary significantly with basin locations (e.g., reduced precipitation predictability in the southern plains), seasons (e.g., less skill during the dry season), and magnitudes (e.g., underestimation of the probability of precipitation and, more problematically, of large precipitation amounts), which underlines the need for a systematic and comprehensive evaluation of MEFP ensembles across the different RFCs.

MEFP has recently been enhanced to ingest forecasts from the NCEP’s latest Global Ensemble Forecast System (GEFS), which was implemented in February 2012. The new version of the GEFS uses the latest GFS model version v9.0 with an increased horizontal resolution of T254 (~55 km) for 8 days and an improved vertical resolution for all 16 days; it also includes uncertainty modeling enhancements (see Wei et al. 2008 and Hou et al. 2013, manuscript submitted to Tellus, for details). A new 25-yr ensemble reforecast dataset has been completed by using the configuration of the current operational GEFS and is available for public access (Hamill et al. 2013). For the longer range, the MEFP ingests single-valued forecasts from the NCEP’s Climate Forecast System (CFS version 2; Saha et al. 2013), which has been operational since February 2011 and has shown some skill against climatology for hydrological ensemble forecasting (e.g., Yuan et al. 2013). MEFP constructs lagged ensemble forecasts from the single-valued CFS forecasts to estimate the ensemble mean (used as the single-valued forecast to drive the MEFP statistical model) for a forecast horizon up to 9 months.

MEFP requires long hindcast datasets of weather and climate forecasts from a fixed model to correct biases in the single-valued forecasts, particularly for rare events. Several studies have demonstrated that utilizing the reforecast dataset from the frozen version of an NWP model significantly improves the skill of temperature and precipitation forecasts (in particular for heavy precipitation events), as well as hydrologic forecasts (Werner et al. 2005; Hamill et al. 2006; Wilks and Hamill 2007; Hamill et al. 2008). Validation of short- to long-term ensembles for vari-

Fig. 4. (left) Mean error of the MEFP-generated ensemble means for GFS-based precipitation ensembles ("Pgfs") and climatological precipitation ensembles ("Pclim"); (right) CRPSS for the GFS-based precipitation ensembles in reference to the climatological ensembles. The CRPSS is positively oriented, with a perfect skill of 1 and negative values when the forecast has a worse CRPS than the reference. The results are from dependent validation for the 1979–2005 period for the NFDC1 basin with upper and lower areas combined; the threshold values of 0 and 25 mm day⁻¹ correspond to nonexceedance climatological probabilities of 0.69 and 0.95, respectively.

ous RFC basins is underway to evaluate the expected performance of MEFP for producing seamless and skillful ensembles.

In the future, MEFP should include forecasts from other NWP models [e.g., the Short-Range Ensemble Forecast (SREF) system produced by the NCEP (Du et al. 2009)], techniques to estimate precipitation from the combination of different NWP model output variables (e.g., total column precipitable water), and additional and/or alternative postprocessing techniques, for example, to incorporate information from the ensemble spread and higher moments (Brown and Seo 2010). In the experimental Meteorological Model-Based Ensemble Forecast System (Philpott et al. 2012), three Eastern Region RFCs and a Southern Region RFC are also investigating the use of SREF and GEFS ensembles, as well as North American Ensemble Forecast System (NAEFS) ensembles, all produced and bias corrected (at the grid scale) by the NCEP (Cui et al. 2012) (experimental products available at www.erh.noaa.gov/mmefs/). Grand-ensemble datasets such as The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE; Park et al. 2008) have significant potential to capture uncertainties in the initial conditions, the model parameterization, the data assimilation technique, and the model structure through the use of atmospheric ensembles from different NWP models (e.g., Pappenberger et al. 2008; He et al. 2009, 2010). However, the use of any NWP model ensembles in hydrologic modeling requires a long reforecast dataset in order to calibrate the meteorological ensemble forecast processor as well as the hydrologic and water resources models for rare events.

HYDROLOGIC ENSEMBLE POSTPROCESSOR. Sources of hydrologic bias and uncertainty may be unknown or poorly specified in hydrologic ensemble prediction systems. Therefore, a range of statistical postprocessing techniques have been developed to account for the collective hydrologic uncertainty (Krzysztofowicz 1999; Seo et al. 2006; Coccia and Todini 2011; Brown and Seo 2013; and references therein). They aim to produce reliable (i.e., conditionally unbiased) hydrologic ensemble forecasts from single-valued forecasts or "raw" ensemble forecasts, sometimes with the aid of covariates, accounting only for the hydrologic uncertainty in the forecasts. The resulting probability distribution is described by a complete density function (e.g., Krzysztofowicz 1999; Seo et al. 2006; Montanari and Grossi 2008; Todini 2008; Bogner and Pappenberger 2011) or several thresholds of the distribution (e.g., Solomatine and Shrestha 2009; Brown and Seo 2013). Examples of postprocessing techniques for hydrologic ensemble prediction systems include error correction based on the last known forecast error (Velázquez et al. 2009), an autoregressive error correction using the most recent modeled error (Renner et al. 2009; Hopson and Webster 2010; in the latter, postprocessing is also applied to multimodel ensembles), bias correction similar to the MEFP temperature methodology for long-term ESP streamflow ensembles (Wood and Schaake 2008), a Bayesian postprocessor for ensemble streamflow forecasts (Reggiani et al. 2009), error correction for multiple temporal scales based on wavelet transformation (Bogner and Kalas 2008; Bogner and Pappenberger 2011), and a generalized linear regression model using multiple temporal scales (Zhao et al. 2011). To help establish the reliability of different statistical postprocessors and predictors under varied forecasting conditions, the HEPEX project includes an initiative to intercompare postprocessing techniques in order to develop recommendations for their operational use in hydrologic ensemble prediction systems (van Andel et al. 2012).

In the HEFS, the EnsPost (Seo et al. 2006) accounts for the collective hydrologic uncertainty in a lumped form. Since MEFP generates bias-corrected hydrometeorological ensembles that reflect the input uncertainty, EnsPost is calibrated with simulated streamflow (i.e., generated from perfect future meteorological forcings) without any manual modifications of model states and parameters. The hydrologic uncertainty is, therefore, modeled independently of forecast lead time. The postprocessed streamflow ensembles result from the integration of the input and hydrologic uncertainties and hence reflect the total uncertainty. The current version of the EnsPost employs a parsimonious statistical model that combines probability matching and time series modeling. Parsimony is important to reduce data requirements and, therefore, reduce the sampling uncertainty of the estimated parameter values. The procedure adjusts each ensemble trace via recursive linear regression in the normal space (see Seo et al. 2006 for details). The regression is a first-order autoregressive model with an exogenous variable, or ARX(1,1), and uses the normal-quantile-transformed historical simulation and verifying observation. The regression parameter is optimized for different seasons and flow categories, taking into account that the correlation depends greatly on flow magnitude and season. Recently, this model has been modified to better simulate temporal vari-

ability in the postprocessed streamflow ensembles by accounting for dependence in the normal space between the residual error of the model fit and the observed streamflow, as well as the serial correlation in the residual error.

EnsPost is currently applied to daily observed and forecast streamflows; after statistical postprocessing, the adjusted ensemble values are disaggregated to subdaily flows. In Seo et al. (2006) and subsequent studies for other locations, the EnsPost shows satisfactory results for short forecast horizons and for all ranges of flow. However, independent validation shows slightly degraded results in comparison to dependent validation when EnsPost parameters were estimated from a 20-yr record, mainly owing to uncertainties in the empirical cumulative distribution functions of observed and simulated flows. Seo et al. (2006) underlined that in real-time applications, when the postprocessor parameters may be regularly (e.g., annually) updated using more than 20 years of data, the performance of EnsPost would be similar to or better than the obtained independent validation results. Examples of cross-validation results are shown in Fig. 5 for postprocessed flow ensemble hindcasts produced with perfectly known future forcing. The daily flow ensemble hindcasts were generated for the NFDC1 basin using 38 years of observed–simulated flow records. In Fig. 5, the reliability diagram and the ROC curve relative to a threshold of 95th-percentile flow indicate good reliability (left plot) and discriminatory skill similar to the single-valued model predictions (right plot) for the first and fifth lead days. However, the current version of EnsPost is of limited utility for complex flow regulations and does not explicitly account for timing errors in the streamflow simulations (see Liu et al. 2011).

Regarding the quality of HEFS flow ensembles, examples of dependent verification results are given for raw and postprocessed flow ensemble hindcasts produced by the Hydrologic Processor and EnsPost using the GFS-based precipitation and temperature ensembles generated by MEFP. For flow hindcasting, the Hydrologic Processor is first run in simulation mode with the observed precipitation and temperature time series to generate the historical initial conditions for all hindcast dates. Based on the historical initial conditions, flow ensemble hindcasts are produced by the Hydrologic Processor for each hindcast date using the MEFP precipitation and temperature ensemble hindcasts, and then postprocessed by EnsPost. To evaluate the performance gain from using MEFP and EnsPost, flow ensembles produced by the Hydrologic Processor (using the same retrospective initial conditions) from climatological forcing ensembles are used as reference forecasts.

The example verification results are given for the NFDC1 basin, for which 6-h ensemble hindcasts were produced from 1979 to 2005 and verified with EVS as

Fig. 5. (left) Reliability diagrams and (right) ROC curves relative to the 0.95 nonexceedance probability threshold for the NFDC1 basin for the 1961–99 period for EnsPost-generated flow ensemble hindcasts under the assumption of perfectly known future forcing, for days 1 and 5 in leave-one-year-out cross validation; the inset in (left) represents the histogram of predicted probability by the ensemble hindcasts.

daily average flows. The comparisons of dependent and independent validation results for MEFP and EnsPost in the previous studies (e.g., Wu et al. 2011; Seo et al. 2006) have shown their robustness. Thus, the following dependent validation results for HEFS-generated flow ensembles give a reasonable indication of the expected performance of HEFS in real-time applications, when both MEFP and EnsPost are calibrated with more than 25 years of data, even if some degradation is expected for rare events. As illustrated in Figs. 2–4 for the NFDC1 basin, MEFP precipitation ensembles perform well, particularly when compared with climatological ensembles. The marginal value of EnsPost depends largely on the magnitude of the systematic bias in the model-simulated streamflow. For the NFDC1 basin, the model simulation is of very high quality, with a volume bias of only about 1%. As such, one may expect the contribution from the EnsPost to be modest, coming mostly from improved reliability by adding spread to the streamflow ensembles.

Figure 6 shows the mean error for the ensemble means and the CRPSS for the postprocessed flow ensembles and raw flow ensembles in reference to the climatology-based flow ensembles. The GFS-based flow ensembles exhibit a conditional bias consistent with the conditional bias of the precipitation ensembles: overforecasting of small events and underforecasting of large events. However, owing to hydrologic persistence or "basin memory," the quality of the flow ensembles declines more slowly than that of the precipitation ensembles. Regarding the CRPSS results, the sharp increase in skill between the first and second forecast days is due to the fact that, for the first lead day, the climatology-based flow ensembles too have good skill owing to persistence, which results in a reduced skill score for the GFS-based flow ensembles. The comparison of the CRPSS values for the raw flow ensembles and the postprocessed flow ensembles shows that most of the flow forecast skill comes from the MEFP component, with limited impact of EnsPost. The additional improvement by EnsPost is marginal because of small hydrologic biases and uncertainties in this basin. It decreases very fast within the first few days as a reflection of the fast-decaying memory in the initial conditions, noting that the prior observation is a predictor in the EnsPost.

However, as pointed out by Brown (2013), the overall skill of GFS-based postprocessed flow ensembles in reference to climatology-based flows, as well as the relative contributions of the MEFP (with GFS forecasts) and EnsPost components, depend on the basin location (as illustrated in Fig. 7 with basins located in four different RFCs), flow amount, and season. In Fig. 8, examples of ensemble traces

Fig. 6. (left) Mean error of the ensemble means for GFS-based postprocessed flow ensembles ("Qgfs-post") and climatology-based flow ensembles ("Qclim"); (right) CRPSS for the GFS-based postprocessed flow ensembles (Qgfs-post) and GFS-based raw flow ensembles (Qgfs) in reference to the climatology-based flow ensembles (Qclim). The results from dependent validation for the 1979–2005 period for the NFDC1 basin are shown for subsets of forecast–observed pairs relative to the 0.75 and 0.95 nonexceedance probability thresholds (29.6 m³ s⁻¹ and 80.7 m³ s⁻¹, respectively).

for two different basins illustrate how MEFP and EnsPost may help to predict a large flow event 5 days in advance but with a peak timing error (top plots), and may produce ensembles with a much reduced spread compared to climatology-based flows but with a low bias tendency (bottom plots).

The performance of EnsPost depends largely on the availability of long-term observed and model-simulated flows and the assumption that the streamflow climatology is stationary over a multidecadal period. If additional stratification of the observed–simulated flow dataset is necessary for parameter estimation to improve the model fit for specific conditions (e.g., snowmelt), the EnsPost will require an even larger dataset for its calibration. In the future, for areas where observed and simulated flow data are available at subdaily scales (6 hourly

or hourly), direct modeling of the subdaily flow will be necessary for improved performance. The use of multiple temporal scales of aggregation to improve bias correction at longer ranges is under investigation. Evaluation of other bias-correction techniques (including those used for atmospheric forcings) is also ongoing (e.g., van Andel et al. 2012) to find the best approaches for different forecasting situations and forecast attributes.

Fig. 7. CRPSS for the GFS-based postprocessed flow ensembles (Qgfs-post) in reference to the climatology-based flow ensembles. The results from dependent validation for the 1979–99 period are shown for all forecast–observed pairs for the following basins: the Eel River at Fort Seward (FTSC1) in California Nevada RFC (CNRFC), the Dolores River (DOLC2) in Colorado Basin RFC (CBRFC), the Chikaskia River near Blackwell (BLKO2) in Arkansas Basin RFC (ABRFC), and the inflow to Cannonsville Reservoir in Middle-Atlantic RFC (MARFC) (see Brown 2013 for details).

Moreover, EnsPost currently needs to be applied without any manual modifications of model states and parameters to maintain the consistency between the real-time ensemble flows and the simulated flows used for its calibration, as well as between the EnsPost-generated streamflow hindcasts and the verification results. Therefore, for real-time ensemble prediction, the set of model states used in HEFS is generated with a simulation time window long enough to minimize the impact of any modifications previously applied in single-valued forecasting. Obviously, EnsPost needs to evolve along with the data assimilator component to utilize automated DA procedures. Meanwhile, given that the current manual modifications address significant limitations in the operational models and datasets, we recommend analyzing the potential impact of these modifications on the performance of HEFS flow ensembles. Such comprehensive evaluation could offer objective guidance on best operational practices for applying manual modifications and cost-effective transitioning of experimental automated DA capabilities into operational ensemble forecasting.

ENSEMBLE VERIFICATION. To evaluate the performance of HEFS for both research and operational forecasting purposes, ensemble verification is required. Key attributes of forecast quality include the degree of bias of the forecast probabilities, whether unconditionally or conditionally upon the forecasts (reliability or Type-I conditional bias) or observations (Type-II conditional bias), the ability to discriminate between different observed events (i.e., to issue distinct probability statements), and skill relative to a baseline forecasting system (Jolliffe and Stephenson 2003; Wilks 2006). Ensemble forecasting systems, such as HEFS, are intended for a wide range of practical applications, such as flood forecasting, river navigation, and water supply forecasting. Therefore, forecast quality needs to be evaluated for a range of observed and forecast conditions in terms of forecast horizon, space–time scale, seasonality, and magnitude of event. The EVS, built on the Ensemble Verification System (Brown et al. 2010; freely available from www.nws.noaa.gov/oh/evs.html), was designed to support conditional verification of forcing and hydrologic ensembles generated by HEFS, as well as by external ensemble forecasting systems. EVS is a flexible, modular, and open-source software tool programmed in Java to allow cost-effective collaborative

Fig. 8. Daily ensemble traces and ensemble means for the GFS-based postprocessed flow (Qgfs-post), GFS-based raw flow (Qgfs), and climatology-based flow (Qclim) for two different events in (top) the Eel River at Fort Seward (FTSC1) in CNRFC and (bottom) the Dolores River (DOLC2) in CBRFC.

research and development with academic and private institutions and rapid research-to-operations transition of scientific advances.

Key features of EVS include the following (see Brown et al. 2010 for details):

• the ability to evaluate forecast quality for any continuous numerical variable (e.g., precipitation, temperature, streamflow, river stage) at specific forecast locations (points or areas) and for any temporal scale or forecast lead time;
• the ability to evaluate the quality of an ensemble forecasting system conditional upon many factors, such as forecast lead time, seasonality, temporal aggregation, magnitude of event (defined in various ways, such as exceedance of a real-valued threshold or climatological probability), and values of auxiliary variables (e.g., quality of flow ensembles conditional upon the amount of observed precipitation);
• the ability to evaluate key attributes of forecast quality, such as reliability, discrimination, and

skill, at varying levels of detail, ranging from highly summarized (e.g., skill scores such as CRPSS) to highly detailed (e.g., box plots of conditional errors);
• the ability to aggregate the forecasts in time (e.g., hourly to daily) and to evaluate aggregate performance over a range of forecast locations, either by pooling pairs or computing a weighted average of the verification metrics from several locations;
• the ability to generate graphical and numerical outputs in a range of file formats (R scripts are also provided for further analysis and generation of custom graphics);
• the ability to implement a verification study via the graphical user interface (GUI) or to batch process a large number of forecast locations on the command line, using a project file in an XML format (the EVS can also be run within CHPS—e.g., to produce diagnostic verification results for one or multiple hindcast scenarios); and
• the ability to estimate the sampling uncertainty in the verification metrics using the stationary block bootstrap—synthetic realizations of the original paired data are repetitively generated and the verification metrics are computed for each sample to estimate a bootstrap distribution of the verification metrics, from which the percentile confidence intervals are then derived.

EVS is regularly enhanced to address needs from modelers and forecasters as HEFS is being implemented and evaluated across all RFCs and as the Ensemble Verification System is being used in other projects such as HEPEX.

GRAPHICS GENERATOR. Communicating uncertainty information to a wide range of end users represents a challenge. As hydrologic ensemble forecasting is relatively new, much research is needed to define the most effective methods of presenting such information and to design decision support systems that maximize their utility (Cloke and Pappenberger 2009). Challenges in communicating hydrologic ensembles include how to understand the ensemble forecast information (e.g., value of the ensemble mean, relation between spread and skill), how to use such information (e.g., in coordination with deterministic forecasts), and how to communicate it (e.g., spaghetti plots versus plume charts), even to nonexperts (Demeritt et al. 2010). A variety of practical approaches and products have been presented by Bruen et al. (2010) for seven European ensemble forecasting platforms and by Ramos et al. (2007) and Demeritt et al. (2013) for the European Flood Alert System. Pappenberger et al. (2013) formulated recommendations for effective visualization and communication of probabilistic flood forecasts among experts, acknowledging that there is no overarching agreement and no one-size-fits-all solution.

In HEFS, the Graphics Generator (GraphGen), a generic software tool for CHPS, enables forecasters to generate and visualize information for internal decision support during operations as well as to disseminate the final products to end users. This tool is expected to be accessed externally through a web service interface, which will allow the uncertainty-quantified forecast and verification information to be tailored to the needs of specific external users. GraphGen includes the functionality of the NWSRFS Ensemble Streamflow Prediction Analysis and Display Program (National Weather Service 2012), such as generation of spaghetti plots, expected value charts to describe the ensemble distribution (minimum, maximum, mean, and standard deviation), exceedance probability bar graphs for a few probability categories and for a given product (e.g., monthly volume), and exceedance probability distribution plots using current initial conditions compared to historical simulations (see examples in McEnery et al. 2005). HEFS also needs to provide uncertainty-quantified forecast information on maps for multiple forecast locations (e.g., 90% chance to exceed flood impact thresholds) as well as for individual locations (e.g., expected value charts for all forecast lead times). New products to visualize the ensemble information are being evaluated, such as box-and-whisker plots with quantiles from the ensemble distribution, ensemble consistency tables, and visualization of peak timing uncertainty and magnitude uncertainty. In addition, information is needed to link and integrate traditional single-valued forecasts in the context of the estimated uncertainty. Several RFCs make prototype ensemble products and information available to their customers (see www.cnrfc.noaa.gov/index.php?type=ensemble) as well as interfaces that allow custom product generation based on user specifications; this will help to enhance the operational national web interface for AHPS (http://water.weather.gov/ahps/).

Furthermore, verification information needs to be provided along with forecast information to support decision making (Demargne et al. 2009). Similar approaches have been reported by Bartholmes et al. (2009), Renner et al. (2009), and van Andel et al. (2008), where verification results on past performance are provided to forecasters and decision makers. One such example is the web

interface for the CBRFC water supply forecasts available from www.cbrfc.noaa.gov/. It enables users to generate customizable datasets and products for both probabilistic forecasts and verification statistics. Such visualization tools provide insights into the strengths and weaknesses of the forecasts and can help users assess potential forecast errors in the real-time forecasts. Along with the probabilistic forecast information, the envisioned HEFS products will include context information (e.g., historical lowest and highest, specific years of interest—El Niño, La Niña, or neutral), recent forecasts and corresponding observations, forecasts from alternative scenarios, as well as historic analogs to real-time forecasts. The selection of analogs (i.e., past forecasts that are analogous to the current forecast) and the display of diagnostic verification statistics from similar conditions provide important contextual information tailored to the specific real-time forecasting situation.

Through customer evaluations of the AHPS website and the NWS Hydrology Program, the NWS has recognized the need to better communicate hydrologic forecast uncertainty information so that end users can better understand such information and use it more effectively in their decision making. The NWS, USACE, and Bureau of Reclamation conducted a comprehensive use and needs assessment of the water management community, stressing in particular the need for more detailed information on product skill and uncertainty, guidance for synthesizing the large amount of hydrometeorological information, and training on probabilistic forecasting principles and risk-based decision making (Raff et al. 2013). Increased collaborations between forecasters, scientists (including social and behavioral scientists), and decision makers should help to understand decision processes with uncertainty-based forecasts, develop innovative training and education activities to promote a common understanding, and, ultimately, increase the effectiveness of probabilistic forecasts (Ramos et al. 2013; Demeritt et al. 2013; Pappenberger et al. 2013). To this end, the NWS Hydrology Program and the RFCs are involved in a number of outreach and training activities, as well as ongoing collaboration with the New York City Department of Environmental Protection. Finally, as CHPS and HEFS are based on the Delft-FEWS platform, complementary visualization techniques and decision support systems are expected to be shared within the Delft-FEWS community as well as the broader community to maximize the utility of hydrologic and water resources forecast products and services.

CONCLUSIONS AND FUTURE CHALLENGES. The end-to-end HEFS provides, for short to long range, uncertainty-quantified forecast and verification products that are generated by 1) the MEFP, which ingests weather and climate forecasts from multiple Numerical Weather Prediction models to produce seamless and bias-corrected precipitation and temperature ensembles at the hydrologic basin scales; 2) the Hydrologic Processor, which inputs the forcing ensembles into a suite of hydrologic, hydraulic, and reservoir models; 3) the EnsPost, which models the collective hydrologic uncertainty and corrects for biases in the streamflow ensembles; 4) the EVS, which verifies the forcing and streamflow ensembles to help identify the main sources of skill and bias in the forecasts; and 5) the Graphics Generator, which enables forecasters to derive and visualize products and information from the ensembles. Evaluation of the HEFS through multiyear hindcasting and large-sample verification is currently underway, and the results obtained so far show positive skill and reduced bias in the short to medium term when compared to climatology-based ensembles and single-valued forecasts. However, the performance varies significantly with, for example, forecast horizons, basin locations, seasons, and magnitudes, which underlines the need for a systematic and comprehensive evaluation of HEFS ensembles across the different RFCs.

Increased skill in forcing forecasts generally translates into increased skill in ensemble streamflow forecasts. As such, the HEFS should utilize the most skillful forcing forecasts at all ranges of lead time. To translate this skill to ensemble streamflow forecasts to the maximum extent, hydrologic uncertainty must be reduced as much as possible. For example, assimilation of all available measurements of streamflow, soil moisture, snow depth, and others would reduce the initial condition uncertainty. Although not implemented in the first version of the HEFS, a number of DA techniques have been developed and/or tested for the Sacramento rainfall-runoff model, the snow accumulation and ablation model, and hydrologic routing models to assimilate real-time observations and adjust model states within the assimilation window (Seo et al. 2003, 2009; Lee et al. 2011; Liu et al. 2012; He et al. 2012). In addition to the data assimilator component, enhancements are also planned to account for the parametric and structural uncertainties in the hydrologic models. As shown by Georgakakos et al. (2004) and Velázquez et al. (2011), combining predictions from different models outperforms individual model predictions as long as model-specific biases can be corrected.
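That last point, that multimodel combination pays off once model-specific biases are removed, can be illustrated with a minimal sketch. The two-model setup and all numbers below are invented for illustration; this is not the scheme of Georgakakos et al. (2004) or Velázquez et al. (2011), only the simplest form of the idea (subtract each model's mean training-period bias, then pool the debiased members):

```python
import numpy as np

def debias_and_pool(model_fcsts, model_train_fcsts, train_obs):
    """Remove each model's mean bias, estimated over a training period,
    then pool the debiased members into one multimodel ensemble."""
    pooled = []
    for fcst, train in zip(model_fcsts, model_train_fcsts):
        bias = np.mean(train) - np.mean(train_obs)  # model-specific mean bias
        pooled.extend(np.asarray(fcst, dtype=float) - bias)
    return np.array(pooled)

# Toy case: two models with opposite systematic biases around a known truth.
rng = np.random.default_rng(4)
truth = rng.normal(50.0, 10.0, 300)                 # training-period observations
m1_train = truth + 5.0 + rng.normal(0, 3, 300)      # model 1: wet bias
m2_train = truth - 4.0 + rng.normal(0, 3, 300)      # model 2: dry bias
new_truth = 55.0                                    # verifying value for a new forecast
m1_fcst = new_truth + 5.0 + rng.normal(0, 3, 20)    # 20 members each
m2_fcst = new_truth - 4.0 + rng.normal(0, 3, 20)
pooled = debias_and_pool([m1_fcst, m2_fcst], [m1_train, m2_train], truth)
```

After debiasing, the pooled ensemble mean sits near the verifying value, whereas either raw model alone carries its systematic offset; an operational scheme would, of course, estimate biases conditionally (by season, lead time, and flow category) rather than as a single mean.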

Obviously, the different uncertainty modeling approaches available in the HEFS and in other research and operational systems will need to be rigorously compared via ensemble verification to define optimized systems for operational hydrologic ensemble predictions. Close collaborations between scientists, forecasters, and end users from the atmospheric and hydrologic communities, through projects such as HEPEX, help support such intercomparison, as well as address the following ensemble challenges:

• seamlessly combine probabilistic forecasts from short to long ranges and from multiple models while maintaining consistent spatial and temporal relationships across different scales and variables;
• include forecaster guidance on forcing input forecasts and hydrologic model operations, especially in the short term;
• improve the accuracy of both meteorological and hydrologic models and reduce the cone of uncertainty for effective decision support;
• improve the uncertainty modeling of rare events (e.g., record flooding or drought) when the availability of analogous historical events is very limited;
• integrate and leverage conditional uncertainty associated with NWP and human-adjusted forecasts of atmospheric forcings;
• improve computing power, databases, and data storage, with forecasts becoming available at higher resolution and from an increasing number of models, to produce long hindcast datasets for all forcing inputs and hydrologic outputs for research and operational purposes;
• improve the understanding of how uncertainty and verification information is interpreted and used in practice by different groups (including forecasters and end users) to provide this information in a form and context that is easily understandable and useful to customers; and
• develop innovative training and education activities to fully embrace and practice the ensemble paradigm in hydrology and water resources services and increase the effectiveness of probabilistic forecasts in risk-based decision making.

ACKNOWLEDGMENTS. This work has been supported by the National Oceanic and Atmospheric Administration (NOAA) through the Advanced Hydrologic Prediction Service (AHPS) Program and the Climate Prediction Program for the Americas of the Climate Program Office. The research and development of HEFS over the last decade has involved multiple scientists and forecasters from the Office of Hydrologic Development and the RFCs. The authors would also like to thank Dr. Yuqiong Liu for her valuable contribution and three anonymous reviewers.

REFERENCES

Addor, N., S. Jaun, F. Fundel, and M. Zappa, 2011: An operational hydrological ensemble prediction system for the city of Zurich (Switzerland): Skill, case studies and scenarios. Hydrol. Earth Syst. Sci., 15, 2327–2347.
Anderson, E. A., 1973: National Weather Service River Forecast System—Snow accumulation and ablation model. NOAA Tech. Memo. NWS HYDRO-17, 217 pp.
Bartholmes, J. C., J. Thielen, M.-H. Ramos, and S. Gentilini, 2009: The European Flood Alert System EFAS—Part 2: Statistical skill assessment of probabilistic and deterministic operational forecasts. Hydrol. Earth Syst. Sci., 13, 141–153.
Bogner, K., and M. Kalas, 2008: Error-correction methods and evaluation of an ensemble based hydrological forecasting system for the Upper Danube catchment. Atmos. Sci. Lett., 9, 95–102.
—, and F. Pappenberger, 2011: Multiscale error analysis, correction, and predictive uncertainty estimation in a flood forecasting system. Water Resour. Res., 47, W07524, doi:10.1029/2010WR009137.
Brown, J. D., 2013: Verification of temperature, precipitation and streamflow forecasts from the NWS Hydrologic Ensemble Forecast Service (HEFS): Medium-range forecasts with forcing inputs from the frozen version of NCEP's Global Forecast System. HSL Tech. Rep. to the NWS, 133 pp. [Available online at www.nws.noaa.gov/oh/hrl/hsmb/docs/hep/publications_presentations/Contract_2012-04-HEFS_Deliverable_02_Phase_I_report_FINAL.pdf.]
—, and D.-J. Seo, 2010: A nonparametric postprocessor for bias-correction of hydrometeorological and hydrologic ensemble forecasts. J. Hydrometeor., 11, 642–665.
—, and —, 2013: Evaluation of a nonparametric post-processor for bias-correction and uncertainty estimation of hydrologic predictions. Hydrol. Processes, 27, 83–105, doi:10.1002/hyp.9263.
—, J. Demargne, D.-J. Seo, and Y. Liu, 2010: The Ensemble Verification System (EVS): A software tool for verifying ensemble forecasts of hydrometeorological and hydrologic variables at discrete locations. Environ. Modell. Software, 25, 854–872.
—, D.-J. Seo, and J. Du, 2012: Verification of precipitation forecasts from NCEP's Short Range Ensemble Forecast (SREF) system with reference to ensemble streamflow prediction using lumped hydrologic models. J. Hydrometeor., 13, 808–836.
Bruen, M., P. Krahe, M. Zappa, J. Olsson, B. Vehvilainen, K. Kok, and K. Daamen, 2010: Visualizing flood forecasting uncertainty: Some current European EPS platforms—COST731 working group 3. Atmos. Sci. Lett., 11, 92–99.
Buizza, R., P. L. Houtekamer, Z. Toth, G. Pellerin, M. Wei, and Y. Zhu, 2005: A comparison of the ECMWF, MSC, and NCEP global ensemble prediction systems. Mon. Wea. Rev., 133, 1076–1097.
Burnash, R. J. C., 1995: The NWS river forecast system—Catchment modeling. Computer Models of Watershed Hydrology, V. P. Singh, Ed., Water Resources Publications, 311–366.
Charba, J. P., D. W. Reynolds, B. E. McDonald, and G. M. Carter, 2003: Comparative verification of recent quantitative precipitation forecasts in the National Weather Service: A simple approach for scoring forecast accuracy. Wea. Forecasting, 18, 161–183.
Clark, M., S. Gangopadhyay, L. Hay, B. Rajagopalan, and R. Wilby, 2004: The Schaake shuffle: A method for reconstructing space–time variability in forecasted precipitation and temperature fields. J. Hydrometeor., 5, 243–262.
Cloke, H. L., and F. Pappenberger, 2009: Ensemble flood forecasting: A review. J. Hydrol., 375, 613–626.
Coccia, G., and E. Todini, 2011: Recent developments in predictive uncertainty assessment based on the model conditional processor approach. Hydrol. Earth Syst. Sci., 15, 3253–3274, doi:10.5194/hess-15-3253-2011.
Cui, B., Z. Toth, Y. Zhu, and D. Hou, 2012: Bias correction for global ensemble forecast. Wea. Forecasting, 27, 396–410.
Day, G. N., 1985: Extended streamflow forecasting using NWSRFS. J. Water Resour. Plann. Manage., 111, 157–170.
Demargne, J., L. Wu, D.-J. Seo, and J. C. Schaake, 2007: Experimental hydrometeorological and hydrologic ensemble forecasts and their verification in the U.S. National Weather Service. Quantification and Reduction of Predictive Uncertainty for Sustainable Water Resources Management, IAHS Publ. 313, 177–187.
—, M. Mullusky, K. Werner, T. Adams, S. Lindsey, N. Schwein, W. Marosi, and E. Welles, 2009: Application of forecast verification science to operational river forecasting in the U.S. National Weather Service. Bull. Amer. Meteor. Soc., 90, 779–784.
—, J. D. Brown, Y. Liu, D.-J. Seo, L. Wu, Z. Toth, and Y. Zhu, 2010: Diagnostic verification of hydrometeorological and hydrologic ensembles. Atmos. Sci. Lett., 11, 114–122.
Demeritt, D., S. Nobert, H. Cloke, and F. Pappenberger, 2010: Challenges in communicating and using ensembles in operational flood forecasting. Meteor. Appl., 17, 209–222.
—, —, —, and —, 2013: The European Flood Alert System and the communication, perception, and use of ensemble predictions for operational flood risk management. Hydrol. Processes, 27, 147–157, doi:10.1002/hyp.9419.
Du, J., G. Dimego, Z. Toth, D. Jovic, B. Zhou, J. Zhu, J. Wang, and H. Juang, 2009: Recent upgrade of NCEP Short Range Ensemble Forecast (SREF) system. Proc. 19th Conf. on Numerical Weather Prediction and 23rd Conf. on Weather Analysis and Forecasting, Omaha, NE, Amer. Meteor. Soc., 4A.4. [Available online at https://ams.confex.com/ams/23WAF19NWP/techprogram/paper_153264.htm.]
Fundel, F., S. Jörg-Hess, and M. Zappa, 2012: Long-range hydrometeorological ensemble predictions of drought parameters. Hydrol. Earth Syst. Sci. Discuss., 9, 6857–6887.
Georgakakos, K., D.-J. Seo, H. Gupta, J. Schaake, and M. B. Butts, 2004: Towards the characterization of streamflow simulation uncertainty through multimodel ensembles. J. Hydrol., 298 (1–4), 222–241.
Gneiting, T., A. E. Raftery, A. H. Westveld III, and T. Goldman, 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Wea. Rev., 133, 1098–1118.
Gupta, H. V., K. J. Beven, and T. Wagener, 2005: Model calibration and uncertainty estimation. The Encyclopedia of Hydrological Sciences, M. Anderson, Ed., John Wiley and Sons, 2015–2032.
Hamill, T. M., J. S. Whitaker, and X. Wei, 2004: Ensemble reforecasting: Improving medium-range forecast skill using retrospective forecasts. Mon. Wea. Rev., 132, 1434–1447.
—, —, and S. L. Mullen, 2006: Reforecasts: An important dataset for improving weather predictions. Bull. Amer. Meteor. Soc., 87, 33–46.
—, R. Hagedorn, and J. S. Whitaker, 2008: Probabilistic forecast calibration using ECMWF and GFS ensemble reforecasts. Part II: Precipitation. Mon. Wea. Rev., 136, 2620–2632.
—, G. T. Bates, J. S. Whitaker, D. R. Murray, M. Fiorino, T. J. Galarneau, Y. Zhu, and W. Lapenta, 2013: NOAA's second-generation global medium-range ensemble reforecast dataset. Bull. Amer. Meteor. Soc., 94, 1553–1565.
He, M., T. S. Hogue, S. A. Margulis, and K. J. Franz, 2012: An integrated uncertainty and ensemble-based data assimilation approach for improved operational streamflow predictions. Hydrol. Earth Syst. Sci., 16, 815–831.
He, Y., F. Wetterhall, H. L. Cloke, F. Pappenberger, M. Wilson, J. Freer, and G. McGregor, 2009: Tracking the uncertainty in flood alerts driven by grand ensemble weather predictions. Meteor. Appl., 16, 91–101.
—, and Coauthors, 2010: Ensemble forecasting using TIGGE for the July–September 2008 floods in the Upper Huai catchment: A case study. Atmos. Sci. Lett., 11, 132–138.
Herr, H. D., and R. Krzysztofowicz, 2005: Generic probability distribution of rainfall in space: The bivariate model. J. Hydrol., 306, 234–263.
Hersbach, H., 2000: Decomposition of the continuous ranked probability score for ensemble prediction systems. Wea. Forecasting, 15, 559–570.
Hopson, T., and P. Webster, 2010: A 1–10 day ensemble forecasting scheme for the major river basins of Bangladesh: Forecasting severe floods of 2003–07. J. Hydrometeor., 11, 618–641.
Jaun, S., and B. Ahrens, 2009: Evaluation of a probabilistic hydrometeorological forecast system. Hydrol. Earth Syst. Sci., 13, 1031–1043.
Jolliffe, I. T., and D. B. Stephenson, 2003: Forecast Verification: A Practitioner's Guide in Atmospheric Science. John Wiley and Sons, 240 pp.
Kang, T.-H., Y.-O. Kim, and I.-P. Hong, 2010: Comparison of pre- and post-processors for ensemble streamflow prediction. Atmos. Sci. Lett., 11, 153–159.
Krzysztofowicz, R., 1999: Bayesian theory of probabilistic forecasting via deterministic hydrologic model. Water Resour. Res., 35, 2739–2750.
Lee, H., D.-J. Seo, and V. Koren, 2011: Assimilation of streamflow and in-situ soil moisture data into operational distributed hydrologic models: Effects of uncertainties in the data and initial model soil moisture states. Adv. Water Resour., 34, 1597–1615.
Liu, Y., J. D. Brown, J. Demargne, and D.-J. Seo, 2011: A wavelet-based approach to assessing timing errors in hydrologic predictions. J. Hydrol., 397 (3–4), 210–224.
—, and Coauthors, 2012: Advancing data assimilation in operational hydrologic forecasting: Progresses, challenges, and emerging opportunities. Hydrol. Earth Syst. Sci., 16, 3863–3887.
McEnery, J., J. Ingram, Q. Duan, T. Adams, and L. Anderson, 2005: NOAA's Advanced Hydrologic Prediction Service: Building pathways for better science in water forecasting. Bull. Amer. Meteor. Soc., 86, 375–385.
Möller, A., A. Lenkoski, and T. L. Thorarinsdottir, 2013: Multivariate probabilistic forecasting using ensemble Bayesian model averaging and copulas. Quart. J. Roy. Meteor. Soc., 139, 982–991, doi:10.1002/qj.2009.
Montanari, A., and G. Grossi, 2008: Estimating the uncertainty of hydrological forecasts: A statistical approach. Water Resour. Res., 44, W00B08, doi:10.1029/2008WR006897.
National Research Council of the National Academies, 2006: Completing the Forecast: Characterizing and Communicating Uncertainty for Better Decisions Using Weather and Climate Forecasts. The National Academies Press, 124 pp.
National Weather Service, cited 2012: National Weather Service River Forecast System (NWSRFS) user manual documentation. [Available online at www.nws.noaa.gov/oh/hrl/nwsrfs/users_manual/htm/xrfsdocpdf.php.]
Pappenberger, F., J. Bartholmes, J. Thielen, H. Cloke, R. Buizza, and A. de Roo, 2008: New dimensions in early flood warning across the globe using grand-ensemble weather predictions. Geophys. Res. Lett., 35, L10404, doi:10.1029/2008GL033837.
—, E. Stephens, J. Thielen, P. Salamon, D. Demeritt, S. J. van Andel, F. Wetterhall, and L. Alfieri, 2013: Visualizing probabilistic flood forecast information: Expert preferences and perceptions of best practice in uncertainty communication. Hydrol. Processes, 27, 132–146, doi:10.1002/hyp.9253.
Park, Y.-Y., R. Buizza, and M. Leutbecher, 2008: TIGGE: Preliminary results on comparing and combining ensembles. ECMWF Tech. Memo. 548, 47 pp.
Philpott, A. W., P. Wnek, and J. D. Brown, cited 2012: Verification of ensemble river forecasts at the Middle Atlantic River Forecast Center. [Available online at https://ams.confex.com/ams/92Annual/webprogram/Paper199532.html.]
Pyke, G., and J. Porter, cited 2012: Forecast-based operations support tool for the New York City water supply system. [Available online at http://fallmeeting.agu.org/2012/files/2012/12/2012-AGU-Fall-Meeting-Poster-Pyke-11x17.pdf.]
Raff, D., L. Brekke, K. Werner, A. Wood, and K. White, 2013: Short-term water management decisions: User needs for improved climate, weather, and hydrologic information. U.S. Army Corps of Engineers, Bureau of Reclamation, and National Oceanic and Atmospheric Administration Tech. Rep. CWTS 2013-1, 256 pp. [Available online at www.ccawwg.us/docs/Short-Term_Water_Management_Decisions_Final_3_Jan_2013.pdf.]
Ramos, M.-H., J. Bartholmes, and J. Thielen, 2007: Development of decision support products based on ensemble weather forecasts in the European flood alert system. Atmos. Sci. Lett., 8, 113–119.
—, S. J. van Andel, and F. Pappenberger, 2013: Do probabilistic forecasts lead to better decisions? Hydrol. Earth Syst. Sci., 17, 2219–2232, doi:10.5194/hess-17-2219-2013.
Reggiani, P., M. Renner, A. H. Weerts, and P. A. H. J. M. van Gelder, 2009: Uncertainty assessment via Bayesian revision of ensemble streamflow predictions in the operational river Rhine forecasting system. Water Resour. Res., 45, W02428, doi:10.1029/2007WR006758.
Regonda, S., D.-J. Seo, B. Lawrence, J. D. Brown, and J. Demargne, 2013: Short-term ensemble streamflow forecasting using operationally-produced single-valued streamflow forecasts—A Hydrologic Model Output Statistics (HMOS) approach. J. Hydrol., 497, 80–96, doi:10.1016/j.jhydrol.2013.05.028.
Renner, M., M. G. F. Werner, S. Rademacher, and E. Sprokkereef, 2009: Verification of ensemble flow forecasts for the River Rhine. J. Hydrol., 376 (3–4), 463–475.
Saha, S., and Coauthors, 2013: The NCEP Climate Forecast System Version 2. J. Climate, in press.
Schaake, J., and Coauthors, 2007a: Precipitation and temperature ensemble forecasts from single-value forecasts. Hydrol. Earth Syst. Sci. Discuss., 4, 655–717.
—, T. M. Hamill, and R. Buizza, 2007b: HEPEX: The Hydrological Ensemble Prediction Experiment. Bull. Amer. Meteor. Soc., 88, 1541–1547.
—, and Coauthors, 2010: Summary of recommendations of the first workshop on Postprocessing and Downscaling Atmospheric Forecasts for Hydrologic Applications held at Météo-France, Toulouse, France, 15–18 June 2009. Atmos. Sci. Lett., 11, 59–63.
Schefzik, R., T. L. Thorarinsdottir, and T. Gneiting, 2013: Uncertainty quantification in complex simulation models using ensemble copula coupling. Stat. Sci., in press.
Schellekens, J., A. H. Weerts, R. J. Moore, C. E. Pierce, and S. Hildon, 2011: The use of MOGREPS ensemble rainfall forecasts in operational flood forecasting systems across England and Wales. Adv. Geosci., 29, 77–84.
Seo, D.-J., V. Koren, and N. Cajina, 2003: Real-time variational assimilation of hydrologic and hydrometeorological data into operational hydrologic forecasting. J. Hydrometeor., 4, 627–641.
—, H. Herr, and J. Schaake, 2006: A statistical post-processor for accounting of hydrologic uncertainty in short-range ensemble streamflow prediction. Hydrol. Earth Syst. Sci. Discuss., 3, 1987–2035.
—, L. Cajina, R. Corby, and T. Howieson, 2009: Automatic state updating for operational streamflow forecasting via variational data assimilation. J. Hydrol., 367, 255–275.
Singla, S., J.-P. Céron, E. Martin, F. Regimbeau, M. Déqué, F. Habets, and J.-P. Vidal, 2012: Predictability of soil moisture and river flows over France for the spring season. Hydrol. Earth Syst. Sci., 16, 201–216.
Smith, P. J., K. Beven, A. Weerts, and D. Leedal, 2012: Adaptive correction of deterministic models to produce accurate probabilistic forecasts. Hydrol. Earth Syst. Sci., 16, 2783–2799.
Solomatine, D. P., and D. L. Shrestha, 2009: A novel method to estimate model uncertainty using machine learning techniques. Water Resour. Res., 45, W00B11, doi:10.1029/2008WR006839.
Thielen, J., J. Schaake, R. Hartman, and R. Buizza, 2008: Aims, challenges and progress of the Hydrological Ensemble Prediction Experiment (HEPEX) following the third HEPEX workshop held in Stresa 27 to 29 June 2007. Atmos. Sci. Lett., 9, 29–35.
Thirel, G., F. Regimbeau, E. Martin, J. Noilhan, and F. Habets, 2010: Short- and medium-range hydrological ensemble forecasts over France. Atmos. Sci. Lett., 11, 72–77.
Todini, E., 2008: A model conditional processor to assess predictive uncertainty in flood forecasting. Int. J. River Basin Manage., 6, 123–137.
van Andel, S. J., R. K. Price, A. H. Lobbrecht, F. van Kruiningen, and R. Mureau, 2008: Ensemble precipitation and water-level forecasts for anticipatory water-system control. J. Hydrometeor., 9, 776–788.
—, A. Weerts, J. Schaake, and K. Bogner, 2012: Post-processing hydrological ensemble predictions intercomparison experiment. Hydrol. Processes, 27, 158–161, doi:10.1002/hyp.9595.
Van den Bergh, J., and E. Roulin, 2010: Hydrological ensemble prediction and verification for the Meuse and Scheldt basins. Atmos. Sci. Lett., 11, 64–71.
Velázquez, A., T. Petit, A. Lavoie, M.-A. Boucher, R. Turcotte, V. Fortin, and F. Anctil, 2009: An evaluation of the Canadian global meteorological ensemble prediction system for short-term hydrological forecasting. Hydrol. Earth Syst. Sci., 13, 2221–2231.
—, F. Anctil, M.-H. Ramos, and C. Perrin, 2011: Can a multi-model approach improve hydrological ensemble forecasting? A study on 29 French catchments using 16 hydrological model structures. Adv. Geosci., 29, 33–42.
Weerts, A. H., H. C. Winsemius, and J. S. Verkade, 2011: Estimation of predictive hydrological uncertainty using quantile regression: Examples from the National Flood Forecasting System (England and Wales). Hydrol. Earth Syst. Sci., 15, 255–265.
Wei, M., Z. Toth, R. Wobus, and Y. Zhu, 2008: Initial perturbations based on the Ensemble Transform (ET) technique in the NCEP Global Operational Forecast System. Tellus, 60A, 62–79.
Welles, E., S. Sorooshian, G. Carter, and B. Olsen, 2007: Hydrologic verification: A call for action and collaboration. Bull. Amer. Meteor. Soc., 88, 503–511.
Werner, K., D. Brandon, M. Clark, and S. Gangopadhyay, 2005: Incorporating medium-range numerical weather model output into the ensemble streamflow prediction system of the National Weather Service. J. Hydrometeor., 6, 101–114.
Werner, M., M. van Dijk, and J. Schellekens, 2004: DELFT-FEWS: An open shell flood forecasting system. Proceedings of the 6th International Conference on Hydroinformatics, S.-Y. Liong, K.-K. Phoon, and V. Babović, Eds., World Scientific, 1205–1212.
—, M. Cranston, T. Harrison, D. Whitfield, and J. Schellekens, 2009: Recent developments in operational flood forecasting in England, Wales and Scotland. Meteor. Appl., 16, 13–22.
Wilks, D. S., 2006: Statistical Methods in the Atmospheric Sciences. 2nd ed. Academic Press, 648 pp.
—, and T. M. Hamill, 2007: Comparison of ensemble-MOS methods using GFS reforecasts. Mon. Wea. Rev., 135, 2379–2390.
Wood, A. W., and J. C. Schaake, 2008: Correcting errors in streamflow forecast ensemble mean and spread. J. Hydrometeor., 9, 132–148.
Wood, E. F., X. Yuan, and J. K. Roundy, 2011: Enhancing hydrological seasonal forecast by downscaling CFSv2. 36th Annual Climate Diagnostics and Prediction Workshop, Fort Worth, TX, NOAA, 110–115.
Wu, L., D.-J. Seo, J. Demargne, J. Brown, S. Cong, and J. Schaake, 2011: Generation of ensemble precipitation forecast from single-valued quantitative precipitation forecast via meta-Gaussian distribution models. J. Hydrol., 399 (3–4), 281–298.
Yuan, X., E. Wood, J. Roundy, and M. Pan, 2013: CFSv2-based seasonal hydroclimatic forecasts over conterminous United States. J. Climate, 26, 4828–4847.
Zappa, M., and Coauthors, 2010: Propagation of uncertainty from observing systems and NWP into hydrological models: COST-731 Working Group 2. Atmos. Sci. Lett., 11, 83–91.
—, F. Fundel, and S. Jaun, 2012: A 'Peak-Box' approach for supporting interpretation and verification of operational ensemble peak-flow forecasts. Hydrol. Processes, 27, 117–131, doi:10.1002/hyp.9521.
Zhao, L., Q. Duan, J. Schaake, A. Ye, and J. Xia, 2011: A hydrologic post-processor for ensemble streamflow predictions. Adv. Geosci., 29, 51–59.

AMERICAN METEOROLOGICAL SOCIETY | JANUARY 2014
