STATE OF NEW YORK
PUBLIC SERVICE COMMISSION

In the Matter of Department of Public Service Staff Investigation into the Utilities’ Preparation for and Response to August 2020 Tropical Storm Isaias and Resulting Electric Power Outages

Case 20-E-0586

AFFIDAVIT OF BRIAN CERRUTI ON BEHALF OF CONSOLIDATED EDISON COMPANY OF NEW YORK, INC.

I, Brian Cerruti, being duly sworn, depose and say:

1. My name is Brian Cerruti. My business address is 4 Irving Place, New York,

New York 10003. My official title is Project Specialist, but I perform the functions of a meteorologist. I have been employed by Consolidated Edison Company of New York, Inc.

(Con Edison or the Company) for seven years.

2. My responsibilities include creating custom weather forecasts for the Company,

leading weather discussions on storm preparation conference calls, responding to questions from

operating personnel before and during a storm, overseeing contracts with weather information

vendors, developing, calibrating, verifying, and implementing outage prediction models for Con

Edison and Orange and Rockland Utilities, Inc. (O&R), and providing subject matter expertise to

Con Edison and O&R as needed. I am also the lead on the Company’s Probabilistic Load

Forecasting Project, which is a tool co-developed with a vendor, TESLA, that quantifies weather uncertainty in the Company’s electric and steam load forecasts.

3. I earned a Bachelor of Science degree in Meteorology from Rutgers University’s

George H. Cook School of Environmental and Biological Sciences and a Master of Science

degree in Atmospheric Science from Rutgers University’s Graduate School of Atmospheric

Science. My Master’s thesis was entitled “A Statistical Forecast Model of Weather-Related

Damage to a Major Electric Utility.” This thesis was also accepted for peer-reviewed publication by the Journal of Applied Meteorology and Climatology in February 2012.

4. Before working at Con Edison, I worked as a contractor at the Meteorological

Development Laboratory at National Weather Service Headquarters in Silver Spring, Maryland as a

meteorologist. My job responsibilities included developing probabilistic wind speed forecasts

for over a thousand weather stations across North America using the Ensemble Kernel Density

model output statistics technique. I also applied my subject matter expertise in precipitation type forecasting by converting several algorithms into scripts for the Short Range Ensemble Forecasting System Winter Guidance project. While working at the Meteorological

Development Laboratory, I published another graduate school research paper in the peer-reviewed Bulletin of the American Meteorological Society, entitled “The Local Winter Storm

Scale: A Measure of the Intrinsic Ability of Winter Storms to Disrupt Society.” Prior to that, I was the Head Forecaster for the Rutgers University Public Service Electric and Gas (PSE&G)

Undergraduate Forecasting Program where my responsibilities included developing, calibrating, verifying, and implementing a damage prediction model for PSE&G’s overhead electrical distribution system, supervising undergraduate forecasters creating forecasts for PSE&G, leading weather discussions on conference calls in advance of adverse weather to aid in storm preparation, and providing subject matter expertise as needed.

Purpose of Affidavit

5. The purpose of my affidavit is to describe my forecasts for Tropical Storm Isaias and to explain why they were reasonable. I will discuss important weather forecasting concepts, different weather models, and my Isaias forecasts, and I will respond to specific statements in the Order1

and the Department of Public Service (Department) Report.2

Weather Forecasting Concepts

6. A weather forecast is a snapshot prediction of the future state of the atmosphere.

Meteorologists develop weather forecasts using many sources, including radar, satellite, and surface observation data, numerical weather prediction models (weather models), and model output statistics. Meteorologists apply their experience and expertise to such data to produce a weather

forecast. A weather forecast is typically comprised of temperature, precipitation, and wind

forecasts. In addition, meteorologists develop track forecasts, which predict the path of

tropical storms and sometimes nor’easters.

7. Numerical weather prediction models have become an integral part of developing

a weather forecast. For example, a single weather forecast may derive information from many

numerical weather prediction model forecasts. Numerical weather prediction models generally use differential equations to predict the future state of the atmosphere based on its initial conditions.

8. Meteorologists compare numerical weather prediction model forecasts to observed conditions to assess the strengths and weaknesses of a given model’s predictions.

Meteorologists typically assess observed conditions by analyzing satellite, radar, and surface weather station data. Often, meteorologists use historical weather model forecast performance as

1Case 20-E-0586, In the Matter of Department of Public Service Staff Investigation into the Utilities’ Preparation for and Response to August 2020 Tropical Storm Isaias and Resulting Electric Power Outages, Order to Commence Proceeding and Show Cause (issued November 19, 2020) (Order).

2 Id., New York State Department of Public Service Staff Interim Investigation Report on Tropical Storm Isaias (issued November 19, 2020) (Report).

a general guide on which models to favor. However, models also have inherent biases due to the physics or horizontal resolution (the distance between model calculation nodes) implemented within each model. As a result, post-processing of “raw” numerical weather prediction model output can result in improved weather predictions. This post-processing technique can take many forms, the most popular of which is called model output statistics. A meteorologist will use all this information (numerical weather prediction model forecasts, model output statistics forecasts, satellite data, radar data, surface observations, and his or her own experience) to develop a weather forecast.

9. The United States’ National Hurricane Center, which is part of the National

Weather Service (through the National Centers for Environmental Prediction), is the primary

source for information about tropical cyclones in the Atlantic Basin. The National Hurricane

Center develops hurricane-specific numerical weather prediction models and statistical models to

assist with tropical cyclone forecasting. The National Hurricane Center also develops consensus

models to improve track and intensity forecasts of tropical cyclones. The National Hurricane

Center produces forecasts of tropical cyclone track and intensity using these tools. It also

analyzes observational data to determine the location, intensity, and structure of tropical cyclones

for input into other numerical weather model simulations.

10. The National Weather Service develops local weather forecasts, which may use

numerical weather prediction models as inputs. The National Weather Service uses the National

Hurricane Center’s track and intensity forecasts as inputs to its own weather forecasts when tropical storms or hurricanes threaten a local area.

11. An “ensemble” is a collection of multiple numerical weather prediction model forecasts. Collectively, the group of forecasts captures the variability of the atmosphere more completely than a single forecast. Often, ensembles are developed by slightly varying the atmospheric initial conditions in a numerical weather prediction model and then running that same model over the various initial conditions to generate multiple forecasts for given steps forward in time, in essence capturing the natural chaos within the atmosphere.

Alternatively, the same initial conditions can be used in similar numerical weather prediction models where the variability in the atmosphere is captured by the differing model physics, parameterizations, and computational schemes.
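The initial-condition approach to building an ensemble can be illustrated with a toy sketch. The chaotic logistic map below stands in for a numerical weather prediction model; the model, member count, and perturbation size are purely illustrative and do not represent any operational system.

```python
# Toy illustration of building an ensemble by perturbing initial conditions.
# The chaotic logistic map stands in for a weather model; tiny differences
# in the starting state grow over time, so the ensemble spread widens.

def run_member(x0, steps, r=3.9):
    """Integrate the logistic map x -> r*x*(1-x) from initial condition x0."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        trajectory.append(x)
    return trajectory

def ensemble_spread(x0, n_members, perturbation, steps):
    """Run n_members copies of the same model from slightly perturbed
    initial conditions; return the spread (max - min) at each step."""
    offsets = range(-(n_members // 2), n_members // 2 + 1)
    members = [run_member(x0 + i * perturbation, steps) for i in offsets]
    return [max(m[t] for m in members) - min(m[t] for m in members)
            for t in range(steps + 1)]

spread = ensemble_spread(x0=0.5, n_members=5, perturbation=1e-6, steps=40)
# The initial spread is tiny (4e-6); after 40 steps it has grown by
# orders of magnitude, which is the chaos an ensemble is built to capture.
```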

12. Spaghetti plots3 are used to visualize the output from numerical weather

prediction ensembles. They take many forms. The two most common are meteograms and

spatial maps. Meteogram spaghetti plots generally show the variability of ensemble members for

a specific location and weather variable over time, such as temperature forecasts from ensemble

members for a single weather station. Such forecasts are beneficial for diagnosing confidence in

the forecast for a specific location for a specific weather parameter. Another common spaghetti

plot is in the form of a map. For example, a map can be generated to show the track predictions

of cyclone centers over time from all ensemble members. Such forecasts are beneficial for

diagnosing confidence in where a tropical cyclone will track.

Comparison of Weather Models

13. The main “global” models are the American (Global Forecast System, or GFS),

European (European Centre for Medium-Range Weather Forecasts Integrated Forecast System, or ECMWF) and Canadian (Canadian Meteorological Centre’s Global Environmental

Multiscale, or GEM) models. In my experience, the European model is better than the American

3 “Spaghetti plots” is the nickname given to the computer model images that show potential tropical cyclone paths. When shown together, the individual model tracks can somewhat resemble strands of spaghetti noodles.

and Canadian models at predicting the evolution of the atmosphere. Among the three, the

European model routinely shows the best forecast verification and the Canadian the worst.

There have also been several analyses comparing the performance of each model and showing that the European model has better verification over many events.4 In general, the National

Hurricane Center relies primarily on the American and European models, and the consensus models it develops.

14. The American model is run by the National Centers for Environmental Prediction

(NCEP), a division of the National Weather Service (and thus the National Oceanic and

Atmospheric Administration) that includes the National Hurricane Center. The principal advantage the European model has over the American model is in computational power. Recent upgrades to the supercomputers running the American model have boosted performance, but

NCEP runs the American model simultaneously with regional models, the American model

4 See, e.g., Hagedorn, R., Buizza, R., Hamill, T.M., Leutbecher, M. and Palmer, T.N. (2012), Comparing TIGGE multimodel forecasts with reforecast‐calibrated ECMWF ensemble forecasts. Q.J.R. Meteorol. Soc., 138: 1814-1827. https://doi.org/10.1002/qj.1895 (ECMWF forecasts were of comparable or superior quality to the multimodel predictions. The ECMWF EPS was the main contributor for the improved performance of the multimodel ensemble.); Zheng, Minghua, et al. Evaluating U.S. East Coast Winter Storms in a Multimodel Ensemble Using EOF and Clustering Approaches. Monthly Weather Review, vol. 147, no. 6, 2019, pp. 1967–1987. (The ECMWF ensemble has the best performance for the medium- to extended-range forecasts compared to the NCEP GFS and CMC GEM for historical East Coast cyclone cases at lead times of 1-9 days.); Korfe, N. G. and Colle, B. A., Evaluation of Cool-Season Extratropical Cyclones in a Multimodel Ensemble for Eastern North America and the Western Atlantic Ocean, Weather and Forecasting, vol. 33, no. 1, pp. 109–127, 2018. doi:10.1175/WAF-D-17-0036.1 (ECMWF has the greatest probabilistic skill when compared to CMC and NCEP; however, on average the 90-member multi-model ensemble (NCEP+CMC+ECMWF) has better probabilistic skill than any single ensemble.); Titley, HA, Bowyer, RL, Cloke, HL. A global evaluation of multi‐model ensemble tropical cyclone track probability forecasts. Q J R Meteorol Soc. 2020; 146: 531–545. https://doi.org/10.1002/qj.3712 (The verification results from the three individual ensembles show that the track probability forecasts from the ECMWF EPS display the best reliability, skill and value compared with the NCEP GEFS and MOGREPS-G.); Julian T. Heming, Fernando Prates, Morris A. Bender, Rebecca Bowyer, John Cangialosi, Phillippe Caroff, Thomas Coleman, James D. Doyle, Anumeha Dube, Ghislain Faure, Jim Fraser, Brian C. Howell, Yohko Igarashi, Ron McTaggart-Cowan, M. Mohapatra, Jonathan R. Moskaitis, Jim Murtha, Rabi Rivett, M. Sharma, Chris J. Short, Amit A. Singh, Vijay Tallapragada, Helen A. Titley, Yi Xiao, Review of Recent Progress in Tropical Cyclone Track Forecasting and Expression of Uncertainties, Tropical Cyclone Research and Review, Volume 8, Issue 4, 2019, Pages 181-218, ISSN 2225-6032, https://doi.org/10.1016/j.tcrr.2020.01.001. (ECMWF track forecasts in the Atlantic Basin are of similar skill to the National Hurricane Center’s track forecasts and consensus track forecasts but superior to the CMC and HWRF individual models for tropical cyclones in the 2015-2017 seasons).

ensembles, and multiple post-processing systems like model output statistics. The European supercomputers run only the European model and its ensemble. This allows the European model to harness more computing power for each model run. Also, the extra power allows the

European model to predict the atmosphere at a higher spatial resolution than the American model, which allows the European model to resolve finer details in the forecast compared to the

American model.

15. The other main difference between the American and European models is how the models set up the initial conditions, the snapshot of the weather at the very beginning of the forecasts. Due to chaos in the natural atmosphere, any errors in the initialization will grow over time and create inaccuracies in the numerical weather prediction model forecasts. The European model uses a “hot start” while the American model uses a “cold start.” Hot and cold refer to how much motion the atmosphere is allowed to have when the models start making forecasts. In a

“hot start” like the European model’s, the initialization scheme includes running the model to make forecasts on historical data leading up to the initialization time. During the European model’s “hot start” process, the model is also taking in actual weather conditions and adjusting the forecasts to better match the known weather conditions. Once the European model’s initialization process catches up to “now,” its mathematical atmosphere already represents the full motion of winds. The American model’s “cold start” takes the latest observational data and begins making forecasts without “spinning up” the atmosphere. This allows the American model to run with fewer computational resources than the European model but creates more numerous small-scale errors in the model’s forecasts that, on average, grow faster than errors within the European model.

16. The main “regional” models are the North American Mesoscale (NAM) model

and the Short Range Ensemble Forecast (SREF) model, an ensemble model based on the NAM.

The “high resolution” models are the high-resolution NAM (hiresNAM, nestedNAM, or

NAMnest), which is the NAM run for smaller areas and at finer horizontal resolution, the High

Resolution Rapid Refresh (HRRR) model, and the Rapid Refresh (RAP) model.

17. The National Hurricane Center also runs special hurricane models to predict tropical cyclone track and intensity. The National Hurricane Center combines its individual models into “consensus” models, which demonstrate a smaller track and intensity error than single model predictions.

18. The Order at footnote 23 cites the University Corporation for Atmospheric

Research (UCAR) website. UCAR does not run models. Rather, UCAR displays track and intensity predictions from other sources, such as the American model, Canadian model, National

Hurricane Center specific models, and National Hurricane Center consensus models, as spaghetti plots on a map. UCAR is essentially an aggregator in this regard. In general, the individual models displayed on the UCAR-generated maps often show worse forecast performance than the European model for track but are comparable for intensity forecasts.

Con Edison Weather and Impact Forecasts

19. One of my main responsibilities is forecasting storms and their potential impacts on Con Edison and O&R. My first step is to develop a weather forecast. This means I analyze numerical weather prediction model forecasts and compare them with satellite, radar, and surface weather observations and post-processed weather guidance such as model output statistics. I also factor in my experience with the given weather models within the anticipated weather pattern or scenario. My forecast is typically comprised of a temperature, precipitation, and wind forecast, and in the case of a tropical storm or hurricane, a track forecast.

20. Second, I develop a range of outcomes for each weather variable to assess my

confidence in the forecast and convey the uncertainty in the weather scenario. I base these

ranges on the spread of the numerical weather prediction guidance, my confidence in each model prediction, and my own experience with using these models as a forecast aid combined with the weather pattern or scenario.

21. Third, I develop an impact forecast for each operating region using impact models that I developed in 2015 and that I have updated annually since 2016. These models are tools that predict the number of outage job tickets the Company may experience for a given weather event. I use my weather forecast ranges as inputs into the impact models. The result is a range of outage jobs. I then enter the outage ranges into a probabilistic tool to produce a probabilistic forecast of outage jobs for each region for each day or weather event. I communicate my

weather and impact forecasts to Con Edison and O&R through email updates and during pre-

storm conference calls, which is what I did for Tropical Storm Isaias. To be clear, however, I do

not decide how Con Edison or O&R will staff for a storm. Con Edison and O&R operating personnel make that decision based in part on the information I provide them.

22. The impact models use multiple linear regression to relate historical daily outage

job data with historical daily weather variables. The models are fitted separately for each operating region and system. There are overhead distribution impact models for the

Bronx/Westchester, Brooklyn/Queens, Staten Island, and O&R areas and underground impact models fitted for the Brooklyn/Queens, Bronx, and Manhattan regions. After I update the models each year, I typically present on the improvements to my customers, the operational regions, and other management or interested parties. Over the years, I have made informal presentations to Utilimet, a group of utility-employed meteorologists who strive to collaborate

for improvements in weather and impact prediction. I also developed a presentation on the

impact models for the American Meteorological Society’s 2017 annual conference.5
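As a sketch of the multiple-linear-regression structure described above, the following fits invented daily weather variables to invented outage-job counts for a single hypothetical region. The predictors shown (gust, precipitation, foliage) are a small subset of the variables discussed later in this affidavit, and none of the numbers are Company data.

```python
# Minimal sketch of a multiple-linear-regression impact model for one
# hypothetical region. Every number below is invented for illustration;
# none of it is Con Edison data or the Company's actual model.
import numpy as np

# Daily predictors: [peak_gust_mph, liquid_precip_in, foliage_pct].
weather = np.array([
    [12.0, 0.0, 100.0],
    [25.0, 0.5, 100.0],
    [40.0, 1.2, 100.0],
    [55.0, 2.0, 100.0],
    [18.0, 0.1,  20.0],
    [48.0, 1.5,  20.0],
])
# Daily outage-job counts for the same days (invented).
outage_jobs = np.array([66.0, 125.0, 198.0, 275.0, 40.0, 186.0])

# Fit jobs ~ b0 + b1*gust + b2*precip + b3*foliage by least squares.
X = np.column_stack([np.ones(len(weather)), weather])
coeffs, *_ = np.linalg.lstsq(X, outage_jobs, rcond=None)

def predict_jobs(gust, precip, foliage):
    """Predict outage jobs for one scenario; feeding the low and high ends
    of a weather forecast range yields a range of outage jobs."""
    return float(coeffs @ np.array([1.0, gust, precip, foliage]))

# A forecast range for gust and precipitation maps to an outage-job range.
low, high = predict_jobs(35.0, 0.8, 90.0), predict_jobs(50.0, 1.5, 90.0)
```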

23. I use the underground models during adverse winter weather to describe the

impact salt dispersion and melting snow and ice have on the underground distribution system.

The underground models use daily data from 2008-2019 covering the December - March period.

24. I use the overhead models whenever there is potential for the weather to exceed

certain triggers, typically wind-related weather events. The overhead models also include

variables to diagnose the effects of wet snow and heavy precipitation. The input variables for the

overhead models include the temperature variable (a measure of the thermal inertia of the distribution system to capture heat effects); maximum temperature; liquid water equivalent precipitation (the amount of rainfall plus melted snow, sleet, and freezing rain); maximum wind gust over two consecutive days (or a single day if gusts will not exceed 30 miles per hour for two

consecutive days); peak wind gust direction; snowfall; ratio of total snowfall to liquid water equivalent snowfall (also known as the snow to liquid ratio – used for diagnosing the stickiness, or wetness, of the snow); rainfall; magnitude of coastal flooding at Kings Point, New York; and

several binary variables. The binary variables are, in effect, switches that tell the model whether lightning strikes, severe thunderstorms, or tropical cyclones will occur. O&R’s

overhead impact model also has a freezing rain variable to diagnose the effects of ice storms.

Con Edison’s overhead models do not include such a variable due to a lack of ice storm cases in

the Con Edison service territory. The models are fit with data since 2001 and employ a “roll-up”

technique to capture multi-day weather and impact events. For example, if a storm strikes on

5 While I developed the presentation, the presentation was delivered by a colleague. https://ams.confex.com/ams/97Annual/webprogram/Paper310999.html.

day 1, but weather or impact continues into day 2, the storm becomes a two-day event.
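The “roll-up” technique can be sketched as follows; the boolean per-day flags are an illustrative simplification of the actual weather and impact criteria.

```python
# Sketch of the "roll-up": consecutive days with storm weather or impact
# merge into one multi-day event. The daily True/False flags here are an
# illustrative simplification of the actual weather and impact criteria.

def roll_up(daily_flags):
    """Return (start_day, end_day) index pairs for each multi-day event,
    given one boolean per day (True = storm weather or impact that day)."""
    events = []
    start = None
    for day, active in enumerate(daily_flags):
        if active and start is None:
            start = day                      # a new event begins
        elif not active and start is not None:
            events.append((start, day - 1))  # the event ended yesterday
            start = None
    if start is not None:                    # event runs to the last day
        events.append((start, len(daily_flags) - 1))
    return events

# A storm strikes on day 1 and its impact continues into day 2, so days 1
# and 2 roll up into a single two-day event; day 4 is a separate event.
events = roll_up([False, True, True, False, True])  # [(1, 2), (4, 4)]
```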

25. The overhead models also use parameterizations I have developed to assess

foliage, soil moisture, and the potential for “major storms.” Foliage is calibrated using historical temperature data and PhenoCam data. The PhenoCam is a network of cameras that point towards vegetation. The individual images’ pixel colors are analyzed for the percentage of red, green, or blue in each image. Studying how the colors change through the course of the year yields information about leaf coverage compared to summer. I calculated the typical first date of full summer foliage and the first day of bare winter foliage and then use a temperature analysis to derive foliage as a percent, which allows me to fit a foliage curve for different weather station locations. The soil moisture is developed from calibrating historical precipitation and foliage data with historical streamflow sensor data from the United States Geological Survey station data. I use the historical relationship between precipitation and foliage with streamflow to first predict the streamflow. Next, I compare the predicted streamflow with the historical 120-day centered median and standard deviation of streamflow data for the date the storm is predicted to occur. I then calculate soil saturation as the predicted streamflow divided by the sum of the median and standard deviation streamflow.
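The final soil saturation ratio described above can be written directly. This sketch assumes the streamflow prediction step has already run (the regression from precipitation and foliage to streamflow is not shown), the choice of population standard deviation is an assumption, and the flow values are invented.

```python
# Sketch of the final soil saturation ratio. The regression that predicts
# streamflow from precipitation and foliage is not shown; the caller is
# assumed to supply the predicted flow and the historical 120-day centered
# window of streamflow values for the storm date. Using the population
# standard deviation is an assumption, and the flows below are invented.
import statistics

def soil_saturation(predicted_streamflow, window_flows):
    """Soil saturation = predicted streamflow / (median + standard
    deviation) of the historical streamflow window for the storm date."""
    median = statistics.median(window_flows)
    stdev = statistics.pstdev(window_flows)
    return predicted_streamflow / (median + stdev)

# Invented historical window (cubic feet per second) and predicted flow.
window = [80.0, 100.0, 120.0, 100.0, 100.0]
saturation = soil_saturation(110.0, window)  # close to, but under, 1.0
```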

26. The overhead model also employs a wind vector technique I developed to

measure how rare the wind gust and direction are (wind speed and direction define a wind

vector). The rationale behind this variable is that the longer time between windstorms with a

given peak wind gust direction, the more trees become exposed to such winds. The technique

searches historical records for events where the winds are measured to be equal to or greater than

the event’s wind gust speeds from a similar direction. Then the difference between the storm’s start date and the last date on which the peak gust came from a similar direction with equal or greater strength is calculated and assigned as the wind vector method value.
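The wind vector search can be sketched as follows; the affidavit does not define “similar direction,” so the 30-degree tolerance and the records below are illustrative assumptions.

```python
# Sketch of the wind vector search: how many days since winds of equal or
# greater gust last blew from a similar direction. The 30-degree direction
# tolerance and all records below are illustrative assumptions; the
# affidavit does not state the actual tolerance.
from datetime import date

def wind_vector_days(storm_start, event_gust, event_dir, history,
                     dir_tolerance=30.0):
    """Return days between storm_start and the most recent prior record
    with gust >= event_gust from within dir_tolerance degrees, or None."""
    def similar(d1, d2):
        diff = abs(d1 - d2) % 360.0
        return min(diff, 360.0 - diff) <= dir_tolerance

    matches = [day for day, gust, direction in history
               if day < storm_start and gust >= event_gust
               and similar(direction, event_dir)]
    return (storm_start - max(matches)).days if matches else None

history = [                               # (date, gust_mph, direction_deg)
    (date(2015, 3, 10), 60.0, 90.0),      # strong easterly gust: a match
    (date(2018, 8, 2), 55.0, 300.0),      # strong, but from the northwest
    (date(2019, 10, 17), 45.0, 100.0),    # similar direction, too weak
]
days_since = wind_vector_days(date(2020, 8, 4), 58.0, 95.0, history)
```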

27. After Winter Storms Riley and Quinn, I added a variable to the overhead models

that I call the “major storm” variable to account for overall cyclone intensity and duration. The

major storm variable is comprised of the sum of high tide water level values relative to minor

flood level (10 feet above Mean Lower Low Water) as measured by the Kings Point, New York tidal gauge for the duration of a weather event. For example, if a storm affects the Company for three high tide cycles and the total water levels at Kings Point, New York are 10.1 feet above

MLLW, 11 feet above MLLW, and 10.5 feet above MLLW, then the major storm variable is 1.6

(a result of the sum of each high tide water level minus 10 feet). The rationale behind this variable follows from an analysis of the typical weather pattern for many highly impactful weather events in the Company’s service territory. This pattern is usually comprised of a deep low-pressure system (characterized by low values of mean sea level pressure) near or just south of New York City and relatively higher mean sea level pressure readings north of New York

City. Such a pattern would result in a general easterly wind, which tends to blow across the western Atlantic Ocean and local area, piling water along the western end of Long Island Sound, where the Kings Point, New York tidal gauge is located. The magnitude of peak water levels is correlated with storm intensity and/or strong pressure gradients and large-scale strong wind conditions typical of events that tend to bring higher overhead impact than would otherwise be expected. Summing the high tide values over multiple high tides accounts for duration when weather systems persist for multiple high tides. Duration is important for major storms because, all things being equal, a storm with longer duration is likely to result in higher impact levels.
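The arithmetic in the example above can be expressed directly; this sketch assumes, as in the example, that every high tide in the event is at or above the minor flood level.

```python
# The "major storm" variable from the example above: the sum of each high
# tide's water level minus the 10-foot minor flood level at Kings Point.
# This sketch assumes, as in the example, that every high tide in the
# event is at or above the minor flood level.
MINOR_FLOOD_LEVEL_FT = 10.0  # feet above Mean Lower Low Water (MLLW)

def major_storm_variable(high_tide_levels_ft):
    """Sum each high-tide level's excess over the minor flood level."""
    return sum(level - MINOR_FLOOD_LEVEL_FT for level in high_tide_levels_ft)

# Three high tides at 10.1, 11, and 10.5 feet above MLLW give 1.6.
value = major_storm_variable([10.1, 11.0, 10.5])
```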

28. The overhead models also have variable transformations which combine multiple

inputs into a unique predictor that improves the model results. For example, the models include

a “tropical” variable, which is a function of peak gust speed, peak gust direction, foliage, rainfall,

and the “tropical binary” variable. The tropical variable is designed as a throttle for the impact

model predictions during tropical cyclones such that tropical cyclones with high precipitation,

strong winds, and high foliage are more impactful than events with relatively lower winds,

precipitation, or foliage. For example, a tropical cyclone may strike the area with powerful

winds from a rare direction with heavy rainfall in the middle of summer, like Tropical Storm

Irene in 2011. The tropical variable yields a high value for such an event and is assigned a high

weight by the regression program so that the predictions are increased for strong tropical

cyclones because it bears a high correlation with such events. However, not all tropical cyclones

are created equal. For example, Tropical Storm Fay, which struck the Company’s service

territory on July 10, 2020 as a 40 mile per hour tropical storm, was a minor event for both Con

Edison and O&R. The impact models predicted a range of 32-258 outage jobs for both service territories and the actual number of jobs was 180. However, had the “tropical” binary variable

been switched off, or had the “tropical” transformation not been present, the impact predictions for Fay would have underestimated the impact Fay brought to Con Edison’s and O&R’s service territories by about 50 percent.
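The affidavit does not state the functional form of the “tropical” transformation; the simple product below is an illustrative stand-in only, showing the intended throttle behavior (zero when the tropical binary is off, larger for stronger, wetter, leafier events).

```python
# Illustrative stand-in for the "tropical" transformation. The affidavit
# describes it as a function of peak gust, gust direction, foliage,
# rainfall, and the tropical binary, acting as a throttle; the product
# form, scaling, and direction_rarity factor below are assumptions, not
# the actual model.

def tropical_variable(peak_gust_mph, rainfall_in, foliage_pct,
                      tropical_binary, direction_rarity=1.0):
    """Zero unless the tropical binary is on; otherwise grows with gust,
    rainfall, and foliage. direction_rarity is a hypothetical factor for
    how unusual the gust direction is."""
    if not tropical_binary:
        return 0.0
    return peak_gust_mph * rainfall_in * (foliage_pct / 100.0) * direction_rarity

# An Irene-like event (strong wind, heavy rain, full summer foliage)
# yields a much larger value than a Fay-like event with weaker winds.
irene_like = tropical_variable(65.0, 5.0, 100.0, True)
fay_like = tropical_variable(40.0, 2.0, 100.0, True)
```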

29. After running the impact models, I review their output to confirm that the impact predictions are sensible and correlate with the weather predictions. On the rare occasion when the impact model predictions raise questions, I pursue the root cause of any issues immediately, and if necessary, re-fit the model. For example, an older version of the impact model included a variable that broke the peak wind gust down into discrete categories such as 0 for peak gusts less than 45 miles per hour, 1 for values greater than or equal to 45 but less than 55 miles per hour, and 2 for peak gusts greater than or equal to 55 miles per hour. After observing the model

produce an impact prediction that did not make sense, I realized that this needed to be changed.

Specifically, the model predicted a very small increase in impact when changing the peak gust

from 43 to 44 miles per hour, but a very large increase in impact when increasing the peak gust

from 44 to 45 miles per hour. As such, I judged the model’s sensitivity to be non-intuitively high

around these categorical breakpoints. Accordingly, I re-fit the model omitting the categorical

variables.
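The retired categorical gust variable can be reconstructed from the thresholds stated above (the function name is illustrative); the breakpoint sensitivity is then easy to see.

```python
# Reconstruction of the retired categorical gust variable from the stated
# thresholds. The discontinuity that motivated its removal is visible at
# the 45 mph breakpoint.

def gust_category(peak_gust_mph):
    """0 for gusts below 45 mph, 1 for 45 up to (but not including) 55,
    and 2 for 55 mph and above."""
    if peak_gust_mph < 45.0:
        return 0
    if peak_gust_mph < 55.0:
        return 1
    return 2

# No change from 43 to 44 mph, but a full category jump from 44 to 45:
steps = [gust_category(g) for g in (43.0, 44.0, 45.0)]  # [0, 0, 1]
```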

30. Before the current impact models, the Company used impact models developed

by O’Neill Consulting, IBM (Deep Thunder), and then refreshed versions of the O’Neill model.

Both the O’Neill model and IBM’s Deep Thunder model were for the Bronx/Westchester region only and used a limited history of outage job tickets and weather data to predict outages. The

IBM model employed a numerical weather prediction model directly coupled to the outage model it developed to produce impact forecasts. The Company was dissatisfied with the O’Neill model because it often over-predicted the impact for smaller-scale events due to its limited calibration dataset. In other words, the O’Neill model was developed almost exclusively from events that resulted in Company impact above normal operations. As a result, the model was poorly calibrated for distinguishing between non-impactful days and impactful days. The current model that I developed has improved calibration for distinguishing impactful and non-impactful days due to its inclusion of non-impactful dates. The new model also has improved performance for higher impact events due to the addition of the objective foliage and soil moisture parameters, the wind vector technique, and other predictor transformations not in the O’Neill model. The

Company phased the O’Neill model out of operations in 2016 once the models I developed became operational. The Company was dissatisfied with the IBM models because the IBM impact model was coupled to the IBM Deep Thunder numerical weather prediction model. By

coupling the numerical weather prediction model with the outage model and using the same developmental dataset as the O’Neill model, the Company often received poor predictions of impacts that did not materialize. This is because coupling the numerical weather prediction model with the outage model forces the outage model to use the numerical weather prediction forecasts as the sole input. In other words, a poor model forecast which would usually be given low weight in a forecast creation process by a meteorologist would be the only weather forecast used in IBM’s outage prediction model. For example, IBM’s numerical weather prediction model tended to over-intensify thunderstorms. This resulted in many impact predictions that were unrealistically high compared to actual data.

Isaias Forecasts

31. I developed Con Edison’s and O&R’s weather and track forecasts for Tropical

Storm Isaias according to the process I have already discussed. Starting on Wednesday, July 29, and continuing every day through Tuesday, August 4, I analyzed the latest satellite data and

trends to draw conclusions about the storm’s structure, intensity, and track. Next, I analyzed

different numerical weather prediction models and compared model predictions to actual conditions. Then, I analyzed various track and intensity model predictions and compared them

to the National Hurricane Center’s forecasts, which I regarded as a “sanity check.” I also

reviewed the National Weather Service’s forecasts. Lastly, I developed my own track and

intensity forecasts, including an assessment of the various possible paths Isaias might take to

determine the potential impact on the Con Edison and O&R service territories. During my

deposition in this case on October 13, 2020, I was asked about how I developed my forecasts for

Isaias and gave essentially the same explanation.

32. Overall, my forecast tracks from July 29 through August 4 and my expected storm

arrival time aligned well with the National Hurricane Center’s. As explained below, my weather

forecasts for Con Edison and O&R were of similar or improved accuracy when compared to

those of the National Weather Service, which makes local forecasts.

33. Exhibit CERRUTI-1 is the special email briefing (sent in addition to my daily

forecast) I sent on the morning of Wednesday, July 29 discussing “Potential Tropical Cyclone

9,” which would eventually become Tropical Storm Isaias. I discussed the two most likely

scenarios at that point, one based on the European model, which is the left image in the body of

the email, and the other based on the American model, which is the right image in the body of the email. I stated that I favored the European model, which brought the storm over Hispaniola

and then towards Florida. The American model had a stronger storm tracking north of

Hispaniola and possibly up the coast. I stated that both scenarios were possible, and that

“Potential Tropical Cyclone 9” could bring tropical storm force winds to the Con Edison and

O&R service territory by 8:00 p.m. on August 3.

34. Exhibit CERRUTI-2 shows the National Hurricane Center’s track forecast issued

on July 29 around the same time as my July 29 special briefing. The National Hurricane

Center’s track forecast is most like the European model’s predictions, but the National Hurricane

Center’s forecast does not extend as far out in time as the images I chose to include in my special

briefing. This is because the National Hurricane Center’s track forecast did not assess the risks

of the system tracking into Con Edison’s and O&R’s service territories this far out. I chose to

include information that extends further in time than the National Hurricane Center’s track

forecasts to highlight the storm’s risks to Con Edison’s and O&R’s service territories.

35. Exhibit CERRUTI-3 shows my weather forecast from the morning of July 29.

While I had issued a special briefing on Potential Tropical Cyclone 9 that day, I did not include

any special update in my standard weather forecast because the storm was not yet a “named” storm. My weather forecast for August 4 called for winds of 5-15 miles per hour with gusts up to 20 miles per hour and isolated showers and thunderstorms.

36. Exhibit CERRUTI-4 contains the point forecast matrices issued on July 29 before 9:30 a.m. by the National Weather Service Upton, New York Weather Forecast Office for Con

Edison’s and O&R’s service territories.6 These are the last National Weather Service forecasts issued before my weather forecast. Consistent with my weather forecast, the National Weather

Service forecast for August 4 was “Light Winds,” winds of 0-7 miles per hour, and a 50 percent chance of rain.

37. Exhibit CERRUTI-5 is the special email briefing I sent on the morning of

Thursday, July 30. I used PowerPoint to draw my Isaias track scenarios onto a background image comprised of the European model’s ensemble track forecasts, which I obtained from

CFAN, a vendor that produces European ensemble track forecasts for tropical cyclones. I chose the European model from CFAN because the European Model historically has the best track verification of any numerical weather prediction model and its ensemble captures the track uncertainty better than most other tools. I overlaid three scenarios: Track 1, a weaker storm that dissipates over Hispaniola and drifts toward south Florida; Track 2, a weaker storm that survives

Hispaniola, brushes eastern Florida, then tracks up the East Coast, making landfall in central

Long Island; and Track 3, a stronger storm that curves out to sea and misses the East Coast. I assessed my confidence in each scenario and chose to place the highest confidence on Track 2,

6 The point forecast matrices presented as exhibits are available at: https://mesonet.agron.iastate.edu/wx/afos/p.php?pil=PFMDMX&e=202012140150. For a key to decoding the information see https://www.weather.gov/lwx/readrdf.

which showed that I expected the storm to track close to the Con Edison service territory. It should

be noted that any track scenario between Track 2 and Track 3 depicted a cyclone of relatively low intensity, which would require a track well inland, resulting in further weakening before

arriving in Con Edison’s or O&R’s service territory. I forecast that the storm could bring

tropical storm force winds to the area as early as 12:00 p.m. on Tuesday, August 4.

38. Exhibit CERRUTI-6 shows the National Hurricane Center’s track forecast issued around the same time as my July 30 special briefing. It shows Isaias tracking within the bounds I drew on my forecast. As shown in Exhibit CERRUTI-7, the National Hurricane Center did not

include an arrival time for the Company’s service territories.

39. Exhibit CERRUTI-8 is my weather forecast from the morning of July 30, which includes an update on Tropical Storm Isaias. My weather forecast for August 4 was again for 5-15 mile per hour winds, gusts up to 20 miles per hour, and isolated showers and thunderstorms.

40. Exhibit CERRUTI-9 shows the National Weather Service’s point forecast

matrices issued on the morning of July 30. The National Weather Service’s forecast for August

4 was again for “Light Winds,” defined as 0-7 miles per hour, and a 50 percent chance of rain.

41. Exhibit CERRUTI-10 is the email briefing I sent to the Company the morning of

Friday, July 31. I again used PowerPoint and European model ensemble information from

CFAN. I revised my confidence for each track based on the latest track forecasts from various

sources, primarily the European model and its ensemble, and the latest actual track data. I no

longer expected Track 1 because Isaias had re-developed its center north of Hispaniola and

intensified. Between Tracks 2 and 3, I thought Track 3 was more likely because Isaias had survived passage over Hispaniola and intensified, similar to the Track 3 scenario. I was concerned about the abundance of western-leaning model forecasts, however, so I stated that a

track between Track 2 and Track 3 was the most likely outcome. I forecast that the storm could bring tropical storm force winds to the area as early as 12:00 p.m. on Tuesday, August 4.

42. Exhibit CERRUTI-11 is the National Hurricane Center’s track forecast issued

around the same time as my forecast on July 31. The National Hurricane Center’s track lies in-

between my Track 2 and Track 3 scenarios. As shown in Exhibit CERRUTI-12, the National

Hurricane Center predicted that tropical storm force winds could arrive around Monday evening.

My timing forecast was a bit later than the National Hurricane Center’s forecast because I expected Isaias to slow down off the coast of Florida, which is what happened.

43. Exhibit CERRUTI-13 is my weather forecast from the morning of July 31. My

weather forecast for August 4 was for Isaias’s circulation to cause breezy winds along the coast

(read in the email as “NYC,” with “NYC and southern Westchester” implied and understood by the Company), but lighter winds inland (read in the email as “O&R,” with “O&R and northern Westchester” implied and understood by the Company). The coastal wind forecast was for 15-25 mile per hour winds with peak gusts up to 35 miles per hour. The inland wind forecast was

for winds of 5-15 miles per hour with gusts up to 25 miles per hour. I also forecast periods of

rain with embedded thunderstorms. This implies rain is expected and will be heavier than

showers.

44. Exhibit CERRUTI-14 shows the National Weather Service point forecast matrices

issued the morning of July 31. The National Weather Service’s forecast for August 4 was for

“Gentle” winds across New York City (8-14 miles per hour), with “Light” (0-7 miles per hour)

winds elsewhere and a 50 percent chance of rain.

45. Exhibit CERRUTI-15 is the email briefing I sent to the Company the morning of

Saturday, August 1. I again used PowerPoint and the European model ensemble information

from CFAN. I further revised my confidence for each track based on the latest track forecasts

from various sources, primarily the European model and its ensemble, and the latest actual track

data. Track 2 showed Isaias making landfall in southern Florida, tracking up the East Coast well

inland and away from the ocean, and tracking slightly west of New York City. This track

represented the western edge of the guidance envelope based on the latest information. Track 3 showed a stronger storm, which brushed against the Outer Banks of North Carolina, then curved

out to sea. I considered both Tracks to be “Likely,” but continued to state that the most likely

track would be somewhere between the two. I also forecast that the storm could bring tropical

storm force winds to the area as early as 12:00 p.m. on Tuesday August 4.

46. Exhibit CERRUTI-16 shows the National Hurricane Center’s track forecast

issued around the same time as my forecast on August 1. The National Hurricane Center’s track

lies in-between my Track 2 and Track 3 scenarios. As shown in Exhibit CERRUTI-17, the

National Hurricane Center predicted that tropical storm force winds could arrive late Tuesday

morning or early Tuesday afternoon, almost identical to my forecast.

47. Exhibit CERRUTI-18 is my weather forecast from the morning of August 1. My forecast for August 4 was for winds of 15-25 miles per hour, with peak gusts up to 35 miles per

hour, and inland winds of 5-15 miles per hour, with peak gusts up to 25 miles per hour. I also

forecast periods of rain with embedded thunderstorms and 1-4 inches of rainfall.

48. Exhibit CERRUTI-19 shows the National Weather Service Point Forecast

Matrices issued the morning of August 1. The National Weather Service’s forecast for August 4

was for “Breezy” winds across New York City (15-22 miles per hour), with “Gentle” (8-14 miles per hour) winds for Westchester, and “Light” (0-7 miles per hour) winds further north and west.

The precipitation forecast included a 50 percent chance of rain.

49. Exhibit CERRUTI-20 is the email briefing I sent to the Company the morning of

Sunday, August 2. I again used PowerPoint and the European model ensemble information from

CFAN. I further revised my confidence for each track based on the latest track forecasts from various sources, primarily the European model and its ensemble, and the latest actual track data.

Track 2 showed Isaias making landfall along the North and South Carolina border, tracking up the East Coast well inland and away from the ocean, and tracking slightly west of New York

City. This track represented the western edge of the guidance envelope based on the latest information. A track similar to Track 2 would likely result in a relatively weak storm due to its track well inland and away from the warm ocean surface temperatures. Track 3 showed a stronger storm, which brushed against the Outer Banks of North Carolina, then curved out to sea.

Track 2 was now “Expected” compared to Track 3. I forecast that the storm could bring tropical storm force winds to the area as early as 12:00 p.m. on Tuesday, August 4.

50. Exhibit CERRUTI-21 shows the National Hurricane Center’s track forecast issued around the same time as my forecast on August 2. The National Hurricane Center’s track

lies in-between my Track 2 and Track 3 scenarios. As shown in Exhibit CERRUTI-22, the

National Hurricane Center predicted that tropical storm force winds could arrive late Tuesday

morning or early Tuesday afternoon, almost identical to my forecast.

51. Exhibit CERRUTI-23 shows my weather forecast from the morning of August 2.

My weather forecast for August 4 was for winds of up to 40 miles per hour with peak gusts up to

55 miles per hour along the coast and winds up to 25 miles per hour with peak gusts up to 40

miles per hour inland. I also forecast periods of rain and embedded thunderstorms resulting in 1-

4 inches of rainfall.

52. Exhibit CERRUTI-24 shows the National Weather Service point forecast

matrices issued the morning of August 2. The National Weather Service’s forecast for August 4

was for sustained winds of 37 miles per hour with peak gusts of 56 miles per hour and

1.35 inches of rain for New York City, sustained winds of 28 miles per hour with peak gusts of

43 miles per hour and 2.24 inches of rain for Westchester, and sustained winds of 13 miles per

hour and no wind gusts and 1.37 inches of rain further northwest for Montgomery, NY.

53. Exhibit CERRUTI-25 is the email briefing I sent to the Company the morning of

Monday, August 3. I again used PowerPoint and the European model ensemble information from CFAN. I further revised my confidence for each track based on the latest track forecasts from various sources, primarily the European model and its ensemble, and the latest actual track data. Track 2 continued to be “Expected” and showed Isaias making landfall along the North

Carolina border, tracking up the East Coast well inland and away from the ocean, and tracking slightly west of New York City. Track 3 was “Not Expected.” I forecast that the storm could bring tropical storm force winds to the area as early as 12:00 p.m. on Tuesday, August 4.

54. Exhibit CERRUTI-26 shows the National Hurricane Center’s track forecast issued around the same time as my forecast on August 3. The National Hurricane Center’s track lies very close to my Track 2. As shown in Exhibit CERRUTI-27, the National Hurricane Center

predicted that tropical storm force winds could arrive late Tuesday morning or early Tuesday afternoon, almost identical to my forecast.

55. Exhibit CERRUTI-28 shows my weather forecast from the morning of August 3.

My weather forecast for August 4 was for winds up to 45 miles per hour with peak gusts up to 60

miles per hour along the coast and winds up to 30 miles per hour with peak gusts up to 45 miles

per hour inland. I also forecast periods of rain and embedded thunderstorms resulting in 1-4

inches of rainfall for New York City and Westchester and 3-6 inches of rain for O&R.

56. Exhibit CERRUTI-29 shows the National Weather Service point forecast matrices

issued the morning of August 3. The National Weather Service’s forecast for August 4 was for

sustained winds of 51 miles per hour with peak gusts of 70 miles per hour and 2.40 inches

of rain for New York City, sustained winds of 44 miles per hour with peak gusts of 62 miles per

hour and 3.11 inches of rain for Westchester, and sustained winds of 26 miles per hour with peak

gusts of 40 miles per hour and 3.52 inches of rain further northwest for Montgomery, NY.

57. Exhibit CERRUTI-30 is the email briefing I sent to the Company the morning of

Tuesday August 4. I again used PowerPoint, but this time I updated my track scenario onto a background image comprised of the latest National Hurricane Center track forecast. I accessed the National Hurricane Center’s track for this image through DTN’s WeatherSentry website, zoomed into the Company’s service territories. DTN is the Company’s corporate weather vendor and provides custom mapping and National Hurricane Center track information as part of

its product services. I used this approach because the track scenario I envisioned closely

matched that of the National Hurricane Center, and my confidence in the storm’s track was high

as of the morning of August 4. I forecast that the storm could bring tropical storm force winds to

the area as early as 12:00 p.m. on Tuesday, August 4. As shown in Exhibit CERRUTI-31, the

National Hurricane Center predicted that tropical storm force winds could arrive late Tuesday

morning or early Tuesday afternoon, almost identical to my forecast.

58. Exhibit CERRUTI-32 shows my weather forecast from the morning of August 4.

My weather forecast for that day was for winds up to 45 miles per hour with peak gusts up to 65

miles per hour along the coast and winds up to 30 miles per hour with peak gusts up to 50 miles

per hour inland. I also forecast periods of rain and embedded thunderstorms resulting in 1-4

inches of rainfall for New York City and Westchester and 3-6 inches of rain for O&R.

59. Exhibit CERRUTI-33 shows the National Weather Service point forecast matrices issued the morning of August 4. The National Weather Service’s forecast for that day was for sustained winds of 39 miles per hour with peak gusts of 57 miles per hour and 0.44 inches of rain for New York City, sustained winds of 39 miles per hour with peak gusts of 57 miles per hour and 0.63 inches of rain for Westchester, and sustained winds of 40 miles per hour with peak gusts of 59 miles per hour and 1.90 inches of rain further northwest for Montgomery, NY.

60. I note that the National Weather Service’s sustained wind and peak gust forecast for August 4 is lower than the forecast I issued on August 3. This may be because the point forecast matrices capture the wind information as a snapshot valid for a given date and time, meaning that the forecast may have been higher than shown in the point forecast matrices. However, these values also represent the National Weather Service wind forecast valid for the time when the actual winds peaked across the area. The peak gusts across most of the area actually occurred around 1:00 p.m. on August 4, and the peak wind speed and gust in the point forecast matrices correspond to the “17” hour forecast, which, adjusted from Greenwich Mean Time to Eastern local time, yields a wind forecast valid for 1:00 p.m. on August 4. Therefore, the wind conditions predicted for 1:00 p.m. by the National

Weather Service correspond closely with the timing of the actual peak winds and can be taken as the National Weather Service’s true peak wind and gust forecast. As my discussion of my track forecasts demonstrates, there were no material differences between my track forecasts and the

National Hurricane Center’s track forecasts from July 29 through August 4. However, my track forecasts from July 29 through August 2 favored a more westerly track than the National Hurricane

Center, meaning that my forecasts were slightly closer to the actual track of the storm than the

National Hurricane Center at this time.
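The time conversion described above can be sketched in a few lines; the “17” hour column in the point forecast matrices is 1700 Greenwich Mean Time (UTC), which in early August falls in Eastern Daylight Time (UTC-4):

```python
from datetime import datetime, timedelta, timezone

# The "17" hour forecast is valid at 1700 UTC on August 4, 2020.
valid_utc = datetime(2020, 8, 4, 17, 0, tzinfo=timezone.utc)

# Eastern Daylight Time is UTC-4 during the summer.
edt = timezone(timedelta(hours=-4), name="EDT")
valid_local = valid_utc.astimezone(edt)

print(valid_local.strftime("%I:%M %p %Z"))  # 01:00 PM EDT
```

This confirms that the “17” hour column corresponds to 1:00 p.m. local time, the hour when the winds actually peaked.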

61. Both my track forecasts and the National Hurricane Center’s track forecasts show

a general westward shift in each successive forecast, which was consistent with the model

consensus. However, this general westward shift is different from what occurred on August 4, the day of the storm. By August 3, the models had all developed high confidence in a track just west of New York City through Rockland and Westchester counties. This forecast generally held through the morning of August 4. However, from 8:00 a.m. to 2:00 p.m. on August 4, the storm’s actual track deviated roughly 35 miles west of the predicted track. This deviation placed the cyclone outside the cone of uncertainty issued by the National

Hurricane Center’s cone of uncertainty is a method of depicting the potential track of a tropical cyclone. Its width is calibrated from historical National Hurricane Center track forecast errors and centered on the National Hurricane Center’s forecast track. The width in either direction is set so that, based on the calculated historical track errors, there is a total two-thirds chance that the cyclone will track within the cone. That means there is a one-in-three chance of a cyclone tracking outside of the cone and a one-in-six chance of a cyclone tracking to either side of the cone (left of the cone

or right of the cone). Isaias’s actual deviation from the National Hurricane Center’s track and

cone represents an approximately 1 in 400 chance of occurring. I calculated this from the typical

forecast track deviation of the National Hurricane Center’s track forecast at 12-hours

(approximately 25 miles). Interpolating this value to six hours yields a six-hour deviation of

12.5 miles, assuming the National Hurricane Center’s initial position is equal to the storm’s actual initial position. Entering the interpolated error of 12.5 miles into a normal probabilistic distribution as the spread, with a deviation of 35 miles, yields a deviation at the 99.75th percentile

in the six-hour forecast.
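The calculation described above can be reproduced approximately with a short one-sided normal tail computation; this is a sketch of the stated method (25-mile typical 12-hour error, linearly interpolated to a 12.5-mile 6-hour spread, against the observed 35-mile deviation), not the affiant’s actual worksheet:

```python
from statistics import NormalDist

typical_12h_error_mi = 25.0                       # typical NHC 12-hour track error
six_hour_spread_mi = typical_12h_error_mi / 2.0   # linear interpolation -> 12.5 miles
observed_deviation_mi = 35.0                      # Isaias's actual 6-hour deviation

# Treat the 6-hour track deviation as normally distributed, with the
# interpolated error as its spread (standard deviation).
dist = NormalDist(mu=0.0, sigma=six_hour_spread_mi)
percentile = dist.cdf(observed_deviation_mi) * 100.0
exceedance = 1.0 - dist.cdf(observed_deviation_mi)   # one-sided tail probability

print(f"percentile: {percentile:.2f}")
print(f"odds: about 1 in {round(1.0 / exceedance)}")
```

With these inputs the one-sided tail works out to roughly the 99.74th percentile, or about 1 in 390, consistent with the approximately 1-in-400 figure stated above.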

62. Exhibit CERRUTI-34 shows the wind speed and wind gust forecasts that I issued

in advance of the storm for Con Edison and O&R. Exhibit CERRUTI-35 shows the wind speed

and wind gust forecasts that the National Weather Service issued for Con Edison and O&R from

the point forecast matrices valid for JFK Airport to represent New York City, White Plains, New

York to represent Westchester, and Montgomery, New York to represent O&R. It also shows

the Root Mean Squared Error (RMSE), a typical forecast accuracy metric, of the peak sustained

and peak gust forecast for each area. The National Weather Service and I issued similar

forecasts.

63. Exhibit CERRUTI-36 shows the forecast performance of my peak sustained wind predictions compared to those of the National Weather Service, calculated from the values given in Exhibits CERRUTI-34 and CERRUTI-35. It shows the RMSE for my peak sustained wind forecasts and those of the National Weather Service. It also shows the area-averaged RMSE for my forecasts compared to the National Weather Service. Lastly, it shows the percent improvement of my forecast compared to the National Weather Service for the dates in question, where a positive value indicates that my forecast shows an improvement. The data shows that over this period, my peak sustained wind forecasts showed a 31.5 percent improvement over the

National Weather Service peak sustained wind forecasts issued at the same times with respect to the actual winds that occurred on August 4.
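The RMSE and percent-improvement comparison described above can be sketched as follows; the forecast and observed values are hypothetical placeholders, not the data in Exhibits CERRUTI-34 through CERRUTI-36:

```python
import math

def rmse(forecasts, actual):
    """Root Mean Squared Error of a series of forecasts against one observed value."""
    return math.sqrt(sum((f - actual) ** 2 for f in forecasts) / len(forecasts))

actual_peak_mph = 45.0                           # hypothetical observed peak sustained wind
company_fcsts = [15.0, 25.0, 40.0, 40.0, 45.0]   # hypothetical daily Company forecasts
nws_fcsts = [7.0, 14.0, 37.0, 51.0, 39.0]        # hypothetical NWS forecasts, same dates

# Positive improvement means the Company forecast had the lower RMSE.
improvement = (rmse(nws_fcsts, actual_peak_mph) - rmse(company_fcsts, actual_peak_mph)) \
              / rmse(nws_fcsts, actual_peak_mph) * 100.0
print(f"percent improvement: {improvement:.1f}%")
```

The exhibit’s stated 31.5 percent figure would follow from applying the same formula to the actual forecast and observation series.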

64. Exhibit CERRUTI-37 shows the forecast performance of my peak wind gust

predictions compared to those of the National Weather Service, calculated from the values given

in CERRUTI-34 and CERRUTI-35. It shows the RMSE for my peak wind gust forecasts and

those of the National Weather Service. It also shows the area-averaged RMSE for my forecasts

compared to the National Weather Service. Lastly, it shows the percent improvement of my

forecast compared to the National Weather Service for the dates in question. The data shows

that over this period, my peak wind gust forecasts showed a 27.5 percent improvement over the

National Weather Service peak wind gust forecasts issued at the same times as compared to the

actual winds that occurred on August 4.

65. Exhibit CERRUTI-38 shows how the National Hurricane Center’s tropical storm

force wind speed probability forecasts changed over time for three specific locations: Central

Park, JFK Airport, and Poughkeepsie. Exhibit CERRUTI-38 shows that the probability of

tropical storm force winds generally increased as Isaias approached the Company’s service

territory, but that the increase was not steady and monotonic. There are peaks and valleys in the

chart which represent the probability of tropical storm force winds decreasing from one forecast

to the next.

Allegations

66. The Order (at 16) states that each day from July 31 through August 4 there was a

“considerable possibility” that Isaias would have “dramatic impacts on electric service” when the

storm struck New York. This is incorrect. The phrase “considerable possibility” has no meaning in meteorology. The appropriate focus should be on “probability,” and my track and weather forecasts reflected the gradual westward shift in a similar manner to the National Hurricane Center and the National Weather Service. Exhibit CERRUTI-38 shows the National Hurricane Center’s predicted probability of tropical storm force winds (sustained winds greater than 39 miles per hour) for every Isaias update it issued from July 28 to August 4.7 The probability of tropical

storm force winds is given for three locations in or around Con Edison’s and O&R’s service

territories: JFK Airport, Central Park, and Poughkeepsie. Contrary to the Order’s statement,

7 Values are derived from the National Hurricane Center archived ‘Wind Speed Probabilities’ for Isaias located at: https://www.nhc.noaa.gov/archive/2020/al09.

Exhibit CERRUTI-38 shows that the probability of tropical storm force winds was inconsistent

from one forecast to the next. In fact, there are several forecasts where the probability of tropical

storm force winds decreases from one forecast to the next. But, despite these decreases, I continued to predict consistently increasing confidence in the slightly further west track. I used

my experience with such National Hurricane Center products and experience making forecasts

for low-confidence scenarios to effectively smooth out noise in the latest predictions, which

prevented an inter-day forecast update from showing suddenly lower chances of tropical storm

force winds, similar to the National Hurricane Center’s products. Contrary to the Order’s

implications, the probability of tropical storm force winds does not exceed 50 percent until 5:00

a.m. on August 3 for JFK Airport and Central Park, and 11:00 a.m. on August 3 for

Poughkeepsie. In contrast, my weather forecast included the likelihood of tropical storm force

winds for New York City and southern Westchester County as early as the morning of August 2,

roughly the same time as the National Weather Service, according to the data shown in

CERRUTI-34 and CERRUTI-35.
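The smoothing described above can be illustrated with a hypothetical sequence of tropical-storm-force wind probabilities (in percent) showing forecast-to-forecast noise. The trailing running mean below is one simple, assumed stand-in for smoothing; the affidavit describes judgment-based smoothing, not this formula:

```python
# Hypothetical probability series with the up-and-down noise described above.
probs = [6, 12, 27, 10, 33, 24, 41, 55]

def running_mean(series, window=3):
    """Trailing running mean; early entries average however many values exist."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

smoothed = running_mean(probs)
# The smoothed series drops far less sharply between updates than the raw one.
print(smoothed)
```

The point of the sketch is that a smoothed series never shows the sudden inter-update drops visible in the raw probabilities, which is the behavior the testimony describes.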

67. The Order (pp. 15-17) alleges that my forecasts were unreasonable, and the

Department’s Report (pp. 8-10) implies that my forecasts were materially off compared to the

National Weather Service forecasts. Both the Order and Report are incorrect. As I have

explained, I consulted the National Hurricane Center’s track forecasts and National Weather

Service’s weather forecasts in developing my own forecasts, and my forecasts closely mirrored

theirs. Moreover, as I have explained, my sustained wind and gust forecasts were generally more

accurate than the National Weather Service over the period in question.

68. With respect to wind, the Report (at 8) states that on August 1 the National

Weather Service “predicted a 30 percent chance of sustained Tropical Storm Force winds (40-50

miles per hour) along the coast and a 10 to 20 percent chance of sustained Tropical Storm Force

winds (39-55 miles per hour) across the interior.” This statement is misleading. As shown in

Exhibit CERRUTI-38, the National Hurricane Center issued multiple forecasts. The forecast that

I had consulted when I issued my forecast at or around 9:30 a.m. stated that the probability of

tropical storm force winds was 6-10 percent across New York City and only 4 percent for

interior areas. My forecast that tropical storm force winds were not likely was consistent with

this forecast. Later that day, the National Hurricane Center increased its probability forecast to

12-19 percent across New York City and 9 percent inland. Then again, at around 5:00 p.m., the

National Hurricane Center increased its probability forecast to 27-33 percent for New York City

and 24 percent inland. By 11:00 p.m., however, the National Hurricane Center decreased its

probability forecast to 10-14 percent for New York City and 8 percent for inland areas. Thus, the Report’s allegation is based on cherry-picking one forecast from a single moment in time on that day and does not tell an accurate story. Moreover, the up-and-down movement in the National

Hurricane Center’s probability forecasts demonstrates why it would be inadvisable for me to update my forecasts throughout the day, and why I instead look at the larger picture. Here, I note that for the day, the National Hurricane Center’s average probability forecast was 17 percent for New

York City and 11 percent for inland areas, which is consistent with the forecast I issued for that day.

69. The Report (at 9) alleges that Con Edison’s rainfall forecast was lower than the National Weather Service’s on August 1. Exhibit CERRUTI-39 shows my precipitation forecasts issued from July 31 to August 4. It also includes the actual rainfall measured on August 4 and the Bias of the forecasts issued from August 2 to August 4. The date range for the Bias calculation is selected to be August 2 to August 4 because the point forecast matrices issued by the National

Weather Service are non-deterministic beyond the first 72 hours of each forecast (a probabilistic

forecast is issued instead) and, thus, were not available until August 2 for the arrival of Isaias on

August 4. Bias is a typical forecast verification metric that measures, on average, how far above or below the actual value the forecast falls. My forecast from August 1 called for 1-4 inches of rain across Con Edison’s and O&R’s service territories.
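The Bias metric described above can be sketched as follows, using hypothetical rainfall values rather than the data in Exhibit CERRUTI-39; a Bias greater than 0 means the forecasts ran higher, on average, than the observed rainfall:

```python
def bias(forecasts, actuals):
    """Mean of (forecast - actual) over matched forecast/observation pairs."""
    return sum(f - a for f, a in zip(forecasts, actuals)) / len(forecasts)

forecast_inches = [2.5, 2.5, 2.5]   # hypothetical forecasts issued Aug 2-4
observed_inches = [1.8, 1.8, 1.8]   # hypothetical observed rainfall
print(round(bias(forecast_inches, observed_inches), 2))  # 0.7
```

A positive result like this corresponds to the exhibit’s finding that the rainfall forecasts verified above the measured amounts.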

70. Exhibit CERRUTI-40 shows the National Weather Service’s point forecast matrices issued before 9:30 a.m. each day from July 31 to August 4. The point forecast matrices do not retain quantitative precipitation forecasts beyond three days of lead time but do provide a probability of precipitation for forecasts issued from July 31 to August 1. As Exhibits CERRUTI-

39 and CERRUTI-40 show, my precipitation forecasts are consistently higher than those from the National Weather Service over the period in question. The rainfall forecasts I made were higher than actual rainfall measurements for all regions, as shown by a Bias value greater than 0.

I did not create a quantitative precipitation forecast for July 31, but I did predict “periods of rain with embedded thunderstorms,” language that conveys a very high chance of precipitation. The

National Weather Service’s probability of precipitation was predicted to be 50 percent each day for all areas for July 31 and August 1, which strongly implies a lower precipitation forecast compared to the precipitation forecasts I issued on those dates.

71. The Order (p. 17) further alleges that I “disregarded numerous models that

conflicted with [my] opinion” when I developed my forecasts and appears to be referring to the

UCAR maps cited in footnote 23. Contrary to the Order’s allegation, in developing my forecasts

for Isaias I reviewed the models the Order appears to refer to and gave them little individual

weight based on their poor historical performance, my knowledge of their strengths and

weaknesses, and their performance for Isaias up to that point.

72. The UCAR maps the Order refers to aggregate multiple weather models. There

are “Early” track forecasts, which plot tracks for up to 14 different models, of which three are “consensus” models that use an average of the individual members, and “Late” forecasts,

which plot tracks for up to 16 different models. I note that the Order ignores that the UCAR

models, along with all other models, were changing day-to-day and run-to-run and did not

display a high degree of confidence at any time that a last-minute westward shift as did occur

would occur.

73. Exhibit CERRUTI-41 shows the percent of individual Isaias track forecasts that predicted a track close to or west of the New Jersey and Pennsylvania border, where Isaias ultimately tracked. Overall, very few individual members tracked Isaias far enough west compared to Isaias’s actual track. It was not until August 4, the day the storm struck the service territories, and too late to substantially augment staffing plans, that any consensus model (only one of three) predicted Isaias’s further west track. The trend in these models day-by-day is

relatively flat and is consistently well below 50 percent.

74. Moreover, the models the Order cites never predicted a westerly track with a high degree of confidence and are generally known to be less accurate than the European model on average. For example, the Canadian model, which the Order presumably cites, typically has the worst verification among global models and is far behind the European model.

Another model, the HWRF, is useful for predicting the track and intensity of tropical cyclones

while undergoing consistent or rapid intensification. Isaias was not predicted to rapidly intensify

as it tracked up the East Coast, so it was appropriate to give the HWRF little weight. In other

words, while a small minority of individual track forecast models at times predicted a track

further west than the National Hurricane Center’s final track forecast, the members in question

31 generally historically verified with larger track errors than the European model and the National

Hurricane Center track forecasts.

75. I gave more weight to the European model and its ensembles because the European model is a global weather model with an excellent track record for predicting tropical cyclone tracks and the general weather pattern. In my experience, and this is supported by numerous studies, it typically performs best in cases where tropical cyclones interact with other weather features, as with Superstorm Sandy. For example, Exhibit CERRUTI-42 shows the performance of many numerical weather prediction models for the 2019 Atlantic hurricane season as compared to a baseline statistical model, the "CLIPER5" model. The forecast skill of each model is measured as its improvement over a background "climatology" forecast, a traditional standard of comparison for forecast skill. In this context, a "climatology" forecast ignores the real-time weather features steering a tropical cyclone and instead relies on historical track information from cyclones of a similar intensity in a similar location during the same time of year. Exhibit CERRUTI-42 shows that the European model's track forecasts are better than all available forecasts, including the National Hurricane Center's, through the first 48-72 hours. It also shows that the only track forecast at any lead time to outperform the European model is the National Hurricane Center's track forecast. From 96 hours and beyond, the National Hurricane Center's forecasts are the best, with the European track forecasts a close second. Therefore, my decision to lean more heavily on the European model forecasts and the National Hurricane Center track forecasts when developing my own tropical cyclone track forecasts is fully justified and consistent with the National Hurricane Center's own preferences.

76. During my deposition in this case, the examining attorney showed me the UCAR maps referred to in footnote 23 (or similar maps), and I gave essentially the same explanation about their weaknesses and outlier status and why I gave greater weight to the European and consensus models. However, both the Order and the Report ignore my explanation.

77. The Order (p. 17) further notes that during the storm one of Con Edison's two meteorologist positions was vacant and suggests that the lack of a second meteorologist caused Con Edison's forecasts to be unreasonable. I disagree. First, I am an experienced meteorologist capable of developing a forecast without assistance from another meteorologist. For example, weeks before Tropical Storm Isaias, I forecast Tropical Storm Fay, and the storm matched my forecast. I also have years of experience supervising other forecasters as Head Forecaster of the Rutgers University and PSE&G Undergraduate Forecasting Program. Second, as I have explained, my track and weather forecasts were reasonable and closely aligned with those of the National Hurricane Center and the National Weather Service. Third, even when two meteorologists are employed by the Company, there have been instances where one meteorologist is unavailable for forecasting due to illness, vacation, or other reasons. My understanding is that multiple meteorologists are required in the long run to fulfill all the Company's meteorological needs. However, a single meteorologist can, and in the past has, fulfilled all the Company's needs over a short duration, such as for individual weather events.

78. The Order also quotes an email, Exhibit CERRUTI-43, from the day of the storm in which I told SUNY Albany meteorologist Nick Bassill that I thought the National Hurricane Center was "really fluffing th[e] storm up" and uses it to imply that my forecasts were unreasonable. However, the Order takes the statement out of context and creates a misleading impression. I mentioned to Dr. Bassill that I found the National Hurricane Center's current intensity estimate of 60 knots to be odd and higher than the surrounding information would otherwise suggest. With the cyclone center fully over land, which meant multiple surface weather stations could sample Isaias's winds, and with such a small and compact system, one would expect to find sustained winds at or close to 60 knots around the cyclone's center. However, I noted winds much lower than that both near Isaias's center and along the Delaware and Maryland coast, where a line of damaging thunderstorms was beginning to develop. Dr. Bassill agreed with me in his response and indicated confusion about why the National Hurricane Center measures peak winds over the sea instead of over land in such circumstances.

79. Moreover, the Order ignores that I adjusted my August 4 forecast to reflect the National Hurricane Center's update referenced in the email. I assessed the latest numerical weather prediction model forecasts for the weather conditions that would result from Isaias passing slightly further west and causing a powerful line of thunderstorms to develop and cross the area around noon on August 4. As a result of my analysis, I increased both the wind gust forecast and the impact prediction to account for the latest local weather forecast information on August 4. In addition, the comment at issue could not have negatively affected Con Edison's pre-storm staffing because it came on the morning of the storm, and because the forecast I issued that morning showed increased winds and higher overhead impact relative to my August 3 forecast. If anything, the higher winds and higher impact prediction would have resulted in extra crewing being requested, which, to the best of my knowledge, eventually occurred.

80. This concludes my affidavit.
