Advanced Weather Monitoring for a Cable-Stayed Bridge

Chandrasekar Venkatesh

June 11, 2018

Bachelor's in Electrical and Electronics Engineering

Ph.D. in Electrical Engineering

Department of Electrical Engineering and Computer Science,

College of Engineering and Applied Science

Committee Chair: Dr. Arthur Helmicki

Committee Members:

Dr. Victor Hunt

Dr. Douglas Nims

Dr. Ali Minai

Dr. William Wee

Abstract

In the northern United States, Canada, and many northern European countries, snow and ice pose serious hazards to motorists. Potential traffic disruptions caused by ice and snow are challenges faced by transportation agencies. Successful winter maintenance involves the selection and application of the optimal strategy over optimal time intervals. The risk associated with operating bridges during winter emergencies varies depending on the size of the structure, the material of the stays, the volume of average daily traffic, the geographical location, and the nature of the terrain and surroundings.

The ‘Dashboard’, a monitoring system designed to help bridge maintenance and operations personnel, was developed at the University of Cincinnati Infrastructure Institute. It was implemented at the Veterans Glass City Skyway (VGCS) Bridge in Toledo, Ohio. The system was also extended to the Port Mann Bridge in Vancouver, Canada.

The aim of this research is to develop an advanced monitoring system that will help the bridge management team take control actions during winter emergencies on the VGCS and Port Mann bridges. The current monitoring system gives information on the status of ice accumulation/snow accretion or shedding based on the last hour's weather data. This dissertation focuses on adding intelligence to the existing system by adding sensors, identifying patterns in events, adding cost-benefit analysis, and incorporating forecast parameters, while also extending the system to other bridges and structures.

In essence, a new, more intelligent monitor, designed to make control decisions easier and to gather all the information necessary for such decisions in one place, will be invaluable to officials in transportation departments.


Acknowledgements

I am grateful to my parents and my sister for their patience, encouragement and infinite support all my life. I would like to express my sincere gratitude to Dr. Arthur Helmicki and Dr. Victor Hunt at the University of Cincinnati Infrastructure Institute, who have been great mentors during this rewarding journey. Their continuous feedback, enthusiasm, and encouragement were very significant for the completion of this project. I would also like to thank Dr. Douglas Nims, Dr. Ali Minai and Dr. William Wee for their thoughtful comments and discussions to improve the quality of my dissertation.

I am also thankful to Dr. Mahdi Norouzi, Mr. Biswarup Deb, Mr. Nithyakumaran Gnanasekaran and Ms. Monisha Baskaran at the University of Cincinnati and Dr. Ahmed Abdelaal of the University of Toledo for their collaboration and assistance throughout the project. This project would have never been possible without the funding and support from the Ohio Department of Transportation (ODOT) and the British Columbia Ministry of Transportation (BCMOT). I am also grateful to all my roommates, lab mates and friends who supported me during my time at Cincinnati.

Lastly, I thank the College of Engineering and Applied Science at the University of Cincinnati, my Graduate Program Coordinator Ms. Julie Muenchen, and the Department of Electrical Engineering and Computer Science for all the facilities and scholarships.


Table of Contents

Abstract ...... ii

Acknowledgements ...... iv

1. Introduction...... 1

2. Literature review and Weather monitoring system ...... 5

2.1. Data sources ...... 5

2.2. Ice accumulation/Snow accretion and shedding conditions ...... 7

2.2.1. Ice accumulation and shedding ...... 7

2.2.2. Snow accretion and shedding ...... 8

2.3. Dashboard algorithm ...... 9

2.4. Dashboard performance ...... 12

2.5. Weather forecast data ...... 13

3. Case study 1: VGCS bridge ...... 15

3.1. Bridge introduction...... 15

3.2. Dashboard details, Past events analysis and performance ...... 17

3.2.1. Introduction ...... 18

3.2.2. Sensor suite installation ...... 20

3.2.3. State transitions ...... 21

3.2.4. Dashboard performance ...... 23


3.3. Improvements to the algorithm ...... 31

3.4. Accumulation and shedding patterns in significant events from the past ...... 36

3.4.1. Revisiting past events ...... 36

3.4.2. Assumptions and limitations ...... 45

3.4.3. Types of ice accumulation ...... 46

3.4.4. Types of ice shedding...... 48

3.4.5. Relationship with lane closure ...... 50

3.5. Lane closure and associated costs ...... 54

3.6. Probability of accidents and associated costs ...... 56

3.7. Conclusion ...... 58

4. Case Study 2: Port Mann bridge ...... 60

4.1. Bridge Introduction ...... 60

4.2. Dashboard details, past events analysis and performance ...... 64

4.2.1. Introduction ...... 64

4.2.2. Source stations and weights...... 65

4.2.3. State transitions ...... 71

4.2.4. Past events and Dashboard performance ...... 73

4.3. Snowpack processing ...... 76

4.4. Snow thickness on stays - Ts calculations ...... 86

4.5. Using forecast data sources in dashboard system ...... 91


4.5.1. Data sources and models ...... 91

4.5.2. Dashboard rules and thresholds ...... 103

4.5.3. Analysis and results ...... 105

4.6. Forecast dashboard system ...... 117

4.6.1. Forecast bar charts ...... 117

4.6.2. Event summary charts ...... 123

4.6.3. New dashboard components ...... 126

4.7. Conclusion ...... 128

5. Conclusion and Future Work ...... 130

6. References ...... 132

Appendix A: classification ...... 136

Appendix B: VGCS significant events charts ...... 141

Appendix C: Port Mann 2016-17 event summaries ...... 153


List of Figures

Figure 1- Cable stayed bridges in North America [3] ...... 2

Figure 2 - Algorithm concept ...... 11

Figure 3 – Veterans Glass City Skyway Bridge [5] ...... 16

Figure 4 – Ice accumulation on the VGCS stays [14] ...... 16

Figure 5 – Dashboard website components [5] ...... 18

Figure 6 – Website History section ...... 19

Figure 7 – Website Algorithm parameters section [5] ...... 20

Figure 8 – Custom sensor suite installed at VGCS [9] ...... 20

Figure 9 - Accumulation cycle state transitions...... 22

Figure 10 - Shedding cycle state transitions ...... 22

Figure 11 - Ice thickness and Leaf wetness on Dec 9, 2013 ...... 26

Figure 12 - Ice thickness on Jan 22, 2015 ...... 27

Figure 13 - Temperature and solar radiation on Jan 22, 2015 ...... 28

Figure 14 - Stay temperatures on Jan 22, 2015 ...... 29

Figure 15 - Piece of ice from Jan 23, 2015 [22]...... 30

Figure 16 – Comparison of Ice and derived variable ‘Cumulative Ice’...... 32

Figure 17 – Station weights section ...... 33

Figure 18 – Email lists section...... 34

Figure 19 – Alarm thresholds section ...... 35

Figure 20 – Transition times section ...... 36

Figure 21 - Ice accumulation on stays [25] ...... 38


Figure 22 - Ice shedding from the stays [26] ...... 40

Figure 23 - Piece of ice from the stays[22] ...... 44

Figure 24 - Stay temperatures and Solar radiation on Jan 22, 2015 ...... 47

Figure 25 – Mar 3, 2015 event summary with accumulation and shedding types ...... 51

Figure 26 – Jan 22, 2015 event summary with accumulation and shedding types ...... 52

Figure 27 – Dashboard system concept...... 58

Figure 28 – New improved dashboard system concept ...... 59

Figure 29 – Port Mann Bridge [32] ...... 61

Figure 30 – Snow falling from Port Mann stays [35] ...... 62

Figure 31 – Cracked windshield of a vehicle on Port Mann Bridge [36] ...... 63

Figure 32 – Port Mann Dashboard dials ...... 64

Figure 33 – Port Mann monitor locations ...... 68

Figure 34 – Port Mann Dashboard Accretion state transitions ...... 72

Figure 35 - Port Mann Dashboard Shedding state transitions ...... 72

Figure 36 – Snowpack processing results for events in December 2013 ...... 79

Figure 37 – Snowpack processing results for events in Feb-Mar 2014 ...... 80

Figure 38 – Alert comparisons for different Cut-off frequencies (Nov 10, 2013 to Jan 23, 2014) ...... 81

Figure 39 - Alert comparisons for different Cut-off frequencies (Feb 10, 2014 to Apr 31, 2014) ...... 81

Figure 40 – Snowpack processing results for winter 2016-17 ...... 85

Figure 41 – Snow density chart for Dec 5, 2016 event ...... 86

Figure 42 – TS values comparison for Feb 22-25, 2014 event ...... 90


Figure 43 – Additional locations on Pacific coast for forecast analysis ...... 97

Figure 44 - Forecast comparison for temperature at Jan 29, 2018 23:00 for the bridge location.. 98

Figure 45 - Forecast comparison for temperature at Jan 29, 2018 15:00 for the bridge location 100

Figure 46 - Forecast comparison for temperature at Jan 30, 2018 05:00 for the bridge location 101

Figure 47 - Forecast comparison between South Tower and Midspan locations ...... 102

Figure 48 – Forecast data analysis for Dec 19-20, 2017 event...... 106

Figure 49 - Forecast data analysis for Dec 19-20, 2017 event ...... 107

Figure 50 – Events and false alerts during winter 2017-18 in DICast model forecast data processing ...... 109

Figure 51 - Events and false alerts during winter 2017-18 in Dark Sky Weathernet’s forecast data processing ...... 113

Figure 52 - Events and false alerts during winter 2017-18 in Wunderground’s forecast data processing ...... 114

Figure 53 – Summary of forecast analysis at additional locations on Pacific coast ...... 116

Figure 54 - Forecast data analysis for Dec 19-20, 2017 event ...... 118

Figure 55 – Forecast persistence for Dec 19-20, 2017 event (6 hour windows) ...... 119

Figure 56 - Forecast data analysis persistence calculations for Dec 19-20, 2017 event ...... 120

Figure 57 - Forecast persistence for Dec 19-20, 2017 event (hourly) ...... 121

Figure 58 - Forecast persistence with estimated TS for Dec 19-20, 2017 event (hourly) ...... 122

Figure 59 – Change in Average estimated TS for Dec 19-20, 2017 event (hourly) ...... 123

Figure 60 – Feb 17-18, 2018 event summary ...... 124

Figure 61 – New proposed dashboard layout ...... 127

Figure 62 – March 3, 2015 event summary ...... 142


Figure 63 – Jan 21-24, 2015 event summary...... 143

Figure 64 – Jan 3 2015, event summary ...... 143

Figure 65 – Apr 3, 2014 event summary ...... 144

Figure 66 – Dec 3, 2013 event summary...... 144

Figure 67 – Mar 25, 2013 event summary ...... 145

Figure 68 – Mar 16, 2013 event summary ...... 145

Figure 69 – Feb 26, 2013 event summary ...... 146

Figure 70 – Feb 20-24, 2013 event summary ...... 146

Figure 71 – Jan 3-21, 2009 event summary ...... 147

Figure 72 – Dec 9-10, 2007 event summary ...... 147

Figure 73 – Dec 10-11, 2007 event summary ...... 148

Figure 74 – Mar 27, 2008 event summary ...... 149

Figure 75 – Mar 28, 2008 event summary ...... 150

Figure 76 – Dec 16, 2008 event summary ...... 150

Figure 77 – Dec 19, 2008 event summary ...... 151

Figure 78 - Dec 23-24, 2008 event summary ...... 151

Figure 79 – Winter 2016-17 events summary chart at Port Mann bridge ...... 153


List of tables

Table 1 - Data sources sampling times ...... 10

Table 2 - Accumulation and shedding types and their correlation with lane closure ...... 54

Table 3 - Lane closure costs ...... 55

Table 4 – List of weather stations for Port Mann Dashboard monitor ...... 67

Table 5 – Port Mann monitor weather station weights ...... 70

Table 6 – Port Mann summary of events ...... 74

Table 7 – Spatio-temporal analysis using airports ...... 75

Table 8 – Alert summaries for different Cut-off frequencies ...... 84

Table 9 – Weather forecast data sources ...... 95

Table 10 – Summary of DICast model forecast data processing ...... 112

Table 11 – Airport and LOCW conditions and events categorization ...... 137

Table 12 – PWS classifier conditions categorization using WMO codes ...... 138

Table 13 – VGCS RWIS stations Precipitation code classification (post Nov 2015) ...... 139

Table 14 - VGCS RWIS stations Precipitation intensity classification (post Nov 2015) ...... 139

Table 15 - VGCS RWIS staitons Precipitation categorization (after Nov 2015) ...... 140

Table 16 – VGCS significant events list ...... 142

Table 17 – VGCS all significant events Accumulation and Shedding types ...... 152

Table 18 - Winter 2016-17 events summary at Port Mann bridge ...... 155


1. Introduction

In the northern United States, Canada, and many northern European countries, snow and ice pose serious hazards to motorists. Potential traffic disruptions caused by ice and snow are challenges faced by transportation agencies[1]. Until now, snow and ice control has generally been reactive in nature. However, this has become increasingly costly as the traveling public's expectations for road conditions have increased. Successful winter maintenance involves the selection and application of the optimal strategy over optimal time intervals[2]. Figure 1 below shows all known cable-stayed bridges in North America as of 2012. The region in blue marks the bridges that are affected by severe winter weather between the months of November and March. The risk associated with operating these bridges during winter emergencies varies depending on the size of the structure, the material of the stays, the volume of average daily traffic, the geographical location, and the nature of the terrain and surroundings.


Figure 1 - Known cable stayed bridges located in the mainland United States and lower tier of Canada (legend: Open to Traffic, Under Construction, Proposed), overlaid onto the damaging ice storm footprint map (1946-2014) (Weeks, 2014) [3]

The Veterans Glass City Skyway (VGCS) Bridge is located in Toledo, the fourth most populated city in Ohio. The bridge crosses the Maumee river and carries three lanes of traffic on each side. The VGCS Bridge, put into service in 2007, is a cable-stayed bridge with a single pylon, and its stays are covered with stainless steel sheathing, which offers aesthetic and life cycle cost advantages over other materials. However, during winter there are issues of ice forming on top of these stainless-steel cables and eventually falling off onto traffic, which presents safety issues for the motorists traveling below. In such situations, the Ohio Department of Transportation (ODOT) closed down lanes as and when required until adverse conditions passed. Due to the complexity of the ice accumulation/shedding phenomena, determining control actions during such occurrences is extremely difficult.

In order to aid ODOT in their preparation and response to these icing events, an automatic detection and monitoring web-based system, the ‘Dashboard’, was implemented in January of 2011[4]. Since its inception the dashboard has undergone several changes and modifications based on the addition of data sources and sensors and the learning over the last five years. This system utilizes existing weather station measurement data to monitor the most recent weather conditions and detect possible events, alerting officials and providing tools to document and further assess the situation[5].

A similar system was designed for the Port Mann bridge in the Metro Vancouver region of Canada to aid the operations and management team of the British Columbia Ministry of Transportation. The nature of weather events at this bridge is different from that at the VGCS, with snow accretion and shedding being the major threat as opposed to icing. The concept of a ‘Dashboard’ monitor was extended to this bridge as well, but as more complexities in the system are discovered, the need to increase the intelligence of the system grows.

The aim of this research is to develop an advanced monitoring system that will help the bridge management team take control actions during winter emergencies on the VGCS and Port Mann bridges. The current monitoring system gives information on the status of ice accumulation/snow accretion or shedding based on the last hour's weather data. The intelligence of the monitoring system is improved by considering events as a whole to identify types of ice accumulation and the possible time and type of shedding with regard to the VGCS. The new monitor could also give suggestions on lane closure decisions and the costs associated with such control actions. The intelligence of the monitor can also be increased by incorporating forecast data and alerting officials ahead of time to help them prepare for potential events. In essence, a new, more intelligent monitor, designed to make control decisions easier and to gather all the information necessary for such decisions in one place, will be invaluable to officials in the transportation departments.

Commonly used terms:

 Dashboard team – research team at the University of Toledo and the University of Cincinnati who have collaborated on both of the case studies in this thesis

 Events – winter weather related emergencies on the bridge, like ice accumulation or snow accretion on the stays

 Control actions – control measures taken during events, like closing lanes on the road, dropping collars, diverting traffic, etc.

 Operations and management team, officials – team of bridge operations technicians and managers who make decisions about control actions

 Alarm or Alert – email or text alert sent by the dashboard system to all users indicating a possible emergency situation

 Temperature – unless specified otherwise, it refers to air temperature

 Classifier – sensor that distinguishes the type of precipitation

 SOS – Snow On Stays, indicates the presence of snow on the stays in the Port Mann dashboard monitor

 DICast – Dynamically Integrated forecast model


2. Literature review and Weather monitoring system

The primary objective of developing the ‘dashboard’ weather monitoring system was to leverage existing weather data from sources available on the web and to use the sensors installed on the bridge to develop a virtual instrument. This virtual instrument allows weather researchers, infrastructure researchers, and transportation personnel to monitor potential snow/icing events. The dashboard analyzes the most recent data to check for conditions that are favorable for the accumulation of ice/snow and alerts officials if those conditions are met. If conditions are met for a continuous period of time, the dashboard prompts the users to make a visual inspection of the bridge and confirm the presence of ice/snow or the lack of it. If presence is confirmed, the algorithm then checks for conditions favorable for the ice/snow to come off and sends alerts when they are met.

2.1. Data sources

To make the dashboard robust and reliable, data from multiple sources in and around the bridge are collected. There are primarily four data sources used by the dashboard algorithm: METAR data from airports close to the bridge; local weather stations, also known as CWOP (Citizen Weather Observer Program) stations; RWIS (Road Weather Information System) stations; and weather sensor suites or stations installed on or close to the bridge by the research team to capture the microclimate.


METAR is a format for reporting weather information[6]. A METAR weather report is predominantly used by pilots as part of a pre-flight weather briefing, and by meteorologists, who use aggregated METAR information to assist in weather forecasting. METAR reports typically come from airports or permanent weather observation stations. Reports are generated once an hour, but if conditions change significantly, a report known as a SPECI may be issued several times in an hour. A typical METAR contains data for the temperature, dew point, wind speed and direction, cloud cover and heights, visibility, barometric pressure, precipitation amount, lightning, and other information.
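As a concrete illustration of how such a report might be reduced to the few fields the dashboard uses, the Python sketch below extracts the temperature/dew-point group and a handful of present-weather codes from a sample METAR string. The station identifier, sample report, and helper function are hypothetical, and only the groups needed here are handled, not the full METAR specification.

import re

# Present-weather codes of interest for icing (a small subset of the METAR spec)
WX_OF_INTEREST = {"RA", "SN", "DZ", "PL", "FZRA", "FZDZ", "FZFG", "FG", "BR"}

def parse_metar_basics(report):
    """Pull air temperature, dew point (deg C) and present-weather codes
    out of a raw METAR string. Only the simple cases used here are parsed."""
    temp_c = dewpt_c = None
    # Temperature/dew point group, e.g. "M02/M04" (an "M" prefix means minus)
    m = re.search(r"\b(M?\d{2})/(M?\d{2})\b", report)
    if m:
        to_c = lambda s: -int(s[1:]) if s.startswith("M") else int(s)
        temp_c, dewpt_c = to_c(m.group(1)), to_c(m.group(2))
    # Present-weather tokens, e.g. "-FZRA" (light freezing rain), "SN", "FG"
    weather = []
    for token in report.split():
        if token.lstrip("+-") in WX_OF_INTEREST:
            weather.append(token)
    return {"temp_c": temp_c, "dewpt_c": dewpt_c, "weather": weather}

# Hypothetical report: light freezing rain and fog at -2 deg C
sample = "KTOL 091552Z 24010KT 2SM -FZRA FG OVC008 M02/M04 A2992"
print(parse_metar_basics(sample))
# -> {'temp_c': -2, 'dewpt_c': -4, 'weather': ['-FZRA', 'FG']}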

Local weather (LOCW) stations are stations typically installed by individuals on their private property for their own needs. They share their weather information with public services to make the data available to a wider group of people who would like to use it. These stations are not equipped with the standard range of equipment found at METAR stations. Most local weather stations have a subset of the sensors of a METAR station, and they report on an hourly or more frequent basis. The most dependable and useful stations close to the bridge are chosen for the purposes of the dashboard.

RWIS is a combination of technologies that uses historic and current climatological data to develop road and weather information to aid in roadway-related decision making. A typical RWIS record contains the air temperature, dew point temperature, surface temperature and relative humidity. Some RWIS stations report wind speed and direction and precipitation type. The primary purpose of the RWIS stations is for the department of transportation to assess the condition of the roads and monitor their viability in adverse weather conditions.


Apart from these, there are custom-designed weather sensors installed on the bridge (or very close to it) to capture the bridge microclimate. These stations are designed based on the type of snow/icing problems, the feasibility of installing sensors, the available infrastructure and budget constraints.

2.2. Ice accumulation/Snow accretion and shedding conditions

The conditions for snow/ice accumulation and shedding are necessary to design the dashboard system. These rules are based on the values reported by the sensors and capture the conditions occurring at the monitored locations, in order to determine whether they have the potential to cause ice or snow accumulation or shedding. They are discussed in detail in the next two sections.

2.2.1. Ice accumulation and shedding

Based on extensive analysis of the past icing events’ weather conditions[7], it was determined that certain weather properties were common and persistent during each of the ice events between 2007 and 2009. These properties led to the development of the first three criteria to use when evaluating for icing event conditions[8]. The last three rules were added after the sensors were exhaustively tested and then installed on the bridge[9].

Criteria that would likely cause Ice Accumulation:

1. Precipitation with air temperature at the bridge below 32° F.

2. Fog with air temperature at the bridge below 32° F.


3. Snow with air temperature at the bridge above 32° F.
4. Wet surface and stay temperature below 32° F.
5. Precipitation recorded by the bucket and stay temperature below 32° F.
6. Ice recorded by the Ice detector.

Similar to the ice accumulation criteria, close observation of the past icing events and testing of the sensors installed at the bridge led to criteria for ice fall, which are listed below.

Criteria that would likely cause Ice Fall:

1. Air temperature above 32° F (warm air).
2. Clear sky during daylight (solar radiation).
3. Stay temperature above 32° F.
4. Solar radiation sensor reports sunshine.
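A minimal sketch of how these criteria might be encoded is shown below, assuming each record has already been reduced to simple named fields. The field names and helper functions are illustrative, not the dashboard's actual code, and criterion 2 follows the fog reconstruction given above.

FREEZING_F = 32.0

def ice_accumulation_met(rec):
    """True if any ice-accumulation criterion is met for one record.
    'rec' is a dict of illustrative field names; missing sensors are None."""
    air = rec.get("air_temp_f")
    stay = rec.get("stay_temp_f")
    return any([
        rec.get("precip", False) and air is not None and air < FREEZING_F,             # 1
        rec.get("fog", False) and air is not None and air < FREEZING_F,                # 2
        rec.get("snow", False) and air is not None and air > FREEZING_F,               # 3
        rec.get("surface_wet", False) and stay is not None and stay < FREEZING_F,      # 4
        rec.get("bucket_precip_in", 0) > 0 and stay is not None and stay < FREEZING_F, # 5
        rec.get("ice_detector_in", 0) > 0,                                             # 6
    ])

def ice_fall_met(rec):
    """True if any ice-fall (shedding) criterion is met for one record."""
    air = rec.get("air_temp_f")
    stay = rec.get("stay_temp_f")
    return any([
        air is not None and air > FREEZING_F,      # 1: warm air
        rec.get("clear_sky_daylight", False),      # 2: clear sky during daylight
        stay is not None and stay > FREEZING_F,    # 3: warm stay
        rec.get("sunshine", False),                # 4: sunshine sensor reports sunshine
    ])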

2.2.2. Snow accretion and shedding

The rules for snow accumulation and shedding are an extension of the rules for ice accumulation, with minor adjustments. These thresholds were chosen based on analysis of past events prior to 2013.

Criteria that would likely cause Snow Accretion:

1. Precipitation with temperature between -2.2 and +2.2 °C.
2. Snow with temperature between -2.2 and +2.2 °C.
3. Wet surface and temperature between -2.2 and +2.2 °C.
4. Precipitation (ratio > 0.1 or quantity > 0.5 mm/hr) recorded by the precipitation gauge and temperature between -2.2 and +2.2 °C.
5. Ice recorded by the Ice detector.

Criteria that would likely cause Snow Shedding:

1. Temperature above 1° C (warm air).
2. Clear sky during daylight (solar radiation).
3. Rain (washes off the snow).

These rules are used to determine the conditions at each station, which then feed into a voting scheme to set the overall status of conditions.
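The snow rules could be encoded in the same way as the ice rules sketched in Section 2.2.1; only the Celsius temperature window and the precipitation-gauge thresholds from the lists above change. The field names are again illustrative assumptions, not the dashboard's actual code.

def snow_accretion_met(rec):
    """True if any snow-accretion criterion is met for one record (temperatures in deg C)."""
    t = rec.get("air_temp_c")
    in_window = t is not None and -2.2 <= t <= 2.2
    gauge = rec.get("precip_ratio", 0) > 0.1 or rec.get("precip_mm_per_hr", 0) > 0.5
    return any([
        rec.get("precip", False) and in_window,       # 1
        rec.get("snow", False) and in_window,         # 2
        rec.get("surface_wet", False) and in_window,  # 3
        gauge and in_window,                          # 4
        rec.get("ice_detector_in", 0) > 0,            # 5
    ])

def snow_shedding_met(rec):
    """True if any snow-shedding criterion is met for one record."""
    t = rec.get("air_temp_c")
    return any([
        t is not None and t > 1.0,                    # 1: warm air
        rec.get("clear_sky_daylight", False),         # 2: clear sky during daylight
        rec.get("rain", False),                       # 3: rain washing off the snow
    ])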

2.3. Dashboard algorithm

Each of the weather stations has its own sampling rate, which is shown in Table 1 below. This has considerable significance for how often the algorithm is run. Since METAR data is important in both the accumulation and shedding determination and its update time is 1 hour, the algorithm cannot be run with less than a one-hour time difference, to avoid checking the same records on consecutive runs. Therefore, the dashboard algorithm is run at the top of each hour.


Source Update Time

RWIS 10 minutes

METAR 1 hour or less

LOCW 1 hour or less

Bridge Sensors 10-15 minutes

Table 1 - Data sources sampling times

Sensors in any environment can occasionally misread the actual measurement. Therefore, for all the weather stations (RWIS, METAR, LOCW and Bridge Sensors), all the records from the last hour are pre-processed, and a station is considered to have met the icing criteria for the last hour if a certain percentage of its records meet any (or a combination) of the ice accumulation criteria. For RWIS stations 80% of the data must meet the criteria, whereas for the airports, LOCW and bridge sensors 50% of the data is enough. The likelihood of accumulating or shedding conditions is checked and a station condition is determined for each of these stations. If conditions for accumulation or shedding are met, the station is assigned a value of 1; if not, it is assigned 0. Each station is assigned a weight based on its reliability, proximity and its ability to reflect the conditions on the bridge[8]. The station weights are multiplied by their respective station conditions, and if the sum of these products exceeds a set threshold, then conditions for accumulation or fall are said to be met.


Figure 2 - Algorithm concept

Below are the mathematical equations representing the dashboard algorithm.

(1)  $WL = \sum_{i=1}^{n} s_i \, w_i$

(2)  if $WL \geq TH$, icing is possible

Here,

$s_i$ = station condition (0 or 1); the value is 1 if a certain percentage of the station's records from the last hour meet the ice accumulation conditions, otherwise 0

$w_i$ = station weight

$n$ = number of stations

$WL$ = likelihood

$TH$ = threshold, 0.3 in this case
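Putting the pieces together, the sketch below shows one way the per-station vote and the weighted sum could be computed. The 80%/50% agreement fractions and the 0.3 threshold come from the text above, while the data structures and function names are assumptions made for illustration; for example, criterion_met could be the ice_accumulation_met helper sketched in Section 2.2.1.

def station_condition(records, criterion_met, is_rwis=False):
    """s_i: 1 if a large enough fraction of the station's last-hour records
    meet the criteria (80% for RWIS stations, 50% for the others), else 0."""
    if not records:
        return 0
    needed = 0.8 if is_rwis else 0.5
    hits = sum(1 for r in records if criterion_met(r))
    return 1 if hits / len(records) >= needed else 0

def weighted_likelihood(stations, criterion_met, threshold=0.3):
    """WL = sum_i s_i * w_i; conditions are considered met when WL >= TH.
    'stations' is a list of (last_hour_records, weight, is_rwis) tuples."""
    wl = sum(station_condition(recs, criterion_met, rwis) * weight
             for recs, weight, rwis in stations)
    return wl, wl >= threshold

# Example use: wl, icing_possible = weighted_likelihood(stations, ice_accumulation_met)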

The dashboard algorithm has a persistence check built into it. Ice accumulation is usually a slow process, typically taking about 8-10 hours. So, the accumulation cycle in the dashboard has three stages, and the algorithm steps from one stage to the next based on the persistence of conditions favorable for ice accumulation. When conditions have been met for over 8 hours, email alerts are sent to officials to check the stays for the presence of ice. Once the presence of ice is confirmed, the algorithm then starts to check for shedding conditions. Shedding occurs at a much faster rate than accumulation; therefore, the shedding cycle has three stages stretching over just 3 hours. If conditions for shedding are met for three consecutive hours, it is very likely that most or all of the ice has shed from the stays.

The snow accretion process, though, is faster; significant snow can accumulate in less than two hours. The algorithm steps from one stage to the next based on the persistence of conditions favorable for snow accretion. Once snow on stays (SOS) is confirmed to be true, the algorithm checks conditions for shedding and alerts on those. The time frames for snow accretion and shedding are discussed later in the case studies.

2.4. Dashboard performance

The purpose of the dashboard when it was designed was to alert officials when there is a potential for adverse conditions on the bridge. The dashboard has alerted either ahead of time or at the beginning of every significant event since its inception in 2011.


It has alerted on every significant event on the VGCS bridge in Toledo where ice accumulation is of concern. There were 11 significant events between 2011-2015 when the dashboard was in operation. The dashboard alerted at an average of 4.5 hours prior to the confirmation of ice on stays during these events. The initial alerts were generally triggered by the airports that are situated south of the bridge.

The dashboard has also alerted on 16 events between Nov 2013 - Mar 2015 and Nov 2017 - Mar 2018 on the Port Mann bridge when the monitor was in operation. During all these events the presence of snow on the bridge was confirmed visually. The average lead time at the Port Mann bridge before the snow events is 1.8 hours. This is a shorter time period than at the VGCS because of the nature of the weather events.

The performance of each monitoring system is discussed in detail in their respective case studies.

2.5. Weather forecast data

Approximately $4T of the US economy is weather sensitive[10], and severe weather causes almost $11.2B in damage and 524 fatalities per year[11]. When decision makers have the flexibility to mitigate the impacts of weather, forecasts can add substantial value in mitigating some of those costs.

The Road Weather Management Program of the Federal Highway Administration (FHWA) has documented best practices utilized by traffic managers, maintenance managers, and emergency managers in response to various weather threats while accounting for parameters like safety, mobility and productivity[12]. The traveler information section of their management practices document details methods to regulate roadways and inform the public. These practices require the use of weather forecast information for operations during evacuation traffic management. Weather forecasts are also used to create an interactive telephone information system to inform travelers of route-specific road conditions. Forecast data is also used to indicate possible flooding conditions using a web-based flood warning system.

Decision analysis has a role in integrating weather forecasts with business operations and planning models. A simple average of the storm tracks produced by multiple models, known as the consensus track, is used by tropical cyclone forecasters as a decision support tool. The purpose of the consensus track is to improve accuracy, not to measure uncertainty. The predictability of a given storm drives the divergence among multiple models. However, no probabilistic information is extracted from multiple forecast sources. There are no automated methods to weight models' forecasts according to indicators of their performance for a given storm, or for past storms[13].

The availability of weather forecast data from different models and sources, each with its own merits and demerits, promises substantial value for decision making for cable-stayed bridges in winter emergencies.


3. Case study 1: VGCS bridge

3.1. Bridge introduction

The Veterans Glass City Skyway (VGCS) Bridge is located in Toledo, the fourth most populated city in Ohio. The bridge crosses the Maumee river and carries three lanes of traffic on each side. The VGCS Bridge, put into service in 2007, is a cable-stayed bridge with a single pylon, and its stays are covered with stainless steel sheathing, which offers aesthetic and life cycle cost advantages over other materials. However, during winter there are issues of ice forming on top of these stainless-steel cables and eventually falling off onto traffic, which presents safety issues for the motorists traveling below. In such situations, the Ohio Department of Transportation (ODOT) closed down lanes as and when required until adverse conditions passed. The icing problem depends on weather data that is stochastic in nature. Due to the randomness of the ice accumulation/shedding processes, determining control actions during such occurrences is extremely difficult.[5]


Figure 3 – Veterans Glass City Skyway Bridge [5]

In order to aid ODOT in their preparation and response to these icing events, an automatic detection and monitoring web-based system, the ‘Dashboard’, was implemented in January of 2011. Since its inception the dashboard has undergone several changes and modifications based on the addition of data sources and sensors and the learning over the last five years. This system utilizes existing weather station measurement data to monitor the most recent weather conditions and detect possible events, alerting officials and providing tools to document and further assess the situation. The dashboard was designed to accelerate and enhance existing ODOT predictions and response protocols to icing events.

Figure 4 – Ice accumulation on the VGCS stays [14]


There are mainly two kinds of icing, namely precipitation icing and in-cloud icing. Both categories may cause severe damage to infrastructure such as bridges, power lines, and aircraft turbines[8]. Any scenario where atmospheric icing in severe weather conditions is intense enough to cause damage is worth taking into consideration. Four icing events occurred on the VGCS Bridge between Dec 2007 and Jan 2009 where ice fall either occurred or was considered close to occurring. Using the dates of these events provided by ODOT, weather data from nearby weather stations was analyzed to find common basic weather properties. This analysis, along with a similar analysis by Kathleen Jones at the U.S. Army Cold Regions Research and Engineering Laboratory (CRREL), found that the common property leading up to and during these icing events was rain, snow, or freezing rain, also accompanied by fog[15]. Jones' analysis also states that ice events are most likely when there is cold air below, warm air aloft, a high to hold the cold air in place, and precipitation of liquid water. Since icing on the stays was not monitored until ice was observed, it is uncertain exactly how often ice did not accumulate when conditions were favorable for an icing event. There have been several observed occasions in the past where weather conditions favorable for an ice event resulted in no ice accumulation[16]. The monitoring system coupled with visual observations has helped reduce these uncertainties and has provided historical data for analysis.

3.2. Dashboard details, Past events analysis and performance


3.2.1. Introduction

The dashboard for the VGCS was developed in January 2011. The system was operational until March 2016, after which it was transitioned to ODOT servers for continued usage.

The main panel of the dashboard contains the icing speedometer showing all the states, including {G, Y1, Y2, Y3, O, R1, R2, and R3}. The main panel also includes the reporting function for ODOT, which can be used to report icing status after visual inspection. The ticker on the main panel shows icing conditions over the last 48 hours. Figure 5 is a screen shot of the web site showing the dashboard main panel.

Figure 5 – Dashboard website components [5]


The dashboard includes an interactive map of the weather stations, where current sensor readings are shown and where historical readings can be plotted on a timeline. There are also cameras installed on the bridge that can be viewed from this interactive map.

The history tool on the website allows for more detailed information on the past evaluations, responses submitted, and any additional notes available. This section also provides the tool to check a short summary of the weather data between two dates.

Figure 6 – Website History section

All of the project documentation has been included and listed in this section for reference, along with descriptions of how the dashboard operates and how to use the web site.

The purpose of the Algorithm Parameters tab is to provide a platform to dynamically make changes without having to go through the algorithm code. The tab has options to add, delete and modify details on the fly.


Figure 7 – Website Algorithm parameters section [5]

3.2.2. Sensor suite installation

Figure 8 – Custom sensor suite installed at VGCS [9]

In summer 2013 a few sensors were installed on the bridge to capture the microclimate on the bridge and make the dashboard algorithm more reliable. The bridge sensors comprise Geokon Thermistors[17], an LWS-L Dielectric Leaf Wetness Sensor[18], a Goodrich Ice Detector[19], a MetOne Rain Tipping Gage[20] and the Sunshine BF5 sensor[21]. The thermistors were installed on the stays to monitor the stay temperatures, as it was observed that the stay temperatures could be different from the air temperature. The leaf wetness sensor gives information on the surface status of the sensor, i.e., whether the surface of the sensor is wet. The ice detector gives the thickness of ice accumulation when conditions for icing are favorable. The rain tipping gage gives the amount of precipitation; it also has a heater built in to measure precipitation during snow or freezing rain. The sunshine sensor gives values of global and diffuse radiation, from which the presence of significant sunshine can be determined.

3.2.3. State transitions

The dashboard algorithm has a persistence check built into it. Ice accumulation is usually a slow process, typically taking about 8-10 hours. So, the accumulation cycle in the dashboard has three stages, and the algorithm steps from one stage to the next based on the persistence of conditions favorable for ice accumulation. When conditions have been met for over 8 hours, email alerts are sent to ODOT officials to check the stays for the presence of ice.


Figure 9 - Accumulation cycle state transitions

Figure 10 - Shedding cycle state transitions

Once the presence of ice is confirmed, the algorithm then starts to check for shedding conditions. Shedding occurs at a much faster rate than accumulation; therefore, the shedding cycle has three stages stretching over just 3 hours. If conditions for shedding are met for three consecutive hours, it is very likely that most or all of the ice has shed from the stays.
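Read together, the two cycles behave like a small hourly state machine over the dashboard states {G, Y1, Y2, Y3, O, R1, R2, R3} introduced in Section 3.2.1. The sketch below is one plausible reading of those transitions; the fall-back behavior when conditions are lost and the handling of visual confirmations are assumptions made for illustration, not the dashboard's exact transition rules.

ACCUM_STATES = ["G", "Y1", "Y2", "Y3"]   # persistence of accumulation conditions
SHED_STATES = ["O", "R1", "R2", "R3"]    # alert state, then persistence of shedding conditions

def next_state(state, accum_met, shed_met, ice_confirmed, stays_clear):
    """One hourly update of the dashboard state (simplified sketch).

    accum_met / shed_met : were accumulation / shedding conditions met this hour
    ice_confirmed        : visual inspection confirmed ice on the stays
    stays_clear          : visual inspection confirmed the stays are clear
    """
    if state in ACCUM_STATES:
        if ice_confirmed:
            return "O"                                  # move to the alert state
        if accum_met:
            i = ACCUM_STATES.index(state)
            return ACCUM_STATES[min(i + 1, len(ACCUM_STATES) - 1)]
        return "G"                                      # conditions lost: back to clear
    else:
        if stays_clear:
            return "G"                                  # confirmed clear: cycle ends
        if shed_met:
            i = SHED_STATES.index(state)
            return SHED_STATES[min(i + 1, len(SHED_STATES) - 1)]
        return "O"                                      # shedding conditions lost: remain on alert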

3.2.4. Dashboard performance

The dashboard has been functioning for more than five winter seasons and it has been successful in capturing almost all the events in this time. It has particularly functioned well in capturing the major ice accumulation events and separating them from the minor or temporary events. After the addition of the special icing sensor suite, validating the dashboard performance has become more convenient. The major events are discussed below, supported by measurements from the sensors (whenever available) that give an idea of how each event differs from the others.

The first major event recorded by the dashboard was between February 20th and 25th in 2011. The conditions for icing started around 3PM in the afternoon on the 20th, at which point the dashboard started issuing initial alarms. Around 10PM the dashboard was set to an alert state after a visual confirmation of ice on the stays. The conditions were sustained for the next four days, through which the dashboard remained in an Alert state. On the 24th the dashboard sent alarms on shedding at 7AM and continued to do so until 3PM, courtesy of air temperatures above 32° F. Visual inspections confirmed shedding until 4PM. At 6PM the dashboard again started alerting on shedding, again based on the air temperatures; these alerts remained active until 1AM on the 25th. There were comments at 11PM on the 24th about a possible second round of shedding of the remaining ice on the stays. At 9AM on the 25th, after visual inspections verified there was no significant ice on the stays, the dashboard returned to a ‘Clear’ state. Being the first event since the formation of the dashboard algorithm, this event served as a good example to study the consistency of information from the data sources, which helps in assigning weights to the stations and in tuning other parameters of the dashboard, such as the time delays between different states. It is to be noted that this event was similar to the previous events, where the process of accumulation was slow, taking about eight to ten hours to build up, and the accumulated ice stayed on the stays for a few days, while the shedding process was rapid.

The winter of 2011-12 was not eventful, with only a few minor alerts raised by the dashboard. The next significant event of the dashboard was on the 26th February 2013. The dashboard started issuing alerts about possible accumulation at noon. At 5PM there were reports of accumulation of 0.25 inches of ice, at this point the dashboard was at the ‘Alert’ stage, waiting for indications from weather sources for shedding conditions. As the temperatures rose the dashboard alerted on shedding from 7PM to 9PM and once visual inspections confirmed that there was no ice on the stays the dashboard returned to a ‘Clear’ state. This event was an example where the accumulation was quicker than the previous events, 0.25 inches accumulating in less than five hours. Although the dashboard’s alerts were on time, the algorithm was designed for events where the buildup of ice would take about eight to ten hours or longer.

On the 27th and 28th of February 2013 the dashboard sent out alerts discontinuously for several hours, as the temperatures were borderline and there was a lot of precipitation. However, field inspections indicated that there was no significant accumulation. On 16th March 2013, the dashboard sent out alerts continuously for thirteen hours based on low temperatures and constant precipitation; however, the reports indicated that there was about 0.07 inches of ice on the stays, which was deemed insignificant. On 25th March 2013, the dashboard started alerting before noon, and at 6PM the conditions for accumulation had been met for eight consecutive hours. At this point, visual inspections were made and reports suggested that there was a rain-snow mix but no significant accumulation. At 7PM, as the temperatures started rising, the adverse conditions were cleared.

The only major event of the 2013-14 winter was on 9th December 2013. The dashboard alerted at 2AM on the 9th for just an hour but did not continue to alert, as the weather reports from the stations did not indicate the possibility of accumulation in the following hours. However, manual inspection at 5AM confirmed the presence of thin layers of ice on the stays. At this stage, the dashboard moved to an ‘Alert’ state and remained in that state until 2PM, when it was decided that the little ice present was insignificant and that the situation was not a cause for concern. This event was of high importance as it was the first event since the installation of the icing sensor suite on the bridge. The performance of the Ice detector and the Leaf Wetness sensor was in accordance with the conditions on the bridge. Figure 11 below shows how the Ice detector recorded the accumulation of ice and how up to 0.12 inches of ice could have accumulated on the stays. Based on this event the newly installed sensors were factored into the algorithm with thresholds set based on lab experiments.


Figure 11 - Ice thickness and Leaf wetness on Dec 9, 2013

The remainder of the 2013-14 winter was not eventful, with a few minor alerts in February and March 2014.

The first event of the 2014-15 season was between January 21st and 25th 2015. The dashboard started issuing alerts at 8AM on the 21st. At 10AM the dashboard moved to an ‘Alert’ state based on reports from the field, and remained in that state until 10AM on 22nd January. At 10AM the dashboard detected the increase in the temperatures of the stay thermistors and the rise in temperatures from other weather sources, based on which alerts of possible shedding were raised. Until 5PM the dashboard alerted on shedding intermittently, based on the conditions for shedding being met. It went back to the ‘Alert’ state at 6PM. During this time, reports from the field stated that the ice had moved from the top of the cables and was now present on the underside of the cables. There were also reports of small pieces of ice falling off the stays. On analysis of the event, these statements were supported by the stay thermistors and the sunshine sensor. This can be seen in Figure 13, where two thermistors, the ones at the top of the stay, rise rapidly compared to the others when there was sunshine. This could have caused the ice to melt and the water to run along the sides and freeze on the underside of the stays. Another interesting point to be noted is that the temperatures on the two sides of the bridge were also significantly different at the same time. Figure 14 shows how thermistor readings at the southern end of the bridge were significantly higher than at the northern end. This could be because of the shadow of the stays at one end falling on the other end, thereby blocking the sunlight. It is also to be noted that there was no precipitation since the evening of the 21st, and the Ice detector did not show any increase, as shown in Figure 12.


Figure 12 - Ice thickness on Jan 22, 2015


Figure 13 - Temperature and solar radiation on Jan 22, 2015


Figure 14 - Stay temperatures on Jan 22, 2015

On 23rd January the dashboard remained in the ‘Alert’ state until 12 PM, when the air temperatures started rising, causing the dashboard to alert on possible shedding. Conditions for shedding were met for the next 10 hours and the dashboard continued alerting on shedding. One side of the bridge was closed to public traffic from 8AM. Reports from the field stated that there was shedding of pieces of ice in the morning and later around 4PM. Sheaths of ice up to a thickness of 0.12 inches were found to have shed from the stays. At 6 PM field observations deemed that the bridge was safe for public traffic and the outer lanes were opened. Although the temperatures increased only after 12PM, shedding of small pieces of ice did occur earlier in the morning. This suggests that temperature and new precipitation are not the only causes of shedding. From looking at weather data from other sources, one possible explanation for the shedding of ice in the morning was high wind speeds. Wind speeds reported by a weather station on the bridge were very erratic, and the wind speeds from neighboring locations reported very high values. The shedding could also be due to vibrations on the bridge, with vibration due to traffic causing the smaller pieces of ice to fall off. Figure 15 shows a piece of ice that fell off the stays.

Figure 15 - Piece of ice from Jan 23, 2015 [22]

On the 24th the dashboard continued to alert on possible shedding whenever the conditions were met. No significant shedding was reported from the field. On the morning of the 25th the conditions were declared clear after visual inspections confirmed that there was no ice on the stays.

This event was similar to the event in 2011, where the accumulation was over a period of eight to ten hours. However, in this event shedding occurred on multiple days, making it the most significant event since the inception of the dashboard. This event was also of very high importance in the understanding of the sensor performance, shedding mechanisms and how geographical factors affect shedding of ice.


The next event was on March 3rd 2015, when the dashboard started alerting at 10AM. At 1PM reports from the field confirmed the presence of ice on the stays, although of very small thickness. From 2PM the dashboard alerted on possible shedding until 6PM as the temperatures were seen to be rising. Reports at 6PM stated that there were small pieces of ice falling off the stays, but the size was not of concern. At 8PM the conditions were reported to have cleared and the dashboard returned to a ‘Clear’ state. The interesting observation from this event was that the stay temperatures were all identical as they were rising, unlike the previous event when two of them rose rapidly. The sunshine sensor reported no sunshine on that day; therefore, the heating was due to the rise in air temperature. This also meant that the rise in temperature was uniform at both ends of the bridge.

In summary, the dashboard has been performing well in alerting ODOT officials to impending weather events.

3.3. Improvements to the algorithm

There have been several improvements and upgrades to the dashboard system since 2013. The most significant improvement was the addition of the custom sensor suite. The choice of sensors and the bench testing were conducted by Biswarup Deb and the results are documented[9]. The installation of all these sensors was carried out in June 2013. The sensors were added to the algorithm in the following winter, once successful operation of the sensors was confirmed. They were incorporated into the algorithm with the station weights suggested by Deb based on his bench testing results. However, after a few events the threshold of the Leaf wetness sensor was set lower to reduce the number of false alerts.

The Ice detector has a heater built into it to heat the probe of the sensor when the amount of ice on it reaches its maximum limit of 0.15 in. This causes the data from the ice detector to zero itself every time the ice exceeds 0.15 in. During large events this could happen several times, and there was a need to find the cumulative ice through the entire event to get the total possible ice thickness on the stays and thereby the severity of the event. For example, in Figure 16 below, the plot on the left shows the values reported by the ice detector. The spikes in the data as it rises indicate the times the heater turned itself on and zeroed the reported ice thickness. The newly processed variable ‘Cumulative Ice’ is shown in the plot on the right, which accounts for the heater and displays the total accumulation of ice during an event.

Figure 16 – Comparison of Ice and derived variable ‘Cumulative Ice’

This processed variable also tries to account for the ice melting off by tracking the temperature. It is set to zero if any of the stay thermistors exceed the freezing point of 32° F.
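A sketch of how ‘Cumulative Ice’ might be derived from the raw ice-detector series follows: a sharp drop in the reading is taken as the heater zeroing the probe near its 0.15-inch limit, and the thickness built up to that point is carried forward; the running total is reset to zero whenever any stay thermistor goes above 32° F. The drop tolerance and variable names are assumptions made for illustration, not the dashboard's exact processing.

def cumulative_ice(ice_in, max_stay_temp_f, reset_temp_f=32.0, drop_tol_in=0.05):
    """Derive a 'Cumulative Ice' series (inches) from raw ice-detector readings.

    ice_in          : raw ice thickness samples from the ice detector (inches)
    max_stay_temp_f : maximum stay-thermistor temperature at each sample (deg F)
    drop_tol_in     : a drop larger than this is treated as a heater zeroing cycle
    """
    cumulative, offset, prev = [], 0.0, 0.0
    for ice, stay_t in zip(ice_in, max_stay_temp_f):
        if stay_t > reset_temp_f:
            # A stay thermistor is above freezing: assume melt and reset the total.
            offset, prev = 0.0, ice
            cumulative.append(0.0)
            continue
        if prev - ice > drop_tol_in:
            # The heater cycled and zeroed the probe: keep what had built up so far.
            offset += prev
        prev = ice
        cumulative.append(offset + ice)
    return cumulative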

In the summer of 2015 the dashboard system was planned to be moved under the control of the operations team, which meant that the research team would no longer have control over changing thresholds, station weights and other parameters. This also required the ability to change the important algorithm parameters from a web interface. To include this capability, a new ‘Algorithm Parameters’ section was designed on the website[5]. This has four major sections, namely Station weights, Email lists, Alarm thresholds and Transition times.

The station weights section has a table that shows the weather stations, their recommended default weights, the current weight being used in the algorithm, and a text box to enter new weights. The user can click on Update to apply the changes and then click on the History button to see the historical changes made to this section.

Figure 17 – Station weights section

The email list section has a table showing the list of names and email addresses of the users who are on the email list when alarms are issued. The user can add a new name and email address to the existing list and also delete an email address from the list if need be. The user can click on Update to apply the changes and then click on the History button to see the historical changes made to this section.

Figure 18 – Email lists section

The alarm thresholds table shows the list of sensors and their recommended default thresholds, the current thresholds being used in the algorithm, and a text box to enter new thresholds. The user can click on Update to apply the changes and then click on the History button to see the historical changes made to this section.


Figure 19 – Alarm thresholds section

The last section is a transition times table, which shows the list of rule-transition types and their recommended default transition times, the current transition times being used in the algorithm, and a text box to enter new transition times. The user can click on Update to apply the changes and then click on the History button to see the historical changes made to this section.


Figure 20 – Transition times section

These, along with other minor updates from time to time, keep improving the intelligence of the dashboard system.

3.4. Accumulation and shedding patterns in significant events from the past

3.4.1. Revisiting past events

The first significant event that drew attention to such behavior on the bridge was back in December 2007. The data from Toledo Express Airport and Metcalf Field indicated that freezing rain and fog occurred on December 9-10, which is believed to have caused ice accretion on the stays. Rainfall with temperatures above freezing triggered the ice shedding from the stays, which took place on December 11. Pieces up to 8-9 inches long were observed to have shed from the sheaths. The Toledo Blade newspaper on December 12, 2007 reported on the ice shedding of December 11th[7]. According to the report, two lanes were closed and two accidents were recorded near or on the bridge.

The next event on the bridge was in March 2008[23]. Ice accumulated on the 27th and shed on 28th. Weather data from the airports show that a snow and rain mixture with temperatures falling below freezing, concurrent with a fog, could have caused ice formation on the stays on the evening of March 27th. Clear skies and air temperatures above freezing on March 28th are believed to be the shedding triggers. Comments from visual inspections suggested that the ice fall was initiated by the sun coming out. It correlates with the rise in air temperature which went above freezing around 11 AM on the 28th. Two lanes were closed on the 28th and one accident was recorded during this event.

The third major event occurred in December 2008[24]. On December 17th, ice was first observed on the stays, although no precipitation was recorded at the airports on this day. However, data from December 16th met conditions that could have caused accumulation. On December 19th conditions for accumulation of ice were met for several hours, but there are no visual reports to confirm this. However, ODOT closed two lanes in either direction for five days between December 19th and 24th. On December 24th ice shedding occurred, and the weather data from the airports suggests that temperatures above freezing and gusty winds could have been the cause.

January 2009 saw the next icing event[24]. Ice was first observed on January 3rd, but the airports did not record any precipitation. On January 4th, the airport records show that there was some rain and the temperatures were cold, indicating possible accumulation. The next few days saw the temperatures sitting well below freezing, with heavy snow and occasional rain. On January 13th, the air temperature went above freezing for the first time since January 5th. Gusty winds and this rise in temperature could have triggered the ice shedding from the stays as observed by ODOT. Although most of the ice shed on Jan 13th, the persistence of ice on the stays forced ODOT to keep one lane closed in both directions. The thin layer of ice remaining on the stays could have melted or sublimated before January 21st, when the lanes were finally opened. Figure 21 shows the accumulation of ice on the stays during this event.

Figure 21 - Ice accumulation on stays [25]

The next major icing event occurred in February 2011[23]. This icing event was observed and recorded from ice formation on the evening of February 20th through the ice shedding on the morning of February 24th. On February 20th, the RWIS station on the bridge reported freezing rain, then a drop in temperature. The wind was from the east side

and clear ice with few air bubbles (glaze ice) was deposited by the freezing rain. By the morning of the 21st, the ice on top of the stay cable was 0.25 inches thick, on the east side about 0.5 inches thick, and on the bottom there were small icicles and ice about 0.5 inches thick.

On the west side, the stay was bare except for some frozen rivulets. On the 21st there was a mix of snow, freezing rain, and sleet; however, most of it was observed to fall off and not add to the accumulation, as the temperatures were well below freezing the entire day. On February 22nd there was no more precipitation; however, the ice on the stays started to melt midday after the sun came out. This led to cracks in the ice and water dripping from icicles, but towards the end of the day most of the water refroze. This resulted in a different distribution of ice on the stays, with the sides becoming much thicker than the top of the stays. Measurements from the field confirmed this phenomenon. One lane was closed for most of the 22nd. On February 23rd, the air temperature never rose above 32° F, but measurements in the field indicated stay temperatures varying from 32° F to 36° F during the middle of the day. This correlated with the sun coming out and warming the stays.

Visual inspections report that the inner layer of ice had started to melt and water running between the cable and a sheet of ice was visible. Small pieces of ice were also observed to chip off. Large pieces of ice were found precariously hanging onto the stays, visible in Figure 22. Such conditions prompted ODOT to keep two lanes closed through most of the day.

On February 24th, shedding of large chunks of ice was observed. Shedding seems to have started around the same time air temperatures started to rise above 32° F. All three lanes were closed for a couple of hours in the morning, but two lanes were kept closed the entire day until all of the ice was confirmed to have shed from the stays.


Figure 22 - Ice shedding from the stays [26]

The event on February 26th, 2013 started between 10 and 11 AM[9]. The RWIS station on the bridge started reporting rain while the stay temperatures dropped to 23° F. Continuous precipitation through the day led to a 0.25-inch-thick layer of ice accumulating on the stays. Later in the day, around 4 PM, the air temperature started rising above 32° F and subsequently the stay temperatures also started rising. This led to shedding of ice, which was observed between 6 and 10 PM. At 7 PM, reports from the site suggested that the ice on the stays had greatly reduced. Continuous rain and the rise in ambient temperature sustained the shedding, and at 10 PM the stays were clear of ice. No lanes were closed during this event. This was the first time accumulation and shedding were observed on the same day; until this event, accumulation was assumed to be an 8-10-hour long process and shedding would generally occur on the days after. This was the first quick accumulation and quick shedding event observed on this bridge.

The next event occurred on March 16, 2013[9]. Accumulation of ice started at 3 AM. The air and stay temperatures dropped below 32° F and rain had been recorded since 12 AM. Visual observations confirmed the presence of ice at 11 AM but reported that the thickness was less than 0.07 inches. Once the sun came out, the thermistors recorded the rise in temperatures above 32° F by 2 PM. At 3 PM visual inspections confirmed that the stays were clear of ice. Rain was still being reported by the RWIS station while visual observations reported light mist in the area. No lanes were closed during this event, and at 7 PM the bridge was declared clear of any adverse conditions.

The next event, on March 25, 2013, was very similar to the previous event[9]. Conditions for accumulation were met at 5 AM when the temperatures were below 32° F and rain was being reported by the RWIS station. At 11 AM, field reports confirmed the presence of ice on the stays but remarked that the thickness of accumulation was insignificant. It was also reported that the precipitation was a rain and snow mix. Around 1 PM the air temperature along with all the stay temperatures went above 32° F. With the constant precipitation, the ice is believed to have washed off the stays past 1 PM. At 2 PM reports confirmed that most of the ice had washed off and that only traces of ice were still present on the stays. Since the temperatures all stayed well above 32° F after that, the ice shed completely, and at 7 PM it was declared that the adverse weather was clear. No lanes were closed during this event either.


In the summer of 2013 a new sensor suite was installed on the bridge. This was installed to capture the microclimate on the bridge and to get more accurate information about the conditions. The sensor suite consisted of a leaf wetness sensor, a solar radiation sensor, an ice detector and a rain bucket. With this set of sensors, information on precipitation, type of wetness on a surface, thickness of ice and the amount of solar radiation could be collected and used to understand the events better.

The next event occurred on December 9, 2013, starting at 2 AM. The stay temperatures were below 32° F and the leaf wetness sensor reported possible accumulation conditions. The ice detector also recorded a thickness of 0.12 inches by 3 AM. No precipitation was recorded by the RWIS stations, but it was a very windy day with the average wind speeds being greater than 20 mph. Reports from the bridge confirmed the presence of ice on the stays at 9 AM. At 2 PM the cables were reported to be clear of ice. The temperature never went above 32° F and there was no precipitation. From the thickness on the ice detector it could be observed that the ice was coming off. This event showed that temperature, solar radiation and precipitation are not the only factors that trigger shedding. There was no lane closure during this event.

The next icing event on the bridge was observed on April 13, 2014. The RWIS station reported rain starting at 9 AM; however, the temperatures dropped to 32° F and below only at 11 AM. The ice detector started showing accumulation starting at 10 AM. The total accumulation during the event was 0.26 inches on the ice detector, but there were no visual inspections to collect thickness on the stays. The ice is expected to have come off the stays around 3 PM, when there was a sudden decrease in the ice detector's thickness and the temperatures rose above 32° F. Wind speed also started rising after 3 PM. All the stay temperatures stayed above 32° F after 4 PM. This was a significant event as it was the first time the ice detector had exceeded an accumulation of 0.25 inches, which was considered a critical value for thickness of ice on the stays; however, no lanes were closed during this event and there are no reports from visual inspections.

The next event occurred on January 3, 2015. This event started early in the morning around 6 AM. The RWIS station did not report any precipitation, but neighboring weather stations reported rain. The temperatures were all below 32° F and the ice detector started showing accumulation around 7 AM. The accumulation from the ice detector reached 0.35 inches before 10 AM and continued on to 0.52 inches through the afternoon. Ice is believed to have come off around 5 PM when the air and stay temperatures started rising above 32° F. No lanes were closed during this event and there were no visual observations made; therefore the exact thickness of accumulation on the stays is unknown.

The event from January 21-25, 2015 is the most significant of all the events discussed. This event stretched over 4 days and included lane closures on 3 days. It is significant because all sensors were active and contributing during this event, and there were multiple reports from the bridge site which helped analyze the data. The event started between 7-8 AM on January 21st; the temperatures were all below 32° F and the RWIS station was reporting snow starting at 6 AM. After 7 AM the ice detector showed accumulation of ice, and the presence of ice on the stays was confirmed at 10 AM. By 12 PM the ice detector reported accumulation of 0.2 inches. However, reports at the end of the day recorded a thickness of 0.07 inches of ice on the stays. No lanes were closed on the 21st.

Around 10 AM on January 22nd there was enough solar radiation to warm up the top of the stays to a temperature above 32° F. Between 10 AM and 12 PM the RWIS reported rain as opposed to snow in the earlier hours. The air and other stay temperatures stayed below 32° F while the temperature of the top of the stays rose to 50° F. At 12 PM small pieces of ice were reported to have shed from the stays and the lane closest to the stays was closed.

At 6 PM two lanes were closed. On January 23rd at 7 AM all lanes were closed after there were reports of ice shedding from the stays. This is believed to be due to extreme winds. Between 6-8 AM all local weather stations reporting wind speeds recorded very high gusts or out-of-range values, suggesting there were very high wind gusts. During the day, between 11 AM and 1 PM, the top of the stays was above 32° F again. Ice pieces with the curvature of the stays, 0.12 inches thick, were found on the ground, as shown in Figure 23. One lane was eventually opened to traffic at 6 PM. On January 24th, there was no shedding until 3 PM when the temperatures started moving above 32° F. By 6 PM one more lane was opened to traffic, and by the end of the day all the ice was reported to have shed and the adverse conditions were cleared.

Figure 23 - Piece of ice from the stays[22]

The event on March 3, 2015 started between 9-10 AM. The RWIS station did not report any precipitation, but the leaf wetness sensor reported wet conditions, the ice detector started reporting accumulation and the temperatures were all below 32° F. At 1 PM the presence of ice was confirmed, but the thickness was only 0.03 inches. Ice continued to accumulate past 2 PM, when the ice detector reported a thickness of 0.1 inches. Around 3 PM the temperatures started rising above 32° F, and at 4 PM small pieces of ice were seen falling off the stays. Around 5 PM visual reports confirmed that the remaining ice was dripping down the stays. No lanes were closed during this event, and at 7 PM the stays were clear of ice.

This section has covered each significant event between Nov 2007 and Mar 2015 in detail. This information is used to understand and possibly derive relationships between atmospheric conditions, corresponding sensor values, conditions on the bridge and appropriate control actions.

3.4.2. Assumptions and limitations

One of the biggest limitations in this research is the difficulty in recording actual conditions in the field. Results and theories are formed based on the limited information that was gathered during icing events. Other limitations include the inconsistent nature of data from the RWIS stations; the format of RWIS reports changes often, and this makes interpretation of RWIS data very difficult. Another limitation is the inability to inspect and measure the upper half of the stays and the pylon. It is extremely difficult to assess the conditions beyond 20 feet, and the thickness and other properties of ice there may not be the same as they are at ground level.


3.4.3. Types of ice accumulation

Accumulation of ice on the stainless steel stays depends on various atmospheric and structural conditions such as air temperature, stay temperature, precipitation type and quantity, etc.[7]. Based on the significant events described in section 3.4.1, accumulation can be categorized into five types[27]. These categories are not mutually exclusive; more than one type could be contributing to accumulation at a given time.

The first type is caused by cold temperatures below or at 32° F and the presence of rain or fog, as discussed above. This type of accumulation was observed in 10 of the 13 events. It has been accompanied by other types in 8 of the 10 events in which it occurred.

The second type of accumulation is caused by precipitation of snow when the air or stay temperatures are above 32° F. The snow hits the warm stays and melts, but it cools down rapidly, resulting in the water freezing to ice. This type of accumulation has taken place in 4 of the 13 events.

The third type of accumulation is when the temperatures are cold, below or at 32° F, accompanied by snowfall. This has also occurred 4 times over the 13 events and has proved to be influential, since 3 of these events led to lane closures.

The fourth type of accumulation is when the air temperature is below or at 32° F but the top of the stays is warmed above 32° F by sunshine, while the sides and the bottom of the stays are still below 32° F. This can be seen in Figure 24: the temperature plot shows a steep increase in the temperature of two thermistors while the others remain below 32° F, and the solar radiation plot shows a clear difference between global radiation and diffuse radiation, indicating sunshine during the same time. This causes the ice on top of the stays to melt and roll over to the sides and the bottom. The sides, being colder, cause the water to refreeze, leading to an uneven distribution of ice around the stays. Although there is no new precipitation, the redistribution of ice leads to new thicknesses, which constitutes localized accumulation. The phenomenon has been observed twice during the 13 events, both of which led to lane closures.

Figure 24 - Stay temperatures and Solar radiation on Jan 22, 2015

The fifth type of accumulation is when the ice detector on the bridge reports an accumulation of ice for two consecutive hours, irrespective of the temperature or precipitation. This has been observed even when there has been no recorded precipitation. Ice has accumulated on the probe of the ice detector earlier than it has on the stays since the probe is small. This has been a good precursor during events, and this type of accumulation has been observed in 5 of the 13 events.
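To make the use of these categories concrete, the following is a minimal sketch of how the five accumulation types could be flagged from one hour of sensor data. The field names (air_temp_f, stay_temp_f, top_of_stay_temp_f, precip, ice_detector_accumulating_2hr) are illustrative and do not correspond to the actual dashboard variables.

FREEZING_F = 32.0

def accumulation_types(hour):
    # 'hour' is a dict of illustrative sensor readings for one hour:
    #   air_temp_f, stay_temp_f, top_of_stay_temp_f : temperatures in deg F
    #   precip : 'rain', 'snow', 'fog' or None
    #   ice_detector_accumulating_2hr : True if the ice detector has reported
    #                                   accumulation for two consecutive hours
    cold_air = hour["air_temp_f"] <= FREEZING_F
    types = set()
    if cold_air and hour["precip"] in ("rain", "fog"):
        types.add("A1")   # type 1: cold with rain or fog
    if hour["precip"] == "snow" and (hour["air_temp_f"] > FREEZING_F
                                     or hour["stay_temp_f"] > FREEZING_F):
        types.add("A2")   # type 2: snow melting on warm stays, then refreezing
    if cold_air and hour["precip"] == "snow":
        types.add("A3")   # type 3: cold with snowfall
    if cold_air and hour["top_of_stay_temp_f"] > FREEZING_F:
        types.add("A4")   # type 4: sunshine warms the top; meltwater refreezes on the sides
    if hour["ice_detector_accumulating_2hr"]:
        types.add("A5")   # type 5: ice detector accumulation for two consecutive hours
    return types

Because the conditions are checked independently, more than one type can be returned for the same hour, mirroring the observation that the categories are not mutually exclusive.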

3.4.4. Types of ice shedding

The shedding of ice from the stays was previously thought to occur due to one of two reasons, rising temperatures or solar radiation. On thorough examination of the shedding phenomena over the 13 events, it has been observed that shedding can be categorized into 9 types[27]. Some of these types are subsets of others, but each type has a different meaning in the context of the events it has occurred in. Like the accumulation types, these categories are not mutually exclusive; more than one type could be contributing to shedding at a given time.

The first type of shedding is the simplest. This is when the air temperature rises above 32° F and the sun is shining. This has caused large pieces of ice to fall off the stays; these large pieces generally retain the curvature of the sheath. This type of shedding has been observed in 2 of the 13 events.

The second type of shedding is when the stay temperatures are locked just below or at 32° F but the air temperature is above 32° F. This happens because the air warms faster than the ice, since the latent heat of the melting ice must be overcome before it can rise above 32° F. This delays the shedding of ice. The phenomenon is clearly visible on the stay temperature plots, where the temperatures are pegged at 32° F for a period before they start rising again. This type of shedding generally leads to ice turning to water and dripping from the stays and has been observed in 2 of the 13 significant events.

The third type of shedding is similar to accumulation type 4, where the ice on top melts due to the sun and rolls over to the sides and bottom of the stays. This occurs when the air temperature is below or at 32° F. Although ice has not shed entirely from the stays, this phenomenon has led to an uneven distribution where ice has shed locally from the top of the stay. This again is significant, occurring in 2 of the 13 events.

The fourth shedding type is when solar radiation leads to shedding. This has occurred on 4 occasions. This type of shedding leads to small pieces of ice chipping off the stays. Although it occurs every time type 1 or 3 has occurred, this type has been observed before the conditions for the other types have been satisfied, and when it has occurred individually it has often led to the lanes remaining open.

The fifth type of shedding is due to high-speed wind gusts. Winds with gusts over 10 mph can cause the ice to shed. These winds typically blow off small icicles or loose pieces of ice present on the stays. This has been observed on as many as 10 occasions during the 13 events.

The sixth type of shedding is when the ice comes down in neither solid nor liquid form. This has been observed on 3 occasions. The shedding of ice from the stays has not been visible; the ice has simply sublimated from the stays. This has been captured by the ice detector on the bridge, whose ice thickness value drops rapidly while all the temperatures are below or at 32° F.

The seventh type of shedding is when the stay temperatures rise above 32° F. This is independent of the air temperature. It has been observed on 6 of the 13 occasions, and this type of shedding has generally been associated with small and large pieces of ice falling off the stays.

The eighth shedding type is when wind accompanied by rain causes the ice to come off while the top of the stays is warm and above 32° F. This combination has been observed during three events and has resulted in small pieces of ice being blown off the stays.

The ninth type of shedding is when the air warms up, reaching temperatures above 32° F. This has occurred 9 times during the 13 significant events. It has been significant when it occurs by itself, and it has usually been a precursor to shedding type two. This can be used to make decisions on control actions.

3.4.5. Relationship with lane closure

To understand the relationship between lane closures and the types of accumulation and shedding, let us look at some examples of events.


Figure 25 – Mar 3, 2015 event summary with accumulation and shedding types

Figure 25 above shows how the event on Mar 3 transpired. The details of this event have been discussed in section 3.4.1. The region shaded in yellow shows the time when accumulation could have taken place. In this particular event there was just one type of accumulation observed. The region shaded in light red is the time during which shedding occurred. The four types of shedding that were observed are also annotated in the figure.

Figure 26 below is the summary of the events at the bridge on Jan 22, 2015. This was day 2 of the event that lasted between Jan 21-24. This example shows how the yellow region and the red region overlap, indicating that there are times when both accumulation and shedding occur simultaneously. This also shows four different types of accumulation and four different types of shedding occurring on the same day.


Figure 26 – Jan 22, 2015 event summary with accumulation and shedding types

Once these accumulation and shedding types were identified for each event, the aim was then to find out how these types were associated with control actions like lane closure. At any point during an event, one of four lane closure states is possible: all lanes open, represented by L0 in Table 2 below; one lane closed, represented by L1; two lanes closed, L2; and three lanes closed, L3. Table 2 below shows the conditional probability of occurrence of each of the accumulation types, shedding types and lane closure states. It can be seen that every time accumulation type four has occurred, lane closure has taken place. Similarly, the probability of closing lanes during an event where accumulation type three has been observed is 0.75. It can also be seen that the probability of not closing lanes when accumulation type five has occurred is 0.8, while lanes have never been closed when shedding type two or shedding type six has been observed.
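The conditional probabilities in Table 2 can be reproduced by simple counting over the event records. The sketch below uses placeholder records rather than the actual 13 events and shows the idea for the probability of lane closure given an accumulation or shedding type:

# Each record lists the accumulation/shedding types observed during an event and the
# deepest lane closure state reached (L0 = all lanes open). The two records shown are
# placeholders, not the actual events.
events = [
    {"types": {"A1", "A3", "S5", "S9"}, "closure": "L2"},
    {"types": {"A1", "A5", "S7"},       "closure": "L0"},
]

def p_closure_given_type(obs_type, records):
    # P(at least one lane closed | the given type was observed during the event)
    with_type = [r for r in records if obs_type in r["types"]]
    if not with_type:
        return None
    closed = sum(1 for r in with_type if r["closure"] != "L0")
    return closed / len(with_type)

Computed over the actual 13 events, this kind of count yields values such as the 0.75 probability of lane closure given accumulation type three quoted above.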


[Table 2: for each accumulation type (A1-A5), shedding type (S1-S9) and lane closure state (L0-L3), the number of events in which it was observed and the conditional probabilities P(A1|X) ... P(A5|X), P(S1|X) ... P(S9|X), P(L0|X) ... P(L3|X) of observing each other type or closure state in the same event.]

Table 2 - Accumulation and shedding types and their correlation with lane closure

This information can be used to make decisions on closing lanes based on the types

of accumulation and shedding seen in the actual weather data and forecast data collected.

3.5. Lane closure and associated costs


Closing lanes on any road has many costs associated with it, like travel delay costs, stopping delays, queueing delays, fuel consumption and emission costs, business impacts, inconvenience to the local community, etc. On a busy interstate highway like I-280, which carries over 45,000 cars and 8,000 trucks on an average day, closing lanes can lead to significantly large user costs[28]. ODOT made considerable efforts to calculate these user costs for closing the VGCS bridge.

                                                   Car          Truck
Avg daily traffic on the bridge                    45370        8290
Avg hourly traffic on the bridge                   1890         372
Cost of closure for entire day                     $ 178783     $ 102462
Cost of closure for one hour                       $ 7449       $ 4269
Cost of closure of one lane for one hour           $ 0          $ 0
Cost of closure of two lanes for one hour          $ 744.9      $ 426.9
Cost of closure of all three lanes for one hour    $ 7449       $ 4269

Table 3 - Lane closure costs

Table 3 above shows the user costs for closing the VGCS bridge[28]. These costs include costs for closing both sides of the bridge. Closing all three lanes of the bridge for one hour will result in a total cost of $11,718. However, leaving one lane open will reduce the cost to $1,172, while having two lanes open and closing just one lane has $0 user cost.
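As a check on the hourly totals quoted above, the car and truck user costs from Table 3 simply add:

\[ \$7{,}449 + \$4{,}269 = \$11{,}718 \quad \text{(all three lanes closed for one hour)} \]
\[ \$744.90 + \$426.90 \approx \$1{,}172 \quad \text{(two lanes closed for one hour)} \]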

These are average costs; however, we have had comments in the past from ODOT operations management that they had kept lanes closed during the weekend, as the costs during the weekend are much lower than on weekdays. There were also some comments which suggested that the cost of closing lanes during the night is lower than closing during the day. Since we do not have exact numbers or any further information on these comments, for purposes of this study the costs are assumed to be uniform on all days and at all times of the day.

3.6. Probability of accidents and associated costs

The falling ice from the VGCS bridge has been a threat to the travelling public. From the police reports obtained, it has been observed that there have been a total of ten or more accidents on the bridge during the days of the significant events listed in section 3.1. Not all of these accidents were necessarily due to the ice falling off the stays, but it is alarming that accidents have been reported in ten of the thirteen events. The cost of an accident has to be quantified for the purposes of this research.

Looking at the total duration, from the time of the first dashboard alarm to the time the event concluded with either the dashboard going back to clear or an ODOT report confirming that the stays are clear of ice, the thirteen significant events have lasted for 702 hours. Therefore, the probability of an accident occurring on the bridge during icing events is 0.014 per hour.


The National Safety Council categorizes accidents into three types, namely property damage, non-fatal injury and fatal accidents. The average cost of an accident involving property damage is $9,100, while the cost of an accident involving a non-fatal injury is $78,700. The cost of a fatal accident resulting in the death of the person involved is $1,420,000. These costs include wage and productivity losses, medical expenses, administrative expenses, motor vehicle damage, and employers' uninsured costs[29]. These costs are also averaged to account for accidents involving cars, trucks, motorcycles and other types of vehicles.

The conditional probability of a crash being fatal given that an accident has occurred is 0.0054, while the conditional probability of an accident causing a non-fatal injury to the people involved, given that an accident has occurred, is 0.3810. The most common type of accident is the one resulting in property damage, with a conditional probability of 0.6136. These probabilities are independent of the weather conditions. Under wet weather conditions, when there is rain, snow or sleet precipitation or fog, the conditional probability of fatal accidents goes down to 0.0039. Similarly, the conditional probability of non-fatal accidents goes down to 0.3342, while that of property damage goes up to 0.6619. All the probabilities are calculated from the motor vehicle crash reports for the year 2014[30].

The average cost of an accident was therefore calculated to be $35,507. So, the effective cost to the user for the risk of an accident on the bridge during an event is $506 per hour.
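The effective hourly risk cost follows directly from the figures above:

\[ P(\text{accident per hour of an event}) = \frac{10\ \text{accidents}}{702\ \text{hours}} \approx 0.0142 \]
\[ \text{Expected hourly risk cost} \approx 0.0142 \times \$35{,}507 \approx \$506 \]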


This is the cost of allowing the traffic to pass over the bridge during an event. These costs quantify the risk of operating the bridge during adverse weather conditions.

3.7. Conclusion

The dashboard system was initially designed to inform the maintenance and operations personnel about the conditions on the bridge. Figure 27 below shows the management process envisioned when this system was developed in 2011.

Figure 27 – Dashboard system concept

However, with many events over the years we have understood that there are more variables and components that go into the complex decision-making process. There is an inherent cost-benefit analysis associated with these decisions. The factors that play into this are the probability of accidents, the cost of an accident and the cost of closing lanes, among other things. A system that automatically performs such an analysis and suggests control actions needs much more intelligence. The types of accumulation and shedding, and the mapping of these types to control actions, costs and probabilities, have been analyzed to build the intelligent system shown below.

Figure 28 – New improved dashboard system concept

The addition of forecasts to the system would further enhance its intelligence and give ample lead time for the officials to make decisions that involve millions of dollars and the safety of motorists.


4. Case Study 2: Port Mann bridge

4.1. Bridge Introduction

The Port Mann (PM) is a cable-stayed bridge with two pylons and three spans over the Fraser River, connecting Coquitlam to Surrey in British Columbia near Vancouver, Canada. This 10-lane structure is currently the second longest cable-stayed bridge in North America and was the widest bridge in the world at the time of opening[31]. The bridge has a 42 m clearance above high-water level and is 2.02 km long and 65 m wide. The towers are approximately 75 m tall above deck level. The main span is 470 meters long, making it the second longest cable-stayed span in the western hemisphere. The two towers and 288 cables span a length of 850 meters. The average monthly traffic on the bridge is over 3.4 million vehicles as of 2016.

Figure 29 – Port Mann Bridge [32]

In December 2012, shortly after opening, the bridge experienced unanticipated, weather-related events in which snow accumulated on and then shed off of the stays during an arctic outflow[33], [34]. Conceivably other similar winter conditions could also lead to snow/ice accretion. The shedding of this snow from the stays presents safety issues for the motorists traveling below.


Figure 30 – Snow falling from Port Mann stays [35]

According to local newspapers, "slush bombs" affected the bridge again during December 2016, though the BC Government stated that these weren't as severe as the 2012 "ice bombs." During that month, the bridge was closed due to the threat of snow falling off the cables and possible icy conditions[36], [37].


Figure 31 – Cracked windshield of a vehicle on the Port Mann Bridge [36]

When this occurs, the bridge operations personnel can deploy a system consisting of a series of chains mounted at the top of each stay. As the chains are released, they ride down the stay sheaths, knocking off the accumulated snow in a controlled manner. The operations team has developed a manual for chain deployment during such events. The goal of the research team at UC was to assist the system operators by providing the environmental and bridge status information necessary to efficiently manage and implement operations leading up to and during such winter events.


4.2. Dashboard details, past events analysis and performance

4.2.1. Introduction

The dashboard for the Port Mann bridge was developed in October-November 2013 to assist operations personnel for the winter of 2013-14. When the dashboard was initially built, it used a limited set of then-available stations. Since then, many sensor stations have been added to the system to increase its reliability and intelligence. The dashboard was active from December 2013 until April 2015, after which it was turned off. The system was revived in October 2017 and has been operational since then.

The dashboard system for the Port Mann bridge consists of five dashboard dials. Each dial represents the level, i.e., the persistence, of its corresponding accretion or shedding rule. Figure 32 below shows the dashboard dials as displayed on the website.

Figure 32 – Port Mann Dashboard dials

The three accretion rules in this system are:

1. Cold and Rain


2. Cold and Snow

3. Cold, Snow and wind (Ts)

Chapter 2.3 has detailed information about the accretion rules and how they were formed. The thresholds for what constitutes 'Cold' and how Rain, Snow and Ts are defined are also discussed in the same chapter.

If conditions for accretion are met, the system moves to the appropriate levels based on the duration and intensity. Based on the alert protocol set, the system sends alert emails to the appropriate officials. If conditions for accretion rule 3 are met and the system moves to a state of L1 or above for that rule, then the system variable 'Snow on Stays' moves to a TRUE state. This means that there is a high likelihood of snow being present on the cable stays. If the system does not detect an event, 'Snow on Stays' can be set to TRUE manually through an email feedback system. Once the presence of snow on the cables has been confirmed, the system then starts checking for shedding conditions.

The two shedding rules in the system are:

1. Snow on Stays and Warming

2. Snow on Stays and Raining

4.2.2. Source stations and weights

The dashboard system comprises the stations listed in Table 4 below.

Station Name       Type of station          Source/Website
MAU021
IBCPITTM3
IBCCOQUI5
IBCSURRE6          Local weather station    Wunderground.com
IBCSURRE10
IBCSURRE21
IBCBURNA10
CWWK
CWMM               Airport                  Wunderground.com
CYXX
CYVR
South Tower
Mid span           Bridge station           FTP site
Johnston Hill
JHPrecip
BRADNR
EAGLRG             RWIS station             SAW reports
STRAC
MTSTRA

Table 4 – List of weather stations for the Port Mann Dashboard monitor

Some of these stations are currently not functional, but they were part of the system in previous years. As discussed in Chapter 2.1, the dashboard collects information from a wide range of sources for redundancy, so the system will have enough information in case of the loss of one station or one entire source.

Details of the data obtained from airports, local weather stations and RWIS stations are discussed in Chapter 2.1. There are five bridge stations which have sensors installed specifically to monitor the conditions of the bridge (or close to the bridge). There is a station installed on top of the south tower to capture conditions at the top of the bridge. This station has sensors that report temperature, pressure, humidity, windspeed, leaf wetness value and solar radiation. It also reports ice thickness, snowpack and a precipitation deterioration ratio. There are two stations located at road level in the middle of the bridge, the two midspan stations. The first midspan station has sensors that report temperature, dewpoint, humidity and surface parameters like temperature, level of grip, water thickness, snow thickness and ice thickness. The second station at midspan was installed upon request of the dashboard team to obtain values for wind at road level on the bridge. This station reports temperature, humidity and wind data.

The other two bridge stations are not installed exactly at the bridge but are located very close to it, at Johnston Hill. Installing all the necessary sensors on the bridge was not possible because of the limited availability of space and other resources. Some of the sensors are also affected by factors like moving traffic, interference with other nearby airport signals, etc. Thus, a location at Johnston Hill, just south of the bridge, was chosen to install the other sensors. The first station at Johnston Hill is an exact replica of the midspan station; it has sensors that report temperature, dewpoint, humidity and surface parameters. The second station at Johnston Hill has sensors that report temperature, humidity, precipitation quantity, snowpack and wind. This station is also equipped with a sensor that classifies the type of precipitation, to distinguish between rain, snow and ice. This sensor was installed at the request of the dashboard team and is very useful to determine the exact conditions on the bridge, as we know that the bridge experiences a microclimate where there could be precipitation at just the bridge and not in the neighboring towns. If the neighboring local weather stations don't report snow or ice, the dashboard would otherwise be blind to the conditions on the bridge. The locations of all these stations are shown in Figure 33.

Figure 33 – Port Mann monitor weather station locations


The data from all the sources listed above combine to make a robust set of data for the algorithm to work on. Based on analysis of historic data and each station's performance over the past few years, each station has been assigned a weight. The role of these station weights and their significance in the algorithm have been discussed in Chapter 2.3. These weights represent the importance of a station to the algorithm and are also based on how well the station represents the conditions on the bridge. Table 5 below summarizes the weight of each station.

                   Accretion   Accretion   Accretion   Shedding   Shedding
                   Rule 1      Rule 2      Rule 3      Rule 1     Rule 2
CWWK               0.1         X           X           0.1        X
CWMM               0.1         X           X           0.1        X
CYXX               0.1         0.1         X           0.1        0.1
CYVR               0.1         0.3         X           0.1        0.1
IBCPITTM3          0.1         X           X           0.1        X
IBCCOQUI5          0.3         X           0.3         0.3        X
IBCSURRE6          0.3         0.3         0.3         0.3        0.3
IBCSURRE10         0.1(0)      0.1(0)      0.1(0)      0.3(0)     0.1(0)
IBCSURRE21         0.3         X           0.3         0.3        X
IBCBURNA10         0.1         0.1         0.3         0.1        0.1
MUA021             X           X           X           0.1(0)     X
BRADNR             0.1         0.1         0.1         0.1        0.1
EAGLRG             0.1         0.1         0.1         0.1        0.1
STRAC              0.1         0.1         0.1         0.1        0.1
MTSTRA             0.1         0.1         0.1         0.1        0.1
MTSTRA-STRAC       X           X           0.1         X          X
South Tower        0.3         X           X           0.3        X
Midspan            X           0.3         X           0.3        X
Johnston Hill      X           0.3         X           0.3        X
JH Precip          0.3         0.3         0.3         0.3        0.3
Midspan-JH proxy   X           X           0.3         X          X

Table 5 – Port Mann monitor weather station weights

Note that each station has a different weight for each rule. This is based on the instrumentation available at the station and also on the reliability of the data coming from it. For example, if a station does not report the type or quantity of precipitation, then it cannot be used in the three accretion rules and the second shedding rule. If a station does not get a vote for a particular rule, it is denoted by 'X' in the table. Some stations were reliable in the past and had a weight of 0.1 or 0.3, but they are now inactive. The weights of these stations have been set to 0 to remove them from the algorithm; these are represented by '(0)' (zero in parentheses) next to the old station weights.

Calculating Ts to get an estimate of the thickness of snow on the cables needs reliable precipitation quantity and windspeed measurements, and this was possible only at 8 locations. The most reliable stations for a good Ts were IBCSURRE21, IBCSURRE6 and the JHPrecip station. These were still not representative of the events at the bridge, because the windspeed at the bridge is much larger than what is recorded at these stations. This required the installation of a wind sensor at the Midspan location. The wind from this location was combined with the precipitation values from the Johnston Hill station to create a proxy Ts voter. A similar concept was applied to the RWIS stations MTSTRA and STRAC, two stations at Mt. Strachan where one station had only precipitation values while the other had just windspeeds. Combining stations to make additional voters was a way to get a wider range of Ts values for the dashboard algorithm.

4.2.3. State transitions

As mentioned above, the dashboard has 4 levels for each rule, namely 'Clear', 'L1', 'L2' and 'L3'. The transition of each rule is based on the sum total of the votes from the stations exceeding the set threshold. The state transitions for the accumulation rules are represented visually in Figure 34.


Figure 34 – Port Mann Dashboard Accretion state transitions

The state of each accumulation rule depends not only on the current conditions but also on its current state and the conditions of the last 6 hours. This takes into account the persistence of the apposite weather conditions for each accumulation rule. It is worth noting that accumulation rule 3 can move out of the 'Clear' state only if accumulation rule 2 is at 'L1' or higher. Another feature to note is that when accumulation rule 3 reaches a state of 'L1' or higher, the variable Snow_On_Stays is set to 1 (Boolean TRUE). This Snow_On_Stays variable serves as the starting condition for the shedding rules.

Figure 35 - Port Mann Dashboard Shedding state transitions


Figure 35 above shows the state transitions for both shedding rules. The time intervals between the transitions are very similar to the accumulation rules. The shedding rules are active only as long as the Snow_On_Stays variable is at '1' (Boolean TRUE). Once Snow_On_Stays returns to '0' (FALSE), the shedding rules fall back to the 'Clear' state.
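The following is a highly simplified sketch of the rule logic described above, assuming weighted station votes and a single per-rule threshold; the actual transition timing (the 6-hour persistence windows and per-level thresholds shown in Figures 34 and 35) is defined in Chapter 2.3 and is not reproduced here.

LEVELS = ["Clear", "L1", "L2", "L3"]

def weighted_vote(stations, weights, condition):
    # Sum of the weights of the stations whose current data satisfy a rule's condition.
    return sum(w for name, w in weights.items()
               if name in stations and condition(stations[name]))

def step_rule(level, vote, threshold):
    # Step one level up when the weighted vote clears the threshold, otherwise one level down.
    i = LEVELS.index(level)
    return LEVELS[min(i + 1, 3)] if vote >= threshold else LEVELS[max(i - 1, 0)]

def update(dash, stations, weights, conditions, thresholds):
    # One evaluation cycle of the simplified Port Mann rule logic.
    for rule in ("accretion1", "accretion2", "accretion3"):
        if rule == "accretion3" and dash["accretion3"] == "Clear" and dash["accretion2"] == "Clear":
            continue  # accretion rule 3 may leave 'Clear' only once rule 2 is at L1 or higher
        vote = weighted_vote(stations, weights[rule], conditions[rule])
        dash[rule] = step_rule(dash[rule], vote, thresholds[rule])
    if dash["accretion3"] != "Clear":
        dash["snow_on_stays"] = True   # latches TRUE; reset through the email feedback system
    if dash["snow_on_stays"]:
        for rule in ("shedding1", "shedding2"):
            vote = weighted_vote(stations, weights[rule], conditions[rule])
            dash[rule] = step_rule(dash[rule], vote, thresholds[rule])
    else:
        dash["shedding1"] = dash["shedding2"] = "Clear"
    return dash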

4.2.4. Past events and Dashboard performance

Table 6 below lists all the dates that were of concern on the Port Mann bridge since 2013. Snow was recorded or reported on each of these event dates. A range of dates indicates a long event, for example one that started late on day 1 and continued onto day 2. Some events lasted for many days continuously, making the management of cable-stay technicians on the bridge extremely difficult.

[Table 6 lists the event dates by month (November through March) for each winter season from 2013-14 through 2017-18.]

Table 6 – Port Mann summary of events

The dashboard alerted either before or at the start of each of the twenty events that occurred while it was operational in 2013-15 and 2017-18. The first alert from the dashboard came, on average, 2.85 hours before the first report of snow on the bridge. The report of snow on the bridge is determined from the classifier at Johnston Hill, the ice detector at the south tower, or from images from cameras mounted at the top of the bridge. Further, with hindcasting we have shown that the system would have alerted for all 12 chain-drop events in 2016-17 had the dashboard been operational at that time. Analysis of the events tells us how they transpired and helps us correlate operations and sensor data. This study tells us that the dashboard is robust in terms of the data sources, their weights and the thresholds on the accretion rules that generate alerts: it has alerted either ahead of time or at the beginning of every event analyzed.

Another study was made to distinguish severe events from mild events and to track spatio-temporal patterns in the winter storms that affect the bridge. To do this, the two airports CYVR and CYXX were used. These are two stations where the type of precipitation is recorded, and they lie in different directions from the bridge: CYVR is located to the west of the bridge, while CYXX is located to the southeast. A total of 39 isolated weather fronts were identified between November 2013 and March 2017. The first snow report during each event was recorded for all three locations. Table 7 below shows the number of events where there was snow at the airports but not at the bridge, represented in the 'False alert' column. The number of times there was snow at the bridge but either no snow at the airport or snow only at a later time is shown in the 'Misdetection' column. Column D – 'Before or simultaneously at bridge' – shows the number of times the airports reported snow ahead of, or at the same time as, the bridge; this is the number of events in which the weather front could have been moving from the airport toward the bridge. Column E – 'Average lead time' – is the time difference between the snow reports at the airport and at the bridge for all events in Column D. Column F lists the percentage of the total events where the airport would have alerted on or before snow hit the bridge.

Snowed at Bridge (Total events – 39, from Nov 2013 to March 2017)

A                  B             C              D                   E                   F
Airport            False alert   Misdetection   Before or           Average lead        (D/39)*100
                                                simultaneously      time (hr)
                                                at bridge
CYVR               2             14             20                  2.36                51.28%
CYXX               5             4              26                  2.25                66.67%
CYVR AND CYXX      0             15             15                  2.15                38.46%
CYVR OR CYXX       7             3              31                  2.40                79.48%

Table 7 – Spatio-temporal analysis using airports

One important observation in this table is that when both CYVR and CYXX report snow, there are no false alerts. In other words, events where both stations report snow are of high importance, and we are certain that there will be snow on the bridge. There are 15 events in this category, where the two airports reported snow on or before the time snow was reported on the bridge. This contradicts the belief that snow hits the bridge only through arctic outflows, which move from west to east. Given the number of events in this category, it is possible that a storm could move from southwest to northeast, thereby reaching both airports before reaching the bridge. This can be tested by verifying with other stations between the two airports that report the type of precipitation.
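As a sketch of how each of the 39 fronts could be classified for a given airport following the column definitions above (the arguments are the first snow report times, with None meaning no snow was reported at that location):

def classify_front(airport_first_snow, bridge_first_snow):
    # 'False alert'  : snow at the airport but never at the bridge (column B).
    # 'Misdetection' : snow at the bridge with no airport report, or an airport report
    #                  that came later (column C).
    # Otherwise the airport led (or tied) the bridge, and the lead time in hours
    # contributes to columns D and E.
    if bridge_first_snow is None:
        return "false alert"
    if airport_first_snow is None or airport_first_snow > bridge_first_snow:
        return "misdetection"
    return (bridge_first_snow - airport_first_snow).total_seconds() / 3600.0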

4.3. Snowpack processing

The snowpack sensors installed at Johnston Hill and the South Tower give the depth of snow on the ground. Each is an ultrasonic sensor installed pointing at the ground and calibrated such that its value with no obstruction is 0 cm. If there is any kind of precipitation or snow on the ground, it reports the depth of snow. This sensor reports very noisy data, with the snowpack values varying in both positive and negative directions by fractions of a centimeter, both during actual events and during non-events. Such noisy data makes it difficult to see a rise due to snow.

Simple averaging or regression techniques could be used to overcome this issue, but they were late to respond and alert to changes in snowpack. Since the dashboard is a live system and alerting in time is of primary importance, a better method to detect this change was required. The other issue with using raw data to alert was that there was no zeroing factor at the end of an event. There are events when snowfall causes the snowpack values to rise by a few centimeters, followed by a few cold days when the temperature does not rise, so the snow under the sensor does not melt. Subsequently, when there is new snow, a simple threshold will not detect this as a new event, since there is no intelligent zeroing of the sensor's output.

To overcome these issues, a snowpack processing algorithm was developed to track changes in the snowpack data and alert based on an increase in snowpack. To overcome the zeroing problem, the alert decision was designed based on the rise in snowpack (the slope of the signal) in the most recent data. To filter out the high-frequency noise in the signal, a low-pass filter was designed, with the cut-off frequency chosen to be just high enough to retain the necessary information in the signal while filtering out the noise. The low-pass filter cannot work on just the most recent data, which is only 4 data points. Based on experiments run with various time periods, 1 week of snowpack data (672 data points) was chosen as the optimal time frame to perform low-pass filtering and analyze for a rise in snowpack.

With low-pass filters there is the issue of an 'end effect', where the ends of the signal tend to have a much lower value than the actual value, since the signal is being forced to a low frequency. Since this is a real-time signal and our point of interest is the most recent data, which falls at the end of the signal, this end effect had to be compensated. This was achieved by padding the signal with one day's worth of data (96 data points). The padding was done based on the general trend in the previous 4 hours of data. Processing the new signal, which now has 768 data points (672+96), gives a smooth signal with no end effects that is representative of the actual signal in the most recent hour. The most recent data (4 data points) is then checked for accumulation against a set threshold to alert for snow.

The general algorithm for processing snowpack is:

Step 1:

After every 15 minutes data for the last 1 week (672 data points) is selected.


Let the data points be P1, P2, P3, …, PN.

If there are not enough data points, the first point of the series is repeated as many times (filled to the start of the series) as required to obtain 672 points.

Step 2:

The average delta ‘S’ in the last 4 hours (16 data points) is determined.

For data points P1, P2, P3, …, PN:

S = (PN – PN-15)/16

Step 3:

Additional data points (padding) using this delta 'S' are to be added. One day's worth of data points (96 points) is added as padding.

For data points P1, P2, P3, …, PN, the padded points will be

PN+1 = PN + (S/5), PN+2 = PN+1 + (S/5), …, PN+96 = PN+95 + (S/5)

Note: One fifth of 'S' is used to pad the data to avoid a steep increase or decrease in case of sudden fluctuations in the data.

Step 4:

The data (672+96 data points) is now filtered using an ideal low-pass filter, after which the data points that were added as padding are discarded. The result now contains the processed data (672 data points).

Step 5:


The accumulation (using the processed data) over the last hour (last four points) is calculated. Letting Q1, Q2, Q3, …, QN denote the processed data points,

A = QN – QN-4

Step 6:

Steps 1 to 5 are repeated for every new data point (4 times every hour), and if a minimum of 3 of the 4 accumulation values exceed the threshold, an alarm is declared at the top of the hour.
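The steps above can be put together as follows. This is a minimal sketch assuming 15-minute snowpack samples in centimetres; the FFT-based ideal low-pass filter is one possible realization of the filter described, and the cut-off frequency and threshold shown are two of the values tested in Table 8 below.

import numpy as np

WEEK_POINTS = 672          # one week of 15-minute samples
PAD_POINTS = 96            # one day of padding
SAMPLE_PERIOD_S = 900.0    # 15 minutes
CUTOFF_HZ = 7.802e-5       # one of the tested cut-off frequencies
THRESHOLD_CM = 0.07        # one of the tested thresholds

def ideal_lowpass(signal, cutoff_hz):
    # Ideal low-pass filter: zero every FFT component above the cut-off frequency.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=SAMPLE_PERIOD_S)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

def hourly_accumulation(raw):
    # Steps 1-5 for the most recent sample; 'raw' is the list of 15-minute snowpack readings.
    x = np.asarray(raw, dtype=float)
    if len(x) < WEEK_POINTS:                                   # Step 1: back-fill to a full week
        x = np.concatenate([np.full(WEEK_POINTS - len(x), x[0]), x])
    else:
        x = x[-WEEK_POINTS:]
    s = (x[-1] - x[-16]) / 16.0                                # Step 2: average delta of the last 4 hours
    pad = x[-1] + (s / 5.0) * np.arange(1, PAD_POINTS + 1)     # Step 3: pad one day ahead with S/5 steps
    q = ideal_lowpass(np.concatenate([x, pad]), CUTOFF_HZ)     # Step 4: filter, then drop the padding
    q = q[:WEEK_POINTS]
    return q[-1] - q[-5]                                       # Step 5: accumulation over the last hour

def hourly_alarm(last_four_accumulations):
    # Step 6: alarm at the top of the hour if at least 3 of the 4 values exceed the threshold.
    return sum(a > THRESHOLD_CM for a in last_four_accumulations) >= 3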

Figure 36 below shows the alerts generated based on snowpack data for a test period between November 10, 2013 and January 23, 2014.

Figure 36 – Snowpack processing results for events in December 2013

The thresholds are tuned to alert as soon as a significant increase in the snowpack signal is obtained while minimizing the number of false alerts. Figure 37 shows the results of snowpack processing from February 10, 2014 to April 30, 2014. The two alerts before the February 22nd event are noteworthy. These alerts were due to a sudden jump in snowpack which satisfied the processing algorithm's thresholds, but there was no actual snow. This was the only false alert in the winter of 2013-14, which was used as the test data to decide the thresholds.

Figure 37 – Snowpack processing results for events in Feb-Mar 2014

To obtain the optimal threshold and cut-off frequency, a series of experiments was done, and the number of alarms, false alarms (processing algorithm alarms when there is no significant increase in snowpack) and nonevent alarms (alarms detected by snowpack processing when there is an actual increase in snowpack but no snow precipitation) were calculated. The charts below in Figure 38 show the number of alarms, number of nonevent alarms and number of false alarms for the test data between Nov 10, 2013 and Jan 23, 2014.

Figure 38 – Alert comparisons for different Cut-off frequencies (Nov 10, 2013 to Jan 23, 2014)

The charts below in Figure 39 show the number of alarms, number of nonevent alarms and number of false alarms for the test data between Feb 10, 2014 and Apr 30, 2014.

Figure 39 - Alert comparisons for different Cut-off frequencies (Feb 10, 2014 to Apr 31, 2014)

Table 8 summarizes the results of the various test thresholds and cut off frequencies.


Parameters: Cut off freq = 8.524e-05 Hz, Threshold = 0.05cm/15min
  Nov 10 – Jan 23: 90 alarms; prompt alarms during events; 30 non event alarms; 13 false alarms
  Feb 10 – Apr 30: 83 alarms; irregular at the beginning of the event on Feb 22, but better than all other combinations; 37 non event alarms; 21 false alarms

Parameters: Cut off freq = 8.524e-05 Hz, Threshold = 0.07cm/15min
  Nov 10 – Jan 23: 54 alarms; fairly continuous alarms on both events; 18 non event alarms; 9 false alarms
  Feb 10 – Apr 30: 54 alarms; irregular at the beginning of the event on Feb 22; 11 non event alarms; 8 false alarms

Parameters: Cut off freq = 8.524e-05 Hz, Threshold = 0.1cm/15min
  Nov 10 – Jan 23: 31 alarms; late to alarm on Dec 9; 9 non event alarms; 7 false alarms
  Feb 10 – Apr 30: 41 alarms; irregular at the beginning of the event on Feb 22; 3 non event alarms; 5 false alarms

Parameters: Cut off freq = 7.802e-05 Hz, Threshold = 0.05cm/15min
  Nov 10 – Jan 23: 68 alarms; fairly continuous alarms on both events; 23 non event alarms; 22 false alarms
  Feb 10 – Apr 30: 62 alarms; irregular at the beginning of the event on Feb 22; 17 non event alarms; 8 false alarms

Parameters: Cut off freq = 7.802e-05 Hz, Threshold = 0.07cm/15min
  Nov 10 – Jan 23: 33 alarms; late to alarm on Dec 9, but earlier than most cases; 5 non event alarms; 9 false alarms
  Feb 10 – Apr 30: 39 alarms; irregular at the beginning of the event on Feb 22; 5 non event alarms; 3 false alarms

Parameters: Cut off freq = 7.802e-05 Hz, Threshold = 0.1cm/15min
  Nov 10 – Jan 23: 14 alarms; late to alarm on Dec 9, irregular on Dec 19; no non event alarms; 4 false alarms
  Feb 10 – Apr 30: 29 alarms; irregular at the beginning of the event on Feb 22; 1 non event alarm; no false alarms

Parameters: Cut off freq = 7.079e-05 Hz, Threshold = 0.05cm/15min
  Nov 10 – Jan 23: 54 alarms; a little irregular at the beginning of the event on Dec 9; 15 non event alarms; 13 false alarms
  Feb 10 – Apr 30: 57 alarms; irregular at the beginning of the event on Feb 22; 16 non event alarms; 7 false alarms

Parameters: Cut off freq = 7.079e-05 Hz, Threshold = 0.07cm/15min
  Nov 10 – Jan 23: 31 alarms; late to alarm on Dec 9; 6 non event alarms; 9 false alarms
  Feb 10 – Apr 30: 36 alarms; irregular at the beginning of the event on Feb 22; 4 non event alarms; no false alarms

Parameters: Cut off freq = 7.079e-05 Hz, Threshold = 0.1cm/15min
  Nov 10 – Jan 23: 16 alarms; late to alarm on Dec 9; no non event alarms; 4 false alarms
  Feb 10 – Apr 30: 30 alarms; irregular at the beginning of the event on Feb 22; 1 non event alarm; no false alarms

Table 8 – Alert summaries for different Cut-off frequencies

Figure 40 below was generated during the hindcasting process to check how the snowpack processing would have performed during the winter of 2016-17.


Figure 40 – Snowpack processing results for winter 2016-17

Snow density was identified to be a key parameter that factors into the decision-making process. Wet snow has a high coefficient of sticking to the stays, and events with wet snow have much higher impacts than events with dry snow. Dry snow is not sticky and tends to simply bounce off the stays. Since there are no sensors to measure the density of snow directly, it had to be derived from the available information[15]. Snow density can be calculated using the following formula.

\[ \text{Snow Density} \left[\frac{\text{g}}{\text{cm}^3}\right] = \frac{\text{Precip Accum [mm]}}{\text{Snowpack [mm]}} \times 1 \left[\frac{\text{g}}{\text{cm}^3}\right] \]

Snow density is calculated from the snowpack and hourly precipitation values at the Johnston Hill station. This is not exact, but given the sensor limitations it gives a rough estimate of what the density might be. Snow density is calculated as and when the data for snowpack and precipitation become available, starting after the first alarm from snowpack processing. The snowpack at the time of the first alarm of the event is used as the 'zero' to calculate the new snow in the event, which in turn is used to calculate the density at a given time.
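A minimal sketch of this density estimate, zeroed at the first snowpack alarm as described above; the numbers in the usage line are illustrative, not measured values.

def snow_density(precip_accum_mm, snowpack_mm, snowpack_at_first_alarm_mm):
    # Density (g/cm^3) = event precipitation accumulation (mm of water) divided by the
    # new snow depth (mm) accumulated since the first snowpack-processing alarm.
    new_snow_mm = snowpack_mm - snowpack_at_first_alarm_mm
    if new_snow_mm <= 0:
        return None
    return precip_accum_mm / new_snow_mm

# e.g. 12 mm of precipitation over 40 mm of new snowpack -> 0.3 g/cm^3 (wet snow)
density = snow_density(12.0, 65.0, 25.0)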

Figure 41 – Snow density chart for Dec 5, 2016 event

Figure 41 above shows the snow density calculated for the event on Dec 5-6, 2016.

The snow density at the beginning of an event is generally not representative because of the latency in the snowpack sensor. The density can be trusted once the accumulation for the event is above 3 cm. Snow with a density less than about 0.1-0.15 g/cm3 is considered dry snow. From the figure it is evident that this event had a lot of wet snow, as was confirmed by visual observations.

4.4. Snow thickness on stays - Ts calculations


Calculating the thickness of snow accumulation on the stays was necessary for the operations and management team to make decisions about their control actions. The snowpack sensor reports the thickness of snow on the ground, but this is not representative of the amount of snow that has accreted on the stays. The snow on the stays is both gravity driven and wind driven. Moreover, each stay on the bridge may have a different amount of snow on it based on its location, the direction of precipitation, the direction of wind, exposure to solar radiation, etc. On a given stay there could be different amounts of accretion along the length of the stay, with more snow at the top and bottom and less in the middle. At the top the effect of winds could be larger, and at the bottom more snow could accrete due to disturbances from traffic and the gravitational effect. Even at one particular location on a particular stay, the thickness of accretion could differ along different parts of the circumference. For example, if the wind direction was mostly from the eastern side, then the eastern side of the stays tends to have more snow than the other three sides.

There are multiple studies on determining the thickness of snow accretion on cylindrical surfaces. Most of these studies, though, are on calculating snow thickness on high-voltage electrical conductors used for long-distance transmission[24], [25]. These studies cannot be directly applied to the cable stays on the bridge, which are at different angles of orientation with respect to the ground, have a much larger diameter, and are made of high-density polymers as opposed to metallic conductors. Moreover, the formulas used in these studies include many variables like the sticking coefficient of snow with respect to the surface, the density of the snow, the exact direction of the wind, etc. Obtaining all these values in real time and calculating an estimate of the snow accreted on the cables is not an easy process to automate.


Given the limitations of the sensors installed, and making assumptions about factors like snow density and the effect of gravity and wind on accretion, a simple formula was derived to get an estimate of the snow thickness on the cables, Ts.

Mass accretion rate of snow per unit area per hour on a flat vertical surface is given by,

$$ m_A\left[\frac{\mathrm{g}}{\mathrm{m}^2\,\mathrm{hr}}\right] = 10^{3}\,\frac{P\left[\frac{\mathrm{mm}}{\mathrm{hr}}\right]\,V\left[\frac{\mathrm{m}}{\mathrm{s}}\right]}{V_s\left[\frac{\mathrm{m}}{\mathrm{s}}\right]} $$

where mA is the mass accretion of snow per unit area per hour, P is the precipitation rate per hour, Vs is the fall speed of the snowflakes, and V is the windspeed. The value of Vs depends on the size, shape, and wetness of the snowflakes and ranges from about 1 m/s upward; being conservative, we assume the value to be 1 m/s.

The rate of increase in the thickness of the snow layer, Ts, is the mass/area accretion rate divided by the density of the fallen or accreted snow, ρs. Therefore, on a vertical surface,

$$ T_s\left[\frac{\mathrm{mm}}{\mathrm{hr}}\right] = \frac{m_A\left[\frac{\mathrm{g}}{\mathrm{m}^2\,\mathrm{hr}}\right]\cdot 10\,\frac{\mathrm{mm}}{\mathrm{cm}}}{\rho_s\left[\frac{\mathrm{g}}{\mathrm{cm}^3}\right]\cdot 10^{4}\,\frac{\mathrm{cm}^2}{\mathrm{m}^2}} $$

A common assumption for the density of snow on the ground is ρs = 0.1 g/cm3. The density of a layer of wet, windblown snow is likely to be significantly higher; values of ρs = 0.7-0.9 g/cm3 have been reported. Ts was first calculated using 0.1, but that value was too small for wind-blown wet snow and did not match the thicknesses reported on the stays for past events. A value of 0.4 was then assumed and tested for a few events, and the results matched the field observations. Since the snow on the stays is a combination of wind-blown and gravity-driven accretion, a value of 0.4, which lies between 0.1 and 0.7, is a fair assumption to use in the calculations.

With the assumed snow density of 0.4 and making adjustments for the units of the various parameters, the equation above can be simplified to calculate Ts from the precipitation rate and windspeed:

$$ T_s\left[\frac{\mathrm{mm}}{\mathrm{hr}}\right] = 2.5\left[\frac{\mathrm{s}}{\mathrm{m}}\right]\, P\left[\frac{\mathrm{mm}}{\mathrm{hr}}\right]\, V\left[\frac{\mathrm{m}}{\mathrm{s}}\right] $$

Since precipitation and windspeed can be obtained from most weather stations, this serves as a good method to estimate the hourly accumulation of snow on the stays. The cumulative of these hourly values gives the total snow accumulated on the stays for an event, and this cumulative is a good measure of the severity of the event.
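A minimal sketch of the hourly estimate and its event cumulative, assuming precipitation in mm/hr and windspeed in m/s as in the formula above (the function names are hypothetical):

```python
def hourly_ts_mm(precip_mm_per_hr, windspeed_m_per_s):
    """Estimated hourly snow thickness accreted on the stays (mm/hr):
    Ts = 2.5 * P * V, with rho_s = 0.4 g/cm^3 and Vs = 1 m/s folded into
    the 2.5 s/m factor."""
    return 2.5 * precip_mm_per_hr * windspeed_m_per_s


def event_ts_mm(hourly_precip_mm, hourly_wind_m_per_s):
    """Cumulative Ts for an event (mm): the sum of the hourly estimates."""
    return sum(hourly_ts_mm(p, v)
               for p, v in zip(hourly_precip_mm, hourly_wind_m_per_s))
```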

The Ts estimates are calculated in real time and displayed on the dashboard during an event. They were initially calculated at some of the local weather stations and at the Johnston Hill precipitation station. The stations at the South Tower and Midspan locations do not have a sensor that reports precipitation rates. The Ts estimate from Johnston Hill was often less than the snow thickness observed on the stays. The reason for this is the low windspeeds at Johnston Hill compared to the bridge: the windspeed at Johnston Hill was always less than the windspeed at Midspan, and the windspeed at the South Tower was significantly higher still because of the difference in elevation. So, to get an alternative estimate of Ts more representative of the bridge, a proxy voter was added to the dashboard algorithm. This proxy voter is an estimate of Ts obtained by using the precipitation information from Johnston Hill and the windspeed measurements from Midspan. Ts estimates from the proxy station were much larger than those from the other stations, owing to the higher windspeeds. Using this proxy voter was useful in setting off alerts earlier than the other stations, due to the large Ts values. The same concept of creating a proxy voter was extended to the two RWIS stations at Mt. Strachen, where one reports only windspeed and the other only precipitation. This provides a good estimate of Ts at that location.

Figure 42 – TS values comparison for Feb 22-25, 2014 event

Figure 42 shows the TS values for the events between Feb 22-25, 2014. TS is set to zero when the Snow_On_Stays variable is turned off thereby indicating the end of an event.

Ts was initially intended as a measure to detect and differentiate minor and major events. However, due to the nature of the operations and management protocols and their dependence on forecasts for making decisions, the real-time Ts is used as a documenting parameter. It indicates the importance of an event both in terms of the magnitude and the intensity of snow accretion. The concept of calculating an estimated Ts is also extended to forecast data to determine the severity of a future event; however, the accuracy and validity of the result will depend on the accuracy of the data from the forecast models.

4.5. Using forecast data sources in dashboard system

The Dashboard website that was designed to aid the maintenance and operations team during snow events on the bridge is a tool that works only on current weather conditions. Due to the nature of operations and the need to prepare for an expected event, the management team starts making decisions up to five days before an anticipated event.

The management team needs to assemble resources, both manpower and facilities. The availability of these resources and the cost of acquiring their services demand good forecasting services to provide information about an anticipated event.

The management team uses various forecast sources and models, some specific to the bridge location. The aim of the team at UC is to present the data from the various forecasts in an easy, user-friendly manner that helps the management team make decisions.

The management team looks at a time frame of five days for possible events. Two to three days before an event they are interested in knowing the certainty of the event occurring. Twenty-four hours before an event, the team wants to understand its intensity and, if possible, a 4-5-hour window for its starting time. The next few sections focus on the kinds of forecast sources that have been used and on how the forecast data is converted into actionable information.

4.5.1. Data sources and models


Weather forecast data is generally available in two formats, hourly and daily. Hourly forecasts report all weather parameters for every hour of the day. Daily forecasts report parameters as a summary for an entire day, or as separate summaries for daytime and nighttime. Models that report hourly data are generally useful for the short term, 24 to 48 hours out, while daily forecast models are useful for an overall summary. Because of the kind of results required for the needs discussed above, we analyze the use of hourly weather data. Data from daily forecast models can still be collected and displayed on the website, but it is not intended to be used in the automated alerting and reporting processes at this stage.

Table 9 below lists all the forecast sources and models that have been collected and stored in the database for analysis. The Source column is the website from which the data is obtained. Multiple models of data are downloaded from SpotWx, but the data itself is generated by Environment Canada or by the National Oceanic and Atmospheric Administration and the National Weather Service of the United States. The 'Hourly' column gives the duration of forecast for the hourly forecast models, and the 'Daily' column gives the same information for the daily forecast models. The forecast models are updated on a regular basis, and the update interval differs for each model and generating source; it is given in the 'Model update frequency' column. Based on restrictions in collecting the data and limitations in database storage capacity, data is collected at the specific time intervals given in the 'Collection frequency' column.

| Source | Model | Hourly | Daily | Model update frequency | Collection frequency | Data collection locations |
|---|---|---|---|---|---|---|
| Northwest Weathernet | DICast | 36 hours | — | 1 hr | 1 hr | Only for bridge |
| DarkSky Weathernet | Proprietary | 48 hours | 7 days (1 per day) | 1 hr | 1 hr | Bridge and 4 airports |
| Wunderground | Proprietary | 240 hours | 10 days (1 per day) | 1 hr | 6 hr | Bridge and 4 airports |
| Environment Canada | — | 24 hours | 7 days (2 per day) | 6 hr | 1 hr | Only for 4 airports |
| SpotWx | HRDPS Continental (Env Canada) | 48 hrs | — | 6 hr | 6 hr | Only for bridge, CYXX and CYVR |
| SpotWx | RDPS (Env Canada) | 48 hrs | — | 6 hr | 6 hr | |
| SpotWx | GDPS (Env Canada) | 10 days (3 hr intervals) | — | 6 hr | 6 hr | |
| SpotWx | GEPS (Env Canada) | 16 days (6 hr intervals) | — | 6 hr | 6 hr | |
| SpotWx | HRRR | 24 hrs | — | 6 hr | 6 hr | |
| SpotWx | RAP | 24 hrs | — | 1 hr | 1 hr | |
| SpotWx | NAM | 80 hrs (1/hr for 30 hrs, 1 per 3 hr for the next 2 days) | — | 1 hr | 1 hr | |
| SpotWx | GFS | 10 days (3 hr intervals) | — | 6 hr | 6 hr | |

Table 9 – Weather forecast data sources

Data is also collected at the airports in addition to the bridge location, because the airports have a classifier to detect the type of precipitation. The forecasts at these locations can be used to compare the accuracy of forecasts across a wide range of locations and to generally increase the sample space of available forecasts. The four airports mentioned in the table above are CYXX, CYVR, CWMM and CWWK, the four airports discussed in section 4.2. There is also the possibility of including the forecasts for CYVR and CYXX in the forecast algorithm in future years; they could be included in the same way airport data is used in the 'current conditions' dashboard algorithm. At the time of integration, this forecast data can be used to verify the quality of the forecasts and will help in deciding the voting weight of each source, location, and model. The DICast model data obtained from Northwest Weathernet is for two specific locations, the Port Mann Bridge top of South Tower (PMBST) and the Port Mann Bridge Midspan section surface level (PMMSSF). The major difference between these two sets of data is the windspeed, owing to the difference in elevation between the two locations.


North West Weathernet's DICast forecast data has been used by the operations and management team in past years to make decisions. This model has the ability to incorporate the most recent current weather data and use it to adjust the forecast values for a specific location, which makes it far more accurate than the other models for the Port Mann bridge location. The other models have various limitations; the most common problem is the accuracy of the forecast for the bridge location. Their forecasts are generated for a larger geographical grid, typically in the range of 4 to 20 kilometers, and the forecast at the nearest grid point is used as the forecast for the bridge. The Vancouver region is coastal and there are not many weather stations to the west. This leads to poor boundary conditions when the forecast models are initialized, making the general forecast less reliable than for an inland location.

To increase the number of snow-event samples available for a thorough study of the data, Dark Sky Weathernet and Wunderground forecast data were also collected for 11 other locations along the North Pacific coast. These locations were chosen based on the availability of a weather station with a classifier that could report different types of precipitation. The locations chosen are shown in Figure 43 below.


Figure 43 – Additional locations on Pacific coast for forecast analysis

To validate the forecasts and to monitor how they change with time, simple temperature validation studies were performed. For a given date and time, the actual temperature was compared to the different forecast temperatures for that date and time. Some models have predictions for 7-10 days ahead, while others have predictions only for the final 36-48 hours. They are represented in Figure 44 below.


[Chart: temperature (degC) versus model run time, 1/17/2018 to 2/1/2018; series: WU hourly 10-day, DarkSky 48-hr hourly, Env Canada Pitt Meadows hourly 24-hr, NW_Midspan, Midspan Actual, NW_ST, South Tower Actual]

Figure 44 - Forecast comparison for temperature at Jan 29, 2018 23:00 for the bridge location

In Figure 44, the flat lines labeled Midspan Actual and South Tower Actual represent the actual temperature at 23:00 hrs on Jan 29, 2018. The horizontal axis shows the various model run times. The data from Wunderground's hourly model is generated as early as 10 days before the actual time, the longest blue line in the figure. The other forecast data shown are from DarkSky API's 48-hour forecast, Environment Canada's forecast for the Pitt Meadows location, and North West Weathernet's DICast model for both the South Tower and Midspan locations. The figure shows how the temperature prediction for Jan 29, 2018 23:00 hrs changed over time. The Wunderground model initially predicted the temperature to be around 4 °C on Jan 20 but changed it to 3 °C 12 hours after the first prediction. On Jan 22 the prediction went further down to 2 °C for 6 hours, came back up to 3 °C, and subsequently to 4 °C on Jan 23. On Jan 26 the prediction rose to 5 °C, and on Jan 28 it rose to 6 °C and stayed there until Jan 29 23:00 hrs. These fluctuations are large in magnitude because of how far ahead of the actual time these forecasts are issued. Forecasts from the DICast model show less variation, although their predictions are also off by 1 °C; what is noteworthy in that model is a correction in the final 4-6 hours before the actual time. The forecasts from Dark Sky show some oscillation 48-36 hours before the time, stabilize at the 36-hour mark, and adjust again in the final 4-6-hour window. This figure was created for nighttime, when the temperature should normally be settled. For other times of day, see the figures below. Figure 45 shows the variation in the temperature prediction for 15:00 hrs, when the temperature is usually decreasing on a normal day. At such times it is noteworthy that the models under-predict with respect to the actual temperature, and some models like DICast and Dark Sky catch up as the model run time gets closer to the actual time.


[Chart: temperature (degC) versus model run time; same series as Figure 44]

Figure 45 - Forecast comparison for temperature at Jan 29, 2018 15:00 for the bridge location

Figure 46 shows the predictions for 05:00 hrs, and this example is similar to the one in Figure 44: the predictions are overestimated initially, but they correct themselves as they get closer to reality.


[Chart: temperature (degC) versus model run time; same series as Figure 44]

Figure 46 - Forecast comparison for temperature at Jan 30, 2018 05:00 for the bridge location

Comparisons and validations of precipitation probabilities and quantities were also performed. However, there is no easy method to compare the predicted value to the actual quantity, since the predicted value is a function of several parameters and only combining them using custom equations gives results in a comparable form. These equations can be different for each source or model. Therefore, temperature is used as the standard to test the validity of a source.

All the data sources and models generate forecasts for a geographical grid, since generating forecasts for every possible (latitude, longitude) position is not practically or economically feasible. The data collected for the bridge corresponds to the geographically closest vertex on the grid. North West Weathernet is the only source whose DICast model gives data specific to the bridge location. The DICast data actually gives exact, location-based forecasts for five locations: Port Mann top of South Tower, Port Mann Midspan surface level, Johnston Hill, the Alex Fraser bridge, and Rice Mill Road. The Alex Fraser bridge and Rice Mill Road are other locations in the Vancouver area with potential snow accretion/shedding problems. Since this project focuses on the Port Mann bridge, a comparison of the South Tower and Midspan forecasts was done to see the difference between them. This is illustrated in Figure 47 below.

[Chart: forecast generated at 3/22/2018 17:45; air temperature (deg C), windspeed (kmph), and precipitation (mm/hr) versus forecast time; series: PMBST AirTemperature, PMBST WindSpeed, PMBST QPF, PMMSSF AirTemperature, PMMSSF WindSpeed, PMMSSF QPF]

Figure 47 - Forecast comparison between South Tower and Midspan locations

In Figure 47 above, PMBST refers to Port Mann Bridge South Tower and PMMSSF refers to Port Mann Midspan Surface level. The air temperatures, wind speeds, and quantities of precipitation (QPF) from the forecast generated at 03/22/2018 17:45 for both locations are compared in the figure. The forecast quantity of precipitation is exactly the same for both locations, and the two curves lie on top of each other. The temperatures are also fairly similar, with the temperature at the top of the South Tower slightly lower than the prediction for the Midspan section. This is expected, since the top of the South Tower is more than 100 ft higher than the Midspan surface. This difference in elevation is also the reason for the apparent difference in windspeeds evident in the forecasts. Such comparisons were made over multiple forecast run times and the trend remains the same: the only significant differences between the South Tower and Midspan forecasts are that the South Tower temperatures are slightly lower and its windspeeds higher. These observations hold for the actual weather data as well, so both of these forecast locations are used in further processing.

4.5.2. Dashboard rules and thresholds

The forecast data collected from the various sources and models discussed in the previous sections is numerical, and it is not easy for a human to understand and process this information in a consistent, unbiased manner over a long period of time. It is not easy to take a forecast containing ten variables, which may change drastically as the models are refreshed every hour, and convert it into actionable information for the bridge operations and management process. To comprehend multiple forecasts over long periods of time in a dependable manner, a forecast dashboard similar to the current weather dashboard was envisioned.

The forecast dashboard should help the management team understand the duration, intensity, and certainty of an upcoming event. To check for these conditions, dashboard rules similar to those of the current weather dashboard can be used. A rule for 'Cold and Rain' is defined to check for the likelihood of cold temperatures and the probability of rain in the forecast. This rule is important because cold temperatures and rain can lead to the formation of ice or rime on the stay cables, which can be as detrimental as snow. There is also the possibility that the rain in the forecast eventually turns into snow, since the temperatures for 'Cold' conditions are close to the freezing point of water.

A second rule for 'Cold and Snow' is defined to check for the likelihood of cold temperatures and the probability of snow in the forecast. Further, the concept of calculating TS based on windspeed and precipitation rates, discussed in section 4.4, can be extended to forecast data to obtain an estimate of the snow thickness.

The dashboard rules are applied to the DICast model data, since this is the model generated exclusively for the bridge and adjusted using current weather conditions. With this data source, temperatures in the range of +3 to -3 °C are considered 'Cold' in the dashboard rules. This threshold is slightly wider than the one used for the current conditions dashboard, to account for fluctuations and errors in the forecast. This is also a conservative measure, and the threshold can be narrowed later based on the performance of the forecast dashboard. For conditions to be classified as 'Rain', the overall probability of precipitation has to be above 30%, the probability of rain has to be above 30%, and the expected quantity of precipitation has to be greater than 0.01 mm/hr. Similarly, to be classified as 'Snow', the overall probability of precipitation has to be above 30%, the probability of snow has to be above 30%, and the expected quantity of precipitation has to be greater than 0.01 mm/hr. These conditions are checked on the hourly forecast. If the forecast for a particular hour meets the conditions for either 'Cold and Rain' or 'Cold and Snow', then it gets a vote for that rule for that particular hour.

Similar thresholds can be used for other forecast sources as applicable. The thresholds for the probability and quantity of precipitation might have to be adjusted based on the scale of the forecast variables and on the dependability of the forecast as studied over a number of events. Based on preliminary studies using Wunderground's hourly forecast, the temperature thresholds and precipitation probabilities are the same as for the DICast model, while the quantity-of-precipitation threshold is set to 0.001 mm/hr, because the quantities of precipitation in these forecasts are very small compared with what actually occurs. Similarly, based on preliminary studies, the data from Dark Sky Weathernet can be used with the same temperature thresholds as the DICast model; the overall precipitation probability threshold is retained at 30%, while the quantity-of-precipitation threshold is set at 0.001 mm/hr. The probability-of-rain threshold is increased to 40%, but the probability-of-snow threshold is retained at 30%. These numbers were arrived at from the study of the forecast data collected for the 12 locations. There were a total of 113 snow events at those locations between February 8 and March 6, 2018, and the total numbers of detections and false alerts were analyzed for each source to decide the thresholds.
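A minimal sketch of these hourly rule checks, using the DICast thresholds listed above (the forecast-record field names are hypothetical, not the dashboard's actual schema):

```python
COLD_RANGE_C = (-3.0, 3.0)   # 'Cold' band used with the DICast forecast data
MIN_POP_PCT = 30.0           # overall probability of precipitation
MIN_TYPE_PROB_PCT = 30.0     # probability of rain or snow
MIN_QPF_MM_PER_HR = 0.01     # expected quantity of precipitation

def _is_cold(temp_c):
    return COLD_RANGE_C[0] <= temp_c <= COLD_RANGE_C[1]

def cold_and_rain_vote(hour):
    """True when a single forecast hour meets the 'Cold and Rain' rule."""
    return (_is_cold(hour["temp_c"])
            and hour["pop_pct"] > MIN_POP_PCT
            and hour["rain_prob_pct"] > MIN_TYPE_PROB_PCT
            and hour["qpf_mm_per_hr"] > MIN_QPF_MM_PER_HR)

def cold_and_snow_vote(hour):
    """True when a single forecast hour meets the 'Cold and Snow' rule."""
    return (_is_cold(hour["temp_c"])
            and hour["pop_pct"] > MIN_POP_PCT
            and hour["snow_prob_pct"] > MIN_TYPE_PROB_PCT
            and hour["qpf_mm_per_hr"] > MIN_QPF_MM_PER_HR)
```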

4.5.3. Analysis and results

Let us first discuss the results of DICast model data from Northwest Weathernet.

The forecasts for both locations, South Tower and Midspan, were processed using the rules described in section 4.5.2. To discuss the results of this processing, let us look at the first event of winter 2017-18, which occurred on Dec 19, 2017. The plot below shows the results for this particular event.

Figure 48 – Forecast data analysis for Dec 19-20, 2017 event

In Figure 48, the horizontal axis represents the model run time, i.e., the time at which the forecast was generated, while the vertical axis represents the results of each model run. The vertical axis is in units of time, with each point on the horizontal axis producing results for 36 hours along the vertical axis. The plot is shown only for the time period in which the event was predicted in the forecast. The two triangles shaded in light red are times for which the chart is not applicable: the upper-left triangle is empty because no forecast is available there (the model gives only 36 hours of data and this region is beyond 36 hours), and the lower-right triangle is where the forecast meets reality, so there cannot be any forecasts below its hypotenuse. The blue dots show the times for which the 'Cold and Rain' condition was met in the forecast, while the orange dots show the times for which the 'Cold and Snow' rule was met.

The empty white regions are the times for which neither of these conditions was met, meaning no events were anticipated at those times. The two yellow stars in the plot show the times at which the events started in the forecast. This particular event started at 11 AM on December 19, 2017. There was continuous snow until 8 PM, followed by continuous rain until 1:30 AM on December 20, 2017. 'Cold and Rain' conditions were predicted as early as 12 AM on December 18, 2017, and 'Cold and Snow' conditions were predicted as early as 3 AM on December 18, 2017.

To get the duration of an event in the forecast, the total number of hours that meet the condition in a particular forecast run (a particular model run time on the horizontal axis) can be summed. To get the persistence of the forecast for a particular hour, the total number of dots for that hour (a particular date-time on the vertical axis) can be counted.

Figure 49 - Forecast data analysis for Dec 19-20, 2017 event

Figure 49 above shows that an event of 13 hours' duration was being expected as of December 18 and was persistently predicted in the forecast. The forecasts generated closer to the event did not expect snow to occur at a later time, but the forecast for 'Cold and Rain' was very accurate for this event.


To send alerts based on forecast data, the predictions need to be for an event that is long and persistent enough, while still alerting as early as possible. To achieve this without generating a high number of false alerts, an event is declared based on forecasts only if its duration is three hours or longer and it has been persistent in the forecasts for a minimum of three hours. This means that an alert is sent when a 3x3 set of dots occurs in the plot: a 3x3 grid of blue dots triggers a 'Cold and Rain' alert, while a 3x3 grid of orange dots triggers a 'Cold and Snow' alert. Once an alert has been generated for an anticipated event there are no further alerts, and it is left to the operations team to monitor and track changes in the forecast that could move the event earlier or later. If the start time of the event differs by 8 hours or more from the first alert, then a new alert is sent and this is considered a new event.
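One way to sketch the 3x3 detection is to treat the forecast results as a boolean matrix indexed by model run and forecast hour (a hypothetical layout; the production code may organize the data differently):

```python
def has_3x3_block(votes):
    """votes[run][hour] is True when a rule was met for that forecast hour in
    that model run; a 3x3 block of True values (three consecutive model runs
    each predicting the same three consecutive hours) triggers an alert."""
    n_runs = len(votes)
    n_hours = len(votes[0]) if votes else 0
    for r in range(n_runs - 2):
        for h in range(n_hours - 2):
            if all(votes[r + i][h + j] for i in range(3) for j in range(3)):
                return True
    return False
```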

Based on this alert protocol the events of the winter of 2017-18 were analyzed.

Figure 50 contains the forecasts for both the South Tower and Midspan sections, which are almost identical except for the difference in wind predictions. The vertical and horizontal axes represent the same quantities as in the previous figures. The date of every generated alert is shown in the figure. Dates in white boxes represent dates when snow was reported on the bridge, and dates in red boxes are dates when there was no snow on the bridge.


Figure 50 – Events and false alerts during winter 2017-18 in DICast model forecast data processing

The numbers of events, correct detections, false alerts, and missed detections are documented in Table 10. The table comprises three types of events: 'Cold and Snow' events, 'Cold and Rain only' events, and 'False alert' events. The 'Alert at' column for each accretion rule shows the time at which a forecast alert was generated for the event. 'Cold and Rain only' events are events for which there was no snow on the bridge; the 'Cold and Snow' events could contain intermittent rain. The lead time is the time difference between the alert time and the first report of snow (or rain) on the bridge. It is worth noting that there were only 2 missed detections during the entire winter, both of them very minor events, with only one of them being a 'Cold and Snow' event.

| Event type / notes | Event start at | ACR 1: Cold and Rain - Alert at | Lead time (hrs) | ACR 2: Cold and Snow - Alert at | Lead time (hrs) |
|---|---|---|---|---|---|
| Cold and Snow events (snow reported by classifier and temperature in cold range) | 12/19/17 11:00 | 12/18/17 5:00 | 30.00 | 12/18/17 7:00 | 28.00 |
| Very minor snow, missed detection | 12/21/17 17:00 | None | | None | |
| | 12/25/17 5:30 | 12/23/17 19:00 | 34.50 | None | |
| | 12/27/17 9:00 | 12/27/17 0:00 | 9.00 | 12/26/17 2:00 | 31.00 |
| | 12/28/17 3:30 | 12/27/17 0:00 | 15.50 | 12/26/17 18:00 | 33.50 |
| Forecast spot on for Cold and Rain, very minor snow event | 12/29/17 2:30 | 12/28/17 10:00 | 16.50 | None | |
| | 1/25/18 15:00 | 1/24/18 15:00 | 24.00 | 1/24/18 15:00 | 24.00 |
| | 2/13/18 18:00 | 2/12/18 10:00 | 38.00 | 2/12/18 22:00 | 20.00 |
| | 2/17/18 6:30 | 2/15/18 23:00 | 31.50 | 2/15/18 23:00 | 31.50 |
| | 2/17/18 22:00 | 2/16/18 16:00 | 30.00 | 2/16/18 16:00 | 30.00 |
| | 2/23/18 9:00 | 2/22/18 2:00 | 31.00 | 2/22/18 2:00 | 31.00 |
| | 2/21/18 13:00 | 2/20/18 7:00 | 30.00 | 2/20/18 7:00 | 30.00 |
| Very minor event, missed detection | 3/2/18 21:00 | 3/2/18 21:00 | 0.00 | None | |
| Cold and Rain only events | 1/11/18 2:00 | 1/10/18 3:00 | 23.00 | None | |
| | 1/31/18 9:00 | 1/30/18 16:00 | 17.00 | None | |
| | 2/15/18 22:00 | 2/14/18 16:00 | 30.00 | 2/15/18 0:00 | 22.00 |
| | 2/24/18 22:00 | 2/24/18 2:00 | 20.00 | 2/24/18 2:00 | 20.00 |
| | 2/27/18 0:30 | 2/25/18 19:00 | 29.50 | 2/25/18 19:00 | 29.50 |
| | 2/28/18 0:00 | 2/26/18 21:00 | 27.00 | 2/26/18 22:00 | 26.00 |
| | 3/2/18 6:30 | 3/1/18 2:00 | 26.50 | 3/1/18 15:00 | 15.50 |
| False alerts - no precipitation | | None | | 12/24/17 2:00 | |
| | | 1/6/18 4:00 | | None | |
| | | 2/28/18 1:00 | | None | |

Table 10 – Summary of DICast model forecast data processing

There were only three false alerts in the entire winter. Looking at the data for these alerts, the conditions were not persistent for more than 6 hours on any of the three occasions. There were also a number of events with alerts on 'Cold and Snow' that turned out to be 'Cold and Rain only' events. These alerts are still useful for the operations personnel, because rain events can very easily turn into snow events and it is always better to err on the side of caution.

The data from Dark Sky Weathernet's model was processed and the results are displayed in Figure 51.


Figure 51 - Events and false alerts during winter 2017-18 in Dark Sky Weathernet’s forecast data processing

The blue and orange dots in this scatter plot mean the same as in the previous charts: they represent the 'Cold and Rain' and 'Cold and Snow' conditions being met in the forecast, respectively. The grey and yellow dots are the corresponding 'reality' dots; they represent the presence of those conditions on the bridge. So, if the forecast were good at predicting, a series of blue dots should ideally lead to a grey dot, and similarly a series of orange dots should end in a yellow dot. If blue dots do not end in a grey dot, they are false alerts; if there are grey dots without a trail of blue dots, they are missed detections. The same applies to the orange and yellow dots. From this it can be seen that there are plenty of 'Cold and Rain' false alerts; to avoid these, the threshold for 'Rain' can be increased. The missed detection rate for 'Cold and Snow' is also high, so the threshold for 'Snow' could be reduced.


Figure 52 - Events and false alerts during winter 2017-18 in Wunderground’s forecast data processing

Figure 52 shows the results from processing Wunderground's forecast data using the rules discussed in the previous section. Data from this forecast source was available only from Jan 18, 2018, so the first half of this plot should be ignored. The variables in this plot are the same as in the previous figure. Here the results are much better, and there are blue dots leading to grey dots for every major event. The prediction for snow seems to fluctuate a little, so there are not as many orange dots leading to yellow ones, but the threshold for 'Snow' can be lowered to be more conservative.

As mentioned in section 4.5.1, forecasts from Dark Sky Weathernet and Wunderground were collected at 12 other locations on the North Pacific coast to get more data points for pruning the thresholds. The analysis for those locations was done with the rules and thresholds discussed in the previous sections, and the results are shown below.

| Location | No. of snow days | No. of snow events | Darksky (48 hr forecast): Alerts generated | Avg lead time (hrs) | False alerts | Missed detections | Wunderground (240 hr forecast): Alerts generated | Lead time (hrs) | False alerts | Missed detections |
|---|---|---|---|---|---|---|---|---|---|---|
| CYPR | 8 | 6 | 10 | 40.4 | 4 | 0 | 7 | 56.3 | 3 | 2 |
| CYBD | 10 | 8 | 6 | 47.5 | 2 | 4 | 4 | 96.25 | 1 | 5 |
| CYYJ | 5 | 3 | 9 | 35.3 | 6 | 0 | 2 | 66.5 | 2 | 3 |
| CYZP | 2 | 0 | 7 | 43.6 | 7 | 0 | 0 | 0 | 0 | 0 |
| PANC | 18 | 16 | 3 | 36.6 | 0 | 13 | 0 | 0 | 0 | 16 |
| PAJN | 16 | 14 | 12 | 37.5 | 0 | 2 | 7 | 88 | 1 | 8 |
| PACV | 14 | 13 | 11 | 34.6 | 0 | 2 | 3 | 87 | 0 | 10 |
| PAYA | 17 | 12 | 10 | 40.7 | 1 | 3 | 3 | 47 | 0 | 9 |
| PASI | 11 | 11 | 9 | 36.6 | 1 | 3 | 3 | 77.3 | 0 | 8 |
| PAOM | 24 | 20 | 9 | 32.15 | 1 | 15 | 3 | 47 | 0 | 17 |
| PAPH | 16 | 10 | 15 | 34.2 | 7 | 2 | 2 | 83 | 2 | 10 |
| Total | | 113 | 101 | 37.5 | 29 | 44 | 34 | 72.6 | 9 | 88 |

Figure 53 – Summary of forecast analysis at additional locations on Pacific coast

The results show that, of the 113 events that occurred between Feb 8, 2018 and Mar 9, 2018, there were 44 missed detections with Dark Sky Weathernet's data and 29 false alerts. The average lead time for these alerts is 37.5 hours, which gives the operations team ample time to look at other forecast sources and identify a significant event. With Wunderground's data there were only 9 false alerts but 88 missed detections. This makes the Wunderground model a very dependable source for identifying events with certainty: the model may miss many events, but if it does alert, the operations team can be confident that an event is coming. This source also gives an average lead time of 72.6 hours, which is more than 3 days' notice.

Therefore, with Northwest Weathernet's DICast model being very dependable for the bridge, and other sources like Dark Sky and Wunderground contributing their own inputs to the alerts, there is ample information available to design a new forecast-based dashboard to display the results to the user.

4.6. Forecast dashboard system

Building a new intelligent dashboard system drove the need for substantial changes in the database design to store the data from the different stations and forecasts. It also required changes in the processing algorithms to account for the new database design and to cater to the requirements of the new website. The previous dashboard was built in PHP using the Zend framework to handle user interactions with the website. With the newer generation of websites moving away from PHP toward Python and JavaScript to take advantage of the latest plotting and security functions, the new dashboard will be developed on such a platform. The new website will also be modular, so that it can include dashboards and results for two bridges: the Port Mann bridge and the Alex Fraser bridge, which is also in the Metro Vancouver area and faces similar problems.

4.6.1. Forecast bar charts

The dashboard as designed in 2013 was developed to monitor and alert based on current weather data. It serves as a good tool for keeping track of conditions at the bridge during an event and for saving and archiving information for further analysis and research. However, it does not help the management team make decisions 2-3 days before an event. To increase the intelligence of the dashboard, forecasts can be incorporated as discussed in section 4.5.


Once the analysis is made, the next step is to display the data to the users (the operations and management team and researchers) in a user-friendly manner. The dashboard, with its dials indicating the level of accretion and shedding, provides the user with quick information. It conveys the type of precipitation, the duration of the event, the amount and intensity of snow, the possibility of shedding, etc., without requiring prior knowledge about the stations, sensors, or the underlying numbers and data. The dashboard is a tool that pre-digests the information for the user to enable quick decision making. The forecast dashboard therefore also needs to combine the data from the various forecast models and present the information in a simple, user-friendly manner.

Figure 54 - Forecast data analysis for Dec 19-20, 2017 event

The scatter plot in Figure 54, used to display the results of forecast processing, contains a lot of useful information, including the duration of the event, its persistence in the forecasts, exact start times, etc. However, it is not as simple as the dials on the dashboard that display similar results based on current weather data. To achieve a similar at-a-glance view that captures this information, another step of processing is applied to the scatter plots to generate the bar charts shown in Figure 55 below.

Figure 55 – Forecast persistence for Dec 19-20, 2017 event (6-hour windows)

The horizontal axis in this plot shows time in the future. Since the forecasts are available only for 36 hours, it is currently restricted to 36 hours, but it can be extended up to 7 days. The blue bars represent the persistence, in the forecast, of the likelihood of a 'Cold and Rain' event at that time; the orange bars represent the same information for 'Cold and Snow' events. For the convenience of the user, the times on the horizontal axis have been grouped into 6-hour intervals: 0:00-6:00, 6:00-12:00, 12:00-18:00, and 18:00-0:00 hours. This grouping also helps the management team decide on the number of shifts required to manage the bridge during an upcoming event. The chart shown above was generated at 12:00 hrs on 12/18/2017.


Figure 56 - Forecast data analysis persistence calculations for Dec 19-20, 2017 event

The values on the bar chart are the persistence of events in each 6-hour window across the forecasts. To understand this, refer to the figure above. At 12:00 on 12/18/2017, the forecasts available would have been only the section of the plot not shaded by the blue box. The forecast persistence is calculated for 6-hour windows along the vertical axis of this plot. The probability of 'Cold and Rain' for each 6-hour window is the total number of blue dots in those six hours divided by the total number of possible dots. For example, in the period 12/19/2017 6:00-12:00 the total number of possible dots is 93, and only 11 of them are blue dots, where the conditions for 'Cold and Rain' were satisfied. This gives a probability of 11/93, represented by a value of 0.113 in the bar chart. Similar calculations are done for each six-hour window for both the 'Cold and Rain' and 'Cold and Snow' rules and displayed on the bar chart.
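A sketch of this per-window persistence calculation is shown below; the data layout is hypothetical, and the number of 'possible' dots depends on how many model runs had been issued at plot time.

```python
from collections import defaultdict

def six_hour_persistence(dots):
    """dots maps a forecast hour (e.g. 0-35) to a list of booleans, one per
    model run that covered that hour; True means the rule was met.  Returns
    {window_start_hour: met / possible} for each 6-hour window, e.g. 11 met
    dots out of 93 possible dots gives that window's bar value."""
    counts = defaultdict(lambda: [0, 0])  # window -> [met, possible]
    for hour, runs in dots.items():
        window = (hour // 6) * 6
        counts[window][0] += sum(runs)
        counts[window][1] += len(runs)
    return {w: (met / possible if possible else 0.0)
            for w, (met, possible) in counts.items()}
```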


If the forecast fluctuates about the possibility of an event, the values of these bars will be low; if the forecast is consistent, the bars will have a high value, indicating the certainty of the event. To get more information about intensities and exact start times, more involved users can then look at the raw data, the scatter plots, and the processed forecast TS.

Figure 57 - Forecast persistence for Dec 19-20, 2017 event (hourly)

After feedback from the operations and management team, a similar bar chart was created for hourly intervals. These charts are more useful for giving information about exact start times of events, but they are quite noisy and not useful for a non-technical user. Therefore, the bar charts with 6-hour windows will be displayed on the dashboard, while the hourly charts can be used by specific user groups to display data as desired.

Another advantage of such charts is that they can combine multiple forecast sources into a single chart with a single result. The sources can be combined with a weighted voting scheme, similar to the current conditions dashboard: sources that are dependable would carry higher weights than others, to find the optimal balance between false alerts and missed detections.

Figure 58 shows the predicted thickness of snow, TS, included with the persistence chart, which gives information about the intensity of the event as well.

[Chart: generated 12/18/2017 12:00 PM; event persistence in forecast (left axis) and estimated Ts in mm (right axis) versus time; series: Cold and Rain, Cold and Snow, Hourly Ts - South Tower (mm), Ts accum - South Tower (mm)]

Figure 58 - Forecast persistence with estimated TS for Dec 19-20, 2017 event (hourly)

The values for both the hourly TS and its accumulation are shown in Figure 58 above, scaled on the right vertical axis. These measures, however, change with every model run. Therefore, to represent the persistence of the predicted TS across forecasts, the average of all the TS values predicted for a particular time over all previous model runs gives a better estimate. This is also displayed on the dashboard, as shown in Figure 59 below.
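A sketch of how the average and a one-standard-deviation band could be computed across model runs (hypothetical function name and data layout):

```python
import statistics

def ts_confidence_band(predicted_totals_cm):
    """predicted_totals_cm: the total TS (cm) predicted for the same event
    time by each previous model run.  Returns (mean, lower, upper) for a
    one-standard-deviation band around the average prediction."""
    mean = statistics.mean(predicted_totals_cm)
    sd = statistics.stdev(predicted_totals_cm) if len(predicted_totals_cm) > 1 else 0.0
    return mean, mean - sd, mean + sd
```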


[Chart: estimated total Ts (cm) versus model run time, 12/18/17 2:45 to 12/19/17 14:45; series: ST Average, MS Average]

Figure 59 – Change in Average estimated TS for Dec 19-20, 2017 event (hourly)

The green band over the red line represents one standard deviation of the TS predictions about the estimated total TS at the South Tower for that particular time. It serves as a confidence band for the amount of snow expected during an event and shows how the predicted intensity of the event has changed over the different model run times.

4.6.2. Event summary charts

At the end of every event the management team studies the event and analyzes their control actions. The dashboard, with all its data and plotting features, has been an integral part of this post-event analysis. The aim is to find a correlation between the sensor data and the control actions and to justify the decisions made during the event.

After many discussions and events, an ideal way to summarize an event was determined. The summary chart contains information from sensors on the bridge and at Johnston Hill, processed values, manual measurements from the field, and the operations alert levels as defined in the operations team's working protocol. A sample event summary plot is shown below.

Figure 60 – Feb 17-18, 2018 event summary

Figure 60 summarizes the event that occurred on February 17-18, 2018. The horizontal axis of both charts is time, spanning the two days. The first strip chart displays four variables: the temperatures from the South Tower and Johnston Hill on the left vertical axis, and the 'Snow on Cables' variable on the right vertical axis. Snow on Cables is a manual report from field measurements, which can be entered through the new dashboard website. This chart also shows the conditions from the classifier at Johnston Hill, plotted in three categories on the left vertical axis: 'Snow' - 2, 'Rain' - 1, and 'Other/None' - 0. The strip chart at the bottom shows the hourly precipitation rates obtained from the precipitation gauge and the snowpack values from Johnston Hill on the left vertical axis; the calculated snow density (discussed in section 4.3) is plotted on the right vertical axis. The different shades in the background indicate the operations team alert level. At alert level 2 the operations team is monitoring the forecasts and radar information and is ready to be deployed in the field. On the morning of the 17th the operations team was at alert level 2 and remained ready the entire day. There was mild snow for a couple of hours in the morning, but the operations team did not consider it a threat warranting a move to alert level 3. They moved to alert level 3 at 17:00 hours on the 17th in anticipation of snow later in the evening. At alert level 3 the snow and ice technicians and the rope access teams assemble at the bridge and are ready to perform control actions when required. Snow eventually started at 22:00 hours, and the first set of chain drops was performed at 00:30 hours on the 18th, which by their protocol is alert level 4. More chain drops were performed as needed until 3:30 hours, after which they moved down to alert level 3, with the technicians on standby at the bridge, ready to move back to alert level 4 if required. At 13:00 hours on the 18th the operations team declared the bridge clear of adverse weather and moved down to alert level 2. Charts for the other events of winter 2017-18 can be found in the appendix.

The management team found these summary charts very useful: they can map the critical times during an event and see how the operations team responded, which will help them make decisions in the future if they see similar weather patterns. These charts, which are currently made manually, will be generated automatically on the new website, making them useful both during an event for operations personnel and after an event for management and evaluation.


4.6.3. New dashboard components

The new intelligent dashboard that is being developed will cater to two bridges, the Port Mann bridge and the Alex Fraser bridge. Although both are situated in the Metro Vancouver area, they are at geographically different locations with different surroundings, and the control methods and operations also differ between the two bridges. Therefore, the new dashboard needs two sets of current-conditions dials and two forecast bar charts to display the information.

Figure 61 below shows a development version of the new dashboard. The dials are rearranged to accommodate the forecast bar charts. The other elements of the previous dashboard, such as the 48-hour summary and the station health status, are in a collapsible menu that is displayed only on the user's request. Information about TS and the Snow on Stays variable will pop up on the dashboard when applicable. The same concept is extended to the forecast section as well.


Figure 61 – New proposed dashboard layout

The plotting section is immensely improved in the new website, which was one of the primary reasons for building it. Interactive charts with flexible zoom levels enable the user to analyze a wide range of data with a minimal number of clicks. Other sections of the previous dashboard, such as the map section and the camera section, are also reproduced, with upgrades to their latest versions and security requirements.

4.7. Conclusion

The dashboard system was developed in 2013 as a simple tool to help the Port Mann bridge maintenance crew during snow-related emergencies. Initially, a simple monitor was developed as a modified version of the monitor developed for the VGCS, with some changes to account for the differences between icing and snow accretion phenomena. During its first two years of operation, in 2013-15, there were several improvements to the system:

• Calculation of TS to give an estimate of snow thickness.
• Feedback to the system using the Snow_On_Stays variable.
• Addition of RWIS stations to get more voting stations in the algorithm.
• Installation and integration of a leaf wetness sensor, solar radiation sensor, stay thermistors, and a classifier at Johnston Hill.
• Processing raw snowpack data to alert based on snowpack increase and to calculate snow density during an event.
• Improving the intelligence of the algorithm by including forecast data and converting it into actionable results.
• Analysis and hindcasting of various events using both current weather data and forecast data to summarize, document, and understand past events.


The new dashboard website with integrated forecast data will be the intelligent feedback system envisioned at the end of chapter 3. It will add value for the operations team while offering room for growth and expansion based on their requirements.


5. Conclusion and Future Work

Managing cable-stayed bridges in adverse winter weather is a problem of many dimensions and involves complex decision making. Substantial value can be added by formulating the problem in a decision-analytic framework. This is achieved by highlighting relationships that are either not apparent or unknown to users or decision makers. Some of the non-weather-related factors are the cost of closure, the structural health of the bridge, the safety of the public, and the safety of the snow and ice technicians. The weather-related factors include the duration of the event, the intensity of the event, the nature of precipitation, the forecast prediction, etc.

A standard protocol or procedure cannot be defined to manage such emergencies, as there are too many non-linearities associated with this decision making; it also involves the interpretations and risk strategies of different management groups. The aim of the dashboard is simply to provide the necessary information to the decision-making team to aid their process. The dashboard was developed as a simple system to summarize the current conditions on the bridge. The intelligence of the system was then increased in a stepwise fashion:

• Including other weather reporting sources
• Installing localized sensors
• Hindcasting and identifying patterns in events
• Mapping event signatures to control actions
• Quantifying costs for control actions and accidents
• Enabling users to set custom thresholds
• Calculating derived variables
• Incorporating various forecast sources
• Giving probabilities and persistence of events in the future

All the improvements were driven by need, and there is still plenty of room for improvement. The use of radar imagery to track storms before and during an event would be a great addition to future generations of the dashboard. The use of images from road traffic cameras in and around the bridge could also help track a storm and give accurate information about the onset of an event. The forecast dashboard has been tuned for only one source of information, based on a limited dataset. The system developed is, however, modular; in future years more models, for which data is currently being collected, can be tuned and included in the algorithm to make it more robust.

The dashboard monitoring system will always be evolving, becoming more intelligent, as will the complexity of understanding the weather effects on the bridges and the associated decision making.


6. BIBLIOGRAPHY

[1] J. Zhang, K. Das Debendra, and R. Peterson, "Selection of Effective and Efficient Snow Removal and Ice Control Technologies for Cold-Region Bridges," J. Civil, Environ. Archit. Eng., vol. 3, no. 1, pp. 1-14, 2009.

[2] C-SHRP, "Anti-Icing and RWIS Technology in Canada," July 2000.

[3] C. Mirto, A. Abdelaal, D. Nims, T.-M. Ng, V. Hunt, A. Helmicki, C. Ryerson, and K. Jones, "Icing Management on the Veterans' Glass City Skyway Stay Cables," Transp. Res. Rec. J. Transp. Res. Board, vol. 2482, pp. 74-81, Sep. 2015.

[4] J. Kumpf, A. Helmicki, D. Nims, V. Hunt, and S. Agrawal, "Automated Ice Inference and Monitoring on the Veterans Glass City Skyway Bridge," J. Bridg. Eng., vol. 17, pp. 975-978, 2012.

[5] D. Nims, V. Hunt, and A. Helmicki, "User Manual for Veterans' Glass City Skyway Bridge Monitoring System," Feb. 2017.

[6] Federal Aviation Administration, Aeronautical Information Manual, Section 7-1-7, "Categorical Outlooks."

[7] K. F. Jones, "Toledo weather conditions associated with ice accumulation on the Skyway stays," Cold Reg. Res. Eng. Lab., Hanover, NH 03755, Apr. 12, 2010.

[8] S. Agarwal, "Automated ice monitoring system for the Veterans' Glass City Skyway," University of Cincinnati, 2011.

[9] B. Deb, "Continued Weather Monitoring System for the Veterans' Glass City Skyway," M.S. thesis, Department of Electrical Engineering, University of Cincinnati, 2014.

[10] J. A. Dutton, "Opportunities and Priorities in a New Era for Weather and Climate Services," Bull. Am. Meteorol. Soc., vol. 83, no. 9, pp. 1303-1312, Sep. 2002.

[11] "Summary of Natural Hazard Statistics for 2002 in the United States: Summary of 2002 Weather Events, Fatalities, Injuries, and Damage Costs," 2003.

[12] P. A. Pisano and L. Goodwin, "Current Practices in Transportation Management During Inclement Weather," 2003.

[13] E. Regnier, "Doing something about the weather," Omega, vol. 36, no. 1, pp. 22-32, 2008.

[14] J. W. Belknap, "No Title," Dec. 2011.

[15] K. F. Jones, "The density of natural ice accretions related to nondimensional icing parameters," Q. J. R. Meteorol. Soc., vol. 116, no. 492, pp. 477-496, 1990.

[16] K. F. Jones, "Ice accretion in freezing rain," CRREL Rep. 96-2, p. 31, Apr. 1996.

[17] "Thermistors & Thermistor Strings Model 3800 / 3810," Geokon, Inc., 2009.

[18] "LWS-L Dielectric Leaf Wetness Sensor."

[19] "0872 F1 Ice Detector," Campbell Scientific (Canada) Corp., 2011.

[20] "Met One Rain Gage Models 380 and 385."

[21] N. W. John Wood, Edmund Potter, Stephen Nobbs, "Sunshine Sensor type BF-5," Delta-T Devices Ltd., 2010.

[22] D. Nims, "Piece of ice from Jan 23, 2015," 2015.

[23] S. ArbabzadeganHashemi, "Ice Prevention or Removal of the Veteran's Glass City Skyway Cables," The University of Toledo, 2013.

[24] A. M. Abdelaal, "Atmospheric Icing on Bridge Stays," The University of Toledo, 2016.

[25] D. Nims, A. Helmicki, V. Hunt, T. T. Ng, and P. Talaga, "Port Mann Bridge Stay Cable Wet Snow Prevention and Removal: Executive Summary," prepared for Transportation Investment Corporation (TIC) and British Columbia Ministry of Transportation (BCMoT), pp. 1-22, Mar. 2015.

[26] D. Nims, V. Hunt, A. Helmicki, and T. Ng, "Ice Prevention or Removal on the Veteran's Glass City Skyway Cables," no. 134489, 2014.

[27] C. Venkatesh, A. Helmicki, V. Hunt, D. Nims, and A. Abdelaal, "Qualitative Analysis of Ice Accumulation and Shedding on a Cable Stayed Bridge," in 98th Annual Meeting, 2018.

[28] ODOT, "Work Zone User Cost Calculations," 2015.

[29] National Safety Council, "Estimating the costs of unintentional injuries, 2011," pp. 23-25, 2013.

[30] B. C. Tefft, "Motor Vehicle Crashes, Injuries, and Deaths in Relation to Weather Conditions, United States, 2010-2014," AAA Foundation for Traffic Safety, Jan. 2016.

[31] Buckland & Taylor Ltd., "Port Mann Bridge."

[32] S. Cooper, "Engineers were worried about Port Mann Bridge," Sunday Province, pp. 1-2, 2013.

[33] G. Hoekstra, "Port Mann Bridge reopens after 'slush bombs' endanger drivers, damage vehicles," Vancouver Sun, 20-Dec-2012.

[34] G. Hoekstra, "More claims lodged with ICBC for vehicle damage from falling ice and snow on bridges," Vancouver Sun, pp. 1-2, 2012.

[35] "Port Mann Bridge Cable Snow Accretion Mitigation Study Report."

[36] J. McElroy, "Slush bombs damage cars on Alex Fraser and Port Mann bridges," CBC News, 05-Dec-2016.

[37] "'Slush bombs' shatter windshields on Alex Fraser, Port Mann bridges," CTV News Vancouver, Vancouver, 05-Dec-2016.


Appendix A: Precipitation classification

For the purposes of the Port Mann and VGCS dashboards the precipitation reports from Airports and Local weather stations have been grouped into the following six categories.

| Group | LOCW Conditions | Airport Conditions | Airport Events |
|---|---|---|---|
| Nil | SKC, CLR, OVC, FEW, SCT, BKN | Clear, Mostly Cloudy, Overcast, Partly Cloudy, Scattered Clouds | |
| Fog | | Fog, Heavy Fog, Light Fog, Light Freezing Fog, Partial Fog, Patches of Fog, Shallow Fog, Mist | Fog |
| Rain | RA, DZ | Light Drizzle, Light Rain, Light Rain Showers, Rain, Rain Showers, Heavy Rain, Heavy Rain Showers, Light Thunderstorms and Rain, Thunderstorm, Thunderstorms and Rain | Fog-Rain, Rain, Rain-Thunderstorm, Thunderstorm |
| Snow | SN, SG | Light Snow, Light Snow Showers, Light Snow Grains, Snow, Snow Grains | Rain-Snow, Snow |
| Ice | IC, PL | Light Ice Pellets, Light Small Hail Showers, Light Freezing Drizzle, Light Freezing Rain | Hail |
| Unknown | NoData | NoData, Unknown, Widespread Dust, Funnel Cloud | No Data, Tornado |

Table 11 – Airport and LOCW conditions and events categorization

The precipitation reports from the classifier installed at Johnston Hill have been categorized into these six groups based on their WMO codes. This pertains only to the Port Mann dashboard.

Group | Classifier – WMO code equivalent | WMO code
Nil | No significant weather observed | 0
Fog | Mist; Fog; FOG, visibility <= 1 km; Fog or ice fog, in patches; Fog or ice fog, decreasing last hour; Fog or ice fog, no change last hour; Fog or ice fog, increasing last hour; Haze or smoke or dust in suspension, visibility >= 1 km; Haze or smoke or dust in suspension, visibility <= 1 km; Widespread dust in suspension in the air | 10, 20, 30, 31, 32, 33, 34, 4, 5, 6
Rain | Rain (not freezing); Liquid precipitation, slight or moderate; Liquid precipitation, heavy; Freezing precipitation, slight or moderate; Freezing precipitation, heavy; Drizzle, not freezing, slight; Drizzle, not freezing, moderate; Drizzle, not freezing, heavy; Drizzle and rain, light; Drizzle and rain, moderate or heavy; Rain, light; Rain, moderate; Rain, heavy | 23, 43, 44, 47, 48, 50, 51, 52, 53, 57, 58, 61, 62, 63
Snow | Drizzle (not freezing) or snow grains; Snow; Freezing rain or freezing drizzle; Solid precipitation, slight or moderate; Solid precipitation, heavy; Mixed rain and snow, light; Mixed rain and snow, moderate or heavy; Snow, light; Snow, moderate; Snow, heavy; Snow grains | 22, 24, 25, 45, 46, 67, 68, 71, 72, 73, 77
Ice | Drizzle, freezing, slight; Drizzle, freezing, moderate; Drizzle, freezing, heavy; Freezing rain, light; Freezing rain, moderate; Freezing rain, heavy; Ice pellets, light; Ice pellets, moderate; Ice pellets, heavy | 54, 55, 56, 64, 65, 66, 74, 75, 76
Unknown | PRECIPITATION; Precipitation, slight or moderate; Precipitation, heavy | 21, 40, 41, 42

Table 12 – PWS classifier conditions categorization using WMO codes

The different Precipitation Type and Precipitation Intensity codes from the RWIS reports for VGCS are shown in the tables below. The suggested categorization of these reports into Rain, Snow and Other types is given in Table 15.

Pc_Type code Precipitation/Condition

1 Light

2 Unknown


3 None

4 Rain

5 Snow

6 Frozen

-1 Error

Table 13 – VGCS RWIS stations precipitation type code classification (post Nov 2015)

Pc_Intens code Precipitation/Condition

1 Other

2 Unknown

3 None

4 Light

5 Moderate

6 Heavy

-1 Error

Table 14 – VGCS RWIS stations precipitation intensity code classification (post Nov 2015)

Categories New Classification

Rain 4, 6


Snow 5, 6

None/Other 1, 2, 3, -1, Any other code

Table 15 – VGCS RWIS stations precipitation categorization (after Nov 2015)
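
A minimal sketch of applying Table 15 follows. The function name and returned strings are illustrative assumptions; Table 15 itself only specifies which Pc_Type codes fall into which category, and it lists code 6 (Frozen) under both Rain and Snow.

def categorize_rwis_pc_type(pc_type):
    """Reduce a VGCS RWIS Pc_Type code (post Nov 2015) to its Table 15 category."""
    if pc_type == 4:       # Rain
        return "Rain"
    if pc_type == 5:       # Snow
        return "Snow"
    if pc_type == 6:       # Frozen: listed under both Rain and Snow in Table 15
        return "Rain/Snow"
    return "None/Other"    # 1, 2, 3, -1 and any other code

Returning "Rain/Snow" for code 6 is one way to respect the overlap in Table 15; the production dashboard may resolve it differently.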


Appendix B: VGCS significant events charts

The table below lists the 13 significant events at VGCS between 2007 and 2015.

Events | Lane Closures | Traffic Incidents
Dec 2007 | 2 | Yes
Mar 2008 | 2 | Yes
Dec 2008 | 2 | No
Jan 2009 | 1 | No
Feb 20-25, 2011 | Multiple lane closures | -
Feb 26, 2013 | No records of any lane closures | One accident
Mar 16, 2013 | No records of any lane closures | One accident
Mar 25, 2013 | No records of any lane closures | One accident
Dec 9, 2013 | No records of any lane closures | One accident
Apr 3, 2014 | No records of any lane closures | One accident; shedding reported by UT student
Jan 3, 2015 | No records of any lane closures | One accident
Jan 21-25, 2015 | Multiple lane closures; NB completely closed on Jan 23 | Shedding reported in local news
Mar 3, 2015 | No records of any lane closures | One accident

Table 16 – VGCS significant events list

The event summary charts for all of these events are shown below.

Figure 62 – March 3, 2015 event summary


Figure 63 – Jan 21-24, 2015 event summary

Figure 64 – Jan 3 2015, event summary


Figure 65 – Apr 3, 2014 event summary

Figure 66 – Dec 3, 2013 event summary


Figure 67 – Mar 25, 2013 event summary

Figure 68 – Mar 16, 2013 event summary


Figure 69 – Feb 26, 2013 event summary

Figure 70 – Feb 20-24, 2013 event summary


Figure 71 – Jan 3-21, 2009 event summary

Figure 72 – Dec 9-10, 2007 event summary


Figure 73 – Dec 10-11, 2007 event summary


Figure 74 – Mar 27, 2008 event summary


Figure 75 – Mar 28, 2008 event summary

Figure 76 – Dec 16, 2008 event summary


Figure 77 – Dec 19, 2008 event summary

Figure 78 - Dec 23-24, 2008 event summary


Table 17 below lists all the events and shows the accumulation and shedding types for each event.

Table 17 – Accumulation and shedding types for all VGCS significant events


Appendix C: Port Mann 2016-17 event summaries

Figure 79 below shows all of the events at the Port Mann bridge in the winter of 2016-17. The horizontal axis represents time and spans five months. The primary vertical axis shows the Ts for each event in millimeters, and the snowpack from Johnston Hill is plotted in centimeters on the secondary vertical axis.

Figure 79 – Winter 2016-17 events summary chart at Port Mann bridge
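
A chart of this form can be reproduced with a standard two-axis plot. The sketch below is illustrative only: the sample values are arbitrary placeholders and the variable names are assumptions, not the dashboard's plotting code.

import matplotlib.pyplot as plt

# Placeholder event data (not the actual winter 2016-17 record):
# per-event Ts in mm on the primary axis, Johnston Hill snowpack in cm on the secondary axis.
event_dates = ["2016-12-05", "2016-12-09", "2016-12-18", "2017-02-03"]
ts_mm = [21.4, 55.2, 30.0, 46.8]
snowpack_cm = [11.1, 15.3, 22.4, 30.4]

fig, ax_ts = plt.subplots(figsize=(10, 4))
ax_ts.bar(event_dates, ts_mm, color="steelblue", label="Ts (mm)")
ax_ts.set_xlabel("Event date")
ax_ts.set_ylabel("Ts (mm)")

# Secondary vertical axis for the snowpack, mirroring the layout of Figure 79.
ax_snow = ax_ts.twinx()
ax_snow.plot(event_dates, snowpack_cm, color="darkorange", marker="o",
             label="Johnston Hill snowpack (cm)")
ax_snow.set_ylabel("Snowpack (cm)")

fig.legend(loc="upper left")
fig.tight_layout()
plt.show()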

Table 18 summarizes each of these events with information about chain drops, the maximum snowpack values, the stations that would have alerted had the dashboard been running, and related notes.

Event dates | Chain drops | Max snowpack diff (cm) | Max Ts (cm) | Max proxy Ts (cm) | Alert from: airports | Alert from: other bridge | Notes
12/17/2015 | No | 2 | 4.4 | 2.2 | CYXX | PWS | Alert from CYXX & PWS. Minor event.
12/27/2015 | No | 0.6 | 10.1 | 3.9 | CYXX | PWS | Alert from CYXX & PWS. Minor event.
1/4/2016 | No | 0 | 0 | 0 | CYXX, CYVR | PWS | Alert from CYXX, CYVR & PWS. 1 chain dropped much later than the event.
1/5/2016 | No | 0.5 | 0.8 | 0.3 | - | PWS | Alert from PWS. Minor event.
12/5/2016 | Yes | 11.1 | 21.4 | 0.65 | CYVR | - | Alert from CYVR. Quick accumulation and chain drop.
12/9/2016 - 12/12/2016 | Yes | 15.3 | 55.2 | 15.3 | CYVR | - | Alert from CYVR. Accretion - 3 days. Chains dropped - 3 days.
12/18/2016 - 12/19/2016 | Yes | 22.4 | 30 | 3 | CYVR | - | Alert from CYVR. Chains dropped on both days.
12/23/2016 - 12/24/2016 | Yes | 6.3 | 2.2 | 0.3 | CYVR | - | Alert from CYVR. Chains dropped on 12/24.
12/26/2016 - 12/27/2016 | Yes | 17.9 | 35.6 | 4.9 | CYVR | - | Alert from CYVR. Chains dropped on both days.
12/31/2016 - 1/1/2017 | Yes | 18.1 | 12.8 | 0.6 | CYXX, CYVR | PWS | Alert from CYVR, CYXX and PWS. Chains dropped on 12/31.
1/6/2017 | No | 0 | 0 | 0 | CYVR | - | Alert from CYVR. No chains dropped.
2/3/2017 - 2/9/2017 | Yes | 30.4 | 46.8 | 20.3 | CYXX, CYVR | - | Alert from CYVR and CYXX. Accretion - 4 days. Chain dropped - 3 days.
2/23/2017 - 2/24/2017 | No | 0.7 | 2.5 | 0.5 | - | PWS | Alert from PWS. Minor event.
2/25/2017 - 2/26/2017 | No | 2 | 3.7 | 0.9 | CYXX | PWS | Alert from CYXX and PWS. Minor event.
2/27/2017 | No | 0.6 | 0.6 | 0.3 | CYVR | - | Alert from CYVR. Minor event.
2/28/2017 - 3/1/2017 | Yes | 2.1 | 2.8 | 0.9 | CYVR | - | Alert from CYVR. Quick accumulation and chain drop.
3/4/2017 | No | 0.7 | 1.9 | 2.5 | - | PWS | Alert from PWS. Minor event.
3/5/2017 - 3/6/2017 | Yes | 10.1 | 12.3 | 3.7 | CYXX, CYVR | - | Alert from CYXX and CYVR.
3/7/2017 - 3/8/2017 | Yes? | 7.9 | 12 | 2.4 | - | PWS | Alert from PWS. Maybe a chain was dropped?
3/8/2017 - 3/9/2017 | No | 6.7 | 3.5 | 0.4 | - | PWS | Alert from PWS. Minor event.

Table 18 – Winter 2016-17 events summary at Port Mann bridge

Since the dashboard was not in operation between Apr 2015 and Nov 2017, the results in this table are from hindcasting.
