
USING BAYESIAN MODEL AVERAGING TO IMPROVE HURRICANE TRACK

FORECASTS

by

ROBERT HILL SHACKELFORD JR

(Under the Direction of JEFFREY H. DORFMAN)

ABSTRACT

I study whether Bayesian composite forecasting can produce improved hurricane track forecasts. Using data on hurricanes back to 2005, the first step is to find a set of past storms most similar to the storm whose track is to be forecast. The performance of ten hurricane forecasting models on those similar storms is then used to calculate the weight placed on each model. These weights are used to form a Bayesian composite forecast of the track of the hurricane of interest, rather than the more standard simple average currently used by the National Hurricane Center (NHC). On a small selection of recent hurricanes, the Bayesian composite forecast tracks are compared to the individual model forecasts and the NHC official forecasts. In most of our cases, the Bayesian composite forecast is more accurate than the NHC forecast.

INDEX WORDS: Bayesian, Hurricane Forecasting, Bayesian Model Averaging, Tracking

USING BAYESIAN MODEL AVERAGING TO IMPROVE HURRICANE TRACK

FORECASTS

by

ROBERT HILL SHACKELFORD JR

BS, University of , 2014

BSEC, University of Georgia, 2014

A Thesis Submitted to the Graduate Faculty of the University of Georgia in Partial Fulfillment of

the Requirements for the Degree

MASTER OF SCIENCE

ATHENS, GEORGIA

2017

© 2017

Robert Shackelford

All Rights Reserved

USING BAYESIAN MODEL AVERAGING TO IMPROVE HURRICANE TRACK

FORECASTS

by

ROBERT HILL SHACKELFORD JR

Major Professor: Jeffrey H. Dorfman

Committee: Marshall Shepherd

Craig Landry

Electronic Version Approved:

Suzanne Barbour
Dean of the Graduate School
The University of Georgia
August 2017

DEDICATION

This paper is dedicated to my wife Hannah. I love you so much and I am so thankful for how you have fought for me as I have worked on this paper. This paper is also dedicated to my family and friends who have made me the person I am today. This paper is most importantly dedicated to Jesus Christ, who provided me the means to get my Master’s degree. I look forward to how You will continue to use me in the next season.


ACKNOWLEDGEMENTS

I would like to acknowledge Jeff Dorfman for helping me with the coding to make our results a viable alternative for tracking hurricanes. I would also like to acknowledge Theresa Andersen and Marshall Shepherd for assisting in interpreting the NOAA hurricane data so that their archive could be used to make it all possible.



TABLE OF CONTENTS

Page

ACKNOWLEDGEMENTS ...... v

LIST OF TABLES ...... ix

LIST OF FIGURES ...... xi

CHAPTER

1 INTRODUCTION ...... 1

2 BACKGROUND ...... 5

3 DATA ...... 14

4 METHODOLOGY ...... 16

5 RESULTS ...... 22

TABLE 1: FORECAST ACCURACY RESULTS...... 23

TABLE 2: MSE COMPARISONS ...... 24

TABLE 3: MATTHEW FORECAST WITH POWER 2 ...... 25

TABLE 4: MATTHEW FORECAST WITH POWER 10 ...... 25

TABLE 5: MATTHEW FORECAST WITH POWER 50 ...... 25

TABLE 6: NHC FORECAST FOR MATTHEW ...... 26

TABLE 7: OFFICIAL TRACK FOR MATTHEW ...... 26

TABLE 8: EARL FORECAST WITH POWER 2 ...... 27

TABLE 9: EARL FORECAST WITH POWER 10 ...... 27

TABLE 10: EARL FORECAST WITH POWER 50 ...... 27


TABLE 11: NHC FORECAST FOR EARL ...... 28

TABLE 12: OFFICIAL TRACK FOR EARL ...... 28

TABLE 13: DANNY FORECAST WITH POWER 2 ...... 29

TABLE 14: DANNY FORECAST WITH POWER 10 ...... 29

TABLE 15: DANNY FORECAST WITH POWER 50 ...... 29

TABLE 16: NHC FORECAST FOR DANNY ...... 30

TABLE 17: OFFICIAL TRACK FOR DANNY ...... 30

TABLE 18: BERTHA FORECAST WITH POWER 2 ...... 31

TABLE 19: BERTHA FORECAST WITH POWER 10 ...... 31

TABLE 20: BERTHA FORECAST WITH POWER 50 ...... 31

TABLE 21: NHC FORECAST FOR BERTHA ...... 32

TABLE 22: OFFICIAL TRACK FOR BERTHA ...... 32

TABLE 23: HUMBERTO FORECAST WITH POWER 2 ...... 33

TABLE 24: HUMBERTO FORECAST WITH POWER 10 ...... 33

TABLE 25: HUMBERTO FORECAST WITH POWER 50 ...... 33

TABLE 26: NHC FORECAST FOR HUMBERTO ...... 34

TABLE 27: OFFICIAL TRACK FOR HUMBERTO ...... 34

TABLE 28: ACCURACY MEASURES FOR FORECASTS ...... 38

TABLE 29: ISAAC FORECAST ...... 38

TABLE 30: NHC FORECAST FOR ISAAC ...... 38

TABLE 31: OFFICIAL TRACK FOR ISAAC ...... 39

TABLE 32: HERMINE FORECAST ...... 39

TABLE 33: NHC FORECAST FOR HERMINE ...... 40


TABLE 34: OFFICIAL TRACK FOR HERMINE ...... 40

TABLE 35: SANDY FORECAST ...... 40

TABLE 36: NHC FORECAST FOR SANDY ...... 41

TABLE 37: OFFICIAL TRACK FOR SANDY ...... 41

TABLE 38: JOAQUIN FORECAST ...... 41

TABLE 39: NHC FORECAST FOR JOAQUIN ...... 42

TABLE 40: OFFICIAL TRACK FOR JOAQUIN ...... 42

TABLE 41: MATTHEW FORECAST ...... 42

TABLE 42: NHC FORECAST FOR MATTHEW ...... 43

TABLE 43: OFFICIAL TRACK FOR MATTHEW ...... 43

6 CONCLUSION ...... 47

REFERENCES ...... 48

APPENDICES

A CODING FOR SIMILAR STORMS ...... 53

B CODING FOR FINDING FORECAST ...... 54


LIST OF TABLES

Page

Table 1: WEIGHTS FOR MATTHEW POWER 2 ...... 58

Table 2: WEIGHTS FOR MATTHEW POWER 10 ...... 59

Table 3: WEIGHTS FOR MATTHEW POWER 50 ...... 59

Table 4: SSES FOR MATTHEW ...... 60

Table 5: WEIGHTS FOR EARL POWER 2 ...... 61

Table 6: WEIGHTS FOR EARL POWER 10 ...... 61

Table 7: WEIGHTS FOR EARL POWER 50 ...... 62

Table 8: SSES FOR EARL ...... 62

Table 9: WEIGHTS FOR DANNY POWER 2 ...... 63

Table 10: WEIGHTS FOR DANNY POWER 10 ...... 63

Table 11: WEIGHTS FOR DANNY POWER 50 ...... 64

Table 12: SSES FOR DANNY ...... 64

Table 13: WEIGHTS FOR BERTHA POWER 2...... 65

Table 14: WEIGHTS FOR BERTHA POWER 10...... 66

Table 15: WEIGHTS FOR BERTHA POWER 50...... 66

Table 16: SSES FOR BERTHA ...... 67

Table 17: WEIGHTS FOR HUMBERTO POWER 2 ...... 68

Table 18: WEIGHTS FOR HUMBERTO POWER 10 ...... 68

Table 19: WEIGHTS FOR HUMBERTO POWER 50 ...... 69


Table 20: SSES FOR HUMBERTO ...... 69

Table 21: HUMBERTO SIMILAR STORMS ...... 70

Table 22: EARL SIMILAR STORMS ...... 71

Table 23: DANNY SIMILAR STORMS ...... 71

Table 24: MATTHEW SIMILAR STORMS ...... 72

Table 25: BERTHA SIMILAR STORMS ...... 72

Table 26: WEIGHTS FOR ISAAC...... 73

Table 27: SSES FOR ISAAC ...... 73

Table 28: ISAAC SIMILAR STORMS ...... 74

Table 29: WEIGHTS FOR HERMINE ...... 74

Table 30: SSES FOR HERMINE ...... 75

Table 31: HERMINE SIMILAR STORMS ...... 75

Table 32: WEIGHTS FOR SANDY ...... 76

Table 33: SSES FOR SANDY ...... 76

Table 34: SANDY SIMILAR STORMS ...... 77

Table 35: WEIGHTS FOR JOAQUIN ...... 77

Table 36: SSES FOR JOAQUIN ...... 78

Table 37: JOAQUIN SIMILAR STORMS...... 78

Table 38: WEIGHTS FOR MATTHEW ...... 79

Table 39: SSES FOR MATTHEW ...... 79

Table 40: MATTHEW SIMILAR STORMS ...... 80


LIST OF FIGURES

Figure 1: GIS FOR DANNY ...... 81

Figure 2: GIS FOR HUMBERTO ...... 82


CHAPTER 1

INTRODUCTION


Hurricanes have strong economic and social effects across the United States, causing massive damage to infrastructure and property both on and near the coast. Although some hurricanes do most of their damage through a single channel such as wind, flooding, tornadoes, or tidal surge, the most destructive storms are those that cause billions of dollars of damage from all of these categories simultaneously. However, one must remember that there is more to these storms than their initial impact: the remaining rain and wind from a dissipating powerful hurricane can cause additional damage inland, and hurricanes can bring about tornadoes and tidal surges. These surges can have a lasting impact on coastal cities and on economies that rely on tourism.

Hurricane Katrina brought the economies of coastal Mississippi and Louisiana to a standstill, as well as directly increasing American oil prices (Knabb, 2005). President Bush sought $105 billion for repairs along the Gulf Coast, but some studies estimate that Katrina caused $150 billion of damage once the extra harm inland and unreported damage are considered. Furthermore, exports were greatly reduced because of the standstill that the storm brought to the Mississippi River. Katrina was the costliest hurricane in American history, and relief efforts continue to this day. Hurricane Katrina was an outlier when it comes to damage, but even an average hurricane can cause billions of dollars in damage. One such storm was Hurricane


Sandy. Sandy made landfall as a category two hurricane, but then traveled inland as a post-tropical cyclone (Donovan, 2013). Flooding, medical issues, and post-hurricane accidents led to 147 direct deaths and 75 indirect deaths. Direct damage alone cost $50 billion, with damage caused by the weakened system raising the cost even more. Another famous, though not as damaging, hurricane was Isaac. As a category one hurricane, Isaac caused only $2.35 billion in damage and 35 deaths (Berg, 2013). Even so, Hurricane Isaac underscores the seriousness of hurricane prediction, as even a category one hurricane can cause deaths and lead to billions of dollars in destruction.

The National Hurricane Center has been updating its forecasting tools and has continued to improve its forecasts. According to the National Oceanic and Atmospheric Administration, the error in a hurricane's forecast path five days out is 350 miles. The error decreases to 100 miles the day before landfall, and we seek to discover new ways to contribute to the continued reduction of this error.

This paper uses Bayesian composite forecasting, specifically a non-parametric Bayesian approach, to refine existing track forecasts of hurricane movement. As stated above, hurricanes can bring destruction to people and towns via debris, tornadoes, flooding, and storm surge. Lives and livelihoods can be upended as people are forced to evacuate from a hurricane's path. There is great awareness of the damage hurricanes can bring, yet awareness does not cure the difficulty of predicting where these deadly storms will go. The median cost of damage from a hurricane making landfall in the U.S. is $1.8 billion (US Department of Commerce 2017). The median is the more reliable measure of typical hurricane damage: the average is greatly affected by major storms like Hurricane Katrina, which boost the average cost to $9 billion, not including property damage inland.


In order to follow this study, some more background information on hurricanes is helpful.

“There are an average of 10.1 named storms each year, receiving names so that forecasters may more quickly and efficiently discuss the powerful storms - especially if the storm is fast-moving. Of these named storms, an average of 5.9 become hurricanes and 2.5 become major hurricanes. Major hurricanes are defined as category three or greater on the Saffir-Simpson Hurricane Scale. On this scale, category one sustained winds range from 74-95 mph and bring some damage. Category two sustained winds range from 96-110 mph and bring even greater damage. Category three sustained winds range from 111-129 mph and bring devastating damage. Category four sustained winds range from 130-156 mph and bring colossal damage. Finally, a category five hurricane has sustained winds of 157 mph and higher and leads to catastrophic damage” (US Department of Commerce 2017).
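The Saffir-Simpson thresholds quoted above amount to a simple lookup. As an illustration (our own sketch, not part of the thesis), a classifier over those wind-speed bounds looks like this:

```python
def saffir_simpson_category(wind_mph):
    """Classify a sustained wind speed (mph) on the Saffir-Simpson scale.

    Returns 0 for winds below hurricane strength (74 mph)."""
    # Check the strongest category first; each tuple is (category, lower bound).
    for category, lower_bound in ((5, 157), (4, 130), (3, 111), (2, 96), (1, 74)):
        if wind_mph >= lower_bound:
            return category
    return 0  # tropical storm or depression
```

For example, a storm with 100 mph sustained winds falls in the 96-110 mph band and is classified as category two.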

In this paper, we use hurricane data from the NOAA archive to demonstrate proof of concept for a new, more accurate method of hurricane track forecasting. First, for each formed storm, a set of similar past hurricanes is selected. Then, for a set of hurricane prediction models, the past forecast tracks for each of those selected storms are examined to measure how accurate each model was in similar situations. These relative forecasting performances are used to place weights on each model. Finally, we form a Bayesian composite forecast using the track forecasts for the current storm from the different prediction models, where each forecast receives a weight based on that model's past performance in such situations. We test this method on hurricanes as they are transitioning from a tropical storm to a hurricane, and on hurricanes as they approach landfall.
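In symbols (notation ours, not the thesis's: $f_{j,t+h}$ is model $j$'s forecast position at horizon $h$ and $w_j$ its weight), the composite forecast described above is

```latex
\hat{y}_{t+h} \;=\; \sum_{j=1}^{10} w_j \, f_{j,t+h},
\qquad w_j \ge 0, \qquad \sum_{j=1}^{10} w_j = 1,
```

with larger $w_j$ assigned to models that tracked the similar past storms more accurately. The NHC's simple consensus is the special case $w_j = 1/10$ for all $j$.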

This thesis applies the method only to the Eastern American coast, for a small set of past storms. With the concept now shown to have potential, the method can be tested more widely, both fine-tuning the methodology and broadening the regions and storms on which it is tested. If successful upon further testing, the result would be increased effectiveness and efficiency in responding to hurricane activity. The potential cone of trajectory of a hurricane can cause fear for many people who live along the coast, and this study can narrow that cone, reducing fear and allowing citizens to save money. Eckel, El-Gamal, and Wilson (2009) demonstrate how significantly improving the tracking of hurricanes can reduce the negative psychological effects hurricanes cause for the millions of people near the coast.

Improved forecasts not only save lives by allowing fuller evacuation of affected populations, but also save money for people who can avoid a needless evacuation. The outline of this paper is as follows: chapter 2 provides a literature review, chapter 3 discusses the data, chapter 4 outlines the Bayesian approach, chapter 5 presents the results, and chapter 6 concludes.


CHAPTER 2

BACKGROUND

Hurricanes are very significant forces of nature. They can be a major financial and economic obstacle for coastal cities and can lead to billions of dollars in damages. Hurricanes can spur relocations and uproot the lives of millions of people. These storms can affect many cities along the coast and inland, and the moisture from these systems can lead to the development of other storms that cause even more damage. The risk of loss of life and property damage is as great as, if not greater than, that of other natural disasters. Yet tracking these powerful storms remains a difficult challenge, especially when identifying the trajectory of a storm far out at sea. This is due to the unpredictability of the atmosphere, but if we can improve the ability to predict a hurricane's track more than a couple of days out, people will be able to prepare for a more orderly evacuation. It would also give people the option of staying to protect their houses and weather the storm. Our research explores how to implement Bayesian composite forecasting to create a weighted average of a set of hurricane forecast tracks. The weights are based on data measuring the performance of the component forecasting models on past hurricanes, and should lead to a more accurate hurricane forecast. The following papers are the foundation for my research.

In order to compare the Bayesian composite forecast fairly, one must understand the steps the National Hurricane Center takes to make its forecasts.

According to Vickery et al. (2000), “the mathematical simulation of hurricanes is the most accepted approach for estimating wind speeds for the design of structures and assessment of hurricane risk. This approach is used in the majority of the Atlantic and was introduced by Russell (1968, 1971) and was expanded by Batts et al. (1980), Georgiou et al. (1983), Neumann (1991), and Vickery and Twisdale (1995b).” These simulated storms are created with the key conditions for hurricane formation and strengthening, such as central low pressure, direction, speed, and position. A Monte Carlo approach is used for each simulation, and the statistical representation moves along the line of the most favorable conditions in the synthesized storm and atmosphere. This model differs from older models in the component models used, the wind fields, and the filling rates. Also, the uniform hurricane climatology is larger thanks to new technology. The model is updated and re-forecast every 6 hours, and its output is compared to results derived from historical data.

In terms of the empirical model, the number of storms simulated in any year is taken from a negative binomial distribution with a mean of 8.4 storms/year and a standard deviation of 3.56 storms/year. The initial location of each storm is based on the HURDAT database and on the historical starting dates, to ensure that the climate at the start is similar, and the storm is re-forecast every 6 hours. The changes are made using a formula in which the “a”s are constants, Ψ and λ are the storm latitude and longitude, c is the translation speed at time step i, θ is the storm heading at time step i, and ε is a random error term.
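Based on the variable definitions above, the track-change equation of Vickery et al. (2000) takes approximately the following form (a reconstruction from those definitions, not a verbatim copy of their equation):

```latex
\Delta \ln c \;=\; a_1 + a_2\,\psi + a_3\,\lambda + a_4 \ln c_i + a_5\,\theta_i + \varepsilon
```

An analogous equation of the same form updates the heading θ at each 6-hour step.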

To evaluate their predictions of storm tracks, Vickery et al. (2000) simulate storms. This increases their sample size, but it is questionable whether hurricane forecasting performance can be accurately measured with simulations.


Rappaport et al. (2008) discuss each branch of the National Hurricane Center and what it has done to update its technology. They begin with the Hurricane Specialist Unit, which focuses solely on hurricanes. During hurricane season, it works to brief and inform citizens about storms and how they are developing and moving, issuing warnings and updating forecasts every 6 hours. The next branch is the Tropical Analysis and Forecast Branch (TAFB), which focuses on maritime and tropical products and works with NHC products to post daily updates with graphics and text. The final branch is the Technical Support Branch (TSB), which develops the new technology used to further update the forecasts. Some of these improvements include more accurate measurement of the jet stream and the use of aircraft and microwave sensors to better measure oceanic conditions. The NHC uses a “consensus” of runs with slightly altered initial conditions and a type of average across these runs to generate its models. Intensity forecasts have improved somewhat as forecasters adapt to the new technologies. The NHC has also improved its depiction of storm structure by including “wind radii,” and has improved storm surge prediction through the Sea, Lake, and Overland Surges from Hurricanes (SLOSH) program, though the range of error there is still improving. The NHC also employs an average of different models' forecasts, but it does not use the weighted technique that we will be using.

Elsner and Bossack (2001) measure the frequency of landfalling hurricanes using a Bayesian approach to account for uncertain data. The study begins by providing background information to define a hurricane; then, the total span of hurricane activity is divided into three blocks of 50 years. The Bayesian approach is used because it combines precise recent data with older data that have reliable attributes but less accuracy. They also test for stationarity to ensure the data are valid. The tests show that recorded hurricane counts decline as one moves back in time, indicating a bias in the earliest hurricane records; Elsner and Bossack conclude that the data prior to 1900 contain more errors. This paper brings a new idea to the front: discussing the changes between different regions and differences along the coastline.

Elsner also wrote a paper in 2003 with the objective of using climate factors to track hurricanes. He and his team began with the National Hurricane Center's database (HURDAT), which contains hurricane locations from the “best-track” dataset. They also use a k-means cluster analysis, which gives the maximum and final intensity of each hurricane. They discovered that ocean conditions are very significant in both the formation and the movement of the storms.

In Elsner and Jagger's 2004 paper, they expand on their previous Bayesian work. Their objective was to continue their research on hurricane frequency while beginning to account for conditions that affect hurricane development. In this paper, they apply the Bayesian approach specifically to the regression, which allows them to add predictors to the hurricane forecasting. The fact that the parameters are statistically significant shows that the early hurricane data, although prone to error, are still essential. Elsner and Jagger discuss another significant factor in hurricane forecasting: cold tongue indices. The cold tongue is defined as an area of lower sea surface temperatures that inhibits hurricanes from being as powerful or as frequent. These temperature fluctuations are related to the El Niño Southern Oscillation (ENSO), and the fact that ENSO affects the magnitude and location of the cold tongues gives it a direct influence on hurricane forecasting. This paper also introduces the Comprehensive Ocean-Atmosphere Data Set (COADS), which is vital for hurricane forecasting before 1950 because it supplies the sea surface temperatures for the ocean area of interest. Their results coincide with previous papers in that sea surface temperatures are among the most significant factors in forecasting hurricanes. The paper also introduces a regression on tree-ring data to fill in missing cold tongue indices, which helped lead to more accurate hurricane tracking.

Zhao and Chu, in “Bayesian Multiple Changepoint Analysis of Hurricane Activity in the Eastern North Pacific: A Markov Chain Monte Carlo Approach” (2005), aim to more accurately measure hurricane counts. The paper expands on the Markov chain Monte Carlo approach, which Elsner and Jagger showed to be significant in hurricane forecasting. They used the results of previous studies to observe the changes occurring in the oceans and how they affect hurricane count and strength. Their contribution is to add a specific type of Markov chain Monte Carlo (MCMC) method, the Gibbs sampler, to the earlier work of Elsner and Jagger. They also add a Poisson process to implement a three-hypothesis system of changepoints, providing a more thorough version of a hurricane forecasting model. A set of parameters represents the Bayesian inference for each hypothesis in its separate experiment. Their work assumes that hurricane rates are invariant within a given period, with a stationary Poisson process serving as the model for the hurricane count.

In Elsner, Murane, and Jagger (2006), the objective is to more accurately forecast hurricane counts. They used the same ideas as in previous papers, but also included a dummy variable for the period before 1851 to compensate for the predictions becoming less precise; the dummy variable accounts for the chance that a hurricane occurred before 1851 and was never recorded. Elsner, Murane, and Jagger employed a Bayesian approach to combine a small section of precise data with an older, less accurate set of hurricane data, recognizing that older datasets may be unreliable. There are minor differences in the rates between the time periods, but they explain that this does not affect the study. Elsner, Murane, and Jagger also emphasize that sea surface temperatures are very important for the formation of hurricanes. The sea surface temperatures (SSTs) reflect the pressure differences between the higher and lower latitudes, a pattern referred to as the Atlantic Multidecadal Oscillation (AMO), which is good background for readers less versed in hurricane formation. They also discuss the fact that precipitation availability over the oceans helps increase the power of a hurricane. In their paper, they discover that a high SST and a low North Atlantic Oscillation (NAO) lead to a larger number of US hurricanes.

Reich and Fuentes (2007) employ a multivariate semiparametric Bayesian spatial modeling framework for wind fields in hurricanes. Their goal is to use wind fields to measure hurricane growth and the damage it causes, since they view wind fields as a means of measuring wind vectors. This work relates to hurricane forecasting because wind fields contribute to the massive storm surges hurricanes can produce. They implement this model to account for asymmetries in wind fields, and they use a stick-breaking prior to handle an unknown parameter, as in general Bayesian statistics. They found that the semiparametric model helps avoid oversmoothing near the center of a hurricane, allowing it to be more accurately predicted. The study applies the Bayesian model to Hurricane Andrew's data and is able to more accurately predict wind speeds at the most powerful locations in the hurricane itself. Their approach allows for nonstationarity and non-normality, giving researchers more flexibility in the model.

Kang, Lim, and Elsner (2015) update some of their previous work, discussing the idea of track-forecast uncertainty. They describe the old method of using the empirical cumulative density (ECD) function to derive forecast track error distances, discuss some of its problems, and propose a new method for track-forecast uncertainty: Bayesian inference. They conclude that error is decreasing over time and that improved forecasting tools have reduced the error of each forecast. They compare the distributions produced by the original method and by Bayesian updating; the two are similar until one reaches their medians, which shows that the ECD method can rest on a sample far from the true population. Such a sample makes the error at longer lead times greater than under Bayesian updating, so the new method can lead to much better hurricane forecasting. The Bayesian update removes the unknown of the “true” parameters by using a fitted distribution and fixed parameters. These changes allow the Bayesian approach to have a larger probability circle than the original method. The original method also tends to have more error variation because the ranked errors are uncommon and nonlinearly related; these errors can cause volatile shifts in intervals, but the Bayesian approach does not have this unpredictability.

LeSage and Magura (1992) combine individual forecasts using weights that can vary over time, because the accuracy of particular models changes across periods. They begin with the Granger and Ramanathan (1984) method but adopt the Gordon and Smith (1988, 1990) model, which lets the weights be dynamic and change over time. This allows them to handle abrupt changes between time periods for a specific model and reduces the effect of outlying data points for that model.

Min and Zellner (1993) perform experiments to determine whether the forecast combination discussed in LeSage and Magura (1992) is the optimal approach to forecasting. They found that simply combining good and bad forecasts is not always optimal, so they developed a Bayesian forecast selection rule based on a predictive loss criterion, used to determine which forecast, including the combined one, is optimal. They address the questions of whether to use fixed or variable weights, and whether and how to combine forecasts. They show that analysis is needed to discover the optimal weight w*, along with analysis of the forecasts considered in the combinations. They conclude that autoregressive leading indicator models give uncertain results when pooled, but with time-varying parameter models, combining forecasts can reduce the root mean squared error of the forecasts. Bayesian and non-Bayesian approaches did not produce different results, nor did different Bayesian methods differ significantly. Their pooling techniques lead to a decrease in forecasting errors.

Li and Dorfman (1994) discuss a composite forecasting model for state-level employment designed to protect state budget processes by creating a forecast that is robust in nature. The paper shows that the method can create accurate employment forecasts without having to choose a single model specification. The model uses a time-varying weighting scheme to factor in changing economic conditions and to adjust the weights for suboptimal forecasts. The weights are based on the probability that a specific model is correct at the current time, using a logit model to reflect each model's past performance under similar economic conditions. These results may not deliver a large gain in significance or reduction in error, but they create a robustness that allows for a more consistent forecast and steadiness in the presence of outliers.

The fourth paper, by Dorfman (1998), concerns Bayesian forecasting of hog prices. He uses logit-based techniques to create a composite qualitative forecast, applied specifically to hog prices, with the goal of forecasting the direction of hog price changes. If forecasts are available at an individual level, the method requires little information to create the composite. The three individual forecasts used are based on past forecasts of hog prices and the actual price. Logit models are again used to create the model weights, based on the probability that each forecast will be correct. The composite model outperforms its three components: a reduced-form forecasting model of livestock and poultry, a state space time series model, and a set of expert forecasts with no known model.


CHAPTER 3

DATA

All the data for this study are from the NOAA hurricane database (NHC/NOAA, 2017).

This archive contains many different types of hurricane data, organized by ocean. Their website contains many different types of data: best track, storm history, and fixes, forecasts, and descriptive data. Since we are doing our own forecasting, the forecasting data will only be used as a comparison after we get our results. Fix file data is not required because we are looking for the conditions of the storm and of the atmosphere. Descriptive data would not be effective for our project because this is the subjective classification of the storm, and this study requires information about the storm’s power, which these data do not give. Best track is defined as the post storm analysis of the where the storm actually traveled, so it will be used in this project because the best track is used to measure proximity. These data are needed for all the specific hurricanes whose tracks were forecasted. The storm history data are required for each of the hurricanes. We took that hurricane data back to 2005 and compiled it all into one file so that we could search the data for the most similar storms. This data contained many variables, but we felt that latitude, longitude, wind speed, pressure, and hurricane movement speed are the most important to determining the similar storms. The compiled data is simplified down to those five variables and compared to our base time, and as a results, we are able to collect data on the similar storms. MATLAB had some problems with certain pressures and ended up removing them, so we had to do some data cleaning. In order to do this action, we had to use the


corresponding velocities of the storms to estimate the pressures that would be associated with them (NOAA). Using these estimates, we obtained more accurate readings of which storms were similar.
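The thesis does not spell out the imputation procedure, so the following sketch (in Python; the original work used MATLAB) shows one plausible approach: fit a simple least-squares line of pressure on wind speed over the complete records, then use it to fill in the pressures that were dropped. The function names and example values are hypothetical.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y ~ a*x + b, in pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def impute_pressure(wind, a, b):
    """Estimate a missing mean sea level pressure from wind speed."""
    return a * wind + b
```

Since stronger winds generally accompany lower central pressure, the fitted slope should come out negative on real storm records.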

In going through the data, there are a number of factors that could skew the results. The main one is the number of variables that were dropped. There were 42 original variables, ranging from wind intensity, level of hurricane development, and gusts to subregion, maximum seas, and depth of the storm. These variables could lead to a more accurate interpretation of the results, but due to time and computing constraints, they had to be removed. In particular, hurricane direction is a variable that could lead to a more accurate forecast. Other potential errors arise because the data are missing a number of points for certain storms in the history, and human error can also cause inaccuracies in these forecasts. A further limitation comes from variables the NOAA data lack. Even with an array of 42 variables, one that could have improved the results is sea surface temperature around the storm; including it would have made the weighting considerably more precise. Data on upper-level wind speed and direction might also have helped: we could then see the winds steering the storm, and if those winds matched a storm in the history, there is a better chance the storms moved along similar tracks. Excluding so many variables causes the weights to alter the differences in the formula more dramatically.


CHAPTER 4

METHODOLOGY

In Rappaport (2008), there is discussion of a "consensus" of forecasts using similar initial conditions, where a simple average is used to make the final forecast. We implement a similar idea, but our forecast applies a weighted average over forecasts from different models, not different initial conditions. The research has three main steps: find similar storms, find weights, and create our forecast. To find the similar storms, we compiled and simplified the data on hurricanes back to 2005 and compared them to the current storm at the base time. Once we have these similar storms, we determine the sums of squared errors (SSEs) and use them to calculate the weights. Then we apply these weights to each of the chosen models to create our own forecast.

For example, we chose a storm from 2016, using a base point where the latitude was 14.2 degrees north and the longitude was 67.1 degrees west. Once the base location is set, the entire lifecycle of every other hurricane is compared to it. The variables used to select the similar storms were the velocity of the storm, latitude, longitude, mean sea level pressure, and hurricane movement speed. Weights were then assigned to each of these variables. We set up multiple trials with varying weights for each of the five variables, comparing each resulting set of 10 similar storms to the base row of data. The final result is a weight of 4 for latitude, 5 for longitude, 1 for mean sea level pressure, 2 for velocity of the storm, and 3 for hurricane movement speed. The higher the weight, the harder the program will attempt to match that particular variable


up to the corresponding variable of the base storm. Latitude and longitude need to be matched as closely as possible, so they receive the highest weights, and since longitude values are generally larger in the Atlantic, that variable needs the highest weight of all. Storms coming off Cape Verde tend to move around the subtropical ridge and end up heading toward the northwest, and the effect of these currents depends on the hurricane's speed. The last two variables, mean sea level pressure and velocity of the storm, are still important, but they affect the strength of the storm rather than its location. If a storm has exactly the same wind speed and pressure but is nowhere near the location of the "current storm," it is not as relevant to the path of the "current storm."

The next step is to find the ten storms most similar to the current storm at the base time.

To create the list of most similar storms, the data were entered into a single MATLAB file, and the following formula was used to score each candidate storm:

Diffi = 4(Lati - Lat0)^2 + 5(Longi - Long0)^2 + 2(VMaxi - VMax0)^2 + 1(MSLPi - MSLP0)^2 + 3(HurSPDi - HurSPD0)^2

with Diffi = the weighted squared difference in these 5 variables between the evaluated storm "i" and the base storm "0". In this formula, Lat = latitude, Long = longitude, VMax = velocity of the storm, MSLP = mean sea level pressure, and HurSPD = movement speed of the hurricane; each coefficient serves as the weight on its variable. Latitude and longitude carry the highest weights (4 and 5 respectively), so the similar storms are more likely to have latitudes and longitudes close to those at the base time. Hurricane movement speed gets the next highest weight, followed by velocity of the storm and pressure with 2 and 1 respectively. The 10 lowest values


for Diffi identify the 10 storms deemed most similar; these are used to assess model performance and to construct the weights for our forecast. To keep the hurricanes at a comparable stage of development, the 12Z observation before each tropical storm developed into a hurricane served as the point of consistency for our models.
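The similarity search can be sketched compactly. The study's code was written in MATLAB; this Python version is an illustrative reconstruction, with storm records represented as plain dictionaries (a hypothetical layout).

```python
# Weights on the five matching variables, as given in the text.
WEIGHTS = {"lat": 4, "long": 5, "vmax": 2, "mslp": 1, "hurspd": 3}

def diff_score(candidate, base):
    """Diff = sum over the five variables of weight * (candidate - base)^2."""
    return sum(w * (candidate[k] - base[k]) ** 2 for k, w in WEIGHTS.items())

def most_similar(records, base, n=10):
    """Return the n historical records with the smallest Diff scores."""
    return sorted(records, key=lambda r: diff_score(r, base))[:n]
```

With ten similar storms wanted, `most_similar(history, base_row)` returns the candidates with the lowest weighted squared differences from the base row.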

Once the 10 most similar storms are identified, the models are chosen next. These models are used to establish the forecast. The goal is a combination of dynamical and statistical models, in order to obtain a balanced sample of forecasting approaches. Each chosen model must have predicted the track of every hurricane in the set of similar storms. We then examine forecasting performance on these similar storms at 12, 24, 36, 48, and 72 hours from the initial time chosen.

For each model, for the 10 most similar storms, compute:

SSEi = ∑ [(Latforecast - Latactual)^2 + (Longforecast - Longactual)^2], summed over 50 points

with SSEi = the sum of squared errors for model "i". The forecast latitude and longitude for each similar storm are compared to the actual track of that storm, which is available from the BEST track of the hurricane; the BEST track serves as a post-storm evaluation of the storm's path. The SSEs measure the accuracy of each model's forecasts of the similar storms relative to what those storms actually did. We use these SSEs to assign weights to the models' forecasts, and those weights in turn produce the forecast of the current storm. Each SSE is a sum over 50 points because each model has five forecast horizons for each of the 10 similar storms. The SSE thus accumulates the error that each model makes as it forecasts.
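A minimal sketch of the SSE computation, assuming each model's forecasts and the matching BEST-track positions are paired up as (latitude, longitude) tuples (50 pairs per model in this study):

```python
def sse(forecasts, actuals):
    """Sum of squared latitude and longitude errors over all paired points."""
    return sum((flat - alat) ** 2 + (flon - alon) ** 2
               for (flat, flon), (alat, alon) in zip(forecasts, actuals))
```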


Before the weights are calculated, some background is needed. According to Dorfman (1994), Bayesian model averaging starts with prior weights, which serve as the probability that each model is the true or correct model before seeing its ability to forecast. In our case we choose equal prior weights to avoid favoring any of our 10 models; with 10 models, the prior weight is 0.1 for each. In fully Bayesian model averaging, the posterior weights are the prior weights multiplied by each model's marginal likelihood, which measures the average fit of each model over the entire parameter space. However, with the hurricane forecasting models in this study, we do not have enough statistical information on each model to compute a marginal likelihood, so we use a function of the SSEs in its place. This can be viewed as nonparametric Bayesian model averaging. To get the weights, we use the SSEs in the following formulas:

Li = (SSEi/50)^(-2/2)

Li = (SSEi/50)^(-10/2)

Li = (SSEi/50)^(-50/2)

with Li = the term used to calculate the weights; it serves as the approximation of the marginal likelihood. The SSEs are divided by 50 because each model has 5 forecasts for each of the 10 similar storms, and the result is raised to a negative exponent because large values reflect poor forecasting performance. We tried three different exponents, reflecting that observations can be counted by forecast dimension (-2/2), by storm evaluated (-10/2), or by individual time-specific forecast (-50/2). The power of -50/2 is the most aggressive and led to the best results. We then sum the Li values and use them in the following formula:

weighti = Li/LTotal


with weighti representing the weight assigned to model "i". This normalization makes all the weights sum to one and produces the weight assigned to each model.
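Putting the pseudo-likelihood and normalization steps together, a sketch of the weight calculation (again in Python rather than the original MATLAB):

```python
def bma_weights(sses, n_points=50, exponent=-50 / 2):
    """Pseudo-likelihoods L_i = (SSE_i / n_points)**exponent, normalized to sum to 1."""
    likelihoods = [(s / n_points) ** exponent for s in sses]
    total = sum(likelihoods)
    return [li / total for li in likelihoods]
```

With the exponent of -50/2, even a modest SSE advantage drives almost all of the weight onto the best-performing models, which matches the behavior described in the results.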

To finalize our forecast, we take the weights and multiply them by the latitudes and longitudes that each model forecasts. We then sum the weighted forecasts of the ten models to obtain the composite forecast. The objective is to compare these results to the other forecasts. The formula is as follows:

Forecastt = ∑ weighti * forecasti, summed over models 1-10

In this equation, each model's forecast is multiplied by its weight and the products are summed to give the forecast for that particular time. We produce forecasts at the current time 0 for 12, 24, 36, 48, and 72 hours into the future. Models that forecast the similar storms more accurately receive higher weights and make up a larger component of the composite forecast.
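The composite step itself is just a weighted sum; a sketch in Python (the coordinates here are placeholders):

```python
def composite_forecast(weights, model_forecasts):
    """Weighted average of the models' (latitude, longitude) forecasts at one horizon."""
    lat = sum(w * lat_i for w, (lat_i, _) in zip(weights, model_forecasts))
    lon = sum(w * lon_i for w, (_, lon_i) in zip(weights, model_forecasts))
    return lat, lon
```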

These descriptions are all taken from the NOAA official website (US Department of Commerce 2017). The first model used is OCD5, defined as a "combination of CLP5 and Decay-SHIFOR run on operational inputs". It blends these two statistical models to provide an early depiction of track and intensity. Model 2 is CLIP, the 3-day Climatology and Persistence model. According to NOAA's website, this is the "no skill" forecast against which all the other models are compared; it is a multiple regression statistical model built from the storm's current motion and the climatological information around it. Model 3, XTRP, is extrapolation using the past 12-hour motion: the storm's motion over the previous 12 hours is assumed to continue unchanged. Model 4 goes by the acronym

BAMM, which stands for the Beta and Advection Model, Medium. This model moves the storm with a vertically averaged horizontal wind at the hurricane's current position, taken from the aviation model. We chose the medium layer as a middle ground that still captures the majority of the storm's structure. The fifth model, SHIP, is the Statistical Hurricane Intensity Prediction scheme, which takes synoptic-level current and forecast information on temperatures, vertical shear, stability, and more.

The combination of these factors determines the forecast hurricane development. The sixth model, LBAR, is the Limited Area Barotropic Model, a modified version of the GFS with equations that predict the evolution of the storm; the system also uses an average of the wind speeds. The seventh model, NGX, is the NAVGEM/NOGAPS (GFS tracker). This model is dynamical in nature, solving mathematical equations for the evolution of wind speed, temperatures, and other fields. The eighth model, labelled GFDL, is the

Geophysical Fluid Dynamics Laboratory model. Rappaport (2008) notes that this dynamical model has improved significantly and has contributed greatly to the improvement of hurricane forecasting. Our ninth model, AVNI, is the

GFS model interpolated ahead 6 hours. This dynamical model also tracks the GFS, but carries the forecast 6 hours further into the future. The tenth model, MRCL, is the McAdie Radii

CLIPER model, a statistical model used to evaluate hurricane forecasts (US Department of Commerce 2017). We can compare the SSEs of these individual models to see how well each one forecast a particular storm, but ultimately we use the SSEs to determine the weights applied to each model. Because we are forecasting hurricane tracks, only latitude and longitude enter the forecast weights.


CHAPTER 5

RESULTS

We have tested this forecasting tool on ten storms. Keeping the forecast times consistent, we compared our forecast both to what each hurricane actually did and to the forecast the National Hurricane Center released at the same time for the same storm. We set up two different conditions: forecasting a storm as it becomes a hurricane, and forecasting a hurricane roughly 72 hours from landfall. The Bayesian composite forecast outperformed the National Hurricane Center forecast for four out of five storms under each condition, giving it the superior forecasting performance in eight out of ten cases. The storms selected for this initial proof of concept as they became hurricanes were Hurricane Matthew (2016), Hurricane Earl (2016), Hurricane Danny (2015), Hurricane

Bertha (2014), and Hurricane Humberto (2013). Hurricane Matthew was the only storm for which our method did not outperform the NHC, though we came very close to their forecast. The storms tested close to landfall were Hurricane Matthew (2016), Hurricane Isaac (2012),

Hurricane Sandy (2012), Hurricane Joaquin (2015), and Hurricane Hermine (2016). We calculated the mean absolute percentage error (MAPE) and mean squared error (MSE) for each of these storms at 12, 24, 36, 48, and 72 hours into the future to demonstrate that our forecasting method is a viable option for predicting hurricane tracks. Forecasting performance results for the first five hurricanes are in Table 1.


TABLE 1. FORECAST ACCURACY RESULTS

Storm name   Our MAPE(50)   NHC MAPE   NHC MSE   MSE(50)   NHC MAPE - Our MAPE
Matthew      0.2596%        0.2510%    355.8     373.82    -.0086%
Earl         0.1907%        0.2354%    478.4     353.1      .0447%
Bertha       0.3567%        0.5154%    1676.4    758.13     .1587%
Humberto     0.3915%        0.5208%    349.2     285.27     .1293%
Danny        0.2522%        0.3284%    302.8     127.12     .0762%

The MAPE measures how well we did in terms of absolute percentage error from the storm's original position. The only storm for which we did not outperform the NHC forecast was Matthew, and there the gap between our error and theirs was extremely small. For all the other storms, we had a lower absolute percentage error and mean squared error than the National Hurricane Center.

Table 2 shows the mean squared errors and how our forecasts compare with the National Hurricane Center's, providing evidence on how the different exponents in our pseudo-likelihood measure work at choosing good weights for the model averaging forecasts.


TABLE 2. MSE COMPARISONS

Storm Name   NHC MSE   MSE(2)    MSE(10)   MSE(50)   NHC MSE - MSE(50)
Matthew      355.8     803.318   488.23    373.82    -18.02
Earl         478.4     310.23    322.70    353.1     125.3
Bertha       1676.4    868.33    1038.2    758.13    918.27
Humberto     349.2     403.59    276.85    285.27    63.93
Danny        302.8     384.256   257.70    127.12    175.68

The general trend in these results is that as the magnitude of the exponent increases, we come closer to or outperform the results of the NHC. As the power increases, the weight on models that were less accurate on the similar past storms shrinks toward zero, and all that remains are the models that performed best in similar past circumstances.

In the forecast tables that follow, the weights constructed when the SSEs are raised to the power of -50/2 lead to the best results. Our MAPEs were also consistently better with the -50/2 power, so those values are presented in the tables.

The following forecasts are organized by hurricane, beginning with the tables for Hurricane Matthew.


TABLE 3. MATTHEW FORECAST WITH POWER 2

Date                 Latitude(2)   Longitude(2)
9/26/2016 Time 12Z   14.36 N       68.21 W
9/27/2016 Time 0Z    14.34 N       70.61 W
9/27/2016 Time 12Z   14.35 N       72.49 W
9/28/2016 Time 0Z    14.52 N       74.06 W
9/29/2016 Time 0Z    15.96 N       76.60 W

TABLE 4. MATTHEW FORECAST WITH POWER 10

Date                 Latitude(10)   Longitude(10)
9/26/2016 Time 12Z   14.33 N        68.14 W
9/27/2016 Time 0Z    14.18 N        70.38 W
9/27/2016 Time 12Z   14.07 N        72.01 W
9/28/2016 Time 0Z    14.10 N        73.32 W
9/29/2016 Time 0Z    15.52 N        75.16 W

TABLE 5. MATTHEW FORECAST WITH POWER 50

Date                 Latitude(50)   Longitude(50)
9/26/2016 Time 12Z   14.32 N        68.13 W
9/27/2016 Time 0Z    14.12 N        70.23 W
9/27/2016 Time 12Z   13.99 N        71.70 W
9/28/2016 Time 0Z    13.91 N        73.07 W
9/29/2016 Time 0Z    15.24 N        74.34 W


TABLE 6. NHC FORECAST FOR MATTHEW

Date                 NHC Latitude   NHC Longitude
9/26/2016 Time 12Z   14.3 N         68.0 W
9/27/2016 Time 0Z    14.1 N         70.2 W
9/27/2016 Time 12Z   13.9 N         71.8 W
9/28/2016 Time 0Z    13.9 N         72.8 W
9/29/2016 Time 0Z    15.3 N         74.4 W

TABLE 7. OFFICIAL TRACK OF MATTHEW

Date                 Official Latitude   Official Longitude
9/26/2016 Time 12Z   14.1 N              65.5 W
9/27/2016 Time 0Z    14.2 N              68.1 W
9/27/2016 Time 12Z   13.8 N              70.4 W
9/28/2016 Time 0Z    13.4 N              71.9 W
9/29/2016 Time 0Z    13.5 N              73.5 W

These numbers confirm that our forecasts were very close to the forecasts the NHC presented for the same times in Hurricane Matthew's life cycle. We have rounded our numbers to the nearest hundredth for convenience. As the exponent's power increases in the weights formula, the forecast improves until it rivals the National Hurricane Center's forecast.


TABLE 8. EARL FORECAST WITH POWER 2

Date                  Latitude(2)   Longitude(2)
08/03/2016 Time 12Z   16.77 N       86.60 W
08/04/2016 Time 0Z    17.15 N       88.80 W
08/04/2016 Time 12Z   17.68 N       90.92 W
08/05/2016 Time 0Z    18.28 N       92.92 W
08/06/2016 Time 0Z    19.12 N       96.56 W

TABLE 9. EARL FORECAST WITH POWER 10

Date                  Latitude(10)   Longitude(10)
08/03/2016 Time 12Z   16.84 N        86.66 W
08/04/2016 Time 0Z    17.24 N        88.89 W
08/04/2016 Time 12Z   17.77 N        91.00 W
08/05/2016 Time 0Z    18.38 N        92.91 W
08/06/2016 Time 0Z    18.97 N        96.35 W

TABLE 10. EARL FORECAST WITH POWER 50

Date                  Latitude(50)   Longitude(50)
08/03/2016 Time 12Z   16.80 N        86.66 W
08/04/2016 Time 0Z    17.22 N        89.01 W
08/04/2016 Time 12Z   17.70 N        91.15 W
08/05/2016 Time 0Z    18.35 N        93.03 W
08/06/2016 Time 0Z    18.84 N        96.47 W


TABLE 11. NHC FORECAST FOR EARL

Date                  NHC Latitude   NHC Longitude
08/03/2016 Time 12Z   16.8 N         86.6 W
08/04/2016 Time 0Z    17.5 N         89.0 W
08/04/2016 Time 12Z   18.0 N         91.5 W
08/05/2016 Time 0Z    18.5 N         93.5 W
08/06/2016 Time 0Z    19.0 N         97.5 W

TABLE 12. OFFICIAL TRACK OF EARL

Date                  Official Latitude   Official Longitude
08/03/2016 Time 12Z   16.3 N              84.3 W
08/04/2016 Time 0Z    17.3 N              86.9 W
08/04/2016 Time 12Z   17.4 N              89.4 W
08/05/2016 Time 0Z    18.0 N              91.2 W
08/06/2016 Time 0Z    18.9 N              95.6 W

For Hurricane Earl, we outperformed the NHC with both our power-2 and power-50 forecasts. The power-10 forecast was close as well, but the general trend is that the power-50 forecast is the best to use when predicting storms.


TABLE 13. DANNY FORECAST WITH POWER 2

Date                  Latitude(2)   Longitude(2)
08/16/2015 Time 12Z   11.40 N       42.52 W
08/17/2015 Time 0Z    11.98 N       44.24 W
08/17/2015 Time 12Z   12.57 N       46.01 W
08/18/2015 Time 0Z    13.36 N       47.98 W
08/19/2015 Time 0Z    14.80 N       51.95 W

TABLE 14. DANNY FORECAST WITH POWER 10

Date                  Latitude(10)   Longitude(10)
08/16/2015 Time 12Z   11.42 N        42.40 W
08/17/2015 Time 0Z    12.08 N        44.02 W
08/17/2015 Time 12Z   12.73 N        45.68 W
08/18/2015 Time 0Z    13.56 N        47.53 W
08/19/2015 Time 0Z    15.05 N        51.24 W

TABLE 15. DANNY FORECAST WITH POWER 50

Date                  Latitude(50)   Longitude(50)
08/16/2015 Time 12Z   11.40 N        42.10 W
08/17/2015 Time 0Z    12.30 N        43.51 W
08/17/2015 Time 12Z   13.09 N        45.01 W
08/18/2015 Time 0Z    13.99 N        46.71 W
08/19/2015 Time 0Z    15.49 N        49.82 W


TABLE 16. NHC FORECAST FOR DANNY

Date                  NHC Latitude   NHC Longitude
08/16/2015 Time 12Z   11.4 N         42.6 W
08/17/2015 Time 0Z    11.9 N         44.2 W
08/17/2015 Time 12Z   12.4 N         45.8 W
08/18/2015 Time 0Z    13.1 N         47.5 W
08/19/2015 Time 0Z    14.4 N         51.5 W

TABLE 17. OFFICIAL TRACK FOR DANNY

Date                  Official Latitude   Official Longitude
08/16/2015 Time 12Z   11.2 N              40.6 W
08/17/2015 Time 0Z    11.7 N              42.5 W
08/17/2015 Time 12Z   12.3 N              44.4 W
08/18/2015 Time 0Z    13.2 N              46.2 W
08/19/2015 Time 0Z    14.7 N              49.4 W

Our models outperformed the NHC with our power-10 and power-50 forecasts. This pattern tends to hold for the forecasts that do not outperform the National Hurricane Center: because the power-2 forecast spreads weight more evenly across the 10 models, models that are less accurate carry unnecessary weight in the power-2 model.


TABLE 18. BERTHA FORECAST WITH POWER 2

Date                  Latitude(2)   Longitude(2)
08/03/2014 Time 12Z   24.12 N       74.04 W
08/04/2014 Time 0Z    26.91 N       75.14 W
08/04/2014 Time 12Z   29.82 N       74.96 W
08/05/2014 Time 0Z    32.52 N       73.65 W
08/06/2014 Time 0Z    37.15 N       67.63 W

TABLE 19. BERTHA FORECAST WITH POWER 10

Date                  Latitude(10)   Longitude(10)
08/03/2014 Time 12Z   24.06 N        73.82 W
08/04/2014 Time 0Z    26.78 N        74.70 W
08/04/2014 Time 12Z   29.74 N        74.18 W
08/05/2014 Time 0Z    32.48 N        72.57 W
08/06/2014 Time 0Z    37.07 N        65.23 W

TABLE 20. BERTHA FORECAST WITH POWER 50

Date                  Latitude(50)   Longitude(50)
08/03/2014 Time 12Z   23.64 N        74.16 W
08/04/2014 Time 0Z    25.88 N        75.42 W
08/04/2014 Time 12Z   28.33 N        75.29 W
08/05/2014 Time 0Z    30.67 N        74.05 W
08/06/2014 Time 0Z    34.34 N        66.83 W


TABLE 21. NHC FORECAST FOR BERTHA

Date                  NHC Latitude   NHC Longitude
08/03/2014 Time 12Z   24.3 N         73.7 W
08/04/2014 Time 0Z    27.4 N         74.3 W
08/04/2014 Time 12Z   30.7 N         73.6 W
08/05/2014 Time 0Z    33.7 N         71.7 W
08/06/2014 Time 0Z    39.5 N         64.5 W

TABLE 22. OFFICIAL TRACK FOR BERTHA

Date                  Official Latitude   Official Longitude
08/03/2014 Time 12Z   21.4 N              71.6 W
08/04/2014 Time 0Z    24.1 N              73.1 W
08/04/2014 Time 12Z   26.8 N              73.6 W
08/05/2014 Time 0Z    30.5 N              73.4 W
08/06/2014 Time 0Z    36.8 N              69.3 W

Here are our Hurricane Bertha forecasts. All of them improved upon the forecasts the NHC issued for the same period of the hurricane's life cycle.


TABLE 23. HUMBERTO FORECAST WITH POWER 2

Date                  Latitude(2)   Longitude(2)
09/10/2013 Time 12Z   15.09 N       28.56 W
09/11/2013 Time 0Z    16.30 N       29.45 W
09/11/2013 Time 12Z   17.82 N       30.06 W
09/12/2013 Time 0Z    19.97 N       30.66 W
09/13/2013 Time 0Z    23.12 N       32.07 W

TABLE 24. HUMBERTO FORECAST WITH POWER 10

Date                  Latitude(10)   Longitude(10)
09/10/2013 Time 12Z   15.11 N        28.33 W
09/11/2013 Time 0Z    16.49 N        28.96 W
09/11/2013 Time 12Z   18.23 N        29.25 W
09/12/2013 Time 0Z    20.58 N        29.56 W
09/13/2013 Time 0Z    23.96 N        30.48 W

TABLE 25. HUMBERTO FORECAST WITH POWER 50

Date                  Latitude(50)   Longitude(50)
09/10/2013 Time 12Z   14.98 N        28.06 W
09/11/2013 Time 0Z    16.54 N        28.58 W
09/11/2013 Time 12Z   18.59 N        28.72 W
09/12/2013 Time 0Z    20.76 N        28.93 W
09/13/2013 Time 0Z    24.27 N        29.64 W


TABLE 26. NATIONAL HURRICANE CENTER FORECAST FOR HUMBERTO

Date                  NHC Latitude   NHC Longitude
09/10/2013 Time 12Z   15.3 N         28.5 W
09/11/2013 Time 0Z    16.6 N         29.2 W
09/11/2013 Time 12Z   18.5 N         29.5 W
09/12/2013 Time 0Z    20.5 N         29.8 W
09/13/2013 Time 0Z    23.5 N         31.0 W

TABLE 27. OFFICIAL TRACK FOR HURRICANE HUMBERTO

Date                  Official Latitude   Official Longitude
09/10/2013 Time 12Z   14.3 N              27.3 W
09/11/2013 Time 0Z    15.1 N              28.3 W
09/11/2013 Time 12Z   16.3 N              28.9 W
09/12/2013 Time 0Z    18.6 N              28.9 W
09/13/2013 Time 0Z    23.2 N              29.5 W

With our forecasts for Humberto, we outperformed the NHC with our power-10 and power-50 forecasts. Our power-2 forecast was not able to match them, but it was still close to the official forecast for that time in the hurricane's life cycle.

The next step is to convert these improvements into miles. Start by taking the difference between our MAPE and the NHC MAPE, which for Danny is .0762%. To see the improvement in forecasting latitude, we take a latitude from our 12-hour forecast on August 16, 2015 at time 12Z (11.40 N) and compute the following:

Improvement = abs(0.0762 * 11.40 * 69.172 * 0.1) = 6.01 miles


with 6.01 equaling the improvement in latitude, in miles. For longitude, take a longitude from the same time (42.10 W) and use the following formula:

Improvementlong = abs(0.0762 * 42.10 * 66 * 0.1) = 21.18 miles in longitude.

To see the improvement in latitude 72 hours into the future, we take a latitude from August 19, 2015 at time 0Z (15.49 N) and compute the following:

Improvement = abs(0.0762 * 15.49 * 69.172 * 0.1) = 8.16 miles in latitude.

For our 72-hour longitude improvement, take a longitude from the same time (49.82 W) and use the following formula:

Improvementlong = abs(0.0762 * 49.82 * 66 * 0.1) = 25.06 miles in longitude. A welcome feature of these calculations is that the improvement in miles grows as we forecast further from the current time.
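The mile conversions above all follow one pattern; a sketch, with the MAPE difference entered as a percentage-point value (e.g. 0.0762) and miles-per-degree factors of 69.172 for latitude and roughly 65-66 for longitude, as in the text:

```python
def improvement_miles(mape_diff, coordinate, miles_per_degree):
    """Improvement = |MAPE difference * coordinate * miles per degree * 0.1|."""
    return abs(mape_diff * coordinate * miles_per_degree * 0.1)
```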

For Humberto, the difference between the MAPEs is .1293%. To see the improvement in latitude, we take a latitude from our 12-hour forecast on September 10, 2013 at time 12Z (14.98 N) and compute the following:

Improvement = abs(0.1293 * 14.98 * 69.172 * 0.1) = 13.40 miles in latitude

For longitude, take the longitude from the same time (28.06 W) and use the following formula:

Improvementlong = abs(0.1293 * 28.06 * 65 * 0.1) = 23.58 miles in longitude

To see the improvement in our 72-hour forecast, we take a latitude from September 13, 2013 at time 0Z (24.27 N) and compute the following:

Improvement = abs(0.1293 * 24.27 * 69.172 * 0.1) = 21.71 miles in latitude

For longitude, take a longitude from the same time (29.64 W) and use the following formula:

Improvementlong = abs(0.1293 * 29.64 * 65 * 0.1) = 24.91 miles in longitude


For Matthew, the difference between the MAPEs is -.0086%, meaning the NHC was slightly more accurate. To compare with the NHC in latitude, we take a latitude from our 12-hour forecast on September 26, 2016 at time 12Z (14.32 N) and compute the following:

Improvement = abs(0.0086 * 14.32 * 69.172 * 0.1) = .85 miles in latitude

This means we were .85 miles less accurate than the National Hurricane Center.

For longitude, take the longitude from the same time (68.21 W) and use the following formula:

Improvementlong = abs(0.0086 * 68.21 * 65 * 0.1) = 3.81 miles in longitude

This means we were 3.81 miles less accurate than the NHC in longitude.

To compare the 72-hour forecasts, we take a latitude from September 29, 2016 at time 0Z (15.24 N) and compute the following:

Improvement = abs(0.0086 * 15.24 * 69.172 * 0.1) = .91 miles in latitude

This means that at the 72-hour horizon, we were .91 miles less accurate than their forecast.

For longitude, take a longitude from the same time (74.34 W) and use the following formula:

Improvementlong = abs(0.0086 * 74.34 * 65 * 0.1) = 4.16 miles in longitude

This means we were 4.16 miles less accurate in the 72-hour longitude forecast.

For Bertha, the difference between the MAPEs is .1587%. To see the improvement in latitude, we take a latitude from our 12-hour forecast on August 3, 2014 at time 12Z (23.64 N) and compute the following:

Improvement = abs(0.1587 * 23.64 * 69.172 * 0.1) = 25.95 miles in latitude

For longitude, take the longitude from the same time (74.16 W) and use the following formula:

Improvementlong = abs(0.1587 * 74.16 * 65 * 0.1) = 76.50 miles in longitude

To see the improvement in our 72-hour forecast, we take a latitude from August 6, 2014 at time 0Z (34.34 N) and compute the following:


Improvement = abs(0.1587 * 34.34 * 69.172 * 0.1) = 37.70 miles in latitude

For longitude, take a longitude from the same time (66.83 W) and use the following formula:

Improvementlong = abs(0.1587 * 66.83 * 65 * 0.1) = 68.94 miles in longitude

For Earl, the difference between the MAPEs is .0447%. To see the improvement in latitude, we take a latitude from our 12-hour forecast on August 3, 2016 at time 12Z (16.80 N) and compute the following:

Improvement = abs(0.0447 * 16.80 * 69.172 * 0.1) = 5.19 miles in latitude

For longitude, take the longitude from the same time (86.66 W) and use the following formula:

Improvementlong = abs(0.0447 * 86.66 * 65 * 0.1) = 25.18 miles in longitude

To see the improvement in our 72-hour forecast, we take a latitude from August 6, 2016 at time 0Z (18.84 N) and compute the following:

Improvement = abs(0.0447 * 18.84 * 69.172 * 0.1) = 5.83 miles in latitude

For longitude, take a longitude from the same time (96.47 W) and use the following formula:

Improvementlong = abs(0.0447 * 96.47 * 65 * 0.1) = 28.03 miles in longitude

The tables that follow come from the forecasts made 72 hours prior to landfall. We used only the power-50 forecast here because it tended to be more accurate than the forecasts at the other powers. Table 28 shows the MAPEs for our landfall forecasts and the National Hurricane Center's. The trend was that both our 72-hour error and the NHC's were usually the highest, but when the NHC outperformed us in MAPE, it was usually because our overall error was higher. In both of those cases we had the lower error at 72 hours, but not enough to offset our error at the other horizons.


TABLE 28: ACCURACY MEASURES FOR LANDFALL FORECASTS

Storm name   Our MAPE(50)   NHC MAPE   NHC MSE   MSE(50)    NHC MAPE - Our MAPE
Matthew      0.0583%        0.0435%    157       143.9853   -.0148%
Joaquin      0.2766%        0.3196%    6649      4715.649    .043%
Sandy        0.0536%        0.0688%    308       167.6811    .0152%
Isaac        0.1200%        0.1008%    644       799.2004   -.0192%
Hermine      0.0676%        0.1940%    1601      232         .1264%

TABLE 29: ISAAC FORECAST

Forecast Time              Latitude   Longitude
August 28, 2012 Time 12Z   19.35 N    74.26 W
August 29, 2012 Time 0Z    21.43 N    76.86 W
August 29, 2012 Time 12Z   23.02 N    79.29 W
August 30, 2012 Time 0Z    24.52 N    81.38 W
August 31, 2012 Time 0Z    26.88 N    84.38 W

TABLE 30: NHC FORECAST FOR ISAAC

Forecast Time              Latitude   Longitude
August 28, 2012 Time 12Z   19.4 N     74.1 W
August 29, 2012 Time 0Z    21.7 N     76.7 W
August 29, 2012 Time 12Z   23.4 N     79.4 W
August 30, 2012 Time 0Z    24.9 N     81.6 W
August 31, 2012 Time 0Z    27.1 N     84.6 W


TABLE 31: OFFICIAL TRACK FOR ISAAC

Forecast Time              Latitude   Longitude
August 28, 2012 Time 12Z   19.6 N     73.9 W
August 29, 2012 Time 0Z    21.8 N     76.7 W
August 29, 2012 Time 12Z   23.4 N     80.0 W
August 30, 2012 Time 0Z    24.2 N     82.6 W
August 31, 2012 Time 0Z    26.8 N     86.7 W

Here are the forecasts for Isaac compared to the storm's actual track. We were unable to outperform the NHC here, but we were close to their forecast; the MAPEs in Table 28 provide clarity.

Here are the results for Hurricane Hermine (2016):

TABLE 32: HERMINE FORECAST

Forecast Time               Latitude   Longitude
August 30, 2016 Time 12Z    24.3 N     86.7 W
August 31, 2016 Time 0Z     24.5 N     87.6 W
August 31, 2016 Time 12Z    25.1 N     87.9 W
September 1, 2016 Time 0Z   26.5 N     87.1 W
September 2, 2016 Time 0Z   29.6 N     84.1 W


TABLE 33: NHC FORECAST FOR HERMINE

Forecast Time               Latitude   Longitude
August 30, 2016 Time 12Z    23.9 N     86.8 W
August 31, 2016 Time 0Z     24.1 N     87.8 W
August 31, 2016 Time 12Z    24.8 N     87.9 W
September 1, 2016 Time 0Z   25.5 N     87.2 W
September 2, 2016 Time 0Z   29.0 N     84.8 W

TABLE 34: OFFICIAL TRACK FOR HERMINE

Forecast Time               Latitude   Longitude
August 30, 2016 Time 12Z    23.9 N     86.8 W
August 31, 2016 Time 0Z     24.1 N     87.8 W
August 31, 2016 Time 12Z    24.8 N     87.9 W
September 1, 2016 Time 0Z   25.5 N     87.2 W
September 2, 2016 Time 0Z   29.0 N     84.8 W

We were able to outperform the NHC forecast for Hurricane Hermine as she approached landfall.

TABLE 35: SANDY FORECAST

Forecast Time             Latitude    Longitude
October 26, 2012, 12Z     26.48 N     76.91 W
October 27, 2012, 0Z      27.39 N     77.47 W
October 27, 2012, 12Z     28.68 N     76.70 W
October 28, 2012, 0Z      30.39 N     75.28 W
October 29, 2012, 0Z      33.72 N     72.04 W


TABLE 36: NHC FORECAST FOR SANDY

Forecast Time             Latitude    Longitude
October 26, 2012, 12Z     26.6 N      76.8 W
October 27, 2012, 0Z      27.6 N      77.4 W
October 27, 2012, 12Z     28.9 N      76.9 W
October 28, 2012, 0Z      30.4 N      75.4 W
October 29, 2012, 0Z      34.0 N      72.5 W

TABLE 37: OFFICIAL TRACK FOR SANDY

Forecast Time             Latitude    Longitude
October 26, 2012, 12Z     26.4 N      76.9 W
October 27, 2012, 0Z      27.5 N      77.1 W
October 27, 2012, 12Z     28.8 N      76.5 W
October 28, 2012, 0Z      30.5 N      74.7 W
October 29, 2012, 0Z      33.9 N      71.0 W

We were able to outperform the NHC forecast for Hurricane Sandy as it approached the U.S.

TABLE 38: JOAQUIN FORECAST

Forecast Time             Latitude    Longitude
September 28, 2015, 12Z   27.60 N     69.80 W
September 29, 2015, 0Z    27.80 N     70.40 W
September 29, 2015, 12Z   28.20 N     71.20 W
September 30, 2015, 0Z    28.30 N     72.00 W
October 1, 2015, 0Z       29.81 N     72.69 W


TABLE 39: NHC FORECAST FOR JOAQUIN

Forecast Time             Latitude    Longitude
September 28, 2015, 12Z   27.7 N      69.4 W
September 29, 2015, 0Z    27.9 N      69.9 W
September 29, 2015, 12Z   28.2 N      70.4 W
September 30, 2015, 0Z    28.8 N      70.8 W
October 1, 2015, 0Z       31.0 N      71.6 W

TABLE 40: OFFICIAL TRACK FOR JOAQUIN

Forecast Time             Latitude    Longitude
September 28, 2015, 12Z   27.7 N      69.7 W
September 29, 2015, 0Z    26.9 N      70.1 W
September 29, 2015, 12Z   26.2 N      70.5 W
September 30, 2015, 0Z    25.8 N      71.3 W
October 1, 2015, 0Z       23.9 N      72.9 W

We were able to outperform the NHC for the prediction of Hurricane Joaquin.

TABLE 41: MATTHEW FORECAST

Forecast Time             Latitude    Longitude
October 6, 2016, 0Z       23.20 N     76.29 W
October 6, 2016, 12Z      24.51 N     77.40 W
October 7, 2016, 0Z       26.50 N     78.90 W
October 7, 2016, 12Z      28.20 N     80.20 W
October 8, 2016, 12Z      31.89 N     80.49 W


TABLE 42: NHC FORECAST FOR MATTHEW

Forecast Time             Latitude    Longitude
October 6, 2016, 0Z       23.1 N      76.0 W
October 6, 2016, 12Z      24.8 N      77.5 W
October 7, 2016, 0Z       26.6 N      79.0 W
October 7, 2016, 12Z      28.2 N      80.1 W
October 8, 2016, 12Z      31.5 N      80.0 W

TABLE 43: OFFICIAL TRACK FOR MATTHEW

Forecast Time             Latitude    Longitude
October 6, 2016, 0Z       23.0 N      76.0 W
October 6, 2016, 12Z      24.7 N      77.5 W
October 7, 2016, 0Z       26.7 N      79.0 W
October 7, 2016, 12Z      28.9 N      80.3 W
October 8, 2016, 12Z      32.5 N      79.9 W

We were able to outperform the NHC with our forecast for Hurricane Matthew as it approached landfall.

For Joaquin, the difference between the MAPEs is .043%. To see the improvement we have in latitude, we take the latitude from our 12-hour forecast on September 28, 2015 at 12Z (27.6 N) and compute:

Improvement_lat = abs(0.043 * 27.6 * 69.172 * 0.1) = 8.21 miles in latitude

For longitude, take the longitude from the same time (69.8 W):

Improvement_long = abs(0.043 * 69.8 * 65 * 0.1) = 19.51 miles in longitude

To see the improvement in our 72-hour forecast, we take the latitude from October 1, 2015 at 0Z (29.81 N):

Improvement_lat = abs(0.043 * 29.81 * 69.172 * 0.1) = 8.87 miles in latitude

For longitude, take the longitude from the same time (72.69 W):

Improvement_long = abs(0.043 * 72.69 * 65 * 0.1) = 20.32 miles in longitude
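Every conversion in this chapter follows the same formula: the MAPE difference, times the coordinate, times miles per degree (69.172 for latitude, 65 for longitude), times 0.1. A small Python sketch of that arithmetic (not code from the thesis):

```python
MILES_PER_DEG_LAT = 69.172  # miles per degree of latitude, as used in the text
MILES_PER_DEG_LON = 65.0    # approximate miles per degree of longitude at these latitudes

def improvement_miles(mape_diff, coordinate, miles_per_degree):
    """Convert a MAPE difference at a given coordinate into miles."""
    return abs(mape_diff * coordinate * miles_per_degree * 0.1)

# Joaquin, 12-hour forecast (September 28, 2015, 12Z): 27.6 N, 69.8 W
lat_gain = improvement_miles(0.043, 27.6, MILES_PER_DEG_LAT)  # about 8.21 miles
lon_gain = improvement_miles(0.043, 69.8, MILES_PER_DEG_LON)  # about 19.51 miles
```

The same helper with -0.0192 for Isaac yields the miles by which the NHC forecast was closer, since the absolute value discards the sign.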

For Matthew, the difference between the MAPEs is .0148%. To see the improvement we have in latitude, we take the latitude from our initial forecast point on October 6, 2016 at 0Z (23.20 N) and compute:

Improvement_lat = abs(0.0148 * 23.20 * 69.172 * 0.1) = 2.38 miles in latitude

For longitude, take the longitude from the same time (76.29 W):

Improvement_long = abs(0.0148 * 76.29 * 65 * 0.1) = 7.34 miles in longitude

To see the improvement in our 72-hour forecast, we take the latitude from October 8, 2016 at 12Z (31.89 N):

Improvement_lat = abs(0.0148 * 31.89 * 69.172 * 0.1) = 3.26 miles in latitude

For longitude, take the longitude from the same time (80.49 W):

Improvement_long = abs(0.0148 * 80.49 * 65 * 0.1) = 7.74 miles in longitude

For Sandy, the difference between the MAPEs is .0152%. To see the improvement we have in latitude, we take the latitude from our initial forecast point on October 26, 2012 at 12Z (26.48 N) and compute:

Improvement_lat = abs(0.0152 * 26.48 * 69.172 * 0.1) = 2.78 miles in latitude

For longitude, take the longitude from the same time (76.91 W):

Improvement_long = abs(0.0152 * 76.91 * 65 * 0.1) = 7.60 miles in longitude

To see the improvement in our 72-hour forecast, we take the latitude from October 29, 2012 at 0Z (33.72 N):

Improvement_lat = abs(0.0152 * 33.72 * 69.172 * 0.1) = 3.55 miles in latitude

For longitude, take the longitude from the same time (72.04 W):

Improvement_long = abs(0.0152 * 72.04 * 65 * 0.1) = 7.12 miles in longitude

For Isaac, the difference between the MAPEs is -.0192%; the negative sign means the NHC forecast was more accurate. To see the comparison in latitude, we take the latitude from our initial forecast point on August 28, 2012 at 12Z (19.35 N) and compute:

Comparison_lat = abs(0.0192 * 19.35 * 69.172 * 0.1) = 2.57 miles in latitude, in the NHC's favor

For longitude, take the longitude from the same time (74.26 W):

Comparison_long = abs(0.0192 * 74.26 * 65 * 0.1) = 9.27 miles in longitude

To see the comparison in our 72-hour forecast, we take the latitude from August 31, 2012 at 0Z (26.88 N):

Comparison_lat = abs(0.0192 * 26.88 * 69.172 * 0.1) = 3.57 miles in latitude

For longitude, take the longitude from the same time (84.38 W):

Comparison_long = abs(0.0192 * 84.38 * 65 * 0.1) = 10.53 miles in longitude

For Hermine, the difference between the MAPEs is .1264%. To see the improvement we have in latitude, we take the latitude from our 12-hour forecast on August 30, 2016 at 12Z (24.3 N) and compute:

Improvement_lat = abs(0.1264 * 24.3 * 69.172 * 0.1) = 21.25 miles in latitude

For longitude, take the longitude from the same time (86.7 W):

Improvement_long = abs(0.1264 * 86.7 * 65 * 0.1) = 71.23 miles in longitude

To see the improvement in our 72-hour forecast, we take the latitude from September 2, 2016 at 0Z (29.6 N):

Improvement_lat = abs(0.1264 * 29.6 * 69.172 * 0.1) = 25.88 miles in latitude

For longitude, take the longitude from the same time (84.1 W):

Improvement_long = abs(0.1264 * 84.1 * 65 * 0.1) = 69.10 miles in longitude

Small changes in hurricane forecasts can have important implications for storm readiness because wind speeds vary considerably over short distances within a storm. "The winds of a hurricane are very light in the center of the storm (blue circle) but increase rapidly to a maximum 10-50 km (6-31 miles) from the center (red ring) and then fall off slowly toward the outer extent of the storm (yellow ring)" (US Department of Commerce 2017). This means that when we outperform the NHC on certain storms, we can reduce the number of people forced to evacuate, and reduce the number of people building up and preparing for a storm that may not directly impact them. If we outperform the NHC by 10 miles, people can avoid being hit by winds they are not prepared to weather. Whitehead (2003) developed a real-cost analysis and determined that evacuation costs range from 1 million to 50 million dollars per evacuation event, depending on the storm.


CHAPTER 6

CONCLUSION

Hurricanes are difficult to track as they travel across the ocean. Many factors affect how they move and how strong they become. As a result, people and economies along the coast are at the mercy of these deadly storms.

These storms can bring deadly winds, tornadoes, and storm surges that lead to billions of dollars in damage. The National Hurricane Center has taken great strides to improve its hurricane forecasting tools; however, this research presents a nonparametric Bayesian model averaging approach that can lead to more accurate hurricane forecasts. We demonstrated the concept by outperforming NHC official forecasts on a small number of storms, both at the time of strengthening to a hurricane and as the storms approached landfall. There is more work to do in optimizing the weighted average approach introduced here, but the methodology appears viable and already performs well enough to demonstrate the value of additional work on this topic.

By no means are we claiming to have the best model or that this is the only way it should be done. However, it appears that the component models underlying both our weighted average forecast and the NHC forecast track can be combined more efficiently than has been done previously.


REFERENCES

Arizona University. Accessed June 18, 2017. http://www.atmo.arizona.edu/students/courselinks/fall16/atmo336/lectures/sec2/hurricanes.html

Berg, Robbie. "Tropical Cyclone Report: Hurricane Isaac." National Hurricane Center, January 28, 2013, 1-78. Accessed July 18, 2017.

Chu, Pao-Shin, and Xin Zhao. "A Bayesian Regression Approach for Predicting Seasonal Tropical Cyclone Activity over the Central North Pacific." Journal of Climate 20. Published 17 November 2006. Accessed 10 August 2016.

Donovan, Shaun. "Hurricane Sandy Rebuilding Strategy: Pre-Publication Edition." US Department of Housing and Urban Development, August 2013, 1-168.

Dorfman, Jeffrey H. "Bayesian Composite Qualitative Forecasting: Hog Prices Again." American Journal of Agricultural Economics 80 (1998): 543-51. http://eds.a.ebscohost.com/eds/pdfviewer/pdfviewer?sid=853c7f35-c6e4-417a-9cd3-036b4eff9785%40sessionmgr4007&vid=0&hid=4103

Eckel, Catherine C., Mahmoud A. El-Gamal, and Rick K. Wilson. "Risk Loving after the Storm: A Bayesian-Network Study of Hurricane Katrina Evacuees." Journal of Economic Behavior and Organization 69. Published February 2009. Accessed 10 July 2016. http://www.sciencedirect.com/science/article/pii/S0167268108001741

Elsner, James B., and Brian H. Bossak. "Bayesian Analysis of U.S. Hurricane Climate." Journal of Climate 14. Published 2 July 2001. Accessed 18 June 2016. http://myweb.fsu.edu/jelsner/PDF/Research/ElsnerBossak2001.pdf

Elsner, James B., and Thomas H. Jagger. "A Hierarchical Bayesian Approach to Seasonal Hurricane Modeling." Journal of Climate 17. Published 12 February 2004. Accessed 20 June 2016. http://myweb.fsu.edu/jelsner/PDF/Research/ElsnerJagger2004.pdf

Elsner, James B. "Tracking Hurricanes." Bulletin of the American Meteorological Society. Published March 1, 2003. Accessed August 10, 2016. http://eds.a.ebscohost.com/eds/pdfviewer/pdfviewer?sid=f204f77d-740e-45ba-beea-b777e4bda1c1%40sessionmgr4006&vid=0&hid=4103

Elsner, J. B., R. J. Murnane, and T. H. Jagger. "Forecasting U.S. Hurricanes 6 Months in Advance." Geophysical Research Letters 33. Published 31 May 2006. Accessed 19 June 2016. http://myweb.fsu.edu/jelsner/PDF/Research/ElsnerMurnaneJagger2006.pdf

"Index of /atcf/archive/". Accessed February 22, 2016. ftp://ftp.nhc.noaa.gov/atcf/archive/

"Objective Technique List for the NHC." ftp://ftp.nhc.noaa.gov/atcf/docs/nhc_techlist.dat

Knabb, R. D., J. R. Rhome, and D. P. Brown. "Tropical Cyclone Report: Hurricane Katrina, August 23-30, 2005." Fire Engineering 159, no. 5 (May 2006): 32-37. Accessed July 18, 2017.

LeSage, James P., and Michael Magura. "A Mixture-Model Approach to Combining Forecasts." Journal of Business & Economic Statistics. October 1992. Accessed June 19, 2017. https://www.jstor.org/stable/pdf/1391820.pdf?refreqid=excelsior:2bc458239c45ff27567a6f628c55b7b4

Li, David T., and Jeffrey H. Dorfman. "A Robust Approach to Predicting Fluctuations in State-Level Employment Growth." Wiley Online Library. August 1995. Accessed June 19, 2017. http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9787.1995.tb01415.x/abstract

Kang, Nam-Young, Myeong-Soon Lim, James B. Elsner, and Dong-Hyun Shin. "Bayesian Updating of Track-Forecast Uncertainty for Tropical Cyclones." Weather and Forecasting 31. Published 29 December 2015. Accessed 22 June 2016. http://myweb.fsu.edu/jelsner/PDF/Research/KangEtAl2016.pdf

Min, Chung-ki, and Arnold Zellner. "Bayesian and Non-Bayesian Methods for Combining Models and Forecasts with Applications to Forecasting International Growth Rates." Journal of Econometrics. 1993. Accessed June 19, 2017. http://ac.els-cdn.com/030440769390102B/1-s2.0-030440769390102B-main.pdf

Rappaport, Edward N., James L. Franklin, Lixion A. Avila, Stephen R. Baig, John L. Beven II, Eric S. Blake, Christopher A. Burr, Jiann-Gwo Jiing, Christopher A. Juckins, Richard D. Knabb, Christopher W. Landsea, Michelle Mainelli, Max Mayfield, Colin J. McAdie, Richard B. Pasch, Christopher Sisko, Stacy R. Stewart, and Asha N. Tribble. "Advances and Challenges at the National Hurricane Center." AMS Journals Online. October 1, 2008. Accessed July 18, 2017. http://journals.ametsoc.org/doi/pdf/10.1175/2008WAF2222128.1

Reich, Brian J., and Montserrat Fuentes. "A Multivariate Semiparametric Bayesian Spatial Modeling Framework for Hurricane Surface Wind Fields." The Annals of Applied Statistics 1. Published 1 March 2007. Accessed 15 July 2016. http://www.jstor.org/stable/4537431?seq=1#page_scan_tab_contents

Rowlett, Russ. University of North Carolina. "Saffir-Simpson Hurricane Scale". Accessed June 18, 2017. https://www.unc.edu/~rowlett/units/scales/saffir.html

Whitehead, John C. "One Million Dollars per Mile? The Opportunity Costs of Hurricane Evacuation." ScienceDirect. 2003. Accessed July 18, 2017. http://www.sciencedirect.com/science/article/pii/S0964569104000043

US Department of Commerce. National Oceanic and Atmospheric Administration. "Saffir-Simpson Hurricane Scale". http://www.nhc.noaa.gov/aboutsshws.php

US Department of Commerce. National Oceanic and Atmospheric Administration. "National Hurricane Center Forecast Verification." Accessed July 16, 2017. http://www.nhc.noaa.gov/verification/verify2.shtml

US Department of Commerce. National Oceanic and Atmospheric Administration. "Tropical Cyclone Structure". Accessed July 24, 2017. http://www.srh.noaa.gov/jetstream/tropics/tc_structure.html

Vickery, P. J., P. F. Skerlj, and L. A. Twisdale. "Simulation of Hurricane Risk in the U.S. Using Empirical Track Model." Journal of Structural Engineering 126, no. 10 (October 2000): 1222-237.

Zhao, Xin, and Pao-Shin Chu. "Bayesian Multiple Changepoint Analysis of Hurricane Activity in the Eastern North Pacific: A Markov Chain Monte Carlo Approach." Journal of Climate 19. Published 27 July 2005. Accessed 13 July 2016. http://journals.ametsoc.org/doi/abs/10.1175/JCLI3628.1


APPENDIX

CODING FOR SIMILAR STORMS

cd('D:\Robby\Robby Matlab code\Matlab coding');
select = 1000000*ones(15,3);   %% (id, row, diff)
baseset = baseexcel;           %% 1x6 matrix; changes for each "current storm" (Matthew)
dataset = dataexcel;           %% input the dataset
w = zeros(5,5);                %% diagonal matrix of characteristic weights
w(1,1)=4; w(2,2)=5; w(3,3)=2; w(4,4)=1; w(5,5)=3;
for i=1:88422                  %% loop over every row of historical storm data
    diff1 = ((dataset(i,1:5)-baseset(1,1:5))./baseset(1,1:5));  %% percentage differences
    diff = diff1*w*diff1';     %% weighted squared distance to the current storm
    stormcheck = (select(:,1)-dataset(i,6)).^2;  %% compare against storms already selected
    if min(stormcheck)==0      %% what happens if the storm is already in the selected set
        for j=1:15             %% find which slot holds this storm
            if select(j,1)==dataset(i,6)
                indno=j;
            end
        end
        if diff ...            %% (comparison logic truncated in the original)
    select(:,2)=select(sind,2);
    select(:,3)=select(sind,3);
end

The preceding code is used to produce the forecasts. The beginning sets the weights, the diagonal entries of w, which modify which storms are scored as most similar. The loop bound i=1:88422 covers the entire data set. The remaining code sets up the conditions for testing whether a given row of hurricane data is more similar to the row of the "current storm". If a row improves on the previous rows, it replaces the least similar of the similar storms in the selected list.
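The selection logic in the MATLAB above can be sketched in Python. The five characteristic columns, the storm ids, and all numeric values below are placeholders (the thesis does not list the characteristics in this appendix); since w is diagonal, the quadratic form diff1*w*diff1' reduces to a weighted sum of squared percentage differences:

```python
def most_similar(base, rows, storm_ids, w_diag, k=10):
    """Rank historical rows by weighted squared percentage distance to `base`;
    keep each storm's closest row, then return the k nearest storm ids."""
    best = {}
    for sid, row in zip(storm_ids, rows):
        # percentage difference per characteristic, then weighted sum of squares
        d = sum(w * ((x - b) / b) ** 2 for w, x, b in zip(w_diag, row, base))
        if sid not in best or d < best[sid]:
            best[sid] = d  # one slot per storm: keep its closest row only
    return sorted(best, key=best.get)[:k]

w_diag = [4, 5, 2, 1, 3]  # diagonal of the w matrix set in the appendix code
base = [25.0, 80.0, 100.0, 980.0, 10.0]            # hypothetical current-storm row
rows = [[25.1, 80.2, 101.0, 981.0, 10.0],          # hypothetical historical rows
        [30.0, 70.0, 140.0, 950.0, 20.0],
        [25.2, 80.1, 99.0, 979.0, 11.0]]
ids = [101, 102, 103]
ranked = most_similar(base, rows, ids, w_diag, k=2)  # two most similar storms
```

Here storm 102 differs sharply on every characteristic, so it falls outside the top two.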

CODING FOR FINDING FORECAST

cd('D:\Robby\Robby Matlab code\Matlab coding');
select = 1000000*ones(15,3);   %% (id, row, diff)
baseset = baseexcel;           %% 1x6 matrix; changes for each "current storm"
dataset = dataexcel;

%% Latitude and longitude errors for each of the ten models
errors1 = LatModel1Matthew - actuallatMatthew;    errors11 = LongModel1Matthew - actuallongMatthew;
errors2 = LatModel2Matthew - actuallatMatthew;    errors22 = LongModel2Matthew - actuallongMatthew;
errors3 = LatModel3Matthew - actuallatMatthew;    errors33 = LongModel3Matthew - actuallongMatthew;
errors4 = LatModel4Matthew - actuallatMatthew;    errors44 = LongModel4Matthew - actuallongMatthew;
errors5 = LatModel5Matthew - actuallatMatthew;    errors55 = LongModel5Matthew - actuallongMatthew;
errors6 = LatModel6Matthew - actuallatMatthew;    errors66 = LongModel6Matthew - actuallongMatthew;
errors7 = LatModel7Matthew - actuallatMatthew;    errors77 = LongModel7Matthew - actuallongMatthew;
errors8 = LatModel8Matthew - actuallatMatthew;    errors88 = LongModel8Matthew - actuallongMatthew;
errors9 = LatModel9Matthew - actuallatMatthew;    errors99 = LongModel9Matthew - actuallongMatthew;
errors10 = LatModel10Matthew - actuallatMatthew;  errors1010 = LongModel10Matthew - actuallongMatthew;

%% Sum of squared errors for each model (latitude plus longitude)
SSE1 = (errors1(:,2)'*errors1(:,2)) + (errors11(:,2)'*errors11(:,2));
SSE2 = errors2(:,2)'*errors2(:,2) + errors22(:,2)'*errors22(:,2);
SSE3 = errors3(:,2)'*errors3(:,2) + errors33(:,2)'*errors33(:,2);
SSE4 = errors4(:,2)'*errors4(:,2) + errors44(:,2)'*errors44(:,2);
SSE5 = errors5(:,2)'*errors5(:,2) + errors55(:,2)'*errors55(:,2);
SSE6 = errors6(:,2)'*errors6(:,2) + errors66(:,2)'*errors66(:,2);
SSE7 = errors7(:,2)'*errors7(:,2) + errors77(:,2)'*errors77(:,2);
SSE8 = errors8(:,2)'*errors8(:,2) + errors88(:,2)'*errors88(:,2);
SSE9 = errors9(:,2)'*errors9(:,2) + errors99(:,2)'*errors99(:,2);
SSE10 = errors10(:,2)'*errors10(:,2) + errors1010(:,2)'*errors1010(:,2);

%% Likelihood-style scores with power 2
L1=(SSE1/50).^(-2/2); L2=(SSE2/50).^(-2/2); L3=(SSE3/50).^(-2/2); L4=(SSE4/50).^(-2/2); L5=(SSE5/50).^(-2/2);
L6=(SSE6/50).^(-2/2); L7=(SSE7/50).^(-2/2); L8=(SSE8/50).^(-2/2); L9=(SSE9/50).^(-2/2); L10=(SSE10/50).^(-2/2);

LTOT = L1+L2+L3+L4+L5+L6+L7+L8+L9+L10;
weight1=L1/LTOT; weight2=L2/LTOT; weight3=L3/LTOT; weight4=L4/LTOT; weight5=L5/LTOT;
weight6=L6/LTOT; weight7=L7/LTOT; weight8=L8/LTOT; weight9=L9/LTOT; weight10=L10/LTOT;
Weights = weight1+weight2+weight3+weight4+weight5+weight6+weight7+weight8+weight9+weight10;  %% sanity check: sums to 1

%% Power 2 composite forecast compared to the official track
errorMatthew2 = MatthewLat2 - MatthewOffLat;
errorMatthew22 = MatthewLong2 - MatthewOffLong;
SSEMatthew2 = (errorMatthew2'*errorMatthew2) + (errorMatthew22'*errorMatthew22);

%% NHC forecast compared to the official track
errorNHClat = NHCLatMatthew - MatthewOffLat;
errorNHCLong = NHCLongMatthew - MatthewOffLong;
SSENHCLat = diag(errorNHClat'*errorNHClat);
SSENHCLong = diag(errorNHCLong'*errorNHCLong);
SSENHC = errorNHClat'*errorNHClat + errorNHCLong'*errorNHCLong;

%% Now the same with power 10 in the exponent
L11=(SSE1/50).^(-10/2); L22=(SSE2/50).^(-10/2); L33=(SSE3/50).^(-10/2); L44=(SSE4/50).^(-10/2); L55=(SSE5/50).^(-10/2);
L66=(SSE6/50).^(-10/2); L77=(SSE7/50).^(-10/2); L88=(SSE8/50).^(-10/2); L99=(SSE9/50).^(-10/2); L1010=(SSE10/50).^(-10/2);

LTOT10 = L11+L22+L33+L44+L55+L66+L77+L88+L99+L1010;
weight11=L11/LTOT10; weight22=L22/LTOT10; weight33=L33/LTOT10; weight44=L44/LTOT10; weight55=L55/LTOT10;
weight66=L66/LTOT10; weight77=L77/LTOT10; weight88=L88/LTOT10; weight99=L99/LTOT10; weight1010=L1010/LTOT10;
Weights1 = weight11+weight22+weight33+weight44+weight55+weight66+weight77+weight88+weight99+weight1010;

errorMatthew10 = MatthewLat10 - MatthewOffLat;
errorMatthew1010 = MatthewLong10 - MatthewOffLong;
SSEMatthew10 = (errorMatthew10'*errorMatthew10) + (errorMatthew1010'*errorMatthew1010);

%% Now the same with power 50 in the exponent
L150=(SSE1/50).^(-50/2); L250=(SSE2/50).^(-50/2); L350=(SSE3/50).^(-50/2); L450=(SSE4/50).^(-50/2); L550=(SSE5/50).^(-50/2);
L650=(SSE6/50).^(-50/2); L750=(SSE7/50).^(-50/2); L850=(SSE8/50).^(-50/2); L950=(SSE9/50).^(-50/2); L1050=(SSE10/50).^(-50/2);

LTOT50 = L150+L250+L350+L450+L550+L650+L750+L850+L950+L1050;
weight150=L150/LTOT50; weight250=L250/LTOT50; weight350=L350/LTOT50; weight450=L450/LTOT50; weight550=L550/LTOT50;
weight650=L650/LTOT50; weight750=L750/LTOT50; weight850=L850/LTOT50; weight950=L950/LTOT50; weight1050=L1050/LTOT50;
Weights150 = weight150+weight250+weight350+weight450+weight550+weight650+weight750+weight850+weight950+weight1050;

errorMatthew50 = MatthewLat50 - MatthewOffLat;
errorMatthew150 = MatthewLong50 - MatthewOffLong;
SSEMatthew50 = (errorMatthew50'*errorMatthew50) + (errorMatthew150'*errorMatthew150);

The block that begins with errors1 computes the forecast errors for each of the ten models. Each model has a forecast for each of the similar storms at a particular time, and we calculate its SSE by multiplying the error vectors by their transposes. Dividing each model's likelihood score by the total gives the weight that model receives relative to the other models. The next step is to compute the error of our composite forecast for the current storm, forming one forecast from each set of weights created by the different power terms. We apply the difference formula stated earlier in the paper to compute the error term and see how far our forecast was from the actual track. We also compute the sum of squared errors for the National Hurricane Center's forecast, and from there we can compare which forecast has the lower error.


LIST OF TABLES

The next set of tables shows the weights, the SSEs, and the similar storms that were used for the 5 hurricane development forecasts.

TABLE 1. WEIGHTS FOR MATTHEW POWER 2

Models      Weights Forecast(2)
Model 1     0.0820252884941138
Model 2     0.0663136826013441
Model 3     0.0487451804226973
Model 4     0.111771470110844
Model 5     0.132701344439767
Model 6     0.0863842725979490
Model 7     0.106707972731347
Model 8     0.114169457839058
Model 9     0.126275099116110
Model 10    0.124906231646770


TABLE 2. WEIGHTS FOR MATTHEW POWER 10

Models      Weights Forecast(10)
Model 1     0.0225833646074035
Model 2     0.00779947813632997
Model 3     0.00167381796418499
Model 4     0.106097372668597
Model 5     0.250280162278876
Model 6     0.0292565570613793
Model 7     0.0841461323821375
Model 8     0.117977586413537
Model 9     0.195271293883323
Model 10    0.184914234604232

TABLE 3. WEIGHTS FOR MATTHEW POWER 50

Models      Weights Forecast(50)
Model 1     3.85768412466822e-06
Model 2     1.89544487099179e-08
Model 3     8.62828646496712e-12
Model 4     0.00882889963129419
Model 5     0.644934444201602
Model 6     1.40766565850884e-05
Model 7     0.00277047495682132
Model 8     0.0150100095369151
Model 9     0.186455608926790
Model 10    0.184914234604232


TABLE 4. SSES FOR MATTHEW

Models      SSEs
Model 1     49306
Model 2     60988
Model 3     82969
Model 4     36184
Model 5     30477
Model 6     46818
Model 7     37901
Model 8     35424
Model 9     32028
Model 10    32379

Here are the weights and the SSEs used to determine our forecasts for Hurricane Matthew. As the exponent in the weights formula increases, the forecasts sway further toward the models that were more accurate on the similar storms. The higher a model's SSE, the lower the weight that model receives.
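This sharpening effect can be checked directly against Table 4's SSEs. Using the likelihood rule from the appendix, L_i = (SSE_i/50)^(-power/2), raising the power from 2 to 50 concentrates nearly all of the weight on model 5, the lowest SSE, which lines up with the model 5 entries in Tables 1 and 3 (a Python sketch, not the thesis's MATLAB):

```python
def weights_for_power(sses, power, n=50.0):
    # Normalize (SSE/n)^(-power/2) likelihood scores into weights
    scores = [(s / n) ** (-power / 2) for s in sses]
    total = sum(scores)
    return [s / total for s in scores]

# SSEs for Matthew from Table 4 (models 1 through 10)
sses = [49306, 60988, 82969, 36184, 30477, 46818, 37901, 35424, 32028, 32379]
w2 = weights_for_power(sses, power=2)    # model 5 weight ~ 0.1327 (Table 1)
w50 = weights_for_power(sses, power=50)  # model 5 weight ~ 0.6449 (Table 3)
```

At power 2 the weights stay spread across all ten models; at power 50 only the three or four lowest-SSE models retain any meaningful weight.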


TABLE 5. WEIGHTS FOR EARL POWER 2

Models      Weights Forecast(2)
Model 1     0.0925970426510887
Model 2     0.0748604608276149
Model 3     0.0550276583417250
Model 4     0.126177033632395
Model 5     0.149804435638501
Model 6     0.0975178304274975
Model 7     0.120460932032257
Model 8     0
Model 9     0.142549949573953
Model 10    0.141004656874968

TABLE 6. WEIGHTS FOR EARL POWER 10

Models      Weights Forecast(10)
Model 1     0.0256040711205687
Model 2     0.00884272101954403
Model 3     0.00189770456895641
Model 4     0.120288748941408
Model 5     0.283757145423540
Model 6     0.0331698566960638
Model 7     0.0954013538499369
Model 8     0
Model 9     0.221390398787390
Model 10    0.209647999592592


TABLE 7. WEIGHTS FOR EARL POWER 50

Models      Weights Forecast(50)
Model 1     3.91647038245999e-06
Model 2     1.92432906866460e-08
Model 3     8.75977070681764e-12
Model 4     0.00896344096567251
Model 5     0.654762434589200
Model 6     1.42911671401558e-05
Model 7     0.00281269351328007
Model 8     0
Model 9     0.189296958072771
Model 10    0.144146245969504

TABLE 8. SSES FOR EARL

Models      SSEs
Model 1     41427
Model 2     43852
Model 3     90069
Model 4     31982
Model 5     30673
Model 6     59000
Model 7     45945
Model 8     0
Model 9     28996
Model 10    28834

Hurricane Earl presented an interesting situation: model 8 (GFDL) was missing data for one of the similar storms, so it could not be given a weight. Its weight is therefore listed as zero, and it had no influence on the composite forecast. We still improved on the NHC's forecast.

TABLE 9. WEIGHTS FOR DANNY POWER 2

Models      Weights Forecast(2)
Model 1     0.106806708276897
Model 2     0.0917939766091591
Model 3     0.0340814398908823
Model 4     0.113796618259025
Model 5     0.116105274094434
Model 6     0.110632740436137
Model 7     0.0719089313184132
Model 8     0.143419488188409
Model 9     0.102735198061290
Model 10    0.108719624865354

TABLE 10. WEIGHTS FOR DANNY POWER 10

Models      Weights Forecast(10)
Model 1     0.0835022523262642
Model 2     0.0391540290680664
Model 3     0.000276245499337186
Model 4     0.114644260660451
Model 5     0.126755051915138
Model 6     0.0995689410794640
Model 7     0.0115510033453559
Model 8     0.364540998522043
Model 9     0.0687546142592215
Model 10    0.0912526033246585


TABLE 11. WEIGHTS FOR DANNY POWER 50

Models      Weights Forecast(50)
Model 1     0.000623409738174179
Model 2     1.41307406867414e-05
Model 3     2.47035113274951e-16
Model 4     0.00304119467668586
Model 5     0.00502468706796790
Model 6     0.00150280307220431
Model 7     3.15777498708372e-08
Model 8     0.988586160421692
Model 9     0.000235934733766152
Model 10    0.000971647971073090

TABLE 12. SSES FOR DANNY

Models      SSEs
Model 1     41205
Model 2     47944
Model 3     129131
Model 4     38674
Model 5     37905
Model 6     39780
Model 7     61202
Model 8     30686
Model 9     42838
Model 10    40480

Here are the models for Hurricane Danny. Model 3 was off by a larger margin here, so its weight was much smaller from the beginning and went to zero much faster than the other models' weights. Because model 3 was not accurate on the similar storms, it receives a small weight and contributes little to the forecast.

TABLE 13. WEIGHTS FOR BERTHA POWER 2

Models      Weights Forecast(2)
Model 1     0.0515763926149454
Model 2     0.0497605837331809
Model 3     0.0416531619049050
Model 4     0.161999182655914
Model 5     0.114821128231846
Model 6     0.0780813557825902
Model 7     0.0904634182795308
Model 8     0.139595449536951
Model 9     0.138272472179677
Model 10    0.133776855080460


TABLE 14. WEIGHTS FOR BERTHA POWER 10

Models      Weights Forecast(10)
Model 1     0.00126862239801048
Model 2     0.00106048588029917
Model 3     0.000435830310777998
Model 4     0.387830245969079
Model 5     0.0693723822474241
Model 6     0.0100882140219312
Model 7     0.0210592320142289
Model 8     0.184261187959939
Model 9     0.175693703358793
Model 10    0.148930095839518

TABLE 15. WEIGHTS FOR BERTHA POWER 50

Models      Weights Forecast(50)
Model 1     3.56050278018190e-13
Model 2     1.45336235891136e-13
Model 3     1.70387060286437e-15
Model 4     0.950731391396978
Model 5     0.000174093476369103
Model 6     1.13219411098292e-08
Model 7     4.48809563010342e-07
Model 8     0.0230154138956354
Model 9     0.0181397142492554
Model 10    0.00793892684975453


TABLE 16. SSES FOR BERTHA

Models      SSEs
Model 1     114001
Model 2     118161
Model 3     141160
Model 4     36295
Model 5     51208
Model 6     75303
Model 7     64996
Model 8     42120
Model 9     42523
Model 10    43952

Bertha had three models with higher errors relative to the actual tracks; those models were given lower weights and as a result had little influence on the forecast.


TABLE 17. WEIGHTS FOR HUMBERTO POWER 2

Models      Weights Forecast(2)
Model 1     0.0766705470850654
Model 2     0.0628338094209545
Model 3     0.0311712864074558
Model 4     0.135668306152565
Model 5     0.114456142757394
Model 6     0.0910496142833587
Model 7     0.118094543101165
Model 8     0.143817763703268
Model 9     0.108688898958443
Model 10    0.117549088130330

TABLE 18. WEIGHTS FOR HUMBERTO POWER 10

Models      Weights Forecast(10)
Model 1     0.0134059218857408
Model 2     0.00495588050046133
Model 3     0.000148911247267218
Model 4     0.232565820128484
Model 5     0.0993915360991696
Model 6     0.0316624433045702
Model 7     0.116225928621897
Model 8     0.311326816354489
Model 9     0.0767502537417036
Model 10    0.113566488116217


TABLE 19. WEIGHTS FOR HUMBERTO POWER 50

Models      Weights Forecast(50)
Model 1     1.18384078063586e-07
Model 2     8.17361171829120e-10
Model 3     2.00192305050583e-17
Model 4     0.186011292744422
Model 5     0.00265189843677375
Model 6     8.70025535714953e-06
Model 7     0.00579862465591714
Model 8     0.799636347269420
Model 9     0.000728130081633146
Model 10    0.00516488735503823

TABLE 20. SSES FOR HUMBERTO

Models      SSEs
Model 1     51119
Model 2     62376
Model 3     125735
Model 4     28889
Model 5     34243
Model 6     43046
Model 7     33188
Model 8     27252
Model 9     36060
Model 10    33342


As stated above, Humberto had only one model with a significantly high error, model 3. Its weight approaches zero very quickly, while the other models' weights do not collapse toward zero until the power 50 forecast.

TABLE 21. HUMBERTO SIMILAR STORMS

Closeness to Humberto    Humberto's Similar Storms
1st                      Philippe
2nd                      Fred
3rd                      Bill
4th                      Katia
5th                      Julia
6th                      Ophelia
7th                      Hermine
8th                      Florence
9th                      Lisa
10th                     Danny


TABLE 22. EARL SIMILAR STORMS

Closeness to Earl (2016)    Earl's Similar Storms
1st                         Ernesto
2nd                         Alex
3rd                         Karl
4th                         Paula
5th                         Richard
6th                         Paloma
7th                         Dennis
8th                         Tomas
9th                         Irene
10th                        Ida

TABLE 23. DANNY SIMILAR STORMS

Closeness to Danny    Danny's Similar Storms
1st                   Ophelia
2nd                   Bill
3rd                   Karen
4th                   Philippe
5th                   Tomas
6th                   Gaston
7th                   Helene
8th                   Irene
9th                   Dean
10th                  Florence


TABLE 24. MATTHEW SIMILAR STORMS

Closeness to Matthew    Matthew's Similar Storms
1st                     Felix
2nd                     Isaac
3rd                     Dennis
4th                     Ernesto
5th                     Gustav
6th                     Tomas
7th                     Earl
8th                     Sandy
9th                     Danny
10th                    Earl

TABLE 25. BERTHA SIMILAR STORMS

Closeness to Bertha    Bertha's Similar Storms
1st                    Noel
2nd                    Wilma
3rd                    Maria
4th                    Hanna
5th                    Joaquin
6th                    Otto
7th                    Kyle
8th                    Rita
9th                    Tomas
10th                   Hermine

The next set of tables gives the weights, SSEs, and similar storms used for each of the hurricanes in the 72-hour-to-landfall forecasts.


TABLE 26: WEIGHTS FOR ISAAC

Model 1     6.09802934719061e-30
Model 2     6.09802934719061e-30
Model 3     2.79508692516074e-36
Model 4     1.22983521094241e-13
Model 5     0.549267166466868
Model 6     1.16119773707139e-14
Model 7     0
Model 8     7.23234871351940e-06
Model 9     1.55602864908942e-06
Model 10    0.450724045155635

TABLE 27: SSES FOR ISAAC

Model 1     141393
Model 2     127885
Model 3     253501
Model 4     31495
Model 5     9823
Model 6     34613
Model 7     0
Model 8     15398
Model 9     16374
Model 10    9901


TABLE 28: ISAAC SIMILAR STORMS

Closeness to Isaac    Storm Name
1st                   Maria
2nd                   Dennis
3rd                   Rita
4th                   Paloma
5th                   Noel
6th                   Danny
7th                   Bertha
8th                   Dolly
9th                   Rafael
10th                  Ernesto

TABLE 29: WEIGHTS FOR HERMINE

Model 1     1.23466991853697e-23
Model 2     1.12958735011980e-21
Model 3     1.10911859524054e-24
Model 4     3.15683020171535e-19
Model 5     3.36919588365249e-11
Model 6     1.15162763891665e-24
Model 7     0
Model 8     1.77880482443463e-09
Model 9     0.999999994497630
Model 10    3.68987339719658e-09


TABLE 30: SSES FOR HERMINE

Model 1     85175
Model 2     71098
Model 3     93794
Model 4     56755
Model 5     27094
Model 6     93653
Model 7     0
Model 8     23119
Model 9     10327
Model 10    22454

TABLE 31: HERMINE SIMILAR STORMS

Closeness to Hermine    Storm Name
1st                     Paloma
2nd                     Alex
3rd                     Stan
4th                     Nate
5th                     Kyle
6th                     Ophelia (2005)
7th                     Karl
8th                     Ingrid
9th                     Ernesto
10th                    Cindy


TABLE 32: WEIGHTS FOR SANDY

Model 1     1.87419149643158e-23
Model 2     2.11083510361736e-23
Model 3     8.78246514268842e-29
Model 4     7.78100597310195e-15
Model 5     0.144253821596202
Model 6     8.98248520914407e-15
Model 7     0
Model 8     0.365856950195892
Model 9     4.33720603032241e-09
Model 10    0.489889223870683

TABLE 33: SSES FOR SANDY

Model 1    138252
Model 2    137596
Model 3    225860
Model 4    62509
Model 5    18417
Model 6    62151
Model 7    0
Model 8    17744
Model 9    36821
Model 10   17538


TABLE 34: SANDY SIMILAR STORMS

Closeness to Sandy   Storm Name
1st    Katrina
2nd    Maria
3rd    Tomas
4th    Dennis
5th    Hanna
6th    Nicole
7th    Paloma
8th    Irene
9th    Ophelia
10th   Bertha

TABLE 35: WEIGHTS FOR JOAQUIN

Model 1    1.82047076318983e-31
Model 2    0
Model 3    1.39580687111988e-34
Model 4    1.29828720664537e-12
Model 5    3.57585851404346e-05
Model 6    5.04426964898401e-19
Model 7    5.46849849846000e-10
Model 8    0.995713564576077
Model 9    0.00425067629063492
Model 10   0


TABLE 36: SSES FOR JOAQUIN

Model 1    196201
Model 2    3660218
Model 3    261406
Model 4    34560
Model 5    17417
Model 6    62373
Model 7    27139
Model 8    11566
Model 9    14387
Model 10   149561

TABLE 37: JOAQUIN SIMILAR STORMS

Closeness to Joaquin   Storm Name
1st    Cristobal
2nd    Arthur
3rd    Nate (old)
4th    Paula
5th    Maria
6th    Kyle
7th    Hermine
8th    Ophelia (old)
9th    Epsilon
10th   Ophelia


TABLE 38: WEIGHTS FOR MATTHEW

Model 1    4.59633057031310e-19
Model 2    3.33234148746338e-18
Model 3    5.83389486808161e-28
Model 4    3.34411202288169e-13
Model 5    0.00360797246845438
Model 6    2.32854337951813e-18
Model 7    1.68808965605525e-06
Model 8    0.979753464383666
Model 9    8.43120271571669e-08
Model 10   0.0166367907458622

TABLE 39: SSES FOR MATTHEW

Model 1    89877
Model 2    83030
Model 3    203942
Model 4    52381
Model 5    20790
Model 6    84229
Model 7    28252
Model 8    16615
Model 9    31850
Model 10   19557


TABLE 40: MATTHEW SIMILAR STORMS

Closeness to Matthew   Storm Name
1st    Irene
2nd    Isaac
3rd    Ike
4th    Hanna
5th    Gustav
6th    Earl (old)
7th    Tomas
8th    Katrina
9th    Paloma
10th   Ophelia


LIST OF FIGURES

FIGURE 1: GIS FOR DANNY

This figure compares our forecast with the NHC forecast for Hurricane Danny. As the storm moved, the actual track (green) lined up more closely with our non-parametric

Bayesian forecast (black) than with the NHC's forecast (red).


FIGURE 2: GIS FOR HUMBERTO

This figure shows our forecast for Hurricane Humberto. Our forecast lines up more closely with the official track of the hurricane.
