
THE FLORIDA STATE UNIVERSITY

COLLEGE OF ARTS AND SCIENCES

USING THE SUPERENSEMBLE METHOD TO IMPROVE

EASTERN PACIFIC TROPICAL CYCLONE FORECASTING

By

MARK RICKMAN JORDAN II

A Thesis submitted to the Department of Meteorology in partial fulfillment of the requirements for the degree of Master of Science

Degree Awarded: Fall Semester, 2005

The members of the Committee approve the Thesis of Mark Jordan defended on 1 September 2005.

T.N. Krishnamurti, Professor Directing Thesis

Carol Anne Clayson, Committee Member

Peter S. Ray, Committee Member

The Office of Graduate Studies has verified and approved the above named committee members.

ACKNOWLEDGEMENTS

I would first like to thank my major professor, Dr. T.N. Krishnamurti, for all of his help through this process and for his unending encouragement and patience. Furthermore, I would like to thank Dr. Carol Anne Clayson and Dr. Peter Ray for their advice and assistance throughout this process. Thank you Brian Mackey and Dr. Vijay Tallapragada for all of your help and wonderful suggestions during this project. Others who deserve commendation for their assistance during the past year include Mrinal Biswas, Arindam Chakraborty, Akhilesh Mishra, Lydia Stefanova, Donald van Dyke, and Lawrence Pologne. Thank you Bill Walsh for all of your support, advice, and encouragement over the years, and thank you Mike and Beth Rice for your love and support during my entire educational career. Finally, I would like to thank my parents and the rest of my family, for without all of you, I would not be where I am today. This research was funded by NOAA Subcontract Grant 120000586-09 and by ACS Defense Grant ACSD-04-036.


TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
ABBREVIATIONS AND ACRONYMS
ABSTRACT

1. INTRODUCTION

2. SUPERENSEMBLE METHODOLOGY
   2.1 History of Superensemble
   2.2 Superensemble Description
   2.3 Real-Time Tropical Cyclone Superensemble
   2.4 Member Models

3. AVERAGE EASTERN PACIFIC SYNOPTIC PATTERN AND OVERVIEW OF 2004 EASTERN PACIFIC HURRICANE SEASON
   3.1 Overview
   3.2 Eastern Pacific Synoptic Pattern
   3.3 2004 Eastern Pacific Hurricane Season

4. RESULTS
   4.1 Overview
   4.2 General Eastern Pacific Superensemble Experiments
   4.3 Eastern Pacific Superensemble Experiments Using Filtered Training
   4.4 Potential Explanations for Experimentation Outcomes

5. CONCLUSIONS AND FUTURE WORK
   5.1 Conclusions
   5.2 Future Work

REFERENCES

BIOGRAPHICAL SKETCH

LIST OF FIGURES

1. Schematic of the steps in a tropical cyclone superensemble forecast
2. Mean surface level streamline analysis over the Pacific for May
3. Mean surface level streamline analysis over the Pacific for August
4. Mean 300 hPa level streamline analysis over the Pacific for July and associated mean July tropical cyclone tracks
5. Mean 300 hPa level streamline analysis over the Pacific for September and associated average September tracks for tropical cyclones
6. Mean monthly sea-surface temperatures for June
7. Mean monthly sea-surface temperatures for August
8. Mean monthly sea-surface temperatures for October
9. 2004 Eastern Pacific tropical cyclone tracks
10. Experiment 1 RMS Track Errors
11. Experiment 1 RMS Intensity Errors
12. Experiment 2 RMS Track Errors
13. Experiment 2 RMS Intensity Errors
14. Experiment 3 RMS Track Errors
15. Experiment 3 RMS Intensity Errors
16. Experiment 4 RMS Track Errors
17. Experiment 4 RMS Intensity Errors
18. Experiment 5 RMS Track Errors
19. Experiment 5 RMS Intensity Errors
20. Experiment 6 RMS Track Errors
21. Experiment 6 RMS Intensity Errors
22. Experiment 7 RMS Track Errors
23. Experiment 7 RMS Intensity Errors
24. Experiment 8 RMS Track Errors
25. Experiment 8 RMS Intensity Errors
26. Experiment 9 RMS Track Errors
27. Experiment 9 RMS Intensity Errors
28. Early Times Latitude Model Biases for 2002/2003 Eastern Pacific Training Set (Bias Increment is in Degrees Latitude)
29. Early Times Latitude Model Biases for 2002/2003 Atlantic Training Set (Bias Increment is in Degrees Latitude)
30. Actual Early Times Latitude Model Biases of the 2004 Eastern Pacific Numerical Models (Bias Increment is in Degrees Latitude)
31. Early Times Longitude Model Biases of 2002/2003 Eastern Pacific Training Set (Bias Increment is in Degrees Longitude)
32. Early Times Longitude Model Biases of 2002/2003 Atlantic Training Set (Bias Increment is in Degrees Longitude)
33. Actual Early Times Longitude Model Biases of the 2004 Eastern Pacific Numerical Models (Bias Increment is in Degrees Longitude)
34. Early Times Intensity Model Biases of the 2002/2003 Eastern Pacific Training Set (Bias Increment is in m.p.h.)
35. Early Times Intensity Model Biases of the 2002/2003 Atlantic Training Set (Bias Increment is in m.p.h.)
36. Actual Early Times Intensity Model Biases of the 2004 Eastern Pacific Numerical Models (Bias Increment is in m.p.h.)
37. Late Times Latitude Model Biases of the 2002/2003 Eastern Pacific Training Set (Bias Increment is in Degrees Latitude)
38. Late Times Latitude Model Biases of the 2002/2003 Atlantic Training Set (Bias Increment is in Degrees Latitude)
39. Actual Late Times Latitude Model Biases of the 2004 Eastern Pacific Numerical Models (Bias Increment is in Degrees Latitude)
40. Late Times Longitude Model Biases of the 2002/2003 Eastern Pacific Training Set (Bias Increment is in Degrees Longitude)
41. Late Times Longitude Model Biases of the 2002/2003 Atlantic Training Set (Bias Increment is in Degrees Longitude)
42. Actual Late Times Longitude Model Biases of the 2004 Eastern Pacific Numerical Models (Bias Increment is in Degrees Longitude)
43. Late Times Intensity Model Biases of the 2002/2003 Eastern Pacific Training Set (Bias Increment is in m.p.h.)
44. Late Times Intensity Model Biases of the 2002/2003 Atlantic Training Set (Bias Increment is in m.p.h.)
45. Actual Late Times Intensity Model Biases of the 2004 Eastern Pacific Numerical Models (Bias Increment is in m.p.h.)
46. 2002/2003 Atlantic Tropical Storm Training RMS Track Errors
47. 2002/2003 Atlantic Hurricane Training RMS Track Errors
48. 2002/2003 Atlantic Tropical Storm Training RMS Intensity Errors
49. 2002/2003 Atlantic Hurricane Training RMS Intensity Errors

LIST OF TABLES

1. Distribution of Numerical Models Used for Eastern Pacific Superensemble Experiments
2. 2004 Eastern Pacific Tropical Cyclones, Dates of Existence, and Maximum Intensity
3. Experiment 1 RMS Track Error Comparisons
4. Experiment 1 RMS Intensity Error Comparisons
5. Experiment 2 RMS Track Error Comparisons
6. Experiment 2 RMS Intensity Error Comparisons
7. Experiment 3 RMS Track Error Comparisons
8. Experiment 3 RMS Intensity Error Comparisons
9. Experiment 4 RMS Track Error Comparisons
10. Experiment 4 RMS Intensity Error Comparisons
11. Experiment 5 RMS Track Error Comparisons
12. Experiment 5 RMS Intensity Error Comparisons
13. Experiment 6 RMS Track Error Comparisons
14. Experiment 6 RMS Intensity Error Comparisons
15. Experiment 7 RMS Track Error Comparisons
16. Experiment 7 RMS Intensity Error Comparisons
17. Experiment 8 RMS Track Error Comparisons
18. Experiment 8 RMS Intensity Error Comparisons
19. Experiment 9 RMS Track Error Comparisons
20. Experiment 9 RMS Intensity Error Comparisons

ABBREVIATIONS AND ACRONYMS

CLIPER - Climatology and Persistence model
DSHP - Decay Statistical Hurricane Intensity Prediction Scheme model
GFDI - Geophysical Fluid Dynamics Laboratory model, interpolated
GFDL - Geophysical Fluid Dynamics Laboratory model
GFSI - Global Forecast System model, interpolated
GUNA - Combination of GFDI, UKMI, NGPI, and GFSI
GUNS - Combination of GFDI, UKMI, and NGPI
hPa - hectopascals
ITCZ - Inter-Tropical Convergence Zone
km - kilometer
NGPI - NOGAPS model, interpolated
NHC - National Hurricane Center
NOGAPS - Navy Operational Global Atmospheric Prediction System
OFCI - National Hurricane Center Official Forecast
RMS - Root Mean Square
SHF5 - 5-day Statistical Hurricane Intensity Forecast model
UKMI - United Kingdom Meteorological Office model, interpolated
UKMT - United Kingdom Meteorological Office model

ABSTRACT

For many years the tropical cyclone superensemble has shown remarkable skill in forecasting Atlantic tropical cyclone track and intensity. In this project the tropical cyclone superensemble is applied to Eastern Pacific tropical cyclone forecasting for the 2004 Eastern Pacific tropical cyclone season. First, a collection of model combination tests is conducted to discover which models perform best within the superensemble method. Then the two main questions of the thesis are addressed: does a combined Eastern Pacific and Atlantic training set provide superior forecasts to an Eastern Pacific training set alone, and do intensity-specific training sets provide superior forecasts to training sets containing storms of all intensities? In the context of the 2004 Eastern Pacific tropical cyclone season, the answer to both questions is yes. The ultimate findings are nevertheless perplexing, as an Atlantic training set provides superior forecasts compared to either an Eastern Pacific training set or a combined-basin training set. Furthermore, forecasts made using only hurricane training usually outperform forecasts made using combined-intensity training or tropical storm training. The remainder of the project uses model bias comparisons and intensity-specific error calculations to investigate why the results turned out this way.

CHAPTER ONE

INTRODUCTION

1.1 Background and Thesis Objectives

Tropical cyclones are some of the most intricate systems on the planet. These systems are vital for life on Earth, for they act to redistribute heat and angular momentum from the tropics toward the poles, helping to maintain balance across the planet. Unfortunately, the ability of meteorologists to forecast these systems, though better than decades ago, still falls short in many areas. Today, meteorologists use advanced computer models as aids in forecasting tropical cyclone track and intensity; however, even the best of these models still have significant errors, particularly in long-range forecasting. Therefore, any advance in tropical cyclone forecasting is readily accepted and greatly appreciated by the meteorological community. While those in the western hemisphere generally focus on tropical cyclones that form in the Atlantic basin, Eastern Pacific tropical cyclones are also important for a number of reasons. For this study, the Eastern Pacific is defined as the area of the Pacific Ocean east of 140°W longitude. Eastern Pacific tropical storms and hurricanes at times affect portions of Mexico and Central America, where they can cause significant wind damage along the coast while producing flooding in mountainous inland areas. Furthermore, decaying tropical cyclones sometimes move into the southwestern areas of the United States, producing flooding in normally dry, desert regions. Eastern Pacific tropical cyclones can also move into the Central Pacific and affect Hawaii on occasion, as was seen with Hurricane Iniki in 1992. Lastly, tropical cyclones of the Eastern Pacific can adversely affect shipping lanes. Clearly, tropical cyclone activity in the Eastern Pacific can affect millions of people in many different countries and regions, so improvement of tropical cyclone forecasting is vital for this basin. Over the past several years, one forecast model has shown remarkable skill in both tropical cyclone track and intensity forecasting: the Florida State Superensemble. Thus far, the Florida State Superensemble (FSSE) method has only been applied to

forecasting Atlantic Basin tropical cyclones in real time. However, research has also been conducted on the usefulness of the FSSE methodology for forecasting Western Pacific Basin tropical cyclone track and intensity, and the results indicate great promise for superior forecasts in that basin should the method ever be implemented in real time. In this study, the goal is to determine whether the FSSE method would be useful for forecasting Eastern Pacific tropical cyclone track and intensity. Judging from previous successes in other basins, the expectation is that the FSSE forecasts will be better, on average, than those of other models forecasting Eastern Pacific tropical cyclones. In addition to demonstrating the usefulness of the FSSE method in the Eastern Pacific, two other facets of this investigation are determining whether combined-basin/cross-basin training sets benefit tropical cyclone forecasting and determining whether intensity-specific training sets help tropical cyclone forecasting. Superensemble training sets will be discussed in more detail in chapter two of this thesis. The results of these experiments therefore have implications not only for the Eastern Pacific but potentially for any basin for which the FSSE makes forecasts.

1.2 Previous Work

Much research has been conducted over the past several years concerning the superensemble and its potential application to real-time tropical cyclone forecasting. Szymczak (2004) shows that the application of a synthetic superensemble can be beneficial in tropical cyclone track forecasting. The synthetic superensemble involves placing a best-fit line through a track forecast and perturbing this best-fit line with Fourier curves of varying harmonics. The results of those experiments indicate that using a synthetic superensemble in real time can improve upon track forecasts generated by the current, incremental superensemble. Next, Vijaya Kumar et al. (2003) show that the superensemble method can be used effectively in another basin for both tropical cyclone track and intensity forecasts. That publication focuses primarily on the application of the superensemble to the Western Pacific, and the results indicate that the superensemble would be an effective real-time forecast tool, with the most notable improvements in forecasts beyond 72 hours. Furthermore, a cursory examination of superensemble application to other basins is performed and indicates promise for global

application of the superensemble technique to tropical cyclone forecasting. Finally, Williford (2002) examines the use of intensity-specific training sets for tropical cyclone prediction; however, those training sets were found to show no greater skill than a larger, non-intensity-specific training set.

1.3 Organization of Thesis

This thesis is divided into five chapters. The next chapter will discuss the methodology of the superensemble and the member models included in this experiment. Chapter three will examine the average synoptic/dynamic pattern over the Eastern Pacific basin and provide an overview of the 2004 Eastern Pacific tropical cyclone season. Results of applying combined-basin/cross-basin training and intensity-specific training to Eastern Pacific superensemble forecasting will be discussed in chapter four. Finally, chapter five will present conclusions derived from these experiments along with a discussion of future work.

CHAPTER TWO

SUPERENSEMBLE METHODOLOGY

2.1 History of Superensemble

After many years of mediocre Atlantic tropical cyclone forecasts using the ensemble mean, Dr. T.N. Krishnamurti developed the multi-model superensemble forecast method in 1998. The superensemble statistical method employs past forecasts from a group of member models in an effort to correct biases in these forecasts and rank the relative strength of each of the models. This method has been shown to produce tropical cyclone forecasts that are better than those of any member model for a given time (Krishnamurti et al. 1999). The superensemble was developed as a means of improving upon ensemble mean forecasting. An ensemble mean involves a straight average of all the models involved. Another type of ensemble mean is the bias-corrected ensemble mean, in which the bias of each model is first removed. In both of these cases, unlike the superensemble, all models are given an equal weight of one. It is important to note, however, that the ensemble-mean forecasting approach has been proven a reasonable alternative to single-model forecasting. Goerss (2000) examined the utility of combining three dynamical models into an equally weighted ensemble mean. For that study, the GFDL, NOGAPS, and United Kingdom Meteorological Office models were used, and the resulting ensemble mean showed greater skill than the majority of single models, both for individual storms and in long-term averages. Following the success of the superensemble method for forecasting tropical cyclones, the superensemble technique was applied to global numerical weather prediction and climate forecasting (Krishnamurti et al. 2000a) and global precipitation forecasting (Krishnamurti et al. 2000b). The results of these experiments show high skill in all areas. However, this thesis will only examine the skill of the superensemble as it relates to tropical cyclone forecasting.
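To make the distinction concrete, the following is a minimal Python sketch of the two kinds of ensemble mean described above. The function and variable names are illustrative, not taken from any operational code:

```python
import numpy as np

def ensemble_mean(forecasts):
    """Plain ensemble mean: every member model receives equal weight."""
    # forecasts: shape (n_models,), one value per member model for a
    # single variable (latitude, longitude, or intensity).
    return float(np.mean(forecasts))

def bias_corrected_ensemble_mean(forecasts, past_forecasts, past_observed):
    """Ensemble mean after removing each model's mean bias, estimated
    from past forecasts and the corresponding observed states."""
    # past_forecasts: shape (n_models, n_training_cases)
    # past_observed:  shape (n_training_cases,)
    biases = np.mean(past_forecasts - past_observed, axis=1)
    return float(np.mean(forecasts - biases))
```

In both cases every model still contributes with a weight of one; the superensemble described next departs from this by fitting the weights themselves.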

2.2 Superensemble Description

The superensemble represents the next generation of bias-corrected ensemble mean forecasting: not only are the biases of individual models recognized and corrected, but good or bad past performances of member models are also recognized and assigned higher or lower weights accordingly. The actual weights are determined through a multiple linear regression technique, in which member-model forecasts are regressed against the observed state. Such a method attempts to reward better-performing models with higher weights and penalize poorer-performing models with lower weights (Williford 2002). The flaw in this system, however, occurs when the performance of the member models' past forecasts does not accurately represent the performance of those models in the forecast period. The superensemble method is comprised of two parts: a training phase and a forecast phase. The training phase is made up of previous forecasts from the member models and the corresponding observed states. As previously described, the multi-model variables of latitude, longitude, and intensity are regressed against the observed states through a linear regression technique. The regression involves a minimization function that acts to limit the spread between the variables of the member models and the observed state. This minimization function is described by the following equation:

J = \sum_{t=0}^{\mathrm{length}} \left( S(t) - O(t) \right)^2    (1)

where J is the minimization function, length refers to the length of the training period, S(t) is the superensemble prediction, and O(t) is the observed state (Williford 2002). The length of a particular training set appears to be important in achieving high-skill forecasts. Previous work with the superensemble indicates that somewhere between 50 and 75 forecast cases are vital for attaining good forecasts. For Atlantic tropical cyclone prediction, approximately two years of previous tropical cyclone forecasts are generally used, and in this experiment the training window was likewise set at two years. Limiting the training data to the previous two years is a matter of necessity rather than preference: member models undergo regular physical and dynamical changes that normally render a current version of a model obsolete after a finite time.
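As a concrete illustration, the training phase can be sketched in a few lines of Python: the weights a_i that minimize J in Equation (1) are obtained by a least-squares regression of the member-model anomalies against the observed anomalies. This is a minimal sketch under the definitions above; the names are illustrative and do not come from the operational FSSE code.

```python
import numpy as np

def train_superensemble(train_forecasts, train_observed):
    """Fit the regression weights a_i that minimize J (Equation 1).

    train_forecasts: shape (n_cases, n_models), past member-model
                     forecasts of one variable at one forecast hour
    train_observed:  shape (n_cases,), the corresponding observed values
    Returns the weights, the per-model mean forecasts (Fbar_i), and the
    mean observed state (Obar), all needed later in the forecast phase.
    """
    train_forecasts = np.asarray(train_forecasts, dtype=float)
    train_observed = np.asarray(train_observed, dtype=float)
    f_bar = train_forecasts.mean(axis=0)   # per-model training mean
    o_bar = train_observed.mean()          # mean observed state
    anomalies = train_forecasts - f_bar    # F_i(t) - Fbar_i
    target = train_observed - o_bar        # O(t) - Obar
    # Minimizing J over the weights is an ordinary least-squares problem.
    weights, *_ = np.linalg.lstsq(anomalies, target, rcond=None)
    return weights, f_bar, o_bar
```

With roughly 50 to 75 training cases and fewer than ten member models, the regression above is strongly overdetermined, which is one reason the training length matters.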

The ultimate desire of those working with the superensemble would be to artificially freeze the makeup of all member models from year to year; it would then be possible to use an unlimited number of years in the training set. However, this notion does not reflect reality. For the forecast phase of the superensemble, S(t) is derived by combining the coefficients gathered through the training phase with current member-model forecasts. The following equation describes the forecast phase of the superensemble:

S(t) = \bar{O} + \sum_{i=1}^{N} a_i \left( F_i(t) - \bar{F}_i \right)    (2)

where \bar{O} is the mean of the observed values over the training phase, N represents the number of member models, a_i represents the regression coefficient (weight) for model i, F_i(t) is the variable forecast made by model i, and \bar{F}_i is the mean of that variable over all the forecasts in the entire training period. The term (F_i(t) - \bar{F}_i) therefore represents the forecast anomaly value for model i, and adding the weighted sum of anomalies to \bar{O} accomplishes the bias removal (Williford 2002). The incremental technique, used for the real-time Atlantic superensemble, is also used in this study. This technique computes a value of S(t) for each variable at each forecast hour; that value is then added to the forecast for the previous forecast time. The incremental method has been proven superior to other techniques because it allows the latest data to be used as the initial forecast information (Williford 2002). A schematic view of how the superensemble works is shown in Figure 2.1.
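A corresponding sketch of the forecast phase and the incremental technique follows, reusing the quantities returned by the training sketch above. Separate weights for each variable and each forecast hour are assumed, as the text describes; the dictionary layout and names are illustrative.

```python
import numpy as np

def superensemble_forecast(current_forecasts, weights, f_bar, o_bar):
    """Equation (2): mean observed state plus weighted forecast anomalies."""
    # current_forecasts: shape (n_models,), member forecasts of one
    # variable (here, a 12-hour increment) at one forecast hour.
    return o_bar + float(np.dot(weights, np.asarray(current_forecasts) - f_bar))

def incremental_forecast(initial_value, member_increments_by_hour, trained):
    """Incremental technique: predict the change at each forecast hour
    and accumulate it onto the value forecast for the previous hour."""
    value = initial_value
    forecast_track = []
    for hour, member_increments in member_increments_by_hour:
        weights, f_bar, o_bar = trained[hour]  # coefficients fit per hour
        value += superensemble_forecast(member_increments, weights, f_bar, o_bar)
        forecast_track.append((hour, value))
    return forecast_track
```

Starting each step from the previous forecast value is what lets the latest observed position and intensity anchor the whole forecast.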

2.3 Real-Time Tropical Cyclone Superensemble

The Florida State Superensemble was first used in a research mode in 1999 to forecast for the 1998 hurricane season. A cross-validation approach was used due to major changes in the models during the late 1990s, and the training consisted of 1998 tropical cyclones. The results of these experiments showed that the FSSE performed better than all member models. During the 1999 hurricane season, the superensemble was run in real time using the interpolated versions of member models and training exclusively from 1998. Again, results for the 1999 hurricane season indicated that the FSSE showed superiority to its member-model forecasts. Therefore, based on these results, the FSSE has been operated in a real-time format ever since (Williford 2002).


Figure 2.1: Schematic of the steps in a tropical cyclone superensemble forecast

The superensemble makes forecasts for tropical cyclones once the National Hurricane Center deems a system to be of at least tropical storm strength (winds of at least 39 m.p.h.). Furthermore, tropical depressions are not included in the training sets for the Atlantic, and this experiment likewise excluded tropical depressions from its training sets.

Observed values and member-model forecasts are received through the Automated Tropical Cyclone Forecasting (ATCF) information stream, and the same stream is used by Florida State University to send superensemble forecasts to the National Hurricane Center. Currently, the superensemble is run four times a day, at 00z, 06z, 12z, and 18z, whenever a storm of at least tropical storm strength is active.

2.4 Member Models

In anticipation of using combined-basin/cross-basin training sets for the experiments in this thesis, track and intensity errors for the 2002 and 2003 hurricane seasons were examined for both the Atlantic and Pacific basins. Furthermore, an even more detailed examination was conducted of which models performed better with respect to latitude or longitude forecasts. After all examinations were completed and a series of tests conducted using various combinations of models, the combinations determined to provide the best overall forecasts were selected; they are shown in Table 2.1. Before Table 2.1 can be fully understood, however, a general explanation of time distributions must be given. Early models in this experiment refer to models used for forecasts between 12 and 72 hours, and late models refer to models used for forecasts between 84 and 120 hours. Within the superensemble it is possible to alter the times designated as early and late; however, preliminary tests indicated that these time distributions were appropriate for this experiment, and the given times were therefore used throughout all experiments. These are also the time distributions that have been and are currently being used for the real-time Atlantic tropical cyclone superensemble. All models used for the superensemble are the interpolated versions of the models, and a more detailed description of each model is provided later in this section.

Table 2.1: Distribution of Numerical Models Used for Eastern Pacific Superensemble Experiments

Early Latitude   Early Longitude   Late Latitude   Late Longitude   Early Intensity   Late Intensity
NGPI             OFCI              NGPI            OFCI             OFCI              OFCI
UKMI             GFDI              UKMI            UKMI             GFDI              GFDI
GUNS             NGPI              GUNS            GUNA             UKMI              UKMI
GUNA             -                 -               -                SHF5              SHF5
-                -                 -               -                DSHP              DSHP

The following are descriptions of all the latitude, longitude, and intensity models described in Table 2.1:

1. OFCI (NHC Official Forecast): The National Hurricane Center issues its forecast for any tropical or subtropical system deemed to be of at least tropical depression strength. These forecasts are issued four times daily at 03z, 09z, 15z, and 21z. The forecasts represent subjective collaborations of forecasters at the National Hurricane Center. Until 2002, the NHC only forecasted out to 72 hours; however, beginning in 2003, these forecasts were extended to 120 hours. The extension of the official forecast is important to the superensemble, for the superensemble can now incorporate the official forecast as part of its member model set during the late periods.

2. GFDI (Geophysical Fluid Dynamics Laboratory model - interpolated): The GFDI has provided numerical guidance to the National Hurricane Center and the rest of the meteorological community since 1995. The model is a triple-nested, primitive-equation model that has shown considerable skill in track forecasting for the Atlantic and Eastern Pacific. However, intensity forecasts from the GFDI have been known to contain large biases. Therefore, in 2001 the model was coupled with a high-resolution version of the Princeton Ocean Model in order to reduce some of the large intensity errors. This coupling helped to substantially improve GFDI intensity forecasts by specifically targeting pressure underestimations in tropical cyclones

(Bender et al. 2002). The GFDI produces forecasts four times daily at 00z, 06z, 12z, and 18z and is initialized using the NCEP Global Forecast System model.

3. NGPI (NOGAPS model - interpolated): The NOGAPS (Navy Operational Global Atmospheric Prediction System) model is a spectral model in the horizontal and an energy-conserving finite-difference model in the vertical, where sigma coordinates are used. The model's resolution is T239 with 30 sigma levels. In late 2003, the NOGAPS began using a 3-D VAR analysis scheme referred to as NAVDAS (Navy Data Assimilation). This new scheme became fully operational for the 2004 hurricane season (http://meted.ucar.edu/nwp/pcu2/nogaps/index.htm).

4. UKMI (United Kingdom Meteorological Office model - interpolated): The UKMI is a global forecast model operated by the United Kingdom Meteorological Office. The resolution of this grid point model is 432 columns and 325 rows. In 2002, an upgrade was done to the UKMI that involved the implementation of a Unified Global Model that includes changes in the dynamical/physical structure of the model (Milton et al. 2003). The UKMI is run four times daily.

5. GUNS: The GUNS model is an ensemble mean of the GFDI model, the UKMI model, and the NGPI model. Individual values from these three models are simply averaged together to provide a consensus value. This ensemble mean frequently shows greater skill in track and intensity forecasts than any one of its member models.

6. GUNA: The GUNA model is exactly like the GUNS model except for the addition of one more model, the Global Forecast System (GFSI) model, to the mix. Again, the GUNA model frequently shows greater skill in track and intensity forecasts than any one of its member models.

7. SHF5 (5-day Statistical Hurricane Intensity Forecast model): The SHF5 model uses climatological and persistence indicators to forecast intensity

change (Jarvinen and Neumann 1979). This model is analogous to the Climatology and Persistence model (CLIPER) that is widely used as a benchmark for tropical cyclone forecasting. Because the developmental sample for SHF5 excluded storms that made landfall, this model is only valid in situations where the tropical cyclone is over water (http://www.nhc.noaa.gov/aboutmodels.shtml).

8. DSHP (Decay Statistical Hurricane Intensity Prediction Scheme model): The DSHP model is a statistical model that not only uses climatological and persistence indicators for forecasting purposes, but the model also uses various synoptic indicators as well. Four synoptic indicators are used in this model, and they include the difference between the maximum potential intensity and the current storm intensity, the 850-200 hPa (hectopascals) wind shear, the 200 hPa eddy flux convergence of relative angular momentum, and the 200 hPa zonal wind and temperature within 1000 kilometers of the center of the cyclone. Unlike the original SHIPS model that was developed for cyclones that did not cross land, the DSHP model can be used to forecast intensity of decaying cyclones over land areas (http://www.nhc.noaa.gov/aboutmodels.shtml).

CHAPTER THREE

AVERAGE EASTERN PACIFIC SYNOPTIC PATTERN AND OVERVIEW OF 2004 EASTERN PACIFIC HURRICANE SEASON

3.1 Overview

The Eastern Pacific is a relatively active basin for tropical cyclone development when compared to the other tropical basins of the world. The Eastern Pacific hurricane season runs from May 15 to November 15. Between 1966 and 1996, the Eastern Pacific averaged 16.4 tropical cyclones, 9.2 hurricanes, and 4.0 major hurricanes per year. These statistics make the Eastern Pacific the second most active basin in the world for tropical cyclone frequency; the Western Pacific is currently the most active basin. Another interesting fact about Eastern Pacific tropical cyclone development is that the vast majority of cyclones develop in a small region just west of the Central American coast in the vicinity of the Gulf of Tehuantepec. This region corresponds to the area where sea-surface temperatures in excess of 29°C are normally found. (Eastern Pacific sea-surface temperatures will be discussed in more detail later in this chapter.) This region has the highest tropical cyclone genesis rate per unit area in the world. The average peak of the hurricane season occurs in late August and corresponds to the warmest sea-surface temperatures and a northward shift of the Inter-Tropical Convergence Zone. Finally, almost all tropical cyclones that develop in the Eastern Pacific form from Atlantic easterly waves that move into the Pacific, and it appears that the presence of a monsoon trough in this basin, along with the associated enhanced convergence and cyclonic vorticity, aids in the development of many waves.

3.2 Eastern Pacific Synoptic Pattern

In order to understand why tropical cyclones develop and track as they do in the Eastern Pacific, it is necessary to examine the underlying synoptic pattern over the basin during hurricane season. Therefore, this section will examine the surface pattern, 300 hPa pattern, sea-surface temperatures, and wind-shear profiles commonly found in the Eastern Pacific between May and November. For continuity purposes and for the sake

of eliminating repetitiveness, different months will be examined in each of the following sections. This approach provides a complete view of the Eastern Pacific synoptic pattern without examining all six months of the hurricane season in detail in each section.

3.2.1 Average Eastern Pacific Surface Pattern

This section examines the Eastern Pacific surface patterns observed during the months of May and August. During May, the major synoptic features include a subtropical high located north of 30°N latitude and a semi-active Inter-Tropical Convergence Zone located near 10°N. The ITCZ can actually be divided into two regions. The region east of a col located near 10°N/110°W, where northerly and southerly winds are more directly opposed, is characterized by enhanced convection and a higher likelihood of cyclogenesis during May. Note that this is the same area where the vast majority of Eastern Pacific tropical cyclones form at any time during the year. West of the col, the ITCZ is characterized as a confluence zone with reduced convective activity (Chu 1995). Figure 3.1 shows a streamline analysis that includes all the major features discussed here.

During August, the subtropical high moves to its farthest north position of the year, around 40°N. The col located near 10°N/110°W in May is now located near 12°N/128°W. Therefore, cyclogenesis and enhanced convection are now favored from the coasts of Mexico and Central America west to this point. West of the col, the confluent region remains intact, resulting in reduced convective activity. This setup is primarily responsible for the small number of tropical cyclones the Central Pacific experiences in a given year (Chu 1995). Figure 3.2 provides a streamline analysis of the August synoptic pattern, showing all the major features discussed thus far.

3.2.2 Average Eastern Pacific 300 hPa Pattern

This section examines the 300 hPa patterns observed during the months of July and September. During July, the major upper-level features include a broad trough to the north and west of Hawaii and a strong anticyclone located over the southwestern portions of the United States. It is believed that this strong anticyclone forces most July tropical cyclones to have a significant westward component

Figure 3.1: Mean surface level streamline analysis over the Pacific for May (Chu 1995)

Figure 3.2: Mean surface level streamline analysis over the Pacific for August (Chu 1995)

in their motion, as opposed to other times of the year, when storms that develop close to the Mexican and Central American coasts can move parallel to and even into the coast (Chu 1995). Figure 3.3 shows the major upper-level synoptic features discussed here along with some average July tropical cyclone tracks. During September, the broad trough to the north and west of Hawaii is still located in the same region; however, the trough is not nearly as strong. Furthermore, no remnant remains of the strong anticyclone located over the southwestern United States in July. Instead, a broad but weaker anticyclone is located over central Mexico, extending into the Eastern Pacific. This anticyclone manages to keep some September tropical cyclones on a westerly track out into the Pacific, but some systems parallel the coast or strike Mexico during this period (Chu 1995). Figure 3.4 shows the major upper-level features over the Eastern Pacific during September.

3.2.3 Average Eastern Pacific Sea-Surface Temperatures

This section discusses the Eastern Pacific sea-surface temperature patterns observed during the months of June, August, and October. The data presented in this section represent Reynolds sea-surface temperature monthly means for the period 1971-2000. For this discussion, a sea-surface temperature of 27°C is considered the minimum threshold for tropical cyclone formation. During June, a vast region of sea-surface temperatures in excess of 27°C extends from approximately 5°N to 20°N at the Mexican and Central American coasts to between 5°N and 10°N at 140°W. The warmest SSTs, approaching 30°C, are found near the Gulf of Tehuantepec. A pictorial representation of the June Eastern Pacific sea-surface temperature distribution is found in Figure 3.5. For August, sea-surface temperatures in excess of 27°C are found along the entire Mexican and Central American coasts. The entire Gulf of California has sea-surface temperatures in excess of 28°C. In fact, along the immediate coast from Central America all the way up the Mexican coast, sea-surface temperatures approach 30°C during August. Sufficient sea-surface temperatures for tropical cyclone development extend west but are only located between 5°N and 15°N at 140°W.

Figure 3.3: Mean 300 hPa level streamline analysis over the Pacific for July and associated mean July tropical cyclone tracks (Chu 1995)


Figure 3.4: Mean 300 hPa streamline analysis over the Pacific for September and associated average September tracks for tropical cyclones (Chu 1995)

A pictorial depiction of August Eastern Pacific sea-surface temperatures can be found in Figure 3.6. For October, sea-surface temperatures are still widely sufficient for tropical cyclone formation; however, a noticeable decrease in SSTs occurs between August and October, especially in the central and western portions of the Eastern Pacific. Sea-surface temperatures in excess of 27°C are found between 5°N and 25°N along and west of the Mexican and Central American coasts, while SSTs of this magnitude are limited farther west in the Pacific. A pictorial depiction of October Eastern Pacific sea-surface temperatures can be found in Figure 3.7.


Figure 3.5: Mean monthly sea-surface temperatures for June (Courtesy of www.nhc.noaa.gov)

Figure 3.6: Mean monthly sea-surface temperatures for August (Courtesy of www.nhc.noaa.gov)


Figure 3.7: Mean monthly sea-surface temperatures for October (Courtesy of www.nhc.noaa.gov)

3.2.4 Average Eastern Pacific Wind Shear Profiles

This section discusses averaged mean tropospheric wind profiles for June through August and for September through November. In the average wind profile for June through August, the subtropical jet appears to be strongest near 45°N. In the tropical regions between the equator and 20°N, easterly winds prevail and are somewhat stronger than earlier in the year. However, the easterly winds are not particularly strong, and no significant directional shear is noted. On the other hand, the upper troposphere does show some increase in easterly winds, which could possibly lead to speed-shear concerns at times. In the average wind profile for September through November, the tropical easterlies between the equator and 20°N decrease from the previous period, especially in the upper troposphere. In fact, no speed or directional shear issues are noted during this period (Chu 1995).

3.3 2004 Eastern Pacific Hurricane Season

The 2004 Eastern Pacific hurricane season featured a below-normal number of tropical storms, hurricanes, and major hurricanes. There were 12 named storms, 6 hurricanes, and 3 major hurricanes in 2004, compared with a long-term average of 16 named storms, 9 hurricanes, and 4 major hurricanes. Four tropical depressions never became tropical storms. The season was also quiet with respect to landfalling tropical cyclones, as no tropical cyclone made landfall at tropical storm or hurricane strength. Two tropical storms, Blas and Javier, affected southern portions of Baja California with gusty winds and heavy rains (Avila et al. 2004). Overall, the 2004 Eastern Pacific hurricane season will be noted for its lack of long-lived tropical systems and the relatively low impact of those tropical cyclones on land areas. A complete view of all the 2004 Eastern Pacific tropical cyclone tracks can be obtained from Figure 3.8, and a detailed description of cyclone lifespan and maximum strength can be gathered from Table 3.1. Because most of the tropical cyclones from this season were short-lived, it is worth noting the three cyclones with the longest lifespans, as these storms contributed most heavily to the 2004 Eastern Pacific track and intensity error statistics that will be discussed in detail in chapter four. The three longest-lived storms of the season were Hurricanes Howard, Isis, and Javier, described in detail below.

Hurricane Howard
Howard formed from a tropical wave that moved into the Eastern Pacific on August 26. This wave developed into a tropical depression on August 30 about 350 miles south-southwest of Acapulco, Mexico, and moved west-northwest. The depression became Tropical Storm Howard on August 31 and Hurricane Howard a day later. Howard began recurvature on September 2 and reached its maximum intensity of 140 m.p.h. Howard began weakening the next day while moving over cooler waters and was declassified on September 5 (Avila et al. 2004).

Hurricane Isis
The tropical depression that later became Isis developed on September 8 about 460 miles south of Mexico. The depression moved westward and strengthened into a tropical storm later that day but was plagued by high wind shear that soon caused Isis to weaken back to a tropical depression. Isis regained tropical storm status on September 12, well west of the tip of Baja California, as it continued west. Isis slowly strengthened over the next few days and briefly became a hurricane on September 15 just before moving over cooler waters. Isis dissipated about 1300 miles west of Baja California on September 16 (Avila et al. 2004).

Hurricane Javier
The tropical depression that later became Javier formed on September 10 about 300 miles south of the Gulf of Tehuantepec. While moving westward over the next few days, Javier strengthened into a tropical storm on the 11th and a hurricane on the 12th. Javier reached its maximum intensity of 150 m.p.h. while moving northwest, parallel to the Mexican coast. Over the next few days, Javier moved more northerly toward Baja California while losing strength over cooler waters. Javier eventually moved inland over central Baja California on September 19 as a tropical depression (Avila et al. 2004).

Figure 3.8: 2004 Eastern Pacific tropical cyclone tracks (Courtesy of www.nhc.noaa.gov)

Table 3.1: 2004 Eastern Pacific tropical cyclones, dates of existence, and maximum intensity

Storm Name          Dates                      Maximum Wind (knots)
T.S. Agatha         May 22-24                  50
T.D. Two-E          July 2-3                   30
T.S. Blas           July 12-15                 55
Hurricane Celia     July 19-25                 75
Hurricane Darby     July 26-August 1           105
T.D. Six-E          August 1                   25
T.S. Estelle        August 19-26               60
Hurricane Frank     August 23-26               75
T.D. Nine-E         August 23-26               30
T.S. Georgette      August 26-30               55
Hurricane Howard    August 30-September 5      120
Hurricane Isis      September 8-16             65
Hurricane Javier    September 11-19            130
T.S. Kay            October 4-6                40
T.S. Lester         October 11-13              45
T.D. Sixteen-E      October 25-26              30

CHAPTER FOUR

RESULTS

4.1 Overview

The outcomes of the experiments discussed in the following sections reveal some interesting, but seemingly unusual, characteristics of the superensemble method and of the training data used to arrive at these outcomes. Eight separate experiments will be examined. The first five deal strictly with initialization of the model and the effects of progressive training and combined-basin/cross-basin training on the various forecasts. Progressive training refers to the inclusion of current-year model forecasts made prior to the storm forecast currently being conducted. Previous research and trials in the Atlantic Basin indicate that progressive training does help to improve the quality of superensemble forecasts. Naturally, the effect of progressive training is most notable at the end of a given season, since that is when the most training from the current season is available. The final three experiments examine the effect of using intensity-specific training sets for superensemble forecasts. Finally, the reasoning behind why certain training sets provided better forecasts than others will be offered. It is important to note that all experiments conducted for this project were run as if the forecaster were running the superensemble in real time. No cross-validation experiments were done, for the results of such experiments could never indicate how best to run the superensemble for a particular basin in real time.
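The bookkeeping behind progressive training is simple; a minimal sketch (illustrative names, building on the training sketch in chapter two) is shown below. The key constraint from the text is that a storm's verified forecasts enter the training set only after that storm is no longer being forecast.

```python
import numpy as np

def grow_training_set(base_forecasts, base_observed, completed_storms):
    """Append verified cases from storms that have already ended this
    season to the two-year background training set. A storm's own cases
    are added only after it ends, so no forecast trains on itself."""
    forecasts, observed = base_forecasts, base_observed
    for storm_forecasts, storm_observed in completed_storms:
        forecasts = np.vstack([forecasts, storm_forecasts])
        observed = np.concatenate([observed, storm_observed])
    return forecasts, observed
```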

4.2 General Eastern Pacific Superensemble Experiments

The following sections detail the five main experiments conducted for the Eastern Pacific tropical cyclone superensemble. Notes are provided to explain the outcomes of other, less significant experiments where the reader may reasonably wonder why certain changes benefit forecasts and others do not.

Note: All figures and tables discussed in the next several sections are found at the end of this chapter in the order in which they are discussed within the chapter.

4.2.1 Experiment 1: Eastern Pacific Training Set

For the initial Eastern Pacific tropical cyclone superensemble experiment, an Eastern Pacific training set was used. This training set included all of the 2002 and 2003 Eastern Pacific tropical cyclone model forecasts from the models detailed in Table 2.1 of chapter two, section 2.4. The 2002 Eastern Pacific hurricane season had 12 named storms, 7 of which became hurricanes. The 2003 Eastern Pacific hurricane season had 16 named storms, 7 of which became hurricanes. As is done for the Atlantic real-time superensemble, all instances of tropical depressions were eliminated from the training set in every experiment conducted. Once both seasons were combined and tropical depressions eliminated, 382 forecast instances made up the Eastern Pacific training set. For this experiment, and all experiments conducted, the numbers of forecast cases for hours 12, 24, 36, 48, 60, 72, 84, 96, 108, and 120 were 138, 116, 99, 82, 69, 53, 43, 33, 23, and 14, respectively. Figures 4.1 and 4.2 provide a pictorial representation of 2004 root-mean-square track and intensity errors, respectively. Tables 4.1 and 4.2 provide numerical comparisons of 2004 root-mean-square track and intensity errors, respectively, and show how the superensemble compared with the three best track models, the NOGAPS, GUNS, and GUNA, and the two best intensity models, the DSHP and SHF5. As a forecast method touted over the years as one that can improve on even the best dynamical models, the superensemble was expected to perform reasonably well in forecasting Eastern Pacific tropical cyclones. Of course, as with all forecasting techniques, it was recognized that some tweaking of the method might be necessary in a new basin; however, the forecasts were expected to be reasonably good. The actual outcome shows that the superensemble method performed only moderately well in this basin using an Eastern Pacific training set. While track forecasts in the 12-48 hour timeframe are competitive with other model forecasts, the superensemble RMS track error is higher than that of most models by 60 hours, and the superensemble performs only marginally well with respect to the other models

at later times. Furthermore, seasonal RMS intensity errors using this training set follow the same pattern as the track errors. The superensemble performs very well from 12-60 hours; however, at later times, intensity errors increase substantially, and the superensemble is no longer competitive with most respectable intensity models.
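For reference, the verification statistics quoted throughout this chapter can be computed as sketched below. The thesis does not spell out its distance formula, so a great-circle (haversine) distance is assumed here, and the function names are illustrative.

```python
import numpy as np

def rms_track_error_km(pred_lat, pred_lon, obs_lat, obs_lon):
    """RMS great-circle track error (km) over all cases at one forecast
    hour. Inputs are 1-D arrays of degrees, one element per case."""
    r = 6371.0  # mean Earth radius in kilometers
    p1, p2 = np.radians(pred_lat), np.radians(obs_lat)
    dphi = p2 - p1
    dlam = np.radians(obs_lon) - np.radians(pred_lon)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    dist = 2.0 * r * np.arcsin(np.sqrt(a))   # haversine distance, km
    return float(np.sqrt(np.mean(dist ** 2)))

def rms_intensity_error(pred_wind, obs_wind):
    """RMS intensity error in whatever wind unit the inputs share."""
    diff = np.asarray(pred_wind) - np.asarray(obs_wind)
    return float(np.sqrt(np.mean(diff ** 2)))
```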

4.2.2 Experiment 2: Combined Eastern Pacific and Atlantic Training Set

For the second experiment of this project, the 2002 and 2003 Eastern Pacific training set was combined with a 2002 and 2003 Atlantic training set to form the complete training set for the experiment. The 2002 Atlantic hurricane season had 12 named storms, 4 of which became hurricanes, and the 2003 Atlantic hurricane season had 16 named storms, 7 of which became hurricanes. The complete training set comprised 811 forecast instances. One storm from the 2003 Atlantic hurricane season, Hurricane Kate, was eliminated from the training because of its wildly erratic track and the potential for such a track to hurt current forecasts. Figures 4.3 and 4.4 provide a pictorial representation of 2004 RMS track and intensity errors from this experiment, respectively. Tables 4.3 and 4.4 provide numerical comparisons of 2004 RMS track and intensity errors, respectively, with the best models in each category. The results indicate that including the Atlantic training in this situation does help to improve track and intensity forecasts. Track forecasts show the more remarkable improvement of the two, as all forecast times had lower RMS errors. The most notable improvements occur at the latest times (108-120 hours), when RMS errors are cut by about 10 percent. Intensity RMS errors are also reduced across the board using the combined-basin training set, and, as with the track errors, the largest reduction occurs at 108-120 hours, when errors are reduced by approximately 25 percent. While both sets of forecasts show improvements, more work still needed to be done on improving Eastern Pacific track and intensity forecasts.

4.2.3 Experiment 3: Combined Training Set and 2004 Progressive Eastern Pacific Training

For the third experiment of this project, the combined-basin training set used in the second experiment was supplemented with progressive training from the 2004 Eastern Pacific hurricane season. The number of forecast instances in this case ranged from 811 to 974. Figures 4.5 and 4.6 provide a pictorial representation of 2004 RMS track and intensity errors from this experiment, respectively. Tables 4.5 and 4.6 provide numerical comparisons of 2004 RMS track and intensity errors, respectively, along with the best models in each category. When using progressive training in the Atlantic in real time, the forecaster normally finds improvements in forecasts, with the most notable improvements coming at the end of the season. This occurs because the biases and general performance of individual models tend to be consistent within a given season. In this experiment, the track errors indicate slight overall improvement, with the most improvement in days four and five. The intensity errors experienced little change with the addition of progressive Pacific training. It is also important to note that this same experiment was conducted using 2004 progressive Atlantic training; however, the track and intensity errors showed little change from the second main experiment, and those results are therefore not shown. One must also remember that there is no problem with the independence of the training and the actual forecasts when using progressive training, because training from a particular storm is not included in the master training set as long as that storm's track and intensity are currently being forecast.

4.2.4 Experiment 4: Atlantic Training Set

Due to the success of using a combined-basin training set, the fourth experiment examined whether removing all 2002/2003 Eastern Pacific training from the combined-basin training set and using only the 2002/2003 Atlantic training set would further build on the success of the second experiment. The number of forecast instances in this experiment was 429. Figures 4.7 and 4.8 provide a pictorial representation of 2004 RMS track and

intensity errors from this experiment, respectively. Tables 4.7 and 4.8 provide numerical comparisons of 2004 RMS track and intensity errors, respectively, along with other models in each category. Some will question the validity of using one basin's training to forecast tropical cyclones in another basin, and many will find such a method counter-intuitive. However, the results of this experiment should force doubters to examine closely why such a method works in this instance. Using solely Atlantic training to forecast Pacific cyclones produces the best results seen thus far. RMS track errors using the Atlantic training set are lower than in all previous experiments, with across-the-board reductions. In fact, comparing the experiment using only Pacific training with the experiment using only Atlantic training, a 10-15 percent reduction in RMS error is noted at hours 108 and 120. Even though this track forecast represents a success, the improvement in the intensity forecast is nothing short of remarkable. RMS intensity errors indicate major improvement over previous experiments, particularly at the later times. Again, comparing the Pacific training experiment with the Atlantic training experiment, a 10-60 percent RMS error reduction is noted in the intensity forecast. Using this method, the superensemble is the best or second-best intensity model at almost all times.

4.2.5 Experiment 5: Atlantic Training Set and 2004 Progressive Eastern Pacific Training

The fifth and final experiment in this series involved using the best background training set so far, the 2002/2003 Atlantic training, and incorporating 2004 Eastern Pacific progressive training. The number of forecast instances in this experiment ranged from 429 to 592. Figures 4.9 and 4.10 provide a pictorial representation of 2004 RMS track and intensity errors from this experiment, respectively. Tables 4.9 and 4.10 provide numerical comparisons of 2004 RMS track and intensity errors, respectively, along with other models in each category. The improvements seen in this experiment were similar to those seen when adding progressive training in experiment three. RMS track errors decreased

overall, with the largest reductions in the later periods. As in experiment three, RMS intensity errors experienced little change from the previous experiment.

4.3 Eastern Pacific Superensemble Experiments Using Filtered Training

Filtered training sets can be used in tropical cyclone superensemble for a variety of reasons. A filter can involve removing gross outliers from the training set. Another filter can involve removing storms with wildly unusual tracks from the training set. For this next series of experiments, a filter that separated training with respect to intensity was used. Two categories of training, tropical storm and hurricane, were used. Tropical storm training included only instances when the observed intensity was less than 74 m.p.h. while hurricane training included only instances when the observed intensity was equal to or greater than 74 m.p.h. The reason that such a filter was instituted was more intuitive than scientific; however, some scientific reasoning exists behind the decision. Tropical storms more closely resemble other tropical storms, and hurricanes more closely resemble other hurricanes. Therefore, why would you choose to use tropical storms in a training set to forecast the future position and intensity of a hurricane, and vice versa? Because of this reasoning, three experiments were conducted in this series: using a tropical storm training set to forecast tropical storms and a hurricane training set to forecast hurricanes, using a hurricane training set to forecast both tropical storms and hurricanes, and using a tropical storm training set to forecast both tropical storms and hurricanes. The background training used in all these experiments was the 2002/2003 Atlantic training set described in experiment four because of its superior performance to all other training sets.
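The intensity filter itself reduces to a mask on the observed wind at each training time; a minimal sketch under the 74 m.p.h. threshold stated above follows (function and variable names are illustrative):

```python
import numpy as np

HURRICANE_THRESHOLD_MPH = 74.0  # boundary between tropical storm and hurricane

def split_training_by_intensity(train_forecasts, train_observed_wind):
    """Split a training set into tropical-storm-only and hurricane-only
    subsets based on the observed intensity (assumed in m.p.h.)."""
    forecasts = np.asarray(train_forecasts)
    obs_wind = np.asarray(train_observed_wind)
    ts_mask = obs_wind < HURRICANE_THRESHOLD_MPH
    hu_mask = ~ts_mask
    ts_set = (forecasts[ts_mask], obs_wind[ts_mask])
    hu_set = (forecasts[hu_mask], obs_wind[hu_mask])
    return ts_set, hu_set
```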

4.3.1 Experiment 6: Separate Tropical Storm and Hurricane Training Sets

For this experiment, two training sets were developed: one containing only tropical storms and the other containing only hurricanes. The tropical storm training set was used to forecast at times when the current storm was of tropical storm strength, and the hurricane training set was used at times when the current storm was of hurricane strength. The tropical storm training set comprised

266 forecast instances; the hurricane training set consisted of 163. Figures 4.11 and 4.12 provide a pictorial representation of 2004 RMS track and intensity errors from this experiment, respectively. Tables 4.11 and 4.12 provide numerical comparisons of 2004 RMS track and intensity errors, respectively, along with other models in each category. The results of this experiment partially confirmed what had seemed intuitive all along. RMS track errors continue to decrease from the magnitudes seen in previous experiments. The most dramatic improvement occurs at hours 108 and 120, where the superensemble reduces its RMS error by between 100 and 200 kilometers. Furthermore, superensemble track forecasts are at their best in this experiment: the superensemble is among the top track models at all times and clearly performs best at hours 108 and 120. While the track forecasts are best using this method, the intensity forecasts are, unfortunately, not as good. Although the RMS intensity errors in this case are not bad, they are not quite as low as in experiment four, when unseparated Atlantic training was used.
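Operationally, experiment six amounts to a selection rule applied at each forecast time. A minimal sketch, under the same hypothetical data layout as before:

    def choose_training(current_intensity_mph, ts_training, hu_training,
                        threshold_mph=74):
        # Forecast times when the storm is at tropical storm strength use
        # the tropical storm set; hurricane-strength times use the
        # hurricane set.
        if current_intensity_mph >= threshold_mph:
            return hu_training
        return ts_training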

4.3.2 Experiment 7: Hurricane Training Set (Charts on pages 46 and 47) For this experiment, the hurricane training set described in the last section was used to forecast all tropical storms and hurricanes of the 2004 Eastern Pacific hurricane season. The training set consisted of 163 forecast instances. Figures 4.13 and 4.14 provide a pictorial representation of 2004 RMS track and intensity errors from this experiment, respectively. Tables 4.13 and 4.14 provide numerical comparisons of 2004 RMS track and intensity errors, respectively, along with other models in each category. The results of this experiment were partially expected and partially surprising. RMS track errors were considerably higher when using the hurricane training set to forecast all storms, particularly at later periods. This outcome seems reasonable, since hurricane training would not be expected to predict the tracks of tropical storms accurately, particularly weak tropical storms. On the other hand, intensity forecasting is significantly improved using only the hurricane training set. In fact, the intensity

forecasts are better than those of all other models at the majority of times using this method. This outcome seems counter-intuitive; some possible explanations are explored in section 4.4.

4.3.3 Experiment 8: Tropical Storm Training Set (Charts on pages 48 and 49) For this experiment, the tropical storm training set described in section 4.3.1 was used to forecast all tropical storms and hurricanes of the 2004 Eastern Pacific hurricane season. The training set consisted of 266 forecast instances. Figures 4.15 and 4.16 provide a pictorial representation of 2004 RMS track and intensity errors from this experiment, respectively. Tables 4.15 and 4.16 provide numerical comparisons of 2004 RMS track and intensity errors, respectively, along with other models in each category. The results of this experiment were expected: tropical storm training does not effectively forecast track and intensity changes in hurricanes. Overall, hours 12-96 show greater RMS track errors than in the previous experiment. Hours 108-120 show relatively low RMS errors only because most of the forecasts made at those hours were tropical storm forecasts. The intensity forecasts also show a degradation of skill from the previous experiment, with every forecast hour having a higher RMS error.

4.3.4 Best Composite Forecast: Experiment 9 (Charts on pages 50 and 51) In an effort to further reduce the 2004 Eastern Pacific season track errors, several additional modifications were made to the best training set (2002/2003 Atlantic hurricanes). First, track errors for each synoptic time in the training were examined, and times with unusually large overall track errors were expunged from the training. Six instances fit this criterion (four instances from 2002 and two from 2003). Next, subjective elimination of various latitude/longitude points was conducted by comparing the forecast value for a given model with the actual value; again, latitude/longitude points with unusually large errors were eliminated. Finally, it is logical to assume that the suite of models that performed

best in the early experiments may not perform best now that the training set has been modified so extensively through the various experiments in this project. Therefore, another round of model trials was conducted, and a better collection of models was identified. For the final experiment, the models used included OFCI, GFSI, UKMI (latitude only), NGPI (longitude only), GFDI, and GUNA. The results of this final experiment show superensemble tropical cyclone track errors that are lower than those of any other track model during the 2004 season. Furthermore, superensemble intensity errors are also lower than those of any other intensity model in the 2004 season. Figures 4.17 and 4.18 provide a pictorial representation of RMS track and intensity errors for this final experiment, respectively. Tables 4.17 and 4.18 offer comparisons of RMS track and intensity errors between the superensemble and other models. Some may question the validity of expunging bad forecasts from training because of the potential for those forecasts to improve future superensemble forecasts. While this argument may have some merit for a few individual forecasts, the removal of unusually erroneous forecasts has proven to improve superensemble forecasts. Because the superensemble weights are fit to the training data, such grossly erroneous forecasts tend to degrade the model's overall performance.
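The outlier-expunging step of experiment nine can likewise be sketched as a simple filter over the training instances. The mean_track_error_km helper and the threshold value below are illustrative assumptions; the study's actual screening combined an overall error criterion with subjective, point-by-point elimination.

    def expunge_outliers(training_instances, mean_track_error_km,
                         threshold_km=1000.0):
        # 'mean_track_error_km' is a hypothetical helper returning the
        # overall (multi-model mean) track error for one synoptic time;
        # the threshold here is illustrative, not the study's criterion.
        kept, dropped = [], []
        for inst in training_instances:
            (dropped if mean_track_error_km(inst) > threshold_km
             else kept).append(inst)
        return kept, dropped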

4.4 Potential Explanations for Experimentation Outcomes

While the experiments detailed previously in this chapter offer solutions for how to most accurately use the superensemble to forecast the 2004 Eastern Pacific hurricane season, some rather unusual results arose from those experiments, and further explanation of why these techniques produced good outcomes is needed. Specifically, two questions need to be addressed. First, why did an Atlantic training set perform better than a Pacific training set when forecasting Pacific tropical cyclones? Could the bias corrections noted in the Atlantic 2002/03 training set be more similar to the actual bias corrections of the Eastern Pacific forecast year? Second, why did solely hurricane training perform better overall than tropical storm training? Examining bias corrections and the relative hurricane/tropical storm training errors helps to explain these outcomes.
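For reference in the discussion that follows, the regression at the heart of the superensemble (collective bias removal followed by a least-squares fit of one weight per member model, as outlined in Chapter 2) can be sketched as below. This is a schematic of the standard multimodel superensemble formulation, not the operational FSU code; the array layout is an assumption for illustration.

    import numpy as np

    def train_weights(model_fcsts, obs):
        # model_fcsts: (n_times, n_models) member forecasts over training;
        # obs: (n_times,) verifying observations.  Weights are fit by
        # least squares on anomalies (each model's forecast minus its own
        # training mean; observations minus the observed mean).
        model_means = model_fcsts.mean(axis=0)
        obs_mean = obs.mean()
        weights, *_ = np.linalg.lstsq(model_fcsts - model_means,
                                      obs - obs_mean, rcond=None)
        return weights, model_means, obs_mean

    def superensemble(weights, model_means, obs_mean, new_fcsts):
        # Forecast = observed training mean plus the weighted sum of the
        # bias-removed member forecasts.
        return obs_mean + np.dot(new_fcsts - model_means, weights)

Because both the weights and the removed means come entirely from the training set, the training set's bias corrections directly shape every forecast, which is why their resemblance to the forecast year matters.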


4.4.1 Bias Correction (Charts on pages 52-57) Examining the bias corrections computed for all the training sets and for the Eastern Pacific 2004 forecast season helps to explain why an Atlantic training set outperformed an Eastern Pacific training set in this experiment. Early-time and late-time latitude, longitude, and intensity bias corrections were examined for all data sets. At the vast majority of times, the Atlantic 2002/03 bias corrections are more similar to the 2004 Eastern Pacific bias corrections than the 2002/03 Eastern Pacific bias corrections are. The latitude bias corrections for the early times illustrate this point well: the 2002/03 Atlantic bias corrections and the 2004 Eastern Pacific bias corrections mirror one another closely, while the 2002/03 Eastern Pacific bias corrections bear little resemblance to the 2004 bias corrections. In other instances, even where the 2002/03 Atlantic and 2004 bias corrections do not track one another so closely, the magnitudes and signs of the Atlantic bias corrections more closely resemble the 2004 bias corrections. In some instances the 2002/03 Eastern Pacific bias corrections do more closely resemble the 2004 bias corrections; however, such instances are few, and in an overall sense the Atlantic bias corrections more closely resemble those of the forecast year.
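The bias increments plotted in Figures 4.19-4.36 are, in essence, mean forecast-minus-observed differences stratified by forecast hour. A minimal sketch, assuming for illustration that the verification pairs are already grouped by hour:

    def mean_bias_by_hour(pairs_by_hour):
        # pairs_by_hour maps a forecast hour to a list of
        # (forecast, observed) pairs; the bias increment is the mean
        # forecast-minus-observed difference at that hour.
        return {hour: sum(f - o for f, o in pairs) / len(pairs)
                for hour, pairs in pairs_by_hour.items()}

    # Hypothetical latitude pairs (degrees) for one model at two hours.
    pairs = {12: [(15.3, 15.1), (18.0, 18.2)], 24: [(16.0, 15.6), (19.1, 18.8)]}
    print(mean_bias_by_hour(pairs))  # roughly {12: 0.0, 24: 0.35}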

4.4.2 Hurricane Training (Charts on pages 58 and 59) Determining why 2002/03 Atlantic hurricane training provides better track and intensity forecasts than 2002/03 Atlantic tropical storm training is a distinct problem whose solution probably lies in the relative track and intensity errors of the two training sets. Examining the track error differences between the tropical storm and hurricane training sets shows across-the-board improvements for the hurricane training set: every track model used in this experiment has lower track errors in the hurricane training set than in the tropical storm training set. With regard to intensity, tropical storm training appears to have lower intensity errors through sixty hours, while at later times the intensity errors of the hurricane training are lower. However, using tropical storm training for the early times still yields inferior

intensity forecasts compared with those using hurricane training. It is therefore hypothesized that the similarities between the Atlantic bias corrections and the 2004 forecast-year bias corrections help explain this discrepancy. In any case, hurricane training in an overall sense provides much better track and intensity forecasts than tropical storm training. The natural question following such a finding is why errors tend to be larger when using tropical storm training. The likely explanation is that many models have difficulty with tropical storm initialization, especially for weak tropical storms. The storm may be lost by the models in a short time, or the models may not have a good handle on the storm and may intensify or weaken it at unrealistic rates. Such intensity errors in the models frequently affect model track. Hurricanes, on the other hand, are more established entities; models tend to initialize these stronger storms better and often anticipate their future track and intensity changes more reliably.


[Figure 4.1: Experiment 1 RMS Track Errors. 2004 RMS track errors (km) versus forecast hour (12-120 h); models: OFCI, GFDI, NGPI, UKMI, GUNA, GUNS, ENSM, BCEM, FSSE; training: 2002/2003 Pacific.]

Table 4.1: Experiment 1 RMS Track Error Comparisons (RMS error in km)

Model     24 h     48 h     72 h     96 h    120 h
NGPI    128.59   210.30   315.79   414.98   534.30
GUNS     99.93   171.45   257.49   407.03   750.08
GUNA     94.35   169.87   269.77   430.73   770.77
FSSE    100.09   188.42   305.50   496.28   841.77


[Figure 4.2: Experiment 1 RMS Intensity Errors. 2004 RMS intensity errors (mph) versus forecast hour (12-120 h); models: OFCI, GFDI, UKMI, SHF5, DSHP, ENSM, BCEM, FSSE; training: 2002/2003 Pacific.]

Table 4.2: Experiment 1 RMS Intensity Error Comparisons (RMS error in mph)

Model     24 h     48 h     72 h     96 h    120 h
DSHP      7.85    10.01     9.97     9.91     7.80
SHF5      9.04    11.07    10.44    10.50    11.10
FSSE      7.56     9.76    12.44    19.27    22.42


[Figure 4.3: Experiment 2 RMS Track Errors. 2004 RMS track errors (km) versus forecast hour (12-120 h); models: OFCI, GFDI, NGPI, UKMI, GUNA, GUNS, ENSM, BCEM, FSSE; training: 2002/2003 Atlantic and Pacific.]

Table 4.3: Experiment 2 RMS Track Error Comparisons (RMS error in km)

Model     24 h     48 h     72 h     96 h    120 h
NGPI    128.59   210.30   315.79   414.98   534.30
GUNS     99.93   171.45   257.49   407.03   750.08
GUNA     94.35   169.87   269.77   430.73   770.77
FSSE     97.95   183.54   290.41   465.40   772.61


[Figure 4.4: Experiment 2 RMS Intensity Errors. 2004 RMS intensity errors (mph) versus forecast hour (12-120 h); models: OFCI, GFDI, UKMI, SHF5, DSHP, ENSM, BCEM, FSSE; training: 2002/2003 Atlantic and Pacific.]

Table 4.4: Experiment 2 RMS Intensity Error Comparisons (RMS error in mph)

Model     24 h     48 h     72 h     96 h    120 h
DSHP      7.85    10.01     9.97     9.91     7.80
SHF5      9.04    11.07    10.44    10.50    11.10
FSSE      7.45     9.73    10.94    15.40    16.73


[Figure 4.5: Experiment 3 RMS Track Errors. 2004 RMS track errors (km) versus forecast hour (12-120 h); models: OFCI, GFDI, NGPI, UKMI, GUNA, GUNS, ENSM, BCEM, FSSE; training: 2002/2003 Atlantic and Pacific plus 2004 progressive Pacific.]

Table 4.5: Experiment 3 RMS Track Error Comparisons (RMS error in km)

Model     24 h     48 h     72 h     96 h    120 h
NGPI    128.59   210.30   315.79   414.98   534.30
GUNS     99.93   171.45   257.49   407.03   750.08
GUNA     94.35   169.87   269.77   430.73   770.77
FSSE     97.89   182.19   288.79   462.90   765.78


[Figure 4.6: Experiment 3 RMS Intensity Errors. 2004 RMS intensity errors (mph) versus forecast hour (12-120 h); models: OFCI, GFDI, UKMI, SHF5, DSHP, ENSM, BCEM, FSSE; training: 2002/2003 Atlantic and Pacific plus 2004 progressive Pacific.]

Table 4.6: Experiment 3 RMS Intensity Error Comparisons (RMS error in mph)

Model     24 h     48 h     72 h     96 h    120 h
DSHP      7.85    10.01     9.97     9.91     7.80
SHF5      9.04    11.07    10.44    10.50    11.10
FSSE      7.47     9.76    11.21    15.97    17.37


[Figure 4.7: Experiment 4 RMS Track Errors. 2004 RMS track errors (km) versus forecast hour (12-120 h); models: OFCI, GFDI, NGPI, UKMI, GUNA, GUNS, ENSM, BCEM, FSSE; training: 2002/2003 Atlantic.]

Table 4.7: Experiment 4 RMS Track Error Comparisons (RMS error in km)

Model     24 h     48 h     72 h     96 h    120 h
NGPI    128.59   210.30   315.79   414.98   534.30
GUNS     99.93   171.45   257.49   407.03   750.08
GUNA     94.35   169.87   269.77   430.73   770.77
FSSE     97.79   181.98   287.94   456.23   749.30


[Figure 4.8: Experiment 4 RMS Intensity Errors. 2004 RMS intensity errors (mph) versus forecast hour (12-120 h); models: OFCI, GFDI, UKMI, SHF5, DSHP, ENSM, BCEM, FSSE; training: 2002/2003 Atlantic.]

Table 4.8: Experiment 4 RMS Intensity Error Comparisons (RMS error in mph)

Model     24 h     48 h     72 h     96 h    120 h
DSHP      7.85    10.01     9.97     9.91     7.80
SHF5      9.04    11.07    10.44    10.50    11.10
FSSE      7.89    10.70    11.22     9.21     9.60


[Figure 4.9: Experiment 5 RMS Track Errors. 2004 RMS track errors (km) versus forecast hour (12-120 h); models: OFCI, GFDI, NGPI, UKMI, GUNA, GUNS, ENSM, BCEM, FSSE; training: 2002/2003 Atlantic plus 2004 progressive Pacific.]

Table 4.9: Experiment 5 RMS Track Error Comparisons (RMS error in km)

Model     24 h     48 h     72 h     96 h    120 h
NGPI    128.59   210.30   315.79   414.98   534.30
GUNS     99.93   171.45   257.49   407.03   750.08
GUNA     94.35   169.87   269.77   430.73   770.77
FSSE     98.23   181.25   286.60   453.90   741.64


[Figure 4.10: Experiment 5 RMS Intensity Errors. 2004 RMS intensity errors (mph) versus forecast hour (12-120 h); models: OFCI, GFDI, UKMI, SHF5, DSHP, ENSM, BCEM, FSSE; training: 2002/2003 Atlantic plus 2004 progressive Pacific.]

Table 4.10: Experiment 5 RMS Intensity Error Comparisons (RMS error in mph)

Model     24 h     48 h     72 h     96 h    120 h
DSHP      7.85    10.01     9.97     9.91     7.80
SHF5      9.04    11.07    10.44    10.50    11.10
FSSE      7.80    10.46    11.28    11.23    10.37


[Figure 4.11: Experiment 6 RMS Track Errors. 2004 RMS track errors (km) versus forecast hour (12-120 h); models: OFCI, GFDI, NGPI, UKMI, GUNA, GUNS, ENSM, BCEM, FSSE; training: separate tropical storm/hurricane sets.]

Table 4.11: Experiment 6 RMS Track Error Comparisons (RMS error in km)

Model     24 h     48 h     72 h     96 h    120 h
NGPI    128.59   210.30   315.79   414.98   534.30
GUNS     99.93   171.45   257.49   407.03   750.08
GUNA     94.35   169.87   269.77   430.73   770.77
FSSE     99.19   186.77   302.99   465.36   573.37


[Figure 4.12: Experiment 6 RMS Intensity Errors. 2004 RMS intensity errors (mph) versus forecast hour (12-120 h); models: OFCI, GFDI, UKMI, SHF5, DSHP, ENSM, BCEM, FSSE; training: separate tropical storm/hurricane sets.]

Table 4.12: Experiment 6 RMS Intensity Error Comparisons (RMS error in mph)

Model     24 h     48 h     72 h     96 h    120 h
DSHP      7.85    10.01     9.97     9.91     7.80
SHF5      9.04    11.07    10.44    10.50    11.10
FSSE      7.93    11.82    11.82    10.29     9.94


[Figure 4.13: Experiment 7 RMS Track Errors. 2004 RMS track errors (km) versus forecast hour (12-120 h); models: OFCI, GFDI, NGPI, UKMI, GUNA, GUNS, ENSM, BCEM, FSSE; training: hurricane set only.]

Table 4.13: Experiment 7 RMS Track Error Comparisons (RMS error in km)

Model     24 h     48 h     72 h     96 h    120 h
NGPI    128.59   210.30   315.79   414.98   534.30
GUNS     99.93   171.45   257.49   407.03   750.08
GUNA     94.35   169.87   269.77   430.73   770.77
FSSE     95.00   176.20   280.65   456.87   760.93


[Figure 4.14: Experiment 7 RMS Intensity Errors. 2004 RMS intensity errors (mph) versus forecast hour (12-120 h); models: OFCI, GFDI, UKMI, SHF5, DSHP, ENSM, BCEM, FSSE; training: hurricane set only.]

Table 4.14: Experiment 7 RMS Intensity Error Comparisons (RMS error in mph)

Model     24 h     48 h     72 h     96 h    120 h
DSHP      7.85    10.01     9.97     9.91     7.80
SHF5      9.04    11.07    10.44    10.50    11.10
FSSE      7.58     9.79    10.11     8.52    12.55


[Figure 4.15: Experiment 8 RMS Track Errors. 2004 RMS track errors (km) versus forecast hour (12-120 h); models: OFCI, GFDI, NGPI, UKMI, GUNA, GUNS, ENSM, BCEM, FSSE; training: tropical storm set only.]

Table 4.15: Experiment 8 RMS Track Error Comparisons (RMS error in km)

Model     24 h     48 h     72 h     96 h    120 h
NGPI    128.59   210.30   315.79   414.98   534.30
GUNS     99.93   171.45   257.49   407.03   750.08
GUNA     94.35   169.87   269.77   430.73   770.77
FSSE    103.02   194.45   317.63   463.49   569.18


[Figure 4.16: Experiment 8 RMS Intensity Errors. 2004 RMS intensity errors (mph) versus forecast hour (12-120 h); models: OFCI, GFDI, UKMI, SHF5, DSHP, ENSM, BCEM, FSSE; training: tropical storm set only.]

Table 4.16: Experiment 8 RMS Intensity Error Comparisons (RMS error in mph)

Model     24 h     48 h     72 h     96 h    120 h
DSHP      7.85    10.01     9.97     9.91     7.80
SHF5      9.04    11.07    10.44    10.50    11.10
FSSE      9.26    12.01    11.51    10.28    13.00


[Figure 4.17: Experiment 9 RMS Track Errors. 2004 Eastern Pacific RMS track errors (km) versus forecast hour (12-120 h); models: OFCI, GFDI, NGPI, UKMI, GUNA, GUNS, ENSM, BCEM, FSSE; training: filtered training and adjusted models.]

Table 4.17: Experiment 9 RMS Track Error Comparisons (RMS error in km)

Model     24 h     48 h     72 h     96 h    120 h
NGPI    128.59   210.30   315.79   414.98   534.30
GUNS     99.93   171.45   257.49   407.03   750.08
GUNA     94.35   169.87   269.77   430.73   770.77
FSSE     90.64   161.50   268.02   429.92   687.53


[Figure 4.18: Experiment 9 RMS Intensity Errors. 2004 Eastern Pacific RMS intensity errors (mph) versus forecast hour (12-120 h); models: OFCI, GFDI, UKMI, SHF5, DSHP, ENSM, BCEM, FSSE; training: filtered training and adjusted models.]

Table 4.18: Experiment 9 RMS Intensity Error Comparisons (RMS error in mph)

Model     24 h     48 h     72 h     96 h    120 h
DSHP      7.85    10.01     9.97     9.91     7.80
SHF5      9.04    11.07    10.44    10.50    11.10
FSSE      7.58     9.69     9.77     8.24    12.14


[Figure 4.19: Early Times (12-72 hours) Latitude Model Biases for the 2002/2003 Eastern Pacific Training Set (bias increment in degrees latitude); models: NGPI, UKMI, GUNS.]
[Figure 4.20: Early Times (12-72 hours) Latitude Model Biases for the 2002/2003 Atlantic Training Set (bias increment in degrees latitude); models: NGPI, UKMI, GUNS.]

[Figure 4.21: Actual Early Times Latitude Model Biases of the 2004 Eastern Pacific Numerical Models (bias increment in degrees latitude); models: NGPI, UKMI, GUNS.]

[Figure 4.22: Early Times (12-72 hours) Longitude Model Biases of the 2002/2003 Eastern Pacific Training Set (bias increment in degrees longitude); models: OFCI, GFDI, NGPI, GUNA.]
[Figure 4.23: Early Times (12-72 hours) Longitude Model Biases of the 2002/2003 Atlantic Training Set (bias increment in degrees longitude); models: OFCI, GFDI, NGPI, GUNA.]

[Figure 4.24: Actual Early Times Longitude Model Biases of the 2004 Eastern Pacific Numerical Models (bias increment in degrees longitude); models: OFCI, GFDI, NGPI, GUNA.]


[Figure 4.25: Early Times (12-72 hours) Intensity Model Biases of the 2002/2003 Eastern Pacific Training Set (bias increment in m.p.h.); models: OFCI, GFDI, UKMI, SHF5, DSHP.]
[Figure 4.26: Early Times (12-72 hours) Intensity Model Biases of the 2002/2003 Atlantic Training Set (bias increment in m.p.h.); models: OFCI, GFDI, UKMI, SHF5, DSHP.]

[Figure 4.27: Actual Early Times Intensity Model Biases of the 2004 Eastern Pacific Numerical Models (bias increment in m.p.h.); models: OFCI, GFDI, UKMI, SHF5, DSHP.]


[Figure 4.28: Late Times (84-120 hours) Latitude Model Biases of the 2002/2003 Eastern Pacific Training Set (bias increment in degrees latitude); models: NGPI, UKMI, GUNS.]
[Figure 4.29: Late Times (84-120 hours) Latitude Model Biases of the 2002/2003 Atlantic Training Set (bias increment in degrees latitude); models: NGPI, UKMI, GUNS.]

[Figure 4.30: Actual Late Times Latitude Model Biases of the 2004 Eastern Pacific Numerical Models (bias increment in degrees latitude); models: NGPI, UKMI, GUNS.]


[Figure 4.31: Late Times (84-120 hours) Longitude Model Biases of the 2002/2003 Eastern Pacific Training Set (bias increment in degrees longitude); models: OFCI, UKMI, GUNA.]
[Figure 4.32: Late Times (84-120 hours) Longitude Model Biases of the 2002/2003 Atlantic Training Set (bias increment in degrees longitude); models: OFCI, UKMI, GUNA.]

[Figure 4.33: Actual Late Times Longitude Model Biases of the 2004 Eastern Pacific Numerical Models (bias increment in degrees longitude); models: OFCI, UKMI, GUNA.]


[Figure 4.34: Late Times (84-120 hours) Intensity Model Biases of the 2002/2003 Eastern Pacific Training Set (bias increment in m.p.h.); models: OFCI, GFDI, UKMI, SHF5, DSHP.]
[Figure 4.35: Late Times (84-120 hours) Intensity Model Biases of the 2002/2003 Atlantic Training Set (bias increment in m.p.h.); models: OFCI, GFDI, UKMI, SHF5, DSHP.]

[Figure 4.36: Actual Late Times Intensity Model Biases of the 2004 Eastern Pacific Numerical Models (bias increment in m.p.h.); models: OFCI, GFDI, UKMI, SHF5, DSHP.]


[Figure 4.37: 2002/2003 Atlantic Tropical Storm Training RMS Track Errors (km) versus forecast hour (12-120 h); models: OFCI, GFDI, NGPI, UKMI, GUNA, GUNS.]

[Figure 4.38: 2002/2003 Atlantic Hurricane Training RMS Track Errors (km) versus forecast hour (12-120 h); models: OFCI, GFDI, NGPI, UKMI, GUNA, GUNS.]


[Figure 4.39: 2002/2003 Atlantic Tropical Storm Training RMS Intensity Errors (mph) versus forecast hour (12-120 h); models: OFCI, GFDI, UKMI, SHF5, DSHP.]

[Figure 4.40: 2002/2003 Atlantic Hurricane Training RMS Intensity Errors (mph) versus forecast hour (12-120 h); models: OFCI, GFDI, UKMI, SHF5, DSHP.]

CHAPTER FIVE

CONCLUSIONS AND FUTURE WORK

5.1 Conclusions

The purpose of this investigation was to determine whether training from two different oceanic basins could be combined to increase the skill of superensemble tropical cyclone forecasts. A second aspect of the project focused on whether intensity-specific training sets could further enhance the accuracy of these forecasts. It was demonstrated that a combined Atlantic and Eastern Pacific training set does reduce track and intensity errors for superensemble tropical cyclone forecasts in the 2004 Eastern Pacific season. However, it was also determined that a solely Atlantic training set performed better than either an Eastern Pacific training set or a combined-basin training set in this particular experiment. Last, it was determined that a training set involving only hurricanes outperforms a combined tropical storm/hurricane training set in this experiment. While it is relatively easy to explain why hurricane training does a better job of forecasting tropical cyclones, it is more difficult to make sense of improved Pacific tropical cyclone forecasts resulting from training drawn from a completely different basin. The proximate reason can be seen in the bias charts presented in this paper; however, the physical reason for such an outcome is still unknown. Atlantic cyclones and Pacific cyclones historically have different track and intensity characteristics; they can have different formation characteristics, and the weakening process for cyclones in the two basins is somewhat different. Yet, for some reason, the characteristics of the models in the Atlantic 2002/2003 seasons more closely resembled the characteristics of the models in the 2004 Eastern Pacific season. Whether this outcome is applicable to future seasons is a question that will take several years and many more tests to answer. It is possible that several years of using this method will show that Atlantic training consistently provides better track and intensity forecasts, in which case the question will become what aspect of Pacific training prevents it from accurately modeling its own basin. However, other tests could reveal that in some years Eastern Pacific training performs better than Atlantic

training for forecasting Eastern Pacific storms. If such an event were to occur, the superensemble method would have to be examined more closely to determine how overall model performance and model bias really do change from storm to storm and from year to year.

5.2 Future Work

There exist many possibilities for future work with the tropical cyclone superensemble. First, the superensemble method assumes that model performance and bias remain relatively consistent from year to year as long as the model itself is not changed. Several experiments could be conducted to determine whether model biases change between storms and between years, and tests could likewise determine whether overall model performance changes from year to year. Second, since this project has shown that filtering training sets in specific ways can work, more research could determine whether large training sets can usefully be filtered by cyclogenesis location, by recurving versus non-recurving tracks, or by cyclones of extratropical origin versus cyclones that were purely tropical since inception. Finally, the possibility of including non-conventional models in the superensemble could be explored. For example, if a group of forecasters consistently produced tropical cyclone forecasts, those human forecasts could be utilized as member models, provided enough training existed for such a trial. Naturally, many more areas within the superensemble canopy could use additional research and examination, and work in the areas mentioned above may lead to further ideas concerning the tropical cyclone superensemble.
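As one example of the track filters suggested above, a recurvature flag could be computed directly from a storm's longitude history. The toy test below is not from this thesis: it simply asks whether the storm's westernmost longitude falls strictly inside its track, i.e. whether it moved west and later turned back east; a real filter would also consider heading and latitude.

    def recurved(track_lons):
        # Toy recurvature test.  Longitudes are degrees east (negative
        # west); the westernmost point lying strictly inside the track
        # means the storm moved west and then turned back east.
        i = track_lons.index(min(track_lons))
        return 0 < i < len(track_lons) - 1

    print(recurved([-100.0, -104.5, -108.2, -106.0]))  # True: turned back east
    print(recurved([-100.0, -104.5, -108.2, -110.1]))  # False: still moving west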

REFERENCES

Avila et al., 2004: Monthly Tropical Weather Summary. Available at the National Hurricane Center website: http://www.nhc.noaa.gov/archive/2004/tws/MIATWSEP_nov.shtml?

Bender, M., T. Marchok, and R. Tuleya, 2002: Draft changes to the GFDL Hurricane Forecast System for 2002 including implementation of two nested grid configuration. NOAA Tech. Memo. NWS TPB 492, Silver Spring, MD.

Chu, Jan-Hwa, 1995: Tropical Cyclone Forecasters’ Reference Guide. Naval Research Laboratory. Available: http://www.nmlmry.navy.mil/~chu/tropcycl.htm.

Elsberry, R.L., 1995: Global Perspectives on Tropical Cyclones. World Meteorological Organization, 289 pp.

Goerss, J.S., 2000: Tropical cyclone track forecasts using an ensemble of dynamical models. Mon. Wea. Rev., 128, 1187-1193.

Jarvinen, B.R., and C.J. Neumann, 1979: Statistical forecasts of tropical cyclone intensity. NOAA Tech. Memo. NWS NHC-10, 22 pp.

Krishnamurti, T.N., C.M. Kishtawal, T. LaRow, D. Bachiochi, Z. Zhang, C.E. Williford, S. Gadgil, and S. Surendran, 1999: Improved skills for weather and seasonal climate forecasts from multimodel superensemble. Science, 285, 1548-1550.

Krishnamurti, T.N., C.M. Kishtawal, Z. Zhang, T. LaRow, D. Bachiochi, E. Williford, S. Gadgil, and S. Surendran, 2000a: Multimodel ensemble forecasts for weather and seasonal climate. J. Climate, 13, 4196-4216.

Krishnamurti, T.N., C.M. Kishtawal, D.W. Shin, and C.E. Williford, 2000b: Improving tropical cyclone precipitation forecasts from a multianalysis superensemble. J. Climate, 13, 4217-4227.

Milton, S., I. Culverwell, G. Greed, and M. Willett, 2003: Improving tropical performance: I- Diagnosing errors in the New Unified Model (cycle G27). Forecasting Research Technical Report No. 401. United Kingdom Meteorological Office, Exeter, Devon, UK.

Szymczak, H., 2004: Skill of Synthetic Superensemble Hurricane Forecasts for the Canadian Maritime Provinces. M.S. Thesis, The Florida State University, Tallahassee, FL 32306, 87 pp.

Vijaya Kumar, T.S.V., T.N. Krishnamurti, M. Fiorino, and M. Nagata, 2003: Multimodel superensemble forecasting of tropical cyclones in the Pacific. Mon. Wea. Rev., 131, 574-583.


Williford, C. E., 2002: Real-time Superensemble Tropical Cyclone Prediction. Ph.D. Dissertation, The Florida State University, Tallahassee, FL 32306, 144 pp.

Information on NOGAPS model: http://meted.ucar.edu/nwp/pcu2/nogaps/index.htm

Tropical cyclone model information (Provided by the National Hurricane Center): http://www.nhc.noaa.gov/aboutmodels.shtml

BIOGRAPHICAL SKETCH

Mark Rickman Jordan II was born on August 26, 1982, in Charleston, South Carolina. In 2000, Jordan graduated from Bishop England High School in Charleston, SC, with honors and began college at the University of North Carolina-Asheville in Asheville, North Carolina. While in college, Jordan was a member of Phi Eta Sigma and was President of the university’s chapter of the American Meteorological Society. In December 2003, Jordan graduated Summa cum Laude from the University of North Carolina-Asheville with a degree in Atmospheric Sciences. Jordan subsequently began graduate school in January 2004 at Florida State University under the direction of Dr. T.N. Krishnamurti. Jordan will complete his Master of Science in Meteorology in December 2005.
