PROBABLE MAXIMUM LOSSES OF INSURED ASSETS FROM

NATURAL AND MANMADE CATASTROPHES IN AUSTRALIA

WILLIAM GARDNER BEc

OCTOBER 2006

A THESIS PRESENTED TO THE SCHOOL OF

PHYSICAL GEOGRAPHY OF MACQUARIE UNIVERSITY

IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE

PHD DEGREE IN RESEARCH

Declaration

I, William Gardner, hereby declare that this thesis has not been submitted for a higher degree to any other university or institution, and to the best of my knowledge, represents my own work, except where due reference is made and acknowledged.

October 2006


Abstract

In a first attempt to quantify the national catastrophe exposure of insured property, this thesis uses a range of models of varying complexity to estimate the Australian industry’s exposure to natural perils. These perils include cyclones, earthquakes, hail, bushfire, and severe thunderstorms. The focus is exclusively on the insurance industry and so wider community economic losses are ignored.

To achieve this goal, the author first developed a national database of residential and commercial property exposure at the postcode level of resolution. The next major tasks were to develop a stochastic national loss model, implement a published earthquake model and combine the results of these models with extreme value statistics of losses from other perils. An Australian terrorism loss model developed by the author in the course of this thesis is also presented here to highlight differences between the modelling of man-made perils and those arising from natural hazards; the most notable difference being the difficulty in defining the probability of an attack compared with the return period of natural hazards of varying severity.

The key results for natural perils are expressed as Probable Maximum Losses (PML) at given Return Periods and are given in the Table below.

Return Period (years)   Cyclone ($m)   Earthquake ($m)   Other Perils ($m)   Combined ($m)
1,000                   11,945         42,226            5,953               42,551
250                     4,699          7,639             3,622               10,530
100                     2,164          2,884             2,360               5,421

Probable Maximum Loss by Return Period

These numbers can be used by insurers and international reinsurance companies to price catastrophe risk.

The thesis quantifies the level of uncertainty in these results and provides commentary on how the information provided can be used by the insurance industry, the government and the public to provide a better understanding of natural peril risk to insured property in Australia.


Acknowledgements

I wish to thank the following people who have contributed to the production of this thesis:

• my supervisors over the years: Professors John Pollard, Russell Blong and John McAneney;

• my employers: Aon Re Australia, Impact Forecasting LLC and Aon Re Inc.;

• along with others who have assisted me in various aspects of the thesis including Dr George Walker, Dr Siamak Daneshvaran, Phil Heckman, Dr Andres Mendez, Dr Pierre Wiart and John Aquino; and

• most importantly, my wife Donna and my family who have patiently allowed me to work on and complete this research and document.


Table of Contents

DECLARATION
ABSTRACT
ACKNOWLEDGEMENTS
TABLE OF CONTENTS

CHAPTER 1 - INTRODUCTION
    Catastrophes in Australia
    Insurance Companies' Catastrophe Concerns
    Insurance Company Need for Catastrophe Models
    Insurance Regulation in Relation to Catastrophe Models
    Guidelines in the United States
    Objectives of Thesis

CHAPTER 2 - CATASTROPHE MODELS
    Definition
    Probable Maximum Loss
    Model Complexity and Uncertainty

CHAPTER 3 - EXTREME VALUE ANALYSIS
    Methodology
    Analysis of Australian Catastrophe Data
    Other Methods of Fit
    Distribution of nth Largest Value
    Conclusions

CHAPTER 4 - EXPOSURE DATABASE
    Overview
    Census data
    Insurer Data
    Average Value
    Commercial Risks
    Motor Risks
    Construction Material
    Building Age
    Total Exposure
    Proportion Insured
    Data Quality

CHAPTER 5 - CYCLONE MODEL
    Overview
    History of Tropical Cyclone Modelling
    Modern Models of Cyclone Risk
    Cyclone Structure
    General Methodology

CHAPTER 6 – DEVELOPMENT OF A CYCLONE EVENT SET
    Historic Event Database
    Event Start Position Generation
    Starting Central Pressure
    Maximum Potential Intensities
    Starting Direction
    Forward Speed
    Radius of Cyclone
    Event Development

CHAPTER 7 – DETERMINATION OF CYCLONE LOSSES
    Overview
    Wind Profile Shape
    Daneshvaran/Gardner Wind Model
    Terrain/Topography Adjustments
    Hazard Model Validation
    Building Vulnerability Functions
    Loss Calculations
    Model PML Results

CHAPTER 8 - DEVELOPMENT OF A GIS EARTHQUAKE ANALYSIS BASED ON A PUBLISHED MODEL
    Earthquakes
    Earthquakes in Australia
    Earthquake Models for Australia
    Simulation of Earthquake Events
    Modification for Soil Characteristics
    Loss Calculations
    Hazard Validation against Major Australian Earthquakes
    Loss Validation against the Newcastle Earthquake
    PML Results

CHAPTER 9 - OTHER NATURAL HAZARDS
    Methodology
    Loss Data
    Development of Current Day Losses
    Increase to Current Exposure
    Curve Fitting
    Event Set Derivation
    Industry Losses

CHAPTER 10 – MAN MADE DISASTERS
    Definition of Terrorism
    The Australian Terrorism Insurance Act 2003
    Models
    Natural versus Man-made Disasters
    Modelling the Where
    Modelling the What
    The When
    Using the Terrorism Model Results
    Industry PMLs for Terrorism
    Conclusions

CHAPTER 11 – MODEL RESULTS
    Summary of Modelled Australian PMLs
    Comparison by Peril
    Model Variability and Sensitivity to Assumptions
    Quantification of Epistemic Uncertainty in the Cyclone Model
    Contribution of Extreme Events to Total Risk

CHAPTER 12 - DISCUSSION
    Research Aims
    Further Uses of the Models
    Further Research

CHAPTER 13 - SUMMARY AND CONCLUSIONS
    Key Findings

APPENDIX A – ICA LARGE EVENT DATABASE

APPENDIX B – STATISTICAL DISTRIBUTIONS

APPENDIX C – VON MISES PARAMETER ESTIMATION

APPENDIX D – DANESHVARAN AND GARDNER WIND MODEL

APPENDIX E – REFERENCES

Chapter 1 - Introduction

Catastrophes in Australia

The Australian continent is prone to a wide variety of natural perils including cyclones, earthquakes, hail storms, bush fires and droughts, all of which can cause considerable loss of life and damage to property1.

Of special concern to the insurance industry are high-impact, low-frequency events, referred to here as catastrophes. By definition, catastrophes involve large losses and violate the basic tenets of insurance – independence and diversifiability of risk.

General attributes of catastrophes include:

• unpredictability

• low frequency

• large footprint – wide areal extent of correlated losses

• contagious - in the sense of creating losses across multiple lines of business.

Because of Australia’s relatively recent colonisation and short written history, documented experience of many of these disasters is limited. This has important implications for the insurance industry: a meagre actuarial database means that there is no alternative but to fall back upon models and simulated experience in order to anticipate likely future losses. In other words, the known past is but a poor indicator of the future.

At the time of writing there were no models available that could quantify the risk for all of Australia and for all major perils.

This thesis develops and uses a range of models of varying complexity to estimate the industry exposure to the major natural perils. The work in this thesis represents a first attempt to quantify the national catastrophe exposure of insured assets.

1 Bureau of Transport and Regional Economics, 2001, “Economic Costs of Natural Disasters in Australia”, Commonwealth of Australia

Also presented is a terrorism model developed by the author in order to compare the differences in the modelling of man-made, deliberate acts of destruction with those arising from natural hazards.

The focus of this thesis is exclusively that of the insurance industry and so wider community economic losses are ignored.

Insurance Companies' Catastrophe Concerns

Problems for an insurer due to catastrophe losses arise both from extreme events and from the accumulation of many smaller events over the term of a policy. These losses need to be managed by insurers and their cost spread equitably among the policyholders.

For increasingly larger events, there comes a point where an insurance company’s capital can be at risk. Imagine, for example, an insurer having a given amount of free capital that incurs thousands of claims caused by a direct hit to a major metropolitan area from an earthquake; the total liability could exceed the company’s assets. Even if this were not the case, the financial rating of the company could still be adversely affected and the level of solvency as measured by insurance regulation put at risk.

To protect themselves against losses that might otherwise cause insolvency, direct insurers reinsure themselves against these big risks. International reinsurers accept catastrophe risks from around the globe working on the assumption that these events are uncorrelated. In other words these reinsurance losses are diversified internationally.

Currently the purchase of catastrophe reinsurance costs the Australian insurance industry several hundreds of millions of dollars each year. In the absence of a prolonged history of losses, insurers and reinsurers resort to catastrophe models to guide decision-making on the optimum level of retention, how much catastrophe cover to purchase and how to price this risk. Ideally these models estimate event losses as a function of the average return period of the loss or the annual exceedance probability.

This thesis develops a range of models and attempts to establish a country-wide, multi-peril analysis for key risks.


Insurance Company Need for Catastrophe Models

Insurance companies need ways to quantify the risk due to natural and man-made disasters across the country and for risks of different types. The costs of future catastrophes need to be allocated among policyholders in the same way that premiums for car owners' insurance vary by postcode in relation to variations in burglary rates.

Models for more frequent perils such as car accidents, burglary and household fires are based on statistical analysis of actual claims data. Statistical and actuarial techniques establish suitable premium rates by location and type of risk. Such approaches fail for low frequency catastrophic events where there is rarely sufficient historical data upon which to base such calculations.

To get around this problem, the insurance industry has turned to the scientific and engineering communities. The solution is catastrophe modelling, which estimates the potential risk based not on past experience but on the current or projected exposure of the company and the exceedance probability of large losses. One of the models outlined in this thesis simulates losses from synthetic catalogues of earthquakes at fine-scale geographical resolutions such as postcodes. The catalogues duplicate, as far as possible, the seismicity of the region in terms of the magnitude and frequency attributes of earthquakes. The model must also account for the influence of local soils that can amplify or attenuate the intensity of ground shaking, as well as the vulnerability of different construction types and their spatial distribution of value. In fully stochastic models, uncertainties in the key input variables are also simulated. Risk, defined here as the contingent financial liability, arises as a function of the hazard, the building vulnerability and the insured value at risk.
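The loss calculation described above can be sketched in a few lines of Python; the postcodes, shaking intensities, vulnerability curve and insured values below are invented purely for illustration and are not outputs of any model in this thesis.

```python
# Illustrative sketch only: a minimal hazard x vulnerability x exposure
# loss calculation of the kind described above. All numbers are invented.

def vulnerability(intensity):
    """Hypothetical damage ratio (0-1) as a function of local shaking intensity."""
    if intensity < 5.0:
        return 0.0
    return min(1.0, 0.02 * (intensity - 5.0) ** 2)

# One synthetic event: local intensity by postcode (after any soil
# amplification), alongside the insured value at risk in each postcode.
intensity_by_postcode = {"2300": 8.0, "2290": 7.0, "2280": 6.0}
value_by_postcode = {"2300": 900e6, "2290": 600e6, "2280": 400e6}

# Event loss = sum over locations of value x damage ratio.
event_loss = sum(
    value_by_postcode[pc] * vulnerability(intensity_by_postcode[pc])
    for pc in intensity_by_postcode
)
print(f"Simulated event loss: ${event_loss / 1e6:.0f}m")
```

Running such a calculation over a whole synthetic event catalogue, and weighting each event by its modelled frequency, yields the loss distribution from which PMLs are read off.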

Depending on the peril, fully stochastic models may not be available. In such cases, it is necessary to make do with simple statistical extrapolations from limited claims data. Examples of both types of models will be used in this thesis. The aim is not to develop state-of-the-art models for each peril but to obtain a first order estimate of the nation’s reinsurance needs, a task that, to the knowledge of the author, has never before been attempted.


Insurance Regulation in Relation to Catastrophe Models

Following the collapse of the Australian insurance giant HIH, once Australia’s second largest insurance company, the insurance industry has become more heavily regulated. Recent changes to Australian legislation include revised Australian Prudential Regulation Authority (APRA) Capital Adequacy Guidelines. These guidelines include references to setting appropriate capital reserves2 to allow for natural disasters, and methods to determine appropriate levels of reserves based on a range of risks including estimates of financial consequences from extreme events due to risk concentrations3.

APRA Guidance Note GGN 110.5, "Concentration Risk Capital Charge", identifies the requirement for insurers to retain excess assets or have reinsurance contracts to fund losses arising from an event that would occur every 250 years. It sets out the issues that an insurer should consider when setting its Maximum Event Retention (MER), which is the amount of excess capital that would be required to fund such an event.

The determination of this 1 in 250 year Probable Maximum Loss (PML) may be made using computer models developed in-house or by external providers. The guidelines require that when such models are used, "APRA will expect the insurer to be able to demonstrate an understanding of the model in estimating the MER. This understanding will include: a. The type of data and assumptions used in the model; b. The methodology used to incorporate the data and assumptions into the model; and c. The sensitivity of the resulting MER figure to changes in the model's assumptions."

Although the intent of these new regulations is admirable, there are issues relating to the ability of some companies to comply. It will often fall upon the company actuary to provide guidance as to appropriate compliance. Unfortunately, actuarial training in Australia does not currently cover the modelling of natural hazards at the level of detail needed to satisfy APRA requirements. Often actuaries will have little choice but to rely upon catastrophe modelling software provided by professional modelling companies. However, detailed documentation that might allow the actuary to have a comprehensive understanding of the processes and parameters employed is seldom made available by the modelling firms. Therefore, reliance upon the catastrophe models of external parties could be seen as not fulfilling the letter of the law. In the US, proposed guidelines are even more exacting. This is examined below.

2 APRA Prudential Standard GPS 110, July 2002 (Amended January 2005), "Capital Adequacy for General Insurers", Australian Prudential Regulation Authority

3 APRA Guidance Note GGN 110.5, July 2002, "Concentration Risk Capital Charge", Australian Prudential Regulation Authority

Guidelines in the United States

An actuarial guideline4 of the United States of America Casualty Actuarial Society relevant to catastrophe risk analysis outlines a number of operating principles to be followed by actuaries when relying on catastrophe models. It suggests:

"In determining the appropriate level of reliance, the actuary should consider the following: a. Whether the individual or individuals upon whom the actuary is relying are experts in the applicable field; b. The extent to which the model has been reviewed or opined on by experts in the applicable field, including any known significant differences of opinion among experts concerning aspects of the model that could be material to the actuary's use of the model; and c. Whether there are standards that apply to the model or to the testing or validation of the model, and whether the model has been certified as having met such standards."

The standards referred to in (c) above include the Florida Loss Commission standards for admission of commercial modelling tools for use in premium rating in that State. These standards outline a series of tests of the model and require modelling firms to disclose many of the assumptions and parameters in any model.

4 Actuarial Standards Board, June 2000, “Actuarial Standard of Practice No. 38 - Using Models Outside the Actuary's Area of Expertise (Property and Casualty)”, Casualty Actuarial Society, Doc. No. 071

The Standard of Practice suggests that the actuary have a reasonable understanding of the model components, of the inputs required and the level of detail required to produce results and output consistent with its intended use. The clear implication is that actuaries and insurance companies that depend on their advice should not blindly rely on models provided by external experts.

Objectives of Thesis

This thesis provides an overview of the development of catastrophe models for each of the major natural and man-made perils affecting property in Australia. Its main objectives are to:

1. Develop a database representative of Australian property exposure

2. Develop a tropical cyclone model for Australia and use it to determine industry loss distributions

3. Implement a published model for earthquakes in Australia and determine industry loss distributions for this peril

4. Estimate distributions of potential losses from perils other than cyclone and earthquake using extreme value techniques

5. Compare and contrast models for natural and man-made perils

6. Integrate the individual peril exposure to obtain a multi-hazard, national industry measurement of risk.


Chapter 2 - Catastrophe Models

Definition

The term Catastrophe Model, or "Cat Model", describes algorithms and methodologies that are usually implemented in computer software packages to determine the physical or financial outcome of "catastrophic" natural perils. These perils include, but are not limited to:

• Tropical Cyclones

• Earthquakes

• Hail Storms

• Floods

• Bush fires

• Thunderstorms

• Landslides

• Tsunamis

The aim of the models is to obtain an estimate of the frequency and severity distribution. For this thesis, frequency is the annual frequency of events, assumed to be governed by a Poisson process. The severity distribution describes the distribution of losses in Australian dollars conditional on an event happening. The frequency and severity distribution can be used to determine the Probable Maximum Loss, the average annual loss amount, or other statistics useful for the insurance industry.


Probable Maximum Loss

Probable Maximum Loss (PML) is defined as the estimate of the loss having a certain probability of exceedance (pE). The probability is usually described in terms of a certain return period, or Average Return Interval (ARI), the average amount of time expected to elapse between events of this magnitude. For a Poisson process, the probability of exceedance and the ARI are related by:

pE = 1 - exp(-N/ARI) (2.1)

where N is the number of years.

For N = 1, and for large ARI, the annual exceedance probability approaches:

pE = 1/ARI (2.2)

The cumulative probability then corresponds to 1 minus the inverse of the return period. For example, the 100 year return period PML would be equivalent to the loss amount at the 99th percentile on the loss distribution, while the 250 year return period PML would be equivalent to the loss amount at the 99.6th percentile on the loss distribution.
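As a quick numerical check of Equations (2.1) and (2.2), the exact one-year exceedance probability can be compared with its large-ARI approximation:

```python
import math

# Check of Equations (2.1) and (2.2): for N = 1 year, the exact exceedance
# probability pE = 1 - exp(-1/ARI) approaches 1/ARI as the ARI grows.
for ari in (10, 100, 250, 1000):
    exact = 1 - math.exp(-1 / ari)
    approx = 1 / ari
    print(f"ARI {ari:>5}: exact pE = {exact:.6f}, 1/ARI = {approx:.6f}")
```

Even at an ARI of 10 years the two agree to within about five per cent, and by 250 years the difference is negligible for practical purposes.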

PMLs can be described in terms of either Aggregate or Occurrence losses. The Aggregate PML is the expected maximum total losses in a year that will be observed with a given ARI. The Occurrence PML is the expected maximum loss from any one event in a given period of time. Aggregate PMLs are always greater than Occurrence PMLs, and most often, Occurrence PMLs approach Aggregate PMLs at the larger return periods. For purposes of this thesis, PMLs will be calculated as Occurrence PMLs.
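The distinction between the two PML types can be illustrated with a small simulation; the Poisson frequency and Pareto-type severity parameters below are invented for illustration and are not taken from any of the thesis models.

```python
import math
import random

# Illustrative sketch of Occurrence versus Aggregate PMLs from simulated
# years: Poisson event counts with an arbitrary heavy-tailed severity.
random.seed(1)
N_YEARS = 50_000
FREQ = 0.5  # assumed mean number of events per year

def sample_poisson(lam):
    """Knuth's method, adequate for small Poisson means."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < limit:
            return k
        k += 1

def severity():
    """Arbitrary heavy-tailed event loss in $m (illustrative parameters)."""
    return 10.0 * random.paretovariate(1.5)

year_max, year_total = [], []
for _ in range(N_YEARS):
    losses = [severity() for _ in range(sample_poisson(FREQ))]
    year_max.append(max(losses, default=0.0))   # drives the Occurrence PML
    year_total.append(sum(losses))              # drives the Aggregate PML

def pml(yearly_values, ari):
    """Loss level exceeded on average once every `ari` years."""
    return sorted(yearly_values, reverse=True)[N_YEARS // ari]

for ari in (100, 250):
    print(f"{ari}-yr Occurrence PML: ${pml(year_max, ari):,.0f}m, "
          f"Aggregate PML: ${pml(year_total, ari):,.0f}m")
```

Because a year's total loss is always at least its largest single loss, the Aggregate PML can never fall below the Occurrence PML at any return period, consistent with the statement above.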


Figure 2.1 – Occurrence PML curve for a Hypothetical Portfolio (Occurrence PML, $m, against Return Period, years)

Figure 2.1 shows how the PML varies with ARI. In accordance with the definitions given above, PML values cannot be given without an associated return period. However, the 1973 Insurance Act described PMLs without any mention of a return period. This inadequacy has been remedied in the recently introduced APRA regulations. It is also important to specify whether the topic of discussion is about a single or multi-peril PML and whether reference is to an ICA zone or whole of portfolio PML.

Model Complexity and Uncertainty

Catastrophe (CAT) models vary enormously in complexity and quality. A model may be as simple as a "back of the envelope" calculation, or complicated enough to require banks of computers to undertake the calculations.

It is often assumed that increasing complexity will yield greater accuracy. Unfortunately this is often not true.

There are two major classes of uncertainty that contribute to overall uncertainty in model outcomes:

• Aleatory


• Epistemic

Aleatory uncertainty describes the scientific uncertainty arising from randomness in the variables being modelled. It is the uncertainty that manifests itself as increased PMLs at every return period. This uncertainty is dealt with by expressing the range of possible values for variables or parameters in the model using statistical distributions.

Epistemic uncertainty, or lack of knowledge, is more difficult to quantify. It refers to different possible states of nature, not all of which can be true. It is usually addressed by maintaining multiple models, which are interpreted as providing different views of the science.

Statistical and Geophysical Models

In this thesis, “Statistical” models refer to those models based on an analysis of loss statistics, in terms of financial amounts, rather than the result of attempting to model actual interactions in the physical world. “Geophysical” models on the other hand attempt to describe the scientific and engineering aspects of real world conditions leading to losses from natural perils.

Chapter 3 demonstrates the statistical approach to determining PML curves, together with methods for quantifying the uncertainty around the fitted curve. In later chapters, a series of more complex models is developed.


Chapter 3 - Extreme Value Analysis

Methodology

In order to illustrate the need for CAT models, a discussion of extreme value theory must first be undertaken. The thesis will have to resort to such statistics to deal with the more frequent and often unmodelled perils such as bushfires, hailstorms, floods and thunderstorms. However, provided a suitable foundation of data exists, extreme value theory and statistics can provide reasonable PML results at the lower return periods; in Australia, this limits the method to return periods of about 50 years.

One important theorem of extreme value theory states that the maximum of a sequence of observations, under very general conditions, is approximately distributed as the generalized extreme value distribution. This distribution has three forms: (1) Gumbel (light tail), (2) Frechet (heavy tail), and (3) Weibull (bounded tail). In terms of the tail of a distribution, the corresponding theorem states that the observations exceeding a high threshold, under very general conditions, are approximately distributed as the generalized Pareto distribution of which the exponential (light tail) and Pareto (heavy tail) are special cases.

The modern approach to extreme value analysis is based on a point process representation, equivalent to: (i) a Poisson process governing the rate of occurrence of exceedance above a high threshold; and (ii) a generalized Pareto distribution for the excess beyond the threshold.

The method involves ranking loss information from individual events over a period of time and calculating associated exceedance probabilities. Extrapolation of fitted curves can provide a rough guide to PMLs at greater return periods. The analysis below looks at how stable this extrapolation is given different extreme value distributions.

Analysis of Australian Catastrophe Data

The Insurance Council of Australia (ICA) maintains a list of major disasters for all large events since 1967, along with losses scaled by inflation to the release date. The 37 years of data up to 31 December 2003 are shown in Appendix A.

The individual event losses have been ranked and return periods ARI calculated as


ARI = (n+1)/rank (3.1)

where n is the number of years of data and rank is the rank of the loss in current dollars.
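The ranking step of Equation (3.1) can be sketched as follows; the loss figures below are invented and are not the ICA data.

```python
# Sketch of Equation (3.1): hypothetical event losses ($m) are ranked in
# descending order and assigned return periods ARI = (n + 1) / rank,
# with n = 37 years of record as for the ICA data.
n = 37
losses = sorted([2100, 1700, 980, 640, 410, 260, 150], reverse=True)

for rank, loss in enumerate(losses, start=1):
    ari = (n + 1) / rank
    print(f"rank {rank}: ${loss:,}m, ARI = {ari:.1f} years")
```

The largest loss on record is thus assigned a return period of 38 years, the second largest 19 years, and so on down the ranking.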

The resulting PML curve is shown in Figure 3.1, along with a simple linear fit.

Figure 3.1 - Fitted Linear PML Curve for All Australian Events (1967-2001): Probable Maximum Loss ($m) against Return Period (years); fitted line y = 48.672x, R² = 0.9753

The fitted curve gives the PML amount as

x = 48.672 × ARI (3.2)

where x is the PML in millions of Australian dollars.

The return period represents the inverse of the probability of exceedance such that

ARI = 1/[1 - F(x)] (3.3)

where F(x) is the cumulative probability of loss x. The probability density function for the PML, f(x), can be obtained by substituting Equation (3.2) for the ARI in Equation (3.3), which is rewritten as:

x = 48.672/[1 - F(x)] (3.4)

and finally by solving for F(x) and differentiating to obtain:


f(x) = 48.672/x^2 (3.5)

Extrapolating the fitted curve, the 1000-year PML for Australia is estimated as $48.7 billion.

Other Methods of Fit

Other methods of fit for the historic data may be used and, as will be demonstrated, may result in a wide range of extrapolated PML amounts at higher return periods. One such alternative is to assume that the underlying losses follow a Pareto distribution. This is equivalent to fitting a linear relationship to the logarithm of the losses as a function of the logarithm of the ARI, as shown in Figure 3.2. Details of the various statistical distributions are given in Appendix B.

Figure 3.2 - Pareto fit to ICA data: ln[Probable Maximum Loss ($m)] against ln[Return Period (years)]; fitted line y = 1.0364x + 3.8298, R² = 0.9898

The fitted curve relationship has the form:

ln(x) = 1.0364 ln(ARI) + 3.8298 (3.6)

Extrapolating to the 1000-year return period, the PML1000 is estimated to be $59.2 billion, a figure not too dissimilar to that estimated previously.

Finally, a third fit was undertaken using the log-transformed losses and the natural logarithm of the log-transformed return periods, as shown in Figure 3.3. This approach is consistent with assuming that the underlying distribution of losses is a Weibull Distribution.

Figure 3.3 - Weibull fit to ICA data: ln[Probable Maximum Loss ($m)] against ln ln[Return Period (years)]; fitted line y = 2.1262x + 4.5776, R² = 0.962

The fitted relationship is

ln(x) = 2.1262 ln(ln(ARI)) + 4.5776 (3.7)

Extrapolating to an ARI of 1000 years gives a PML1000 of $5.9 billion, an order of magnitude lower than the previous estimates.

The fit statistics for each of the three methods are shown in Table 3.1.

Test                Method 1 - Inverse Polynomial   Method 2 - Pareto   Method 3 - Weibull
Adjusted R-Square   97.53%                          98.98%              96.2%

Table 3.1 - Fit statistics on ICA data

While each of the three approaches provides a reasonable statistical description of the data, extrapolation beyond this range results in widely different estimates with the 1000-year losses varying between $5.9 and $59.2 billion dollars.
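The sensitivity to the assumed tail can be reproduced directly from the fitted coefficients quoted in Equations (3.2), (3.6) and (3.7):

```python
import math

# Recomputing the three 1000-year extrapolations from the fitted
# coefficients in Equations (3.2), (3.6) and (3.7), to show how strongly
# the assumed tail shape drives the extrapolated PML.
ARI = 1000.0

pml_linear = 48.672 * ARI                                          # Eq (3.2), $m
pml_pareto = math.exp(1.0364 * math.log(ARI) + 3.8298)             # Eq (3.6), $m
pml_weibull = math.exp(2.1262 * math.log(math.log(ARI)) + 4.5776)  # Eq (3.7), $m

for name, pml in [("Inverse Polynomial", pml_linear),
                  ("Pareto", pml_pareto),
                  ("Weibull", pml_weibull)]:
    print(f"{name}: PML1000 = ${pml / 1000:.1f}bn")
```

With these coefficients the three estimates come out at roughly $48.7, $59.2 and $5.9 billion respectively, matching the figures quoted above.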

Transforming the curves back onto linear axes, Figure 3.4 shows the three fitted curves compared to the original data for return periods up to 1000 years.

Figure 3.4 - Extrapolated fits to ICA data: Probable Maximum Loss ($m) against Return Period (years) for the Inverse Polynomial, Pareto and Weibull fits, together with the historic losses

Clearly the error in the prediction expands as one moves beyond the data. How much confidence can there be in estimates beyond the maximum point and how much should the maximum point be expected to vary? This is examined next.

Distribution of nth Largest Value

The kth smallest value in a selection of n samples from a given distribution with probability density function f(x) and cumulative distribution function F(x) can be shown to follow a distribution with density function:

fk,n(x) = P(kth order statistic of n samples lies at x)

= P(X = x) × P(k-1 samples lie below x) × P(n-k samples lie above x) × (number of permutations)

= f(x) [F(x)]^(k-1) [1-F(x)]^(n-k) n!/[(k-1)!(n-k)!] (3.8)

The distribution of the maximum is found by setting k = n:

fn,n(x) = n f(x) [F(x)]^(n-1) (3.9)
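Equation (3.9) can be checked by simulation; for the uniform distribution on [0, 1] it reduces to n·x^(n-1), whose mean is n/(n+1):

```python
import random

# Simulation check of Equation (3.9): the maximum of n uniform samples has
# density n * x**(n-1) on [0, 1], so its mean should equal n / (n + 1).
random.seed(0)
n, trials = 37, 200_000

mean_max = sum(
    max(random.random() for _ in range(n)) for _ in range(trials)
) / trials

print(f"mean of maximum of {n} uniforms: {mean_max:.4f} "
      f"(theory: {n / (n + 1):.4f})")
```

Here n = 37 matches the extent of the ICA record, and the simulated mean agrees with the analytic value to three decimal places.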

The distribution of the second largest sample from the selection of 37 points (extent of ICA data) is shown in Figure 3.5 where a Weibull distribution has been assumed.

Figure 3.5 - Example of range of second largest sample on Weibull Distribution: Probable Maximum Loss ($m) against Return Period (years, log scale)

If PMLs are determined using the extreme value method, then extrapolations beyond the maximum historic return period are characterised by a wide dispersion of possible PML results and must be viewed with extreme caution.

For comparative purposes, the variability around the twenty year return period value, approximated here as the variability around the second largest point in Figure 3.5, was calculated. The standard deviation of this amount is approximately 56% of the mean value. This point will be reviewed in later discussion (Chapter 11).

Conclusions

In summary, although the extreme value method may serve as a "quick & dirty" approach for determining PMLs at low return periods, its ability to predict high return period PMLs is limited and extrapolations should be undertaken only as a last resort. This is the fundamental reason why CAT models are increasingly replacing extrapolated experience for the prediction of low probability events.

The following chapters demonstrate the construction of such models for certain hazards.


Chapter 4 - Exposure Database

Overview

Before PMLs can be simulated for Australia, an estimate of the national property portfolio is required. At the time of writing, no publicly available national database of property exposure existed, so one had to be created from other sources.

This thesis will analyse the exposure in insured property assets as at the end of 2003. It will cover domestic dwellings and businesses of all types and contents therein. In the case of residential property, exposure is estimated by combining information in the Australian Census with portfolio data from a number of insurance companies comprising a mix of construction types, ages and value of buildings and contents coverage.

Commercial exposure is more problematic. Nonetheless, an estimate of such losses is required in order to compare modelled and historic losses, for which often only a combined total loss figure is available. The approach when making comparisons is to assume direct commercial losses to be equal to total residential losses. If the one outlying point is ignored, Table 4.1 gives some support for this assumption in the case of floods. In the Lismore flood, only the proportions of losses were available, from which the ratio was calculated, while the Queanbeyan ratio was derived from the outcome of an economic model rather than measurement.

The reader is reminded that the estimation of wider indirect economic losses lies outside the scope of the study and will be ignored.


Flood Location                                       Date   C&I ($m)   Residential ($m)   Ratio

Queanbeyan (modelled)                                  -        -             -           0.75

Toongabbie Creek, Prospect Creek and Georges River   1986     8.62          5.95          1.45

Nyngan                                               1990    11.3          15.6           0.72

Brisbane River                                       1974   114.4          35.9           3.19

Katherine                                            1998    29            28             1.04

Wollongong                                           1998    15            12             1.25

Lismore                                              1974     -             -             5.67

Townsville                                           1988    39.1          89.2           0.44

Table 4.1 - Commercial & Industrial (C&I) and Residential flood losses for Australian floods (source: Risk Frontiers, 2004).
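The ratios in Table 4.1 can be recomputed directly from the tabulated loss figures; the short check below uses only the values printed in the table above (Queanbeyan and Lismore are omitted because their component losses are not given).

```python
# Recompute the C&I : Residential loss ratios from Table 4.1.
floods = {
    "Toongabbie/Prospect/Georges 1986": (8.62, 5.95),
    "Nyngan 1990": (11.3, 15.6),
    "Brisbane River 1974": (114.4, 35.9),
    "Katherine 1998": (29.0, 28.0),
    "Wollongong 1998": (15.0, 12.0),
    "Townsville 1988": (39.1, 89.2),
}
ratios = {name: round(ci / res, 2) for name, (ci, res) in floods.items()}
for name, r in ratios.items():
    print(f"{name}: {r}")
```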

Census data

The Australian Census is carried out every five years by the Australian Bureau of Statistics and is designed to gather information on individuals and households in Australia on a particular date. Amongst many other attributes, it tabulates the number of dwellings in each Census Collection District categorising these as houses, flats, caravans and other places of residence.

For purposes of this thesis, the number of dwellings included is the combination of:

• Separate houses;

• Semi detached, row or terrace houses, townhouses, etc.;

• Flats and apartments; and

• Other medium density housing.


This selection is consistent with the approach used by Oliver and Georgiou5.

The Insurance Council of Australia (ICA) produces a zoned map of Australia used by insurers to monitor aggregations of exposure. The zones are shown in Figure 4.1.


Figure 4.1 - ICA Zones

At the time the analysis in this thesis was mostly undertaken, the 2001 census information was not yet publicly available and so extrapolation from previous census data was required. Information on the number of dwellings was obtained for each Australian postcode using data from the censuses carried out in 1981 and 1991. The number of dwellings in each postcode in 2001 was estimated by projecting from the 1981 and 1991 figures, using the average growth rate over the preceding 10 years shown in each ICA zone for all postcodes in that zone.

A credibility factor was assigned to the data from each ICA zone using the hyperbolic formula

Z = n/(n + X) (4.1)

5 Oliver, Stephen E., and Georgiou, Peter N., 1993, “Windstorm PMLs – ”, Bureau of Meteorology Special Services Unit

where Z is the credibility factor used to weight the growth in the number of buildings for each zone,

n the number of buildings recorded for 1991, and

X is a weight factor

If full credibility is defined as the estimate being 99% likely to be within 10% of the true expected value, 663 buildings are required for full credibility6. An X of 1,000 was chosen in order to be conservative. Substituting for this in Equation (4.1), the resulting partial credibility values shown in Table 4.2 are obtained.

n          Z

1,000      50.0%

2,000      66.7%

5,000      83.3%

10,000     90.9%

Table 4.2 – Credibility values by building count
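Both the full-credibility standard of 663 buildings and the partial-credibility values in Table 4.2 follow from the definitions above; a short check, where 2.576 is the standard normal 99.5th percentile underlying the "99% likely to be within 10%" criterion:

```python
import math

def credibility(n, x=1000):
    """Partial credibility factor Z = n / (n + X) from Equation (4.1)."""
    return n / (n + x)

# Full-credibility standard for "99% likely to be within 10% of the mean"
# (limited-fluctuation credibility for a Poisson count): (z_0.995 / 0.10)^2.
full_credibility = (2.576 / 0.10) ** 2
print(round(full_credibility, 1))   # ~663.6; the text quotes 663

for n in (1_000, 2_000, 5_000, 10_000):
    print(n, f"{credibility(n):.1%}")
```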

For zones 16, 17 and 18, which include postcodes that did not exist in 1981, zero credibility was assumed.

The projected 2001 number of buildings for a postcode is then

B2001 = B1991 * (1 + GzoneZ + Gtotal(1-Z) ) (4.2)

where B2001 is the number of buildings projected for 2001

B1991 is the number of buildings from the 1991 census

Gzone is the growth rate for the ICA zone between 1981 and 1991

6 Hart, D.G., Buchanan, R.A., and Howe, B.A., 1996 (revised 2001), “Actuarial Practice of General Insurance”, Institute of Actuaries of Australia

Gtotal is the total growth rate for all Australia

The end result of this analysis is an estimate of the total number of residential buildings in each postcode as at 2001. The number of buildings in each postcode at the end of 2003 is assumed to be equivalent to the 2001 figures, although their capital values have been increased to allow for inflation as outlined below.
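The projection in Equation (4.2), together with the credibility blend of Equation (4.1), can be sketched as follows; the growth rates and building count here are illustrative numbers only, not the actual census figures.

```python
def credibility(n, x=1000):
    """Equation (4.1): partial credibility factor Z = n / (n + X)."""
    return n / (n + x)

def project_2001(b1991, g_zone, g_total, z):
    """Equation (4.2): blend zone growth with national growth by credibility Z."""
    return b1991 * (1 + g_zone * z + g_total * (1 - z))

# Illustrative postcode: 4,000 dwellings in 1991, zone growth of 18% over
# 1981-1991, national growth of 15% (hypothetical values).
z = credibility(4000)                     # 0.8
b2001 = project_2001(4000, 0.18, 0.15, z)
print(round(b2001))                       # 4696
```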

Insurer Data

A limited amount of property exposure data was available to the author from five Australian insurers. The data comprises a total of 469,798 buildings, or 7.6% of the estimated total number of Australian residential buildings from the census projection. For reasons of confidentiality, company names are not shown. The data was “scrubbed” to remove invalid postcodes and obviously erroneous data, such as policies with building or contents values of $1 and negative values. The data was then amalgamated and postcodes indexed to ICA zones.

The data was used to determine average property values in each postcode, as well as the distribution of each construction type across the various ICA zones. In addition, average values for contents insurance were determined. Age of construction was determined from both company data and census information.

Average Value

The typical purchase of building insurance in Australia is made such that the sum insured listed in the policy represents the value of the building and does not include the value of the land. The average value was first determined from the insurance company data for each postcode across Australia, but the small number of properties in some postcodes allowed little confidence in these estimates. Instead, the average values were determined for each ICA zone. In each ICA zone, the number of buildings in the exposure was sufficient to achieve full credibility, assuming a requirement of 90% likelihood of less than 10% error. These average values for buildings and contents were then assumed to hold for each of the postcodes within each zone.

Numbers of building and contents policies vary between postcodes. Based on this insurer data, there are 89 contents policies for every 100 building policies on average for Australia. The proportion of buildings with contents coverage was determined for each ICA zone by

accumulating the individual postcode numbers within each ICA zone. The ICA zone results were blended with the average Australian proportion using a credibility factor for each zone, with a credibility weighting of X=1000 as above, since limited information was available in some of the zones with fewer policies.

It was assumed that the value listed in the insurance policies represents the most current sale or replacement value at the time the policy was written. The insurance company data is from policies issued during 1998, and in force as at December 31 of that year. In order to determine average values at December 2003, the results were inflated for five years at a compound rate of 3% per annum.
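The five-year inflation adjustment described above amounts to a single multiplicative factor; the 1998 average value used below is a hypothetical figure for illustration.

```python
# Inflate December 1998 average sums insured to December 2003 values
# at 3% per annum compound, as described in the text.
factor = 1.03 ** 5
print(round(factor, 4))      # about a 16% uplift over the five years

avg_1998 = 150_000           # hypothetical 1998 average building value ($)
avg_2003 = avg_1998 * factor
print(round(avg_2003))
```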

It is common for people to underinsure their building property or contents. Typical insurance policies require an insured-to-value ratio of at least 80%; below this level, claim payments are reduced proportionally. As it is difficult to estimate the actual level of this underinsurance, no allowance was made for this effect in this thesis.

Commercial Risks

Commercial property exposure across Australia is more difficult to quantify because there are no publicly available statistics on the total quantity or value of such properties. In addition, insurance statistics are limited because a large proportion of risks are insured by companies outside Australia, and so premium and claims information does not need to be submitted to the ICA or other reporting bodies.

Based on a rough assessment of exposure from a number of larger Australian insurers with a mix of both residential and commercial policies, a simplistic approach was adopted whereby the total commercial exposure in each postcode was assumed to equal the total residential exposure.

Motor Risks

The total motor exposure for Australia was estimated using a simple relationship of 10% of the total residential exposure.

This assumption was based on a rough calculation assuming that each household, with an average combined building and contents value of $225,000, has one and a half cars, each of average value $15,000. The average value of cars per household is then $22,500, or 10% of the residential property exposure.

Construction Material

For each of the insurance company databases, the construction material was listed for each property or postcode aggregation. The individual construction codes used by each company were examined and mapped back to a common table of construction types and eventually grouped into two categories:

• Wood frame construction, including timber structures with aluminium siding, brick veneer and light metal frame structures; and

• Masonry construction, with single and double brick walls, apartment blocks with brick or other stone or concrete based sidings, and steel frame structures with heavy load capacities.

For the insured values, the proportion of each construction type was indeterminable at the postcode level due to the lack of sufficient data. A credibility measure based on a weighting of X=1000 was used to determine proportions of each of the two types of construction for each ICA zone. Although the proportion of each construction type varies by ICA zone, the overall national ratio was approximately 25% wood and 75% masonry.

Building Age

For cyclone risk, year of construction is important. A new wind loading standard, AS1170.2, was introduced as a result of the damage from Cyclone Tracy in 1974, and this new code specified significantly improved building standards for new buildings. The new code was implemented between 1980 and 1982. In the absence of other information, it is assumed that all buildings constructed since the 1981 census comply with the new standard.

The proportions of buildings corresponding to pre- and post-1981 construction were estimated by examining both the census data and the insurance company data, using the ratio of the number of buildings in 1981 to the projected 2001 buildings. A credibility factor of X=10000 was used to determine partially credible proportions for each ICA zone.


Following discussions with various practitioners involved in the building and real estate industries, proportions of pre-1981 buildings were reduced by 15%, to allow for demolitions and renovations that have taken place in the last 20 years.

The proportions were compared to the ratios of pre- and post- 1981 buildings determined from the insurance company data as shown in Figure 4.2 below. The insurance data was assumed to have full credibility since the building ages had been individually specified rather than inferred from the census.

Figure 4.2 – Proportion of Pre-1981 buildings for various ICA zones (proportion pre-1981 from insurer data plotted against proportion pre-1981 from census data, both axes 0% to 100%).

In some cases, this factor may not be accurately reported by the insured in order to obtain a lower premium rate on an older property. However, the insurance company data was assumed to be fully accurate because in cyclone prone regions this information has been reviewed more accurately against council records by a number of the larger insurers.

The average of the two proportions was taken, and this proportion was reviewed for credibility against a factor of X=1000, based on the census population. The resulting proportions include a double application of credibility on the census component, justified by its greater uncertainty compared with the insurance company data.


Total Exposure

The total residential exposure database was generated by applying the average values to the estimates of 2003 building numbers for each postcode. Each postcode contains eight separate groups, representing the various combinations of building age, construction type and coverage.

The total residential exposure in the database amounts to $1.655 trillion Australian dollars, representing 8,044,328 buildings. Motor exposure is estimated at 10% of this figure or $165.5 billion. As explained earlier, the total commercial exposure is also taken to equal $1.655 trillion.

Proportion Insured

When examining insured losses, allowances must be made for the fact that in most areas of Australia the proportion of insured properties falls well below 100%, possibly even below 70% in some regions. A recent report by the ICA7 found that, on average, approximately 10% of Australian households are uninsured.

The exposure database developed in this chapter, against which losses are estimated in later chapters of this thesis, is assumed to represent 100% of the total insured residential, commercial and motor exposure.

Data Quality

There are clearly limitations on accuracy, mainly as a result of using a small proportion of the total industry exposure to arrive at some global assumptions. While a number of improvements are possible, until more comprehensive datasets become publicly available, this dataset must suffice.

7 Insurance Council of Australia, October 2002, “Report on Non-Insurance/Under-Insurance in the Home and Small Business Portfolio”, MP008-1610


Chapter 5 - Cyclone model

Overview

This chapter outlines the development of a new model for cyclone risk in Australia. Again the focus is on estimating the national insurance loss potential and thus direct financial losses to residential and commercial property rather than on the life safety or indirect economic losses. This chapter begins with a brief overview of the history of cyclone modelling and describes the publicly accessible database of historic Australian cyclone events before presenting a simulation approach to cyclone modelling.

History of Tropical Cyclone Modelling

Tropical cyclones are known by many names: Hurricanes in the North Atlantic and Eastern Pacific; Baguios in the Philippines; Tai-fung in Cantonese; Typhoons in the Western Pacific north of the Equator and the China Sea; and Tropical Cyclones in Australia and the Indian Ocean. For all intents and purposes, these phenomena are physically identical, with the only major difference being that in the Southern Hemisphere winds rotate clockwise when viewed from high altitude, while the reverse is true in the Northern Hemisphere.

Meteorologists referring to storms in the Atlantic Ocean sometimes use the different names as a measure of the strength of an event. The same event may change in name as its strength increases or decreases: a weak event is known as a tropical depression, while stronger events are known as a tropical storm, a cyclone or a hurricane, in increasing order of intensity. In this thesis, each of these events will be referred to as a cyclone.

In ancient times, intense weather disturbances were thought to have divine provenance. A historically significant cyclone8 occurred in the late thirteenth century in the Sea of Japan, saving Japan from a major invasion. At the time, Kublai Khan, the grandson of Genghis Khan, was attempting to invade Japan with his Mongolian military forces. Winds and high seas from the cyclone destroyed most of his 1000 ships and his army of 100,000 men. As a result, the Japanese named the cyclone “Kamikaze”, meaning “divine winds”.

8 Saunders, J.J., 2001, “The History of the Mongol Conquests”, University of Pennsylvania Press, ISBN 0-8122-1766-7

With the invention of barometers in the seventeenth century, more accurate short-term weather predictions became possible. Trends and changes in the measured pressure indicated either a worsening or an improvement in weather conditions.

In more recent history, the most significant breakthrough occurred in 1830 when William Redfield9 published his theories on the progression and rotary motion of storms. Henry Piddington10 further contributed to the development of these theories, and was the first person to use the term “cyclone”, which means “the coil of the snake” in Greek. Piddington used ship logs to reproduce cyclone records, as the logs contained information such as barometric pressure, wind direction and speed, rainfall and comments on cloud formations.

As understanding of fluid dynamics improved, mathematical models of the flow of air within cyclones became possible. Until the second half of the twentieth century, the lack of detailed information on actual events and the inability to carry out the multitude of mathematical calculations prevented anything other than rough modelling of cyclones. With modern computing resources, more complicated calculations became possible, but cyclone modelling was still limited to a database of ground level observations and a small number of aircraft measurements.

Satellite technology provided the next big advance. From the mid 1960's, satellite photography has improved the detection and tracking of cyclones. Geostationary satellites are able to give pictures of the same area of the earth at hourly intervals, allowing the movements of tropical cyclones to be followed as they form, move and decay. Not only does this technology provide valuable information in regions where population is sparse, it also supplies physical measurements that are difficult to obtain from ground level recordings.

9 Redfield, William C., Jan-Mar 1830, “Remarks on the prevailing storms of the Atlantic Coast of the North American states.”, American Journal of Science and Arts (Series 2), vol. 20, no. 1, pp. 17-43.

10 Piddington, Henry, 1855, “The sailor’s hornbook for the law of storms: being a practical exposition of the theory of the law of storms and its uses to mariners of all classes in all parts of the world.” London, William & Morgate, 360p.

Further improvements in computing power now enable scientists to perform detailed analysis of actual events and to model the complicated interactions that occur within a cyclone. The model proposed in this thesis does not attempt to duplicate the complicated models of weather professionals; rather its aim is to quantify the effects of a cyclone on a portfolio of insured property risks at an appropriate level of accuracy.

Modern Models of Cyclone Risk

Catastrophe loss modelling for insurance purposes grew out of engineering models that were developed during the 1970’s in order to analyse structures for construction purposes. These introduced the use of probabilistic modelling techniques to simulate stresses and loads on buildings without the need to resort to physical testing of structures. The use of event simulation for insurance purposes was pioneered by Dr Don Friedman11 of The Travelers Company, Hartford, Connecticut, U.S.A. The technique involves simulating the wind field that would be formed during a cyclone and then using these winds to determine the expected damage to the portfolio of properties.

As insured losses from major events12 increased during the 1970’s and 1980’s, including Hurricane Agnes in 1972 (US$2,000m), Cyclone Tracy in 1974 (US$800m), Hurricane Frederic in 1979 (US$2,300m) and Hurricane Alicia in 1983 (US$2,000m), interest in modelling of natural disasters increased. A paper by Karen Clark13 in 1986 laid out a simulation methodology that could be used to model a natural peril for insurance purposes, with an emphasis on cyclones. Then came Hurricane Andrew in 1992 with US$26,500m in losses, the biggest insurance payout to that point in time. Services such as those provided by Clark’s company, Applied Insurance Research (AIR), began to be sought more widely by insurance and reinsurance companies. As the demand for catastrophe advice has increased, a number of other modelling firms have been formed. In addition, government departments of some cyclone-prone countries have attempted to enter this field by commissioning studies of the risks associated with cyclones.

11 Friedman, D.G., 1975, “Computer Simulation in Natural Hazard Assessment”. Monograph NSF-RA-E-75-002. Institute of Behavioral Science, University of Colorado, Boulder, 192 pages

12 Loss amounts refer to economic losses in 1998 US dollars, taken from “World Map of Natural Hazards”, a 1998 publication by Munich Re

13 Clark, K. M., 1986, “A Formal Approach to Catastrophe Assessment and Management”, Casualty Actuarial Society Proceedings, Volume LXXIII, Part 2, No. 140, 24 pages

In Australia, major contributions to cyclone modelling include the work of:

• Gomes & Vickery14 and Martin & Bubb15 - Use of simulation to predict gust speeds at various locations along the North Australian coast;

• Tryggvason16 - Use of a simulation model to predict the statistics of wind speeds and wind loads on buildings due to tropical cyclones in several cities along the Australian coast; and

• Oliver and Georgiou - Use of a simulation model to predict return periods of damage to residential exposures from cyclones crossing the Queensland coast from to the Gold Coast.

Cyclone Structure

Cyclones are typically defined by meteorologists as intense low-pressure systems forming over the seas at latitudes close to the equator. The centre of the event, the eye of the cyclone, often appears cloudless, and is characterised by only modest wind speeds, compared with the intense wind speeds that exist beyond the eye.

The formation of a cyclone is dependent upon a number of environmental conditions including warm ocean surface temperatures in the region between around 5 and 25 degrees from the equator and a trough of low pressure. The process commences when hot air rising by convection causes air at the lower levels to rush from all compass directions towards a particular surface location. The inward flow of wind is affected by the rotation of the earth so that the wind’s trajectory towards the eye forms a spiral. This phenomenon is known as the Coriolis Effect17. This moist air forms bands of clouds that surround the eye of the cyclone.

14 Gomes, L., and Vickery, B.J., 1976, “Tropical Cyclone Gust Speeds Along the North Australian Coast”, I.E. Aust, Civil Engg. Trans., Vol. CE18, No. 2, pp. 40-48

15 Martin, G.S., and Bubb, C.T.J., 1976, “Discussion of Gomes and Vickery (1976)”, I.E. Aust, Civil Engg. Trans., Vol. CE18, No. 2, pp. 48-49

16 Tryggvason, B.V., 1979, “Computer Simulation of Tropical Cyclone Wind Effects for Australia”, James Cook University of North Queensland

The low pressure eye of the cyclone becomes cloudless and calm as air moves from the upper atmosphere down into the centre of the vortex. Outside this eye, the wind accelerates towards the centre and moves upwards as it rotates around the eye. The region just outside the eye, the eye wall, experiences the fastest wind speeds, with winds of over 300 kilometres per hour (83 m s-1) having been recorded. Figure 5.1 shows a cross section of a cyclone in schematic form, illustrating the direction of the winds and the central eye of the cyclone.

Figure 5.1 – Cyclone cross-section

The maximum extent of the cyclone is measured by the outer closed isobar. Beyond this isobar, the circular form of the cyclone ends and different pressure patterns not driven by the intense low pressure of the cyclone eye prevail. The air pressure at the outer closed isobar is known as the ambient pressure. Figure 5.2 shows an example of how air pressure and wind speed vary with distance from the centre of the cyclone eye.

17 Phillips, N. A., 2000, “An Explication of the Coriolis Effect”, Bulletin of the American Meteorological Society, Vol. 81, No. 2, pp. 299–303

[Line plot: pressure (hPa, left axis, ~920–1020) and wind speed (m/s, right axis, 0–60) versus distance from centre (km, 0–200)]

Figure 5.2 – Example of Air Pressure and Wind Speed vs. Distance from Centre of Cyclone

Beyond the cyclone eye the wind speed reduces with increasing distance. The wind speed experienced at the ground is the vector sum of the speed of the movement of air in its circular motion around the eye of the cyclone and the translational movement of the cyclone itself (Figure 5.3).
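The vector addition of rotational and translational wind components can be reduced to simple trigonometry; the sketch below takes theta as the angle between the rotational wind direction at a site and the cyclone's direction of forward motion (the speeds used are illustrative).

```python
import math

def surface_wind(rotational, forward, theta_deg):
    """Vector sum of the rotational wind speed and the cyclone's forward
    speed, where theta_deg is the angle between the two vectors (degrees)."""
    theta = math.radians(theta_deg)
    return math.sqrt(rotational ** 2 + forward ** 2
                     + 2 * rotational * forward * math.cos(theta))

# Aligned vectors add directly (strong side of the eye);
# opposed vectors subtract (weak side of the eye).
print(surface_wind(50.0, 8.0, 0.0))     # 58.0 m/s
print(surface_wind(50.0, 8.0, 180.0))   # 42.0 m/s
```

This asymmetry is what skews the wind field so that one side of the eye experiences markedly stronger winds than the other.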


Figure 5.3 – Vector calculation for addition of rotational winds and forward motion


Figure 5.4 – Cyclone Wind Field

Because of this, the resulting wind field is skewed so that the winds on one side of the eye are greater than the winds on the other side (Figure 5.4). As cyclones in the Southern Hemisphere spin clockwise, a cyclone hitting the eastern coast of Australia has the strongest winds on the southern side of the eye.

The structure shown above is only an approximation of the winds actually experienced. In a cyclone, the winds move upward through the bands of clouds at locations other than the centre of the cyclone (see Figure 5.1). Perfectly circular cyclones are rare; non-symmetrical structures are more common. Figure 5.5 contains some satellite images of actual cyclones, and demonstrates how the circular assumption is not always a realistic representation of real cyclones.

Figure 5.5 – Satellite images of actual cyclones


In this thesis, no attempt is made to model all the possible variations of cyclone shape. Instead, the structure of a cyclone is assumed to be symmetrical, with wind speeds estimated from the vector sum of the rotational wind speed, calculated from an analytical formula, and the forward speed of the eye. The cyclone model also assumes that this symmetrical structure persists throughout the formation, movement and decay of each cyclone, whereas in reality a cyclone alters in shape with time. The interest is in simulating likely losses per event rather than in achieving an accurate depiction of an individual cyclone.
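The thesis does not specify the analytical formula at this point; a widely used choice in cyclone wind-field models is the Holland (1980) gradient-wind profile, sketched here purely as an illustration of the kind of radial profile involved. All parameter values below are hypothetical, and the Coriolis term is omitted for simplicity.

```python
import math

RHO_AIR = 1.15  # air density, kg/m^3

def holland_wind(r_km, p_centre, p_ambient, r_max_km, b=1.5):
    """Holland (1980)-style gradient-level wind speed (m/s) at radius r_km,
    given central and ambient pressures (hPa) and the radius of maximum
    winds; the Coriolis term is ignored here."""
    dp = (p_ambient - p_centre) * 100.0        # hPa -> Pa
    x = (r_max_km / r_km) ** b
    return math.sqrt((b * dp / RHO_AIR) * x * math.exp(-x))

# Hypothetical cyclone: 950 hPa centre, 1010 hPa ambient, 30 km eye-wall radius.
for r in (10, 30, 60, 120):
    print(r, round(holland_wind(r, 950.0, 1010.0, 30.0), 1))
```

The profile peaks at the radius of maximum winds (the eye wall) and decays with distance beyond it, matching the qualitative shape of Figure 5.2.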

General Methodology

Numerical results based on this cyclone model are determined using a Monte Carlo18 simulation approach. Probability density functions are used to reflect uncertainties in input variables. These functions specify the limits between which input variables may vary and their relative likelihood of occurrence between these limits.

Simulation refers to randomly sampling from the distribution of each input variable. The result of each simulation is a potentially realizable tropical cyclone and its time history. These are then used to provide estimates of the extreme winds and their directional characteristics over very long time periods, of the order of tens of thousands of years.

The development of the underlying distributions of attributes (central pressure, forward speed, direction, radius, etc.) for each segment of the coast will be described in the following chapter. Outlined below is a brief overview of the process.

The first step involves determination of an event set. This is a compilation of synthetic cyclones of varying intensity and other attributes such as starting location, central pressure, eye radius, forward speed and various other parameters that are randomly sampled from the underlying distributions.

The randomisation of parameters is undertaken in a manner where the sequencing of random numbers is taken from a predetermined series of numbers. These calculated random values, generated using a Halton19 sequence, lead to simulated events that cover the entire allowable range of each of the parameters while giving faster convergence of model results.

18 Hammersley, J.M. and Handscomb, D.C., 1964, “Monte Carlo Methods”, Methuen & Co Ltd

The life cycle of the cyclone is then simulated to allow for changes in its structure over time, until such time as the wind speeds drop below the threshold for building damage to occur. This cyclone history is recorded as a sequence of snapshots along the path of the storm with each point in time having a separate array of cyclone parameters.

Events are assumed to be independent of each other. Each synthetic event has equal probability, with the total probability of all events being equal to the assumed annual frequency of cyclones. Event parameters are sampled using transformed uniform distributions so that each point in any parameter is chosen with equal probability.
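The combination of a low-discrepancy Halton sequence with transformed uniform draws can be sketched as follows. The radix-2 Halton (van der Corput) generator is standard; the exponential distribution used for forward speed is purely illustrative and is not the thesis's fitted distribution.

```python
import math

def halton(index, base=2):
    """Element `index` (1-based) of the Halton/van der Corput sequence in
    `base`: a low-discrepancy point in (0, 1)."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def sample_forward_speed(u, mean_speed=5.0):
    """Inverse-transform sampling: map a uniform u in (0,1) through the
    inverse CDF of an (illustrative) exponential distribution (m/s)."""
    return -mean_speed * math.log(1.0 - u)

# First few low-discrepancy draws: 0.5, 0.25, 0.75, 0.125, ...
us = [halton(i, base=2) for i in range(1, 5)]
speeds = [round(sample_forward_speed(u), 2) for u in us]
print(us, speeds)
```

Because successive Halton points fill (0, 1) evenly rather than clumping, the transformed samples cover each parameter's allowable range with fewer simulations than pseudo-random draws would require.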

Once the cyclone parameters have been determined, the wind speed at any location at each point in time is calculated. The maximum wind speed at each location is recorded for each simulated event, and this wind speed is used to determine the effects on exposed property.

Wind speeds are only crudely adjusted for the roughness of terrain and topography. Damage is determined using predefined vulnerability relationships between damage and the maximum wind speed experienced. Property damage is summed spatially to determine the total losses for each event.

Figure 5.7 shows the overall flowchart structure of this proposed model.

19 Halton, J.H., 1960, “Numerische Mathematik“, vol. 2, pp. 84–90

START

FOR EACH OF THE 99,999 CYCLONES IN THE EVENT SET:

    DETERMINE STARTING POSITION

    MODEL PATH AND PARAMETERS ALONG TRACK

    MODEL MAXIMUM WIND SPEED AT EACH RISK LOCATION

    DETERMINE DAMAGE AT EACH LOCATION AND IN TOTAL FOR EVENT

    (REPEAT WHILE MORE EVENTS REMAIN)

CALCULATE PMLs AND OTHER FINANCIAL STATISTICS

END

Figure 5.7 – Model Structure
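The flowchart in Figure 5.7 corresponds to a loop of the following shape. Everything here is schematic: the uniform wind draw stands in for the full track and wind-field model, the piecewise-linear vulnerability curve is invented for illustration, and the upper percentile of sorted event losses stands in for the PML calculation.

```python
import random

random.seed(1)

def damage_ratio(wind_ms):
    """Schematic vulnerability curve: no damage below 25 m/s,
    total loss above 90 m/s, linear in between."""
    return min(max((wind_ms - 25.0) / 65.0, 0.0), 1.0)

def simulate_event(exposure_by_site):
    """One synthetic cyclone: draw a peak wind per site and sum the losses."""
    loss = 0.0
    for value in exposure_by_site:
        wind = random.uniform(0.0, 80.0)   # stand-in for the wind model
        loss += value * damage_ratio(wind)
    return loss

exposure = [250_000.0] * 100               # 100 sites of $250k each (hypothetical)
losses = sorted(simulate_event(exposure) for _ in range(1000))
pml_1_in_100 = losses[-10]                 # ~99th percentile of event losses
print(round(pml_1_in_100))
```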

In order to carry out the analysis in this thesis, use has been made of a software package developed for commercial use by Impact Forecasting LLC, a company wholly owned by Aon Corporation. This package, ELEMENTS™20, allows the analysis to be carried out in a Microsoft Windows® environment with a user-friendly interface, and with results being produced in formats compatible with other desktop personal computer software packages.

The software is written in Microsoft Visual Basic™ versions 4, 5 and 6, with all relevant coding used in the modelling of perils having been written and tested by the author of this thesis.

20 ELEMENTS™ - Simulation software developed by Impact Forecasting LLC, an Aon (NYSE:AOC) owned company based in Chicago, Illinois, USA


Chapter 6 – Development of a Cyclone Event Set

Historic Event Database

The Australian Bureau of Meteorology (BoM) holds a database21 of cyclone events accessible via the internet (www.bom.gov.au/). The database contains individual event details for each cyclone recorded by the Bureau of Meteorology, from May 1907 to April 1996. The cyclones are numbered from 1 to 850. There are no records in the database for cyclone numbers 621, 632 or 762. Therefore, 847 cyclones appear in the database.

For each cyclone, the database contains a series of “snapshots” taken at fixed time intervals, usually three hours, which give the cyclone’s status at each location at each point in time. The fields in the database are listed in Table 6.1.

Cyclone code | Cyclone number * | Location of data source | Units of measurement *
Cyclone name * | UTC date/time * | Category | Intensity change code
Current observation code | Latitude * | Longitude * | Reading code
Central pressure | Maximum wind speed | Central pressure code | Underlying surface code *
Extent of hurricane winds | Extent of gale winds | Radius of outer closed isobar | Mean eye diameter
Eye diameter code | Direction of maximum winds | Maximum wind speed | Max wind code
Radial distance to recording | Bearing of maximum wind | Maximum wave height | Maximum wave code
Maximum wave distance | Maximum wave bearing | Maximum swell direction | Maximum swell height
Period of maximum swell | Maximum swell code | Maximum swell distance | Maximum swell bearing
Maximum anomaly | Tide height code | Maximum tide distance | Maximum tide bearing
Cyclone move direction * | Cyclone move speed *

Table 6.1 – Fields in Bureau of Meteorology cyclone database

For most of the events, information is missing from certain fields. For some cyclones, data is contained for some but not all of the snapshots. The most consistent fields are marked in the above table by an asterisk (*).

21 Located on Bureau of Meteorology website at ftp://www.bom.gov.au/public/cyclone/cyclones/cyclone.srt

In addition, a number of the records contain incorrect data. In most cases these errors appear to be a result of typographical mistakes. An example of this type of error would be that the latitude of a cyclone suddenly jumps between snapshots by 10 degrees, moving back to the original track after one or two snapshots. Errors of this form were corrected following a visual inspection of the individual cyclone tracks.
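A jump test of the kind applied during the visual inspection can be automated. The sketch below flags snapshots whose latitude or longitude changes implausibly between consecutive three-hourly fixes; the 5-degree threshold and the example track are assumptions for illustration, not values from the BoM database.

```python
def flag_jumps(track, max_step_deg=5.0):
    """Return indices of snapshots whose position jumps more than
    max_step_deg in latitude or longitude from the previous fix."""
    flagged = []
    for i in range(1, len(track)):
        (lat0, lon0), (lat1, lon1) = track[i - 1], track[i]
        if abs(lat1 - lat0) > max_step_deg or abs(lon1 - lon0) > max_step_deg:
            flagged.append(i)
    return flagged

# A hypothetical track with one typographical 10-degree latitude jump.
track = [(-15.0, 145.0), (-15.5, 144.6), (-25.4, 144.2), (-16.3, 143.9)]
print(flag_jumps(track))   # flags both the jump out and the jump back
```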

A computer program written in Turbo Basic22 was used for the visual analysis of the cyclones in the database. The program’s major purpose was to plot the track data against a map of Australia, allowing aberrant tracks to be identified and corrected. An example of an analysis for a particular cyclone appears in Figure 6.1.

Figure 6.1 – Visual analysis of cyclone tracks from the database superimposed on a map of Australia. The numbered segments off the coast are discussed later in the text.

Other errors were also detected: some central pressure values for Queensland events differ from values contained in another database held by the Queensland Weather Services. A detailed analysis of these differences suggested that the BoM database generally contains central pressures higher than those actually recorded at the centre of each cyclone, possibly because some recordings were deduced from wind speed observations rather than estimated directly in the eye of the cyclone. The database was modified so that the Queensland Weather Services information replaced the central pressures in the BoM database where differences occurred.

22 Turbo Basic™, programming language released 1987, Borland Software Corporation (NASDAQ:BORL)

For the first half of the last century, the database is mostly derived from land-based recordings. Owing to the sparse population of remote parts of northern Australia, many details were inaccurate for cyclones that did not pass directly by major towns, and many cyclones can be presumed to have gone totally unrecorded. From 1959, satellite images of the northern Australian region allowed more accurate tracking. The lack of data for the period prior to 1959 is allowed for by giving greater emphasis to the post-1959 data in the analysis of certain model assumptions, as will be explained in later discussion.

Event Start Position Generation

Historic Starting Positions

Cyclone tracks across the world23 from 1886 through 1996 are shown in Figure 6.2.

Figure 6.2 – Worldwide Tropical Cyclone Tracks, 1886-1996

Cyclones generally form between 5 and 25 degrees from the equator and head initially in a westerly direction before turning towards the poles. As they move further from the equator they tend to eventually turn back towards the east and begin to dissipate.

23 Map created using data from University of Hawaii - Institute for Astronomy web page located at http://www.solar.ifa.hawaii.edu/Tropical/tropical.html

Cyclone tracks for the Australian region, created by plotting the events in the Bureau of Meteorology database, are shown in Figure 6.3. On average, approximately 10 tropical cyclones develop in the Australian region each year and 6 cross the coast. In general, the cyclones that cross the eastern seaboard are heading in a roughly south-westerly direction, while those crossing the western seaboard generally have a southerly heading. Cyclones commencing in the north of Australia exhibit no general directional trend, at least based on the limited data set available.

In many cases, cyclones cross from water onto land, where they eventually weaken and dissipate. In some cases, cyclones move from water to land and then back over water, where they may either dissipate, or reform and once again approach the coastline. In keeping with the description of cyclone formation in the previous chapter, it is assumed that cyclones form over the ocean, after which they may move across either land or sea, depending on the individual situation.


Figure 6.3 – Cyclones in the Australian region, 1907-1986


Model Starting Positions

In order to begin the analysis, starting positions need to be defined from which the cyclones will move in randomly generated tracks with characteristics drawn from the historic cyclone database. In this study, possible starting positions were restricted to one hundred 100-kilometre segments that run parallel to the coast, each roughly 50 km offshore. These segments start from Pemberton in the south of Western Australia, extend across the north of Australia, and continue down the east coast as far as Ulladulla (Figure 6.4).

Figure 6.4 – Segments for Cyclone Starting Positions

The segments were numbered from one to one hundred starting from the south-western end. Significant segments are listed in Table 6.2.

Segment    Region
5
22         Port Hedland
27         Broome
40         Darwin
75         Cairns
78
90

Table 6.2 – Significant Segments for reference


While the southernmost segments have not experienced cyclone generation in the period covered by the historic database, their presence allows the ends of the frequency distributions to run smoothly down to zero. Including them also gave a convenient total of one hundred segments.

Analysis of Historical Cyclone Database

With the positions of the segments established, the historic database was then analysed to identify the segment crossed by each cyclone, along with the cyclone parameters at the point of crossing. Some cyclones made more than one crossing; others commenced as tropical depressions over land; a small number started over the ocean between the coast and the segments; and many dissipated over the ocean without ever crossing land. Cyclones that made no crossing were eliminated from the database. A summary of the results of this analysis appears in Table 6.3.

First Crossing Type        Comments                       Number   Total   Pressure Info missing   Count Analysed
Sea to Land (Onshore)      "Typical" on-shore crossing      356
Land to Sea (Offshore)     Off-shore crossing                64     420            24                   396
No Crossing – Over Land    Tropical depression                6
No Crossing – Over Sea     Excluded from database           421
TOTAL                                                       847

Table 6.3 – Summary of Cyclone Crossings over the 100 Segments

In total, 420 cyclones crossed the series of segments, including 356 onshore crossings. Pressure information was missing for 24 of these leaving 396 cyclones available for detailed analysis of other parameters.

Frequency by Location

Crossing positions of the 420 cyclones were analysed to determine the annual frequency of crossing by segment over the last 90 years. The large spatial dimensions of a cyclone mean that its influence may extend across more than one segment. The width of each segment also affects the calculated crossing frequencies, which reduce in proportion to any reduction in segment length: in essence, the target becomes smaller. Thus a one-hundred-kilometre segment is twice as likely to be crossed as a fifty-kilometre segment centred at the same location.


The number of crossings in each segment was then examined for reasonableness. As expected a higher number of land crossings occurred in the segments around Port Hedland and around the northern Queensland coast near Townsville. Frequencies decrease as one moves further south, and there is also a reduction in frequency in the most northern regions.

Satellite Tracking of Cyclones

As has been mentioned already, the introduction of weather satellites in 1959 resulted in better observations, particularly in the northern parts of Australia where small populations in the earlier part of the last century meant that many cyclones may have gone unrecorded. For this reason, the historical database was split between events occurring before 30 June 1959 and those occurring after this date. Figure 6.5 shows the geographical variation in frequency for the combined population and for the two subsets.

Figure 6.5 – Fitting of Cyclone Starting Frequencies (annual frequency by segment, showing pre-1959, post-1959 and all-years data together with the fitted curve)

Particularly for northern Australia, there are significant differences. To investigate whether the differences in annual frequency observed above were significant, statistical tests were undertaken of the events that occurred in each region in each of the two periods, 1907-1959 and 1959-1986.

A Chi-square test was carried out to test the following hypotheses:


H0: There is no difference between the frequencies in the two periods

H1: A difference exists between the frequencies in the two periods

The Chi-square test was carried out using groups of five segments, from 1-5 through to 96-100, under the hypotheses stated above. The results are shown in Table 6.4.

Region      Cyclone Count            Annual Frequency         Expected          Chi-square  Level     Result
            Pre-59  Post-59  Total   Pre-59  Post-59  Total   Pre-59  Post-59   Value       of Sig.   at 95%
1 to 5          0       4       4    0.000   0.108    0.044     2.36    1.64      5.73        98%     Significant
6 to 10         2       1       3    0.038   0.027    0.033     1.77    1.23      0.07        22%     -
11 to 15        3       5       8    0.057   0.135    0.089     4.71    3.29      1.51        78%     -
16 to 20       22      18      40    0.415   0.486    0.444    23.56   16.44      0.25        38%     -
21 to 25       27      17      44    0.509   0.459    0.489    25.91   18.09      0.11        26%     -
26 to 30       18      28      46    0.340   0.757    0.511    27.09   18.91      7.42        99%     Significant
31 to 35        6      10      16    0.113   0.270    0.178     9.42    6.58      3.02        92%     -
36 to 40        8      13      21    0.151   0.351    0.233    12.37    8.63      3.75        95%     -
41 to 45        4      11      15    0.075   0.297    0.167     8.83    6.17      6.43        99%     Significant
46 to 50        6      16      22    0.113   0.432    0.244    12.96    9.04      9.08       100%     Significant
51 to 55        5      13      18    0.094   0.351    0.200    10.60    7.40      7.20        99%     Significant
56 to 60       11      11      22    0.208   0.297    0.244    12.96    9.04      0.72        60%     -
61 to 65        6      16      22    0.113   0.432    0.244    12.96    9.04      9.08       100%     Significant
66 to 70        4       8      12    0.075   0.216    0.133     7.07    4.93      3.24        93%     -
71 to 75       27      10      37    0.509   0.270    0.411    21.79   15.21      3.03        92%     -
76 to 80       15      13      28    0.283   0.351    0.311    16.49   11.51      0.33        43%     -
81 to 85       18       7      25    0.340   0.189    0.278    14.72   10.28      1.78        82%     -
86 to 90       16      13      29    0.302   0.351    0.322    17.08   11.92      0.17        32%     -
91 to 95        4       3       7    0.075   0.081    0.078     4.12    2.88      0.01         7%     -
96 to 100       1       0       1    0.019   0.000    0.011     0.59    0.41      0.70        60%     -
Total         203     217     420                                                63.63       100%     Significant

Table 6.4 – Analysis by 500 kilometre Segment Groups

Significant differences at the 95% confidence level were found in many groups of segments especially in the regions of Australia with sparse populations. As a consequence, it was decided that the frequency analysis for the region from segment 22 through segment 72 would only take account of the data recorded since July 1959.
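One row of Table 6.4 can be reproduced as a short sketch. The exposure periods of roughly 53 years (pre-1959) and 37 years (post-1959) are assumptions inferred from the expected counts in the table, not values stated in the text.

```python
# Chi-square test for one group of segments (sketch; the 53/37-year
# split of the record is an assumption consistent with Table 6.4).
def chi_square_two_periods(pre_count, post_count, pre_years=53.0, post_years=37.0):
    total = pre_count + post_count
    years = pre_years + post_years
    exp_pre = total * pre_years / years     # expected count if no change
    exp_post = total * post_years / years
    chi2 = ((pre_count - exp_pre) ** 2 / exp_pre
            + (post_count - exp_post) ** 2 / exp_post)
    return exp_pre, exp_post, chi2

# Segments 26-30: 18 crossings pre-1959, 28 after.
exp_pre, exp_post, chi2 = chi_square_two_periods(18, 28)
print(round(exp_pre, 2), round(exp_post, 2), round(chi2, 2))  # 27.09 18.91 7.42
```

With one degree of freedom, a chi-square value of 7.42 corresponds to the 99% significance level shown in the table.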

Fitted Frequencies

A smooth curve was fitted to the frequencies. There is no physical reason why frequencies in adjacent segments should vary dramatically: there are no sudden discontinuities in the Australian coastline, and no significant mountain ranges or other formations that might cause frequencies to differ on either side of a particular location. The southernmost regions were assumed to have zero frequency. Figure 6.5 shows the original data and a smoothed seven-point running average. For segments 22 through 72, only post-1959 data was used.


Selection of Total Frequency

The frequencies shown above have not been adjusted for the influence of the El Niño Southern Oscillation. The El Niño-La Niña phenomenon stems from systematic geographical fluctuations in sea surface temperature in the Pacific Ocean: warmer than average waters off the west coast of South America, and correspondingly cooler waters off the coast of eastern Australia, indicate El Niño conditions. Under El Niño conditions the number of cyclones in the Australian region is reduced on average, while in a La Niña year the frequency of cyclones increases.

An analysis of the number of cyclones occurring in each year since 1959 showed that the mean frequency was 5.86 and the variance 6.56. As the variance was greater than the mean, a Negative Binomial24 distribution should be selected to represent the number of events occurring in all of the years. This is consistent with the assumption that the frequency of cyclones occurring in any particular year can be represented by a Poisson distribution, but that the mean of this distribution can vary from year to year. Therefore, to model any particular year, a Poisson distribution can be used to simulate the number of cyclones where the frequency is set in advance by making allowance for the current El Niño or La Niña situation.
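The gamma-mixed Poisson construction described above (a Poisson count whose mean varies from year to year, giving a negative binomial overall) can be sketched as follows. This is not the thesis's code; it is calibrated only to the stated mean of 5.86 and variance of 6.56.

```python
# Sketch of a negative binomial annual count as a gamma-Poisson mixture:
# each year's Poisson mean is drawn from a gamma distribution, mimicking
# ENSO-driven variation. Calibration: E[N] = 5.86, Var(N) = 6.56.
import math
import random

MEAN, VAR = 5.86, 6.56
scale = VAR / MEAN - 1.0   # gamma scale: Var(N) = MEAN * (1 + scale)
shape = MEAN / scale       # gamma shape: E[N] = shape * scale

def poisson(lam, rng):
    """Poisson sample by Knuth's product method (fine for small means)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def annual_cyclone_count(rng):
    lam = rng.gammavariate(shape, scale)  # this year's ENSO-conditioned mean
    return poisson(lam, rng)

rng = random.Random(1)
counts = [annual_cyclone_count(rng) for _ in range(20000)]
print(sum(counts) / len(counts))  # close to 5.86
```

Fixing the gamma draw in advance, as the text suggests for a known El Niño or La Niña state, reduces the model to a plain Poisson year.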

By allowing for the La Niña/El Niño conditions in any particular year, the frequency may be increased or decreased by up to 50%, with a corresponding annual frequency ranging from 4.5 to 10.0. Although some scientists believe it possible to predict the effects of El Niño as far as nine months in advance25, this head start is not sufficient to be realistically taken into account in the pricing of insurance and reinsurance contracts. For the purposes of this thesis, the frequency is assumed to be fixed, the number of events is modelled as a Poisson distribution, and the frequency is adjusted upwards to account for the higher number of strong El Niño years over the last 30 years.26 The observed frequency over this period may therefore be slightly lower than that which could be expected in the long run. An arbitrary adjustment of 10% was made to the computed average event frequency of 5.86, and the number rounded up to an expected long-term frequency of cyclone crossings of 6.5 per year.

24 See Appendix B – Statistical Distributions

25 Wolter, K., 1987, "The Southern Oscillation in surface circulation and climate over the tropical Atlantic, Eastern Pacific, and Indian Oceans as captured by cluster analysis", J. Climate Appl. Meteor., 26, 540-558.

26 Allan R., Lindesay J., Parker D., 1996, "El Niño Southern Oscillation & Climatic Variability", CSIRO, 1996. There were 11 El Niño years and 6 La Niña years.

The model works by generating a synthetic catalogue of events that can be randomly selected to model the total number of events occurring in any given time period. The simulation must generate the correct proportion of cyclones in each region relative to the frequencies selected above. Therefore, the absolute frequencies are not important at this stage of the model, as long as the correct relativities are maintained when the cyclones are simulated. The assumption for total annual event frequency is used after the individual event results from the simulation have been determined. The approach outlined above simulates an occurrence distribution of loss.
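The occurrence-sampling step just described can be sketched as below. The catalogue entries and the loss values are placeholders; only the structure (Poisson count at the assumed 6.5 crossings per year, uniform draws from a synthetic catalogue) follows the text.

```python
# Sketch of occurrence sampling from a synthetic event catalogue.
# Catalogue contents are dummy placeholders, not thesis results.
import math
import random

ANNUAL_FREQUENCY = 6.5   # assumed long-run crossings per year
catalogue = [{"event_id": i, "loss": random.Random(i).uniform(0.0, 100.0)}
             for i in range(1000)]

def simulate_year(rng):
    """Draw this year's event count, then sample events from the catalogue."""
    limit, n, p = math.exp(-ANNUAL_FREQUENCY), 0, 1.0
    while True:                       # Poisson count via Knuth's method
        p *= rng.random()
        if p <= limit:
            break
        n += 1
    return [rng.choice(catalogue) for _ in range(n)]

rng = random.Random(42)
year = simulate_year(rng)
print(len(year), sum(e["loss"] for e in year))
```

Repeating `simulate_year` many times and taking the largest loss in each year would give the occurrence distribution referred to above.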

Starting Central Pressure

Choice of Central Pressure Distribution

The database of 396 cyclones that contained information for cyclone central pressure in hectopascals (hPa) was summarised by segment (Figure 6.6).

Figure 6.6 – Central Pressure at Crossing (hPa) versus Segment Number

Visual inspection shows central pressure ranging between 911 hPa and 1012 hPa, with an average of 985.2 hPa and a standard deviation of 17.7 hPa. The upper values are typical of tropical depressions, rather than cyclones, and the associated wind speeds would have minimal impacts on buildings.

Figure 6.7 summarises the distribution of values as a count by central pressure in hPa. The data is skewed to the left and shows a number of peaks at multiples of 5 hPa. There is no physical reason for this; the artefact most likely arises from the estimation of central pressure from satellite images rather than from actual barometer readings, with such estimates sometimes rounded to the nearest 5 hPa. The artefact was ignored in the following analysis.

Figure 6.7 – Histogram of Central Pressures (count by central pressure, hPa)

The 396 data values were analysed using a commercially available statistical package, Bestfit™.27 The software fitted a range of candidate distributions and computed goodness-of-fit statistics for each. As none of the distributions produced an adequate fit to the data, further analysis was required.

The data represents the central pressures of tropical cyclones. By definition, the maximum pressure at which a cyclone can exist is given by the ambient pressure of the outer closed isobar. The minimum pressure is a function of the water surface temperature, which constrains the minimum pressure possible in each region. Therefore, minimum and maximum bounds must be placed on the cyclones.

27 Bestfit™, statistical modelling software, Palisade Corporation, USA

When the central pressure of each cyclone is above 990 hectopascals, modelled wind speeds are insufficient to produce damage, based on damage functions to be described in the following chapter. As a result, the values above 990 hPa do not require an accurate fit.

The data was transformed by plotting the pressure values against the natural logarithm of the negative natural logarithm of one minus the ranked cumulative distribution function (Figure 6.8). The ranked cumulative distribution function assumes that the sample represents a uniform selection across the possible range of the distribution.
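The transformation can be sketched as follows, using the plotting position F(xi) = i/(n+1); a Gumbel (extreme minimum value) distribution plots as a straight line on these axes. The sample pressures below are dummies.

```python
# Gumbel probability-plot transform: x = ln(-ln(1 - F)), y = ranked pressure.
# Sample pressures are illustrative dummies, not database values.
import math

def gumbel_plot_coords(pressures):
    xs, ys = [], []
    n = len(pressures)
    for i, p in enumerate(sorted(pressures), start=1):
        f = i / (n + 1.0)                        # plotting position i/(n+1)
        xs.append(math.log(-math.log(1.0 - f)))  # transformed abscissa
        ys.append(p)
    return xs, ys

xs, ys = gumbel_plot_coords([955, 970, 985, 990, 940, 965])
print([round(x, 2) for x in xs])
```

Linearity of the plotted points over the range of interest is what motivates the Gumbel choice in the next section.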

Figure 6.8 – Transformed central pressure values (hPa) plotted against ln(-ln(1-F[x])), with fitted line

On Figure 6.8, the X and Y axes are as follows:

Y value = ith ranked central pressure value in hPa (6.1)

X value = ln(-ln(1 - F(xi))) (6.2)

where F(xi) = i/(n + 1) (6.3)

As the figure shows, the transformed data is almost linear through the region from 940 to 990 hPa. Below 940 hPa, the data flattens out, reflecting the minimum possible central pressures. A distribution giving a reasonable fit to the shape of the data across the relevant region was therefore adopted: the extreme minimum value distribution known as the Gumbel28 distribution.

Fitting of Gumbel Distribution

Parameters for the Gumbel distribution were required for each of the one hundred segments. Since some segments contained only a limited number of data values, a number of important segments adjacent to major cities or towns were first identified. For each of these, the data was grouped with that of the three segments on either side and the combined data used for the curve fitting.

The data was fitted visually in the manner shown in Figure 6.8. For each of the important segments, the two parameters of the Gumbel distribution were manually adjusted until a satisfactory fit was achieved by eye. The fit was deemed satisfactory when the curve passed through the points in the region between 990 hPa and 930 hPa. Beyond reviewing the resulting curves for reasonableness, no further statistical refinement of the fitted parameters was undertaken. The parameters for the remaining segments were determined by interpolating between the values at each of the important locations.
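The interpolation step can be sketched as below. The anchor values are hypothetical placeholders, since the fitted parameters for each important segment are not tabulated here.

```python
# Linear interpolation of a Gumbel parameter between key segments.
# Anchor segments and alpha values are hypothetical illustrations only.
def interpolate(anchors, segment):
    """Linearly interpolate between anchor segments; flat beyond the ends."""
    pts = sorted(anchors.items())
    if segment <= pts[0][0]:
        return pts[0][1]
    for (s0, v0), (s1, v1) in zip(pts, pts[1:]):
        if s0 <= segment <= s1:
            return v0 + (v1 - v0) * (segment - s0) / (s1 - s0)
    return pts[-1][1]

alpha_anchors = {22: 992.0, 40: 996.0, 75: 994.0}   # hypothetical alpha (hPa)
print(interpolate(alpha_anchors, 31))  # halfway between segments 22 and 40
```

The same routine would be applied independently to both Gumbel parameters (alpha and beta).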

Maximum Potential Intensities

The Maximum Potential Intensity (MPI) of a cyclone is limited by, among other things, the difference between the water surface temperature and the temperature in the upper atmosphere. This difference varies by season and by location. In the remainder of this chapter, the MPI is taken to be the minimum central pressure attainable in each segment.

Holland29 provides one methodology for the determination of MPIs using the temperature differentials for the Pacific and Atlantic Ocean regions. Henderson-Sellers30 (1997) also shows MPIs for a number of locations around Australia for the different months of the year. These values are often contradictory and inconsistent with actual recordings, some of which were much lower than the theoretical minimum pressures.

28 Gumbel, Emil J., 1954, "Statistical Theory of Extreme Values and Some Practical Applications", National Bureau of Standards, Applied Mathematics Series, Number 33, 12 February 1954.

29 Holland G.J., 1997, "The Maximum Intensity of Tropical Cyclones", Bureau of Meteorology Research Centre, Melbourne, Australia

Given these contradictory results, a simple threshold analysis was adopted using the data of Bister and Emanuel31 (1998), who provide a series of maps showing MPIs for Australia in each month of the year. These maps were analysed to determine the minimum pressures at each latitude down the east and west coasts of Australia, and two linear relationships between minimum central pressure and latitude were derived, one for each coast (Figure 6.9 and Figure 6.10). The east coast is defined as any point east of longitude 135 degrees.

Figure 6.9 – Central pressure versus latitude for west coast

30 Henderson-Sellers A., Zhang H., Berz G., Emanuel K., Gray W., Landsea C., Holland G., Lighthill J., Shieh SL., Webster P. and McGuffie K., 1997, “Tropical Cyclones and Global Climate Change: A Post-IPCC Assessment”, American Meteorological Society

31 Bister M. and Emanuel K., 1998, "Hurricane Climatological Potential Intensity Maps and Tables", Massachusetts Institute of Technology, From Web Page: http://www-paoc.mit.edu/~emanuel/pcmin/climo.html

Figure 6.10 – Central pressure versus latitude for east coast
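The two-line MPI floor can be sketched as a small function. The slope and intercept values below are illustrative placeholders, since the fitted coefficients appear only in Figures 6.9 and 6.10; the longitude-135 split follows the text.

```python
# MPI floor on central pressure: two linear latitude relationships,
# split at longitude 135E. Coefficients are hypothetical placeholders.
def mpi_floor_hpa(latitude, longitude):
    """Lowest central pressure (hPa) permitted at this position;
    latitude is negative in the southern hemisphere."""
    east_coast = longitude > 135.0
    slope, intercept = (1.2, 930.0) if east_coast else (1.0, 925.0)
    return intercept + slope * abs(latitude)

print(mpi_floor_hpa(-16.9, 145.7))  # e.g. a point off Cairns (east coast)
```

In the simulation, any sampled central pressure below this floor would be clipped back to the MPI for that latitude and coast.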

Ambient Pressure

The maximum central pressure of each cyclone is set by the ambient pressure of the area surrounding the cyclone. When the central pressure reaches the ambient pressure, there is no inflow of winds and the cyclone ceases to exist.

The historical cyclone database did not contain details of ambient pressures. A small sample of historical hand-drawn synoptic charts was available and was analysed, but provided no further insight: the charts appeared to be based on the judgement of the meteorologists at the time, rather than on actual recordings.

Due to the lack of data for historical events, an attempt was made to determine the average surface pressures across the cyclone-prone regions of Australia. However, any historical analysis of pressure readings also captures the various low-pressure systems that move over each location, so the calculated averages may be lower than the actual ambient pressure that would surround a cyclone.

The choice of an assumption for ambient pressure is as important as the choice of the central pressure, as the modelled wind speeds are determined as a function of the difference between the central and ambient pressures. Therefore, an increase of ambient pressure by 2 hPa with constant central pressure is equivalent to a reduction in central pressure of 2 hPa with constant ambient pressure.


Annual temperature changes and month-to-month changes in ambient pressure both increase the complexity of a model for ambient pressure. In addition, El Niño and La Niña effects may cause changes in weather patterns, so that an analysis of a small number of years of data may not be truly representative of the long-term average. Because of these issues, ambient pressure is assumed to follow a normal distribution with a mean of 1010 hPa and a standard deviation of 1.0 hPa.

Resulting Pressure Distribution by Segment

By combining the cyclone frequency with the central pressure assumptions for each segment, a distribution for expected central pressures was determined for each segment. The probability of a cyclone being generated within each segment with central pressure below a given value p is, using the relationship P(A∩B) = P(A)P(B|A):

P[Cyclone with pressure < p] = P[Cyclone occurring] × P[Pressure < p | Cyclone occurring] (6.4)

Writing f for the annual frequency of cyclones in the segment, F(p) = P[Pressure < p | Cyclone occurring], and ARI for the average recurrence interval, Equation (6.4) becomes:

1/ARI = f F(p) (6.5)

For the Gumbel distribution,

F(p) = 1 - exp(-exp(β(p - α)))

where α and β are the parameters of the Gumbel distribution. Inverting:

p = α + 1/β ln(-ln(1 - F[p])) (6.6)

and substituting Equation (6.5):

p = α + 1/β ln(-ln(1 - 1/(ARI·f))) (6.7)


The central pressures (p) calculated from Equation (6.7) are plotted for various return periods against each segment in Figure 6.11.

Figure 6.11 – Central Pressures (hPa) by Return Period (10 to 1,000 years) versus Segment

The curves shown above can be used to determine the magnitude of cyclone events expected within a given 100 km segment around the Australian coast at different return periods. For example, the 1-in-100-year cyclone expected to pass through the 100 km segment off Cairns (segment 75) would have a central pressure of 960 hPa.
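Equation (6.7) can be written directly as code. The numeric parameters below are illustrative, not the fitted values for any actual segment.

```python
# Equation (6.7): return-period central pressure from Gumbel parameters
# (alpha, beta) and annual crossing frequency f. Parameter values are
# hypothetical illustrations, not fitted thesis values.
import math

def return_period_pressure(alpha, beta, f, ari):
    """Central pressure (hPa) at average recurrence interval `ari` years:
    p = alpha + (1/beta) * ln(-ln(1 - 1/(ARI*f)))."""
    F = 1.0 / (ari * f)          # from 1/ARI = f * F(p), Equation (6.5)
    return alpha + math.log(-math.log(1.0 - F)) / beta

# Hypothetical segment: alpha = 995 hPa, beta = 0.08, one crossing every 2 years
for ari in (10, 100, 1000):
    print(ari, round(return_period_pressure(995.0, 0.08, 0.5, ari), 1))
```

As expected, longer return periods give lower (more intense) central pressures.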

Starting Direction

The 396 crossing records with central pressure values were then analysed to determine the direction that the cyclone eye was travelling at the time of crossing the segments. 317 of the records contained eye movement direction information. The data was summarised by segment.


Since crossing directions at the segment level showed no indication of more than one preferred direction, the Von Mises distribution was adopted. The Von Mises32 distribution, described by Fisher33, is a symmetrical distribution and the most commonly used model for uni-modal samples of circular data.

The density function for the Von Mises distribution is shown in Appendix B. Its parameters are estimated by determining the mean direction μ and the mean resultant length R of each sample, as outlined in Appendix C. The latter is the length of the vector sum of the unit direction vectors divided by the number of observations.

The cyclone-crossing database was analysed to determine the two sample parameters required for the distribution. The directions for each of the crossings are plotted for each segment in Figure 6.12. The mean values for each segment were first interpolated and then progressively smoothed. The figure also shows the seven-point running average.
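The two sample statistics, and the draw of a new heading, can be sketched as below. The headings are dummies, and the concentration parameter is recovered with Fisher's usual piecewise approximation (an assumption; the thesis defers the estimation detail to Appendix C). Python's standard library happens to provide a von Mises sampler.

```python
# Circular statistics for crossing headings: mean direction, mean resultant
# length, approximate kappa, and a von Mises draw. Headings are dummies.
import math
import random

def circular_summary(degrees):
    """Mean direction (degrees) and mean resultant length R-bar in [0, 1]."""
    c = sum(math.cos(math.radians(d)) for d in degrees)
    s = sum(math.sin(math.radians(d)) for d in degrees)
    mean_dir = math.degrees(math.atan2(s, c)) % 360.0
    return mean_dir, math.hypot(c, s) / len(degrees)

def kappa_estimate(r_bar):
    """Fisher's piecewise approximation for the von Mises concentration."""
    if r_bar < 0.53:
        return 2 * r_bar + r_bar**3 + 5 * r_bar**5 / 6
    if r_bar < 0.85:
        return -0.4 + 1.39 * r_bar + 0.43 / (1 - r_bar)
    return 1.0 / (r_bar**3 - 4 * r_bar**2 + 3 * r_bar)

headings = [200, 215, 225, 230, 240, 210]   # illustrative crossings
mu, r_bar = circular_summary(headings)
kappa = kappa_estimate(r_bar)
rng = random.Random(7)
new_heading = math.degrees(rng.vonmisesvariate(math.radians(mu), kappa)) % 360
print(round(mu, 1), round(r_bar, 3), round(new_heading, 1))
```

A large R-bar (tightly clustered headings) yields a large kappa and therefore tightly clustered simulated headings, matching plots like those in Figure 6.14.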

Figure 6.12 – Fitting of mean eye direction (degrees) by segment, showing the data, mean direction, smoothed and fitted values

The selected mean directions are shown against the 100 Australian segments in Figure 6.13.

32 von Mises, R., 1918, Über die “Ganzzahligkeit” der Atomgewichte und Verwandte Fragen, Physikal. Z. 19

33 Fisher N.I., 1993, "Statistical Analysis of Circular Data", Cambridge University Press, Cambridge

Figure 6.13 – Mean eye crossing directions, plotted by latitude and longitude

The mean resultant length was determined from the data at each segment, from which the dispersion parameter for the Von Mises distribution followed. Examples of the resulting distributions are shown in Figure 6.14. The blue region in each plot gives a graphical representation of the relative probability of a cyclone starting its simulated path in any given direction.

Figure 6.14 – Von Mises Distributions at Port Hedland, Thursday Island and Townsville

In the case of Port Hedland, Figure 6.14 shows that for cyclones generated on the adjacent segment, the majority will be heading in a south-westerly direction: a few will commence in a southerly or westerly direction, but there will be none heading in a north-easterly direction.

Forward Speed

As has already been explained, the wind speed experienced at any location is the vector sum of the cyclonic winds and the translational speed of the cyclone eye. Coch34 (1998) showed that a general relationship exists between the latitude of the eye and the forward eye speed for cyclones off the eastern coast of the United States of America. The forward speed increases as the cyclone moves further away from the equator. Higher speeds further from the equator are due to the Coriolis Effect of the earth’s rotation. Cyclones in the Australian region show similar characteristics with greater forward velocities as they move further south.

The 396 crossing speeds are plotted against the segments in Figure 6.15, with the average forward eye speed for each segment and the seven-segment running average indicated. Although there is very little data at the southernmost latitudes, crossing speeds appear to increase in the southern segments, confirming Coch's relationship. Mean values for each segment were first interpolated and then progressively smoothed.

Figure 6.15 – Crossing speed (km/h) by segment, showing the data, segment averages and the seven-segment running average

34 Coch N. K., 1998, "Hurricane Hazards & Damage Patterns in the United States", AIR Client Conference

The starting eye speed is assumed to follow the Normal distribution with the expected values as indicated and a standard deviation of 2.0 km/h for all segments.

Radius of Cyclone Eye

The historical database contains information on cyclone eye radius for only a small proportion of the documented cyclones. Accuracy in observations of eye radius has only been possible since the use of weather satellites but even with this technology, many cyclones do not have a visible eye when viewed from above. Of the 396 crossing cyclones in the database used for analysis of the other parameters, only 14 had a value for the radius.

Coch shows that the radii of cyclones increase as the eye moves further from the equator towards the poles. An analysis of the radius values in the Australian database showed the same general trend, albeit with significant scatter (Figure 6.16). The central red line is fitted to the running average, whilst the two outer lines indicate the 2.5 and 97.5-percentile values.

Figure 6.16 – Analysis of Eye Radius (km) by Latitude, showing the data, fitted average and 2.5/97.5-percentile bounds

A straight line was fitted to the running average for each degree of latitude. Conveniently, the average radius in kilometres corresponded closely to the latitude south.


The uncertainty around this average also needs to be considered. In keeping with common usage and in the absence of better information, a lognormal distribution, conveniently bounded below by zero, was chosen to describe the variation in radii. The standard deviation for each latitude was estimated by inspection of Figure 6.16 as approximately 50% of the average value; this value provides the 95% confidence intervals shown in Figure 6.16.

Event Development

Once the cyclone starting characteristics have been determined, cyclones are simulated at random positions along each segment in keeping with the relative frequency of occurrence for that segment. The parameter set is updated for each three-hour snapshot, and the combination of all of the snapshots, including the starting position, makes up each event.

Forward Motion

The model for movement of each cyclone is based on a time series approach, using the event parameters randomly generated at the starting position for time zero. At each future point in time, the longitude, latitude and direction are modified using a method derived from a random walk, or Markov35, process. The process used has three attributes: first, the probability distribution of all future values depends only on the current value and is unaffected by the history of the process or by other current information; secondly, the process has independent increments; and thirdly, changes over any finite time interval are normally distributed with a variance that increases linearly with the time interval. In fact a special case of the random walk, Brownian motion with drift, was used: the result is a constant forward (drift) speed and random changes in direction.

Each snapshot in the event path is taken at intervals of three hours, an approach that mimics the historic database. For each snapshot, the event location is moved according to a newly chosen random heading; forward speed is assumed constant.

Markov models, when examined at small time steps, exhibit random movement that becomes vanishingly small as the time step shrinks. A real cyclone acts more like a spinning top: a certain momentum is built up, and small changes do not exhibit behaviour consistent with a Markov process. Further improvements to this model could therefore include more sophisticated track modelling to achieve a closer match to reality.

35 Dixit, A.K. and Pindyck, R.S., 1994, "Investment under uncertainty", Princeton University Press, NJ

Warping

The historic database was analysed to determine suitable parameters for each region for the mean and standard deviation of the directional change. There were no significant differences in either parameter across selected regions of Australia within the region of modelling interest (within 100km of the coast). Figure 6.17 shows the results by region: the 95% and 5% levels of confidence are shown, along with the mean for each segment and a running average.

(Figure plots the angular change in degrees per 10 kilometres against segment number, showing the mean, a running average, and the 5% and 95% confidence levels.)

Figure 6.17 – Mean Cyclone direction change by segment

The change in direction is assumed to be a normal distribution with a mean of zero and a standard deviation of twelve degrees.

A selection of model-generated tracks was compared visually against historical data leading to the conclusion that the model was appropriate with some modelled tracks showing reversals in direction or doubling back as is seen in real world events.
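As a concrete sketch, the stepping scheme described above (constant drift speed, a normally distributed heading change with zero mean and a 12-degree standard deviation at each three-hour snapshot) can be written as follows. The starting position, forward speed and the flat-earth distance conversion are illustrative assumptions of this sketch, not values from the thesis:

```python
import math
import random

def simulate_track(lon, lat, bearing_deg, speed_kmh,
                   hours=72, step_h=3, sigma_deg=12.0, rng=None):
    """Sketch of a Brownian-motion-with-drift cyclone track.

    The forward speed is the constant drift; the compass bearing receives
    an independent N(0, sigma) perturbation at each 3-hour snapshot.
    Positions use a crude equirectangular approximation (111 km/degree),
    adequate for illustration only.
    """
    rng = rng or random.Random()
    track = [(lon, lat)]
    for _ in range(hours // step_h):
        bearing_deg += rng.gauss(0.0, sigma_deg)   # random change in heading
        dist_km = speed_kmh * step_h               # constant drift per snapshot
        lat += dist_km * math.cos(math.radians(bearing_deg)) / 111.0
        lon += dist_km * math.sin(math.radians(bearing_deg)) / (
            111.0 * math.cos(math.radians(lat)))
        track.append((lon, lat))
    return track

# Hypothetical example: a cyclone off the Queensland coast heading south-west.
path = simulate_track(148.0, -16.0, 225.0, 15.0, rng=random.Random(1))
```

Replaying the generator with different seeds yields a spread of tracks, including the occasional reversal of direction noted above.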


Central Pressure Changes

The central pressure of a cyclone is driven by the amount of air moving upwards away from the middle of the eye at sea level. The rate at which this takes place is affected by the temperature difference between the sea surface and the upper atmosphere. As this difference in temperatures increases, more air moves upwards and the cyclone’s central pressure falls. The contrary is also true: as the temperature difference weakens so does the cyclone as central pressure increases back towards ambient levels.

As they cross from water onto land, cyclones typically lose intensity, as indicated by a reduction in the pressure differential or an increase in the actual measured central pressure. This is due to the removal of the power source of the cyclone, the warm air rising from the ocean. For the same reason, cyclone intensity tends to decrease as the cyclone moves over colder waters.

A review of the historic data shows that most of the time no major changes in the pressure occur in the last few hours before coastal crossing. This was not the case for Hurricane Andrew in 1992, where a short-term pressure decrease was observed just prior to crossing the Florida coast slightly south of Miami. The cause of the decrease was identified as a change in the shape of the hurricane eye, as a convection band merged with heavy rain clouds that had formed when the western edge of the eye collided with the coastline.

Recent events, including Hurricane Katrina in the United States of America in August 2005 and Cyclone Larry in Australia in March 2006, focused much attention on the accurate measurement of the central pressure at points in time along the track and, in particular, at landfall. For each of these events, the actual central pressure at landfall remains unclear at the time of writing due to the lack of accurate official measurements and the reliance on estimates made from satellite imagery.

In this model, the cyclone central pressure is held constant until one or more of the events outlined below occur.


Landfall

A number of landfall studies have been carried out in the United States in order to model cyclone decay over land. Most recently Vickery36 showed that the reduction over land of the pressure differential parameter ΔP, the difference between ambient and central pressure, varies regionally in the United States. Vickery proposed three different models for the Texas and Gulf of Mexico region, the Florida peninsula, and for the upper East Coast. Each model used a decay rate as a function of the central pressure at point of coastal crossing and the distance that the cyclone eye had moved inland.

For Australia, this reduction after landfall is less important than in the US, since most of the Australian exposure is located close to the ocean. An analysis of the Australian database carried out by the author failed to identify any discernible patterns by region or consistent relationships between pressure change and any other cyclone parameters. On average, the central pressure moved closer to ambient levels by around 3 hPa for every ten kilometres that a cyclone moved over land. This rate was assumed for this model.

Other Assumptions

All cyclones die out eventually, although some have existed for as long as two weeks37. The simulated cyclones were assumed to decay as per the assumptions above when the centre of the eye was located over land. To eliminate simulated cyclones posing no further threat to Australia, the central pressure was assumed to increase by 10 hPa per snapshot if:

• the cyclone eye has moved to more than 500 kilometres from Australia;

• the central pressure has risen to over 1000 hPa; or

• the cyclone has been in existence for more than 14 days.

As mentioned previously, cyclone central pressures were also limited by latitude. The equations used in the determination of starting central pressure distributions were used to limit the minimum pressure of cyclones that moved further south into cooler waters.
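Taken together, the landfall decay rate and the termination rules above amount to a simple per-snapshot pressure update. The sketch below is illustrative only: the state variable names are hypothetical, and the latitude-based minimum pressure limit mentioned above is omitted:

```python
def update_central_pressure(pc_hpa, over_land, km_moved_over_land,
                            km_from_australia, age_days):
    """Apply the decay rules for one 3-hour snapshot (illustrative sketch).

    - A cyclone of no further threat (too far away, too weak, or too old)
      fills rapidly at 10 hPa per snapshot.
    - Over land, pressure rises towards ambient at ~3 hPa per 10 km of
      overland travel.
    - Otherwise the central pressure is held constant.
    """
    no_threat = (km_from_australia > 500.0
                 or pc_hpa > 1000.0
                 or age_days > 14.0)
    if no_threat:
        return pc_hpa + 10.0
    if over_land:
        return pc_hpa + 3.0 * (km_moved_over_land / 10.0)
    return pc_hpa
```

For example, a 960 hPa cyclone that travels 20 km over land in one snapshot fills by 6 hPa under these assumptions.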

36 Vickery, P., Twisdale, L.A., 1995, “Windfield & Filling Models for Hurricane Wind Speed Predictions”, Journal of Structural Engineering

37 Cyclone Rewa, 26 December 1993 to 21 January 1994

Summary

This chapter has outlined the derivation of a simulated event set of tropical cyclones. The following chapter explains how the wind field is calculated along the cyclone track and event losses are determined.


Chapter 7 – Determination of Cyclone Losses

Overview

To this point, for each simulated cyclone event, details of the event at each 3-hourly snapshot point along its entire track have been determined. In order to determine insured losses, the maximum wind speed experienced at each location needs to be estimated. Building damage is related to maximum wind speed. After adjustments are made to wind speed for topography and surface roughness, the resulting wind speed is used to determine the total damage to the buildings and contents. The total net loss from the event is then calculated by applying the insurance terms to each of the exposed concentrations of properties. PML curves and other model statistics may be determined by examining the net results of the entire event set.

Wind Profile Shape

Wind speed estimation involves the determination of both the rotational and forward components of the wind speed. Two different models for the rotational component are compared here, and a third method is developed that performs the required calculations more efficiently.

Holland

Holland's38 formulae provide estimates of the gradient level39 wind speed (V) as a function of distance from the cyclone centre. This wind speed is defined as the ten-minute average that would be recorded at the height of the surface boundary layer, a point approximately 1000 metres above sea level. The formula given by Holland is:

38 Holland, G.J., 1980, “An Analytical Model of the Wind and Pressure Profiles in Hurricanes”, Monthly Weather Review, 108, 1212-1218

39 The gradient level is the height of the top of the boundary layer. Above this level the wind speed is constant and smooth; below it the wind speed decreases to zero at ground level due to surface roughness and the wind is turbulent.

V = sqrt[ A × B × (pn − pc) × e^(−A/r^B) / (ρ × r^B) ] + Coriolis Component    (7.1)

Where:

pn = Ambient pressure, measured in hPa

pc = Central pressure, measured in hPa

r = Distance from centre of cyclone eye in km

ρ = Air density, measured in kg/m3

A = Size scaling parameter

B = Shape scaling parameter

The square of the expected wind speed is directly proportional to the difference between the ambient pressure and the central pressure.

By differentiating with respect to r, the wind speed maximum occurs when

r = A^(1/B)    (7.2)

This radius, the Radius of Maximum Winds (RMW), marks the eye wall, which can be physically seen because condensation forms a wall of cloud that rises into the upper atmosphere.

Figures 7.1 and 7.2 show the effect of varying each of the Holland scaling parameters in turn.


Figure 7.1 – Varying the Alpha parameter

Figure 7.2 – Varying the Beta parameter

Since the distribution for the radius of the eye has already been determined, the distribution for Beta must be estimated so that the Alpha parameter can be modelled as A = R^B, where R is the radius of maximum winds.

Estimates of Beta for cyclones near Australia have been shown by Holland and also by Oliver and Georgiou to vary between 1.0 and 2.5. Limited evidence suggests a relationship between the Beta parameter and latitude, and estimates have been developed for the Atlantic cyclone basin by Vickery (1999), but not for Australia. This correlation has therefore been ignored.

Gust Adjustments

The wind speed determined by the Holland formula is the ten-minute mean wind at the gradient level, that is, the average wind speed at a height of approximately 1000 m above the surface. This wind speed needs to be adjusted to allow for the gusting of the winds, and for the measurement of wind speeds at the height of an average building.

In Australia, engineers have correlated building damage against a notional 3-second sustained gust at a height of 6 metres. In order to use the Holland formula, this must be taken into account. To do this, Walker40 proposed the following formula after assuming a fixed Holland Beta parameter for Australia:

V = sqrt[ 7 × (pn − pc) × (rM/r)^0.67 ]    (7.3)

where:

pn = ambient pressure in hPa

pc = central pressure in hPa

r = distance from centre of cyclone eye in km

rM = radius of maximum winds in km

The formula is only valid for calculations beyond the eye wall, so the wind speed is generally truncated at its maximum value at the eye radius. A comparison of the Holland and Walker formulae is shown in Figure 7.3.

40 Walker, G.R., Senior Consultant, Aon Re Australia Limited (Personal communication).


Figure 7.3 – Comparison of Holland and Walker wind models
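The two profiles compared in Figure 7.3 can be sketched directly from equations (7.1) and (7.3). In this sketch the Coriolis term in (7.1) is neglected, A is taken as RMW^B per equation (7.2), the air density of 1.15 kg/m3 and the sample pressures are assumptions, and the pressure difference is converted from hPa to Pa in the Holland formula so its result is in m/s; the Walker formula is evaluated in the units given in the text:

```python
import math

RHO = 1.15  # assumed air density, kg/m^3

def holland_v(r_km, pn_hpa, pc_hpa, rmw_km, beta):
    """Holland (1980) gradient wind speed with the Coriolis term neglected.

    A = RMW**B follows from equation (7.2); the pressure difference is
    converted from hPa to Pa (an assumed unit convention of this sketch).
    """
    a = rmw_km ** beta
    dp_pa = (pn_hpa - pc_hpa) * 100.0
    return math.sqrt(a * beta * dp_pa * math.exp(-a / r_km ** beta)
                     / (RHO * r_km ** beta))

def walker_v(r_km, pn_hpa, pc_hpa, rmw_km):
    """Walker's formula (7.3), valid only beyond the eye wall; inside it
    the wind speed is truncated at its eye-wall maximum, as the text notes."""
    r = max(r_km, rmw_km)  # truncation at the eye radius
    return math.sqrt(7.0 * (pn_hpa - pc_hpa) * (rmw_km / r) ** 0.67)
```

Both profiles peak at the radius of maximum winds and decay with distance from the eye, as in Figure 7.3.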

Required Calculations

The formulae outlined above provide the rotational wind speed component for a given radius at a fixed point in time. In order to determine the maximum wind speed experienced at a point, from which damage will be inferred, wind speeds must be calculated at every point along the cyclone track. The three-hour interval snapshots are too coarse to do this and interpolation is needed. By making the intervals smaller, a more accurate estimate of losses will be achieved. This is especially important close to the eye where small differences in distance lead to large differences in wind speed (Fig. 7.3). Figure 7.4 shows an example of a wind field created with widely spaced snapshots.

The processing time of the simulation scales with the number of interpolation points within each 3-hour segment. Using jumps of five kilometres would involve ten sets of calculations for each segment if snapshots were 50 km apart. Ten segments in each of the 99,999 events, each with ten points between snapshots, would require approximately ten million wind field calculations. Initial runs using this approach took as long as seven days to complete, depending on the processing speed of the computers used. Further development was undertaken to reduce this processing time.



Figure 7.4 – Cyclone wind speeds (m/s) from analysis with widely spaced snapshots

Daneshvaran/Gardner Wind Model

To overcome the computational problem, a new model, created by Dr S. Daneshvaran (Impact Forecasting) and the author in 2002, was developed to estimate the maximum wind speed at each location for an entire cyclone track segment. This new model was used for determination of maximum wind speeds in this thesis.

The model includes an implicit allowance for forward motion, so that vector calculations are no longer required. In addition, allowance is made for gusting, regional shape and modification to altitude and this methodology does not require specific allowance for wind speeds within the eye wall.

The run time reduction using this method is proportional to the average track segment length divided by the cyclone eye movement interpolation distance. If the average segment length were fifty kilometres and five-kilometre jumps were used, the processing time would be reduced by approximately a factor of ten. Figure 7.5 shows the wind field comparable to Figure 7.4, developed using this new model.



Figure 7.5 – Wind field (m/s) using new model

The method is based on the determination of wind speed for each segment as a combination of winds experienced prior to the starting point, winds either side of the segment and winds beyond the second point in the segment. The formulae for calculation of wind speeds in each of the three regions shown in Figure 7.6 are listed in Appendix D.

(Figure shows a track segment from its start point to its end point, with Region A before the segment start, Region B either side of the segment, and Region C beyond the segment end.)

Figure 7.6 – Three regions in new wind model


The parameters for this model were evaluated by reviewing a selection of actual events crossing the United States from 1886 through 1999. Additional validation was performed by comparing return period winds derived from a combination of this wind model and a previously validated event set against engineering design wind speeds along the United States coast. Further validation against Australian cyclones is outlined in later discussion.

Terrain/Topography Adjustments

The wind speeds determined above are based on the assumption of measurements taken at a flat location possessing low surface roughness, such as an airport site, well away from structures or other natural features that could disturb the airflow. When estimating damage to a property, adjustment needs to be made for the effects of terrain and topography.

Terrain

Surface conditions affect the wind profile via momentum extraction and the generation of turbulence. Turbulence refers to the fluctuations in wind speed about the time-averaged value41 (Vz), and the magnitude of these fluctuations scales with the friction velocity u* defined below.

Near the surface, the mean wind speed (Vz) decreases as momentum is extracted from the mean flow. As the roughness of the underlying surface increases, that is, as the size of the characteristic surface obstacles increases, so does the velocity shear (the change in wind speed with height) and thus the momentum extraction as measured by the friction velocity u*.

Under near-neutral conditions likely to pertain during a cyclone, the mean wind speed increases logarithmically with height (z) as:

Vz = (u*/k) × ln(z/zo)    (7.4)

where:

k is von Karman’s constant taken here as 0.4;

41 Panofsky, H.A. and Dutton, J.A., 1984, "Atmospheric turbulence: models and methods for engineering applications", John Wiley & Sons, NY.

u* is the friction velocity; and

zo is the roughness height broadly related to the average height of the individual roughness elements.

The shear stress (τ) on the surface is calculated as:

τ = ρ × u*^2    (7.5)

where ρ is the density of air.

The adjustment due to changes in surface roughness is estimated through reference to the Australian Wind Engineering Code AS1170.2-1989, which outlines a methodology for determining wind multipliers to adjust for changing surface conditions. These are based on values of zo. For easy characterisation, terrain types are grouped into four main categories:

1. Open terrain, with no significant obstructions

2. Open terrain such as grassland, with few obstructions

3. Suburban areas and wooded terrain

4. City buildings

The full selection of possible terrain types, along with the corresponding roughness lengths, is outlined in Table 7.1.


Terrain Type                                            AS1170.2 Terrain Category   Roughness length zo (m)

City buildings                                          4                           2.0

Forests                                                                             1.0

High density metropolitan                                                           0.8

Centres of small towns                                                              0.4

Level wooded country and suburban buildings             3                           0.2

Few trees or long grass (600mm)                                                     0.06

Crops                                                                               0.04

Rough open water at high wind speeds, isolated          2                           0.02
trees, uncut grass and airfields

Cut grass (10mm)                                                                    0.008

Desert (stones)                                                                     0.006

Natural snow surface (flat)                             1                           0.002

Table 7.1 – Terrain adjustment parameters

(For many uniform surfaces, zo is roughly 10% of the average height of the roughness elements.)

The terrain multiplier, Te, follows from equation (7.4):

Te = (u*A/u*B) × ln(z/z0A) / ln(z/z0B)    (7.6)

Where:

z is the distance or height above ground where wind speed is measured, taken here as 6m


A and B refer to the location where terrain adjustment is required and the base terrain location respectively.

The ratio u*A/u*B has been assumed to equal to 1.0 for purposes of this thesis. In reality, it will vary as a function of the imposed wind speed and the underlying surface conditions.

The Te multiplier ranges from 38% in high density housing to 130% in desert areas. The modified terrain multiplier, achieved through linear scaling, ranges from 70% to 120%.
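With u*A/u*B taken as 1.0 as assumed above, equation (7.6) reduces to a ratio of logarithmic profiles. A sketch using roughness lengths from Table 7.1, with the measurement height z = 6 m from the text; the choice of Terrain Category 2 (zo = 0.02 m) as the base terrain is an assumption of this sketch:

```python
import math

def terrain_multiplier(z0_site_m, z0_base_m=0.02, z_m=6.0):
    """Terrain multiplier Te of equation (7.6), assuming u*A/u*B = 1.0.

    z0_site_m is the roughness length of the site of interest (Table 7.1);
    the base roughness and the 6 m height are this sketch's assumptions.
    """
    return math.log(z_m / z0_site_m) / math.log(z_m / z0_base_m)

# Rougher terrain slows the near-surface wind; smoother terrain speeds it up.
te_metro = terrain_multiplier(0.8)     # high density metropolitan: Te < 1
te_desert = terrain_multiplier(0.006)  # desert (stones): Te > 1
```

The resulting values bracket 1.0 in the directions the text describes, with rough urban terrain well below unity and smooth desert terrain above it.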

Topography

Topography refers to the natural contours of the landscape. When air ascends a hill, streamlines converge rapidly and the air accelerates. Ignoring the changes in surface roughness addressed in the previous section, the tops of hills are therefore expected to experience faster winds than gullies and lower-lying positions.

AS1170.2-1989 also outlines a methodology for determining the aerofoil effects of different topographical features. The topographic multiplier, Mt, is given by

Mt = 1 + (Kt × s × φ) / (1 + 3.7 × (σv/Vz))    (7.7)

Where:

Kt = topographic type factor, a function of whether the feature is on the top of a hill or ridge, or on the side of an escarpment

s = position factor for topographic effects

φ = effective slope of the topographic feature

σv/Vz = turbulence intensity, as outlined in the standard

The methodology defines a second multiplier to adjust the modelled wind field for topography.
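Equation (7.7) is a direct lookup once Kt, s, the effective slope and the turbulence intensity are read from the standard. The sketch below simply transcribes it; the parameter values in the example are hypothetical, not taken from AS1170.2-1989:

```python
def topographic_multiplier(kt, s, phi, turb_intensity):
    """Topographic multiplier Mt of equation (7.7).

    kt: topographic type factor (hill/ridge versus escarpment)
    s: position factor for topographic effects
    phi: effective slope of the topographic feature
    turb_intensity: sigma_v / Vz, as outlined in the standard
    """
    return 1.0 + (kt * s * phi) / (1.0 + 3.7 * turb_intensity)

# Flat ground (zero effective slope) leaves the wind unchanged, while a
# positive slope near a hill top increases it (illustrative values).
mt_flat = topographic_multiplier(1.0, 1.0, 0.0, 0.1)
mt_hill = topographic_multiplier(1.0, 0.7, 0.2, 0.1)
```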

Determination of Australian Terrain and Topography Categories by Location

Satellite imagery now makes determination of the type of land coverage possible. However, imagery at the resolution necessary for determination of terrain and topography multipliers is only publicly available for the larger cities. Imagery for the remainder of Australia does exist, but was too costly for the purposes of this thesis.

The approach has been to employ detailed topographical and aerial photography maps for the cities of Brisbane, Townsville, Cairns and Darwin, and to use land coverage information at a coarser resolution for the remainder of the country. The Australian Surveying and Land Information Group (AUSLIG) provided maps at a 1:25000 scale for the areas mentioned above. These were compared to maps of the same regions containing Census Collection District (CCD) boundaries and postcode boundaries.

The terrain in each CCD was determined for these regions and entered into a database to facilitate mapping. Topography was separately reviewed to classify locations as relatively flat ground, the top of a hill or ridge, or the side of a slope or escarpment. Wind direction was assumed to be from the direction of the nearest coastline.

The wind adjustment factors were determined for each of the various combinations of terrain and topography and merged with the database of exposure based on location. The CCD results were combined by postcode, as exposure is modelled only to this spatial scale. The terrain and topography assumptions for Brisbane are shown in Figures 7.7 and 7.8.

Figure 7.7 – Brisbane Terrain

(Yellow=Terrain 2, Light Blue=Terrain 2.5, Dark Grey=Terrain 3)


Figure 7.8 – Brisbane Topography

(Yellow=Hill or Ridge, Light Green=Escarpment, Dark Green=Flat)

Land coverage information provided by the Geological Survey of Canada (GSC) contained descriptive polygon files for terrain types across all of Australia. This information was reviewed for consistency with the more detailed analysis undertaken for Brisbane and other cities to confirm its accuracy. Topography was included implicitly in the various categories of terrain type in the GSC database.

The roughness factors were determined for each of the categories of terrain in the GSC database and the resulting wind adjustment factors determined. The results were merged with the results from the detailed analysis to complete the collection of terrain and topography adjustment factors for all of Australia, shown in Figure 7.9.



Figure 7.9 – Selected Terrain/Topography adjustment factors

The approach has been to match the level of detail to the exposures, making do with lower resolution information outside the major centres. It is accepted that this adjustment for local roughness and topography is at best notional, but the reader is reminded that the purpose is not to estimate likely damage to particular buildings but to estimate the average or ensemble losses for similarly exposed buildings at a postcode level. Moreover, the deficiencies in the terrain and topographic adjustments are hidden within the high uncertainty surrounding the building vulnerability curves (see later).

Hazard Model Validation

The model for the calculation of maximum wind speeds was validated both against recorded wind speeds from historic events and against engineering models for return period winds at given locations.

Wind Speed by Event

Each of the twelve largest insured loss-causing historic cyclone events listed in Table 7.2 was modelled using the approach described previously. The 1974 flooding in Brisbane from Cyclone Wanda and the 1998 flooding in Townsville due to Cyclone Sid were classified as floods rather than cyclones and are not listed here.

The maximum wind speeds were determined for each event and compared to actual wind speeds recorded during each of the events. Wind speeds for Cyclone Nancy in 1990 (marked *) were the maximum measured over land. All other recorded and modelled wind speeds represent the maximum wind speed at any point along the cyclone track and were taken from the Bureau of Meteorology database.

Cyclone    Year   Recorded Winds (km/h)   Modelled Winds (km/h)   Error (%)
Tracy      1974   241                     227                     -6%
Madge      1973   148                     164                     10%
Althea     1971   120                     124                     3%
Ada        1970   130                     116                     -10%
Winifred   1986   167                     192                     15%
Joy        1990   167                     203                     22%
Ted        1976   187                     179                     -4%
Joan       1975   250                     233                     -7%
Hazel      1979   185                     193                     4%
Nancy *    1990   135                     127                     -6%
Alby       1978   213                     205                     -4%
Aivu       1989   222                     241                     9%

Table 7.2 – Wind speed by event

In most cases, modelled wind speeds lay within 15% of the actual speeds.

Wind Speed by Region

Tryggvason provides return period wind speeds for all cyclone prone regions in Australia while Oliver and Georgiou and Harper42 have done the same for Queensland. Tryggvason also includes summarised results from four other studies, including an approach involving a statistical analysis using a modified Gumbel distribution known as the Dorman method43 .

42 Harper, B. A., 1999, “Numerical Modelling of Extreme Tropical Cyclone Winds”, Journal of Wind Engineering and Industrial Aerodynamics 83

43 Dorman (modified Gumbel) method, used as guidance in development of Australian wind loading code, Standards Association of Australia, 1989, "Minimum design loads on structures (known as the SAA Loading

Engineering standard AS1170.2-1989 gives 50-year wind speeds for coastal and inland regions around Australia; the results for certain of the regions in this standard are based on the Dorman method.

Return period wind speed data was also identified for Brisbane and Townsville airports, but it includes data attributable to non-cyclonic storms such as thunderstorms and winter cold fronts. The airport data was therefore not used in this analysis.

Table 7.3 shows the modelled wind speeds at each return period for a selection of airports and town centres around the cyclone-prone coastline of Australia.

City 20 50 100 250 500 1000 (return periods in years; wind speeds in m/s)

Brisbane 27 40 47 54 59 65

Rockhampton 24 33 41 50 55 58

Mackay 31 46 53 61 66 69

Townsville 33 45 52 60 64 66

Cairns 33 47 54 63 67 73

Darwin 30 40 46 51 55 59

Broome 47 56 61 68 72 76

Port Hedland 46 57 63 70 76 79

Onslow 50 61 68 76 79 82

Carnarvon 41 56 63 69 74 79

Geraldton 22 45 54 64 69 74

Table 7.3 – Model return period wind speeds

Figures 7.10 and 7.11 respectively show the modelled 3-second gust wind speeds versus the other models and various engineering design wind speeds for the 50 and 500-year return periods. These estimates are much lower than the design code estimates, as might be expected given the conservative nature of engineering design. For much of the east coast and for the 50-year return period, the estimates lie between those of Tryggvason and Harper. For the west coast the estimates are slightly higher than those of Tryggvason.

Code) - Wind loads", AS1170.2-1989. See also Leicester, R. H., Bubb, C. T. J., Dorman, C., and Beresford, F. D., 1979, "An assessment of potential cyclone damage to dwellings in Australia", Proc. 5th Int. Conf. on Wind Engineering, J. E. Cermak, ed., Pergamon, New York, 23-36

At a 500-year return period, the estimates are for the most part in excess of those of Tryggvason and fall between the design code limits for permissible and ultimate stress.

The design wind speeds shown in figures 7.10 and 7.11 come from AS1170.2-1989. They roughly correspond to the return periods shown in Table 7.4.

Wind speed reference Definition Approximate return period

Vs Serviceability 20 years

Vp Permissible stress 50+ years

Vu Ultimate – for strength limit state 500 years (formerly 1000 years)

Table 7.4 – AS1170.2-1989 Basic wind speeds

(Figure legend: the current model (Gardner 2005) is compared against Oliver & Georgiou, Tryggvason, Gomes & Vickery, Martin & Bubb (Sim), Martin & Bubb (Obs), Storman, Harper, and the Design Ultimate, Design Permissible and Design Serviceability wind speeds, for Brisbane, Mackay, Townsville, Cairns, Darwin, Broome, Port Hedland, Onslow, Carnarvon and Geraldton.)

Figure 7.10 – 50 year return period wind speeds for current model versus previous research



Figure 7.11 – 500 year return period wind speeds for current model versus previous research

In summary, the wind speeds at the various return periods show satisfactory agreement both with the analyses from the previous work referred to above and with the Australian design wind speeds.

Figure 7.12 shows the 50-year return period wind speeds geographically.


Figure 7.12 – 50 year return period wind speeds

Building Vulnerability Functions

The approach follows that of Sciaudone et al.44 with modifications for Australian design codes. Damage functions describe the building and contents damage as a percentage of value with increasing wind speed. For each simulated event, the damage at each location is determined as the percentage damage for the maximum gust speed multiplied by the insured building or contents values. Insured losses are determined from the estimated damage after allowing for any policy limits or deductibles.

Damage functions developed by Impact Forecasting LLC were used to determine losses for each location as a function of the maximum event wind speed. The damage functions, developed for analysis in the United States of America, represent a wide range of construction ages, materials, design building codes and regions. Representative Australian damage functions for purposes of this thesis were chosen from this database based on a review of the Australian Building Code specifications outlined in AS1170.2-1989.

44 Sciaudone, J.C., Feuerborn, D., Rao, G., Daneshvaran, M.T.S., 1997, "Development of Objective Wind Damage Functions to Predict Wind Damage to Low-Rise Structures", Eighth U.S. National Conference on Wind Engineering, Baltimore MD

Damage curve format

The damage functions are represented as piecewise linear distributions. Each function comprises the mean and standard deviation of the damage at each of sixteen wind speeds, starting at 0 metres per second and with a maximum of 250 metres per second. The spread of losses about the mean at any wind speed was assumed to be described by a Beta distribution.

Contents losses follow from the building damage. Sample damage functions are shown in Figures 7.13 and 7.14 for building and contents respectively.
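The piecewise-linear format and the Beta spread described above can be sketched as follows. The node values here are hypothetical (a real function has sixteen nodes from 0 to 250 m/s), and the Beta parameters are obtained by matching the interpolated mean and standard deviation via the method of moments, which is one reasonable way to realise the stated assumption:

```python
import random

# Hypothetical damage-function nodes: (wind speed m/s, mean damage, std dev).
NODES = [(0.0, 0.00, 0.00), (30.0, 0.02, 0.02),
         (50.0, 0.20, 0.10), (70.0, 0.60, 0.15), (100.0, 0.95, 0.03)]

def interpolate(v):
    """Piecewise-linear mean and standard deviation of damage at wind speed v."""
    if v <= NODES[0][0]:
        return NODES[0][1], NODES[0][2]
    for (v0, m0, s0), (v1, m1, s1) in zip(NODES, NODES[1:]):
        if v <= v1:
            w = (v - v0) / (v1 - v0)
            return m0 + w * (m1 - m0), s0 + w * (s1 - s0)
    return NODES[-1][1], NODES[-1][2]

def sample_damage(v, rng=None):
    """Draw a damage ratio from a Beta distribution matched (by moments)
    to the interpolated mean m and standard deviation s."""
    rng = rng or random.Random()
    m, s = interpolate(v)
    if s == 0.0:
        return m
    k = m * (1.0 - m) / (s * s) - 1.0  # valid when s^2 < m(1-m)
    return rng.betavariate(m * k, (1.0 - m) * k)
```

Multiplying a sampled damage ratio by the insured value at a location then gives the loss contribution for one simulated event.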


Figure 7.13 – Sample Building Damage Function



Figure 7.14 – Sample Contents Damage Function

Design Code

AS1170.2-1989 contains specifications for design for four coastal regions of Australia. These four design regions, defined geographically in the code, represent the different levels of risk of high wind speeds. The region boundaries are outlined in Table 7.5.

Region                                                      0 to 50 km inland   50 to 100 km inland

West coast, latitudes 25-30 degrees south                   B                   B

West coast, latitudes 20-25 degrees south                   D                   C

West coast latitude 20 degrees south across the northern
coastline to east coast latitude 25 degrees south           C                   B

East coast, latitudes 25-30 degrees south                   B                   B

All inland and other regions                                A                   A

Table 7.5 – Wind code region boundaries


The Australian postcode database was indexed to the design code regions and is shown in Figure 7.15.

Figure 7.15 – AS1170.2-1989 Design code regions

(Green=A, Red=B, Orange=C, Blue=D)

Damage functions for wood buildings in each of the four regions are shown in Figure 7.16.



Figure 7.16 – Damage Functions by AS1170.2-1989 design code region

Year of Construction

As explained earlier, the wind loading standard introduced in Australia in 1989 led to improved construction practices in the following years in the regions of highest risk, AS1170.2-1989 regions B, C and D. The construction code changes affected the ability of structures to withstand cyclonic scale winds, first in Queensland, then in other States. For this reason, industry exposure has been allocated into either pre- or post-1980 construction.

The damage functions were developed for Australia for each of the four regions based on the post-1980 design code. For buildings constructed prior to 1980, damage functions in each of the three higher risk areas were mapped to the damage functions of the next lesser code level. The damage function mapping is shown in Table 7.6.

Region   Construction Pre-1980       Construction Post-1980

D        Mapped to current code C    Mapped to current code D

C        Mapped to current code B    Mapped to current code C

B        Mapped to current code A    Mapped to current code B

A        Mapped to current code A    Mapped to current code A

Table 7.6 – Damage function code level by Region and Year of Construction
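The mapping in Table 7.6 amounts to stepping pre-1980 construction down one code level. A minimal lookup, assuming region labels A to D as in the table:

```python
def damage_function_code(region, pre_1980):
    """Design-code level used to select a damage function (per Table 7.6).

    Pre-1980 construction in regions B, C and D is mapped down one code
    level; region A and all post-1980 construction keep their own level.
    """
    if not pre_1980:
        return region
    step_down = {"D": "C", "C": "B", "B": "A", "A": "A"}
    return step_down[region]
```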


The mapping above may overestimate the strength of pre-1980 buildings in zones C and D, since the code level across all regions includes improvements over the general standard prior to 1980. However, it is possible that in various regions, following the introduction of the code, retrofitting efforts have been made; in this manner, older buildings have been modified to conform to the current code to some extent. Because such retrofitting is generally limited to the addition of specific tie-downs or modifications to roof connectors, as opposed to a complete reconstruction of the building, the strength would remain inferior to that of a building built from scratch under the new code.

Construction Material

For purposes of this thesis, Australian residential exposure was grouped into two building material types: wood and masonry. On average, although not in every case, experience has shown higher damage levels for wood construction than for masonry, with the difference depending on particulars of design and adherence to code in each region.

A review of damage to buildings in Darwin from Cyclone Tracy in December 1974 was undertaken through analysis of information gathered by the Territory Insurance Office (TIO) following that event. A database was compiled of every building in Darwin, including such fields as street address, construction material, roof design type and percentage damage. Results of the analysis by construction type are shown in Table 7.7.

Construction Average Damage

Brick 43.7%

Cement 54.2%

Fibro/Brick 57.1%

Fibro 64.0%

Other 59.5%

Unknown 30.5%

Total 55.4%

Table 7.7 – Cyclone Tracy Damage

It is clear by inspection that fibro buildings, assumed to have been built with wooden frames, experienced an average level of damage significantly greater than brick construction.

Submissions to the Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) include a requirement for catastrophe model firms to show sample results for a test portfolio outlined by the FCHLPM. The submissions are publicly available via the FCHLPM website45 and include such information as average damage from pre-defined events for different construction classes, thereby indicating the relative damage functions for different construction materials.

Table 7.8 shows the information provided to the FCHLPM by two major catastrophe-modelling firms, together with results from Impact Forecasting LLC, which completed the analyses but did not make a formal submission. All firms assume greater vulnerability for wood-frame than for masonry construction.

Modeller Name                        Wood Frame Damage   Masonry Damage   Difference

Impact Forecasting LLC               3.84%               3.08%            +25%

ToPCat/Tillinghast – Towers Perrin   3.69%               3.32%            +11%

Catalyst 3.0                         4.18%               3.30%            +27%

Table 7.8 – Florida Loss Commission Damage Ratios

The damage functions used in this thesis, developed using an offline simulation of the behaviour of components under increasing physical stress loadings, have wood damage approximately 22% greater than masonry damage in the range of expected cyclonic wind speeds. Differences between the wood and masonry damage functions are shown for a selected combination of design code region and year of construction in Figure 7.17.

45 Florida Commission on Hurricane Loss Projection Methodology website located at http://www.sbafla.com/methodology/

[Figure: building damage (0–80%) against wind speed (20–80 m/s) for wood and masonry damage functions]

Figure 7.17 – Comparison of Wood and Masonry Damage Functions

These differences appear consistent with the experience in Cyclone Tracy and the damage functions used by the ToPCat and Catalyst models.

Recent Experience

A recent study by Aon Re46 reviewed losses from Cyclone Larry in the Queensland town of Innisfail. The analysis involved a review of the ratios of insured loss to building insured value for buildings of different construction types and by building year of construction. A summary of the results for postcodes affected by Cyclone Larry is shown in Table 7.9. Values in Table 7.9 marked with an asterisk indicate combinations of construction type and year of construction with fewer than 50 insured buildings; these results may not be statistically significant.

46 Aon Re Australia Limited, 2006, “Lessons from Cyclone Larry”, Unpublished research

Construction   Post 1980   Pre 1980   Unknown   All Data

Wood           3.5%        6.1%       5.1%      5.3%

Brick          1.7%        4.2%       3.1%      2.3%

Concrete       1.6%        4.8%       0.4%*     2.0%

Fibro          2.7%        4.6%       5.4%*     4.2%

Iron/Steel     5.0%        13.7%*     NA*       7.4%

Other          2.4%*       7.7%*      NA*       5.6%

Total          1.9%        5.2%       4.3%      3.2%

Table 7.9 – Loss Damage Ratios from Cyclone Larry

It can be seen that for all construction types the losses from buildings built prior to 1980 were approximately double those for post-1980 construction, suggesting that the relativities between the different damage functions in the model by age of construction are appropriate.

The relativities between different construction types appear consistent with the results from the Darwin study alluded to earlier and the assumptions used in the model, with the exception of the loss ratios for Iron/Steel. The results seen in this category are driven by a small number of large commercial risks which are not representative of commercial risks across Australia.

It is concluded that the analysis of losses from Cyclone Larry supports the use of the damage function assumptions in this model.

Coverage Type

As previously mentioned, damage functions for contents are a function of the mean building damage. For purposes of this thesis, the contents damage functions, expressed as a function of building damage, were assumed to be the same in each of the four design code regions, for each range of year of construction and for each construction material type. However, if the contents damage functions are considered as a function of wind speed, then the contents damage levels will vary across each of these combinations.


Loss Calculations

The cost of physical damage was calculated by multiplying the building and contents damage ratios determined from the simulated events by the values of exposure in each category at each location. The total damage for each event was determined as the sum of losses at all locations.
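The loss calculation described above can be sketched as follows. The postcode labels, exposure values and damage ratios are illustrative only, not values from the thesis exposure database.

```python
# Sketch of the ground-up loss calculation: damage ratio x exposed value,
# summed over all locations for one simulated event. All numbers are
# illustrative.

def event_loss(damage_ratios, exposures):
    """Total ground-up loss for one simulated event.

    damage_ratios: {location: (building_ratio, contents_ratio)}
    exposures:     {location: (building_value, contents_value)}
    """
    total = 0.0
    for loc, (b_ratio, c_ratio) in damage_ratios.items():
        b_value, c_value = exposures[loc]
        total += b_ratio * b_value + c_ratio * c_value
    return total

exposures = {"4870": (500e6, 250e6), "4879": (120e6, 60e6)}
ratios = {"4870": (0.05, 0.03), "4879": (0.10, 0.06)}
print(round(event_loss(ratios, exposures)))  # 48100000
```

Summing the resulting event losses over all simulated events, weighted by event frequency, yields the annual average loss used later in this chapter.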

Deductibles

The insured losses were estimated by subtracting the deductibles at each location from the total damage losses. The deductibles in all locations were assumed to be 1% of the total value. The insured policy limits were assumed to be equal to the total value.

Modelling the deductibles required some care. Simply taking the mean damage for the gross loss and then allowing for an identical deductible could underestimate possible losses. For example, if 100 buildings with 1% deductibles take on average 0.9% damage, then using the mean only would result in zero net insured loss. In reality, some of the buildings would incur losses greater than 1%, so the net loss is non-zero.

The mean loss at each location, after allowing for the deductible and the variance in damage at each wind speed, was estimated by taking a sample of one hundred points from the damage distribution at each location, interpolated linearly between the damage function points. The losses represent the mean of this distribution after allowing for deductibles.

As 100% of the building and contents exposures are assumed to be insured for 100% of their value, the net loss represents the total insured loss.
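The point about sampling rather than using the mean alone can be checked numerically. The sketch below assumes normally distributed damage truncated at zero; the standard deviation used is an illustrative value, not one from the thesis damage functions.

```python
import random

random.seed(1)

def mean_net_loss(mean_damage, sd_damage, deductible, value, n=100):
    """Mean insured loss after a deductible (as a fraction of value),
    estimated by sampling the damage distribution rather than using
    the mean damage alone."""
    total = 0.0
    for _ in range(n):
        damage = max(0.0, random.gauss(mean_damage, sd_damage))
        total += max(0.0, damage * value - deductible * value)
    return total / n

# Mean damage of 0.9% against a 1% deductible: using the mean alone
# gives a zero net loss, but sampling the distribution gives a
# positive expected net loss, as argued in the text.
naive = max(0.0, 0.009 - 0.01) * 100_000
sampled = mean_net_loss(0.009, 0.005, 0.01, 100_000)
print(naive, sampled)
```

The gap between the two estimates grows with the variance of damage at a given wind speed, which is why the deductible calculation cannot be collapsed to the mean.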


Demand Surge

When catastrophic events occur, the damage to exposed property may be such that immediate repairs are not possible due to a lack of material supplies and physical labour for construction. In these cases, the surge in demand leads to increased pricing of construction materials, labour and associated services, an effect known as “demand surge”. The demand surge experienced in historic cyclone events, based on estimates provided by Dr George Walker and Dr Siamak Daneshvaran, is shown in Table 7.10.

Event Date Demand Surge

Cyclone Tracy, Australia December 1974 +80%

Hurricane Andrew, U.S.A. August 1992 +35%

Hurricane Fran, U.S.A. September 1996 +25%

Hurricane Opal, U.S.A. October 1995 +30%

Table 7.10 – Demand Surge from Historic Cyclone Events

Following discussions with Dr Walker and Dr Daneshvaran during 2001, it was decided that an arbitrary maximum demand surge level for the largest events be set at 50% of total cost. The demand surge was modelled as a function of the industry total loss, in relation to return period. An individual demand surge multiplier was determined for each simulated event, as described below.

The PML analysis was undertaken on the industry exposure developed in Chapter 4. The losses were ranked and the return period calculated for each event as outlined below. Demand surge factors were applied to each event as a function of return period, based on interpolated values between the numbers shown in Table 7.11 below.


Return Period (years) Demand Surge Factor

Up to 10 0%

10 0%

20 6.7%

50 13.3%

100 20%

250 30%

500 40%

1000 50%

Over 1000 50%

Table 7.11 – Demand Surge Factors for Cyclone
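The text states only that factors were interpolated between the tabulated points, without specifying the interpolation rule. As one plausible reading, the sketch below interpolates linearly in the logarithm of the return period; that choice is an assumption, not something stated in the thesis.

```python
import math

# Tabulated (return period, demand surge factor) points from Table 7.11.
TABLE = [(10, 0.0), (20, 0.067), (50, 0.133), (100, 0.20),
         (250, 0.30), (500, 0.40), (1000, 0.50)]

def demand_surge(return_period):
    """Demand surge factor for a given return period, interpolating
    linearly in log10(return period) between the tabulated points.
    (The log-linear rule is an assumption; the thesis says only
    'interpolated values'.)"""
    if return_period <= TABLE[0][0]:
        return 0.0
    if return_period >= TABLE[-1][0]:
        return TABLE[-1][1]
    for (rp0, f0), (rp1, f1) in zip(TABLE, TABLE[1:]):
        if rp0 <= return_period <= rp1:
            w = (math.log10(return_period) - math.log10(rp0)) / (
                math.log10(rp1) - math.log10(rp0))
            return f0 + w * (f1 - f0)

print(round(demand_surge(100), 3))  # 0.2, a tabulated point
print(demand_surge(5000))           # 0.5, capped at the maximum
```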

In addition, for cyclone events commencing in segments between segment 16 (Winning) and segment 81 (Lindeman Island) it was assumed that there would be additional demand surge on account of the remoteness of these locations. The maximum demand surge factor applicable for events generated in each segment was increased for segments in this area. The increase was gradual from a fixed +50% maximum demand surge outside this region up to +100% between segment 25 (Eighty Mile Beach) and segment 72 (Cape Melville), as shown in Figure 7.18.

[Figure: maximum demand surge (0–110%) against coastal segment number (0–100)]

Figure 7.18 – Maximum Demand Surge by Coastal Segment


Demand surge factors were applied to ground up losses, before any insurance deductibles were taken into account.

It should be noted that the treatment of demand surge in this thesis is relatively simple compared to the complexities that lead to demand surge in reality. A more comprehensive approach, outside the scope of this thesis, could involve modelling shifts in labour forces due to economic factors, differences in population growth and structure in different regions and the effects of multiple events in a given time period.

Inflation and Discounting

The exposure in this thesis is assumed to be current as at 31 December 2003. Any extrapolation of the results in this thesis to future dates needs to allow for growth in property values and exposure growth.

As most property claims are settled in reasonably short time frames, investment delays and discounting of future repairs have been ignored for the purpose of this thesis.

Model PML Results

The 99,999 cyclone events have been generated using the assumption that the probability of occurrence is equal for all events. As the total frequency of all events has been determined to be 6.5 annually, the frequency of each event is 6.5×10⁻⁵.

The Exceedance Probability of any event is defined as the probability of experiencing an event loss equal to or greater than that event's loss. If event losses are ranked in descending order, then the exceedance probability equals 6.5×10⁻⁵ times the rank. The return period is the inverse of the exceedance probability. The annual average loss is equal to the sum of the products of the event probabilities and the event losses.
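The exceedance probability calculation just described can be sketched in a few lines; the event losses and total frequency below are illustrative, not the cyclone event set.

```python
# Each of N equally likely events has frequency f = total_frequency / N.
# Ranking losses in descending order gives EP = f * rank, return period
# = 1 / EP, and the average annual loss is the frequency-weighted sum
# of the event losses.

def ep_curve(event_losses, total_frequency):
    f = total_frequency / len(event_losses)
    ranked = sorted(event_losses, reverse=True)
    curve = [(loss, f * rank, 1.0 / (f * rank))
             for rank, loss in enumerate(ranked, start=1)]
    aal = f * sum(event_losses)
    return curve, aal

losses = [100.0, 400.0, 250.0, 50.0]   # illustrative event losses ($m)
curve, aal = ep_curve(losses, 2.0)     # assumed 2 events per year
print(curve[0])  # largest loss: (400.0, 0.5, 2.0)
print(aal)       # 0.5 * 800 = 400.0
```

Reading a PML at a given return period then amounts to interpolating the (loss, return period) pairs in this curve.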

The PML results for Australian cyclone hazard determined from the analysis in this chapter are shown in Table 7.12.


Return Period (years)   Gross Insured Loss ($m)   Gross Insured Loss with Demand Surge ($m)

10,000 23,659 35,488

5,000 15,527 23,291

2,000 11,939 17,908

1,000 8,281 11,945

500 5,101 6,983

250 3,799 4,699

100 1,801 2,164

50 1,048 1,161

20 501 522

10 244 247

Average Annual Loss 138 157

Table 7.12 – Australian cyclone PML results

These model results are compared to actual historic losses and PML curves from different perils in Chapter 11 below.


Chapter 8 - Development of a GIS Earthquake analysis based on a published model

Earthquakes

Earthquakes result from fractures in the earth's crust and mantle, which occur as a result of relative movements of tectonic plates. Stress builds up as the plates push or pull against each other until the rocks shear and the stored energy is released as an earthquake. Seismic waves carry this energy to the ground surface, causing levels of damage that generally decrease with distance from the source.

One commonly used measure of earthquake hazard is the ground shaking intensity with a 10% chance of exceedance in 50 years. This equates to a 475 year return period hazard. Global maps developed by various agencies incorporate knowledge of global tectonics as well as local information available in each region. Figures 8.1 and 8.2 show the Global Seismic Hazards Assessment Program (GSHAP)47 maps for the entire globe and Australia respectively.

[Legend: PGA (%g) in bands 0–2, 2–4, 4–8, 8–16, 16–24, 24–32, 32–40, 40–48, 48+]

Figure 8.1 - Global Earthquake 50 year 10% Peak Ground Acceleration (%g)

47 International Lithosphere Program, 1999, “Global Seismic Hazard Assessment Map”

Earthquakes in Australia

Approximately 95% of the world's earthquakes occur on plate boundaries. Australia lies within a plate and thus does not experience the high levels of earthquake activity associated with plate boundaries; compared with many of its neighbours, Australia experiences relatively low earthquake risk. Earthquakes occurring within a plate are called "intra-plate" earthquakes.


Figure 8.2 - Australian Earthquake 50 year 10% Peak Ground Acceleration (%g)

The largest historical earthquake in Australia occurred off the coast of Western Australia in 1906 and measured 7.4 on the Richter scale (ML 7.4). The most costly earthquake was the Newcastle earthquake of 10 December 1989. This earthquake, which measured ML 5.6, caused 13 deaths and damages of approximately $2,200 million in 2003 dollars, allowing for inflation and population growth as will be outlined in Chapter 9.

Earthquake Models for Australia

Gaull et al.48 present a methodology for analysis of earthquake risk in Australia. The authors define a number of polygon regions around Australia as earthquake sources, together with parameters that describe the frequency-magnitude relationship for seismic activity in each of these regions. The paper also gives attenuation relationships that allow determination of the ground shaking intensity at locations distant from the source.

48 Gaull, B., Michael-Leiba, M.O., and Rynn, J.A.W., 1990, “Probabilistic Earthquake Risk Maps of Australia”, Aust. J. Earth Sciences, 37, 169-187

Rather than research and develop a new earthquake model based on the historic event record, as was done for the cyclone model described previously, the earthquake analysis in this thesis simply implements the Gaull et al. model for the hazard estimation and uses damage functions proprietary to Impact Forecasting LLC. The model was run on the industry exposure database developed in Chapter 4 in order to determine earthquake PML results for easy comparison with the Australian cyclone PML curves and those for other perils.

Simulation of Earthquake Events

Event Set

An event set was generated using the parameters outlined by Gaull et al., taking into account the maximum event magnitudes given in the paper and assuming a minimum earthquake magnitude of ML 5.0 for events capable of generating building damage.

The source characterisation model is based on the Gutenberg and Richter (1954) statistics:

Log10 N = A - B ML (8.1)

where:

N is the number of earthquakes within a time interval with magnitudes greater than or equal to ML

ML is Richter magnitude

A describes the overall activity rate (10^A is the number of earthquakes with ML > 0), and

B is a scaling parameter describing the drop off in earthquake frequency with increasing magnitude

The events were simulated by sampling on a uniform U[0,1] distribution and transforming the simulated samples into earthquake magnitudes. The transformation is

M = Min - ln[1 - F + F e^(-b(Max-Min))] / b (8.2)

where:


Min = the minimum magnitude in each polygon (set at 5.0)

Max = the maximum magnitude in each polygon

F = the random sample from U[0,1] and

b = the natural logarithm of 10 multiplied by the parameter B given in Gaull et al.
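Equation 8.2 can be checked with a short simulation; it is the inverse CDF of the exponential magnitude distribution truncated between Min and Max. The B value used below is illustrative, not a Gaull et al. parameter.

```python
import math
import random

def sample_magnitude(f, m_min, m_max, b):
    """Inverse-CDF sample from the truncated exponential magnitude
    distribution, Equation 8.2: M = Min - ln[1 - F + F e^(-b(Max-Min))] / b."""
    return m_min - math.log(1.0 - f + f * math.exp(-b * (m_max - m_min))) / b

random.seed(0)
b = math.log(10) * 0.9   # b = ln(10) x B, with B = 0.9 as an
                         # illustrative value only
mags = [sample_magnitude(random.random(), 5.0, 7.0, b) for _ in range(1000)]
print(min(mags), max(mags))  # all samples lie within [5.0, 7.0]
```

Note that F = 0 maps to the minimum magnitude and F = 1 to the maximum, so the simulated magnitudes are correctly bounded by the polygon's Min and Max.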

Hazard Measurement

The ELEMENTS software was modified to incorporate the Gaull et al. attenuation functions. These functions are based on Australian data and vary for each of the following regions:

• Western Australia

• South-eastern Australia

• North-eastern Australia

• Indonesia

The attenuation relationships take the form

Y = a e^(b ML) / R^c (8.3)

where:

Y = peak ground acceleration (PGA), measured in m/s2. (PGA is the acceleration experienced by a solid object embedded in the soil surface – a tombstone for example.)

ML = the Richter magnitude

R = the hypo-central distance from the source to the location of hazard measurement, and c is a regional attenuation exponent

Parameters a and b vary by polygon region
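Equation 8.3 can be exercised with a short sketch. The parameter values below are illustrative placeholders, not the regional coefficients published by Gaull et al.

```python
import math

def pga(ml, r_km, a, b, c):
    """Peak ground acceleration from Equation 8.3: Y = a e^(b ML) / R^c.
    The a, b, c values passed below are purely illustrative; the actual
    coefficients vary by source region in Gaull et al."""
    return a * math.exp(b * ml) / r_km ** c

# PGA grows with magnitude and decays with hypo-central distance.
near = pga(5.6, 20.0, a=0.025, b=1.1, c=1.3)
far = pga(5.6, 100.0, a=0.025, b=1.1, c=1.3)
print(near > far)  # True: shaking decreases away from the source
```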

Hazard Results

The model results for hazard were determined for the 50 year 10% threshold of PGA for comparison with the results shown in the Gaull et al. paper. The resulting map of PGA is shown as Figure 8.3.


Figure 8.3 – Modelled 50 year 10% PGA in (m/s2)

A visual comparison with the PGA map in Gaull et al. shows that the analysis produces a roughly equivalent distribution of PGA across Australia. It is concluded that the event set provides an adequate representation of the probabilistic hazard in Gaull et al.

The Australian design code AS1170.4-199349 uses the same results as GSHAP, which differ significantly in some regions from those of the Gaull et al. methodology. It is expected that an analysis using the GSHAP model instead of the Gaull et al. model would lead to different overall PML results.

Modification for Soil Characteristics

The hazard produced by an earthquake is affected at each surface location by the structure of the earth through which the energy waves have passed, including the shallow geology and surface soil characteristics. Soil characteristics were determined for the Australian continent using geomorphic map data from the Generalized Geology of the World and Linked

49 Standards Association of Australia, 1993, “Minimum Design Loads on Structures, Part 4: Earthquake Loads”, AS1170.4-1993

Databases50. For each of these individual soil characteristics, amplification factors may be determined as a function of the incident PGA measures. By adjusting the PGA at each location, the net PGA affecting each building property is determined.

The soil factors used in this thesis were provided by Impact Forecasting LLC and remain the property of Impact Forecasting LLC.

Loss Calculations

Damage Functions

Damage functions developed by Impact Forecasting LLC were used to determine losses for each location as a function of post-soil-adjustment PGA. The damage functions, developed for analysis in the United States of America, represent a wide range of construction age, material, design building code, region and lateral force resisting system.

Representative Australian damage functions for purposes of this thesis were chosen from this database based on a review of the Australian Building Code specifications outlined in AS1170.4-1993. The review, performed in collaboration with Dr Francisco Javier Perez (Impact Forecasting), resulted in the selection of eight damage functions for Australian building and contents. The design code is representative of a region with minimal earthquake hazard, as compared to more hazardous regions such as the state of California or New Zealand.

The damage functions represent the various combinations of construction material (wood or masonry), building age (pre- and post- 1980) and coverage type (building or contents). However, the construction material of the building is ignored for purposes of determining contents damage, so there are effectively six different damage functions used.

The damage functions are based on PGA, and contain sixteen points for each of mean damage and standard deviation. The damage from hazard measurements of less than a PGA of 0.05 g (0.49 m/s2) was assumed to be zero.

50 Geological Survey of Canada, 1995, “Generalized Geology of the World and Linked Databases”, Open File 2915d

A sample of the damage functions for building damage for a wood frame building is shown in Figure 8.4.

[Figure: building damage (0–100%) against PGA (0.0–5.0 m/s2), showing mean, standard deviation and limit curves]

Figure 8.4 - Building Damage Mean and Standard Deviation
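The piecewise-linear lookup implied by these point-based damage functions can be sketched as follows. For brevity the curve below uses five illustrative points rather than the sixteen proprietary Impact Forecasting values.

```python
# Sketch of a piecewise-linear damage function lookup: ascending
# (PGA, mean damage) points, with zero damage below 0.49 m/s^2.
# The points below are illustrative only.

THRESHOLD = 0.49  # m/s^2, i.e. 0.05 g

def mean_damage(pga, points):
    """Interpolate mean damage from an ascending list of (PGA, damage) points."""
    if pga < THRESHOLD:
        return 0.0
    if pga >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= pga <= x1:
            return y0 + (pga - x0) * (y1 - y0) / (x1 - x0)
    return 0.0

dmg_points = [(0.49, 0.0), (1.0, 0.02), (2.0, 0.10), (3.5, 0.35), (5.0, 0.70)]
print(mean_damage(0.3, dmg_points))           # 0.0, below the threshold
print(round(mean_damage(1.5, dmg_points), 4)) # 0.06, between 0.02 and 0.10
```

The same lookup applies to the standard deviation points, which feed the sampling used in the deductible and validation calculations.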

Note on Spectral Acceleration versus Peak Ground Acceleration

The hazard measurement used in this thesis is based on PGA. However, more comprehensive damage analysis may be performed using Spectral Acceleration (SA), which takes into account the building “period”. The building period is the inverse of the natural frequency of the structure, determined from an analysis of building rigidity, and is closely correlated to the building height. In other words, a taller building sways in an earthquake, while a shorter building vibrates rapidly. A more comprehensive analysis, if more detailed exposure information were available, would take building period into account.

Demand Surge and PMLs

The maximum demand surge for earthquake was arbitrarily set at 40% of the total industry loss. The PML analysis was based upon the industry exposure developed in Chapter 4. Event losses were ranked and the return period calculated for each event. Demand surge factors were applied to each event as a function of return period, based on interpolated values between the numbers shown in Table 8.2 below.


Return Period (years) Demand Surge Factor

Up to 10 0%

10 0%

20 5%

50 12%

100 18%

250 25%

500 32%

1000 40%

Over 1000 40%

Table 8.2 – Demand surge factors for earthquake

No further adjustments by geographic region were made.

Hazard Validation against Major Australian Earthquakes

As described above, the damage functions chosen to represent Australian design standards were based on a theoretical engineering analysis of buildings rather than actual claims data. To validate such assumptions, modelled losses need to be compared in detail with actual claims information for a number of historic earthquake events. The low frequency of damaging earthquakes in Australia means that only very limited information is available for validation purposes. In fact, only six earthquakes in recent history have caused losses of greater than $5 million. Details of each of these events are summarised in Table 8.3 below, listed in chronological order.

Location Year Magnitude

Adelaide 1954 5.4

Robertson 1961 5.6

Meckering 1968 6.9

Cadoux 1979 6.2

Newcastle 1989 5.6

Ellalong 1994 5.4

Table 8.3 – Most Damaging Australian Earthquakes 1950-2004


Isoseismal maps, showing the intensity of shaking of each earthquake across the area of effect, were available for each of these six earthquakes51. The unit of measurement used in these maps is the Modified Mercalli Intensity (MMI), a measure of the effect of the earthquake on surroundings, rather than an instrumental measure of ground acceleration.

The Gaull et al. paper provides formulae for calculation of MMI as a function of earthquake magnitude and distance from the fault to each location. Three of these events were modelled individually and the MMI maps compared to the isoseismal records (Figure 8.5). The intention is not to revalidate the Gaull et al. methodology but rather to gain some rough indication of model performance.

51 University of Western Australia, Isoseismal Maps, http://www.seismicity.segs.uwa.edu.au/page/45009, The maps were redrawn from maps in The Isoseismal Atlas of Australian Earthquakes, Geoscience Australia

[Figure panels: actual and modelled MMI maps for Adelaide 1954 (5.4 ML), Meckering 1968 (6.9 ML) and Newcastle 1989 (5.6 ML), with MMI contour bands from 0–1.5 up to 7.5–12]

Figure 8.5 – Comparison of Actual and Modelled MMI

The comparison raises a number of issues:

• The model predicts symmetrical isoseismals, whereas the observed isoseismals are not always symmetrical. This is due in part to the effects of local soil conditions


• The model underestimates ground shaking from the Adelaide earthquake, overestimates for Meckering and shows a reasonable correspondence with measurements of the Newcastle earthquake.

Despite these differences, the Gaull et al. model will be adopted for our purposes, given that there is no practical alternative.

Loss Validation against the Newcastle Earthquake

Losses from the Newcastle event of 1989 were estimated by translating the recorded MMI values to PGA values, based on the relationships shown by Gaull et al. Losses were generated for 500 simulations using these PGA figures, allowing for uncertainty about the mean damage functions by incorporating the standard deviation of loss at each level of PGA, as shown in Figure 8.4. The resulting distribution of simulated losses is shown in Figure 8.6.

[Figure: histogram of number of simulations against total insured loss ($0m–$6,000m), simulated versus actual]

Figure 8.6 – Simulated losses from Newcastle Earthquake of 1989

The simulated losses range from $1.91 billion to $4.15 billion, while the actual losses, scaled to December 2003 dollars and exposure, are estimated as $2.20 billion. The validation shows that the model results are of the same order of magnitude as the actual losses, but the uncertainty is large.


The actual loss estimate of $2.20 billion does not consider underinsurance or the lack of insurance for some properties in the Newcastle area. If underinsurance is assumed to be 20%, then the actual losses would lie close to the centre of the simulated range.

It should also be noted that incorporating additional uncertainties in the attenuation relationship and the magnitude estimation would result in a much wider dispersion of possible outcomes. This being the case, this comparison is interpreted as being reasonable.

PML Results

The full probabilistic analysis using the entire event set was run against the industry exposure database developed in Chapter 4. Table 8.4 shows the PML results from the analysis of Australian earthquake exposures. Considering the relatively low level of seismic activity in Australia compared to some other countries, the earthquake risk is considerable.

Return Period (years)   Gross Insured Loss ($m)   Gross Insured Loss with Demand Surge ($m)

10,000 107,768 150,875

5,000 74,737 104,632

2,000 41,797 58,515

1,000 30,162 42,226

500 12,419 16,353

250 5,980 7,639

100 2,445 2,884

50 1,300 1,448

20 428 446

10 148 150

Average Annual Loss 176 220

Table 8.4 – Earthquake PMLs


Chapter 9 - Other Natural Hazards

This chapter provides an overview of the statistical methodology for the simulation of losses to Australian property portfolios from bushfire, flood, hail and severe thunderstorms, collectively termed Other Perils to distinguish them from tropical cyclones and earthquakes. The methodology will be used in conjunction with the stochastic models for cyclone and earthquake already discussed to obtain complete loss distributions for all natural perils afflicting insured property assets in Australia.

Methodology

Although some geophysical models exist for the other perils, it will be some years before they are available for general use on Australia-wide portfolios. Leigh52 has developed estimates of hail losses to insured residential properties for Sydney and Brisbane, and of riverine flood losses for major catchments on the eastern seaboard. Based on a statistical analysis of past events, McAneney53 simulated Australia-wide property losses from bushfires by combining the annual average frequency of bushfire losses with a distribution of loss size conditional upon a loss occurring.

Because flood, hail and severe thunderstorms are frequent causes of insured loss, much more historical data are available than for cyclone and earthquake losses. There is therefore some justification for employing an approach based on statistical analyses of actual historic losses as a means of determining loss distributions for these perils. Statistical approaches should provide a reasonable guide to low return period PMLs. It is these that are associated with retained losses; in other words, they fall below reinsurance thresholds and affect a company's working capital. Longer return period PMLs are likely dominated by cyclones and earthquakes.

The methodology used in this chapter is as follows:

• Determine a suitable database of historic industry losses

52 Leigh, R., 2003, “Estimating Sydney PMLs – Which is the most important natural hazard?”, Risk Frontiers quarterly newsletter, June 2003 Volume 2, Issue 4

53 McAneney, K.J., 2005, “Australian Bushfire: Quantifying and Pricing the Risk to Residential Properties”.

• Index industry losses to current dollar amounts

• Rank the current dollar/current exposure losses and determine a statistical fit to the distribution, extrapolating out to higher return periods

• Generate an event set from the distribution

• Determine PMLs for Australia for Other Perils
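The ranking step in the methodology above can be sketched numerically: with T years of record, the loss of descending rank k has an empirical annual return interval of roughly T/k. The loss values below are illustrative; the 37-year record length corresponds to January 1967 through December 2003.

```python
def empirical_ari(losses, years_of_record):
    """Empirical annual return interval for each historic loss: rank the
    indexed losses in descending order; rank k over T years gives
    ARI ~ T / k."""
    ranked = sorted(losses, reverse=True)
    return [(loss, years_of_record / rank)
            for rank, loss in enumerate(ranked, start=1)]

# Illustrative indexed losses in $m over a 37-year record.
pairs = empirical_ari([314.0, 120.0, 707.0, 55.0], 37.0)
print(pairs[0])  # largest loss (707.0) has an ARI of 37 years
```

A distribution fitted to these (loss, ARI) pairs can then be extrapolated to the higher return periods needed for the combined PML analysis.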

Loss Data

As described in Chapter 3, the Insurance Council of Australia maintains a database of losses to the insurance industry since 1967. An electronic version of this database is available at the Insurance Disaster Response Organisation (IDRO) website54. This database records the total insurance industry loss from each event, in millions of dollars, along with brief details describing the event, including the date or time period and the regions affected. For most years in the database, only losses over ten million dollars were recorded. For some events the losses have been inflated to current dollars at the date of each version of the list.

Upon review of a number of different versions of this list, published at different times over the last ten years, a “clean” database was produced (Appendix A). Cleaning of the data included:

• Inclusion of the most recent version of the loss information

• Addition of date fields where some were missing

• Correction of apparent typographical errors

• Addition of region affected where missing

• Classification of each event to a specific peril type

As mentioned already, the 1974 flooding in Brisbane from Cyclone Wanda and the 1998 flooding in Townsville due to Cyclone Sid were classified as floods rather than cyclones.

54 Insurance Disaster Response Organisation, 1996, “Major Disasters Since June 1967”, Available on the internet at http://www.idro.com.au/disaster_list

The “clean” list included a total of 149 events from January 1967 through December 2003.

Of this list, losses due to the following perils were included:

• Cyclone – 29 events

• Earthquake – 4 events

• Bushfire – 16 events

• Flood – 28 events

• Hail – 20 events

• Storm – 52 events

Development of Current Day Losses

Up to December 1996, losses in the ICA database had already been corrected for inflation. This is not the case for more recent versions of the report, including the latest list available at the end of December 2003. The Consumer Price Index (CPI) (Figure 9.1) was used to inflate all losses to values as at December 2003. Losses listed with event dates prior to December 1996 were inflated from December 1996, while later events were inflated from their date of occurrence.
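The indexation rule just described can be sketched as follows. The CPI values used are illustrative placeholders, not the actual ABS index numbers.

```python
# Losses dated before December 1996 were already in December 1996
# dollars, so they are inflated from 1996; later losses are inflated
# from their event year. CPI values below are illustrative.

CPI = {1996: 120.3, 1999: 124.1, 2003: 142.8}

def to_dec_2003(loss, event_year):
    base_year = 1996 if event_year < 1997 else event_year
    return loss * CPI[2003] / CPI[base_year]

print(round(to_dec_2003(100.0, 1974), 1))  # inflated from 1996, not 1974
print(round(to_dec_2003(100.0, 1999), 1))  # inflated from its event year
```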


[Figure: Consumer Price Index (100–150) from 1992 to 2003]

Figure 9.1 – Consumer Price Index

Increase to Current Exposure

Assuming that the size of each industry loss is proportional to the total property exposure in the region implies that a larger loss would result if the same event were to occur today. Therefore, in order to obtain a true estimate of current potential losses, past losses should also be corrected for the changing population as well as inflation.

To do this, the population in the region where the event occurred was first determined, both at the time of the event and as at December 2003. December 2003 populations were estimated from an extrapolation, by region, of the population change from 1997 through to 2001. For most events the region was defined as a particular city or town. For some larger events where more than a particular city locality was affected, the change in the entire state population was used to inflate the historic losses.
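A minimal sketch of this two-step adjustment follows, assuming a simple geometric extrapolation of the 1997-2001 regional population trend; all figures are invented for illustration:

```python
# Combined adjustment: inflate to Dec 2003 dollars, then scale by the growth
# in regional population between the event and Dec 2003. All inputs invented.
def adjust_loss(loss_m, cpi_event, cpi_2003, pop_event, pop_2003):
    """Historic loss restated in current dollars and current exposure."""
    inflated = loss_m * cpi_2003 / cpi_event   # price inflation
    return inflated * pop_2003 / pop_event     # exposure (population) growth

# Dec 2003 population extrapolated geometrically from the 1997-2001 trend.
def extrapolate_pop(pop_1997, pop_2001, target_year=2003):
    annual_growth = (pop_2001 / pop_1997) ** (1 / 4)  # rate over the 4 years
    return pop_2001 * annual_growth ** (target_year - 2001)

pop_2003 = extrapolate_pop(3_800_000, 4_000_000)
print(round(adjust_loss(500.0, 120.0, 141.0, 3_500_000, pop_2003), 1))
```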

This methodology ignores modern mitigation strategies and building code improvements that would be expected to reduce some losses particularly in the case of cyclone wind damage.

The historic loss events inflated to current dollars and in terms of current exposures are listed in Appendix A. Key statistics about these inflation and exposure adjusted losses are shown in Table 9.1.


Peril Count Total Losses ($m) Average Loss ($m) Maximum Loss ($m)

Bushfire 16 1,486 93 411

Cyclone 29 4,919 170 2,339

Earthquake 4 2,324 581 2,203

Flood 28 2,264 81 707

Hail 20 4,412 221 2,120

Storm 52 1,970 38 314

Total 149 17,375 117 2,339

Table 9.1 – Event Losses inflated to December 2003 dollars and exposures

The interesting points to note from this table are:

• Cyclones have contributed the highest total cost of losses, with hail a close second

• If storm and hail were combined, then thunderstorms would represent the largest losses

• Earthquakes have the largest event losses on average

• The maximum loss in current exposure and dollars was Darwin’s Cyclone Tracy in 1974, followed by the Newcastle Earthquake of 1989 and the Sydney Hailstorm of 1999.

The analysis below uses the data only for bushfires, floods, hail and storms, as cyclones and earthquakes have been modelled separately as discussed in the preceding chapters.


Curve Fitting

In the earlier years of data collection, smaller events were not recorded. The threshold for inclusion appears to have changed over time, but no record of these changes exists. To safeguard against problems with the completeness of the dataset, losses from all of the Other Perils were plotted against the annual return interval (ARI) using the following selections of data:

• For events of over $100 million, all years were used (37 years)

• For events between $20 million and $100 million, only data from 1974 onwards was used (30 years)

• For events between $10 million and $20 million, only data from 1985 onwards was used (19 years)

• For events with losses less than $10 million, only data from 1995 onwards was used (9 years)

The ranges of years are shown in Figure 9.2.
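The completeness rule in the four bullets above can be sketched as a filter; the event data here is hypothetical:

```python
# Sketch of the completeness-window rule: an event is kept only if its loss
# band was being reliably recorded in its event year. Thresholds and first
# years are those given in the text above.
WINDOWS = [          # (minimum loss $m, first year of reliable data)
    (100.0, 1967),
    (20.0, 1974),
    (10.0, 1985),
    (0.0, 1995),
]

def in_window(loss_m: float, year: int) -> bool:
    for threshold, first_year in WINDOWS:
        if loss_m >= threshold:
            return year >= first_year
    return False

# Hypothetical events: (loss $m, year)
events = [(250.0, 1970), (50.0, 1970), (50.0, 1980), (5.0, 1990), (5.0, 1999)]
kept = [e for e in events if in_window(*e)]
print(kept)   # the 1970 $50m and 1990 $5m events fall outside their windows
```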


Figure 9.2 – Ranges of years used for model curve fitting

The curve was plotted in terms of the natural logarithm of the losses versus the natural logarithm of the return period. The data, when presented in this manner, showed a pattern with a pronounced “kink” at approximately the $8 million point on the curve, as shown in Figure 9.3.


Figure 9.3 – Event Loss Data for Bushfire, Flood, Hail and Storms

A least squares regression was undertaken on the data using the Microsoft Excel “Trendline” feature on the two sections of data on either side of the $8 million kink.

A fit to the points with losses less than $8 million was made using a linear fit as shown in Figure 9.4.


Figure 9.4 – Fit on losses less than $8 million

The relationship from this fit is

ln(Loss in $m) = 13.822 + 7.407 × ln(ARI)   (9.1)

For the losses beyond $8 million, a fit was made using a third-order polynomial. At the upper end, two further points were added based upon some admittedly subjective criteria:

• the 1,000 year industry loss be of the order of $5-6 billion (approximately two and a half times the Sydney 1999 hail storm)

• the 10,000 year industry loss be of the order of $10 billion (approximately 5 times the Sydney 1999 hail storm)

A comparison of these two estimates of loss is made to simulation model results from Leigh below.

After adding these two points and performing the fit, the fitted curve (shown as Figure 9.5) showed an acceptable goodness of fit and produced the following results at the extreme end of the curve which correspond with the subjective estimates:


• A 1,000 year industry loss of $5.953 billion

• A 10,000 year industry loss of $9.870 billion


Figure 9.5 – Fit on losses greater than $8 million

The relationship from this fit is

ln(Loss in $m) = 4.09 + 1.13 × ln(ARI) − 0.081 × (ln(ARI))^2 + 0.002 × (ln(ARI))^3   (9.2)
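Taken together, Equations 9.1 and 9.2 define a piecewise loss curve, with the $8 million kink deciding which branch applies. A minimal sketch of evaluating it:

```python
import math

def other_perils_loss(ari_years: float) -> float:
    """Loss ($m) at a given annual return interval, per Eqs 9.1 and 9.2."""
    ln_ari = math.log(ari_years)
    small = math.exp(13.822 + 7.407 * ln_ari)   # Eq 9.1, below the $8m kink
    if small < 8.0:
        return small
    # Eq 9.2: third-order polynomial fit above the kink
    return math.exp(4.09 + 1.13 * ln_ari
                    - 0.081 * ln_ari ** 2
                    + 0.002 * ln_ari ** 3)

# Rounded coefficients reproduce the tabulated PMLs to within about 1%.
for rp in (100, 250, 1000, 10000):
    print(rp, round(other_perils_loss(rp)))
```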


Comparison to Existing Research

The Sydney Hail event of 1999, which has inflated losses of $2.12 billion, translates using the above model into a loss with a return period of approximately 81 years. Blong et al55 estimate that the household loss from this event can be expected, on average, more frequently than once in a hundred years. This suggests that the PML curve is of the correct order of magnitude at this ARI.

Leigh showed PMLs for hail and flood for a hypothetical portfolio of residential and motor exposure in Sydney. The portfolio consisted of 90,000 houses in ICA Zones 41, 42 and 43. The industry exposure developed in this thesis consists of 1,261,689 houses in these zones. In addition, the exposure includes commercial property as well as residential and motor risks. The PMLs from Leigh were scaled to allow direct comparisons with the results developed above. The comparison is shown in Table 9.3.

Return Period 250 1,000 10,000

Leigh - Sydney Hail 160 248 370

Leigh - Sydney Flood 78 140 212

Scaled - Sydney Hail 4,282 6,637 9,902

Scaled - Sydney Flood 2,088 3,747 5,674

Proposed Model 3,622 5,953 9,870

Table 9.3 – Comparison of Proposed Model PMLs to Leigh (2005)

The model results are comparable to the scaled Leigh results at each of the ARIs shown above. However, this approach ignores the fact that large losses could also arise from events in the other major cities; a Brisbane flood is a case in point. The Other Perils PMLs may therefore understate the true risk.

55 Blong, R.J., Leigh, R., Hunter, L. and Chen, K., 2001, “Hail and Flood Hazards – Modelling to Understand the Risk”, Aon Re Hazards & Capital Risk Management Conference Series, edited by Britton, N.R. and Oliver, J., 25-39

Event Set Derivation

An event set was generated from the two selected loss curves by choosing 1,000 points evenly spaced (in logarithmic space).
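A sketch of this event set generation follows, assuming ARI bounds of 0.1 and 10,000 years (the bounds are an illustrative choice) and a stand-in `loss_curve` in place of the fitted equations:

```python
import math

def loss_curve(ari):          # placeholder for the fitted Eqs 9.1/9.2
    return math.exp(4.09 + 1.13 * math.log(ari))

# 1,000 return periods evenly spaced in log space, each mapped to a loss.
n = 1000
lo, hi = math.log(0.1), math.log(10_000)
aris = [math.exp(lo + i * (hi - lo) / (n - 1)) for i in range(n)]
event_set = [(a, loss_curve(a)) for a in aris]
print(len(event_set), round(aris[0], 3), round(aris[-1]))
```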

The resulting event set can be combined with the event losses from the cyclone and earthquake model described in previous chapters to determine combined PML curves for all significant natural perils in Australia.

Demand Surge

Because the model for Other Perils was based on a statistical analysis of actual losses, demand surge is already included and no further adjustment is required.

Industry Losses

Based on this analysis, the PMLs for the Australian industry by return period have been calculated and are shown in Table 9.2.

Return Period PML ($m)

10,000 9,870

5,000 8,780

2,000 7,195

1,000 5,953

500 4,742

250 3,622

100 2,360

50 1,607

20 881

10 514

Average Annual Loss 349

Table 9.2 – Industry PMLs for Other Perils


Chapter 10 – Man Made Disasters

This chapter outlines the Australian Terrorism Insurance Act 2003 (the “Act”) and the issues insurers need to consider regarding the Act. It outlines the reasons why models for terrorism differ from models for natural perils and identifies model components where similar methodologies can be applied. Finally, it briefly sketches ways in which models of terrorism can be used to improve the management of terrorism risk.

Definition of Terrorism

The Macquarie Dictionary defines terrorism as “the use of terrorising methods; the state of fear and submission so produced; a method of resisting a government or of governing by deliberate acts of armed violence”.

The key points of definition in the Act are:

• “action or threat of action” …

• “intention of advancing a political, religious or ideological cause”

• “intention of coercing or influencing by intimidation” … or

• “intimidating the public or a section of the public”

According to the Act, in order for an event to be classified as a “terrorism event”, the Minister, after consulting the Attorney General, must declare it so.

The US Terrorism Act has a similar definition, but requires the agreement of the Treasury Secretary, the Secretary of State and the Attorney General. It also specifies that events must be international rather than domestic acts.

The Australian Terrorism Insurance Act 2003

The Act, which came into effect on 1 July 2003, set in place the Australian Reinsurance Pool Corporation (the “Pool”), which provides a basis for cover for acts of terrorism on Australian soil. The Act requires that insurers provide cover to commercial insureds on a compulsory basis wherever cover is provided for property or business interruption. The aim of the Pool is to provide a facility for terrorism risk to be transferred until the insurance industry becomes able to insure this risk.

Insurers may join a pooling scheme that will act as reinsurance for insurers writing terrorism cover. The Pool is funded through contributions from insurers where contributions are based on written premiums at proportional rates that vary by postcode. In other words, the written premium collected by insurers is multiplied by a series of factors to determine the amount to be contributed to the Pool. The rates, assuming no event happens, are 2% for exposures in rural postcodes, 4% in major cities and 12% in Central Business Districts, each contribution rate tripling after a terrorist attack.
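The contribution calculation can be sketched as follows, using the rates quoted above; the premium split by zone is invented:

```python
# Pool contribution sketch: written premium times a rate that depends on the
# postcode class (2% rural, 4% city, 12% CBD, per the text), with each rate
# tripling after a declared terrorism event.
RATES = {"rural": 0.02, "city": 0.04, "cbd": 0.12}

def pool_contribution(premiums_by_class: dict, post_event: bool = False) -> float:
    multiplier = 3 if post_event else 1
    return sum(premium * RATES[zone] * multiplier
               for zone, premium in premiums_by_class.items())

book = {"rural": 10_000_000, "city": 25_000_000, "cbd": 5_000_000}
print(pool_contribution(book))             # 200k + 1,000k + 600k = $1.8m
print(pool_contribution(book, post_event=True))
```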

If a terrorism event occurs, as defined by the Act, insurers will retain a maximum of $1 million per insurer or $10 million for the industry. Above this amount, the Pool will pay claims, up to its total balance, which is expected to grow at around $100 million per year to $300 million. Beyond this, $1 billion of commercial loan facilities and $9 billion of government backed loan facilities are in place to take the total event remuneration from the scheme to a maximum of $10.3 billion. The growth of the Pool components, assuming funds are accumulated each six months, is shown graphically against time in Figure 10.1.

[Figure layers, bottom to top: retention of $10m industry / $1m per insurer; Pool funds (built up over three years to $300m); Commercial Loan Facility ($1bn); Government Loan Facility ($9bn); horizontal axis: time.]

Figure 10.1 – Structure of Australian Terrorism Reinsurance Pool
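The layered response shown in Figure 10.1 can be sketched as a simple bottom-up allocation of an event loss; the capacities are the amounts quoted in the text:

```python
# Sketch of the scheme's layered response to a single event.
LAYERS = [                     # (name, capacity $m)
    ("industry retention", 10),
    ("pool funds", 300),
    ("commercial loan facility", 1_000),
    ("government loan facility", 9_000),
]

def allocate(event_loss_m: float) -> dict:
    """Split an event loss ($m) across the scheme's layers, bottom up."""
    remaining, split = event_loss_m, {}
    for name, capacity in LAYERS:
        take = min(remaining, capacity)
        split[name] = take
        remaining -= take
    split["uncovered"] = remaining   # anything above the scheme's total
    return split

print(allocate(2_500))   # exhausts retention, pool and commercial loans
```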

An insurer must consider a number of issues:

• They must provide the cover if they write commercial business


• Insurers are required to retain the first $1 million of claims incurred in any terrorism event

• Insurers are required to fund the scheme through contributions that are a function of written premium, varying by postcode, so this data needs to be collected and maintained

• The premium to be charged to the insureds is not defined (although it can be assumed that the Pool contribution rates could be a suitable base rate)

• Insurers are not required to be members of the Pool

• The Pool does not operate as a full risk transfer, as loss amounts above the Pool funds are simply loans.

Quantification of the terrorism risk helps insurers to determine:

• their relative terrorism exposure compared with their peers;

• their maximum event loss;

• how much to charge for the retention;

• whether or not to retain the risk; and

• if the risk is to be transferred, whether to join the Pool or use reinsurance instead.

In most cases, losses of Australian insurers would be ceded to the Pool. However, terrorism events must be clearly defined, as the Act would not apply for losses below the minimum retention, for classes of business not covered by the Act and for insurers who choose not to join the Pool, selecting instead to use international reinsurance.

Models

At the time of writing, providers of commercial modelling software to the insurance industry, in particular the US-based companies RMS, EQE and AIR, have developed event-based simulation models for US terrorism risk. It is unlikely that they will extend these models to Australia within at least the next few years.

The remainder of this chapter describes the Aon Re Australia Terrorism Model (“the model”), developed primarily by the author of this thesis, and gives examples of how the model could be used by a sample insurer (Company XYZ) to help understand and manage its terrorism risk.

Natural versus Man-made Disasters

Natural disasters have been discussed extensively already in this thesis. By contrast, man-made disasters may include large fires, building collapses, oil spills, gas leaks, large-scale theft or heist, transportation accidents, explosions and terrorism. For the majority of these man-made events, the cause is accidental. Terrorism differs from the others in being not only man-made but also a deliberate act.

Any catastrophe model can be thought of as having three components: the where, the what and the when. As shown in Table 10.1, the where and what can be defined in a similar way to those for a model of natural perils.

Where?
  Tropical Cyclone: Category 4 cyclone crosses a section of the Queensland coast
  Terrorism: Two ton truck bomb detonated outside hotel

What?
  Tropical Cyclone: Wind speeds at each distance from the eye cause given levels of damage, leading to financial and human loss
  Terrorism: Shock waves and fire cause damage at each distance, leading to financial and human loss

When?
  Tropical Cyclone: Based on historic records and scientific analysis, this event is expected once every 250 years
  Terrorism: ??? Human behaviour ???

Table 10.1 – Where/What/When of modelling

However, specifying the when, in other words, accurately determining the event probability is extremely difficult for terrorism. This follows for a number of reasons:


1. There is inadequate data from which to estimate statistics and trends accurately. Although over the last twenty years there have been on average more than 400 terrorism events globally each year56, such events have rarely occurred in Australia.

2. Terrorism is intentional and target choice may change dynamically in response to changes in geopolitics and actions by security forces.

3. The deliberate nature of terrorism means that it cannot be modelled as a Poisson process assuming independent events with a fixed average probability.

Modelling the Where

The database of potential terrorism targets used in this study’s terrorism model is a selection of locations related to a wide variety of industries and uses. It employs forty target types in five major categories. These are listed in Table 10.2.

Commercial: Business Districts; Financial Institutions; Industrial Facilities/Mines/Factories; Listed Companies; Luxury Hotels & Resorts; Media Company Locations; Shopping Malls; Skyscrapers (All)

Infrastructure: Dams; Medical Facilities; Oil Refineries; Oil/Gas Production; Post Offices; Power Plants; Telecommunication Towers; Water Treatment Facilities

Government: Aerospace Installations; Embassies/Consulates; Government Buildings; Military Installations; Police Headquarters; Prisons; Scientific Installations; US Interests

Transport/Education: Airports; Bridges; Bus Stations; Educational Facilities; Museums; Ports; Railway Stations; Tunnels

Public: Amusement Venues; Casinos; Cinemas; Indoor/Outdoor Venues; Night Club Districts; Places of Worship; Sports Venues; Theatres and Concert Halls

Table 10.2 – Terrorism Target database

Although in theory every building and location in Australia is a potential target, there needs to be a realistic subset of plausible targets for modelling risk. Terrorists do not have limitless resources. The number of targets in the database was restricted to 2107, selected as a compromise between the need to obtain convergence in spatial and modelling terms while at

56 Secretary of State, 2000, “Patterns of Global Terrorism”, U.S. Department of State

the same time achieving reasonable model run-times. The spatial density of targets is mapped in Figure 10.2.


Figure 10.2 – Terrorism Risk Database Density

(Red=High Density, Grey=Low Density, White=No Targets Nearby)

For obvious reasons, target density is highest in the cities by virtue of population and the presence of iconic buildings. Grey areas in the Northern Territory and other outback locations are military bases.

Following selection of the database of potential targets, these locations were geo-coded to determine the longitude and latitude of each. For some natural perils, such as tropical cyclones, there is little benefit in geo-locating every building: cyclones are physically large, covering many postcodes, and the imposed wind speed should not vary greatly within a postcode. However, when modelling terrorism events such as car bombs, a few metres can make the difference between life and death, so there is no choice but to deal with risks on a street-address basis. In this sense, terrorism modelling more closely resembles flood modelling.

For purposes of analysis in this thesis, an exposure file for an artificial company (Company XYZ) was generated. The exposure represents a broad allocation of commercial risks across

randomly selected locations in Australia. The analysis below uses this exposure and makes comparisons to industry exposure to terrorism risk.

The target database can now be used to determine a number of measures of a company’s exposure. One such application is to examine the distribution of company exposure relative to terrorism targets against the industry exposure, as shown in Figure 10.3. This figure shows the proportion of total exposure for the company and the industry at each distance from major skyscrapers. In this example, Company XYZ has a slightly higher concentration near targets than the industry over the first eight kilometres from each potential target and then less at longer distances. It is therefore more at risk than the market for this category of target.
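A sketch of the binning behind Figure 10.3, assuming distances to the nearest target have already been computed; the portfolio is invented:

```python
# Bin a portfolio's exposure by distance to the nearest target, as in
# Figure 10.3. Distances here are precomputed km to the nearest target of a
# given category; all values are invented for illustration.
def exposure_by_distance(risks, bin_km=5, max_km=50):
    """risks: list of (nearest_distance_km, sum_insured). Returns proportions."""
    total = sum(si for _, si in risks)
    bins = [0.0] * (max_km // bin_km)
    for dist, si in risks:
        if dist < max_km:
            bins[int(dist // bin_km)] += si
    return [b / total for b in bins]

portfolio = [(1.2, 400), (3.8, 250), (9.5, 150), (22.0, 200)]
props = exposure_by_distance(portfolio)
print([round(p, 2) for p in props[:3]])    # the 0-5 km bin holds 65% of exposure
```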


Figure 10.3 – Company XYZ Target Relativity Comparison – Major Skyscrapers

Similar calculations can be made for each of the 40 different categories of target. The weighted average distance to each category of target can be used to determine a single measure for each target category, and an average for the portfolio to all targets.

The Terrorism Risk Index (TRI) is defined here as one hundred times the average distance to nearest target for the industry divided by the average distance to nearest target weighted by company exposure.
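Under this definition, the TRI calculation reduces to a ratio of two exposure-weighted mean distances; the portfolios below are invented:

```python
# Terrorism Risk Index sketch, per the definition above: 100 times the
# industry's weighted mean distance to the nearest target, divided by the
# company's. A TRI above 100 means the company sits closer to targets than
# the industry average. All numbers are illustrative.
def weighted_mean_distance(risks):
    """risks: list of (nearest_distance_km, exposure)."""
    total = sum(w for _, w in risks)
    return sum(d * w for d, w in risks) / total

def tri(company_risks, industry_risks):
    return 100 * weighted_mean_distance(industry_risks) / weighted_mean_distance(company_risks)

industry = [(4.0, 100), (12.0, 300)]     # weighted mean distance 10 km
company = [(2.0, 50), (8.0, 50)]         # weighted mean distance 5 km
print(round(tri(company, industry)))     # 200: twice as close as the industry
```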



Figure 10.4 - Company Target Relativity Comparison – Terrorism Risk Indices

The weighting may be based purely on physical distance, or may use an exponential weighting in order to take into account the relative vulnerability of insured assets at closer distances to the target. The TRI for each of the 40 target types is shown in Figure 10.4.

Modelling the What

Twenty-four different attack types are considered. These are shown in Table 10.3.

Nuclear: 200 kiloton; 20 kiloton; 10 kiloton; 1 kiloton

Conventional: Cruise missile; Multiple aircraft; Single aircraft; Large truck bomb; Small truck bomb; Car bomb; Human bomb

Radiological: Cruise missile; Multiple aircraft; Single aircraft; Large truck bomb; Small truck bomb; Car bomb; Human bomb

Biological: Large event; Medium event; Small event

Chemical: Large event; Medium event; Small event

Table 10.3 – Attack Types

The four sizes of nuclear weapon represent a realistic range of potential TNT-equivalent explosiveness for transportable nuclear weapons that might be employed by international or domestic terrorists. Similarly, the categories of conventional weapons represent a realistic range of attack types that has been experienced across the globe in recent years. Radiological weapon sizes are equivalent to the conventional attack types since it is assumed that

conventional explosives would be used to disperse the radioactive material.

There is much greater uncertainty involved with chemical or biological weapons. For chemical weapons, among the many factors that must be considered are wind speed and direction, humidity and changes in humidity, terrain and topography, vegetation, air temperature and time of day. As biological weapons may be transported long distances by human “carriers”, a detailed model would need to describe complex interactions between people travelling on planes and landing in other cities, as well as the role of medical and political interventions against the spread of any contagious threat. Because of this complexity, it was decided to model just three scenarios for each of these attack types: small, medium and large. The largest chemical attack scenarios cause injury as far as twenty kilometres from the target.

For each event, the distance is calculated from the target to the locations of insured risk. This distance is used to determine the distribution of potential damage to buildings and contents, the distribution of potential loss of occupancy or business interruption and the distribution of probability of injury and death to humans at each location. Much research has been undertaken into effects of various explosives, due to their use in construction and as weapons in war over the last few centuries. The damage distributions in this model were determined and peer reviewed by certified engineers following discussion with experts from Impact Forecasting LLC in Chicago, Illinois, Shirmer Engineering in Deerfield, Illinois, Aon Special Risk Team in London, UK, members of British Intelligence located in Chicago, Illinois, and a small selection of ex-Navy Seals and ex-military personnel. Additional references are listed below.57 58 59 60 61 62 63 64

57 The Diagram Group, 1990, “Weapons”, Updated Edition, ISBN: 0312039506

58 Baker, W.E., 1973, “Explosions in Air”, University of Texas Press

59 Biggs, J. M., 1964, “Introduction to Structural Dynamics”, McGraw-Hill Companies

60 Department of the [US] Army, 1986, “Explosives and Demolitions”, Field Manual 5-25, Headquarters

61 US Navy, 1974, “US Navy Seal Combat Manual”, Special Warfare Training Handbook, Lancer

62 Croddy, E., 2002, “Chemical and Biological Warfare”, Springer-Verlag

63 Dunnigan, J. F., 1993, “How to Make War”, Third Edition

Figure 10.5 shows sample damage functions for human injuries and deaths for a conventional large truck bomb.


Figure 10.5 – Mean Casualty Rates - Conventional Weapons – Large Truck Bomb

For each damage distribution, the uncertainty around a mean loss rate can be used for determining costs of loss above insurance excesses and/or losses up to insured limits. This is more important when modelling commercial property policies, the form of insurance likely to be most at risk to terrorist attacks.
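The excess-and-limit step described here can be sketched as follows; the policy terms and simulated losses are invented:

```python
# Applying a policy excess and limit to simulated ground-up damage: the step
# where the uncertainty distribution around the mean damage rate matters,
# since only the part of each sample above the excess is ceded.
def insured_loss(ground_up: float, excess: float, limit: float) -> float:
    """Loss to the insurer after the excess, capped at the limit."""
    return min(max(ground_up - excess, 0.0), limit)

samples = [40_000, 180_000, 950_000]   # simulated ground-up losses ($), invented
excess, limit = 50_000, 500_000
ceded = [insured_loss(s, excess, limit) for s in samples]
print(ceded)   # smallest sample falls below the excess; largest hits the limit
```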

The mean loss (in $) from each of 24 simulated events for each attack type at a given target is shown for Company XYZ in Table 10.4.

64 Kagan, R., 2002, “Power and Weakness”, Policy Review No. 113, Hoover Institution, Stanford University

Target Number: 27
Name: Reserve Bank Of Australia - Adelaide
Address: 182 Square
City: Adelaide
State: SA

Attack Index  Attack Type  Expected Loss ($)
1  Nuclear - 200 Kiloton  226,501,088
2  Nuclear - 20 Kiloton  174,723,559
3  Nuclear - 10 Kiloton  112,523,093
4  Nuclear - 1 Kiloton  83,752,385
5  Conventional - Cruise Missile Attack  4,037,915
6  Conventional - Multiple Aircraft  5,177,315
7  Conventional - Single Aircraft  3,145,328
8  Conventional - Large Truck Bomb  1,506,309
9  Conventional - Small Truck Bomb  800,224
10  Conventional - Car Bomb  444,037
11  Conventional - Human Bomb  13,320
12  Radiological - Cruise Missile Attack  7,358,117
13  Radiological - Multiple Aircraft  7,424,596
14  Radiological - Single Aircraft  4,141,269
15  Radiological - Large Truck Bomb  3,129,070
16  Radiological - Small Truck Bomb  2,550,403
17  Radiological - Car Bomb  1,866,027
18  Radiological - Human Bomb  1,282,064
19  Biological - Large Attack  8,875,172
20  Biological - Medium Attack  1,452,493
21  Biological - Small Attack  256,539
22  Chemical - Large Attack  16,312,988
23  Chemical - Medium Attack  2,536,491
24  Chemical - Small Attack  304,646

Table 10.4 – Simulation Output - Expected Loss by Attack Type

The When

As previously mentioned, the frequency of terrorism attacks cannot be estimated with much confidence. In order to perform an analysis similar to a probabilistic study of a natural peril, assumptions would be required for:

• the annual frequency of events;

• the conditional probability of an attack at each target or target type given an event occurs; and

• the conditional probability of category of attack given an event occurs.
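Given those three assumptions, the annual expected terrorism loss reduces to a triple-weighted sum, as in this sketch with invented probabilities and losses:

```python
# Expected annual loss under the three assumptions above. The frequency,
# conditional probabilities and scenario losses are all invented.
annual_event_freq = 0.05    # assumed: one event per 20 years

# conditional probability of each (target, attack) combination given an
# event, with the modelled loss for that scenario ($m)
scenarios = [
    {"p_target": 0.6, "p_attack": 0.7, "loss_m": 150},   # city truck bomb
    {"p_target": 0.6, "p_attack": 0.3, "loss_m": 900},   # city aircraft
    {"p_target": 0.4, "p_attack": 1.0, "loss_m": 40},    # regional attack
]

eal = annual_event_freq * sum(s["p_target"] * s["p_attack"] * s["loss_m"]
                              for s in scenarios)
print(round(eal, 2))        # expected annual loss in $m
```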

Although it is impossible to have much confidence in the estimates, there are ways in which frequencies can be derived for use in a model. These include:

• Statistical analysis

• Expert opinion

• Implying frequencies from known reinsurance rates


These are discussed in turn below.

Statistical analysis

Statistical analysis requires data. Little relevant information exists for Australia, but the US Department of State reports statistics on international terrorism. Figures 10.6 and 10.7 show respectively the total number of international terrorism attacks from 1981 to 2001 and the total number of international attacks by region from 1985 to 2001.


Figure 10.6 – Total International Terrorism Attacks by Year


Figure 10.7 – Total International Attacks by Region

The U.S. Department of Justice’s Federal Bureau of Investigation report Terrorism in the United States 199965 also shows statistics on different types of terrorism attacks (Table 10.5).

Year Business Diplomat Government Military Other

Department of State

1995 338 22 20 4 126

1996 235 24 12 6 90

1997 327 30 11 4 80

1998 282 35 10 4 67

1999 276 59 27 17 95

2000 384 30 17 13 113

Total 67% 7% 4% 2% 21%

Federal Bureau of Investigation

1980-1999 232 61 101 13 7

Total 56% 15% 24% 3% 2%

Table 10.5 – Attacks by Target Type

65 Federal Bureau of Investigation, 1999, “Terrorism in the United States”, The U.S. Department of Justice

Figures 10.8 and 10.9 show respectively the proportion of terrorism attacks by target type and the proportion of terror attacks by event type from the FBI report.

Figure 10.8 – Terrorism by Target

Figure 10.9 – Terrorism by Event Type


Expert Opinion

As limited data is available for the region of interest, opinion-based frequencies are an alternative. Techniques such as the Delphi Method66, developed by the Rand Corporation during the Cold War, can be used to give weights to each of the expert opinions, based on the credibility and estimated accuracy of each expert’s opinion. Yet given the difficulty that the FBI and CIA had in anticipating the September 11 2001 attacks, despite the resources available to those organisations, it is difficult to determine who really knows. Who qualifies as an expert?

As this opinion-based approach carries such a high level of uncertainty, it is the author’s opinion that it should only be used when no other approach is available.

Implied Frequencies

Reinsurance-based premiums can be used as a market-based estimate of the risk. This approach starts with an analysis of potential losses by target and attack type, as outlined above. Estimates must be made of the conditional probabilities of attack by target and attack type, given a terrorist event. The market premium for the reinsurance contract, adjusted for profit margin, implies an annual attack frequency used in the reinsurance pricing. The frequency can be “backed into” once market price estimates have been obtained.
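Backing into the frequency can be sketched under the (strong) simplifying assumption that the market premium equals a margin factor times the annual expected loss; all inputs are invented:

```python
# Back-solving an implied attack frequency from a market reinsurance price,
# assuming: premium = margin_factor * frequency * expected_event_loss.
def implied_frequency(market_premium, expected_event_loss, margin_factor):
    return market_premium / (margin_factor * expected_event_loss)

# $12m premium for a layer with an expected event loss of $20m, where the
# price is assumed to be 15x the expected loss in the layer
freq = implied_frequency(12_000_000, 20_000_000, 15.0)
print(freq)    # 0.04 implied events per year
```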

One drawback of this method is that, due to the wide uncertainty in respect of terrorism, the reinsurance profit margin in the current environment makes up the majority of any terrorism reinsurance price. In the author’s experience, for some actual contracts on natural perils at high layers, with return periods of, say, greater than 250 years, the quoted price of an excess of loss layer may contain a margin equivalent to more than ten or twenty times the expected loss in that layer. Terrorism quotes, due to their even higher uncertainty and the use by many reinsurers of minimum Rates-on-Line, may contain margins many times higher still. This means that the estimation of the implied expected loss has an even wider error margin.

None of these approaches provides a satisfactory method of estimating frequencies in terrorism modelling. The conclusion is therefore that the production of PML curves for the terrorism peril is problematic, so direct comparisons should not be made between any PML results from this terrorism analysis and the probabilistic results for natural perils determined using the approaches in this thesis.

66 Helmer, Olaf, March 1967, “Analysis of the Future: The Delphi Method”, P-3558, Rand Corporation

Using the Terrorism Model Results

Despite the difficulties outlined above, if assumptions are made regarding the level of annual frequency, a number of useful metrics may be calculated from the results of this analysis:

• Premium rates can be estimated from the annual expected cost of terrorism attacks at each geographic location or postcode. (Figure 10.10)

• PMLs may be used for risk management through the determination of reinsurance purchasing requirements and the pricing of such contracts.

• Reinsurance costs can be attributed to particular policies.

• Retention pricing can allow insurers to estimate their expected losses below any reinsurance contract or participation in the Pool and allocate this cost accordingly.

• Maximum industry event losses and the probability distributions of such losses may be estimated and used by regulators to determine the security of the industry and exposure to the terrorism peril (Table 10.6).

Figure 10.10 – Relative Terrorism Premium Rate by Location – Sydney

(Purple=High, Red=Medium, Yellow=Low, White=Nil)


Potential losses on a sample portfolio indicative of the total insurance industry, including life insurance and workers compensation losses in addition to the property exposure losses modelled for natural perils in this thesis, are shown in Table 10.6. It can be seen that events from the larger attack categories at the given target would produce industry losses that would exhaust any funds in the Pool and the commercial and government loan facilities. Multiple events would be even more costly.

Target 1: Sydney CBD, 500 George St, Sydney, NSW 2000 (Longitude 151.207098, Latitude -33.872113)

Attack Type | Commercial Property Insurance ($) | Residential Property Insurance ($) | Deaths | Life Insurance Cost (* $300K) ($) | Injuries and Deaths | Workers Compensation Cost (* $250K) ($) | TOTAL ($)

1 Nuclear - 200 Kiloton | 27,721,277,294 | 47,339,274,491 | 677,893 | 203,367,900,000 | 589,547 | 147,386,750,000 | 425,815,201,785
2 Nuclear - 20 Kiloton | 24,912,877,120 | 36,880,980,650 | 552,733 | 165,819,900,000 | 530,805 | 132,701,250,000 | 360,315,007,770
3 Nuclear - 10 Kiloton | 21,480,625,032 | 24,583,199,619 | 277,023 | 83,106,900,000 | 313,322 | 78,330,500,000 | 207,501,224,651
4 Nuclear - 1 Kiloton | 19,274,212,792 | 18,909,018,347 | 196,154 | 58,846,200,000 | 184,627 | 46,156,750,000 | 143,186,181,139

5 Conventional - Cruise Missile Attack | 5,991,149,504 | 1,369,702,017 | 17,373 | 5,211,900,000 | 6,226 | 1,556,500,000 | 14,129,251,521
6 Conventional - Multiple Aircraft | 7,153,426,632 | 1,710,975,488 | 24,528 | 7,358,400,000 | 9,747 | 2,436,750,000 | 18,659,552,120
7 Conventional - Single Aircraft | 3,955,715,013 | 884,285,754 | 16,904 | 5,071,200,000 | 6,864 | 1,716,000,000 | 11,627,200,767
8 Conventional - Large Truck Bomb | 1,435,660,991 | 320,118,103 | 7,017 | 2,105,100,000 | 4,270 | 1,067,500,000 | 4,928,379,094
9 Conventional - Small Truck Bomb | 448,795,413 | 100,070,655 | 5,158 | 1,547,400,000 | 3,506 | 876,500,000 | 2,972,766,068
10 Conventional - Car Bomb | 256,050,901 | 57,093,234 | 72 | 21,600,000 | 397 | 99,250,000 | 433,994,135
11 Conventional - Human Bomb | - | - | - | - | 19 | 4,750,000 | 4,750,000

12 Radiological - Cruise Missile Attack | 13,042,651,037 | 3,489,753,500 | 18,657 | 5,597,100,000 | 6,966 | 1,741,500,000 | 23,871,004,537
13 Radiological - Multiple Aircraft | 12,334,091,755 | 3,241,577,348 | 24,937 | 7,481,100,000 | 11,051 | 2,762,750,000 | 25,819,519,103
14 Radiological - Single Aircraft | 8,823,234,581 | 1,987,964,028 | 18,297 | 5,489,100,000 | 7,743 | 1,935,750,000 | 18,236,048,609
15 Radiological - Large Truck Bomb | 4,449,751,627 | 996,907,374 | 7,689 | 2,306,700,000 | 4,884 | 1,221,000,000 | 8,974,359,001
16 Radiological - Small Truck Bomb | 2,520,143,710 | 562,495,793 | 5,758 | 1,727,400,000 | 3,891 | 972,750,000 | 5,782,789,503
17 Radiological - Car Bomb | 1,481,438,600 | 330,889,368 | 372 | 111,600,000 | 679 | 169,750,000 | 2,093,677,968
18 Radiological - Human Bomb | 1,958,057,623 | 436,600,071 | - | - | 100 | 25,000,000 | 2,419,657,694

19 Biological - Large Attack | 8,565,088,291 | 2,980,978,803 | 54,580 | 16,374,000,000 | 57,423 | 14,355,750,000 | 42,275,817,094
20 Biological - Medium Attack | 2,211,527,782 | 532,518,628 | 17,591 | 5,277,300,000 | 15,407 | 3,851,750,000 | 11,873,096,410
21 Biological - Small Attack | 283,472,773 | 63,771,608 | 1,859 | 557,700,000 | 1,458 | 364,500,000 | 1,269,444,381

22 Chemical - Large Attack | 12,047,377,821 | 5,036,740,166 | 91,061 | 27,318,300,000 | 102,095 | 25,523,750,000 | 69,926,167,987
23 Chemical - Medium Attack | 3,641,599,269 | 958,260,072 | 29,645 | 8,893,500,000 | 30,193 | 7,548,250,000 | 21,041,609,341
24 Chemical - Small Attack | 334,609,731 | 75,455,905 | 2,231 | 669,300,000 | 1,665 | 416,250,000 | 1,495,615,636

Table 10.6 – Simulation Output - Estimated industry loss from specified event

The model losses outlined in this chapter have focused on losses, at the time of the event, to the risks physically located in the vicinity of the attack. Although business interruption losses are modelled for the locations physically affected by the attacks, economic losses to companies and industries can reach well beyond the vicinity of any attack. For example, the bomb blasts in Bali in 2002 led to a wide reduction in tourism to the whole of South East Asia and beyond. The extent of losses beyond both the geographical locations and the classes of insurance modelled should be considered in any assessment of terrorism risk, but was beyond the scope of this study.

Industry PMLs for Terrorism

As stated above, it is problematic to generate PML figures for the Australian industry exposure for terrorism as the annual frequency of events is difficult to estimate.


However, using the model outlined in this Chapter, Table 10.7 shows a comparison of the relative loss from terrorism under a range of assumptions for annual frequency of attack.

Return Period 0.050 per year 0.125 per year 0.250 per year 0.500 per year 1.250 per year

1000 $1,969m $4,952m $7,730m $11,626m $15,575m

500 $727m $2,502m $4,952m $7,730m $12,844m

250 $209m $1,071m $2,502m $4,952m $9,270m

100 $26m $209m $727m $1,969m $4,952m

50 $0m $47m $209m $727m $2,502m

20 $0m $0m $26m $121m $727m

10 $0m $0m $0m $26m $209m

Average Annual Loss ($) 8,547,316 21,368,289 42,736,579 85,473,158 213,682,894

Table 10.7 – Terrorism PML by Annual Frequency

Table 10.7 shows that the PML at, say, 250 years is extremely sensitive to the choice of annual frequency. For example, increasing the annual frequency from 0.125 per year to 1.25 events per year, a tenfold increase, increases the 250 year PML from $209 million to $4,952 million, an increase of twenty-four times.
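This sensitivity of the PML to the assumed frequency can be illustrated with a toy calculation, assuming Poisson event arrivals and a hypothetical lognormal severity distribution in place of the full simulation model; none of the parameters below are the thesis values.

```python
import math
import random

random.seed(1)

# Hypothetical per-event industry-loss severity in $m (lognormal stand-in).
severity = sorted(random.lognormvariate(5.0, 1.5) for _ in range(100_000))

def pml(annual_frequency, return_period):
    """Loss level with annual exceedance probability 1/return_period.

    With Poisson counts, P(some event exceeds x in a year)
    = 1 - exp(-lambda * S(x)), where S is the per-event survival function.
    """
    target = 1.0 / return_period
    if target >= 1.0 - math.exp(-annual_frequency):
        return 0.0  # frequency too low ever to reach this return period
    # per-event survival probability needed to hit the target...
    s = -math.log(1.0 - target) / annual_frequency
    # ...read off as an empirical quantile of the severity sample
    k = int((1.0 - s) * len(severity))
    return severity[min(k, len(severity) - 1)]

for lam in (0.125, 0.25, 1.25):
    print(lam, round(pml(lam, 250), 1))
```

The zero values in the lower-left of Table 10.7 fall out of the same structure: when the annual frequency is too low, the chance of any event at all is below 1/T, so the PML at that return period is nil.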

If, for comparative purposes, the central frequency estimate of 0.25 events per year is used, the total terrorism PMLs may be examined to determine the contributing factors to overall industry loss. Since minimal confidence is being placed in the choice of annual frequency, the focus of any observations is on the relative magnitudes rather than the absolute PML amounts.

Terrorism PML by Exposure Class

Figure 10.11 shows the terrorism PML by property exposure class.

[Chart: Estimated Loss ($m) against Return Period (Years, 10 to 10,000); series: Total, Commercial, Residential; vertical scale $0m to $20,000m]

Figure 10.11 – Terrorism PML by Exposure Class

It can be seen from this figure that the commercial losses are the primary contributor to loss. This is not unexpected if central business districts are assumed to be the primary targets.

Terrorism PML by Target Type

Figure 10.12 shows the terrorism PML by target type.

[Chart: Estimated Loss ($m) against Return Period (Years, 10 to 10,000); series: Total, Commercial, Infrastructure, Government, Transport/Education, Public; vertical scale $0m to $20,000m]

Figure 10.12 – Terrorism PML by Target Type


Again, it can be seen from this figure that losses due to attacks at commercial targets are the primary contributor to loss.

Conclusions

In summary, it is possible to model terrorism, but the probabilistic component of any model is problematic. Nonetheless, the task has to be faced if the risk is eventually to be passed from the Australian Government to the insurance industry. The terrorism PML results provided here are not used in the remainder of this thesis.


Chapter 11 – Model Results

Summary of Modelled Australian PMLs

In this chapter the combination of PMLs for all natural perils in Australia is examined. Table 11.1 summarises the PML results for cyclone, earthquake and Other Perils for the Australian property exposure developed in the earlier chapters of this thesis. The combined results were determined by collectively ranking the event sets for all three models and determining PML amounts assuming independence. Results are shown in millions of dollars and incorporate demand surge.

Return Period (years)   Cyclone ($m)   Earthquake ($m)   Other Perils ($m)   Combined ($m)

10,000 35,488 150,875 9,870 150,875

5,000 23,291 104,632 8,780 104,632

2,000 17,908 58,515 7,195 58,515

1,000 11,945 42,226 5,953 42,551

500 6,983 16,353 4,742 19,808

250 4,699 7,639 3,622 10,530

100 2,164 2,884 2,360 5,421

50 1,161 1,448 1,607 3,234

20 522 446 881 1,632

10 247 150 514 899

Average Annual Loss 157 220 349 726

Table 11.1 – Natural Peril PMLs Incorporating Demand Surge ($m)
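The collective-ranking combination used for Table 11.1 can be sketched as follows. The three event sets below are synthetic stand-ins for the cyclone, earthquake and Other Perils models, and the simple occurrence-rate exceedance is a simplification of a full frequency treatment; it illustrates the mechanics only.

```python
import random

random.seed(0)

# Hypothetical event sets for three independent perils: each event is a
# pair (annual occurrence rate, industry loss in $m).
def make_event_set(n, rate_total, scale):
    return [(rate_total / n, random.expovariate(1.0 / scale)) for _ in range(n)]

cyclone = make_event_set(5000, 0.5, 300.0)
quake = make_event_set(5000, 0.2, 900.0)
other = make_event_set(5000, 2.0, 150.0)

def combined_pml(event_sets, return_period):
    """Rank all events by loss; the PML at a return period is the loss at
    which the cumulative annual rate of exceedance first reaches 1/RP."""
    events = sorted((e for s in event_sets for e in s), key=lambda e: -e[1])
    cum_rate = 0.0
    for rate, loss in events:
        cum_rate += rate
        if cum_rate >= 1.0 / return_period:
            return loss
    return 0.0

print(round(combined_pml([cyclone, quake, other], 250), 1))
```

Because every peril's events enter the same ranked list, the combined PML at a given return period is always at least as large as any single peril's PML, mirroring the Combined column of Table 11.1.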


Comparison by Peril

Figure 11.1 shows the modelled PMLs for cyclone, earthquake and other perils, together with their combination; it is effectively a chart of Table 11.1.

[Chart: Total Industry PML ($m) against Return Period (Years, 10 to 1,000); series: Combined, Cyclone, Earthquake, Other; vertical scale $0m to $40,000m]

Figure 11.1 – Combined PMLs versus individual Perils

The combined model results show Other Perils as the dominating contributor to loss for events up to an ARI of approximately 60 years, beyond which earthquake becomes the most significant loss contributor. Although cyclone contributes to loss along the entire curve, it is at no point the major contributor, if Other Perils are considered as a group.

Model Variability and Sensitivity to Assumptions

The many forms of uncertainty associated with the individual models lead to an uncertainty in the combined results that is not easily quantified. In what follows, the various contributory components of uncertainty to simulated outcomes from the cyclone loss model are considered. Uncertainty is grouped into four major categories: variability of nature, data quality, parameter uncertainty and simulation error.


Variability of nature

It is impossible with the current technology to develop a model that can predict with accuracy all of the inherent randomness of nature. Even the number of cyclones occurring in a given year is difficult to predict with any certainty, let alone where and when they will cross the coastline.

As models become more advanced, the ability to predict the occurrence and impacts of cyclones should improve. However, if in the limit it were ever possible to foretell events with certainty, insurance would be rendered unnecessary.

Data quality

A primary source of model uncertainty is the data used to build such a model. The historic event set and historic recordings of wind speed and air pressure, on which the scientific models are based, have been checked for obvious errors, but no doubt contain many innocent misrepresentations about the recorded parameters. Malfunctioning of instruments, inaccurate reporting by the operator at the time of the recording, transcribing errors into the databases used, or simply the inability of the recording instruments to accurately measure the required variables would all contribute towards uncertainty in the derived parameters used in the model.

In addition, not all information is directly recorded during events; cyclone central pressures, for example, are in many cases inferred from the analysis of satellite photos using the Dvorak technique67.

Information used to determine average values and mixes of different construction types and building ages has been taken from insurance company databases. These databases have been developed through a process of data entry over many years for some companies, and so are also open to the chance of data entry error. Moreover, the base information on buildings is often volunteered by the policyholder and never verified by the insurer. This information may well be flawed: for example, many owners would not know whether their home was of wooden-framed, brick veneer or double brick (unreinforced masonry) construction, all of which perform differently under seismic and wind stresses.

67 Dvorak, V.F., 1975, "Tropical Cyclone Intensity Analysis and Forecasting from Satellite Imagery"

The damage functions chosen represent a typical building of each specific combination of risk parameters. However, within each of these category groupings there remains considerable variation in behaviour under cyclonic wind forcing, due to variations in construction quality. For example, the same team of builders may have worked on two identical houses on either side of a street, but for whatever reason may not have used as many nails in a particular roof joint in one building. A small variation such as this could lead to completely different damage levels between the two buildings under the same wind forcing.

In order to reduce this form of error, insurers need to be more assiduous about collecting the relevant and accurate data about the properties they insure.

Parameter uncertainty

Each of the components of the model has required the determination of parameters and fitted functions, and uncertainty about these parameters arises from the limited data sample of historic cyclones. For example, an annual Poisson frequency of around one event per year estimated from 100 years of data has a sampling error of approximately 10% of the estimate.
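The 10% figure can be checked directly for a rate of about one event per year, using the standard result that a Poisson count of n events has standard deviation sqrt(n).

```python
import math

def poisson_rate_relative_error(observed_events, years):
    """Relative standard error of an annual Poisson rate estimated
    from a total count of events observed over a number of years."""
    rate = observed_events / years
    std_err = math.sqrt(observed_events) / years  # sd of the count is sqrt(n)
    return std_err / rate

# 100 events observed in 100 years, i.e. about one cyclone per year:
print(poisson_rate_relative_error(100, 100))  # 0.1, i.e. 10% of the estimate
```

For rarer events the relative error is larger: 25 events in 100 years gives a 20% relative error, which is why tail parameters are so much harder to pin down than central ones.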

As time progresses, a larger data sample will lead to more accurate estimation of model parameters. In addition, greater information about cyclones will lead to improvements in the design of the individual model components.

Simulation sampling error

Errors are introduced into the modelling process through the use of simulation. Simulation of the various event paths and parameters, and of the damage caused by each event, leads to uncertainty in the upper end of the distribution.

In the methodology used in this thesis, the events are assumed to have equal probability. If the events were chosen by a purely random Monte Carlo methodology, then the uncertainty or confidence interval around each point on the PML curve can be estimated by a binomial approach based on the use of the simulated severity distribution.

Let f(x) be the probability density function of the underlying distribution estimated by the simulation,

F(x) be the cumulative distribution function of the underlying distribution,

f_k,n(x) be the density function of the kth smallest of n data points, and

F_k,n(x) be the cumulative distribution function of the kth smallest of n data points.

The density function of the kth smallest of n data points is determined as follows.

f_k,n(x) = Prob(kth smallest element of n is x)

= f(x) * Prob(k-1 elements below x) * Prob(n-k elements above x) * Number of Permutations

= f(x) * [F(x)]^(k-1) * [1-F(x)]^(n-k) * n!/[(k-1)!(n-k)!] (11.1)

F_k,n(x) = ∫₀ˣ f(t) * [F(t)]^(k-1) * [1-F(t)]^(n-k) * n!/[(k-1)!(n-k)!] dt (11.2)

The distribution of the minimum of n data points, i.e. k=1:

f_1,n(x) = Prob(minimum element of n is x)

= f(x) * [F(x)]^(1-1) * [1-F(x)]^(n-1) * n!/[(1-1)!(n-1)!]

= f(x) * [1-F(x)]^(n-1) * n (11.3)

The distribution of the maximum of n data points, i.e. k=n:

f_n,n(x) = Prob(maximum element of n is x)

= f(x) * [F(x)]^(n-1) * [1-F(x)]^(n-n) * n!/[(n-1)!(n-n)!]

= f(x) * [F(x)]^(n-1) * n (11.4)

By assuming a statistical distribution to approximate the results from the simulation models, Equation 11.1 can be used to generate a probability density function representative of the estimates of each PML point, based purely on the uncertainty due to simulation sampling error. A Weibull distribution was fitted to the region of the PML curve for all perils from 50 years through to the uppermost simulated point. The variability of the 1000 year return period PML was estimated assuming a range of different simulation sizes, as shown in Figure 11.2.
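As a sketch of this calculation, the order-statistic density of Equation 11.1 can be integrated numerically for a hypothetical Weibull severity curve; the shape and scale below are illustrative only, not the fitted thesis values. The 1000 year point of an n-event equiprobable set is treated as the (1 - 1/1000) empirical quantile, i.e. order statistic k = n - n/1000.

```python
import math

# Hypothetical Weibull fitted to the upper PML curve (loss in $m):
# F(x) = 1 - exp(-(x/scale)**shape). Parameters are illustrative only.
shape, scale = 0.8, 4000.0

def F(x):
    return 1.0 - math.exp(-((x / scale) ** shape))

def f(x):
    return (shape / scale) * (x / scale) ** (shape - 1) * math.exp(-((x / scale) ** shape))

def order_stat_sd(n, k, x_max=200_000.0, steps=20_000):
    """Standard deviation of the kth smallest of n samples (Equation 11.1),
    found by numerical integration of its density."""
    log_c = math.lgamma(n + 1) - math.lgamma(k) - math.lgamma(n - k + 1)
    dx = x_max / steps
    m1 = m2 = 0.0
    for i in range(1, steps):
        x = i * dx
        Fx = F(x)
        if 0.0 < Fx < 1.0:
            log_d = log_c + (k - 1) * math.log(Fx) + (n - k) * math.log(1.0 - Fx)
            d = math.exp(log_d) * f(x)  # density of the order statistic at x
            m1 += x * d * dx
            m2 += x * x * d * dx
    return math.sqrt(max(m2 - m1 * m1, 0.0))

results = {}
for n in (1000, 10000, 50000):
    k = n - n // 1000  # order statistic at the 1000 year return period
    results[n] = order_stat_sd(n, k)
    print(n, round(results[n], 1))
```

The spread of the estimate shrinks roughly with the square root of the number of simulated events, which is the convergence behaviour visible in Figure 11.2.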

[Chart: probability density functions of the 1000 year PML for simulation sizes n = 1000, 2000, 5000, 10000, 20000 and 50000; horizontal scale $0m to $80,000m]

Figure 11.2 – Effect on PML Estimate of Increasing Number of Simulations

Figure 11.2 shows that, as the number of samples increases, the 1000 year PML starts to converge. However, based on the estimates shown above, it appears that pure Monte Carlo approaches are insufficient to achieve a satisfactory level of convergence.

In generating the cyclone event set, sampling from the Halton sequence was used to obtain faster convergence and will theoretically produce a variance smaller than that from a pure Monte Carlo approach68. This being the case, the distribution of data points given above can be considered as an upper bound on the possible simulation variance.
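For reference, a one-dimensional Halton sequence is straightforward to generate via the standard radical-inverse construction; the implementation below is a sketch, not the one used for the cyclone event set.

```python
def halton(index, base):
    """The index-th element (1-based) of the Halton low-discrepancy
    sequence in the given prime base, returned as a value in [0, 1).

    Works by reversing the digits of index in the chosen base and
    placing them after the radix point (the radical inverse)."""
    result, f = 0.0, 1.0
    i = index
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

# First few points of the base-2 sequence: 1/2, 1/4, 3/4, 1/8, ...
print([halton(i, 2) for i in range(1, 5)])  # [0.5, 0.25, 0.75, 0.125]
```

Unlike pseudo-random draws, successive points deliberately fill the gaps left by earlier ones, which is the source of the faster convergence noted above; multi-dimensional sampling uses a different prime base per dimension.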

The variance at large return periods can be reduced by increasing the number of events in the simulation but at the expense of increasing processing times.

Another approach to improving accuracy is the use of importance sampling. Here, the combinations of parameters that lead to the larger return period events are first identified. The distributions of these parameters are segmented, and a greater number of samples is taken from the region of each distribution leading to larger events, with a corresponding reduction in the event frequency of each of these events. The effect is to provide a greater number of samples in the tail of the distribution without affecting the overall frequency or severity distribution.

68 See Morgan, Byron, J.T., 1984, "Elements of Simulation", Mathematical Institute, University of Kent, Canterbury, United Kingdom

Other quasi- Monte Carlo methods including the Latin Hypercube method or alternate variations of stratified sampling could also reduce the variance in the tail of the distribution.
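A minimal sketch of the stratified tail-sampling idea described above, using a hypothetical lognormal severity in place of the cyclone model: half the sampling budget is spent on the upper 5% of the distribution, and each draw is reweighted by its stratum probability so the overall distribution is unchanged.

```python
import math
import random
from statistics import NormalDist

random.seed(42)
ND = NormalDist()

MU, SIGMA, THRESHOLD, N = 5.0, 1.0, 3000.0, 20_000  # all hypothetical

def severity_from_u(u):
    # Inverse-transform sample of a lognormal severity from a uniform u.
    return math.exp(MU + SIGMA * ND.inv_cdf(u))

def plain_mc():
    """Plain Monte Carlo estimate of P(loss > THRESHOLD)."""
    return sum(severity_from_u(random.random()) > THRESHOLD for _ in range(N)) / N

def stratified_tail():
    """Half the budget on the body (u < 0.95), half on the tail (u >= 0.95),
    each draw weighted by its stratum probability (0.95 and 0.05)."""
    est = 0.0
    for _ in range(N // 2):
        est += 0.95 / (N // 2) * (severity_from_u(random.uniform(0.0, 0.95)) > THRESHOLD)
    for _ in range(N // 2):
        est += 0.05 / (N // 2) * (severity_from_u(random.uniform(0.95, 1.0)) > THRESHOLD)
    return est

print(plain_mc(), stratified_tail())
```

With the same budget, the stratified estimator sees hundreds of tail draws where plain Monte Carlo sees a handful, so the variance of the tail-probability estimate falls sharply; the full importance-sampling scheme in the text applies the same reweighting event by event.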

Quantification of Epistemic Uncertainty in the Cyclone Model

In the development of the cyclone model, uncertainty has been introduced at many junctures, including, but not limited to:

• Cleaning of historic event database

• Determination of starting locations

• Fitting of starting segment frequencies

• Central pressure distributions

• Eye radius

• Forward movement speed

• Storm movement

• Storm decay over land

• Wind speed calculation

• Terrain and topography adjustments

• Damage calculation

• Building construction assumptions

• Building values

• Insurance assumptions

Model Error Estimation

A lower bound for the range of model uncertainty may be determined by performing a sensitivity analysis on a number of key assumptions and then combining these sensitivities with the individual parameter errors of each of the model components, estimated using statistical techniques involving the sample size and form of each distribution. The effects of each source of uncertainty are not independent, so an estimate produced assuming independence will understate the possible model error.

Parameter Error

Conservative estimates were made of the uncertainty in the choice of a selection of parameters for various model components. In each case the standard deviation of the fitted values was estimated and then rounded down. The variation in results arising from the sensitivity test of each parameter can therefore be considered an underestimate of the actual variability caused by a shift of that parameter by one standard deviation. The selected components and their parameter fit statistics are shown in Table 11.2.

Model Component | Selected Fit Based On | Assumed Standard Deviation of Fit

Central pressure | Difference between constant ambient pressure and central pressures varying by coastal segment | 5 hPa

Forward speed | Normal distribution fit with mean forward speed varying by coastal segment | 5 m/s

Eye radius | Lognormal distribution with mean as a function of latitude | 5 km

Decay over land | Rate of decay of central pressure differential as a function of distance travelled | 0.05 hPa/km

Storm track warping | Change in direction in compass degrees per time step | +25% on s.d.

Table 11.2 – Parameter Error


Sensitivity Analysis

Each of the parameters in Table 11.2 was individually varied by one standard deviation and the PML analysis recalculated. For illustrative purposes, Table 11.3 shows the impact on the 250-year PML. The results shown are before modification for demand surge.

Analysis   250 Year PML ($m)   Difference from Base Case (%)

Base Case 3,799 NA

Central pressure – 1 σ 4,312 +14%

Forward speed + 1 σ 5,372 +41%

Eye radius + 1 σ 4,659 +23%

Decay – 1 σ 4,117 +8%

Warping + 1 σ 4,856 +28%

Table 11.3 – Parameter Error Quantification

Total Simulation Error

If the assumption is made that each of the components is independent, then the total variance of the combination of all sources of uncertainty is equal to the sum of the individual variances.

Therefore, the total standard deviation of the PML at the 250 year level, using this approach and noting that there are further uncertainties to consider, is greater than 57% of the central estimate. This figure arises from summing the squares of the individual percentage sensitivities in Table 11.3 and taking the square root.
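The combination can be verified directly from the Table 11.3 sensitivities.

```python
import math

# One-standard-deviation percentage sensitivities from Table 11.3:
# central pressure, forward speed, eye radius, decay, warping.
sensitivities = [0.14, 0.41, 0.23, 0.08, 0.28]

# Assuming independence, variances add, so the combined relative
# standard deviation is the root of the sum of squares.
combined = math.sqrt(sum(s * s for s in sensitivities))
print(round(combined, 3))  # 0.57, i.e. 57% of the central estimate
```

Note how the forward-speed term dominates: dropping it alone would cut the combined figure to below 40%, which is why that parameter deserves the most fitting effort.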

A 90% confidence interval for the 250 year PML can then be constructed from the range of possible results at that level. The confidence intervals of the PML values at each return period are shown in Table 11.4 and graphically in Figure 11.3.


Return Period Lower 5% Mean Upper 5%

1000 5,128 8,281 13,371

500 2,511 5,101 10,362

250 2,419 3,799 5,966

100 877 1,801 3,695

50 487 1,048 2,254

20 250 501 1,005

10 117 244 508

AAL 73 138 262

Table 11.4 – Cyclone model confidence intervals

[Chart: Industry Cyclone PML ($m) against Return Period (Years, 10 to 1,000); series: 95th Percentile, Central Estimate, 5th Percentile; vertical scale $0m to $14,000m]

Figure 11.3 – Cyclone model confidence intervals

As previously mentioned, the total uncertainty is at least as large as the combination of the uncertainties presented here and may be underestimated due to conservatism in choice of the standard deviations.

Comparison with Extreme Value Statistical Model

The standard deviation of the combined uncertainties discussed above at the 20 year return period is approximately 100% of the median PML estimate for the cyclone peril alone (Table 11.4). This compares with the standard deviation of 56% calculated in Chapter 3 for a combined PML from all perils. In this case, the development of a more complex model, incorporating a wide range of different physical variables and assumptions, has led to an increased level of uncertainty around the central estimate, at least at the low return periods. The reader is reminded that extrapolating beyond the range of historical time series of insured losses, and assuming different underlying extreme value distributions, led to factor-of-10 differences in the 1000 year PML. Clearly it is always better to use data-driven statistics where these are available, but beyond this there is little choice but to resort to simulated 'experience'.

Another issue that emerges is the minimum number of variables required in the model. Given the inevitable uncertainty in fitting parameters, it is possible that attempts to improve the physical realism of the model may not be rewarded with lowered uncertainty about modelled outcomes. Therefore, consideration always needs to be given to the net effect of adding complexity to any model component. Complexity must not be confused with accuracy. This is especially important in catastrophe loss modelling, where model uncertainty increases the modelled PMLs; in this sense model uncertainty is treated as additional risk.

Contribution of Extreme Events to Total Risk

The average annual loss is calculated as the sum of insured losses from all events, weighted by the probability of each of those events. Figures 11.4 and 11.5 show the Average Annual Loss in dollars per $1 million of exposure for cyclones and earthquakes respectively, in order to identify regions of higher and lower insured risk.
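The average annual loss calculation is simply a rate-weighted sum over the event set; the events below are hypothetical.

```python
# Average annual loss as the rate-weighted sum of event losses.
# Hypothetical event set: (annual rate, insured loss in $m) pairs.
events = [
    (0.001, 8000.0),   # rare, severe
    (0.010, 900.0),
    (0.100, 120.0),
    (0.500, 15.0),     # frequent, mild
]

aal = sum(rate * loss for rate, loss in events)
print(aal)  # 36.5 ($m per year)
```

Dividing such a figure by the exposure at a location gives the per-$1-million rate mapped in Figures 11.4 and 11.5.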


Figure 11.4 – Cyclone Average Annual Loss per $1 million of exposure

It can be seen that the average annual tropical cyclone loss corresponds reasonably well with the 50 year ARI wind speeds plotted earlier in Figure 7.12.


Figure 11.5 – Earthquake Average Annual Loss per $1 million of exposure

Figure 11.6 shows the Average Annual Loss from earthquakes in each postcode per $million of exposure for the south-eastern region of Australia.

Figure 11.6 – Earthquake Average Annual Loss per $million of exposure


It can be seen that the average annual loss per unit of exposure corresponds reasonably well with the PGA having a 10% chance of exceedance in 50 years (Figure 8.3).

The average annual loss amount represents the probabilistic weighting of the contribution of all modelled events to the risk at a certain location. However, although there are certain geographic locations that have relatively higher natural perils risk, the PMLs at the higher return periods are largely driven through the concentration of insured risks in the major cities.

The events from the simulation database causing more than $1 billion of industry loss were examined separately. Figure 11.7 shows the Average Annual Loss in each postcode per $1 million of exposure from events with over $1 billion of total industry loss.

Figure 11.7 – Earthquake Average Annual PMLs for events over $1 billion

The concentrations of loss in this case are a function of both the pure earthquake risk and the propensity of events near the major cities to cause much larger industry losses.

This has important implications for cost allocation of natural peril risk. In order to charge an equitable premium to policyholders, or to equitably spread the cost in an industry fund or pool, the allocated cost to each location needs to be a combination of the pure average annual loss at that location plus a charge for the share of any additional costs incurred as a result of the concentration risk.


In the case of insurance companies, who would often have a reinsurance programme in place covering events over a certain threshold, the reinsurance premium needs to be allocated equitably among the risks that have led to the reinsurance being required. The geophysical techniques developed in this thesis can be used to accurately determine the appropriate premium allocation down to the individual risk level.
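One possible sketch of such an allocation distributes the premium in proportion to each location's expected contribution to losses ceded to a single excess-of-loss layer. All figures, postcodes and layer terms below are hypothetical, and a real allocation would run over the full event set.

```python
# Allocate a reinsurance premium across locations in proportion to each
# location's expected loss ceded to the layer. Figures are hypothetical.

LAYER_ATTACH, LAYER_LIMIT, PREMIUM = 1000.0, 4000.0, 120.0  # $m

# Per-location event contributions:
# (annual rate, loss at this location, total industry event loss), all $m.
events = {
    "postcode_2000": [(0.002, 600.0, 2500.0), (0.010, 150.0, 800.0)],
    "postcode_4870": [(0.004, 300.0, 1800.0)],
}

def ceded(total):
    # Loss ceded to the excess-of-loss layer for a given total event loss.
    return max(0.0, min(total - LAYER_ATTACH, LAYER_LIMIT))

# Expected ceded loss attributable to each location: the ceded amount of
# each event is shared in proportion to the location's part of that event.
expected_ceded = {
    loc: sum(rate * ceded(total) * (loc_loss / total) for rate, loc_loss, total in evs)
    for loc, evs in events.items()
}
total_ceded = sum(expected_ceded.values())
allocation = {loc: PREMIUM * v / total_ceded for loc, v in expected_ceded.items()}
print(allocation)
```

Events that never pierce the layer (such as the $800m event above) attract no share of the premium, so locations whose losses arise mainly from small events are not charged for reinsurance they do not drive.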

It should be noted, however, that the results shown above would be expected to be highly sensitive to the modelling assumptions, more so than the overall model results, due to the uncertainty in the tail of the distributions that lead to individual large event losses. Therefore, any financial decisions made as a result of such analyses need to be undertaken prudently and with a full understanding of any limitations in the model.


Chapter 12 - Discussion

Research Aims

Key findings

The main objective of this thesis is to determine the risks of total insured event losses from all natural hazards in Australia. In undertaking this task, the following methodology was adopted:

1. Develop a database representative of Australian property exposure

2. Develop a tropical cyclone model for Australia and use it to determine industry loss distributions

3. Implement a published model for earthquakes in Australia and determine industry loss distributions

4. Estimate distributions of potential losses from perils other than cyclone and earthquake using Extreme Value statistics

5. Compare the modelling of natural perils and terrorism

6. Integrate the individual natural peril exposure to obtain a multi-hazard, national industry risk measure.

The primary quantitative determination from the research is the list of industry PMLs shown in Table 11.1, which indicates an annual probability of 1 in 250 of a natural peril industry event of $10.5 billion. Although there have not been any events of this magnitude in the more than two centuries since colonial settlement of Australia, growing population and increased concentrations of exposure in areas of higher hazard make this scenario possible.

Research value of thesis

Previous studies in certain of the areas covered by this thesis have been referred to in the description of the development of each of the model components. However, none of these studies has attempted the task addressed here, that is, to develop national multi-peril PMLs.


In addition, certain technical issues for a given peril across all of Australia have been investigated in previous studies, such as earthquake PGA levels for given return periods, but none of these types of studies have also incorporated a determination of financial loss.

Moreover, while various commercially available models exist for certain Australian perils, notably earthquake, tropical cyclone and hail, use of these tools has been confined to the analysis of particular insurance company portfolios for reinsurance pricing, and has considered neither the exposure of the entire market nor all perils combined.

As far as the author is aware, this is the first time an attempt has been made to determine a loss distribution for all significant perils for all of Australia.

Importance of Industry Loss Estimates

Although individual insurance companies are undertaking the approaches outlined in this thesis for the analysis of risks to their portfolios due to specific perils where models are available, as outlined by Froot69, there are additional benefits to the determination of industry losses for a specific country. In particular it is useful to understand how the loss distribution of insured perils for Australia compares to similar measurements around the world. It is the relativity or decorrelation between hazards in Australia and the rest of the world that drives the diversification benefit to international reinsurers when they add Australian risks to their global portfolios.

If Australia’s level of overall risk is insignificant compared to the risk to a reinsurer due to concentrations of reinsured exposure to natural perils in areas such as Florida (Hurricane), California (Earthquake) or Tokyo (Earthquake), then the additional premium margin charged for the risk to the reinsurer’s capital will be reduced. This lower risk is passed on to Australian insurers which in turn should lead to lowered premium rates for policyholders.

The fact that any diversification benefit from catastrophe reinsurance would make financial transactions with Australian insurers attractive to international reinsurers leads to the potential for enhanced relationships in other lines of business, such as the statutory classes of insurance, Workers Compensation and Compulsory Third Party motor. Again, the benefits of diversification lead to reduced costs that should be passed down to the original policyholders.

69 Froot, K.A., 1999, Introduction of editorial in "The Financing of Catastrophe Risk", The University of Chicago Press, Ltd. London

Having an understanding of the level of risk can also help regulators measure the overall level of sufficiency of capital in the insurance industry, as noted by Kleindorfer and Kunreuther70.

Further Uses of the Models

Direct Insurance and Reinsurance Purposes

The analysis undertaken in this thesis has focused on industry losses, but could as easily have been performed on the exposure database of an individual insurer or group of insurance companies. As outlined in Chapter 1, there are a number of reasons why an insurance company might wish to determine its PML at certain return periods. These include but are not limited to:

• Concentration management to ensure compliance with APRA Capital Adequacy Guidelines;

• Determination of appropriate levels of reinsurance purchase, in order to maintain consistency with industry practice, beyond any regulatory requirements;

• Management of expectation of Rating Agency benchmark scores, since the major rating agencies (Standard & Poor’s and A.M. Best) often use catastrophe model PMLs as a measure of risk; and

• General risk management of the business, ensuring that large natural disasters will not adversely affect profits beyond a tolerable level as determined by the modelling results, loss mitigation practices and risk transfer.

Beyond determination of the PML levels, insurance companies can also use the results of catastrophe modelling analysis to determine:

• Technical risk rates for reinsurance, allowing for expected losses ceded to reinsurers;

70 Kleindorfer, P.R., and Kunreuther, H.C., 1999, “Challenges Facing the Insurance Industry in Managing Catastrophe Risks”, from “The Financing of Catastrophe Risk”, The University of Chicago Press, Ltd. London

• Technical reinsurance pricing, incorporating the risk margin above the expected ceded losses;

• Direct risk premium rates, being the average annual loss for a given property or risk at a given location;

• Expense and reinsurance loaded premium rates, incorporating charges for the concentration risk, quantified as the cost of reinsurance or additional earnings required on capital at risk; and

• Appropriate allocation of existing reinsurance costs, allowing for equitable allocation varying by location or class as a function of contribution to each level of reinsurance purchase.

By performing an analysis of the sensitivity of PML results to small increases of exposure of various types in each location, an insurer can also:

• Manage exposure growth through determination of geographical regions where additional insurance policies on physical properties or assets contribute most to increases in PMLs or required reinsurance purchases; and

• Identify target areas for top line expansion, using the same technique but focusing on opportunities for exposure increase rather than limitation on growth.

Catastrophe Bonds

As a result of certain large international catastrophes in the late 1980s and early 1990s, the capacity of reinsurance cover for certain perils was reduced. To overcome this lack of capacity, financial instruments called Catastrophe Bonds were developed, whereby capital market investors assume the financial risk of natural disasters, with the financial return conditional on the non-occurrence of specified natural peril events.

For pricing these Catastrophe Bonds, rating agencies have used a variety of catastrophe models to estimate the likelihood of losses of each level against given portfolios of risks to which the bond is tied. Bonds with different levels of financial security, and therefore price, may be developed by modifying the selection of risk level or return period at which the trigger point for the bond occurs.
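The trigger mechanics described above can be illustrated with a small calculation. The sketch below is not from the thesis: the exceedance curve is a purely hypothetical power law, loosely anchored to the 100-year and 250-year combined PMLs reported elsewhere in this thesis, and the tranche attachment and exhaustion points are assumptions.

```python
def tranche_expected_loss(attachment, exhaustion, exceedance_prob):
    """Approximate a catastrophe bond tranche's expected loss (as a
    fraction of its limit) by integrating the loss exceedance curve
    between attachment and exhaustion with the trapezoidal rule.

    exceedance_prob: function mapping an industry loss level ($m)
    to its annual probability of exceedance."""
    steps = 1000
    width = (exhaustion - attachment) / steps
    total = 0.0
    for i in range(steps):
        lo = attachment + i * width
        hi = lo + width
        total += 0.5 * (exceedance_prob(lo) + exceedance_prob(hi)) * width
    return total / (exhaustion - attachment)

# Hypothetical exceedance curve: P(Loss > x) as a simple power law
# calibrated so that the 100-year loss is $5,421m.
curve = lambda x: min(1.0, (5421.0 / x) ** 1.2 / 100.0)

# Tranche attaching near the 100-year loss, exhausting near the 250-year loss
el = tranche_expected_loss(5421.0, 10530.0, curve)
print(round(el, 4))
```

A lower attachment point raises the expected loss and therefore the coupon an investor would demand, which is exactly the trade-off between financial security and price described above.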


The results of this thesis could be used to initiate the development of a catastrophe bond for Australia covering the entire insurance market, for either all perils or a selected subset of perils, and could assist in the design of such a scheme.

As reinsurance capital in recent years has been more readily available, the annual number of new catastrophe bond capital market deals has not significantly increased. However, the concept may be extended to other uses as outlined below.

Social Planning

Catastrophe models can be used by governments for planning purposes, as the methodologies can be used to quantify losses beyond the insurance sector. For example, analyses of extreme hazard events could be extended to allow determination of infrastructure failure, including power lines, communications and transport. Catastrophe models can also be used for disaster planning: evacuation planning, physical loss estimation and disaster recovery. Beyond the physical damage occurring from individual events, business interruption and loss of use estimations may assist in planning the social recovery after such an event.

In terms of future projections, catastrophe models may be used for determination of levels of risk from each peril in given physical locations, therefore assisting in urban development planning.

Economic Planning

According to a 2005 report by Aon Re71, eight of the largest Australian insurance companies, making up approximately 68% of the market, purchased a combined total of $8 billion of catastrophe cover. Hence the total industry purchase can be estimated at around $12 billion ($8 billion / 0.68 ≈ $11.8 billion).

The modelling in this thesis has shown that a 250 year event in Australia could be of the order of $10.5 billion. Even if each of the insurance companies operating in Australia had reinsurance covering losses from this event, the flow on effects to the insurance industry would be felt beyond the duration of the actual physical event and cleanup.

Considering that the three greatest events in Australia were losses of approximately a quarter of this amount, a significant realignment in reinsurance pricing for Australia could be expected. This would inevitably lead to a re-evaluation of each of the models as reinsurers sought to fund the future uncertainty in their returns and, over time, to recover a proportion of the losses experienced from such an event. Increases in reinsurance costs would be passed down through the insurance companies to the policyholders.

71 Aon Re Australia Limited, January 2005, “Property Catastrophe Benchmarking Study – Jan 2005”

It is also possible that policyholders, through ownership of superannuation retirement accounts or personal investments, could hold interests in listed insurance companies. Large catastrophes have historically led to drops in insurance company share prices, which would cause additional financial loss to persons with such investments, over and above any direct losses to their properties and other assets.

Government Pooling of Australian Natural Perils Risk

Steps have already been undertaken by the Australian Government to develop a national strategy for management and mitigation of risk due to natural perils in Australia72. One such approach not yet formally considered could be the development of a Government Catastrophe Pool, operating in a similar manner to a Catastrophe Bond.

By using the output from an analysis such as has been developed in the course of this thesis, indicative levels of industry cover could be determined. Capital commitment to cover potential losses for the industry pool could be obtained internationally through capital markets, focusing on investment groups that would obtain diversification benefits from investment in the non-occurrence of natural perils in Australia. The combined industry pool would then pay a regular premium to investors in return for a guarantee to reimburse the pool for losses falling into the given industry loss range.

Industry pools have previously been established in many other countries, including Taiwan, France and the United States. The main aim of such a pool scheme would be to ensure the long term stability of the insurance component of the financial industry in Australia.

72 Emergency Management Australia, June 1999, “Final Report of Australia’s Coordination Committee for IDNDR (International Decade for Natural Disaster Reduction) 1990-2000”

Further Research

Additional research could be considered in the future development of each of the individual models in this thesis.

Cyclone

The cyclone model developed here incorporates an artificial catalogue of potential cyclone events, the determination of maximum wind speeds for each of those events, and thus the financial loss incurred from them. Future enhancements to the cyclone model could focus on derivation of more accurate estimates of model uncertainty.

Other improvements could include:

• Allowing for cyclogenesis and non-Markov movement of cyclones over the ocean and central pressure development along the track;

• Modelling of terrain and topography wind effects at a greater resolution than postcode and allowing for different wind directions;

• Use of a wider range of construction types based on more comprehensive insurance company data;

• Allowance for flooding from cyclonic events;

• Allowance for multi-decadal variations in cyclone frequency; and

• Refinements of damage functions.

Earthquake

The earthquake model presented in this thesis is based on the implementation of a previously published methodology. Improvements to the earthquake model could include:

• More detailed validation of the Gaull et al event model against actual Australian events;

• Use of Spectral Acceleration instead of Peak Ground Acceleration; and

• Determination and construction of Australian-specific damage functions.

Other Perils

The statistical approach used in this thesis can be improved in the future by adding further events as they occur and re-analysing the fit based on the new data.

In addition to the ICA disaster listing used here, the work undertaken by Risk Frontiers and published as the PerilAUS database could also be reviewed in order to obtain greater statistical confidence in the model results.

Further, losses from industry events of less than $10 million should be reviewed, as the rough estimates developed here were based on only nine years of data. There is also reasonable doubt as to whether events under $10 million have been consistently included in the ICA database even for these last nine years.

Other improvements in the approach taken could include:

• Incorporating changes in wealth and building area increases in addition to modifications to population and inflation;

• Allowing for the physical upper limits of events; and

• Performing the extreme value analysis separately by each of the four perils.
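The indexation underlying this statistical approach, adjusting each historical loss for inflation and population change before fitting, can be sketched as follows. The CPI and population figures in the example are purely illustrative, not values from the ICA database, and the wealth and building-area refinements suggested above are ignored.

```python
def normalise_loss(original_loss_m, cpi_then, cpi_now, pop_then, pop_now):
    """Index a historical insured loss ($m) for inflation and
    population growth, producing a loss in today's dollars against
    today's exposure base."""
    inflation_factor = cpi_now / cpi_then
    population_factor = pop_now / pop_then
    return original_loss_m * inflation_factor * population_factor

# Hypothetical figures: a $200m loss in 1974; CPI index 14 then, 80 now;
# affected-region population 0.5m then, 1.1m now.
indexed = normalise_loss(200.0, 14.0, 80.0, 0.5, 1.1)
print(round(indexed, 1))
```

The indexed losses, rather than the raw ones, form the sample to which the extreme value distributions are fitted.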

Further Research

Further work on each of the perils mentioned above could lead to improved industry PML estimates. However, as has already been mentioned, added complexity in model development will not of itself necessarily lead to reduced model uncertainty. Therefore, the addition of any new model features needs to be considered carefully in the light of the trade-off between improvement in accuracy and changes in the total model uncertainty.


Chapter 13 - Summary and Conclusions

Key Findings

The key outcome of this thesis is the estimation of industry PMLs. Table 11.1, reproduced as Table 13.1 below, summarises the PML results for cyclone, earthquake, other perils and for all perils combined, for the Australian property exposure. Results are shown in millions of dollars and incorporate demand surge.

Return Period Cyclone Earthquake Other Perils Combined
(years) ($m) ($m) ($m) ($m)
10,000 35,488 150,875 9,870 150,875
5,000 23,291 104,632 8,780 104,632
2,000 17,908 58,515 7,195 58,515
1,000 11,945 42,226 5,953 42,551
500 6,983 16,353 4,742 19,808
250 4,699 7,639 3,622 10,530
100 2,164 2,884 2,360 5,421
50 1,161 1,448 1,607 3,234
20 522 446 881 1,632
10 247 150 514 899
Average Annual Loss 157 220 349 726

Table 13.1 – Natural Peril PMLs Incorporating Demand Surge ($m)
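A PML at return period T is, in effect, the (1 − 1/T) quantile of the simulated annual loss distribution. The sketch below illustrates this relationship; the heavy-tailed loss model used to generate the sample is purely illustrative and is not one of the models of this thesis.

```python
import random

def pml(annual_losses, return_period):
    """PML at return period T: the (1 - 1/T) empirical quantile of the
    simulated annual loss distribution."""
    ranked = sorted(annual_losses)
    index = int(len(ranked) * (1.0 - 1.0 / return_period))
    return ranked[min(index, len(ranked) - 1)]

random.seed(1)
# 100,000 simulated annual losses from a hypothetical heavy-tailed
# (Pareto-type) model, in $m
years = [random.paretovariate(1.5) * 100.0 for _ in range(100_000)]

for t in (10, 100, 250):
    print(t, round(pml(years, t), 1))
```

Combined-peril PMLs are obtained the same way, with each simulated year's losses from the individual perils summed before the quantile is taken; note that a combined PML at a given return period is therefore not the sum of the per-peril PMLs.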

The major accomplishments of this thesis are:

• Statistical models based purely on extrapolation of actual historical events can be used to estimate PMLs at short return periods, but not at longer return periods, nor for perils with few recorded events. PMLs at longer return periods are difficult to estimate with any certainty owing to uncertainty in the shape of the underlying probability density functions and therefore in the choice of fitted tail distribution. For longer return period estimates, geophysical simulation models should be used where available.


• A detailed tropical cyclone model for the Australian region has been built that simulates a database of potential events with estimates of their event probability.

• Models for wind speed and damage have been used to estimate losses from each simulated cyclone event and for determination of the probabilistic distribution of cyclone event losses.

• The implementation of an already published model for earthquake hazard has been used to determine the distribution of potential losses from earthquakes in Australia.

• Losses from historical bushfires, floods, hail and severe storm events have been indexed to allow for the time value of money (inflation) and population growth in order to estimate loss distributions at lower return periods, and have been combined with some plausible assumptions about longer return period hail and flood events to achieve more complete loss distributions for these perils.

• The thesis has demonstrated how terrorism can be usefully modelled to determine the potential for loss, but estimates of probabilistic loss distributions cannot be made due to the inability to predict point-in-time or annual frequencies of either target or attack type.

• Multi peril loss distributions can be determined by combining results from different natural perils based on different models of differing complexity.

• The many forms of output from the models developed in this thesis have a variety of beneficial uses, but the many uncertainties introduced in the probabilistic modelling of the extreme events due to natural perils need to be considered.


Appendix A – ICA Large Event Database

Selected Inflated to Loss in Dec 2003 Original Dec 96 2003 Exposure and Index Month Year EVCode Event City State Dollars Dollars Dollars Dollars 1 Feb 1967 B Bushfires Hobart 2 Tas 14 101 120 195 2 June 1967 F Rain and Hail Brisbane 23 Qld Approx 5 36 43 99 3 Oct 1968 E Earthquake 1.5 12 14 36 4 Jan 1970 C T. Cyclone ‘Ada’ 12 79 94 225 5 Aug 1970 F Flooding Approx 5 31 37 224 6 Feb 1971 F Flooding Gippsland 2 Vic Over 2 12 14 18 7 Dec 1971 C Cyclone ‘Althea’ Townsville2 QLD Over 25 147 175 316 8 Feb 1972 C T. Cyclone ‘Daisy’ 2 12 14 28 9 Mar 1973 C Cyclone ‘Madge’ Qld NT, WA 30 150 178 641 10 Jan/Feb 1974 F T. Cyclone ‘Wanda’ 68 328 390 707 11 Mar 1974 C T. Cyclone ‘Zoe’ NSW Qld 2 12 14 25 12 Apr 1974 F Flooding Sydney 2 NSW 20 98 116 179 13 May 1974 F Flooding 42024 31 14 May 1974 S Wind & Hail Sydney 2 NSW 20 98 116 179 15 Dec 1974 C T. Cyclone ‘Tracy’ Darwin 2 NT 200 837 994 2339 16 March 1975 F Flooding Sydney 2 NSW 15 63 75 115 17 Dec 1975 C Cyclone ‘Joan’ 20 74 88 119 18 Jan 1976 H Hailstorm Toowoomba 2 Qld 12 49 58 82 19 Feb 1976 C Cyclone ‘Beth’ 3 12 14 30 20 Nov 1976 H Hailstorm 40 131 156 240 21 Dec 1976 C T. Cyclone ‘Ted’ 15 49 58 192 22 Jan 1977 S Thunderstorms 15 49 58 89 23 Feb 1977 S Storm Tongala, Echuca VIC 4 13 15 22 24 Feb 1977 B Fires Western District Vic 9 30 36 36 25 March 1977 F Floods 62327 43 26 Feb 1978 S Storms Sydney 2 NSW 12 44 52 80 27 Mar 1978 S Storms North Coast NSW 4 15 18 24 28 Apr 1978 C T. Cyclone ‘Alby’ 13 39 46 88 29 June 1978 S Wind and Floods 6 21 25 38 30 Mar 1979 C T. Cyclone ‘Hazel’ 15 41 49 91 31 June 1979 E Earthquake Cadoux WA 3.5 12 14 40 32 Nov 1979 H Hailstorm 10 24 29 37 33 Jan 1980 C T. Cyclone ‘Amy’ 2.7 12 14 15 34 Feb 1980 C T. Cyclone ‘Dean’ WA 2.5 11 13 17 35 Feb 1980 B Bushfires Adelaide Hills SA 13 34 40 51 36 Dec 1980 C T. 
Cyclones Brisbane2 Qld 7.5 17 20 38 37 Dec 1980 S Storm Brighton2 Qld 15 36 43 43 38 Feb 1981 F Floods Dalby and storms Qld 20 49 58 67 39 Nov 1982 S Storm Melbourne District Vic 9 19 23 31 40 Feb 1983 B Bushfires VIC 138 255 303 411 41 Feb 1983 B Bushfires SA 38 69 82 100 42 March 1984 C Cyclone ‘Kathy’ 5 12 14 20 43 Nov 1984 F Floods 80 132 157 199 44 Dec 1984 B Bushfires 25 45 53 68 45 Jan 1985 H Hailstorm Brisbane Qld 180 299 355 596 46 Sept 1985 H Hailstorms Melbourne Vic 10 17 20 27 47 Jan 1986 H Hailstorm Orange NSW 25 41 49 52 48 Jan 1986 C T. Cyclone ‘Winifred’ 40 65 77 223 49 Aug 1986 S Storms and floods 35 53 63 90 50 Oct 1986 H Hailstorm Western Suburbs NSW 104 161 191 200 51 Dec 1986 S Storm Adelaide SA 10 15 18 24 52 Feb 1987 B Bushfires 71214 16 53 Jul 1987 S Storm & Floods 2 3 4 5 54 Nov/Dec 1987 F Rain and Floods 8 12 14 19 55 Apr 1988 F Floods Alice Springs NT 10 14 17 19 56 Apr 1988 F Floods Sydney NSW 25 36 43 56 57 May 1988 C T. Cyclone ‘Herbie’ Carnarvon WA 20 30 36 38 58 Sept 1988 S Storms widespread WA 8 12 14 14 59 Nov 1988 F Rainstorms 11 15 18 23 60 Feb 1989 F Rainstorms Melbourne Vic 17 24 29 36 61 Apr 1989 C T. Cyclone ‘Aivu’ 26 35 42 48


Selected Inflated to Loss in Dec 2003 Original Dec 96 2003 Exposure and Index Month Year EVCode Event City State Dollars Dollars Dollars Dollars 78 Feb 1992 S Storm Sydney NSW 118 140 162 79 Feb 1992 S Storm & Floods Perth WA 4 5 6 7 80 Sep/Oct 1993 F Flood Benalla/Shepparton VIC 12 14 15 81 Dec 1993 S Storms Melbourne VIC 12 14 16 82 Jan 1994 B Bushfires NSW 59 70 80 83 May 1994 S Windstorms Perth WA 37 44 48 84 Aug 1994 E Earthquake Cessnock NSW 37 44 45 85 Nov 1994 S Windstorms NSW 29 34 37 86 Feb 1995 C T. Kalgoolie WA 11 13 13 87 Nov/Dec 1995 H Hail Storms S.E. QLD 40 48 53 88 Dec 1995 S Storms Hunter Valley NSW 10 12 12 89 Jan/Feb 1996 S Storms Sydney 14 17 18 90 Apr 1996 C T. Pannawonica WA 2 2 3 91 May 1996 F Floods S.E. QLD 31 37 41 92 Aug 1996 S Storms Sydney NSW 10 12 13 93 Sep 1996 H Hailstorm Armidale/Tamworth NSW 104 124 116 94 Nov 1996 F Floods Coffs Harbour NSW 20 24 26 95 Nov 1996 H Hailstorm Tamworth NSW 10 12 12 96 4-Dec-96 1996 S Storm Brisbane QLD 1 1 1 97 15-Dec-96 1996 H Hailstorm Singleton NSW 49 58 60 98 15-Jan-97 1997 B Bush Fires Ferny Creek & other areas 10 12 15 99 22-Mar-97 1997 C T. Cairns QLD 8 9 10 100 31-Mar-97 1997 H Hailstorm S.E. QLD 10 12 13 101 16-Nov-97 1997 S Storm Grafton NSW 5 6 6 102 2-Dec-97 1997 B Bush Fires Menai NSW 3 4 4 103 20-Dec-97 1997 S Storms Sydney NSW 40 48 52 104 5-Jan-98 1998 S Storms Nyngan NSW 12 14 14 105 11-Jan-98 1998 F Storms (T. Cyclone Sid) & Flood Townsville Thuringowa City QLD 71 85 94 106 26-Jan-98 1998 F Floods (ex. T. 
) Katherine Northern Territory NT 70 83 93 107 4-Feb-98 1998 S Storms Sydney NSW 12 14 15 108 9-Apr-98 1998 S Storm Sydney NSW 10 12 13 109 23-Jun-98 1998 S Storm Hunter NSW 12 14 15 110 24-Jun-98 1998 F Floods East Gippsland VIC 1.3 2 2 111 6-Aug-98 1998 S Storms Sydney NSW 10 12 13 112 17-Aug-98 1998 S Storms/Flooding Wollongong NSW 50 59 62 113 13-Oct-98 1998 S Windstorm QLD 23 27 30 114 14-Nov-98 1998 S Storm Brisbane QLD 7 8 9 115 16-Dec-98 1998 H Hailstorm Brisbane QLD 76 89 97 116 27-Dec-98 1998 F Rain/Flooding Melbourne 10 12 13 117 7-Feb-99 1999 S Storms SE & SW QLD 2 2 2 118 12-Feb-99 1999 C Cyclone Rona NTH QLD Innisfail 4 5 5 119 20-Mar-99 1999 F Floods Moora WA 4 5 5 120 22-Mar-99 1999 C Exmouth/Onslow 35 41 42 121 14-Apr-99 1999 H Hailstorm Sydney 1700 1994 2120 122 23-Oct-99 1999 S Storm Sydney 40 46 49 123 23-Oct-99 1999 S Storm/Flood Wollongong 5 6 6 124 27-Dec-99 1999 F Rainstorm/Flood Victoria 10 12 12 125 15-Jan-00 2000 F Flooding Perth WA 5 6 6 126 21-Feb-00 2000 S Storm & Flood Longreach/Central QLD 12 14 14 127 27-Feb-00 2000 C Far North QLD 11 13 13 128 15-Mar-00 2000 C Ex-Cyclone Steve North WA 5 6 6 129 2-Apr-00 2000 C Cyclone Nth Qld 15 17 18 130 18-Apr-00 2000 C Cyclone Nth WA 8.5 10 10 131 30-Sep-00 2000 S Storm, Wind Vic Peninsula VIC 2 2 2 132 15-Nov-00 2000 F Flood Mackay QLD 5 5 6 133 6-Jan-01 2001 S Severe Storm Dubbo NSW 15 16 17 134 16-Jan-01 2001 S Storm Sydney NSW 12 13 14 135 17-Jan-01 2001 S Storm Casino NSW 35 38 38 136 10-Mar-01 2001 S Storm & Flood NSW North Coast 25 27 28 137 9-Mar-01 2001 S Storm S/E QLD 37 40 42 138 18-Nov-01 2001 S Storm, Sydney & Port Stephens NSW 30 32 33


Appendix B – Statistical Distributions

Frequency (Discrete) Distributions

Each distribution is listed with its parameters, its probability mass function f(x), and its cumulative distribution function F(x).

Binomial (n, p):
f(x) = \binom{n}{x} p^x (1-p)^{n-x}
F(x) = \sum_{i=0}^{x} \binom{n}{i} p^i (1-p)^{n-i}

Negative Binomial (s, p):
f(x) = \binom{s+x-1}{x} p^s (1-p)^x
F(x) = \sum_{i=0}^{x} \binom{s+i-1}{i} p^s (1-p)^i

Poisson (\lambda):
f(x) = e^{-\lambda} \lambda^x / x!
F(x) = \sum_{i=0}^{x} e^{-\lambda} \lambda^i / i!

Severity (Continuous) Distributions

Each distribution is listed with its parameters, its probability density function f(x), and its cumulative distribution function F(x).

Beta (\alpha, \beta):
f(x) = x^{\alpha-1} (1-x)^{\beta-1} / B(\alpha, \beta), where B(\alpha, \beta) = \int_0^1 t^{\alpha-1} (1-t)^{\beta-1} \, dt
F(x): no closed form

Exponential (\beta):
f(x) = (1/\beta) \, e^{-x/\beta}
F(x) = 1 - e^{-x/\beta}

Frechet (\alpha, \beta):
f(x) = \alpha \beta^{\alpha} x^{-\alpha-1} e^{-(x/\beta)^{-\alpha}}
F(x) = e^{-(x/\beta)^{-\alpha}}

Gumbel (\alpha, \mu):
f(x) = \alpha \, e^{\alpha(x-\mu) - e^{\alpha(x-\mu)}}
F(x) = 1 - e^{-e^{\alpha(x-\mu)}}

Lognormal (\mu, \sigma):
f(x) = \frac{1}{x \sigma \sqrt{2\pi}} e^{-(\ln x - \mu)^2 / (2\sigma^2)}
F(x) = \Phi\!\left( \frac{\ln x - \mu}{\sigma} \right)

Normal (\mu, \sigma):
f(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{-(x - \mu)^2 / (2\sigma^2)}
F(x): no closed form

Pareto (\alpha, \beta):
f(x) = \alpha \beta^{\alpha} / x^{\alpha+1}
F(x) = 1 - (\beta / x)^{\alpha}

Von Mises (\mu, \kappa):
f(x) = \frac{e^{\kappa \cos(x - \mu)}}{2\pi I_0(\kappa)}, where I_0(\kappa) = \frac{1}{2\pi} \int_0^{2\pi} e^{\kappa \cos(x - \mu)} \, dx
F(x): no closed form

Weibull (\alpha, \beta):
f(x) = \alpha \beta^{-\alpha} x^{\alpha-1} e^{-(x/\beta)^{\alpha}}
F(x) = 1 - e^{-(x/\beta)^{\alpha}}
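Frequency and severity distributions such as those tabulated above are typically combined in a collective risk (frequency-severity) model: an annual event count is drawn from a frequency distribution and each event's loss from a severity distribution. The sketch below illustrates the mechanics with a Poisson frequency and lognormal severity; the parameter values are arbitrary.

```python
import math
import random

def simulate_annual_aggregate(n_years, freq_lambda, sev_mu, sev_sigma, seed=0):
    """Collective risk model: the annual event count is Poisson(freq_lambda)
    and each event loss is Lognormal(sev_mu, sev_sigma); the annual
    aggregate loss is the sum of that year's event losses."""
    rng = random.Random(seed)
    threshold = math.exp(-freq_lambda)
    totals = []
    for _ in range(n_years):
        # Poisson draw via Knuth's product method (adequate for small lambda)
        count, product = 0, rng.random()
        while product > threshold:
            count += 1
            product *= rng.random()
        totals.append(sum(rng.lognormvariate(sev_mu, sev_sigma) for _ in range(count)))
    return totals

losses = simulate_annual_aggregate(20_000, freq_lambda=2.0, sev_mu=3.0, sev_sigma=1.0)
mean_loss = sum(losses) / len(losses)
print(round(mean_loss, 1))
```

The theoretical mean of this model is freq_lambda × exp(sev_mu + sev_sigma²/2) ≈ 66.2, against which the simulated mean can be checked.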


Appendix C – Von Mises Parameter Estimation

The Von Mises function is a uni-modal distribution that can be used for modelling direction. The formula for the probability density function of the von Mises distribution is given in Appendix B above.

The two parameters that define the distribution are

μ the Mean Direction - the direction of the mean of the distribution; and

κ the Concentration Parameter - the level to which the individual values are concentrated at the mean.

\kappa can be estimated from the Mean Resultant Length \bar{R}, as outlined below.

Figure C.1 shows a sample of five cyclone eye movement directions measured within a specific region.

Figure C.1 – Sample cyclone eye movement directions

Each of the individual directions is transformed into a two-dimensional vector, each with equal lengths, as shown in Figure C.2. The order of sequence of the vectors is immaterial.


[Figure: the five unit vectors added head to tail, giving a resultant vector of length R at angle φ.]

Figure C.2 – Addition of unit vectors

For n individual unit vectors with directions \theta_i, i = 1, 2, \ldots, n, calculate

C = \sum_{i=1}^{n} \cos\theta_i

S = \sum_{i=1}^{n} \sin\theta_i

R = \sqrt{C^2 + S^2}

The estimate of \mu then satisfies

\cos\mu = C/R and \sin\mu = S/R

The estimate of the Mean Resultant Length is \bar{R} = R/n.

In this thesis, \kappa was estimated from \bar{R} according to the following approximation from Fisher (1993):

For \bar{R} < 0.53: \kappa = 2\bar{R} + \bar{R}^3 + 5\bar{R}^5/6

For 0.53 \le \bar{R} < 0.85: \kappa = -0.4 + 1.39\bar{R} + 0.43/(1 - \bar{R})

For \bar{R} \ge 0.85: \kappa = 1/(\bar{R}^3 - 4\bar{R}^2 + 3\bar{R})
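The estimation procedure above translates directly into code. The sketch below applies it to five illustrative direction samples (in radians); the sample values are assumptions for the example only.

```python
import math

def estimate_von_mises(directions_rad):
    """Estimate (mu, kappa) of a von Mises distribution from direction
    samples, using the mean resultant length and Fisher's (1993)
    piecewise approximation as set out in Appendix C."""
    n = len(directions_rad)
    c = sum(math.cos(t) for t in directions_rad)
    s = sum(math.sin(t) for t in directions_rad)
    mu = math.atan2(s, c)                # satisfies cos(mu)=C/R, sin(mu)=S/R
    rbar = math.sqrt(c * c + s * s) / n  # mean resultant length
    if rbar < 0.53:
        kappa = 2 * rbar + rbar ** 3 + 5 * rbar ** 5 / 6
    elif rbar < 0.85:
        kappa = -0.4 + 1.39 * rbar + 0.43 / (1 - rbar)
    else:
        kappa = 1 / (rbar ** 3 - 4 * rbar ** 2 + 3 * rbar)
    return mu, kappa

# Five tightly clustered cyclone movement directions (radians)
mu, kappa = estimate_von_mises([0.50, 0.55, 0.60, 0.52, 0.58])
print(round(mu, 2), round(kappa, 1))
```

Because the sample directions are tightly clustered, the mean resultant length is close to 1 and the concentration estimate is correspondingly large.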


Appendix D – Daneshvaran and Gardner Wind Model

The Daneshvaran and Gardner wind speed model is used in this thesis to calculate the maximum gust wind speed for each physical location for a time period over which a cyclone eye moves between snapshots at two locations, given the conditions and parameters of the cyclone at each snapshot.

Need for Model Approach

A cyclone track is defined by a series of snapshots, representing statistics of the cyclone at various points in time. These snapshots, determined for real events from meteorological recordings and from simulated events by model output, provide:

• Longitude of eye;

• Latitude of eye;

• Central pressure;

• Eye radius;

• Direction of eye movement; and

• Forward movement speed.

Previous models evaluate wind speeds at various risk locations at a given point in time. By generating the wind speeds at points along the cyclone track and taking the maximum, an estimate is made of the maximum wind speed that would be experienced at each location over the entire event.

However, in order to determine this maximum accurately, winds must be simulated at each of the intermediary intervals between time step snapshots. It is necessary to interpolate the eye parameters at a range of locations between each pair of snapshots at small intervals, say of 1 km, and then to make the calculations at each of these points. As cyclone eyes can move as fast as 50 km/h and snapshots are generally three hours apart, consecutive snapshots can be as much as 150 km apart. Accurate calculation of maximum wind speed would then require up to 150 interpolated wind speed estimates for each segment between each pair of snapshots.


The model proposed here provides an approach for calculating the maximum wind experienced across each such segment in a single calculation, rather than having to perform numerous calculations for each segment.

Proposed Approach

The determination of wind speed for each segment is made up of a combination of winds experienced prior to the starting point of the segment, the winds either side of the segment and the winds experienced beyond the second point in the segment. The three regions are shown in Figure D.1.

[Figure: Region A lies before the segment start point, Region B alongside the segment, and Region C beyond the segment end point, with the building location at perpendicular distance r from the track.]

Figure D.1 – Three regions for wind speed calculation

In the calculation of wind speed for a building or risk at a particular location, the formula varies for each of these three regions.

Let

r be the perpendicular distance in kilometres from the location of the building to the straight line between the start and end points of the cyclone track segment. r is positive on the right side of the track and negative on the left.

d be the distance in kilometres from the segment start point to the point where the perpendicular line intersects the track. d is positive beyond the segment start point and negative prior to that point.

j be the length of the segment in kilometres.
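Given the segment endpoints and the building location in planar coordinates (km), r, d and j can be computed by vector projection. The sketch below assumes the coordinates have already been projected from latitude/longitude to a plane, a step omitted here.

```python
import math

def track_offsets(start, end, site):
    """Return (r, d, j) for a straight track segment.
    r: perpendicular distance from the site to the track line,
       positive to the right of the direction of travel;
    d: signed along-track distance from the segment start point to
       the foot of the perpendicular (negative prior to the start);
    j: segment length. All in the same planar units (e.g. km)."""
    sx, sy = start
    ex, ey = end
    px, py = site
    tx, ty = ex - sx, ey - sy
    j = math.hypot(tx, ty)
    ux, uy = tx / j, ty / j            # unit vector along the track
    wx, wy = px - sx, py - sy          # vector from segment start to site
    d = wx * ux + wy * uy              # along-track projection
    r = wx * uy - wy * ux              # cross product: positive right of track
    return r, d, j

# Track heading due north from (0, 0) to (0, 100); site 30 km east,
# 40 km along the track
r, d, j = track_offsets((0, 0), (0, 100), (30, 40))
print(r, d, j)
```

With these three quantities, each building falls into region A (d < 0), region B (0 ≤ d ≤ j) or region C (d > j) for the wind speed formulas that follow.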

The model assumes that each of the parameters of the cyclone varies linearly along the segment between its two endpoints. The parameters at point 2 are interpolated between those at points 0 and 1 using d and j.

Let also

r0 , r1 , r2 be the cyclone eye radii in kilometres at the start point, end point and point of intersection of the perpendicular line with the linear segment respectively.

Δp0 , Δp1 , Δp2 be the central pressure differentials (difference between central and ambient pressures) at the start point, end point and point of intersection of the perpendicular line with the linear segment respectively.

v0 , v1 , v2 be the cyclone eye forward speed in metres per second at the start point, end point and point of intersection of the perpendicular line with the linear segment respectively.

Lat0 and Lat1 be the cyclone eye latitude in decimal degrees at the start and end points of the linear segment respectively.

In region A, the formula for maximum wind speed (W) is

W = Envelope[r_0, \Delta p_0, v_0, r, Lat_0] \times e^{-0.3 |d| / r_0}

In region B, the formula for maximum wind speed is

W = Envelope[r_2, \Delta p_2, v_2, r, Lat_0]

In region C, the formula for maximum wind speed is

W = Envelope[r_1, \Delta p_1, v_1, r, Lat_1] \times e^{-0.3 |d - j| / r_1}

Using R in place of the eye radius and dropping the subscripts from the other variables, the formula for the wind envelope, Envelope[R, \Delta p, v, r, Lat], is as follows.

If r \ge R (i.e. on the right side of the track):

Envelope = (\alpha_{R1} \Delta p - \alpha_{R2} R f + \alpha_{R3} v + \alpha_{R4}) \times e^{\beta_{R1} (r/R)^2 + \beta_{R2} (r/R) + \beta_{R3}}

If r \le -R (i.e. on the left side of the track):

Envelope = (\alpha_{L1} \Delta p - \alpha_{L2} R f + \alpha_{L3} v + \alpha_{L4}) \times e^{\beta_{L1} (r/R)^2 + \beta_{L2} (r/R) + \beta_{L3}}

If -R < r < R (i.e. within the eye), the envelope is interpolated linearly between the two sides:

Envelope = \frac{Envelope[R, \Delta p, v, R, Lat] + Envelope[R, \Delta p, v, -R, Lat]}{2} + \frac{Envelope[R, \Delta p, v, R, Lat] - Envelope[R, \Delta p, v, -R, Lat]}{2} \times \frac{r}{R}

The coefficients αLi, αRi, βLi, βRi, and f were developed through non-linear regression analysis using the Shapiro-based wind-field model73 in storm simulations and are latitude-dependent, allowing for the variation of the wind field as it moves away from the equator.
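The three-region logic can be sketched as follows. Since the fitted regression coefficients are not reproduced in this appendix, the envelope is treated as a supplied callable, and the stand-in envelope used in the example is purely illustrative.

```python
import math

def max_gust_for_segment(d, j, r, p0, p1, envelope):
    """Maximum gust at a site for one track segment, using the
    three-region decomposition of Appendix D.
    p0, p1: dicts with keys 'radius', 'dp', 'speed', 'lat' giving the
    eye parameters at the segment start and end; envelope: the fitted
    wind-envelope function Envelope[R, dp, v, r, Lat]."""
    if d < 0:     # Region A: before the segment start point
        w = envelope(p0['radius'], p0['dp'], p0['speed'], r, p0['lat'])
        return w * math.exp(-0.3 * abs(d) / p0['radius'])
    if d > j:     # Region C: beyond the segment end point
        w = envelope(p1['radius'], p1['dp'], p1['speed'], r, p1['lat'])
        return w * math.exp(-0.3 * abs(d - j) / p1['radius'])
    # Region B: alongside the segment; interpolate parameters linearly
    t = d / j
    r2 = p0['radius'] + t * (p1['radius'] - p0['radius'])
    dp2 = p0['dp'] + t * (p1['dp'] - p0['dp'])
    v2 = p0['speed'] + t * (p1['speed'] - p0['speed'])
    return envelope(r2, dp2, v2, r, p0['lat'])

# Illustrative stand-in envelope (NOT the fitted model): wind grows with
# pressure deficit and forward speed
stub = lambda R, dp, v, r, lat: 0.5 * dp + v
p = {'radius': 30.0, 'dp': 60.0, 'speed': 5.0, 'lat': -18.0}

w_b = max_gust_for_segment(50.0, 100.0, 10.0, p, p, stub)   # in region B
w_a = max_gust_for_segment(-30.0, 100.0, 10.0, p, p, stub)  # in region A
print(round(w_b, 1), round(w_a, 1))
```

Taking the maximum of this quantity over all segments of a track gives the event's maximum gust at the site in one pass, which is the source of the processing-speed benefit discussed below.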

In addition, the wind speeds are adjusted by a land friction factor that varies as function of the distance from the building or risk site to the nearest coastline.

Let

Dist be the distance from the coast in kilometres

73 Shapiro L.J., 1983, “The Asymmetric Boundary Layer Flow Under a Translating Hurricane”, Journal of the Atmospheric Sciences

W be the wind speed prior to friction adjustment

Adjusted wind speed = W × (1 − 0.01 × Dist) if Dist ≤ 10 km

Adjusted wind speed = W × (0.921 − 0.0026 × Dist) if 10 km < Dist ≤ 50 km

Adjusted wind speed = W × 0.79 if Dist > 50 km
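The friction adjustment is a simple piecewise function of distance from the coast; the wind speed values used in the example are arbitrary.

```python
def friction_adjusted(wind_speed, dist_km):
    """Land-friction adjustment of the envelope wind speed as a
    function of distance from the nearest coastline (Appendix D)."""
    if dist_km <= 10:
        return wind_speed * (1 - 0.01 * dist_km)
    if dist_km <= 50:
        return wind_speed * (0.921 - 0.0026 * dist_km)
    return wind_speed * 0.79

print(round(friction_adjusted(60.0, 5), 1))    # 5 km inland: prints 57.0
print(round(friction_adjusted(60.0, 100), 1))  # deep inland: prints 47.4
```

The three pieces join almost continuously (a factor of about 0.90 at 10 km and 0.79 at 50 km), so the adjusted speed decays smoothly with distance inland.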

The adjusted wind speed is multiplied by any terrain/topography adjustment to determine the gust wind speed to use with the damage functions described in this thesis.

Features of this Approach

The major benefit of this approach over previously available methodologies is an improvement in computer processing speed of up to 150 times, as outlined above. When performing analyses of large insurance portfolios, this processing time improvement is significant.

The main compromise in this approach is that the direction of maximum winds is not provided as an output. However, as the wind damage functions in this thesis do not incorporate wind direction, this limitation does not compromise the model.

Considering the accuracy of simulated results from this new wind model versus actual recordings, outlined in Chapter 7, and the improvements in processing time, this new approach was adopted for the analysis in this thesis.


Appendix E – References

Actuarial Standards Board, June 2000, "Actuarial Standard of Practice No. 38 - Using Models Outside the Actuary's Area of Expertise (Property and Casualty)", Casualty Actuarial Society, Doc. No. 071

Allan R., Lindesay J., Parker D., 1996, "El Niño Southern Oscillation & Climatic Variability", CSIRO

Aon Re Australia Limited, 2006, "Lessons from Cyclone Larry", Unpublished research

Aon Re Australia Limited, January 2005, "Property Catastrophe Benchmarking Study - Jan 2005", Aon Re Australia Limited

APRA Guidance Note GGN 110.5, July 2002, "Concentration Risk Capital Charge", Australian Prudential Regulation Authority

APRA Prudential Standard GPS 110, July 2002 (Amended January 2005), "Capital Adequacy for General Insurers", Australian Prudential Regulation Authority

Baker, W.E., 1973, "Explosions in Air", University of Texas Press

Biggs, John M., 1964, "Introduction to Structural Dynamics", McGraw-Hill Companies

Bister M. and Emanuel K., 1998, "Hurricane Climatological Potential Intensity Maps and Tables", Massachusetts Institute of Technology, available at http://www-paoc.mit.edu/~emanuel/pcmin/climo.html

Blong, R.J., Leigh, R., Hunter, L. and Chen, K., 2001, "Hail and Flood Hazards - Modelling to Understand the Risk", Aon Re Hazards & Capital Risk Management Conference Series, edited by Britton, N.R. and Oliver, J., 25-39

Bureau of Transport and Regional Economics, 2001, "Economic Costs of Natural Disasters in Australia", Commonwealth of Australia

Clark, K. M., 1986, "A Formal Approach to Catastrophe Assessment and Management", Casualty Actuarial Society Proceedings. Volume LXXIII, Part 2, No.140, 24 pages


Coch N. K., 1998, "Hurricane Hazards & Damage Patterns in the United States", Applied Insurance Research (AIR) 1998 Client Conference, Colorado, USA

Croddy, Eric, 2002, "Chemical and Biological Warfare", Springer-Verlag

Department of the [US] Army, 1986, "Explosives and Demolitions", Field Manual 5-25, Headquarters, US Army

Dixit, A.K. and Pindyck, R.S., 1994, "Investment under uncertainty", Princeton University Press, NJ

Dunnigan, James F., 1993, "How to Make War", Third Edition, ISBN: 006009012X

Dvorak, V.F., 1975, "Tropical Cyclone Intensity Analysis and Forecasting from Satellite Imagery", Monthly Weather Review, Vol. 103, pp. 420--430

Emergency Management Australia, June 1999, "Final Report of Australia's Coordination Committee for IDNDR (International Decade for Natural Disaster Reduction) 1990-2000"

Federal Bureau of Investigation, 1999, "Terrorism in the United States", The U.S. Department of Justice

Fisher N.I., 1993, "Statistical Analysis of Circular Data", Cambridge University Press, Cambridge

Florida Commission on Hurricane Loss Projection Methodology website located at http://www.sbafla.com/methodology/

Friedman, D.G., 1975, "Computer Simulation in Natural Hazard Assessment". Monograph NSF-RA-E-75-002. Institute of Behavioral Science, University of Colorado, Boulder, 192 pages

Froot, K.A., 1999, Introduction of editorial in "The Financing of Catastrophe Risk", The University of Chicago Press, Ltd. London

Gaull, B., Michael-Leiba, M.O. and Rynn, J.A.W., 1990, "Probabilistic Earthquake Risk Maps of Australia", Aust. J. Earth Sciences, 37, 169-187

Geological Survey of Canada, 1995, "Generalized Geology of the World and Linked Databases", Open File 2915d


Gomes, L., and Vickery, B.J., 1976, "Tropical Cyclone Gust Speeds Along the North Australian Coast", I.E. Aust, Civil Engg. Trans., Vol. CE18, No. 2, pp. 40-48

Gumbel, Emil J., 1954, "Statistical Theory of Extreme Values and Some Practical Applications", National Bureau of Standards, Applied Mathematics Series, Number 33, 12 February.

Halton, J.H., 1960, "On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals", Numerische Mathematik, Vol. 2, pp. 84-90

Hammersley, J.M. and Handscomb, D.C., 1964, "Monte Carlo Methods", Methuen & Co Ltd

Harper, B. A., 1999, "Numerical Modelling of Extreme Tropical Cyclone Winds", Journal of Wind Engineering and Industrial Aerodynamics 83

Hart, D.G., Buchanan, R.A. and Howe, B.A., 1996 (revised 2001), "Actuarial Practice of General Insurance", Institute of Actuaries of Australia

Helmer, Olaf, March 1967, "Analysis of the Future: The Delphi Method", P-3558, Rand Corporation

Henderson-Sellers, A., Zhang, H., Berz, G., Emanuel, K., Gray, W., Landsea, C., Holland, G., Lighthill, J., Shieh, S.L., Webster, P. and McGuffie, K., 1997, "Tropical Cyclones and Global Climate Change: A Post-IPCC Assessment", American Meteorological Society

Holland, G.J., 1997, "The Maximum Intensity of Tropical Cyclones", Bureau of Meteorology Research Centre, Melbourne, Australia

Holland, G.J., 1980, "An Analytical Model of the Wind and Pressure Profiles in Hurricanes", Monthly Weather Review, 108, 1212-1218

Insurance Council of Australia, October 2002, "Report on Non-Insurance/Under-Insurance in the Home and Small Business Portfolio", MP008-1610

Insurance Disaster Research Organisation, 1996, "Major Disasters Since June 1967", Available on the internet at http://www.idro.com.au/disaster_list

International Lithosphere Program, 1999, "Global Seismic Hazard Assessment Map", Annali di Geofisica


Kagan, Robert, 2002, "Power and Weakness", Policy Review No.113, Hoover Institution, Stanford University

Kleindorfer, P.R., and Kunreuther, H.C., 1999, "Challenges Facing the Insurance Industry in Managing Catastrophe Risks", from "The Financing of Catastrophe Risk", The University of Chicago Press, Ltd. London

Leicester, R.H., Bubb, C.T.J., Dorman, C. and Beresford, F.D., 1979, "An assessment of potential cyclone damage to dwellings in Australia", Proc. 5th Int. Conf. on Wind Engineering, J.E. Cermak, ed., Pergamon, New York, 23-36

Leigh, R., 2003, "Estimating Sydney PMLs - Which is the most important natural hazard?", Risk Frontiers quarterly newsletter, June 2003 Volume 2, Issue 4

Martin, G.S., and Bubb, C.T.J., 1976, "Discussion of Gomes and Vickery (1976)". I.E. Aust, Civil Engg. Trans., Vol. CE18, No. 2, pp. 48-49

McAneney, K.J., 2005, "Australian Bushfire: Quantifying and Pricing the Risk to Residential Properties", Risk Frontiers

Morgan, Byron J.T., 1984, "Elements of Simulation", Mathematical Institute, University of Kent, Canterbury, United Kingdom

Munich Re, 1998, "World Map of Natural Hazards", Munich Re

Oliver, Stephen E., and Georgiou, Peter N., 1993, "Windstorm PMLs - Queensland", Bureau of Meteorology Special Services Unit

Panofsky, H.A. and Dutton, J.A., 1984, "Atmospheric turbulence: models and methods for engineering applications", John Wiley & Sons, NY.

Phillips, N. A., 2000, "An Explication of the Coriolis Effect", Bulletin of the American Meteorological Society: Vol. 81, No. 2, pp. 299-303

Piddington, Henry, 1855, "The Sailor's Horn-book for the Law of Storms: being a practical exposition of the theory of the law of storms and its uses to mariners of all classes in all parts of the world", London, Williams & Norgate, 360p.


Redfield, William C., Jan-Mar 1830, "Remarks on the prevailing storms of the Atlantic Coast of the North American states.", American Journal of Science and Arts (Series 2), vol. 20, no. 1, pp. 17-43.

Saunders, J.J., 2001, "The History of the Mongol Conquests", University of Pennsylvania Press, ISBN 0-8122-1766-7

Sciaudone, J.C., Feuerborn, D., Rao, G. and Daneshvaran, M.T.S., 1997, "Development of Objective Wind Damage Functions to Predict Wind Damage to Low-Rise Structures", Eighth U.S. National Conference on Wind Engineering, Baltimore, MD

Secretary of State, 2000, "Patterns of Global Terrorism", U.S. Department of State

Shapiro, L.J., 1983, "The Asymmetric Boundary Layer Flow Under a Translating Hurricane", Journal of the Atmospheric Sciences

Standards Association of Australia, 1989, "Minimum design loads on structures (known as the SAA Loading Code) - Wind loads", AS1170.2.-1989

Standards Association of Australia, 1993, "Minimum Design Loads on Structures, Part 4: Earthquake Loads", AS1170.4-1993

The Diagram Group, 1990, "Weapons", Updated Edition, ISBN: 0312039506, St Martins Press

Tryggvason, B.V., 1979, "Computer Simulation of Tropical Cyclone Wind Effects for Australia", James Cook University of North Queensland

University of Hawaii - Institute for Astronomy web page located at http://www.solar.ifa.hawaii.edu/Tropical/tropical.html

University of Western Australia, Isoseismal Maps, from webpage located at http://www.seismicity.segs.uwa.edu.au/page/45009

US Navy, 1974, "US Navy Seal Combat Manual", Special Warfare Training Handbook, Lancer

Vickery, P. and Twisdale, L.A., 1995, "Windfield & Filling Models for Hurricane Wind Speed Predictions", Journal of Structural Engineering, Vol. 121, No. 11, pp. 1691-1699


von Mises, R., 1918, "Über die 'Ganzzahligkeit' der Atomgewichte und verwandte Fragen" (On the "integrality" of atomic weights and related questions), Physikalische Zeitschrift, 19

Wolter, K., 1987, "The Southern Oscillation in surface circulation and climate over the tropical Atlantic, Eastern Pacific, and Indian Oceans as captured by cluster analysis". J. Climate Appl. Meteor., 26, 540-558.

END OF DOCUMENT