
Past and Future Evolution of Catastrophe Models
Karen Clark (Karen Clark & Company)

Catastrophe models were developed in the late 1980s to help insurers and reinsurers better understand and estimate potential losses from natural hazards, such as hurricanes and earthquakes. Over the past few decades, model usage has grown considerably throughout the global insurance industry, and the models are relied upon for many risk management decisions. In short, the models have become very important tools for risk management.

Now, new open loss modeling platforms are being developed to advance the current state of practice. The first generation catastrophe models are closed “black box” applications, proprietary to the model vendors. Open models make the key assumptions driving insurers’ loss estimates visible and give insurers control over those assumptions. Market demand is driving the development of new tools because today’s model users require transparency on the model components and more consistency in risk management information. Insurers are also expected to develop their own proprietary views of risk and not simply rely on the output from third-party models.

The following reviews the traditional catastrophe models and their limitations and shows how advanced open risk models are addressing these issues. It also illustrates how other users, such as governments of developing countries, can benefit from this new technology.

Overview of catastrophe models

A catastrophe model is a robust and structured framework for assessing the risk of extreme events. For every peril region, the models have the same four components, as shown in figure 1.

[FIGURE 1. Catastrophe Model Components]

The event catalog defines the frequency and physical severity of events by geographical region. It is typically generated using random simulation techniques, in which the underlying parameter distributions are based on historical data and/or expert judgment. The reliability of the event catalog varies considerably across peril regions, depending on the quantity and quality of historical data.

For example, enough historical data exist on Florida hurricanes to credibly estimate the return periods of hurricanes of varying severity there. In contrast, nine hurricanes have made landfall in the Northeast since 1900, none of them exceeding Category 3 intensity. Model estimates of the frequency of Category 4 hurricanes in this region are, therefore, based on subjective judgments that can vary significantly between models and even between model updates from the same vendor. Because scientists don’t know the “right” assumptions, they can develop very different opinions, and they can change their minds.

For each event in the catalog, the models estimate the intensity at affected locations using the event parameters the catalog provides, site information, and scientific formulas developed by the wider scientific community. The catastrophe models incorporate published literature and data from the public domain, usually obtained from government agencies, universities, and other scientific organizations. Scientists have collected and analyzed intensity data from past events to develop these formulas, but, again, the amount and quality of the intensity data vary significantly across perils and regions.

The models are most widely used to estimate property damage, and their damage functions attempt to account for type of building construction, occupancy, and other characteristics, depending on the peril. The functions are used to estimate, for different intensity levels, the damage that will be experienced by different types of exposures. The damage functions are expressed as the ratio of the repair costs to the building replacement value. Because extreme events are rare and very little claims data exist, most of these functions are based on engineering judgment. The financial module then applies policy and reinsurance terms to the “ground-up” losses to estimate the gross and net losses to insurers.

The input for the model consists of detailed information on insured properties, and the model output is the exceedance probability (EP) curve, which shows the estimated probabilities of exceeding various loss amounts. Unfortunately, the false precision of the model output conveys a level of certainty that does not exist with respect to catastrophe loss estimates. Model users have come to a better understanding of this uncertainty as successive model updates have produced widely different numbers.

[FIGURE 2. Representative EP Curve]
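To make the four components and the EP-curve output concrete, the following is a minimal Monte Carlo sketch in Python. Everything in it, the Poisson event frequency, the lognormal severity, the attenuation, damage, and financial-terms functions, and the portfolio values, is an illustrative assumption rather than an excerpt from any vendor model.

```python
import numpy as np

# Illustrative sketch of the four catastrophe model components.
# All rates, distributions, and functions are hypothetical assumptions.

rng = np.random.default_rng(42)

N_YEARS = 50_000          # simulated years
ANNUAL_RATE = 0.6         # assumed average number of events per year in the peril region

# Hypothetical portfolio: replacement values and distances from typical event tracks
replacement_values = np.array([400_000.0, 750_000.0, 1_200_000.0])
site_distances_km = np.array([10.0, 40.0, 90.0])

def intensity_at_sites(event_severity, distances_km):
    """Component 2 (hazard/intensity): toy attenuation of severity with distance."""
    return event_severity * np.exp(-distances_km / 100.0)

def damage_ratio(intensity):
    """Component 3 (damage functions): repair cost as a fraction of replacement value."""
    return np.clip((intensity / 250.0) ** 2, 0.0, 1.0)

def apply_financial_terms(ground_up, deductible=10_000.0, limit=1_000_000.0):
    """Component 4 (financial module): per-risk deductible and limit."""
    return np.minimum(np.maximum(ground_up - deductible, 0.0), limit)

# Component 1 (event catalog): simulate event counts per year and a severity per event.
annual_losses = np.zeros(N_YEARS)
for year, n_events in enumerate(rng.poisson(ANNUAL_RATE, N_YEARS)):
    for _ in range(n_events):
        severity = rng.lognormal(mean=4.5, sigma=0.4)   # e.g., a peak wind speed proxy
        ground_up = damage_ratio(intensity_at_sites(severity, site_distances_km)) * replacement_values
        annual_losses[year] += apply_financial_terms(ground_up).sum()

# Model output: points on the exceedance probability (EP) curve.
for loss_level in np.percentile(annual_losses, [90.0, 99.0, 99.6]):
    print(f"P(annual loss > {loss_level:,.0f}) = {(annual_losses > loss_level).mean():.4f}")
```

In practice each component would be far richer, but the structure is the same: an event catalog drives intensity calculations, intensities drive damage ratios, financial terms convert ground-up damage to insured losses, and the simulated annual losses are summarized as an EP curve.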
Over time, the fundamental structure of the models has not changed, but faster computers have enabled the models to simulate more events and capture more detailed data on exposures and geophysical factors, such as soil type and elevation. But more events and greater detail do not mean the models now produce accurate numbers.

Model users sometimes confuse complexity with accuracy, but the catastrophe models will never be accurate due to the paucity of scientific data. In fact, since little or no reliable data underlie many of the model assumptions, adding more variables (that is, adding complexity) can increase the chances of human error and amplify model volatility without improving the loss estimates or adding any real value to the model.

Challenges and gaps with first generation models

While the traditional catastrophe models have dramatically improved the insurance industry’s understanding and management of catastrophe risk, the first generation models have certain limitations and can be advanced. Insurers face five primary challenges with the current vendor models:

1. Volatile loss estimates. Model volatility is largely driven by modeling companies changing their model assumptions, not by new science. This volatility is highly disruptive to risk management strategies and is not fully warranted, given the current state of scientific knowledge.

2. Lack of control over (and therefore low confidence in) model assumptions. Because the first generation models are “secret,” insurers can never be certain about the assumptions driving their loss estimates, and they have no control over those assumptions. Loss estimates can change dramatically with model updates. Regulators put pressure on insurers to adopt the latest models, even if insurers cannot fully validate them and are not comfortable with the new estimates. In the current paradigm, insurers may feel compelled to use information they have little understanding of or confidence in.

3. Inefficient model “validation” processes. Because insurers cannot actually see the model components and calculations, they cannot readily determine how their loss estimates are derived and how different sets of assumptions can affect them. Insurers have to develop costly and inefficient processes around the models in attempts to infer what is inside them, using contrived analyses of model output. The process starts all over again with model updates.

4. Significantly increasing costs of third-party model license fees. Insurers now pay millions of dollars a year in third-party license fees. Given a duopoly of model vendors, costs continue to escalate without commensurate increases in value.

5. Exceedance probability (EP) curve metrics, such as value at risk (VaR) and tail value at risk (TVaR), do not provide enough visibility on large loss potential. While probabilistic output based on thousands of randomly generated events is valuable information, it doesn’t give insurers a complete picture of their loss potential nor provide the best information for monitoring and managing large loss potential. VaRs in particular are not operational, intuitive, or forward looking, and they don’t identify exposure concentrations that can lead to solvency-impairing losses (the sketch after this list illustrates how these metrics are computed from simulated annual losses).
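As a companion to challenge 5, the sketch below shows how VaR and TVaR are commonly computed from simulated annual losses. The Pareto-distributed losses and the var_tvar helper are hypothetical stand-ins for real model output, not any vendor’s implementation.

```python
import numpy as np

def var_tvar(annual_losses: np.ndarray, return_period: float) -> tuple[float, float]:
    """Illustrative EP-curve metrics from simulated annual losses.

    VaR at a given return period is the loss exceeded with probability 1/return_period;
    TVaR is the average of the annual losses at or beyond that threshold.
    """
    var = float(np.quantile(annual_losses, 1.0 - 1.0 / return_period))
    tail = annual_losses[annual_losses >= var]
    tvar = float(tail.mean()) if tail.size else var
    return var, tvar

# Hypothetical heavy-tailed annual losses standing in for real model output
# (for example, the annual_losses array from the earlier sketch).
rng = np.random.default_rng(7)
simulated_losses = rng.pareto(2.5, 50_000) * 1_000_000

for rp in (100, 250):
    var, tvar = var_tvar(simulated_losses, rp)
    print(f"{rp}-year VaR = {var:,.0f}   TVaR = {tvar:,.0f}")
```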
Addressing the challenges and advancing

Open loss modeling platforms enable insurers to:

›› See the model assumptions

›› Understand the full range of valid assumptions for each model component

›› Analyze how different credible assumptions affect their loss estimates

›› Select the appropriate assumptions for their risk management decisions

Open platforms start with reference models based on the same scientific data, formulas, and expertise as the traditional vendor models. The difference is that users can see clearly how this information is implemented in the model and can customize the model assumptions to reflect their specific portfolios of exposures.

The damage function component is an obvious area for customization. The vendors calibrate and “tune” the model damage functions utilizing the limited loss experience of a few insurers. This subset of insurers may not be representative of the entire market or the spectrum of property business, and even within it, each insurer has different insurance-to-value assumptions, policy conditions, and claims handling practices. This means damage to a specific property will result in a different claim and loss amount depending on which insurer underwrites it, and the model damage functions will be biased toward the data available to the model vendors.

Even if a modeler could correct for these biases, the damage functions in a traditional vendor model may be averaged over a small subset of companies and will not apply to any specific one. The traditional model vendors don’t allow insurers access to the damage functions to test this model component against their own claims experience. New open models empower insurers to do their own testing.
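One simple way an insurer might exercise this kind of control is to rescale reference damage ratios against the ratios implied by its own claims. The sketch below uses a single least-squares scaling factor purely for illustration; the calibrate_scaling_factor helper and the matched data are hypothetical and do not represent any particular open platform’s calibration method.

```python
import numpy as np

def calibrate_scaling_factor(reference_ratios: np.ndarray,
                             observed_ratios: np.ndarray) -> float:
    """Least-squares scaling factor that aligns reference damage ratios with
    the damage ratios implied by an insurer's own claims."""
    denom = float(np.dot(reference_ratios, reference_ratios))
    return float(np.dot(reference_ratios, observed_ratios)) / denom if denom else 1.0

# Hypothetical matched records: reference (modeled) vs. claims-implied damage ratios
reference = np.array([0.02, 0.05, 0.10, 0.20, 0.35])
observed  = np.array([0.03, 0.06, 0.13, 0.24, 0.40])

scale = calibrate_scaling_factor(reference, observed)

def custom_damage_ratio(reference_ratio: float) -> float:
    """Insurer-specific damage function: scaled reference ratio, capped at total loss."""
    return float(np.clip(scale * reference_ratio, 0.0, 1.0))

print(f"Calibration factor: {scale:.2f}")
print(f"Customized damage ratio at a reference ratio of 0.10: {custom_damage_ratio(0.10):.3f}")
```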