Innovating to Reduce Risk
Written Contributions
This publication is based on input provided by the disaster risk community. The Global Facility for Disaster Reduction and Recovery (GFDRR) facilitated the development of the report, with funding from the UK Department for International Development.
The World Bank Group does not guarantee the accuracy of the data in this work. The boundaries, colors, denominations, and other information shown on any map in this work do not imply any judgment on the part of the World Bank concerning the legal status of any territory or the endorsement or acceptance of such boundaries.
Washington, D.C., June 2016
Editors: LFP Editorial Enterprises
Designed by Miki Fernández Rocca ([email protected]), Washington, D.C.
© 2016 by the International Bank for Reconstruction and Development/The World Bank
1818 H Street, N.W., Washington, D.C. 20433, U.S.A.
All rights reserved. Manufactured in the United States of America.
First printing: June 2016
Table of Contents
Preface  5

The Current State of Risk Information: Models & Platforms

Past and Future Evolution of Catastrophe Models  8
Karen Clark (Karen Clark & Company)

The Current State of Open Platforms and Software for Risk Modeling: Gaps, Data Issues, and Usability  13
James E. Daniell (Karlsruhe Institute of Technology)

Visions for the Future: Model Interoperability  17
Andy Hughes, John Rees (British Geological Survey)

Harnessing Innovation for Open Flood Risk Models and Data  21
Rob Lamb (JBA Trust, Lancaster University)

Toward Reducing Global Risk and Improving Resilience  24
Greg Holland, Mari Tye (National Center for Atmospheric Research)

Priorities for the Short and Medium Terms: Which Are Better?  29
Susan Loughlin (British Geological Survey)

Open Risk Data and Modeling Platform  32
Tracy Irvine (Oasis, Imperial College London, EIT Climate-KIC)

Visions for the Future: Multiple Platforms and the Need for Capacity Building  36
John Rees, Andy Hughes (British Geological Survey)

The Anatomy of a Next Generation Risk Platform  40
Deepak Badoni, Sanjay Patel (Eigen Risk)

Development of an Open Platform for Risk Modeling: Perspective of the GEM Foundation  44
John Schneider, Luna Guaschino, Nicole Keller, Vitor Silva, Carlos Villacis, Anselm Smolka (GEM Foundation)

Status of Risk Data/Modeling Platforms and the Gaps: Experiences from VHub and the Global Volcano Model  52
Greg Valentine (University of Buffalo)

Toward an Open Platform for Improving the Understanding of Risk in Developing Countries  54
Micha Werner, Hessel Winsemius, Laurens Bouwer, Joost Beckers, Ferdinand Diermanse (Deltares & UNESCO-IHE)

Oasis: The World’s Open Source Platform for Modeling Catastrophic Risk  58
Dickie Whitaker, Peter Taylor (Oasis Loss Modelling Framework Ltd)

Data

Open or Closed? How Can We Square Off the Commercial Imperative in a World of Open and Shared Data?  65
Justin Butler (Ambiental)

Understanding Disaster Risk Through Loss Data  70
Melanie Gall, Susan L. Cutter (University of South Carolina)

High-Resolution Elevation Data: A Necessary Foundation for Understanding Risk  74
Jonathan Griffin (Geoscience Australia); Hamzah Latief (Bandung Institute of Technology); Sven Harig (Alfred Wegener Institute); Widjo Kongko (Agency for Assessment and Application of Technology, Indonesia); Nick Horspool (GNS Science)

Data Challenges and Solutions for Natural Hazard Risk Tools  77
Nick Horspool (GNS Science); Kate Crowley (National Institute of Water and Atmospheric Research Ltd); Alan Kwok (Massey University)

The Importance of Consistent and Global Open Data  81
Charles Huyck (ImageCat Inc.)

Capacity Building

Australia-Indonesia Government-to-Government Risk Assessment Capacity Building  86
A. T. Jones, J. Griffin, D. Robinson, P. Cummins, C. Morgan (Geoscience Australia); S. Hidayati (Badan Geologi); I. Meilano (Institut Teknologi Bandung); J. Murjaya (Badan Meteorologi, Klimatologi, dan Geofisika)

Required Capacities to Improve the Production of, Access to, and Use of Risk Information in Disaster Risk Management  89
Sahar Safaie (UNISDR)

Understanding Assumptions, Limitations, and Results of Fully Probabilistic Risk Assessment Frameworks  93
Mario A. Salgado-Gálvez (CIMNE-International Centre for Numerical Methods in Engineering)

Building Capacity to Use Risk Information Routinely in Decision Making Across Scales  97
Emma Visman (King’s College London and VNG Consulting Ltd); Dominic Kniveton (University of Sussex)

Risk Communication

Visualizing Risk for Commercial, Humanitarian, and Development Applications  102
Richard J. Wall (University College London (UCL) Hazard Centre); Stephen J. Edwards (UCL Hazard Centre and Andean Risk & Resilience Institute for Sustainability & the Environment); Kate Crowley (Catholic Agency for Overseas Development and NIWA); Brad Weir (Aon Benfield); Christopher R.J. Kilburn (UCL Hazard Centre)

Improving Risk Information Impacts via the Public Sphere and Critical “Soft” Infrastructure Investments  108
Mark Harvey (Resurgence); Lisa Robinson (BBC Media Action)

Perceiving Risks: Science and Religion at the Crossroads  112
Ahmad Arif (Kompas); Irina Rafliana (LIPI, Indonesian Institute of Sciences)

Collaborators
Preface
These written contributions are part of the Solving the Puzzle: Where to Invest to Understand Risk report. The report provides a community perspective on priorities for future collaboration and investment in the development and use of disaster risk information for developing countries. The focus is on high-impact activities that will promote the creation and use of risk-related data, catastrophe risk models, and platforms, and that will improve and facilitate the understanding and communication of risk assessment results.
The intended outcome of the report is twofold: first, that by speaking with one voice, the community can encourage additional investment in the areas highlighted as priorities; and second, that the consensus embodied in the report will initiate the formation of the strong coalition of partners whose active collaboration is needed to deliver the recommendations.
The written contributions are part of the input received from the disaster risk community in response to open calls to the Understanding Risk Community and to direct solicitation. The papers offer analysis of the challenges in disaster risk models and platforms, data, capacity building, and risk communication.
The Current State of Risk Information: Models & Platforms
Past and Future Evolution of Catastrophe Models
Karen Clark (Karen Clark & Company)
Catastrophe models were developed in the late 1980s to help insurers and reinsurers better understand and estimate potential losses from natural hazards, such as hurricanes and earthquakes. Over the past few decades, model usage has grown considerably throughout the global insurance industry, and the models are relied upon for many risk management decisions.

In short, the models have become very important tools for risk management. Now, new open loss modeling platforms are being developed to advance the current state of practice. The first generation catastrophe models are closed “black box” applications, proprietary to the model vendors. Open models make the key assumptions driving insurers’ loss estimates more visible and give insurers control over those assumptions.

Market demand is driving the development of new tools because today’s model users require transparency on the model components and more consistency in risk management information. Insurers are also expected to develop their own proprietary views of risk and not simply rely on the output from third-party models. The following reviews the traditional catastrophe models and their limitations and shows how advanced open risk models are addressing these issues. It also illustrates how other users, such as governments of developing countries, can benefit from this new technology.

Overview of catastrophe models

A catastrophe model is a robust and structured framework for assessing the risk of extreme events. For every peril region, the models have the same four components, as shown in figure 1.

FIGURE 1. Catastrophe Model Components

The event catalog defines the frequency and physical severity of events by geographical region. It is typically generated using random simulation techniques, in which the underlying parameter distributions are based on historical data and/or expert judgment. The reliability of the event catalog varies considerably across peril regions, depending on the quantity and quality of historical data.

For example, enough historical data exist on Florida hurricanes to credibly estimate the return periods of hurricanes of varying severity there. In contrast, nine hurricanes have made landfall in the Northeast since 1900, none of them exceeding Category 3 intensity. Model estimates of the frequency of Category 4 hurricanes in this region are, therefore, based on subjective judgments that can vary significantly between models and even between model updates from the same vendor. Because scientists don’t know the “right” assumptions, they can develop very different opinions, and they can change their minds.
For each event in the catalog, the models estimate the intensity at affected locations using the event parameters the catalog provides, site information, and scientific formulas developed by the wider scientific community. The catastrophe models incorporate published literature and data from the public domain—usually obtained from government agencies, universities, and other scientific organizations. Scientists have collected and analyzed intensity data from past events to develop these formulas, but, again, the amount and quality of the intensity data vary significantly across perils and regions.
Over time, the fundamental structure of the models has not changed, but faster computers have enabled the models to simulate more events and capture more detailed data on exposures and geophysical factors, such as soil type and elevation. But more events and greater detail do not mean the models now produce accurate numbers.
Model users sometimes confuse complexity with accuracy, but the catastrophe models will never be accurate due to the paucity of scientific data. In fact, since little or no reliable data underlie many of the model assumptions, adding more variables (that is, adding complexity) can increase the chances of human error and amplify model volatility without improving the loss estimates or adding any real value to the model.
The models are most widely used to estimate property damage, and their damage functions attempt to account for type of building construction, occupancy, and other characteristics, depending on the peril. The functions are used to estimate, for different intensity levels, the damage that will be experienced by different types of exposures.
The damage functions are expressed as the ratio of the repair costs to the building replacement value. Because extreme events are rare and very little claims data exist, most of these functions are based on engineering judgment. The financial module applies policy and reinsurance terms to the “ground-up” losses to estimate gross and net losses to insurers.
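To make these four components concrete, the minimal sketch below steps a single simulated event through an event catalog, an intensity calculation, damage functions, and a financial module for a two-location portfolio. It is purely illustrative: the distributions, wind speeds, vulnerability factors, and policy terms are invented assumptions, not the formulas of any actual catastrophe model.

import random

# 1. Event catalog: frequency and physical severity by region (illustrative distributions).
def sample_event(rng):
    return {
        "region": rng.choice(["Florida", "Gulf", "Northeast"]),
        "category": rng.choices([1, 2, 3, 4, 5], weights=[40, 30, 18, 9, 3])[0],
    }

# 2. Intensity: wind speed at each site, given event parameters and site information.
def site_intensity(event, site):
    base_wind = {1: 75, 2: 95, 3: 115, 4: 135, 5: 160}[event["category"]]  # mph, assumed
    # Assumed decay with distance from landfall; real formulas come from published science.
    return base_wind * max(0.0, 1.0 - 0.01 * site["miles_from_landfall"])

# 3. Damage function: ratio of repair cost to building replacement value.
def damage_ratio(wind_mph, construction):
    vulnerability = {"wood_frame": 1.0, "masonry": 0.6}[construction]  # assumed relativities
    return min(1.0, vulnerability * max(0.0, (wind_mph - 50.0) / 150.0) ** 2)

# 4. Financial module: apply policy terms to the ground-up loss.
def insured_loss(ground_up, deductible, limit):
    return min(max(ground_up - deductible, 0.0), limit)

rng = random.Random(42)
portfolio = [
    {"replacement_value": 400_000, "construction": "wood_frame",
     "miles_from_landfall": 10, "deductible": 10_000, "limit": 350_000},
    {"replacement_value": 900_000, "construction": "masonry",
     "miles_from_landfall": 40, "deductible": 25_000, "limit": 800_000},
]

event = sample_event(rng)
gross_loss = 0.0
for site in portfolio:
    wind = site_intensity(event, site)
    ground_up = damage_ratio(wind, site["construction"]) * site["replacement_value"]
    gross_loss += insured_loss(ground_up, site["deductible"], site["limit"])
print(event, round(gross_loss))

Repeating this calculation over every event in the catalog produces the distribution of losses that the model’s output is built from.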
The input for the model consists of detailed information on insured properties, and the model output is the exceedance probability (EP) curve, which shows the estimated probabilities of exceeding various loss amounts (figure 2). Unfortunately, the false precision of the model output conveys a level of certainty that does not exist with respect to catastrophe loss estimates. Model users have come to a better understanding of the uncertainty through model updates producing widely different numbers.

FIGURE 2. Representative EP Curve

Challenges and gaps with first generation models

While the traditional catastrophe models have dramatically improved the insurance industry’s understanding and management of catastrophe risk, the first generation models have certain limitations and can be advanced. Insurers face five primary challenges with the current vendor models:

1. Volatile loss estimates. Model volatility is largely driven by modeling companies’ changing the assumptions, not by new science. This volatility is highly disruptive to risk management strategies and is not fully warranted, given the current state of scientific knowledge.

2. Lack of control over (and therefore low confidence in) model assumptions. Because the first generation models are “secret,” insurers can never be certain about the assumptions driving their loss estimates, and they have no control over those assumptions. Loss estimates can change dramatically with model updates. Regulators put pressure on insurers to adopt the latest models, even if insurers cannot fully validate them and are not comfortable with the new estimates. In the current paradigm, insurers may feel compelled to use information they have little understanding of or confidence in.

3. Inefficient model “validation” processes. Because insurers cannot actually see the model components and calculations, they cannot readily determine how their loss estimates are derived and how different sets of assumptions can affect them. Insurers have to develop costly and inefficient processes around the models in attempts to infer what is inside them, using contrived analyses of model output. The process starts all over again with model updates.

4. Significantly increasing costs of third-party model license fees. Insurers now pay millions of dollars a year in third-party license fees. Given a duopoly of model vendors, costs continue to escalate without commensurate increases in value.

5. Exceedance probability (EP) curve metrics, such as value at risk (VaR) and tail value at risk (TVaR), do not provide enough visibility on large loss potential. While probabilistic output based on thousands of randomly generated events is valuable information, it doesn’t give insurers a complete picture of their loss potential nor provide the best information for monitoring and managing large loss potential. VaRs in particular are not operational, intuitive, or forward looking, and they don’t identify exposure concentrations that can lead to solvency-impairing losses. (A minimal computation of these EP-based metrics is sketched after this list.)
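For reference, the short sketch below shows how an EP curve value and the VaR and TVaR metrics taken from it are typically derived from a set of simulated annual losses. The simulated losses and parameters are invented for illustration and do not come from any vendor model.

import random

# Simulated annual portfolio losses in $ millions (illustrative lognormal assumption).
rng = random.Random(7)
annual_losses = [rng.lognormvariate(2.5, 1.2) for _ in range(10_000)]

def exceedance_probability(losses, threshold):
    # Fraction of simulated years whose loss exceeds the threshold: one point on the EP curve.
    return sum(loss > threshold for loss in losses) / len(losses)

def var(losses, return_period):
    # Loss exceeded, on average, once every `return_period` years (e.g., the "1-in-100" PML).
    ranked = sorted(losses)
    return ranked[int(len(ranked) * (1.0 - 1.0 / return_period))]

def tvar(losses, return_period):
    # Average loss across the simulated years that exceed the VaR threshold (tail average).
    threshold = var(losses, return_period)
    tail = [loss for loss in losses if loss > threshold]
    return sum(tail) / len(tail)

print("P(annual loss > 100M):", exceedance_probability(annual_losses, 100.0))
print("1-in-100 VaR (PML)   :", round(var(annual_losses, 100), 1))
print("1-in-100 TVaR        :", round(tvar(annual_losses, 100), 1))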
Addressing the challenges and advancing catastrophe modeling technology with open models

New open loss modeling platforms address the challenges presented by the traditional models and significantly advance the current state of practice. The structure of an open model is the same as that of a traditional vendor model; the advance is model assumptions that are visible and accessible to the users, which allows insurers to do the following:

›› See the model assumptions
›› Understand the full range of valid assumptions for each model component
›› Analyze how different credible assumptions affect their loss estimates
›› Select the appropriate assumptions for their risk management decisions

Open models do not eliminate model uncertainty, but they give insurers a much more efficient platform for understanding it and the ranges of credible assumptions for the model components. Instead of subjecting their risk management decisions to volatile third-party models, insurers can test different parameters and then decide on a consistent set of assumptions for their pricing, underwriting, and portfolio management decisions.

Open platforms start with reference models based on the same scientific data, formulas, and expertise as the traditional vendor models. The difference is users can see clearly how this information is implemented in the model and can customize the model assumptions to reflect their specific portfolios of exposures.

The damage function component is an obvious area for customization. The vendors calibrate and “tune” the model damage functions utilizing the limited loss experience of a few insurers. This subset of insurers may not be representative of the entire market or the spectrum of property business, and even within it, each insurer has different insurance-to-value assumptions, policy conditions, and claims handling practices. This means damage to a specific property will result in a different claim and loss amount depending on which insurer underwrites it, and the model damage functions will be biased to the data available to the model vendors.

Even if a modeler could correct for these biases, the damage functions in a traditional vendor model may be averaged over a small subset of companies and will not apply to any specific one. The traditional model vendors don’t allow insurers access to the damage functions to test this model component against their own claims experience. New open models empower insurers to do their own damage function calibration so the loss estimates reflect their actual experience better.
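As a simplified illustration of the kind of calibration an open platform makes possible, the sketch below scales an assumed reference damage curve to an insurer’s own matched claims. The reference ratios, claims records, and scaling rule are hypothetical.

# Calibrate a reference damage function against an insurer's own claims experience.
# Both the reference curve and the claims records below are illustrative assumptions.

reference_damage = {  # wind speed (mph) -> assumed mean damage ratio in the reference model
    80: 0.02, 100: 0.08, 120: 0.20, 140: 0.45,
}

claims = [  # insurer's matched claims: (wind at site, paid loss, replacement value)
    (100, 36_000, 400_000),
    (100, 20_000, 250_000),
    (120, 150_000, 600_000),
]

# Compare observed damage ratios with the reference curve at each intensity level...
observed = {}
for wind, paid, value in claims:
    observed.setdefault(wind, []).append(paid / value)

# ...and derive simple scaling factors the insurer can apply to its own view of risk.
calibrated = dict(reference_damage)
for wind, ratios in observed.items():
    factor = (sum(ratios) / len(ratios)) / reference_damage[wind]
    calibrated[wind] = reference_damage[wind] * factor

print(calibrated)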
The RiskInsight® open platform also offers new risk metrics called Characteristic Events (CEs). Insurers, rating agencies, and regulators have tended to focus on a few point estimates from the EP curves, specifically the so-called “1-in-100-year” and “1-in-250-year” probable maximum loss (PML) studies that drive many important risk management decisions, including reinsurance purchasing and capital allocation.

Apart from the uncertainty of these numbers, PMLs do not give insurers a complete picture of large loss potential. For example, insurers writing all along the Gulf of Mexico and Florida coastlines will likely have PMLs driven by Florida events because that’s where severe hurricanes and losses are most frequent. While managing the PML driven by these events, an insurer could be building up dangerous exposure concentrations in other areas along the coast. The traditional catastrophe models are not the right tools for monitoring exposure accumulations.

Exposure concentrations can be more effectively identified and managed using a scientific approach that is the flip side of the EP curve approach. In the CE methodology, the probabilities are defined by the hazard and the losses estimated for selected return-period events.

Figure 3 shows the losses for the 100-year hurricanes for a hypothetical insurer. The landfall points, spaced at ten-mile intervals along the coast, are shown on the x-axis, and the estimated losses, in millions, are shown on the y-axis. The dotted line shows the 100-year PML, also corresponding with the top of this company’s reinsurance program.

FIGURE 3. CE Profile

The CE chart clearly shows that, although the insurer has managed its Florida loss potential quite well, the company has built up an exposure concentration in the Gulf, and should the 100-year hurricane happen to make landfall there, this insurer will have a solvency-impairing loss well in excess of its PML and reinsurance protection.

Armed with this additional scientific information, insurers can enhance underwriting guidelines to reduce peak exposure concentrations. Because they represent specific events, the CEs provide operational risk metrics that are consistent over time. One important lesson of recent events, such as the Northridge Earthquake and Hurricane Katrina, is that insurers require multiple lines of sight on their catastrophe loss potential and cannot rely on the traditional catastrophe model output alone.
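The CE approach can be illustrated with a short sketch that estimates losses for a fixed return-period event at each landfall point and flags the segments where the loss would exceed the insurer’s PML and reinsurance protection. All exposure values, the loss function, and the PML figure below are invented for illustration.

# Characteristic Event (CE) profile: losses for a 100-year hurricane by landfall point.
# Landfall points, exposure values, and the loss function are illustrative assumptions.

landfall_points = list(range(0, 100, 10))           # mile markers along a stylized coastline
exposure_by_point = {mile: 50 + (120 if 30 <= mile <= 50 else 0)   # $M insured value nearby,
                     for mile in landfall_points}   # with a concentration in one "Gulf" segment

def ce_loss(mile, event_severity=0.8):
    # Assumed: the 100-year event damages a fixed share of the exposure near its landfall point.
    return event_severity * exposure_by_point[mile]

pml_100yr = 90.0  # $M, the insurer's modeled 1-in-100 PML (and top of its reinsurance program)

for mile in landfall_points:
    loss = ce_loss(mile)
    flag = "  <-- exceeds PML / reinsurance" if loss > pml_100yr else ""
    print(f"landfall mile {mile:3d}: CE loss {loss:6.1f} M{flag}")

Because each line of the output corresponds to a specific, repeatable event, the profile stays comparable from one reporting period to the next, which is what makes the CE metrics operational.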
How developing countries can benefit from open models

Because the catastrophe models were designed to estimate insured losses, vendor model coverage is more extensive for the more economically developed regions where insurance penetration is relatively high. Less investment has gone into model development for the developing countries.

Open models empower developing countries to build their own models using experts of their own choosing. Open models also enable countries to be more informed about the nature of the risk and how it can be managed and mitigated, rather than simply giving them highly uncertain loss estimates. As governments use open models, they will become more knowledgeable risk managers. Finally, open models are more cost effective and efficient than closed third-party models.

Until now, model users have not been able to interact with the model assumptions so as to understand fully the model components and the drivers of the loss estimates. Insurers have had to accept the vendor model output “as is” or make rough adjustments to the EP curves.
Catastrophe models are more appropriately used as tools to understand the types of events that could occur in different peril regions and the damages that could result from those events under different sets of credible assumptions. The more open a model is, the more informative and useful it is for all parties—private companies, governments, and academic institutions. Advanced open loss modeling platforms take the powerful invention of the catastrophe model to the next level of sophistication.