
Trimming the UCERF2 Hazard Logic Tree

Keith Porter,a) Edward Field,b) and Kevin Milnerc)

The Uniform California Earthquake Rupture Forecast version 2.0 (UCERF2) has 480 branches on its logic tree of 9 modeling decisions, often called epistemic uncertainties. In combination with 4 next-generation attenuation (NGA) relationships, UCERF2 has 1,920 branches. This study examines the sensitivity of a statewide risk metric to each epistemic uncertainty. The metric is expected annualized loss (EAL) to an estimated statewide portfolio of single-family woodframe dwellings. The results are depicted in a so-called tornado diagram. The uncertainties that matter most to this risk metric are the recurrence model, the aperiodicity under a Brownian-passage-time recurrence model, and the choice of ground-motion-prediction equation. Risk is much less sensitive to the other uncertainties. One implication is that a statewide risk calculation employing only 5 branches of the UCERF2 tree and 4 NGA relationships captures virtually all the uncertainty produced by the full complement of 1,920 branches, allowing for a 100x reduction in computational effort with no corresponding underestimation of uncertainty. Another implication is that efforts to reduce these three epistemic uncertainties might be the most cost-effective way of reducing the uncertainty in statewide risk that results from UCERF2 and ground-motion-prediction equations.

INTRODUCTION

Assessing which sources of hazard uncertainty matter. The Uniform California Earthquake Rupture Forecast 2 (UCERF 2) is a fully time-dependent hazard model developed under the sponsorship of the California Earthquake Authority and delivered in September 2007 (WGCEP 2007; Field et al. 2009). UCERF 2 contains 480 logic-tree branches reflecting choices among 9 modeling uncertainties or adjustable parameters in the earthquake rate model shown in Figure 1. For probabilistic seismic hazard analysis, it is also necessary to choose a ground-motion prediction equation (GMPE) and set its adjustable parameters, which can involve up to four additional branches, for a total of 1,920 hazard calculations for a single site.

a) SPA Risk LLC, 2501 Bellaire St, Denver CO 80207, (626) 233-9758, [email protected]
b) US Geological Survey, Golden CO
c) University of Southern California, Los Angeles CA

Figure 1. Branches of the UCERF logic tree that received non-zero weights (black numbers) in the final model calculations (WGCEP 2007).

For the reader unfamiliar with UCERF2, it may be valuable to briefly review the meaning of each branching point (each often called an epistemic uncertainty). In the following, quotations are drawn from WGCEP (2007).

• Fault models FM2.1 and FM2.2. As noted in the figure, the fault models specify the geometry of larger, more-active faults. According to the UCERF2 authors, “A fault model is a table of fault sections that collectively constitute a complete, viable representation of the known active faults in California.” The parameters of this representation are (1) section name, (2) fault trace, (3) average dip, (4) average upper seismogenic depth, (5) average lower seismogenic depth, (6) average long-term slip rate, (7) average aseismic slip factor, and (8) average rake. The authors define a new section only where one or more of these parameters are significantly different from those of a neighboring section along a fault. The major distinctions between FM2.1 and FM2.2 have to do with “how faults geometrically interact at depth,” especially among the north-dipping thrust faults of the Santa Barbara Channel area and western Transverse Ranges. See Figure 2a.

• Deformation models DM2.1-DM2.6. These refer to assigning a slip rate, an aseismic slip factor, and their uncertainties to each fault section. The UCERF2 preferred deformation models (DM2.1 and DM2.4) and the alternatives are consistent with slip-rate studies, geodetic data, and the overall Pacific-North America plate rate. The alternatives pertain to tradeoffs between slip on the San Jacinto and the southernmost San Andreas faults. See Figure 2b.

• Magnitude-area relationships. These give the average magnitude for a given rupture area, and also influence earthquake rates where constraints are applied to match fault slip rates. The UCERF2 authors point out that “The global database of rupture areas and magnitude determinations has significant spread, leaving room for alternative interpretations.” The two alternatives used in UCERF2 amount to a choice between a loglinear regression (Ellsworth-B) relating the logarithm of rupture area to magnitude, and a bi-loglinear regression (Hanks and Bakun 2008) whose second segment has a higher slope for larger rupture areas (A ≥ 537 km2, or M ≥ 6.7). See Figure 2c.

• Segmentation for type-A faults. Type-A faults are those that have paleoseismic constraints on the rate of large ruptures at one or more points on the fault. Segmentation imposes rupture boundaries—locations along a fault that are believed to prevent a rupture on one side from propagating to the other. In the segmented model, all viable ruptures are composed of one or more segments, and never just part of any segment. To account for the fact that this assumption might be wrong (or a bad approximation), an alternative unsegmented model was also established for the type-A faults. This alternative essentially involved treating them as type-B faults, where a magnitude-frequency distribution is assumed and ruptures that don’t extend the entire length of the fault are “floated” down the fault with a uniform probability of occurring anywhere. Figure 2d shows the type-A faults included in the UCERF model.

• Fault-slip rates within the segmented model on type-A faults. Two alternative models employ, on the one hand, an a-priori rate model “derived by a community consensus of paleoseismic and other geologic observations,” and on the other a moment-rate-balanced version of the model, which “modifies the earthquake rate to match the observed long-term slip-rate data…. These two models balance a consensus of geologic and seismologic expert opinion with strict adherence to specific observational data.” Figure 3 illustrates the alternative slip-rate models.

• Type-B earthquake rate models: connect more type-B faults? This refers to the model allowing nearby type-B faults to rupture together to produce larger earthquakes, especially the San Gregorio, Green Valley, Greenville, and Mt Diablo Thrust faults. Doing so helps to reduce an over-prediction of earthquake rates between M6.5 and M7 (referred to as “the bulge” by the UCERF2 authors, and shown in Figure 4 here).

• Type-B fault b-values. This branching point refers to the downward slope of the Gutenberg-Richter component of the magnitude-frequency relationship for type-B faults. Allowing an alternative b = 0.0 helps to reduce the bulge shown in Figure 4.

• Recurrence models and aperiodicity. The alternatives are a time-independent (Poisson) model, an empirical model that “includes a geographically variable estimate of California earthquake rate changes observed during the last 150 years,” and a Brownian passage time (BPT) model motivated by elastic-rebound theory. The BPT model involves computing the probability that each segment will rupture, conditioned on the date of the last event, and then mapping these probabilities onto the various possible ruptures according to the relative frequency of each (from the long-term rate model) and the probability that each segment will nucleate each event. The BPT model involves an uncertain parameter referred to as aperiodicity, which relates to how irregularly earthquakes occur on a given fault. The lower the value of aperiodicity, the nearer events occur to their expected recurrence time, i.e., the more they resemble the ticks of a metronome. Three possible values (0.3, 0.5, and 0.7) have been considered. Figure 5a illustrates the effect on rupture probability as a function of time (normalized by recurrence interval) for the Poisson versus BPT models.

• Ground-motion-prediction equation. Though not an element of UCERF2, GMPEs are required for a probabilistic seismic hazard analysis. Figure 5b illustrates the potential difference in Sa(1.0 sec, 5%) between two of the next-generation attenuation relationships, Chiou and Youngs (2008) and Abrahamson and Silva (2008), for a site with Vs30 = 280 m/sec and 500 m depth to the 1,000 m/sec shearwave velocity, given a M7 rupture. The same general trend holds for M6.5 and M7.5 ruptures and for higher Vs30 (say 760 m/sec), with AS2008 producing generally higher shaking than CY2008 for distances greater than about 15 km.

[Figure 2, panels (a)–(d), appears here; panel (c) plots magnitude (5–9) against rupture area (1–10,000 km2) for the Ellsworth-B and Hanks & Bakun relationships.]

Figure 2. Some of the epistemic uncertainties in UCERF2: (a) fault models 2.1 and 2.2 differ in the elements shown here; (b) deformation models differ in the distribution of slip rates (colors) between the San Jacinto (western, or left-hand of the two systems) and southern San Andreas (eastern, or right-hand of the two systems) faults; (c) magnitude-area relationships; and (d) maps showing type-A, -B, and -C sources. All but (c) are from WGCEP (2007).

Figure 3. Comparison of fault slip rates on Type-A segmented faults under a priori and moment-balanced models (WGCEP 2007).

Figure 4. Difference between observed (red) and modeled (blue and black) magnitude-frequency relationships. UCERF2 reduces but does not eliminate "the bulge," the discrepancy between observed and modeled rates of M6.5-7.0 events. Crosses represent 95% bounds on the observed rates (WGCEP 2007).

[Figure 5b plots mean Sa(1.0 sec, 5%), 0–1.00 g, against rupture distance, 0–100 km, for AS2008 and CY2008.]

Figure 5. (a) Time dependence and the BPT versus Poisson recurrence models (WGCEP 2007); (b) shows that Abrahamson and Silva (2008) tends to produce higher 1-sec, 5%-damped spectral acceleration response than does Chiou and Youngs (2008) for distances greater than about 10 km, given a M7 event, 5 km to top of rupture, Vs30 of 330 m/sec, and 500 m depth to 1 km/sec shearwave velocity.

MOTIVATION FOR AND METRIC OF SENSITIVITY TESTING

Each uncertainty has scientific relevance, and there is scientific value in resolving the uncertainties where practical so that we understand better how the earth works. In addition to the scientific value, resolving uncertainties can have practical societal value because of their impact on risk management and mitigation.

Not every uncertainty is equal in the socioeconomic and computational senses. They may have greatly differing impacts on estimates of hazard and risk at any given site or at the societal level. Other things being equal, to an insurer, greater uncertainty in risk means that higher losses are more likely. Insurers buy reinsurance based on large, rare events, often the loss with 1 chance of occurrence in 250 years. Greater uncertainty therefore means a higher 1/250-year loss, and a greater cost of reinsurance. Figure 6 illustrates the point with a loss exceedance curve (like a hazard curve for money) calculated with more or less uncertainty. The EAL values may be the same, but the 250-year losses are not. Thus, there can be a strong financial incentive to reduce uncertainty as cost-effectively as possible.

[Figure 6 plots exceedance probability (0.0001–1, log scale) against loss as a fraction of mean portfolio value (0.0001–1), with curves for more and less uncertainty and the 1/250-year loss marked.]

Figure 6. More uncertainty in risk translates to higher loss at low exceedance probabilities, which for an insurance company demands more reinsurance and more expense.

Resolving some uncertainties could also produce more-efficient mitigation efforts and public policy for new construction. At a more day-to-day, practical risk-management level, propagating each uncertainty through any given hazard or risk analysis carries a real computational cost, which can make thorough treatment prohibitively expensive, especially for large portfolios. If some uncertainty does not substantially contribute to uncertainty in the hazard or risk metric of interest, it can save computer time to ignore it and perform the calculations with only one of the alternatives, arbitrarily selected from among them.

The present work acknowledges but does not address the relative scientific importance of resolving the various epistemic uncertainties in UCERF2. Instead it explores their relative practical importance, by testing the sensitivity of a particular risk metric to each branching point.

There is a wide variety of potentially useful metrics of sensitivity, and a substantial literature on the general topic; we summarize only a few prior works here. Cornell and Vanmarcke (1969) examined the sensitivity of the PGA hazard curve to event magnitude. Youngs and Coppersmith (1985) examined the sensitivity of full seismic hazard curves at selected sites to variations in recurrence models and parameters that incorporate fault slip rates. Rabinowitz et al. (1998) explored the sensitivity of PGA at a single site to the epistemic uncertainties in the hazard logic tree; they also cite other sources that examined the sensitivity of other hazard measures and multiple sites. Recently, Akinci et al. (2009) examined the sensitivity of hazard maps of the Central Apennines of Italy to the choice of rupture-probability models and the aperiodicity parameter of a Brownian passage time model.

Like these prior authors, we could similarly examine the effect of the uncertainties on a uniform seismic hazard map at some recurrence frequency, such as the familiar MCE maps in ASCE 7 of 5%-damped spectral acceleration response at various periods (0, 0.2 sec, 1 sec, 3 sec, etc.) with 2% exceedance probability in 50 years (American Society of Civil Engineers 2010). One downside of examining differences in these maps is deciding whether and how to weight some locations more heavily than others. Another is that the maps only indirectly relate to initial construction costs and to post-earthquake repair costs, and therefore to the societally important economic impacts of the uncertainties.

Let us consider alternatives to examining hazard. One option would be the closely related MCER maps of risk in ASCE 7-10 that relate to designing for uniform collapse risk. Again, the upside is familiarity and immediate relevance to design; the challenges are, again, deciding which places matter more than others, and a relationship to repair costs that is still indirect, if less so.

A third option, employed here, is to examine the effect of the uncertainties on societal economic risk as parameterized through a scalar metric of repair cost for a relevant portfolio of assets. Grossi (2000) examined the sensitivity of EAL and a “worst-case loss” for a community (Oakland, CA) to the choice of which earthquake rupture forecast to use (USGS or a proprietary model), which of two ground-motion-prediction equations, and which of two soil maps (all one default NEHRP site class, or site class varying by location). Grossi (2000) also examined the effect of the choice of fragility models. Like Grossi (2000), we focus on the sensitivity of a societal risk metric to individual uncertainties in the hazard logic tree.

Calculating a risk metric for a particular portfolio allows us to deal with the challenge of weighting this location against that one. We consider a single scalar value rather than a map (essentially a 2-dimensional vector) of scalar values. Two notable downsides of this approach are that the sensitivity of two different portfolios to the uncertainties may differ significantly, and that this approach involves new uncertainties in vulnerability.

There remains the question of which risk metric to use. Life safety is usually considered of paramount importance, but in California, building codes have been so successful in protecting life that economic risk has emerged as a dominant consideration. Economic risk in earthquakes has long been a concern of insurers, who tend to consider two general metrics: loss to a portfolio of assets in a large, rare event, and the expected annualized loss to the portfolio. The former is a consideration for their solvency, i.e., the chance that they will go bankrupt, and for their decisions about purchasing reinsurance. They often use the loss with 0.4% annual probability of occurrence, for example, as the rare event to plan for. They use expected annualized loss (EAL) as the basis for setting insurance premiums. EAL is also used as a basis for designing risk-mitigation efforts, since it relates to the present value of the benefits of a risk-reduction effort. FEMA 366b (Federal Emergency Management Agency 2008), for example, offers estimates of societal EAL as an indicator of relative regional seismic risk.

We have chosen to use expected annualized loss, which here means the average ground-up repair cost per year to the portfolio in question. We have also chosen to examine a portfolio of the estimated quantity of single-family woodframe dwellings in California. We made this choice partly for convenience and partly because of its likely relevance to a principal sponsor of the UCERF2 work (the California Earthquake Authority). The portfolio was developed by the first author from the HAZUS-MH estimated inventory of California construction. We next detail the tornado-diagram analysis, the development of the portfolio, and the method for calculating EAL.

TORNADO DIAGRAMS

In Porter et al. (2002) and various subsequent studies (e.g., MMC 2005), earthquake engineers have employed a graphical technique taken from the field of decision analysis to test the sensitivity of seismic risk to major uncertain variables. The technique is useful for examining the sensitivity of a scalar deterministic function of uncertain input variables; it depicts the effect on the function of varying each uncertain input individually, while keeping the other parameter values fixed at some best-estimate or mean value.

That is, one is interested in some function Y = f(X), where Y is an uncertain scalar variable, X is a vector of one or more scalar uncertain input variables, and f is a deterministic function. To explore which components of X strongly affect Y, one evaluates a series of Yi = f(Xi), where Xi denotes a sample vector of X (i.e., given values of each component of X). In the following, n denotes the number of components or elements of X, X(i) denotes the ith component of X (1 ≤ i ≤ n), E[X] denotes the expected value of X, i.e., the vector X with every component set to its average or expected value, E[X(i)] is the expected value of component i of the vector X, X(i)LB denotes a lower-bound value of X(i) (discussed later), X(i)UB denotes an upper-bound value of X(i), and {•} denotes a vector composed of the elements inside the brackets. Then one evaluates Y0, Y1, … Y2n as follows:

$$Y_0 = f\left(E[X]\right)$$
$$Y_i = f\left(\left\{E[X^{(1)}], E[X^{(2)}], \ldots, X^{(i)}_{LB}, \ldots, E[X^{(n)}]\right\}^T\right), \quad 1 \le i \le n \qquad (1)$$
$$Y_{i+n} = f\left(\left\{E[X^{(1)}], E[X^{(2)}], \ldots, X^{(i)}_{UB}, \ldots, E[X^{(n)}]\right\}^T\right), \quad 1 \le i \le n$$

The sensitivity of Y to component X(i) is indicated by the difference (or “swing”):

$$\mathrm{Swing}_i = \left|Y_{i+n} - Y_i\right| \qquad (2)$$

To create the tornado diagram, one sorts the components of X in decreasing order of swing and creates a horizontal bar chart, where the horizontal axis measures Y and each horizontal bar corresponds to one of the components of X, labeled next to the vertical axis with a name for that component. The ends of the topmost bar are placed at Yi and Yi+n for the component i with the largest swing; the next bar down shows Yj and Yj+n for the component j with the next-largest swing (i.e., Swingi ≥ Swingj ≥ Swingk, etc.). Finally, one draws a vertical line that intersects the horizontal axis at Y0. The diagram resembles a tornado in profile, hence the name. Thus, the tornado diagram is helpful in depicting which input variables probably matter to the uncertain value of Y and which might reasonably be ignored.
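To make the bookkeeping of Equations 1 and 2 concrete, the following minimal sketch (ours, not drawn from any of the cited software) evaluates the 2n + 1 cases and sorts the swings; the function and bounds in the example are hypothetical.

```python
import numpy as np

def tornado(f, x_mean, x_lb, x_ub):
    """Tornado-diagram bookkeeping per Equations 1 and 2.

    f      : deterministic scalar function of a 1-D input vector
    x_mean : expected value E[X(i)] of each input (length n)
    x_lb   : lower-bound values X(i)_LB
    x_ub   : upper-bound values X(i)_UB
    Returns the baseline Y0 and (index, Y_lo, Y_hi, swing) tuples
    sorted in decreasing order of swing.
    """
    x_mean = np.asarray(x_mean, dtype=float)
    y0 = f(x_mean)  # Y0 = f(E[X])
    bars = []
    for i in range(len(x_mean)):
        lo = x_mean.copy(); lo[i] = x_lb[i]  # vary one component at a time
        hi = x_mean.copy(); hi[i] = x_ub[i]
        y_lo, y_hi = f(lo), f(hi)            # Y_i and Y_(i+n), Equation 1
        bars.append((i, y_lo, y_hi, abs(y_hi - y_lo)))  # swing, Equation 2
    bars.sort(key=lambda bar: bar[3], reverse=True)     # widest bar on top
    return y0, bars

# Hypothetical example: Y = X1*X2 + X3
y0, bars = tornado(lambda x: x[0] * x[1] + x[2],
                   x_mean=[2.0, 3.0, 1.0],
                   x_lb=[1.0, 2.5, 0.5],
                   x_ub=[3.0, 3.5, 1.5])
```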

A sample is shown in Figure 7. It shows that the variable called damage factor is most sensitive to uncertainty in a variable called assembly capacity. By setting all the other input variables of damage factor (Sa, ground motion record, etc.) to their expected value, uncertainty in assembly capacity can make the damage factor as low as 0.07 or as high as

0.94. It shows that assembly capacity and Sa seem to dominate uncertainty in damage factor, and implies that uncertainty in variables called F-d multiplier, mass, and O&P can reasonably be ignored for purposes of estimating damage factor.

[Figure 7 bars, top to bottom: assembly capacity, Sa, ground motion record, unit cost, damping, F-d multiplier, mass, O&P; horizontal axis: damage factor, 0.00–1.00.]

Figure 7. Sample tornado diagram (Porter et al. 2002)

ANALYSIS

Estimating the portfolio of interest. The HAZUS-MH California building stock is not based on an actual enumeration of the buildings in California: their value, location, size, seismic attributes, etc. No such enumeration exists; there is no authority responsible for compiling it: not county tax assessors, nor the state architect, nor the Federal Emergency Management Agency. The portfolio instead is based on population and employment data, namely, how many people live or work where and, for workers, how many work in each of 30+ economic sectors. The interested reader is referred to NIBS and FEMA (2009) for details of the portfolio estimate, but to summarize (and oversimplify somewhat), the HAZUS-MH model encodes engineers’ judgment of the square footage required for each resident or worker, along with the per-square-foot value and building-type distribution by economic sector and several categories of residential construction. Thus, the HAZUS-MH model estimates the replacement cost of buildings for a given structure type and census block as follows:

$$V = \sum_{o} P_o \cdot A_o \cdot T_o \cdot R \qquad (3)$$

where o is an index of occupancy category, P_o denotes the number of people in the census block in occupancy category o, A_o denotes the estimated average square footage per person in that occupancy category, T_o is the fraction of building area of the given occupancy category constructed of the structure type of interest, and R is the estimated replacement cost per square foot for that structure type.
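As an illustration of Equation 3, a minimal sketch follows; the parameter values are hypothetical placeholders, not the actual HAZUS-MH database entries.

```python
def replacement_cost(people, sqft_per_person, type_fraction, cost_per_sqft):
    """Equation 3: V = sum over occupancy categories o of P_o * A_o * T_o * R.
    The first three arguments are dicts keyed by occupancy category."""
    return sum(people[o] * sqft_per_person[o] * type_fraction[o] * cost_per_sqft
               for o in people)

# Hypothetical single-category census block (values are illustrative only):
V = replacement_cost(people={"RES1": 1200},           # residents in the block
                     sqft_per_person={"RES1": 800},   # average sq ft per person
                     type_fraction={"RES1": 0.9},     # fraction built as W1
                     cost_per_sqft=110.0)             # replacement $ per sq ft
```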

The HAZUS-MH (2009) software provides each of these parameter values in a database; we compiled the quantities for single-family dwellings (“RES1” in HAZUS-MH terminology) and woodframe buildings of less than 5,000 square feet (“W1”). The total estimated replacement cost of these buildings in California is $1.61 trillion, in 2006 USD. The statewide total for all residential buildings is $2.1 trillion, and the statewide total replacement cost for buildings of all occupancy classes is $2.7 trillion, so the portfolio considered here represents an estimated 60% of the replacement cost of all California buildings. The portfolio used here comprises 13,206 assets in 7,002 California census tracts. Assets within the same census tract differ by structure type.

Evaluating the expected annualized loss. It is convenient to choose this particular metric because we recently developed a computer program called the Portfolio EAL Calculator to calculate it; see e.g., Porter and Scawthorn (2007, 2009). The calculator is part of a suite of software that extends OpenSHA (www.opensha.org) by adding loss-estimation capabilities: consideration of one or more assets exposed to seismic excitation, plus vulnerability functions to relate loss to shaking, and a calculator to evaluate the following equation:

$$EAL = \sum_{j=1}^{k} EAL_j \qquad (4)$$
$$EAL_j = V \int_{s=0}^{\infty} y(s) \left| G'(s) \right| \, ds$$

In Equation (4), EAL denotes the expected annualized loss to a portfolio of assets, j is an index to a portfolio of k assets (such as buildings), EALj is the expected annualized loss to asset j, V is the value of asset j (here, the replacement cost of a given structure type in a given census block), y(s) is the expected value of loss (normalized by value) to asset j when it is subjected to shaking intensity s, G(s) denotes the mean frequency (events per year) with which shaking of intensity s is exceeded at the location of asset j, and G'(s) is its first derivative with respect to s. As shown in Porter et al. (2006), the integral in Equation (4) can be carried out numerically:

$$EAL_j = V \sum_{i=1}^{n} \left\{ y_{i-1} G_{i-1} \left[1 - \exp\left(m_i \Delta s_i\right)\right] + \frac{\Delta y_i}{\Delta s_i} G_{i-1} \left[ \exp\left(m_i \Delta s_i\right)\left(\frac{1}{m_i} - \Delta s_i\right) - \frac{1}{m_i} \right] \right\} = V \sum_{i=1}^{n} \left( y_{i-1} a_i + \Delta y_i b_i \right) \qquad (5)$$

which is exact for a piecewise-linear vulnerability function and piecewise-loglinear hazard curve. In Equation 5,

$$\Delta s_i = s_i - s_{i-1}, \qquad \Delta y_i = y_i - y_{i-1}, \qquad m_i = \frac{\ln\left(G_i / G_{i-1}\right)}{\Delta s_i} \quad \text{for } i = 1, 2, \ldots, n$$
$$a_i = G_{i-1}\left[1 - \exp\left(m_i \Delta s_i\right)\right], \qquad b_i = \frac{G_{i-1}}{\Delta s_i}\left[\exp\left(m_i \Delta s_i\right)\left(\frac{1}{m_i} - \Delta s_i\right) - \frac{1}{m_i}\right]$$

Vulnerability. One can estimate the repair-cost vulnerability of a given structure type, denoted above by y(s), in various ways. Various authors have acquired or estimated earthquake insurance claims data for large numbers of buildings and then performed regressions to relate damage factor (repair cost as a fraction of replacement cost) to shaking intensity; see for example Steinbrugge (1982) for several of these. In their ATC-13 project, researchers working for the Applied Technology Council (1985) found that insufficient empirical evidence existed to estimate the seismic vulnerability of the breadth of California construction, and therefore resorted to expert opinion to create seismic vulnerability functions for 38 California structure types, generally defined in terms of construction material (e.g., wood, concrete, steel, etc.), lateral-force-resisting system (e.g., shearwall, moment-resisting frame, etc.), and height category (1-3 stories, 4-7 stories, or 8+ stories). While they are nearly exhaustive, the ATC-13 seismic vulnerability functions are expressed in terms of modified Mercalli intensity (MMI), which is not a significant part of modern US hazard modeling.

Other researchers have applied first principles to estimate seismic vulnerability, i.e., using structural analysis, component fragility data from lab tests or earthquake experience, and construction cost-estimation principles. The earliest of these efforts seems to be that of Czarnecki (1973), subsequently enhanced by work at John A. Blume & Associates (e.g., Kustu et al. 1982). In the 1990s, Kustu contributed to the HAZUS-MH approach for estimating seismic vulnerability by analytical means (e.g., Kircher et al. 1997). Beginning in the late 1990s, researchers working for the PEER Center and others further developed analytical methods (e.g., Beck et al. 1999, Porter et al. 2001), leading to the highly detailed, component-based approach currently being formalized by the Applied Technology Council in its ATC-58 project.

Of the analytical approaches, only the HAZUS-MH approach has been implemented to the point of offering a nearly exhaustive set of seismic vulnerability functions for U.S. construction. In recent work (Porter 2009), we adapted the HAZUS-MH methods and data to create seismic vulnerability functions in the form needed for Equation (5), i.e., tabular relationships between mean damage factor and structure-independent 5%-damped elastic spectral acceleration response at 0.3- and 1.0-second periods. Porter (2009) details the methodology, and the resulting seismic vulnerability functions are offered at www.risk-agora.org for general use. The method has been peer reviewed by at least 6 engineers, and its results independently duplicated by at least 3.

The HAZUS-MH approach takes a somewhat simplified view of buildings, idealizing them as single-degree-of-freedom nonlinear damped harmonic oscillators with three general components (structural, nonstructural drift-sensitive, and nonstructural acceleration-sensitive). It applies a somewhat simplified structural-analysis technique referred to as the capacity spectrum method (CSM), detailed in ATC-40 (Applied Technology Council 1996). Researchers working on the FEMA 440 project (Applied Technology Council 2004) subsequently examined the CSM critically and proposed enhancements, so the potential exists to improve upon the structural-analysis portions of the HAZUS-MH approach. However, the enhancements involve a number of new parameters whose values must still be established before the CSM can practically be replaced in the HAZUS-MH approach for much U.S. construction. For present purposes, then, we are limited to the HAZUS-MH method.

Applying the HAZUS-MH approach outside of HAZUS-MH. The challenge of estimating EAL for the California portfolio is aggravated by the fact that HAZUS-MH itself cannot be used to perform the sensitivity tests proposed here. The principal reason is that the hazard model in HAZUS-MH does not allow one to test the effects of each individual branch of the hazard logic tree, at least not without enormous effort.

Portfolio EAL Calculator. We therefore chose to develop new software as part of the OpenRisk effort (Porter and Scawthorn 2007): a portfolio EAL calculator. Here, “portfolio” means a set of one or more assets exposed to some peril such as earthquake. Most often, a portfolio is imported from a computer file, in this case a comma-separated-value text file. In the case of the Portfolio EAL Calculator, the portfolio file has a header line and an arbitrary number of data lines. The header line contains the following text:

AssetGroupName,AssetID,AssetName,BaseHt,Ded,LimitLiab,Share,SiteName,Elev,Lat,Lon,Soil,ValHi,ValLo,Value,VulnModel,Vs30

Each data line contains information about one point asset, i.e., an asset located at a single point. A sequence of data items about each asset appears in the line, in the same order as the header line. The data items are detailed in Table 1.

Table 1. Portfolio contents

Field | Meaning | Type | Comment
AssetGroupName | Group name | Text | Non-unique name for the group to which the asset belongs, e.g., “Houses”
AssetID | Asset identifier | Integer | A unique ID, e.g., 1
AssetName | Asset name | Text | A label for the asset, e.g., “House 1”
BaseHt | Reserved for later use | Double |
Ded | Reserved for later use | Double |
LimitLiab | Reserved for later use | Double |
Share | Reserved for later use | Double |
SiteName | Site name | Text | A label for the site, e.g., “769 N Michigan Ave, Pasadena CA 91104”
Elev | Reserved for later use | Double |
Lat | Latitude | Double | Decimal degrees N, within ±90.00
Lon | Longitude | Double | Decimal degrees E, within ±180.00
Soil | NEHRP soil class | Text | NEHRP site soil classification, in {A, B, C, D, E, F}
ValHi | Upper bound of value at risk | Double | Value is uncertain or varies over time; upper bound here is the 97th percentile, ≥ Value
ValLo | Lower bound of value at risk | Double | Lower bound is the 3rd percentile, ≤ Value
Value | Value at risk | Double | Best estimate of value at risk, monetary or number of people, ≥ 0.00
VulnModel | Vulnerability model | Text | Name of the vulnerability function to use for this asset, e.g., “CUREE small house as-is,” restricted to a list of available models
Vs30 | Shearwave velocity, m/sec | Double | Mean shearwave velocity in top 30 m of soil, m/sec, > 0.00
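To illustrate the file format, here is a minimal parsing sketch; the single data line is hypothetical, not drawn from the actual portfolio.

```python
import csv, io

# Header from Table 1 plus one hypothetical asset record:
text = ("AssetGroupName,AssetID,AssetName,BaseHt,Ded,LimitLiab,Share,SiteName,"
        "Elev,Lat,Lon,Soil,ValHi,ValLo,Value,VulnModel,Vs30\n"
        "Houses,1,House 1,0,0,0,0,Sample site,0,34.0469,-118.2617,D,"
        "700000,500000,603000,W1-m-RES1-DF,280\n")

assets = []
for row in csv.DictReader(io.StringIO(text)):
    assets.append({
        "id":    int(row["AssetID"]),
        "lat":   float(row["Lat"]),    # decimal degrees N
        "lon":   float(row["Lon"]),    # decimal degrees E
        "value": float(row["Value"]),  # best-estimate value at risk
        "vuln":  row["VulnModel"],     # selects y(s) and its intensity measure
        "vs30":  float(row["Vs30"]),   # m/sec
    })
```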

To use the Portfolio EAL Calculator, the user selects the UCERF2 parameter values and GMPE to employ, either via a graphical user interface (Figure 8) or, as is the case here, via a command-line interface. In the present application, the program automatically calculates EAL for every allowable combination of UCERF2 parameter values and a specified GMPE. To calculate EAL using other ground-motion prediction equations, one runs the calculation again, specifying the next GMPE of interest in the command line. Latitude, longitude, and Vs30 values are specified in the portfolio file. The source of the Vs30 values used in the portfolio file is referred to here as the Vs30 model; for example, one could rely on Wills and Clahan’s (2006) map of Vs30, or that of Wald and Allen (2007). The intensity measure type to be calculated for each asset (e.g., 5%-damped, 1-second spectral acceleration) is inferred from its vulnerability model, specified in the portfolio file. Each model corresponds to a vulnerability function—y(s) in Equation 4—encoded in a database that is part of the software.

Each vulnerability function includes a statement of which intensity measure it uses to measure s.

The calculator employs preexisting OpenSHA object classes to calculate site hazard for the selected parameter values and the first asset in the portfolio. It carries out the EAL calculations shown in Equations 4 and 5 to estimate the EAL for that asset and the specified hazard parameter values. Looping over assets, the calculator sums EALs to estimate the portfolio EAL.

Calculations for each asset are independent of the other assets, conditioned on the choice of UCERF2 and GMPE parameter values, timespan, Vs30 model, and vulnerability model. The calculation is therefore well suited to parallel processing, and the command-line version of the portfolio EAL calculator takes advantage of that to distribute the calculations over a number of processors. In particular, the calculation is distributed as follows.

The calculations described here were performed at the University of Southern California’s High-Performance Computing Center. The Center’s Linux computing resource consists of 785 dual-core, dual-processor nodes and 5 large-memory servers, each with 16 processors and 64 GB of memory, interconnected with Ethernet and a 2-Gbit Myrinet backbone, plus a cluster of 1,990 quad-core or hex-core, dual-processor nodes with Ethernet and a 10-Gbit Myrinet backbone. As of November 2011, USC’s high-performance computing center ranked number 79 among the world’s 500 fastest supercomputers (http://www.top500.org/list/2011/11/100).

With this hardware, the EAL calculations for the 480 branches of the UCERF2 logic tree were performed in 24 sequential or parallel jobs, each job handling 20 branches of the tree. Each job has 40 processors available to it, distributed among 5 compute nodes, each of which has 8 processors. Starting with its first branch, the job distributes assets among the processors; individual processors compute EALs for the assets assigned to them, and these are summed to produce the portfolio EAL. The job then moves on to the 2nd branch and does the same thing, until it has calculated portfolio EAL for each of its 20 branches.
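In schematic form, the distribution works as sketched below (our illustration, not the actual USC job scripts; asset_eal_for_branch is a hypothetical stand-in for the site hazard curve plus Equation 5).

```python
from multiprocessing import Pool

def asset_eal_for_branch(asset, branch):
    """Stand-in: compute the asset's site hazard curve under the given
    logic-tree branch, then apply Equation 5. Placeholder result here."""
    return 0.0

def branch_eal(assets, branch, n_workers=40):
    """Portfolio EAL for one branch (Equation 4): distribute the assets
    among workers and sum the per-asset EALs."""
    with Pool(n_workers) as pool:
        per_asset = pool.starmap(asset_eal_for_branch,
                                 [(a, branch) for a in assets])
    return sum(per_asset)

def job(assets, branches):
    """One job: handle its ~20 branches sequentially."""
    return {branch: branch_eal(assets, branch) for branch in branches}
```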

Distributing the job this way among 40 processors, the calculation of EAL for a portfolio of 13,200 assets and 20 branches took approximately 1 minute, so 24 jobs (all 480 branches) took approximately 30 minutes. These calculations reflect Type-A and Type-B faults only. Calculation of EAL from background sources (Type-C zones) was performed separately and added back in, because background EAL does not vary by branch, but only by GMPE. The background calculations are computationally intensive because of the number of seismic sources involved; they took approximately 50 minutes on 80 processors, as opposed to 1 minute on 40 processors for the EAL resulting from Type-A and Type-B faults.

Once EAL from background seismicity is added to EAL from Types A and B faults, results are reported via a text file. The file lists on each line the UCERF2 parameter values and the corresponding portfolio EAL, one record for each branch of the logic tree.

Additional documentation of the Portfolio EAL Calculator can be found at www.opensha.org.


Figure 8. Graphical user interface for the Portfolio EAL Calculator. The GUI allows one to specify parameter values for the earthquake rupture forecast, GMPE, timespan, and portfolio file. A command- line version was employed in the present study to loop over all branches of the UCERF2 logic tree.

RESULTS

The Portfolio EAL Calculator estimated EAL for each of the 480 branches of the UCERF2 earthquake rate model, and each of 4 NGA equations: Chiou and Youngs (2008), Campbell and Bozorgnia (2008), Boore and Atkinson (2008), and Abrahamson and Silva (2008). Among the 1,920 results, the minimum value was $1.62 billion; the value exceeded by 50% of branches (the median in terms of order) was $2.31 billion; the maximum, $3.42 billion. The extrema are therefore approximately equal to the baseline multiplied or divided by a factor of 1.5. The minimum, baseline, and maximum values represent $1.01, $1.43, and $2.12 per $1000 of exposed value (a common way to normalize losses against exposed value). The median or baseline branch is illustrated in Figure 9; parameter values producing low, median, and high values are shown in Table 2.

Figure 9. Baseline branch of the logic tree shown in bold. Baseline GMPE is Abrahamson and Silva (2008)

Table 2. Parameter values producing low, baseline (median in terms of order), and high EAL values.

Order | Def mod | Mag-area | A-fault solution | Rates | Connect more B faults? | b | Recurrence | Aperiodicity | GMPE | EAL, $B
Min | D2.4 | Ellsworth | Segmented | Mo-rate bal | Yes | 0.0 | Empirical | – | CY08 | $1.62
Med | D2.5 | Ellsworth | Segmented | A priori | Yes | 0.0 | Empirical | – | AS08 | $2.31
Max | D2.2 | Hanks-Bakun | Segmented | Mo-rate bal | Yes | 0.8 | BPT | 0.3 | AS08 | $3.42

The weighted-average EAL of all 1,920 branches—weights from Figure 1—is $2.32 billion, which almost equals the baseline EAL of $2.31 billion. The standard deviation is $0.41 billion, meaning that the coefficient of variation of EAL considering the UCERF2 branch weights and results for all 1,920 branches is 0.18, so the extrema are approximately 2.5 standard deviations from the baseline, equivalent to approximately 0.5% and 99.5% bounds for a Gaussian distribution, which seems reasonable.
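The weighted statistics follow directly from the per-branch EALs and the Figure 1 branch weights; a minimal sketch (the array names are hypothetical):

```python
import numpy as np

def weighted_stats(eal, w):
    """Weighted mean, standard deviation, and coefficient of variation of EAL
    over logic-tree branches; w holds the products of the branch weights."""
    eal = np.asarray(eal, dtype=float)
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                               # normalize branch weights
    mean = np.sum(w * eal)                        # expectation of EAL
    std = np.sqrt(np.sum(w * (eal - mean) ** 2))  # weighted std deviation
    return mean, std, std / mean                  # c.o.v. (~0.18 in this study)
```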

The baseline EAL of $2.3 billion is roughly two-thirds of the statewide total of $3.5 billion estimated in FEMA 366b (Federal Emergency Management Agency 2008). The former figure represents only the risk to single-family woodframe dwellings, the latter the risk to all California construction. Normalized by value exposed, the $1.40 annual repair cost per $1000 of exposed value compares reasonably with FEMA 366b’s estimate of $3.5 billion EAL divided by approximately $2.7 trillion total exposed value, which is $1.30 per $1000 of exposed value.

Let us now consider the tornado diagram, which reflects the variables of the baseline branch changed one at a time. Its parameter values are shown in Table 3. The tornado diagram itself is shown in Figure 10. That tornado diagram reflects 19 branches (including the baseline). Note that it has only slightly narrower EAL extrema than all 1,920 branches.

Table 3. Results used to create tornado diagram

Parameter | Value at low EAL | Value at high EAL | Low EAL, $B | High EAL, $B | Swing, $B
BPT aperiodicity | BPT, 0.7 | BPT, 0.3 | 2.31 | 3.27 | 0.957
Recurrence model | Empirical | BPT (0.5) | 2.31 | 2.98 | 0.674
GMPE | CY2008 | AS2008 | 1.69 | 2.31 | 0.621
Fault-slip rates | Mo-rate balance | A priori | 2.23 | 2.31 | 0.081
Magnitude-area | Ellsworth-B | Hanks-Bakun | 2.31 | 2.34 | 0.026
Connect more B-faults? | True | False | 2.31 | 2.33 | 0.021
Def model | D2.5 | D2.3 | 2.31 | 2.32 | 0.015
A-fault solution | Unsegmented | Segmented | 2.30 | 2.31 | 0.008
B-fault b-value | 0.0 | 0.8 | 2.31 | 2.32 | 0.006

[Figure 10 bars, top to bottom, with low and high parameter values: probability model (empirical; BPT, 0.3), GMPE (CY2008; AS2008), fault-slip rates (Mo-rate balanced; a priori), magnitude-area relationship (Ellsworth; Hanks & Bakun), connect more B-faults? (true; false), deformation model (D2.5; D2.3), A-fault solution type (unsegmented; segmented), and B-fault b-value (0.0; 0.8). Vertical lines mark the minimum, weighted average, and maximum of the 1,920 branches. Horizontal axis: expected annualized loss, $1.5–3.5 billion.]

Figure 10. Effect of UCERF2 and ground-motion-prediction equation (GMPE) branches on EAL. Note that the line labeled “weighted average of 1,920 branches” uses the weights in Figure 1, the EAL values calculated for each branch, and is so close to the baseline EAL that it is indistinguishable in this figure.

Figure 10 suggests that the choice of recurrence model and the aperiodicity in the BPT recurrence model are the leading sources of uncertainty in the portfolio EAL, followed closely by the choice of which ground-motion-prediction equation to use. Less important to this metric are the method for estimating fault-slip rates, the choice of magnitude-area relationship, whether to connect more Type-B faults, the choice of deformation model, whether to enforce segmentation on Type-A faults, and the choice of b-value for Type-B faults.

Thus, only the GMPE and the branches of the probability model, group D in Figure 1, matter much to the EAL of California woodframe single-family dwellings. Among the 144 branches with an empirical recurrence model and the Abrahamson and Silva (2008) GMPE, the minimum and maximum EAL values are $2.22 and $2.37 billion, respectively, i.e., $2.29 billion ± 3%. Among the 96 branches with a Brownian passage time recurrence model, aperiodicity of 0.3, and Abrahamson and Silva (2008), the range is somewhat broader, $2.9 to $3.4 billion, but still relatively modest at ±10%.

DISCUSSION

Why does Figure 10 look different from Figure 7? The bars in Figure 10 end at the baseline rather than crossing it, as in Figure 7. The reason is that Figure 7 tests the sensitivity of loss mostly to scalar continuous random variables, each of which has a median value; the baseline in Figure 7 runs through those median values. By contrast, the variables in Figure 10 are mostly nominal, meaning they can take on only a few discrete values and those values have no order. For example, there are only two possible magnitude-area relationships, Ellsworth-B or Hanks and Bakun (2008), and no order or median value per se. The baseline in Figure 10 has to be taken as the EAL from the branch that produces the median EAL, not the EAL produced by taking the median value of each variable. If there were “higher” and “lower” values of the magnitude-area relationship and some median value, then the EALs resulting from choosing those higher and lower values would probably fall on opposite sides of the baseline.

What about interaction between variables? There is some interaction between variables, but it is not strong. The evidence is that the minimum and maximum values of EAL considering all 1,920 branches are close to the extrema of the bars in the tornado diagram, as shown in Figure 10. The variables can combine to produce more-extreme EALs, but not much more extreme.

What about weights? EAL as an uncertain value. The ends of the bars and the value of the baseline EAL make no reference to the weights shown for their respective branches on Figure 1. However, Figure 10 shows that the baseline EAL is nearly indistinguishable from the weighted-average EAL for all 1,920 branches, where the weights are taken from Figure 1.

That is, EAL can be considered a random variable, and its expected value—the expectation of expected annualized loss—is $2.32B, versus the $2.31B of the baseline branch. The standard deviation of EAL using these weights is $0.41 billion, so the bar extrema are at roughly the baseline ± 2 standard deviations.

Can one really conclude from the tornado diagram alone that the probability model and GMPE are the principal contributors to uncertainty? Another way to look at the results is to fix the probability model at “empirical” and the GMPE at “AS2008.” Among the 144 branches that share these values, the weighted average of EAL is $2.32B and its standard deviation is $0.04B—that is, fixing those values produces an uncertain EAL = $2.32B ± 2%, where the ± 2% refers to a coefficient of variation of 0.02. Do the same for the 96 branches with the BPT recurrence model, aperiodicity of 0.3, and AS2008, and the EAL is $3.2B ± 5%. By contrast, if one fixes any other two variables, the uncertainty is much greater. For example, among the 384 branches with a-priori rates and the Ellsworth-B magnitude-area relationship (the next two highest-ranked variables in the tornado diagram), EAL = $2.34B ± 18%. In fact, fix all of the epistemic uncertainties besides recurrence model, aperiodicity, and GMPE at their baseline values, and the remaining branches have EAL of $2.33B ± 18%, the same coefficient of variation as the population as a whole. The point is illustrated in Figure 11. It seems reasonable to conclude, therefore, that the tornado diagram accurately identifies the branches that matter, and neither a different baseline nor some explicit consideration of branch weights in the diagram would change that.

Figure 11. Probability density of EAL, varying only top uncertainties (upper curve) or only bottom uncertainties (lower)

What about other variables, such as vulnerability, Vs30, and the portfolio? The analysis treats the set of vulnerability models as fixed. Here, the vulnerability models are taken from the HAZUS-MH-based vulnerability functions discussed in Porter (2009). The portfolio is also fixed; the asset locations and values of the buildings exposed to loss were extracted from the HAZUS-MH (NIBS and FEMA 2009) database. The Vs30 value at each site is taken from Wills and Clahan (2006).

Can we test the effect of varying these fixed parameters, to see whether the trimming exercise would have a different outcome for different vulnerability models, a different Vs30 model, or a different estimate of the statewide portfolio of woodframe single-family dwellings?

While the portfolio could have an effect on the tornado diagram, there is no obvious practical alternative. There are no other publicly available statewide databases of single-family woodframe dwellings. Insurers maintain databases of insured values, but these are not publicly available, they reflect only each insurer’s particular market, and they have other idiosyncrasies that would be difficult to harmonize into a consistent whole.

There are alternatives to the Vs30 model, such as the Wald and Allen (2007) soil map that uses topographic slope as a proxy for Vs30. It seems likely that a different soil map would differ at the level of individual properties, but that the differences would tend to average out, producing trivial to modest overall effect.

It is straightforward to see how the tornado diagram might differ if one were to use different vulnerability models. We performed the analysis twice more, applying to the entire portfolio in one case the ATC-13 vulnerability function for woodframe lowrise construction (Applied Technology Council 1985), and in the other, Wesson et al.’s (2004) model. For the former, it was first necessary to convert the ATC-13 intensity measure from MMI to peak ground velocity (PGV); we used the inverse of the Wald et al. (1999) MMI-PGV relationship. This conversion raises two issues. First, a regression of PGV on MMI would probably differ from the inverse of one of MMI on PGV, but for present purposes the difference is probably immaterial. Second, PGA is a better predictor of MMI for MMI < 6.5, but it seems likely that most of the EAL accrues at the higher intensities, where PGV is the better predictor. The results are shown in Figure 12.
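A sketch of that conversion, assuming the commonly quoted Wald et al. (1999) regression MMI = 3.47 log10(PGV) + 2.35 (PGV in cm/sec, derived for MMI ≥ 5); the (MMI, damage factor) pairs below are hypothetical, not the actual ATC-13 values.

```python
def pgv_from_mmi(mmi):
    """Invert MMI = 3.47*log10(PGV) + 2.35 (Wald et al. 1999; PGV in cm/sec,
    regression derived for MMI >= 5) to re-express an MMI-based vulnerability
    function in terms of PGV."""
    return 10.0 ** ((mmi - 2.35) / 3.47)

# Re-cast hypothetical (MMI, mean damage factor) pairs to (PGV, damage factor):
mmi_df = [(6.0, 0.01), (7.0, 0.03), (8.0, 0.10), (9.0, 0.25)]
pgv_df = [(pgv_from_mmi(mmi), df) for mmi, df in mmi_df]
```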

Though both diagrams have somewhat lower baseline values ($2.07 and $2.05 billion for ATC-13 and Wesson, respectively, versus $2.31 billion for the HAZUS-based vulnerability functions), the diagrams show essentially the same results: the probability model and GMPE dominate the uncertainty, and, with the exception of fault-slip rates, the order of the variables in the diagram is largely unchanged, as are the branches that produce the minima and maxima. The weighted-average EAL from all 1,920 branches is $2.10 billion and $2.07 billion for the ATC-13 and Wesson et al. vulnerability models respectively, close to their baseline figures of $2.07 and $2.05 billion.

[Figure 12 shows two tornado diagrams analogous to Figure 10, each with the probability-model (empirical; BPT, 0.3) and GMPE (CY2008; AS2008) bars on top, followed by bars for the magnitude-area relationship, connect more B-faults?, deformation model (D2.4; D2.2), A-fault solution type, fault-slip rates, and B-fault b-value. Horizontal axes: expected annualized loss, $1.5–3.5 billion.]

Figure 12. UCERF2 tornado diagrams using (a) ATC-13 and (b) Wesson et al. vulnerability models

VALIDATION

Can these detailed outcomes be validated against prior work or other calculations, and explained in light of what we know about the uncertainties? To start with, Field et al. (2009) examined the sensitivity of the UCERF2 model to its various branches. Considering the rate at which events of M≥6.7 occur, they found that “The empirical versus BPT/Poisson probability-model branch is by far the most influential in our logic tree.”

We can also check our findings considering some sample sites. Let us assume that statewide risk is dominated by the urban (and seismic) centers of Los Angeles and the San Francisco Bay Area, and examine the hazard curves for representative locations in both places. We choose, for the sake of special interest, the Los Angeles address of the Southern California Earthquake Center on the campus of the University of Southern California and the Oakland, CA address of the Earthquake Engineering Research Institute. Geolocations are from Google Earth ver 6.1.0.5001. Estimated Vs30 values are taken from Wills and Clahan (2006) as calculated using the OpenSHA Site Data Application ver 1.2.3. Depths to the 1.0 and 2.5 km/sec shearwave velocities are taken from the SCEC Community Velocity Model and calculated the same way. Parameter values are:

USC: 34.023N 118.286W; Vs30 ≈ 280 m/sec; depth to 1.0 km/sec = 0.5 km; to 2.5 km/sec = 3.5 km

EERI: 37.804N 122.273W; Vs30 ≈ 302 m/sec; depth to 1.0 km/sec = 0.13 km; to 2.5 km/sec = 0.87 km

Hazard curves for each site are calculated using the OpenSHA Hazard Curve Calculator (local, version 1.2.3). Hazard is measured here in terms of Sa(0.3 sec, 5%). It is calculated under baseline conditions, plus baseline conditions except with high or low aperiodicity (0.7 and 0.3), or with empirical or BPT (aperiodicity = 0.5) probability models, or with two indicated GMPEs (Chiou and Youngs 2008 and Abrahamson and Silva 2008), or the indicated fault-slip-rate alternatives. See Figure 13. We consider here the entire curve, but focus on shaking with 2% exceedance probability in 1 year, near the recurrence frequency range that tends to dominate EAL.

For the USC site, the BPT probability model does produce substantially higher hazard than the empirical model. Aperiodicity of 0.3 produces the highest curve, with Sa(0.3 sec, 5%) being roughly 50% higher than baseline conditions at the reference recurrence frequency, approximately consistent with a 40% higher EAL. GMPE seems to affect hazard fairly uniformly, with Sa(0.3 sec, 5%) under AS2008 being higher than CY2008 by perhaps 30%, roughly consistent with a 35% greater EAL under AS2008 (vulnerability functions being nonlinear and tending to be concave upward at modest shaking levels). The hazard under moment-balanced fault-slip rates is perhaps 4% lower than the baseline case, roughly consistent with an approximately 4% lower EAL. Using the Hanks and Bakun (2008) magnitude-area relationship does produce perhaps 3% higher hazard than baseline conditions, roughly consistent with 1% higher EAL. Results for EERI’s address show similar relative sensitivities.

Figure 13. Sensitivity of hazard at USC (left) and EERI (right) to UCERF2 epistemic uncertainties. Hazard for these two sites approximately tracks sensitivity of EAL to the uncertainties indicated by the tornado diagram.

We can also check the observed importance of the GMPE. As shown in Figure 5b, Abrahamson and Silva (2008) produce uniformly higher shaking than Chiou and Youngs (2008) at distances greater than 15 km. The GMPE should therefore have the strong influence observed here if EAL is dominated by losses farther from the source than 15 km. To test this hypothesis, we deaggregated losses for 10 sample California sites in the portfolio. The sites were selected at random in a stratified sample, where each stratum had equal cumulative value (10% of the total), and each site within a stratum had equal likelihood of being selected. The sample is shown in Table 4. Each site represents all construction of the specified type in the specified census tract. Mean loss was calculated for each site given mean shaking for each of approximately 12,000 ruptures, using the OpenSHA event-set calculator (www.opensha.org).

Contribution to EAL for this small portfolio is the product of the mean loss given the rupture and the recurrence frequency of the rupture. Weighting by contribution to EAL, the weighted-average Vs30 of the sites is 330 m/sec, Joyner-Boore distance is 27 km, and magnitude is 7.05. Referring to Figure 5b, these parameters suggest uniformly higher mean shaking under AS2008 than CY2008, perhaps 20 to 30% higher: i.e., 0.20g vs 0.25g. At these shaking intensities, mean repair cost assuming AS2008 is approximately 75% higher than under CY2008, because of nonlinearity in the vulnerability functions. This is more than enough to account for the 50% greater portfolio EAL under AS2008 than CY2008.

Table 4. Sample portfolio for deaggregating loss and testing effect of GMPE

Lat N | Lon E | Vs30, m/sec | Replacement cost, $000 | Structure type
34.0469 | -118.2617 | 280 | $ 603 | W1-m-RES1-DF
33.8781 | -118.1600 | 280 | $ 16,044 | W1-h-RES1-DF
36.8444 | -119.7923 | 387 | $ 22,602 | W1-h-RES1-DF
34.2455 | -118.4176 | 280 | $ 33,369 | W1-h-RES1-DF
37.8455 | -122.2836 | 280 | $ 35,306 | W1-h-RES1-DF
34.0557 | -118.3706 | 280 | $ 35,881 | W1-h-RES1-DF
34.8860 | -120.4251 | 349 | $ 139,017 | W1-h-RES1-DF
33.6094 | -117.6852 | 390 | $ 150,521 | W1-h-RES1-DF
33.8808 | -118.4073 | 387 | $ 219,850 | W1-m-RES1-DF
37.5482 | -121.9499 | 280 | $ 351,695 | W1-m-RES1-DF
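A sketch of how such a sample can be drawn under the stated design (strata of roughly equal cumulative value over the value-sorted portfolio, one site per stratum chosen with equal likelihood); the implementation details are ours, not necessarily those used in the study.

```python
import random

def stratified_sample(assets, n_strata=10, seed=None):
    """Pick one asset per stratum, where strata are contiguous runs of the
    value-sorted portfolio, each holding ~1/n_strata of total value, and
    every asset within a stratum is equally likely to be chosen."""
    rng = random.Random(seed)
    assets = sorted(assets, key=lambda a: a["value"])
    target = sum(a["value"] for a in assets) / n_strata
    sample, stratum, cum = [], [], 0.0
    for a in assets:
        stratum.append(a)
        cum += a["value"]
        if cum >= target and len(sample) < n_strata - 1:
            sample.append(rng.choice(stratum))  # uniform within the stratum
            stratum, cum = [], 0.0
    if stratum:
        sample.append(rng.choice(stratum))      # close out the last stratum
    return sample
```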

We did not perform a thorough analysis to support the relative importance of all of the other epistemic uncertainties, but we can offer the following observations. It seems reasonable that parameters associated with Type-B faults should not strongly influence EAL, because almost by definition, they are smaller and less active than Type-A faults and therefore contribute relatively little to hazard at the statewide level. That the deformation model does not strongly affect statewide hazard also seems reasonable, since the differences between the models have to do with allocating the same slip between two faults. Reductions in the contribution to EAL resulting from lower slip rates on one fault would be offset by increases resulting from higher slip rates on the other.

LIMITATIONS

Several questions remain. We do not know how the tornado diagram might differ if instead of EAL we examined a large, rare loss, such as the commonly used loss with 0.4% exceedance probability in a year; nor do we know how it might differ with a different portfolio or soil model. We have only partially examined why the epistemic uncertainties in UCERF2 have the relative impact suggested here. And in any case, we offer no information or opinions on the scientific importance of resolving any of the uncertainties.

CONCLUSIONS

There are many ways to explore the relative importance of the various modeling uncertainties (often called epistemic uncertainties) of the UCERF2 earthquake rupture forecast (WGCEP 2007) for the state of California. We set out to better understand which uncertainties matter most to societal economic risk, and to inform decisions about making loss calculations most efficiently. Toward that end, we performed a tornado-diagram analysis of the sensitivity of expected annualized loss (EAL) experienced by an estimated statewide portfolio of woodframe single-family dwellings. Here, EAL refers to the expected value of repair cost in 2012 for the buildings alone: no casualty risk, no content damage, no additional living expenses, no insurance deductibles or limits.

The analysis involved estimating the EAL for each of 480 branches of the UCERF2 logic tree and each of 4 next-generation attenuation relationships, for a total of 1,920 calculations of statewide EAL. The statewide portfolio was estimated using the HAZUS-MH default inventory (extracted from the HAZUS-MH database) and the HAZUS-MH seismic vulnerability functions extracted from HAZUS-MH using the “cracking an open safe” methodology (Porter 2009). The calculations were performed using a piece of software we created called the Portfolio EAL Calculator, which is an extension of the OpenSHA software developed by the USGS and the Southern California Earthquake Center.

The baseline EAL (a pseudo-median: the value exceeded by half the branches, and exceeding that of the other half) is estimated to be $2.3 billion, or $1.40 per $1000 of exposed value, which compares reasonably with other estimates of statewide seismic risk. It is also very close to the weighted-average EAL of all 1,920 branches. This baseline EAL was produced using the Abrahamson and Silva (2008) NGA relationship, deformation model D2.5, the Ellsworth-B magnitude-area relationship (WGCEP, 2002, Eq. 4.5b), the segmented A-fault solution type, a-priori rates, connecting more Type-B faults, a Type-B-fault b-value of 0.0, and the empirical probability model. The branches with the highest and lowest EALs produced values of $3.4 billion and $1.6 billion, figures that represent $2.10 and $1.00 per $1000 of exposed value, suggesting that the epistemic uncertainties in the earthquake rupture forecast and ground-motion-prediction equation produce uncertainty in EAL of a factor of approximately 1.4 to 1.5. (There are other contributors to uncertainty in risk, but these are ignored here so as to focus on UCERF2.)

The tornado-diagram analysis examines the sensitivity of EAL to one epistemic uncertainty at a time. That is, no combination of two or more epistemic uncertainties is considered, with the exception that testing the sensitivity of EAL to aperiodicity also requires assuming a Brownian passage-time model. Even so, we found that the uncertainty in EAL is dominated by three uncertainties: the choice of recurrence model; in the case of the Brownian passage-time model, the value of the aperiodicity parameter; and the choice among ground-motion-prediction equations. Considering only these uncertainties accounts for most of the factor of 1.5 mentioned above. The other 6 epistemic uncertainties, while scientifically important and potentially material to local seismic hazard, local seismic risk, and perhaps low-probability, high-loss events, only modestly affect the statewide EAL experienced by single-family woodframe dwellings.
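The one-at-a-time procedure can be summarized in a few lines of code. The sketch below is our own illustration, not the implementation used in this study: it holds every uncertainty at its baseline choice, varies one at a time, records the resulting range of EAL, and sorts the uncertainties by the width of that range to order the bars of the tornado diagram.

    # Illustrative sketch of the one-at-a-time tornado-diagram procedure.
    def tornado(baseline, alternatives, eal_for):
        """baseline     -- dict: each uncertainty -> its baseline choice
        alternatives -- dict: each uncertainty -> list of all its choices
        eal_for      -- function taking a branch dict, returning statewide EAL
        """
        swings = []
        for name, choices in alternatives.items():
            eals = []
            for choice in choices:
                branch = dict(baseline)   # all other uncertainties at baseline
                branch[name] = choice     # vary only this one
                eals.append(eal_for(branch))
            swings.append((name, min(eals), max(eals)))
        # Widest bar goes at the top of the tornado diagram
        return sorted(swings, key=lambda s: s[2] - s[1], reverse=True)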

One implication is that, if one is examining statewide EAL and wishes to save computational time, the top 3 epistemic uncertainties alone sufficiently represent the uncertainty in hazard. Considering them alone would reduce the complexity of the logic tree by roughly 100x, from 1,920 branches to 20: 5 combinations of recurrence model and aperiodicity (counting “empirical” only once among recurrence models) times 4 ground-motion-prediction equations. Compared with a model that considers all 1,920 branches, the reduction in model complexity and computational effort comes with essentially no understatement of uncertainty and no bias in the expected value.
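As an illustration of the trimmed tree, the enumeration below yields exactly 20 branches. The labels, and in particular the aperiodicity values, are assumptions made for this sketch; the four GMPE abbreviations refer to the NGA relationships cited in the references.

    # Illustrative sketch of the trimmed logic tree: 5 recurrence/aperiodicity
    # combinations x 4 GMPEs = 20 branches. Labels and aperiodicity values
    # are assumptions made for this sketch, not the study's exact settings.
    from itertools import product

    recurrence = [
        ("Poisson", None),
        ("empirical", None),       # counted once among recurrence models
        ("BPT", 0.3),              # Brownian passage time, one branch per
        ("BPT", 0.5),              #   assumed aperiodicity value
        ("BPT", 0.7),
    ]
    gmpes = ["AS2008", "BA2008", "CB2008", "CY2008"]

    trimmed_branches = [
        {"recurrence": r, "aperiodicity": a, "gmpe": g}
        for (r, a), g in product(recurrence, gmpes)
    ]
    assert len(trimmed_branches) == 20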

Another implication is that, for a user of risk information concerned with statewide EAL, gathering more knowledge about any of the top 3 epistemic uncertainties would tend to reduce uncertainty more effectively than exploring the other sources of uncertainty. These conclusions are unchanged if one substitutes either the leading expert-opinion-based alternative vulnerability model (ATC-13; ATC 1985) or a recent empirical one (Wesson et al. 2004).

We do not know how an alternative soil model or an alternative estimate of the same portfolio would affect the results, but we expect that substituting a reasonable estimate of actual Vs30 site conditions would be immaterial. As for the effect of a different portfolio, there is no publicly available alternative estimate of the values and locations of all California woodframe single-family dwellings.

ACKNOWLEDGMENT

This research was supported by the Southern California Earthquake Center. SCEC is funded by NSF Cooperative Agreement EAR-0106924 and USGS Cooperative Agreement 02HQAG0008. Computation for the work described in this paper was supported by the University of Southern California Center for High-Performance Computing and Communications (www.usc.edu/hpcc).

REFERENCES

Abrahamson, N., and W. Silva, 2008. Summary of the Abrahamson & Silva NGA Ground-Motion Relations. Earthquake Spectra, 24 (1), 67–97

Akinci, A., F. Galadini, D. Pantosti, M. Petersen, L. Malagnini, and D. Perkins, 2009. Effect of time-dependence on probabilistic seismic hazard maps and deaggregation for the Central Apennines, Italy. Bulletin of the Seismological Society of America, 99 (2A), 585–610, http://www.bssaonline.org/content/99/2A/585 [viewed 21 Dec 2011]

American Society of Civil Engineers, 2010. Minimum Design Loads for Buildings and Other Structures, SEI/ASCE 7-10, Reston, VA, 608 pp.

(ATC) Applied Technology Council, 1985. ATC-13, Earthquake Damage Evaluation Data for California, Redwood City, CA, 492 pp.

(ATC) Applied Technology Council, 1996. ATC-40: Seismic Evaluation and Retrofit of Concrete Buildings, vols. 1 and 2, Redwood City, CA

(ATC) Applied Technology Council, 2004. FEMA 440: Improvement of Nonlinear Static Seismic Analysis Procedures, Redwood City, CA

Beck, J.L., A. Kiremidjian, S. Wilkie, A. Mason, T. Salmon, J. Goltz, R. Olson, J. Workman, A. Irfanoglu, and K. Porter, 1999. Decision Support Tools for Earthquake Recovery of Businesses, Final Report. CUREe-Kajima Joint Research Program Phase III, Consortium of Universities for Earthquake Engineering Research, Richmond, CA

Boore, D.M., and G.M. Atkinson, 2008. Ground-motion prediction equations for the average horizontal component of PGA, PGV, and 5%-damped PSA at spectral periods between 0.01 s and 10.0 s. Earthquake Spectra, 24 (1), 99–138

Campbell, K.W., and Y. Bozorgnia, 2008. NGA ground motion model for the horizontal component of PGA, PGV, PGD and 5% damped linear elastic response spectra for periods ranging from 0.01 to 10 s. Earthquake Spectra, 24 (1), 139–171

Chiou, B.S.J., and R.R. Youngs, 2008. An NGA model for the average horizontal component of peak ground motion and response spectra. Earthquake Spectra, 24 (1), 173–216

Cornell, C.A., and E.H. Vanmarcke, 1969. The major influences on seismic risk. Proceedings of the Fourth World Conference on Earthquake Engineering, A1-69 to A1-83

Czarnecki, R.M., 1973. Earthquake Damage to Tall Buildings, Structures Publication 359. Massachusetts Institute of Technology, Cambridge, MA, 125 pp.

(FEMA) Federal Emergency Management Agency, 2008. FEMA-366b: HAZUS-MH Estimated Annualized Earthquake Losses for the United States, Washington, DC, 53 pp.

Field, E.H., T.E. Dawson, K.R. Felzer, A.D. Frankel, V. Gupta, T.H. Jordan, T. Parsons, M.D. Petersen, R.S. Stein, R.J. Weldon II, and C.J. Wills, 2009. Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). Bulletin of the Seismological Society of America, 99 (4), 2053–2107

Grossi, P., 2000. Quantifying the uncertainty in seismic risk and loss estimation. Proceedings of the EuroConference on Global Change and Catastrophe Risk Management: Earthquake Risks in Europe, IIASA, Laxenburg, Austria, 6–9 July 2000, http://www.iiasa.ac.at/Research/RMS/july2000/Papers/grossi.pdf [viewed 27 Dec 2011]

Hanks, T.C., and W.H. Bakun, 2008. M-log A observations for recent large earthquakes. Bulletin of the Seismological Society of America, 98, 490–494

Kircher, C.A., A.A. Nassar, O. Kustu, and W.T. Holmes, 1997. Development of building damage functions for earthquake loss estimation. Earthquake Spectra, 13 (4), 663–682

Kustu, O., D.D. Miller, and S.T. Brokken, 1982. Development of Damage Functions for Highrise Building Components. For the US Department of Energy, URS/John A. Blume & Associates, San Francisco, CA

(MMC) Multihazard Mitigation Council, 2005. Natural Hazard Mitigation Saves: An Independent Study to Assess the Future Savings from Mitigation Activities. National Institute of Building Sciences, Washington, DC

(NIBS and FEMA) National Institute of Building Sciences and Federal Emergency Management Agency, 2009. Multi-hazard Loss Estimation Methodology, Earthquake Model, HAZUS®MH MR4 Technical Manual. Federal Emergency Management Agency, Washington, DC

Porter, K.A., 2009. Cracking an open safe: more HAZUS vulnerability functions in terms of structure-independent spectral acceleration. Earthquake Spectra, 25 (3), 607–618, http://www.sparisk.com/pubs/Porter-2009-Safecrack-MDF.pdf

Porter, K.A., A.S. Kiremidjian, and J.S. LeGrue, 2001. Assembly-based vulnerability of buildings and its use in performance evaluation. Earthquake Spectra, 17 (2), 291–312, http://www.sparisk.com/pubs/Porter-2001-ABV.pdf

Porter, K.A., and C.R. Scawthorn, 2007. OpenRisk: Open-Source Risk Estimation Software. SPA Risk, Pasadena, CA, 107 pp., http://www.risk-agora.org/downloads.html

Porter, K.A., and C.R. Scawthorn, 2009. Development of Open Source Seismic Risk Modeling Framework. Report to the US Geological Survey under award no. 07HQAG0002, SPA Risk LLC, Denver, CO, www.sparisk.com/publications.htm

Porter, K.A., C.R. Scawthorn, and J.L. Beck, 2006. Cost-effectiveness of stronger woodframe buildings. Earthquake Spectra, 22 (1), 239–266, http://www.sparisk.com/publications.html

Porter, K.A., J.L. Beck, and R.V. Shaikhutdinov, 2002. Sensitivity of building loss estimates to major uncertain variables. Earthquake Spectra, 18 (4), 719–743

Rabinowitz, N., D.M. Steinberg, and G. Leonard, 1998. Logic trees, sensitivity analyses, and data reduction in probabilistic seismic hazard assessment. Earthquake Spectra, 14 (1), 189–201

Steinbrugge, K.V., 1982. Earthquakes, Volcanoes, and Tsunamis: An Anatomy of Hazards. Skandia America Group, New York, 392 pp.

(WGCEP) Working Group on California Earthquake Probabilities, 2003. Earthquake Probabilities in the San Francisco Bay Region: 2002–2031. USGS Open-File Report 2003-214

(WGCEP) Working Group on California Earthquake Probabilities, 2007. The Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). U.S. Geological Survey Open-File Report 2007-1437 and California Geological Survey Special Report 203, http://pubs.usgs.gov/of/2007/1091/ [viewed 2 Dec 2011]

Wald, D.J., and T.I. Allen, 2007. Topographic slope as a proxy for seismic site conditions and amplification. Bulletin of the Seismological Society of America, 97, 1379–1395

Wald, D.J., V. Quitoriano, T.H. Heaton, and H. Kanamori, 1999. Relationships between peak ground acceleration, peak ground velocity and Modified Mercalli Intensity in California. Earthquake Spectra, 15 (3), 557–564

Wesson, R.L., D.M. Perkins, E.V. Leyendecker, R.J. Roth, and M.D. Petersen, 2004. Losses to single-family housing from ground motions in the 1994 Northridge, California, earthquake. Earthquake Spectra, 20 (3), 1021–1045

Wills, C.J., and K.B. Clahan, 2006. Developing a map of geologically defined site-conditions categories for California. Bulletin of the Seismological Society of America, 96 (4A), 1483–1501

Youngs, R.R., and K.J. Coppersmith, 1985. Implications of fault slip rates and earthquake recurrence models to probabilistic seismic hazard estimates. Bulletin of the Seismological Society of America, 75 (4), 939–964