Atkinson et al., Eos Trans. AGU, v. 81, 2000

Reassessing the New Madrid Seismic Zone

Atkinson, G., B. Bakun, P. Bodin, D. Boore, C. Cramer, A. Frankel, P. Gasperini, J. Gomberg, T. Hanks, B. Herrmann, S. Hough, A. Johnston, S. Kenner, C. Langston, M. Linker, P. Mayne, M. Petersen, C. Powell, W. Prescott, E. Schweig, P. Segall, S. Stein, B. Stuart, M. Tuttle, R. VanArsdale

Introduction For many, the central enigma of the large mid-continent seismic region known as the New Madrid Seismic Zone (NMSZ, Fig. 1) involves understanding the mechanisms that operate to permit recurrent great earthquakes remote from plate boundaries. Underlying this question is the more fundamental one of whether great earthquakes have, in fact, recurred there. Given the lack of significant topographic relief that is the hallmark of tectonic activity in most actively deforming regions, most of us feel a need to “pinch ourselves and see if we are dreaming” when confronted with evidence that, at some risk levels, the zone might represent a hazard locally as high as areas near the San Andreas fault. There certainly is room for argument on the subject. Direct physical examination of the active faults is not possible, because they are buried by up to a kilometer of unconsolidated Mississippi Embayment sediments (Fig. 1). The Mississippi River, flowing across the surface, tends to erase surface evidence of faulting. Current microseismicity reveals a pattern of active crustal faults, yet even collectively the three major segments it illuminates appear too short to support the assignment of M~8 for the three earthquakes in the 1811-1812 sequence (Johnston, 1996). Modern seismic networks have not been in place long enough to constrain the recurrence rates or magnitudes of larger earthquakes. We must, therefore, rely on historical records of earthquake effects (intensities), paleoseismic traces of strong shaking (mostly sandblows), scant geomorphological evidence, and geodetic observations to provide data to constrain models of recurrence. To turn these basic observations into useful constraints, we must apply interpretive models and seismic source theory (i.e., what we think we know about earthquake mechanics), and also draw on evidence from outside of the central US. Additional uncertainty arises from not being sure how applicable these external constraints are.

Figure 1. Schematic map of the New Madrid seismic zone showing major tectonic features (see text), modern seismicity (pluses), state boundaries, and major rivers. The shaded oval approximately covers the area of mapped liquefaction features created during the 1811-1812 earthquake sequence.

Illustrating some of the difficulties, Figure 2 compares isoseismals from the 1994 Northridge, California, earthquake and the 1895 Charleston, Missouri, earthquake, thought to be of similar magnitudes. There are no recordings of ground motion from 1895, but intensity–Mo relations “trained” on instrumentally recorded eastern US earthquakes yield a well-constrained estimate. The larger eastern US isoseismals may represent the combined effects of lower intrinsic attenuation, systematically higher eastern US stress drops, and stronger site amplification. The effect of site amplification of ground motions in river sediments can be seen in the damage (inner) isoseismals.

Figure 2. Although earthquakes in the central and eastern United States are less frequent than in the western United States, they affect much larger areas. This is shown by the areas affected by two earthquakes of similar magnitude: the 1895 Charleston, Missouri, earthquake in the New Madrid seismic zone and the 1994 Northridge, California, earthquake. Darker shading indicates minor to major damage to buildings and their contents. Outer, lighter shading indicates shaking felt, but little or no damage to objects, such as dishes.

A workshop, sponsored by the US Geological Survey and the Mid-America Earthquake Center, was held recently at the University of Memphis to discuss these issues and to come to some consensus on our understanding of them. To continue this consensus-building process, we herein summarize the workshop findings and seek input from the wider community (see our website at: http://cordova.ceri.memphis.edu/~meeting). We review the various classes of observations, starting with the fundamental data and then moving to the interpretive data.

Instrumental and Historical Seismicity Synoptic seismic network coverage of New Madrid began in the mid-1970s, and modern broadband recording only began about one year ago. The pattern of microearthquakes reveals a zigzag pattern of planar faults, surrounded by a "halo" of earthquakes not clearly associated with any known structure(s) (Fig. 1). Almost all the focal depths are shallower than ~15 km, with the exception of a few at ~25 km just outside the NMSZ in southern Illinois. Within the NMSZ only two Mw>5 earthquakes have occurred this century. The most recent M~5 earthquake occurred near Marked Tree, AR in 1976. Moment magnitudes and intensity information exist for all M>4.5 events since the 1960s. The bottom line seems to be that the instrumental seismic catalog is insufficient to assess the question of recurrence of large earthquakes. Indeed, the uncertainty in extrapolating the occurrence statistics to large magnitudes almost certainly overwhelms the uncertainties in producing the instrumental catalog (see later discussion). Nevertheless, microseismicity is useful for highlighting active structures. That said, there is a widespread feeling that structures other than those highlighted by microearthquakes may represent potential earthquake sources. Other potentially significant faults include the Reelfoot rift boundaries, the Commerce Geophysical Lineament, the Crittenden County fault zone, and the Bootheel Lineament (Fig. 1). The latter is delineated by flower structures that cut Quaternary strata, identified in reflection profiles. Approximate lengths of the faults delineated by microearthquakes are ~150 km for the lower SW/NE seismicity trend, ~50 km for the upper SW/NE seismicity trend, and ~70 km for the central reverse seismicity trend, the Reelfoot fault. The Blytheville Arch underlying the longer SW/NE seismicity trend has been imaged in reflection data and is consistent with a fault-deformed zone.
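One way to gauge the tension between these segment lengths and M~8 assignments is with an empirical length–magnitude scaling. The sketch below uses the Wells and Coppersmith (1994) strike-slip regression; that relation is derived mostly from plate-boundary earthquakes, so both the coefficients and their applicability to the NMSZ are assumptions, not results from this study.

```python
import math

# Wells & Coppersmith (1994) strike-slip regression:
#   Mw = 5.16 + 1.12 * log10(L),  L = rupture length in km.
# Derived mostly from plate-boundary events; using it for the
# NMSZ is an assumption.
def mw_from_length(length_km, a=5.16, b=1.12):
    """Rough moment magnitude implied by a rupture length (km)."""
    return a + b * math.log10(length_km)

# Approximate lengths of the three microseismicity trends (km)
for length in (150, 70, 50):
    print(f"{length:4d} km -> Mw ~ {mw_from_length(length):.1f}")
```

Under this scaling even the ~150 km trend implies only Mw ~7.6 rather than ~8, illustrating the mismatch noted in the Introduction.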
To better constrain the seismicity rate at magnitudes higher than the instrumental catalog reaches, it is necessary to augment it with seismic intensity reports (with some events constrained by sparse instrumental recordings in this century). A number of authoritative catalogs exist, and a discussion of these may be found in Mueller et al. (1997), which reports completeness levels of M>~3 since 1924, M>~4 since 1860, and M>~5 since 1700. The rates of occurrence of historic earthquakes are generally consistent with those expected from the rather brief instrumental catalog and a stationary "b-value". However, estimating the magnitudes of the historic earthquakes precisely is difficult, and the accuracy and precision of such estimates remain controversial. As will be discussed later, this is especially problematic for the 1811-1812 earthquakes.

The most widely used estimates of the 1811-1812 earthquake magnitudes are based on intensity reports compiled by Nuttli (1973), Street (1982, 1984), and Street and Nuttli (1984). To estimate moment magnitudes from the intensity data, an empirically derived relationship must be employed. Because of low intraplate seismicity rates, Johnston (1996) used a global dataset to constrain isoseismal area–Mw regressions. The uncertainties Johnston reported account only for the scatter in the data and thus do not represent any systematic effects. Examples of relevant systematic effects might include whether the 1811-1812 earthquakes were significantly deeper than, or had significantly higher stress drops than, the regressed examples. Residual isoseismal areas (observed minus regression estimates) plotted against published stress drops show a clear systematic correlation, with positive residuals corresponding to larger stress-drop events (Fig. 12, Johnston, 1996). Another sort of systematic error might appear in intensity observations if “site effects” were treated differently for events on which the regressions were based and events to which they were applied. In the case of 1811-1812, the isoseismals reflect the population distribution and its concentration in areas of probable site amplification (i.e., in alluviated valleys). Depending on how such a systematic bias is treated, it could result in estimates of Mw as much as one unit smaller (~M7; Hough et al., 1999). However, additional constraints on a lower bound for the 1811-1812 earthquakes’ magnitudes come from comparing their effects (intensities) with other eastern North America M~7 earthquakes that were either widely felt (i.e., 1886 Charleston) or recorded instrumentally (e.g., 1929 Grand Banks, 1933 Baffin Bay). Intensities for the Charleston and New Madrid events reported in New Madrid and Charleston, respectively, indicate that New Madrid was more strongly felt in Charleston than the reverse.
Liquefaction was much more severe for New Madrid than for Charleston, even though the materials seem to be less susceptible in New Madrid (Casey et al., 1999), and even accounting for the fact that there were three New Madrid events. Of course, if systematic biases bedevil the intensity–Mo estimates, then the magnitude estimated for Charleston might be subject to similar systematic biases as those for the NMSZ earthquakes. A promising method to evaluate the size of historical earthquakes eschews the use of felt areas altogether (Bakun and Wentworth, 1997). This method relies on deriving empirical functions describing the variation of intensity observations with distance from earthquakes with known locations and magnitudes. Preliminary work shows clear differences between regressions for a small set of central and eastern U.S. earthquakes and those for California earthquakes, which plausibly result from regional attenuation differences. Apart from the potential for bias due to site response and sampling already discussed, there is another hypothesis: that earthquake source scaling may have affected the magnitude estimates. The 1811-1812 earthquakes (or NMSZ earthquakes in general?) may have been more enriched in high frequencies, relative to the long periods at which seismic moment is defined, than are other earthquakes. Assuming that the effects upon which intensity reports are based are more sensitive to high-frequency content (Atkinson, 1993), such a “high-frequency” earthquake could be mistaken for a “large magnitude” earthquake. One way for this to happen is if there were a “second corner frequency” in larger earthquakes (Atkinson and Boore, 1995), which permitted the radiation of more high-frequency energy than predicted by standard earthquake source

models. For example, comparing observed acceleration spectra for eastern North American earthquakes with those predicted by various theoretical models reveals good agreement at and above ~5 Hz but considerable differences at 1 Hz and below. A second way is if the 1811-1812 earthquakes had systematically higher stress drops than other earthquakes. Again assuming that intensity is proportional to high-frequency acceleration, which is in turn proportional to the product of the cube root of moment and the stress drop to the 2/3 power (Atkinson and Hanks, 1995), one can match the same intensity value by decreasing the moment while increasing the stress drop. Only a modest change in stress drop is needed to effect a significant change in moment. Alternative fault models with different lengths, widths, and stress drops may thus all produce the same intensities. For example, restricting the width to 15 km, earthquake ruptures with lengths of 140, 48, and 20 km and stress drops of 367, 150, and 300 bars, respectively, would yield moment magnitudes of 8.1, 7.6, and 7.2, respectively. One significant consequence of this dependence is that if the moment release rate is kept constant, then relative to recurring M8.1 events, smaller but equally damaging earthquakes recur approximately 4 and 16 times more often, thus potentially raising the hazard!
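The moment/stress-drop tradeoff described above can be made quantitative. Assuming the Atkinson-Hanks proportionality quoted in the text (high-frequency acceleration scaling as the cube root of moment times stress drop to the 2/3 power), holding intensity fixed forces the moment to vary as the inverse square of stress drop. The sketch below is a minimal illustration of that scaling, not a calibrated source model.

```python
import math

# If intensity tracks high-frequency acceleration and
#   a_hf ∝ M0**(1/3) * stress_drop**(2/3)   (Atkinson and Hanks, 1995),
# then fixing a_hf implies M0 ∝ stress_drop**(-2).
def delta_mw(stress_drop_ratio):
    """Change in Mw that keeps intensity fixed when the stress drop
    is multiplied by stress_drop_ratio."""
    moment_ratio = stress_drop_ratio ** -2.0
    # Convert a moment ratio to a magnitude change: dMw = (2/3) log10(ratio)
    return (2.0 / 3.0) * math.log10(moment_ratio)

# Doubling the stress drop lets the moment drop fourfold,
# i.e. roughly 0.4 magnitude units, for the same intensity.
print(round(delta_mw(2.0), 2))
```

This is why only a "modest" change in stress drop moves the inferred magnitude substantially: a factor-of-two stress-drop increase shaves about 0.4 units off the required Mw.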

Paleoseismicity In the central US, geologists have found few Quaternary active surface faults. Although the Reelfoot reverse fault (~coincident with the northwest-trending seismicity lineation, Fig. 1) is thought to form a scarp within the Mississippi River alluvium, the fault itself has never been exposed. Therefore, recurrence estimates in the region have relied on observations of earthquake effects, such as sand blows presumably associated with strong motion. Paleoliquefaction observations may be used to constrain the timing and sizes of the causative events. Repeated liquefaction events observed at ~40 sites in the region confirm that large earthquakes have shaken the area at least several times in the past 1000 years. The characteristics of individual 1811-1812 sandblows and their spatial distribution (Fig. 1) serve in some sense as calibrations for pre-historic events. Characteristics of the individual sand blow features, such as the volume of sand ejected or the size of the feeder dykes, provide a rough yardstick for relating these features to the severity of ground shaking, and hence loosely to magnitude (although this is not without some argument!). Many things may affect liquefaction and the size of sandblow features. While few in number, geotechnical investigations have consistently revealed the shallow sub-surface materials in the NMSZ to be only moderately liquefiable (Casey et al., 1999). Perhaps the best indicator of the physical size of a liquefaction-inducing pre-historic earthquake is the spatial distribution of such features. Of course, correlating individual paleoliquefaction events in non-contiguous trench exposures means relying on dating limits from a combination of archeological and radiometric dating of horizons on either side of the liquefaction feature. In addition to having overlapping formation date ranges, liquefaction features are associated with one another based on the size of the sandblows.
Two pre-1811 episodes of major liquefaction appear to be documented by the observations, both comparable to the 1811-1812 earthquakes (regardless of their magnitudes) in both their areal extent and the size of the individual sandblow features. Best estimates of the dates of these are 1530 and 900 AD (Fig. 3).

Figure 3. Dates of paleoliquefaction features (vertical axis) arranged by site location, from north (left) to south (right); thick segments indicate most probable ages, with bars showing approximate two-standard-deviation uncertainties. Horizontal bars indicate the most probable dates of paleo-earthquakes.

The enormous size of the individual sandblows and their areal extent may be indicative of very strong shaking, given the moderate susceptibility of the sediments. The sediments are essentially always saturated, so climatic variability should not be a problem. Most experienced observers feel the features could not have been formed by moderate local events, because of the enormity of the volume of sand mobilized. The paleoseismic interpretations currently favored suggest that large earthquakes like those in 1811-1812 recur more frequently than is suggested by historic and instrumental data and thus may be “characteristic earthquakes”. Moreover, an absence of paleoliquefaction features that can be associated with moderate earthquakes further suggests a cut-off in Gutenberg-Richter magnitude/frequency behavior. This is also consistent with the complete lack of M>~6 earthquakes in the last century, despite a recurrence period of even 1000 years for New Madrid-sized events (a Gutenberg-Richter relation with b-value ~1 predicts that the rate of M6 events should be tenfold that of M7s). A final reason for not extrapolating from M<~5 seismicity rates to estimate the recurrence of larger events is based on a physical model that has been well tested with a variety of data (Wesnousky, 1999). That model demonstrates that Gutenberg-Richter behavior arises from failure on a population of faults with a well-organized distribution of sizes; applying Gutenberg-Richter statistics to just the few major faults of the NMSZ clearly violates its assumptions. Finally, the paleoseismic record suggests that the clustering seen in 1811-1812 also occurred in prior events. Sub-units of liquefied material have been seen in many of the sandblows,
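The tenfold figure quoted for M6 versus M7 rates follows directly from the Gutenberg-Richter relation, log10 N = a − b·M. A small sketch (with a purely illustrative a-value) shows that the rate ratio depends only on b:

```python
def gr_rate(m, a=4.0, b=1.0):
    """Cumulative annual rate of events >= m under Gutenberg-Richter
    (log10 N = a - b*m). The a-value here is illustrative only."""
    return 10.0 ** (a - b * m)

# With b ~ 1, M>=6 events should be ten times as frequent as M>=7,
# independent of the assumed a-value.
print(round(gr_rate(6.0) / gr_rate(7.0), 6))
```

If M~7-8 events really recur every 500-1000 years, this predicts several M~6 events per century, so their complete absence in the instrumental era is the argument for a departure from Gutenberg-Richter behavior.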

and interpreted as resulting from major events occurring within weeks to months of each other. The lack of soil development, but evidence of bioturbation, in materials between the sub-units constrains the timing between clustered events. Three sub-units are observed for the 1811-1812 features, two for the 1530 event, and three for the 900 event.

Geodetic Observations Only a few geodetically determined deformation rate constraints have been estimated for the NMSZ, and they all have high associated uncertainties. Deformation rates in the eastern US as a whole (from CORS continuous GPS data) appear to be less than 10^-9/yr. Various strategies have been attempted recently to refine the deformation rate for the NMSZ, including re-surveying older triangulation networks with campaign GPS, and campaign GPS-to-campaign GPS ties over a few years’ duration. A 10-site continuous GPS network is in the process of being deployed. Illustrating the state of measurements, a recent study by the Stanford team yields strains of ~0±0.10 for all the data combined and ~0.07±0.10 mrad/yr for a smaller net centered on the southern arm of seismicity. This result is consistent with GPS-to-GPS tie results from the Northwestern team (Newman et al., 1999), which used different data sets and different analytical techniques. The Northwestern team analyzed the survey data in terms of crustal velocities parallel to the NE trend of the seismicity. They report average fault-parallel velocities of 0.6±3.2 mm/yr for near-field sites, and –0.2±2.4 mm/yr for all sites combined. Until more precise measurements from the continuous network are available, the actual magnitude of the deformation rates depends on how one interprets the uncertainties. On the one hand, the uncertainty is sufficiently large that a fairly large systematic signal could be obscured within the noise. On the other hand, the argument has been made that each new technology employed shrinks the error bars, yet the estimates always include zero, suggesting a NMSZ deformation rate not different from that of the surrounding crust. The model one chooses to apply in order to interpret the geodetic observations has a major effect on the conclusions one obtains. The Northwestern team’s analyses represent the low-rate endpoint.
They estimated the equivalent slip on an infinitely long fault, locked above mid-crustal depths, and driven by remote displacement boundary conditions. Treated thus, the observations do not allow one to reject the null hypothesis that there is no loading deformation. Although this model requires fitting only two parameters (locking depth and displacement rate), it also implicitly assumes an infinitely long fault, and hence is the model with the lowest long-term slip rate and recurrence rate. While this model may be appropriate for plate boundaries, employing it in a mid-plate setting is controversial. One objection is that New Madrid is not a plate boundary of infinite length driven by far-field displacements, as evident in the Northwestern group’s results, which show no relative displacement across the New Madrid seismic zone. In addition, the fault system clearly is not infinite. Another point of controversy was the assumption of constant deformation rates: strain rates estimated as a function of time after the 1906 earthquake in California show clearly that the rate decays with time. Other models, which employ finite faults, yield higher estimates of earthquake recurrence rates. For example, deformations might be driven by stress boundary conditions and concentrated beneath the New Madrid seismic zone by a structure in the lower crust approximating a “rift pillow” interpreted in some geophysical data (Stuart et

al., 1997). In such a model, M8 earthquakes could occur on a fault overlying the rift pillow with a recurrence rate of 1000 years while still predicting the observed geodetic data. Another plausible model is presented in the final section.
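A simple moment budget shows why the choice of model matters so much. The sketch below asks what average slip rate one M8 event per 1000 years would imply; the fault dimensions (200 km by 15 km) and rigidity are assumed round numbers, not measured values.

```python
MU = 3.0e10      # rigidity, Pa (typical crustal value; assumed)
LENGTH = 200e3   # fault length, m (assumed, ~length of the NMSZ system)
WIDTH = 15e3     # seismogenic width, m (assumed)

def slip_per_event_m(mw):
    """Average coseismic slip (m) implied by Mw on the assumed fault,
    using the Hanks-Kanamori moment-magnitude relation."""
    m0 = 10.0 ** (1.5 * mw + 9.05)   # seismic moment, N*m
    return m0 / (MU * LENGTH * WIDTH)

slip = slip_per_event_m(8.0)        # ~12 m of slip per event
rate_mm_yr = slip * 1000.0 / 1000.0  # mm/yr for a 1000-yr recurrence
print(round(slip, 1), round(rate_mm_yr, 1))
```

Steady loading at roughly 12 mm/yr would be needed to sustain such a cycle, far above the fault-parallel velocity bounds quoted above; reconciling the two is precisely what motivates models with time-varying or locally concentrated deformation.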

Geologic/Geophysical Constraints Because of the inaccessibility of the microseismically active faults, geological techniques to estimate slip rates are mostly based on the identification and characteristics of secondary features. These estimates rest on assumptions about the sense of slip on the faults that are impossible to verify by direct examination. For example, slip estimates of ~2 mm/yr published recently (Mueller et al., 1999) were derived assuming there was no strike-slip motion on the Reelfoot fault (which is hard to reconcile with the geometry of the NE-SW striking faults). Relaxation of this assumption increases the slip rate estimates by a factor of ~2.5. Consider now the seismic potential of other faults in the region. While it seems unlikely that very large seismically active faults remain to be identified, the involvement of such enigmatic features as the Reelfoot rift boundaries, a buried rift-pillow decollement, or the Bootheel Lineament cannot be ruled out (Fig. 1). Moreover, within the last few years and even months, Quaternary active surface faulting has been revealed at marginal sites (Benton Hills, Porter Gap). While the observations reveal minimal faulting (a few meters of slip?) of as yet indeterminate age, they do suggest the potential for areas beyond the central NMSZ to produce moderate to large earthquakes.

Tectonic Models Any viable tectonic model for New Madrid must satisfy at least several criteria: large events, comparable to those in 1811-1812, have recurred every 500-1000 years; the active fault system probably isn’t any longer than ~200 km; today’s strain rates are low; and prior to the Holocene, large earthquakes occurred with a frequency lower by orders of magnitude than today. An example of a model (being developed by Kenner and Segall) that attempts to explain these constraints involves remote driving stresses that load an elongate zone of low viscosity in the lower crust beneath the New Madrid seismic zone. This zone concentrates stresses above it, causing earthquakes, which in turn reload the zone below. The cycle repeats, but the repeat time lengthens until the whole process eventually ceases. Surface deformation rates are significantly higher early in the interseismic period (10s of years) than later (100s of years). A model of the NMSZ yielded large recurrent earthquakes (though fewer through time) without measurable strain accumulation later in the interseismic period, similar to what is observed in the New Madrid geodetic data. The presence of a low-viscosity body within the NMSZ remains to be verified. Perhaps the greatest enigma of the NMSZ is what would cause the process to begin so geologically recently. Loading changes or fault state changes that might explain a recent increase in seismicity are related to the Holocene climate regime. Since the start of the Holocene, deglaciation has removed a large load of ice from as close as 100 km to the NMSZ, which would have been in the forebulge. Also, the Mississippi River captured the Ohio River, radically altering the hydrological system directly above the NMSZ. Clearly, these are as yet wild speculations, and the basic challenge remains to design ways of turning them into testable hypotheses.
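In a weak-zone model of this type, the timescale over which interseismic surface rates decay is set by the Maxwell relaxation time of the low-viscosity body, eta/mu. A toy calculation follows; the viscosity is a free parameter, chosen here only to land in the "tens of years" range mentioned above, and is not a measured NMSZ value.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def maxwell_time_years(eta, mu=3.0e10):
    """Maxwell relaxation time eta/mu, converted to years.
    eta: viscosity in Pa*s (free parameter); mu: rigidity in Pa."""
    return (eta / mu) / SECONDS_PER_YEAR

# An assumed lower-crustal viscosity of 1e19 Pa*s gives a
# relaxation time of roughly a decade.
print(round(maxwell_time_years(1.0e19), 1))
```

A viscosity an order of magnitude higher or lower shifts the relaxation time proportionally, which is one reason verifying the low-viscosity body (and its strength) matters for testing the model.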

References

Atkinson, G., 1993, Source spectra for earthquakes in eastern North America, Bull. Seism. Soc. Am., 83, 1778-1798.
Atkinson, G. and D. Boore, 1995, New ground motion relations for eastern North America, Bull. Seism. Soc. Am., 85, 17-31.
Atkinson, G.M. and T.C. Hanks, 1995, A high-frequency magnitude scale, Bull. Seism. Soc. Am., 85, 825-833.
Bakun, W.H. and C.M. Wentworth, 1997, Estimating earthquake location and magnitude from seismic intensity data, Bull. Seism. Soc. Am., 87, 1502-1521.
Casey, T., A. McGillvray, and P.W. Mayne, 1999, Results of seismic piezocone penetration tests performed in Memphis, Tennessee, Georgia Institute of Technology, GTRC Project E-20-E87.
Hough, S.E., J.G. Armbruster, L. Seeber, and J.F. Hough, 1999, On the modified Mercalli intensities and magnitudes of the 1811-1812 New Madrid, Central United States earthquakes, U.S. Geol. Surv. Open-File Rep. 99-565, 46 pp.
Johnston, A.C., 1996, Seismic moment assessment of earthquakes in stable continental regions – II. Historical seismicity, Geophys. J. Int., 125, 639-678.
Mueller, C., M. Hopper, and A. Frankel, 1997, Preparation of earthquake catalogs for the national seismic-hazard maps: contiguous 48 states, U.S. Geol. Surv. Open-File Rept. 97-464, 13 pp.
Mueller, K., J. Champion, M. Guccione, and K. Kelson, 1999, Fault slip rates in the modern New Madrid seismic zone, Science, 286, 1135-1138.
Newman, A., S. Stein, J. Weber, J. Engeln, A. Mao, and T. Dixon, 1999, Slow deformation and lower seismic hazard at the New Madrid seismic zone, Science, 284, 619-621.
Nuttli, O.W., 1973, The Mississippi Valley earthquakes of 1811 and 1812: intensities, ground motion and magnitudes, Bull. Seism. Soc. Am., 63, 227-248.
Street, R., 1982, A contribution to the documentation of the 1811-1812 Mississippi Valley earthquake sequence, Earthquake Notes, 53, 39-52.
Street, R., 1984, The historical seismicity of the central United States: 1811-1928, Final Rept., Contract 14-08-0001-21251, U.S. Geol. Surv., Append. A, 316 pp.
Street, R. and O. Nuttli, 1984, The central Mississippi Valley earthquakes of 1811-1812, in Proc. Symp. on "The New Madrid Seismic Zone," U.S. Geol. Surv. Open-File Rept. 84-770, 33-63.
Stuart, W.D., T.G. Hildenbrand, and R.W. Simpson, 1997, Stressing of the New Madrid seismic zone by a lower crustal detachment fault, J. Geophys. Res., 102, 27635-27650.
Wesnousky, S.G., 1999, Crustal deformation processes and the stability of the Gutenberg-Richter relationship, Bull. Seism. Soc. Am., 89, 1131-1137.
