Earthquakes clustering based on maximum likelihood estimation of point process conditional intensity function

Giada Adelfio and Marcello Chiodi

Dipartimento di Scienze Statistiche e Matematiche “Silvio Vianelli” (DSSM), University of Palermo, Italy

Abstract In this paper we propose a clustering technique to separate and identify the two main components of seismicity: the background seismicity and the triggered one. The proposed method assigns each earthquake either to a cluster of earthquakes or to the set of isolated events, according to the intensity function. This function is estimated by maximum likelihood methods, following the theory of point processes, while iteratively changing the assignment of the events. The technique combines nonparametric estimation methods with a computational procedure for the maximization of the point-process likelihood function.

1 Introduction

Because of the different seismogenic features controlling the kind of seismic release of clustered and background seismicity (Adelfio et al. (2005)), to describe the seismicity of an area in the space, time and magnitude domains it can be useful to study the features of independent events and of strongly correlated ones separately. The two kinds of events give different information on the seismicity of an area. For the short-term (or real-time) prediction of seismicity and to estimate parameters of phenomenological laws, a good definition of the earthquake clusters is needed. Furthermore, the prediction of the occurrence of large earthquakes (related to the assessment of seismic risk in space and time) is complicated by the presence of clusters of aftershocks, which are superimposed on the background seismicity and mask its principal characteristics. For these purposes a preliminary subdivision of a seismic catalog into background seismicity and clustered events is required. In this regard, a seismic-sequence detection technique is presented; it is based on maximum likelihood estimation (MLE) of the parameters that identify the conditional intensity function of a model describing seismic activity as a clustering process, which represents a slight modification of the ETAS model (Epidemic Type Aftershocks-Sequences model; Ogata (1988), Ogata (1998)).

2 Previous seismic clustering methods

Several methods have been proposed to decluster a catalog. We can identify two main classes: methods that require the definition of a rectangular space-time window around each mainshock, with size depending on the mainshock magnitude (Gardner and Knopoff (1974)), and methods close to single linkage: these identify aftershocks by modelling an interaction zone about each earthquake, assuming that any earthquake occurring within the interaction zone of a prior earthquake is an aftershock and should be considered statistically dependent on it (Reasenberg (1985)). The second class of methods has the advantage of not imposing a window on the final size or shape of the cluster, but the choice of the coefficients defining the link distances in space and time can be influenced by the researcher's experience.

To avoid such difficulties, Ogata et al. (2002) propose a stochastic method associating with each event a probability of being either a background event or an offspring generated by other events, based on the ETAS model for clustering patterns; a random assignment of events generates a thinned catalog, where events with a larger probability of being mainshocks are more likely to be included, and a nonhomogeneous Poisson process is used to model their spatial intensity. This procedure identifies the two complementary subprocesses of the seismic process: the background subprocess and the cluster (or offspring) subprocess.

3 Conditional intensity function in point processes

To provide a quantitative evaluation of future seismic activity, the conditional intensity function is crucial. It is proportional to the probability that an event of magnitude M will take place at time t at a point in space of coordinates (x, y). The conditional intensity function of a space-time point process can be defined as:

λ(t, x, y | H_t) = lim_{Δt,Δx,Δy→0} Pr_{ΔtΔxΔy}(t, x, y | H_t) / (Δt Δx Δy)    (1)

where H_t is the space-time occurrence history of the process up to time t; Δt, Δx, Δy are infinitesimal increments in time and space; and Pr_{ΔtΔxΔy}(t, x, y | H_t) represents the history-dependent probability that an event occurs in the volume {[t, t + Δt) × [x, x + Δx) × [y, y + Δy)}. The conditional intensity function completely identifies the features of the associated point process: if it is independent of the history and depends only on the current time and the spatial location, (1) defines a nonhomogeneous Poisson process; a constant conditional intensity gives a stationary Poisson process.

Other interesting point processes are defined by different conditional intensity functions. For example, we can consider probability models describing earthquake catalogs as realizations of a branching or epidemic-type point process, and models belonging to the wider class of Markov point processes, which assume that previous events have an inhibiting effect on the following ones. Models of the first type can be identified with self-exciting point processes, while the second are represented by self-correcting processes such as the strain-release model (Schoenberg and Bolt (2000)). The ETAS model is a self-exciting point process and represents the activity of earthquakes in a region during a period of time, following a branching structure. In particular, it can be considered an extension of the Hawkes model (Hawkes (1971)), a generalized Poisson cluster process that associates with each cluster center a branching process of descendants.
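The distinction between a purely background (Poisson) intensity and a self-exciting one can be illustrated with a small temporal sketch; the parameter values and event times below are hypothetical, and the triggering kernel is a simple modified-Omori term rather than the full space-time ETAS formulation.

```python
import math

def self_exciting_intensity(t, history, mu=0.5, k=0.3, c=0.01, p=1.1):
    """Temporal self-exciting (Hawkes/ETAS-like) conditional intensity:
    a constant background rate mu plus a modified-Omori contribution
    from every past event time in `history` (all values illustrative)."""
    triggered = sum(k / (t - tj + c) ** p for tj in history if tj < t)
    return mu + triggered

# With no history the intensity reduces to the stationary Poisson rate;
# just after a burst of events it is much larger.
quiet = self_exciting_intensity(3.0, [])
busy = self_exciting_intensity(3.0, [1.0, 2.5, 2.9])
```

Setting k = 0 recovers a stationary Poisson process; making `mu` a function of time and location only (independent of `history`) would give a nonhomogeneous Poisson process.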

4 The proposed clustering method

The clustering technique that we propose leads to an intensive computational procedure, implemented in R (R Development Core Team (2005)). It identifies a partition of events P_{k+1} formed by k + 1 sets: the set of background seismicity and the sets of clustered events (k disjoint sets). It iteratively changes the partition, assigning events either to the background seismicity or to the j-th cluster (j = 1, ..., k), given the current partition, on the basis of the variation in the likelihood function due to moving an event from one set to another. At each step, maximum likelihood estimation of the seismicity parameters is performed.

4.1 Main steps

First, a starting classification is obtained by a single-linkage-like procedure. Then the parameters of the intensity function of the ETAS model, describing the clustering phenomena in seismic activity, are estimated iteratively by maximum likelihood. Differently from the ETAS model, in our approach the space densities are estimated inside each cluster by a bivariate kernel estimator, with the smoothing constant evaluated by Silverman's formula (Silverman (1986)). Let [T_0, T_max] and Ω_xy be the time and space domains of observation, respectively; according to the theory of point processes, the log-likelihood function to be maximized is:

log L = Σ_{i=1}^{n} log λ(x_i, y_i, t_i) − ∫_{T_0}^{T_max} ∫_{Ω_xy} λ(x, y, t) dx dy dt    (2)

where:

λ(x, y, t) = μ(x, y) + Σ_{j=1, t_j<t}^{k} g_j(x, y) K_0 exp[α(m_j − m_0)] / (t − t_j + c_j)^{p_j}    (3)
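As a sketch of how a log-likelihood of the form (2) can be evaluated, the following hedged example approximates the integral term with a midpoint Riemann sum; the intensity passed in is a stand-in, not the estimated model (3), and all names and values are ours.

```python
import math

def log_likelihood(events, lam, t0, t1, x0, x1, y0, y1, n=40):
    """Log-likelihood of Eq. 2: sum of log-intensities at the observed
    events minus the intensity integrated over the space-time domain,
    approximated here by a midpoint Riemann sum on an n*n*n grid."""
    ll = sum(math.log(lam(x, y, t)) for (x, y, t) in events)
    dx, dy, dt = (x1 - x0) / n, (y1 - y0) / n, (t1 - t0) / n
    integral = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = x0 + (i + 0.5) * dx
                y = y0 + (j + 0.5) * dy
                t = t0 + (k + 0.5) * dt
                integral += lam(x, y, t) * dx * dy * dt
    return ll - integral

# Sanity check with a homogeneous intensity lam = 2 on the unit cube:
# the integral is exactly 2, so logL = n_events * log(2) - 2.
events = [(0.2, 0.3, 0.1), (0.7, 0.5, 0.8)]
ll = log_likelihood(events, lambda x, y, t: 2.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0)
```

In practice the integral of an ETAS-type intensity is usually split into its background and triggered parts, each of which may admit a partly analytical treatment; the brute-force grid above is only for illustration.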

In (3), t_j and m_j are the time and magnitude of the mainshock of cluster j, respectively; g_j(x, y) is the space intensity function of cluster j (computed using the n_j points belonging to the j-th cluster, including the j-th mainshock); and μ(x, y) is the background space intensity function (computed using the n_0 isolated points and the k mainshocks). In the evaluation of (3) we can use two different parametrizations: one with a common parameter c and a common parameter p for the whole partition, and one with k parameters c_j and k parameters p_j, varying over each cluster, for each found partition. The two choices can be compared at the end of the procedure.

To move events from their current position, the change in the likelihood function (2) for each event of coordinates (x_i, y_i, t_i, M_i) is needed: our iterative procedure moves seismic events, assigning each earthquake either to the set of background seismicity or to the j-th cluster, according to the term of (3) which is maximized. The iterative procedure stops when the current classification does not change after a whole loop of reassignments.
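The reassign/re-estimate loop can be sketched with a deliberately simplified 1-D analogue: "assign each event to the component whose term of (3) is largest" is replaced by nearest-center assignment, and the maximum likelihood step by a mean update. The function names and data are illustrative; this is not the authors' estimator.

```python
def cluster_1d(times, centers, sweeps=20):
    """Illustrative analogue of the reassign/re-estimate loop: events are
    assigned to the nearest 'cluster center' (standing in for the largest
    term of Eq. 3), then each center is re-estimated from its members
    (standing in for the ML step). Stops when a whole sweep changes nothing."""
    groups = {}
    for _ in range(sweeps):
        groups = {i: [] for i in range(len(centers))}
        for t in times:
            i = min(range(len(centers)), key=lambda i: abs(t - centers[i]))
            groups[i].append(t)
        new = [sum(g) / len(g) if g else centers[i] for i, g in groups.items()]
        if new == centers:
            break
        centers = new
    return centers, groups

# Two tight groups of hypothetical event times, two initial centers.
centers, groups = cluster_1d([0.1, 0.2, 0.3, 5.0, 5.2], [0.0, 6.0])
```

The real procedure differs in two essential ways: the "distance" is the contribution of each intensity term at the event, and the re-estimation step refits the ETAS parameters and kernel densities by maximum likelihood.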

5 Conclusion

Starting from a seismic catalog, the procedure proposed here returns a plausible separation of the components of seismicity, with clusters that have a good interpretability. This method can be the basis for analysing the complexity of the seismogenic processes relative to each sequence and to the background seismicity separately. Indeed, the parameters that control the way in which strain energy is released can differ strongly between clusters and isolated events; this can also be observed as significant differences in the parameter estimates of a phenomenological model applied to sets of earthquakes from the two different processes.

References

Adelfio, G., Chiodi, M., De Luca, L., Luzio, D., and Vitale, M. (2005). Southern-Tyrrhenian seismicity in space-time-magnitude domain. Submitted for publication.

Gardner, J. and Knopoff, L. (1974). Is the sequence of earthquakes in southern California, with aftershocks removed, Poissonian? Bulletin of the Seismological Society of America, 64, pages 1363–1367.

Hawkes, A. (1971). Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1), pages 83–90.

Ogata, Y. (1988). Statistical models for earthquake occurrences and residual analysis for point processes. Journal of the American Statistical Association, 83(401), pages 9–27.

Ogata, Y. (1998). Space-time point-process models for earthquake occurrences. Annals of the Institute of Statistical Mathematics, 50(2), pages 379–402.

Ogata, Y., Zhuang, J., and Vere-Jones, D. (2002). Stochastic declustering of space-time earthquake occurrences. Journal of the American Statistical Association, 97(458), pages 369–379.

R Development Core Team (2005). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0.

Reasenberg, P. (1985). Second-order moment of central California seismicity, 1969–1982. Journal of Geophysical Research, 90(B7), pages 5479–5495.

Schoenberg, F. and Bolt, B. (2000). Short-term exciting, long-term correcting models for earthquake catalogs.

Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Chapman and Hall, London.


The impact of the spatial uniform distribution of seismicity on probabilistic seismic hazard estimation

C. Beauval(*), S. Hainzl, and F. Scherbaum

Institute of Geosciences, University of Potsdam, Potsdam, Germany (*) now at IRD, Géosciences Azur, Sophia-Antipolis, France

Corresponding Author: C. Beauval, [email protected]

Abstract: In order to compute probabilistic seismic hazard, seismic sources must be characterized. In low-seismicity regions, seismicity is rather diffuse and faults are difficult to identify, so large areal source zones are mostly used. The hypothesis is that seismicity is uniformly distributed inside each areal seismic source zone. In this study, the impact of this hypothesis on probabilistic hazard estimation is quantified. The D-value (fractal dimension) measures the degree of clustering of earthquakes. Fractal seismicity distributions are generated, and probabilistic hazard is computed for each generated distribution; the impact quantifies the deviation from the uniform hazard value. Correlations between D-values and impacts are derived, so that in real cases uncertainty bounds on hazard can be deduced from D-value estimates.

1. Introduction
The first step towards the establishment of a seismic building code in a country is the evaluation of probabilistic seismic hazard. The method for estimating probabilistic hazard was initiated more than 30 years ago (Cornell, 1968; McGuire, 1976). Since then, several variants of the method have been proposed and many aspects of the probabilistic computation have been studied and improved, but one hypothesis dealing with the spatial characterisation of seismicity is still widely in use in low-seismicity regions: seismicity is assumed to be uniformly distributed inside large areal source zones. Indeed, because low-seismicity regions usually display diffuse seismicity and active faults are very difficult to identify, large areal source zones are defined according to different geophysical and geological criteria. However, it is well known that the distribution of seismicity in space is not uniform but clustered. One way of analyzing the spatial distribution of seismicity is to determine its fractal dimension (D-value). The D-value is an extension of the Euclidean dimension and measures the degree of clustering of earthquakes: in a 2-D space, D can take non-integer values and ranges from 0 (a point) to 2.0 (a uniform distribution in space). This study aims at characterizing the uncertainty on probabilistic hazard as a function of the knowledge of the degree of clustering of the seismicity distribution. The fractal dimension considered in this study is the correlation dimension (Grassberger and Procaccia, 1983). We first establish a correlation between D and the associated uncertainty on hazard, through the generation of synthetic seismicity distributions. Then we apply this approach in two regions of France and Germany and deduce, from the D-value estimates, the corresponding uncertainty bounds on probabilistic hazard estimations.

2. Synthetic source zone: which impact for which D?
A square seismic source zone is considered and the hazard is estimated for a grid of sites located inside the source zone (Fig. 1a,b). Synthetic seismicity distributions are generated over the source zone, decreasing the clustering of the seismicity from a line (D≅1.0) to a uniform distribution over the area (D≅2.0). The D-value is computed for each seismicity distribution by estimating the slope of the linear part of the correlation integral (Grassberger and Procaccia, 1983; Goltz, 1998). The correlation integral is established from 3000 events generated over the zone according to the spatial density probability, and D is computed over the interval [5–30] km.
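A minimal version of the correlation-integral estimate of D can be sketched as follows. The two-radius slope below is a crude stand-in for fitting the linear part of the log-log curve, and all point counts and radii are illustrative (unitless, on a unit square rather than a real source zone).

```python
import math, random

def correlation_integral(pts, r):
    """C(r): fraction of point pairs separated by less than r
    (Grassberger-Procaccia correlation integral)."""
    n, count = len(pts), 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

def d_value(pts, r1, r2):
    """Two-point slope of log C(r) versus log r: a crude estimate of the
    correlation dimension D (the paper fits the whole linear part)."""
    c1, c2 = correlation_integral(pts, r1), correlation_integral(pts, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

random.seed(0)
uniform = [(random.random(), random.random()) for _ in range(800)]
line = [(random.random(), 0.5) for _ in range(800)]
# A uniform 2-D scatter gives D near 2; points on a line give D near 1.
d_uniform = d_value(uniform, 0.05, 0.2)
d_line = d_value(line, 0.05, 0.2)
```

Boundary effects bias the slope slightly below the nominal dimension, which is one reason the paper restricts the fit to an intermediate distance range.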

Figure 1. Upper row: D=1.05, lower row: D=1.45. (a) Synthetic seismicity distribution, number of M≥3 events per year in 5×5 km² cells; square: grid of sites. (b) Impact of the uniform hypothesis on probabilistic hazard (T=475 yrs), see Eq. 1; (c) corresponding distribution of impacts.

A uniform distribution of seismicity over the source zone (i.e. D=2.0) results in identical acceleration values all over the grid. The impact on probabilistic hazard is defined as the difference between the uniform acceleration Aunif and the estimated one A, normalized by the uniform value and expressed in percentage:

I = (A_unif − A) / A_unif · 100    (1)

Therefore, positive impacts correspond to sites where the uniform distribution of seismicity results in an increase of hazard. The probabilistic seismic hazard is estimated according to the classical methodology, and earthquakes are assumed to follow a Poissonian process in time (Cornell, 1968; McGuire, 1976). An acceleration determined for a return period of, e.g., 475 yrs has a probability of 10% of being exceeded at least once over a time window of 50 years. Once the spatial density probability distribution is obtained, the seismic rate is distributed over the source zone. The overall seismicity rate is fixed to 100 events with M≥3.0 per year. For each unit, the truncated Gutenberg-Richter recurrence curve for magnitudes is modeled with a slope b=1.0 and a maximum magnitude fixed to 6.0. Furthermore, the minimum magnitude considered in the probabilistic computation is 4.5, and the ground-motion predictions of the attenuation relationship are truncated at +3σ above the median. When the seismicity is distributed uniformly, the resulting PGA (Peak Ground Acceleration) at 475 years is 0.29 g, which corresponds to sites of highest seismic hazard in countries like France or Germany.
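The truncated Gutenberg-Richter recurrence described above can be sketched as follows. The truncated-exponential normalization is a standard choice, not necessarily the authors' exact implementation; b = 1.0, M_max = 6.0 and the overall rate of 100 events of M ≥ 3.0 per year are taken from the text.

```python
import math

def gr_rate_above(m, m_min=3.0, m_max=6.0, b=1.0, total=100.0):
    """Annual rate of events with magnitude >= m under a truncated
    Gutenberg-Richter law, scaled so the rate above m_min equals
    `total` (standard truncated-exponential form)."""
    if m >= m_max:
        return 0.0
    beta = b * math.log(10.0)
    span = math.exp(-beta * (m_max - m_min))
    frac = (math.exp(-beta * (m - m_min)) - span) / (1.0 - span)
    return total * frac

# With b = 1, each unit of magnitude reduces the rate roughly tenfold;
# only events above the computation cutoff M = 4.5 contribute to hazard.
r3, r45 = gr_rate_above(3.0), gr_rate_above(4.5)
```

In the hazard computation this recurrence is scaled per cell by the spatial density, so clustered distributions concentrate the rate (and hence the hazard) near the cluster.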


However, the results of this study do not depend on absolute values since impacts correspond to a normalized difference with respect to the uniform hazard value. Examples of impact estimation are displayed in Fig. 1 for two synthetic seismicity distributions characterized by two different D-values (1.05 and 1.45). The impact distribution (Fig. 1b,c) can be considered as the uncertainty distribution characterizing the uniform hazard estimated at a site anywhere inside the source zone.

3. Impact on probabilistic hazard versus D-value of source zones
For a fixed theoretical D-value, different spatial patterns and slightly different computed D-values may result. Impacts are determined for 30 runs per theoretical D-value. From each distribution, the 15% and 85% percentiles are computed. Then, for each percentile, mean values are determined within 0.1-wide D-intervals. The results are displayed in Fig. 2a for the 475-yr return period. Sites where the uniform distribution of seismicity results in an increase of hazard are more numerous than sites where it results in a decrease. Therefore, the uniform distribution of seismicity within a large areal source zone leads globally to an over-estimation of hazard. If the seismicity embedded in an areal source zone is in reality distributed along a line, the impact of the uniform hypothesis on hazard can be as high as 70% for the 85% percentile. As expected, the 15% and 85% percentiles tend to zero as the D-value increases and the seismicity becomes less and less clustered.

Figure 2. Impacts of the spatial uniform distribution of seismicity on probabilistic hazard. (a) Results at 475 yrs; crosses: 15% and 85% percentiles of distributions from 30 runs per D-value; curves: mean percentiles computed over 0.1-wide intervals. (b) 15% and 85% percentiles at 475, 10^4 and 10^5 yrs; the thicker the line, the longer the return period. (c) Medians of impact distributions.

In Fig. 2b, the percentile curves for the return periods 10^4 and 10^5 years are superimposed on the results at 475 years. The median values of the impact distributions for the three return periods are also displayed (Fig. 2c). When the return period increases, the over-estimation slightly increases: the percentiles shift to higher values.

4. Application to two regions: according to D, what is the uncertainty on hazard?
The approach is applied in two regions of France and Germany. In the Alps, D-values are computed from the instrumental LDG catalog (homogeneous magnitude ML; Nicolas et al., 1998); all magnitudes higher than 3.0 are taken into account over the period 1975–1999. For the Lower Rhine Area, D-values are computed from the instrumental part of the German catalogue (Leydecker, 2004); all magnitudes higher than 2.5 over the period 1975–2004 are included. As depth determinations bear large uncertainties, distances are estimated in 2-D only. A spatial mapping of D-values in these two regions indicates that the D-value ranges roughly between 1.3 and 1.6 (see Fig. 3 for the German part). According to the results on synthetic seismicity distributions, this range of D-values corresponds to an over-estimation of hazard of up to 60% for the 85% percentile, or an under-estimation of hazard of ~25% for the 15% percentile.

5. Conclusions
If minimum and maximum bounds for the fractal D-value can be estimated for a region, the uncertainties on probabilistic hazard due to the uniform hypothesis in space can be bounded accordingly. A correlation between the impacts on hazard and the D-values of the source zones is derived from the generation of synthetic seismicity distributions. The results show that a uniform distribution of seismicity inside an areal source zone leads on average to conservative hazard estimates. The approach applied in the Alps and in the Lower Rhine Area yields an over-estimation of the probabilistic hazard of up to 60% for the 85% percentile. The only way to better constrain the spatial distribution of seismicity is to identify active faults; more geophysical studies and longer instrumental catalogues are required in low-seismicity countries such as France or Germany.

Figure 3. Application in the Lower Rhine Area. (a) Mapping of D: for each grid point, all earthquakes M≥2.5 closer than 70 km to the grid point are taken into account, and D is computed from the correlation integral over the range [8–50] km; dashed lines delineate the seismotectonic source zone 'Lower Rhine Area' (zoning by Leydecker and Aichele, 1998). (b) Misfit (least squares); only D-values with δD<0.1 are displayed. (c) Deduction of the impact of the uniform distribution of seismicity on the probabilistic hazard.

6. References
Cornell, C.A. (1968), Engineering seismic risk analysis, Bull. Seism. Soc. Am., 58, 1583-1606.
Goltz, C. (1998), Fractal and chaotic properties of earthquakes, Lecture Notes in Earth Sciences, Springer, 175 p.
Grassberger, P., and Procaccia, I. (1983), Measuring the strangeness of strange attractors, Physica, 9D, 189-208.
Leydecker, G. (2004), Earthquake catalogue for the Federal Republic of Germany and adjacent areas, http://www.bgr.de/quakecat.
Leydecker, G., and Aichele, H. (1998), The seismogeographical regionalisation for Germany: the prime example of third-level regionalisation, Geologisches Jahrbuch, E55, 85-98.
McGuire, R.K. (1976), Fortran computer program for seismic risk analysis, USGS Open-File Report 76-67.
Nicolas, M., Bethoux, N., and Madeddu, B. (1998), Instrumental seismicity of the Western Alps: a revised catalogue, Pageoph, 152, 707-731.


Producing Omori’s law from stochastic stress transfer and release

M. Bebbington1, and K. Borovkov2

1Massey University, Private Bag 11222, Palmerston North, New Zealand 2 University of Melbourne, Parkville 3052, Australia

Corresponding Author: M. Bebbington, [email protected]

Abstract: We present an alternative to the ETAS model for aftershocks. The continuous-time two-node Markov stress release/transfer model has one (mainshock) node which transfers 'stress' to the second (aftershock) node. When the hazard function at the second node is an exponential function of the stress, the model reproduces Omori's law. When the second node is also allowed tectonic input, which may be negative (i.e., aseismic slip), the frequency of events decays in a form that parallels the constitutive law derived by Dieterich (1994). This fits very well to the modified Omori law, with a wide possible variation in the p-value. We illustrate the model by fitting it to aftershock data from the Valparaiso earthquake of March 3, 1985, and show how it can be used in place of conventional 'stacking' procedures to determine regional p-values.

1. Introduction
The modified Omori formula for the frequency of aftershocks is

n(t) = K (t + c)^{-p},    (1)

where usually p ∈ (0.9, 1.8), differing from sequence to sequence. Stochastic models for temporal shallow seismicity must include the existence of aftershocks, catering for this variation in p. An example is the ETAS model (Ogata, 1988), incorporating (1) together with a constant background rate. We construct an alternative model which, besides reproducing (1), also allows for more general main-sequence behaviour.
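The modified Omori formula (1) and the expected event count it implies can be sketched directly; the parameter values below are hypothetical, with p chosen inside the quoted range.

```python
def omori_rate(t, K=50.0, c=0.05, p=1.1):
    """Modified Omori rate n(t) = K (t + c)^(-p); K, c and p are
    hypothetical values within the range quoted in the text."""
    return K * (t + c) ** (-p)

def expected_count(t1, t2, K=50.0, c=0.05, p=1.1):
    """Expected number of aftershocks in (t1, t2): the closed-form
    integral of n(t), valid for p != 1."""
    antiderivative = lambda t: K * (t + c) ** (1.0 - p) / (1.0 - p)
    return antiderivative(t2) - antiderivative(t1)

# Cross-check the closed form against a midpoint Riemann sum on (0, 1).
numeric = sum(omori_rate((i + 0.5) / 1000.0) for i in range(1000)) / 1000.0
exact = expected_count(0.0, 1.0)
```

The p > 1 versus p < 1 distinction matters for the long run: for p > 1 the total aftershock count converges, while for p ≤ 1 it diverges until the sequence merges into the background rate.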

2. Linked Stress Release Model (Liu et al., 1999; see also Bebbington and Harte, 2003)
The model is based on a stochastic version of elastic rebound. The seismic area is divided into "regions", with the stress (Benioff strain) in region i evolving as

X_i(t) = X_i(0) + ρ_i t − Σ_{j=1}^{n} θ_ij S^{(j)}(t)

where S^{(j)}(t) is the cumulative stress release in region j, and θ_ij is the proportion of the stress drop in region j transferred to region i (θ_ii = 1), which may be positive (damping) or negative (excitation). The {ρ_i} are the rates of tectonic input. The probability of an earthquake occurrence is controlled by a hazard function Ψ_i(x) = exp(μ_i + ν_i x). This results in a point process with conditional intensity

λ_i(t) = Ψ_i(X_i(t)) = exp[α_i + ν_i (ρ_i t − Σ_j θ_ij S^{(j)}(t))],    (2)

with α_i = μ_i + ν_i X_i(0), for each region i, which can be numerically fitted to data using maximum likelihood. Using a two-region model, we will represent mainshocks by one region (henceforth node), denoted A, and aftershocks by the second node (B), as shown in Figure 1. We assume that there is no transfer of stress from B to A, i.e., θ_AB = 0.
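A hedged numerical sketch of the conditional intensity (2) for one node follows; the parameter values, the event-list format and the transfer coefficients are illustrative, not fitted values.

```python
import math

def node_intensity(t, events, alpha, nu, rho, theta):
    """Conditional intensity (Eq. 2) for one node of the linked stress
    release model: exp(alpha + nu*(rho*t - sum of transferred stress
    drops)). `events` is a list of (time, stress_drop, source_node) and
    theta[src] the transfer coefficient from that node."""
    released = sum(theta[src] * s for (tj, s, src) in events if tj < t)
    return math.exp(alpha + nu * (rho * t - released))

# A mainshock at node A with a negative (excitatory) transfer coefficient
# raises the intensity at the aftershock node B.
params = (-4.0, 0.1, 0.02, {'A': -0.9, 'B': 1.0})
before = node_intensity(0.9, [(1.0, 10.0, 'A')], *params)
after = node_intensity(1.1, [(1.0, 10.0, 'A')], *params)
```

Between events the intensity rises exponentially with the tectonic loading ρ t; each stress release at B itself (theta['B'] = 1) drops the intensity back down, which is the elastic-rebound part of the model.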

Figure 1 Two-node stress transfer aftershock model

3. Dieterich's Formula
The two-node model produces (Borovkov and Bebbington, 2003) a decay formula

f(t) = ψ / [e^{-st} + (aψ/s)(1 − e^{-st})],    (3)

where s = ν_B ρ_B, ψ = Ψ_B(0), a = E[e^{ν_B ξ_B}] − 1, and ξ_B is the 'aftershock size' (stress). Dieterich (1994) created a general analytic model of earthquake nucleation and time to instability that leads to an aftershock rate

R(t) = r / {[exp(−Δτ/(Aσ)) − 1] exp(−t/t_a) + 1}.    (4)

By setting

s = 1/t_a,   ψ = r exp[Δτ/(Aσ)],   a = 1/(r t_a),

(3) and (4) are equivalent, and hence, provided s > 0, (3) gives the Omori law for t < t_a, after which seismicity merges into the background rate given by ρ > 0. Note that the two-node model is fitted to observed event data, rather than to physical quantities as in formula (4), and provides a (stochastic) mechanism for variation around the mean rate.

4. Variation in p
While (3) gives the Omori law if s > 0, it is also possible for ρ (and hence s) to be negative, representing aseismic stress decrease. Numerically fitting (3) to the modified Omori law (1) produces 0.5 < p < 1 (s > 0) and 1 < p < 1.5 (s < 0) for reasonable values of s, ψ and a (see Figure 2).

Figure 2 Variation in decay rate with s

5. The Valparaiso (Chile) Earthquake of March 3, 1985
The Valparaiso aftershock sequence has 88 events of magnitude 5.0 or greater in the 802 days following the mainshock. The decay rate fits the modified Omori formula very well, with p = 1.038 (Kisslinger, 1988). Fitting the intensity (2) to the sequence of events as shown in Figure 3 produces

s = νρ = 0.0248 × (−0.0069) = −0.000172,
ψ = Ψ(0) = e^α = e^{4.9677} = 143.7,

and, averaging over the events,

a = E[e^{νξ}] − 1 = Σ_k exp(0.0248 × 10^{0.75(M_k − 5.0)}) / 88 − 1 = 0.153.

Numerically matching the two-node decay equation (3) to the modified Omori formula (1) then gives p = 1.046.

Figure 3 Valparaiso aftershock sequence and fitted two-node intensity

6. Aftershock Stacking
Felzer et al. (2003) estimate the p-value for California by scaling and then stacking aftershock sequences. Considering the 9 sequences (1983–1999, M ≥ 5.8) identified in that paper, we find (from the PDE catalog) 68 aftershocks of M ≥ 4.8 within (0.02, 180) days of the mainshock. Fitting the two-node model (7 parameters), we get a ΔAIC of 1.85 over the full model (8 parameters, i.e., θ_AB ≠ 0), with estimates

α = (−6.25, −4.27),   ν = (0.016, 0.089),   ρ = (0.041, −0.027),   Θ = [1 0; −0.983 1].

Now, averaging over the aftershocks,

a = ave[exp(0.089 × 10^{0.75(M − 4.8)})] − 1 = 0.619,

averaging over the mainshocks,

ψ = ave[exp(0.089 × 0.983 × 10^{0.75(M − 4.8)})] = 7216.4,

and

s = νρ = 0.089 × (−0.027) = −0.0024.

This gives p = 1.074, compared to p = 1.08 from Felzer et al. (2003). Kisslinger and Jones (1991) estimate p = 1.11 for Southern California.

7. Summary
The two-node model can reproduce both the Omori formula and Dieterich's formula. Through the latter, it achieves a range of at least 0.5 < p < 1.7. Fitting the stress release model to an aftershock sequence confirms that the model decays correctly. The two-node model provides a regional-scale model for both main-sequence and aftershock events, and provides an alternative to existing methods of 'stacking' aftershocks for regional analyses.

8. References
Bebbington, M. and Harte, D. (2003), The linked stress release model for spatio-temporal seismicity: formulations, procedures and applications, Geophys. J. Int., 154, 925-946.
Borovkov, K. and Bebbington, M. (2003), A stochastic two-node stress transfer model reproducing Omori's law, Pure Appl. Geophys., 160, 1429-1455.
Dieterich, J. (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99, 2601-2618.
Felzer, K.R., Abercrombie, R.E. and Ekstrom, G. (2003), Secondary aftershocks and their importance for aftershock forecasting, Bull. Seismol. Soc. Amer., 93, 1433-1448.
Kisslinger, C. and Jones, L.M. (1991), Properties of aftershock sequences in southern California, J. Geophys. Res., 96, 11947-11958.
Liu, J., Chen, Y., Shi, Y. and Vere-Jones, D. (1999), Coupled stress release model for time-dependent seismicity, Pure Appl. Geophys., 155, 649-667.
Ogata, Y. (1988), Statistical models for earthquake occurrences and residual analysis for point processes, J. Amer. Statist. Assoc., 83, 9-27.

Analysis of Aftershocks in a Lithospheric Model with Seismogenic Zone Governed by Damage Rheology

Yehuda Ben-Zion1, and Vladimir Lyakhovsky2

1Department of Earth Sciences, University of Southern California Los Angeles, CA, 90089-0740, USA 2 Geological Survey of Israel, Jerusalem 95501, Israel

Corresponding Author: Yehuda Ben-Zion, [email protected]

Abstract: We perform analytical and numerical studies of aftershock sequences following abrupt steps of strain in a rheologically layered model of the lithosphere. The model consists of a weak sedimentary layer, over a seismogenic zone governed by a visco-elastic damage rheology, underlain by a visco-elastic upper mantle. The damage rheology accounts for fundamental irreversible aspects of brittle rock deformation and is constrained by laboratory data of fracture and friction experiments. A 1-D version of the visco-elastic damage rheology leads to an exponential analytical solution for aftershock rates. The corresponding solution for a 3-D volume is expected to be a sum of exponentials. The exponential solution depends primarily on a material parameter R, given by the ratio of the timescale for damage increase to the timescale for gradual inelastic deformation, and to a lesser extent on the initial damage and a threshold strain state for material degradation. The parameter R is also inversely proportional to the degree of seismic coupling across the fault. Simplifying the governing equations leads to a solution following the modified Omori power law decay with an analytical exponent p = 1. In addition, the results associated with the general exponential expression can be fitted for various values of R with the modified Omori law. The same holds for the decay rates of aftershocks simulated numerically using the 3-D layered lithospheric model. The results indicate that low R-values (e.g., R ≤ 1), corresponding to cold brittle material, produce long Omori-type aftershock sequences with high event productivity, while high R-values (e.g., R ≥ 5), corresponding to hot viscous material, produce a short diffuse response with low event productivity. The frequency-size statistics of aftershocks simulated in 3-D cases with low R-values follow the Gutenberg-Richter power law relation, while events simulated for high R-values are concentrated in a narrow magnitude range.
Increasing the thickness of the weak sedimentary cover produces results that are similar to those associated with higher R-values. Increasing the assumed geothermal gradient reduces the depth extent of the simulated earthquakes. The magnitude of the largest simulated aftershocks is compatible with the Båth law for a range of the dynamic damage-weakening parameter. The results provide a physical basis for interpreting the main observed features of aftershock sequences in terms of basic structural and material properties.

1. Introduction
Large earthquakes are typically followed by a large number of smaller earthquakes referred to as aftershocks, with decay rates that can be described (e.g., Utsu et al., 1995) by the modified Omori law

ΔN/Δt = K (c + t)^{−p},    (1)

where N is the cumulative number of events with magnitude larger than a prescribed cutoff, t is the time after the mainshock, and K, c, and p are empirical constants. However, aftershock decay rates can also be fitted (e.g., Gross and Kisslinger, 1994; Narteau et al., 2002) with exponential and other functions. The occurrence and properties of aftershocks exhibit a number of spatio-temporal variations that are unlikely to be associated with statistical fluctuations. The depth extent of aftershocks is correlated with the geothermal gradient (Magistrale, 2002), and the early parts of aftershock sequences may extend deeper than the regular ongoing seismicity (Rolandone et al., 2004). Cold continental regions with low heat flow have high aftershock productivity and long event sequences, while hot continental regions and oceanic lithosphere have low aftershock productivity and short event sequences with fast decay (e.g., Kisslinger and Jones, 1991; Utsu et al., 1995; Utsu, 2002; McGuire et al., 2005). Geothermal areas and volcanic regions with high fluid (magma, water) activity and high heat flow often have swarms of events without a clear separation between mainshocks and aftershocks (Mogi, 1967). In this work we attempt to establish a physical basis for aftershock behavior using a lithospheric model with a seismogenic crust governed by a continuum damage rheology that accounts for key observed features of irreversible rock deformation (Lyakhovsky et al., 1997; Hamiel et al., 2004).

2. Results
A detailed background on the employed damage rheology and full derivation of the results are given by Ben-Zion and Lyakhovsky (2006). Here we outline the main analytical results for a 1-D version of the model, and show illustrative examples associated with the analytical results and 3-D numerical simulations. For a 1-D version of the model, associated with uniform damage evolution, the stress-strain relation is

σ = 2μ_0 (1 − α) ε,  (2)

where μ_0(1 − α) is the effective elastic modulus, with μ_0 being the initial rigidity of the undamaged solid.

The 1-D version of the kinetic equation for damage evolution is

dα/dt = C_d (ε² − ε_0²),  (3)

where ε_0 is a threshold value separating states of strain associated with material degradation and healing, and C_d is a rate constant for material degradation. For positive damage evolution (ε > ε_0), the damage-related inelastic strain rate before the final macroscopic failure is given by

e = C_v (dα/dt) σ,  (4)

where C_v (dα/dt) is the inverse of an effective damage-related viscosity. Since we are interested in the relaxation process in a region following the occurrence of a large event, we assume a constant total strain boundary condition. This implies that the rate of elastic strain relaxation is equal to the viscous strain rate, i.e., 2 dε/dt = −e (the factor 2 stems from the common definitions of the strain and strain-rate tensors). With this set of equations and boundary conditions, the result for damage evolution is

dα/dt = C_d {ε_s² exp[R((1 − α)² − (1 − α_s)²)] − ε_0²},  (5)

where α_s and ε_s are the initial levels of damage and strain, respectively, and R = μ_0 C_v is a material parameter. As demonstrated below analytically and numerically, the parameter R plays a dominant role in the aftershock behavior. Ben-Zion and Lyakhovsky (2006) show that R characterizes the ratio between a brittle damage timescale and a viscous relaxation timescale, and that R is inversely proportional to the degree of seismic coupling across the fault. To convert the evolution of the damage state variable α to the evolution of the aftershock number N, we assume that α increases linearly with the aftershock number, α = α_s + φN. We thus get

φ dN/dt = C_d {ε_s² exp[R((1 − α_s − φN)² − (1 − α_s)²)] − ε_0²}.  (6)

The corresponding solution for a 3-D volume with many interacting elements is expected to be associated with a sum of exponentials similar to (6). Figure 1 illustrates results based on equation (6) for several values of R and the simplifying limit values α_s = 0 and ε_0 = 0. Small R-values, corresponding to highly brittle cases with little viscous relaxation, produce long aftershock sequences with slow decay and high aftershock productivity. In contrast, high R-values, corresponding to relatively viscous cases, produce short aftershock sequences with fast decay and low aftershock productivity. While the aftershock rates are generated using the exponential equation (6), the results can be fitted well by the modified Omori power law relation K(c + t)^(−p). The good fits obtained with the modified Omori law suggest that the leading term of equation (6) is associated with a power law. This can be derived explicitly by making two simplifications. Assuming that φN is sufficiently small so that the (φN)² term in (6) can be neglected gives

φ dN/dt = C_d {ε_s² exp[−2φNR(1 − α_s)] − ε_0²}.  (7)
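The qualitative behavior shown in Figure 1 can be reproduced by integrating equation (6) numerically in the limit α_s = 0, ε_0 = 0. The sketch below uses a simple forward-Euler scheme; the values of φ and of the initial rate Ṅ_0 = C_d ε_s²/φ are made up for illustration and are not those of the paper.

```python
import math

def integrate_aftershocks(R, N0_rate=10.0, phi=0.001, t_max=100.0, dt=0.01):
    """Forward-Euler integration of eq. (6) with alpha_s = 0, eps_0 = 0.

    Returns the total aftershock count N(t_max) for damage parameter R.
    N0_rate is the initial rate C_d * eps_s**2 / phi (illustrative value).
    """
    N = 0.0
    for _ in range(int(t_max / dt)):
        # rate = N0 * exp(R * ((1 - phi*N)**2 - 1)), decaying as N grows
        rate = N0_rate * math.exp(R * ((1.0 - phi * N) ** 2 - 1.0))
        N += rate * dt
    return N

for R in (0.1, 1.0, 10.0):
    print(R, integrate_aftershocks(R))
```

Consistent with the text, the small-R run yields the longest, most productive sequence and the large-R run the shortest; the chosen φ keeps φN below 1 so the linear mapping α = α_s + φN stays meaningful.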

If we assume further that the initial strain ε_s induced by the mainshock is large enough so that ε_0² << ε_s² exp[−2φNR(1 − α_s)], we get a decay rate following the modified Omori law

dN/dt = Ṅ_0 / (2φR(1 − α_s) Ṅ_0 t + 1) = [Ṅ_0 / (2φR(1 − α_s) Ṅ_0)] · 1/(t + 1/(2φR(1 − α_s) Ṅ_0)),  (8a)

where Ṅ_0 = C_d ε_s²/φ is the initial aftershock rate. Equation (8a) provides a mapping between the parameters of the modified Omori law and the parameters of our damage rheology model (in a 1-D analysis of uniform deformation):

k = 1/(2φR(1 − α_s)),  c = 1/(2φR(1 − α_s) Ṅ_0) = k/Ṅ_0,  p = 1.  (8b)
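The mapping above from damage-model parameters to Omori parameters is a one-line computation. The sketch below (with hypothetical values of φ, R, α_s and Ṅ_0) also checks the built-in consistency of the mapping: the Omori rate k/(c + t) at t = 0 must equal the initial rate Ṅ_0.

```python
# Mapping from damage-rheology parameters to modified Omori parameters
# in the 1-D uniform-deformation analysis. Parameter values are illustrative.

def omori_params(phi, R, alpha_s, N0_rate):
    """Return (k, c, p) of the modified Omori law k/(c + t)**p."""
    k = 1.0 / (2.0 * phi * R * (1.0 - alpha_s))
    c = k / N0_rate
    p = 1.0
    return k, c, p

phi, R, alpha_s, N0_rate = 0.001, 1.0, 0.0, 10.0
k, c, p = omori_params(phi, R, alpha_s, N0_rate)
# Consistency check: the Omori rate k/(c + 0)**p equals the initial rate.
print(k, c, p, k / c)
```

Note that k is inversely proportional to R, so a more viscous (higher-R) region maps onto a less productive Omori sequence, as in the numerical results.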

Numerical simulations in a 3-D lithospheric structure confirm that the material parameter R is the major factor controlling the decay rate, duration and productivity of aftershocks, as well as their frequency-size statistics. The simulations also show that a sedimentary layer thicker than a few km produces weaker aftershock sequences with a smaller number of events and faster decay, similar to what is produced by high R-values. Increasing the assumed temperature gradient leads to a smaller number of events, but the aftershock decay rate and duration remain largely unaffected (if the R-value is kept the same). However, an increasing temperature gradient reduces the overall depth extent of the brittle seismogenic zone and of the simulated aftershocks. In addition, the maximum hypocenter depth decreases with time after the mainshock due to a transient deepening of the brittle-ductile transition produced by the high strain rates generated by the mainshock (Figure 2). These results are compatible with observed correlations between the depth of seismicity and the temperature gradient (Magistrale, 2002), and with the temporal evolution of the depth of aftershocks following the 1992 Landers, CA, earthquake (Rolandone et al., 2004).

References
Ben-Zion, Y. and V. Lyakhovsky, 2006. Analysis of aftershocks in a lithospheric model with seismogenic zone governed by damage rheology, Geophys. J. Int., in press.
Gross, S. J., and C. Kisslinger, 1994. Tests of models of aftershock rate decay, Bull. Seismol. Soc. Am., 84, 1571-1579.
Hamiel, Y., Y. Liu, V. Lyakhovsky, Y. Ben-Zion and D. Lockner, 2004. A visco-elastic damage model with applications to stable and unstable fracturing, Geophys. J. Int., 159, 1155-1165, doi:10.1111/j.1365-246X.2004.02452.x.
Lyakhovsky, V., Ben-Zion, Y. and Agnon, A., 1997. Distributed damage, faulting and friction, J. Geophys. Res., 102, 27,635-27,649.
Magistrale, H., 2002. Relative contributions of crustal temperature and composition to controlling the depth of earthquakes in southern California, Geophys. Res. Lett., 29(10), 1447, doi:10.1029/2001GL014375.
McGuire, J. J., M. S. Boettcher, and T. H. Jordan, 2005. Foreshock sequences and short-term earthquake predictability on East Pacific Rise transform faults, Nature, 434, 457-461.
Mogi, K., 1967. Earthquakes and fractures, Tectonophysics, 5, 35-55.
Narteau, C., P. Shebalin and M. Holschneider, 2002. Temporal limits of the power law aftershock decay rate, J. Geophys. Res., 107, 2359.
Rolandone, F., R. Burgmann and R. M. Nadeau, 2004. The evolution of the seismic-aseismic transition during the earthquake cycle: Constraints from the time-dependent depth distribution of aftershocks, Geophys. Res. Lett., 31, L23610, doi:10.1029/2004GL021379.
Utsu, T., Y. Ogata, and R. S. Matsu'ura, 1995. The centenary of the Omori formula for a decay law of aftershock activity, J. Phys. Earth, 43, 1-33.
Utsu, T., 2002. Statistical features of seismicity, in International Handbook of Earthquake and Engineering Seismology, Part A.

[Figure 1 plot: number of aftershocks per day versus time (0-100 days) for R = 0.1, 1 and 10, where R = (timescale of fracturing)/(timescale of stress relaxation). Small R: long, active aftershock sequences are expected; large R: short, diffuse sequences; the curves follow a modified Omori law with p = 1.]

Fig. 1. Analytical results based on equation (6) for aftershock decay rates in a 1-D version of the damage model for different R-values. The dotted line is a least-squares fit of the exponential analytical solution with the modified Omori power law.


Fig. 2. Depth of aftershock hypocenters in a 3-D model realization with R = 0.1, sedimentary cover of 1 km, and temperature gradient of 20 °C/km. Increasing thermal gradient and/or R lead to a thinner seismogenic zone. The maximum event depth decreases with time from the mainshock.

A STATISTICAL SEISMOLOGY SOFTWARE SUITE

Ray Brownrigg Statistical Computing Manager, School of Mathematics, Statistics and Computer Science, Victoria University of Wellington, Wellington, NEW ZEALAND

Corresponding author: R. Brownrigg, [email protected]
Statistical seismology is a relatively new field which applies statistical methodology to earthquake data in an attempt to raise new questions about earthquake mechanisms, to characterise aftershock sequences and to help make progress towards earthquake prediction. A suite of R packages, known as SSLib (Statistical Seismology Library), is available for use in statistical seismology in general, but also in some applications outside this field. This poster session will introduce R and the SSLib packages and demonstrate some of the exploratory data analysis functions available within the packages.

An application of the Dieterich model for seismicity rate change to the 1997 Umbria-Marche, central Italy, seismic sequence.

Flaminia Catalli and Rodolfo Console

Istituto Nazionale di Geofisica e Vulcanologia, via di Vigna Murata 605, 00142, Rome, Italy Corresponding Author: F. Catalli, catalli@ingv.

Abstract: We extend the model of Dieterich (1994) for the seismicity rate change after a stress perturbation to a model that can reproduce the distribution of events during and after a seismic sequence (i.e. after a few stress perturbations). The major events of the sequence are thus treated as interacting with each other, while the secondary events are not. A first important result of our effort in modeling this forecast algorithm is that it has just one free parameter to be evaluated; all the others can be assessed from observations or physical relations. The model is implemented in a code that tests its validity and determines the best value of the free parameter on real seismicity. The code also compares it with a plain time-independent Poissonian model through likelihood-based methods. We applied the model to the 1997 Umbria-Marche (central Italy) seismic sequence with the aim of investigating whether it is possible to explain the migration of seismicity from NW toward SE. The results show a qualitatively good agreement between the expected rates and the data. Moreover, the simulations show a movement of expected seismicity in the NW-SE direction. We aim to test these observations objectively in the continuation of this work.

1. Introduction
The spatial evolution of seismicity depends on the stress transfer and on the static friction coefficient. The Coulomb failure model is founded precisely on the hypothesis that stress increases promote, and stress decreases inhibit, fault failure, depending on the static friction coefficient. The validity of this model has been extensively supported in numerous papers published in the literature (Stein et al., 1997; Gomberg et al., 2000; King and Cocco, 2001; Stein, 1999; Kilb et al., 2002; Toda and Stein, 2003; Nostro et al., 2005; Toda et al., 2005; and many others). All these papers show a correlation of the Coulomb stress change with the locations of most events and with increased seismicity rates. But stress change alone cannot explain the time behavior of seismicity, such as the Omori decay of aftershocks, or a phenomenon like the migration of seismicity away from the fault. The Coulomb model is a static model of stress transfer, and it is essential to introduce time dependence into the modeling to explain such phenomena. It is also important to consider that seismicity depends not only on the stress transfer or the friction coefficient, but also on the background seismicity of the studied area and on the physical characteristics of the fault. This is why we coupled the concept of Coulomb stress transfer with the rate-state model of Dieterich (1994). This link is considered in some current papers (Toda and Stein, 2003; Toda et al., 2005) as the starting point of more useful and accurate earthquake forecast models. This type of modeling is more complicated than the simple Coulomb stress transfer model, and some parameters are poorly constrained. Nevertheless it appears to explain many fault behaviors and it can justify the time delay between mainshocks and later events. Furthermore, it gives concrete meaning to "promoting" (or "inhibiting") fault failure, because very small stresses may unleash, or repress, earthquakes in a population of faults.
To model repeated stress perturbations ΔCFF we used the expression of the seismicity rate R as a function of the state variable γ at the reference shear stressing rate τ̇_r (Dieterich, 1994; Toda et al., 2005):

R = r / (γ τ̇_r),  (1)

where r is the background seismicity rate of the considered area. In this way, it is the state variable that evolves with the stressing history:

γ = 1/τ̇_r,  (2)

in the steady state;

γ_n = γ_{n−1} exp(−ΔCFF / (Aσ)),  (3)

when an earthquake strikes;

γ_{n+1} = 1/τ̇_r + (γ_n − 1/τ̇_r) exp(−Δt τ̇_r / (Aσ)),  (4)

during a time step Δt, where Aσ is a physical parameter of the fault.
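The updates (1)-(4) can be sketched in a few lines: start at the steady state, apply an instantaneous Coulomb stress step, then relax back over the characteristic time Aσ/τ̇_r. All parameter values below are hypothetical, not the values fitted for Umbria-Marche.

```python
import math

# Hypothetical values: background rate (events/yr), reference stressing
# rate (MPa/yr), A*sigma (MPa), and a +0.01 MPa Coulomb stress step.
r, tau_r, A_sig = 5.0, 0.01, 0.002
dCFF = 0.01

def rate(gamma):
    """Eq. (1): seismicity rate from the state variable gamma."""
    return r / (gamma * tau_r)

gamma = 1.0 / tau_r                       # eq. (2): steady state, rate == r
gamma *= math.exp(-dCFF / A_sig)          # eq. (3): jump when the event strikes
rate_jump = rate(gamma)                   # strongly elevated rate just after

dt, rates = 0.01, []                      # follow 2 yr of relaxation
for _ in range(200):
    gamma = 1.0 / tau_r + (gamma - 1.0 / tau_r) * math.exp(-dt * tau_r / A_sig)  # eq. (4)
    rates.append(rate(gamma))

print(rate_jump, rates[-1])
```

A positive stress step lowers γ and so raises the rate far above r; the rate then decays back toward the background value over a few multiples of Aσ/τ̇_r, which is the Omori-like transient of the Dieterich model.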

2. Application to the 1997 Umbria-Marche (central Italy) sequence
We applied the model to the 1997 Umbria-Marche sequence, central Italy. For this purpose we adopt the source parameters for the nine mainshocks listed in Table 1 of Nostro et al. (2005). One of the final goals of this application is to answer the question of whether it is possible to interpret the migration of the Colfiorito sequence toward SE with a rate-state type model. We believe that this model could have the capability to explain the phenomenon. The choice of the reference seismicity rate plays an important role in the Dieterich model, as already observed by Toda et al. (2005). In this regard, we represented the background seismicity rate r in the model by smoothing, over the considered area, the seismicity of the ten years preceding 1997 (Figure 1-a). The background seismicity rate and the reference shear stress rate are not independent in a specific area. Under a number of assumptions, it can be shown that these two parameters are related by the following relationship (Console et al., 2005):

τ̇_r = r π (b / (1.5 − b)) (7 M_0* / 16)^(2/3) Δσ^(1/3) (1 − 10^(−b(m_max − m_0))),  (5)

where b is the Gutenberg-Richter slope, M_0* is the seismic moment of an earthquake of magnitude m_0, Δσ is the stress drop (assumed constant for all the earthquakes), and m_0 and m_max are the minimum and maximum magnitudes, respectively. Defining r in each point (i, j) of a grid, we can thus determine τ̇_r(i, j) and finally t_a(i, j) by the relation:

t_a = Aσ / τ̇_r,  (6)

just by selecting a value for Aσ, the unique free parameter of our model. This is an important improvement to the model with respect to previous works such as that of Toda et al. (2005).


Figure 1: For all the maps: depth = 5 km and Aσ = 0.0021 MPa. The color scale represents the number of events expected per 1000 km² per year. (a) Smoothed seismicity of the years 1986-1996 in the studied area; the superimposed epicenters are from Chiarabba et al. (2005). (b) Expected seismicity rate before the 10/03/97 event, marked with the star; epicenters are from Chiarabba et al. (2005). (c) As (b), before the 10/14/97 event; epicenters are from Chiarabba and Amato (2003). (d) As (b) and (c), 8 years after 01/01/97; black stars mark the occurred mainshocks (the ninth is outside the area).

3. Discussion and conclusion
We observe a qualitatively good agreement between the expected seismicity rate changes and the real data (Figure 1-b,c). Moreover, this correlation seems to improve if we use more accurate earthquake locations (catalogue from Chiarabba and Amato (2003)). This highlights the importance of the choice of the catalogue for testing the match of the model with real observations. It appears that the model could explain the migration of the observed seismicity in the sense of its direction: the NW-SE directivity is correctly pointed out by the maps (Figure 1-b,c,d). But it is evident that, with our model and especially with all the crude assumptions we made, in a few cases we cannot yet justify the unilateral direction the seismicity prefers. We believe that a more accurate choice of the reference seismicity could really improve the results of the model; for example, it could be very important to establish if and where there were shadow zones in the studied area that could inhibit the expected seismicity we now observe at some unexpected points. In the imminent continuation of this work we are going to quantify objectively the agreement between expectations and data and to optimize the free parameter on real seismicity. We believe that the background seismicity is an essential ingredient of such a model and a choice to be evaluated more accurately.

5. Acknowledgement
Improvements to this work were made thanks to some stimulating discussions with Massimo Cocco, whom we gratefully thank. We are thankful to Claudio Chiarabba, Lauro Chiaraluce and Concetta Nostro for providing some useful data. We have really appreciated some stimulating comments from Lauro Chiaraluce and Anna Maria Lombardi during the course of the work.

6. References

Chiarabba, C., and Amato, A., (2003), Vp and Vp/Vs images in the Mw 6.0 Colfiorito fault region (central Italy): a contribution to the understanding of seismotectonic and seismogenic processes, J. Geophys. Res., 108(B5), 2248, doi:10.1029/2002JB002166.
Chiarabba, C., Jovane, L., and DiStefano, R., (2005), A new view of Italian seismicity using 20 years of instrumental recordings, Tectonophysics, 395, 251-268.
Console, R., Murru, M., and Catalli, F., (2005, in press), Physical and stochastic models of earthquake clustering, Tectonophysics.
Dieterich, J.H., (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99(B2), 2601-2618.
Gomberg, J., Beeler, N.M., and Blanpied, M.L., (2000), On rate-state and Coulomb failure models, J. Geophys. Res., 105, 7857-7872.
Kilb, D., Gomberg, J., and Bodin, P., (2002), Aftershock triggering by complete Coulomb stress changes, J. Geophys. Res., 107, doi:10.1029/2001JB000202.
King, G.C.P., and Cocco, M., (2001), Fault interaction by elastic stress changes: new clues from earthquake sequences, Adv. Geophys., 44, 1-39.
Nostro, C., Chiaraluce, L., Cocco, M., Baumont, D., and Scotti, O., (2005), Coulomb stress changes caused by repeated normal faulting earthquakes during the 1997 Umbria-Marche (central Italy) seismic sequence, J. Geophys. Res., 110, B05S20, doi:10.1029/2004JB003386.
Stein, R.S., (1999), The role of stress transfer in earthquake occurrence, Nature, 402, 605-609.
Stein, R.S., Barka, A.A., and Dieterich, J.H., (1997), Progressive failure on the North Anatolian fault since 1939 by earthquake stress triggering, Geophys. J. Int., 128, 594-604.
Toda, S., and Stein, R.S., (2003), Toggling of seismicity by the 1997 Kagoshima earthquake couplet: A demonstration of time-dependent stress transfer, J. Geophys. Res., 108(B12), 2567, doi:10.1029/2003JB002527.
Toda, S., Stein, R.S., Richards-Dinger, K., and Bozkurt, S.B., (2005), Forecasting the evolution of seismicity in southern California: animations built on earthquake stress transfer, J. Geophys. Res., 110, B05S16, doi:10.1029/2004JB003415.

The relationship between the earthquake rate change and the gradient of stress change

Y. I. Chen 1, C. S. Chen1, W. H. Yang1, and K. F. Ma2

1 Institute of Statistics, National Central University, Jhongli, Taoyuan 32054, Taiwan
2 Department of Earth Science, National Central University, Jhongli, Taoyuan 32054, Taiwan

Corresponding Author: Y. I. Chen; [email protected]

Abstract: It has been widely discussed that the seismicity rate change before and after a mainshock is related to the mainshock-induced stress change. To see whether this holds for the complex fault rupture of the 1999 Chi-Chi, Taiwan, earthquake, we use a universal Kriging method to explore how well the depth-dependent Coulomb stress change explains the seismicity rate change. We also make such an exploration by correlating the rate change with the magnitude of the gradient of the stress change. The results demonstrate that the gradient of the stress change is preferable to the stress change itself for describing the seismicity rate change.

1. Introduction
One problem of interest regarding stress transfer leading to changes in the probability of earthquake occurrence (Steacy et al., 2005) is how the mainshock-induced Coulomb stress change is related to the seismicity rate change. Toda et al. (1998) found that the rate change is related to the stress change induced by the 1995 Kobe, Japan, earthquake. However, how well the stress change explains the spatial variation of the rate change remains unknown. Ma et al. (2005) calculated the depth-dependent Coulomb stress change induced by the 1999 Chi-Chi, Taiwan, earthquake based on strike-slip fault models. They found that, even for the complex fault rupture of the Chi-Chi earthquake, the spatial distribution of the seismicity rate change is still consistent with the calculated Coulomb stress change. However, again, they did not investigate how well these two changes are correlated in space. To investigate the contribution of the stress change induced by the Chi-Chi earthquake to the explanation of the associated seismicity rate change, we employ a universal Kriging method (Cressie, 1993) to correlate the stress change and the seismicity rate change. Note that the stress changes calculated from heterogeneous-slip fault models (Ma et al., 2005) are not only depth-dependent, but also quite different in stability over the study region. Therefore, we also measure the associated magnitude of the gradient of the stress change and then consider a spatial model relating the gradient magnitude and the seismicity rate change.

2. Spatial statistical model
In this study, we consider non-overlapping grids in the study region. In each grid, let R_A and R_B be the rate of earthquakes after the Chi-Chi earthquake and the associated background earthquake rate, respectively. Then the seismicity rate change in the grid is defined to be R_B/R_A, and the response of interest is, in fact, Z(·) = log(R_B/R_A). For modeling

the spatial variation of the response Z(·), we consider a spatial process {S(s): s ∈ D}, where

S(s) = μ(s) + δ(s),  s ∈ D,  (1)

where μ(·) is the overall trend related to the covariate, denoted by X(·), in the way of μ(s) ≡ β_0 + β_1 X(s), and δ(·) is a zero-mean Gaussian process with a certain spatially dependent covariance function. Note that, in general, the correlation of Z(·) at two different locations becomes weaker as the distance h between the two locations grows. Therefore, we employ the covariance function suggested in Matern (1986) in the following:

C_δ(h) = cov(δ(s + h), δ(s)) = σ_δ² · 2 (a‖h‖/2)^ν κ_ν(a‖h‖) / Γ(ν)  if h ≠ 0,  and  C_δ(h) = σ_δ²  if h = 0,

where κ_ν(·) is the modified Bessel function of the second kind with order ν > 0, ν is a smoothness parameter, a > 0 is a scaling parameter, and σ_δ² = var(δ(s)).
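The Matérn covariance above can be sketched without any special-function library in the particular case ν = 1/2, where κ_{1/2}(x) = sqrt(π/(2x)) e^(−x) collapses the expression to the exponential covariance σ_δ² e^(−a‖h‖). The parameter values below are illustrative, not the fitted values reported later; for general ν one would evaluate the Bessel factor with, e.g., scipy.special.kv.

```python
import math

def matern_half(h_norm, sigma2, a):
    """Matern covariance C_delta(h) in the closed-form case nu = 1/2."""
    if h_norm == 0.0:
        return sigma2
    x = a * h_norm
    # K_{1/2}(x) has the closed form sqrt(pi/(2x)) * exp(-x)
    k_half = math.sqrt(math.pi / (2.0 * x)) * math.exp(-x)
    # sigma2 * 2 * (x/2)^nu * K_nu(x) / Gamma(nu), with nu = 1/2
    return sigma2 * 2.0 * (x / 2.0) ** 0.5 * k_half / math.gamma(0.5)

sigma2, a = 0.29, 0.6   # illustrative variance and scaling parameter
print(matern_half(0.0, sigma2, a), matern_half(1.0, sigma2, a))
```

At ν = 1/2 the covariance decays by the same factor over each unit of distance, which is a quick way to check the implementation against the exponential model.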

Note that the covariate under study is either the mean stress change or the magnitude of its gradient. Denoting the mean stress change in grid (i, j) by SC_{i,j}, the magnitude of the gradient of the stress change is given by

MG_{i,j} = sqrt[(SC_{i+1,j} − SC_{i−1,j})² + (SC_{i,j+1} − SC_{i,j−1})²],

which reflects the heterogeneity of the stress change in space. For modeling the spatial variation of the seismicity rate change, we consider the following two models:

log(R_B/R_A) = α_0 + α_1 sgn(SC_{ij}) log(|SC_{ij}|) + δ_{ij} + ε_{ij},  (2)

where sgn(a) = +1 for a > 0, 0 for a = 0, and −1 otherwise, and

log(R_B/R_A) = β_0 + β_1 log(MG_{ij}) + δ_{ij} + ε_{ij},  (3)

where the errors ε_{ij} are independent normal white noises with variance σ_ε². We employ the universal Kriging method (Cressie, 1993) and find the maximum likelihood estimates (MLE) of the parameters in the two spatial models.
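The gradient-magnitude covariate MG is just a central-difference computation on the stress-change grid. A minimal sketch, on a small made-up grid (not the Chi-Chi stress field):

```python
import math

# Made-up 4x4 grid of mean stress changes SC_{i,j}, for illustration only.
SC = [
    [0.0, 0.1, 0.2, 0.3],
    [0.1, 0.3, 0.5, 0.6],
    [0.2, 0.5, 0.9, 1.0],
    [0.3, 0.6, 1.0, 1.2],
]

def gradient_magnitude(sc, i, j):
    """MG at an interior grid point (i, j), by central differences."""
    dx = sc[i + 1][j] - sc[i - 1][j]
    dy = sc[i][j + 1] - sc[i][j - 1]
    return math.sqrt(dx * dx + dy * dy)

print(gradient_magnitude(SC, 1, 1), gradient_magnitude(SC, 2, 2))
```

A smooth stress field yields small MG values everywhere, while a spatially heterogeneous field yields large MG, which is exactly the property the covariate in model (3) is meant to capture.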

3. Data analysis and results
To correlate the seismicity rate change with the stress change induced by the 1999, M = 7.6, Chi-Chi, Taiwan, earthquake, we consider 900 (= 30 × 30) grids over the region (119.5°E, 122.5°E) × (22.5°N, 25.5°N); the area of each grid is 0.1° × 0.1°. The M ≥ 3.0 earthquakes in the 50 months before the Chi-Chi earthquake provide the background, and the M ≥ 3.0 earthquakes in the 50 months after the Chi-Chi earthquake are under study. Since the hypocenter of the Chi-Chi earthquake is at 8 km depth, the seismicity rate changes, the mean stress changes and the associated gradient magnitudes for the layer at depth 3-9 km are computed and presented in Figure 1.

Applying the universal Kriging method, we obtain the fitted model (2) as

μ = 0.088 − 0.004 sgn(SC) log(|SC|) and (ν, a, σ_δ², σ_ε²) = (0.227, 0.579, 0.453, 0.001),  (4)

while the fitted model (3) is

μ = 0.511 + 0.607 log(MG) and (ν, a, σ_δ², σ_ε²) = (1.628, 0.044, 0.290, 0.000).  (5)

Since the fitted model (4), showing a negative correlation between the stress change and the seismicity rate change, contradicts general knowledge, we only present, in Figure 2, the overall trend and the unexplained spatial variation of the fitted model (5).

Figure 1. The seismicity rate change (left), the mean stress change (middle) and the magnitude of the associated gradient (right) for the Chi-Chi earthquake.

Figure 2. The fitted overall trend (left) and spatial variation (right) for the model with the magnitude of the gradient of the stress change as the covariate.

4. Conclusions and Discussions
The fitted model (4) implies that a grid with a larger positive stress change has a more pronounced decrease in the seismicity rate after the Chi-Chi earthquake. Meanwhile, a grid with a smaller negative stress change, or stress shadow, has a greater increase in the seismicity rate after the Chi-Chi earthquake. This result is somewhat at odds with our knowledge of the relationship between the stress change and the seismicity rate change. On the other hand, the fitted model (5) shows that a grid located in an area with a more heterogeneous stress change in space tends to have a larger seismicity rate after the Chi-Chi earthquake. Figure 2 indicates that the overall trend, which depends on the stability of the stress change, accounts for about 50% of the variation of the seismicity rate change. Therefore, the result sheds light on the study of earthquake probability changes by taking into account the spatial homogeneity of the stress change. Note that we also find, though not shown here, that the unexplained spatial variation is unrelated to the change in the b value (Gutenberg and Richter, 1944) or the p value (Utsu, 1961; Ogata, 1983). Therefore, it remains for a future study to explore other factors that might be related to the seismicity rate change.

5. Acknowledgement The work of this paper was supported by the Taiwan Ministry of Education, iSTEP 91-N-FA07-7-4.

6. References
Cressie, N., Statistics for Spatial Data, (John Wiley & Sons, New York, 1993).
Gutenberg, B. and Richter, C. F. (1944), Frequency of earthquakes in California, Bull. Seism. Soc. Am., 34, 185-188.
Ma, K. F., Chan, C. H., and Stein, R. S. (2005), Response of seismicity to Coulomb stress triggers and shadows of the 1999 Mw = 7.6 Chi-Chi, Taiwan, earthquake, J. Geophys. Res., 110, B05S19, doi:10.1029/2004JB003157.
Matern, B., Spatial Variation, (Springer, New York, 1986).
Ogata, Y. (1983), Estimation of the parameters in the modified Omori formula for aftershock sequences by the maximum likelihood procedure, J. Phys. Earth, 31, 115-124.
Steacy, S., Gomberg, J., and Cocco, M. (2005), Introduction to special section: Stress transfer, earthquake triggering and time-dependent seismic hazard, J. Geophys. Res., 110, B05S01, doi:10.1029/2005JB003692.
Toda, S., Stein, R. S., Reasenberg, P. A., Dieterich, J. H., and Yoshida, A. (1998), Stress transferred by the Mw = 6.9 Kobe, Japan, shock: Effect on aftershocks and future earthquake probabilities, J. Geophys. Res., 103, 24,543-24,565.
Utsu, T. (1961), Statistical study on the occurrence of aftershocks, Geophys. Mag., 30, 521-605.

Analysis of Aftershock Hazard Using Power Prior Information

Y. I. Chen and C. S. Huang

The Institute of Statistics, National Central University, Jhongli, Taoyuan 32054, Taiwan

Corresponding Author: Y. I. Chen; [email protected]

Abstract: Near real-time hazard assessment for large aftershocks following a major earthquake is usually urgent for disaster relief and rescue efforts. For regions that lack enough previous aftershock sequences to build an appropriate empirical prior distribution, we suggest implementing a Bayesian analysis based on only one or a few available previous aftershock sequences and the limited current data available shortly after the mainshock. The prior information in this Bayesian analysis is obtained directly from the likelihood of the previous aftershock sequence raised to a certain power. We illustrate the use of the power prior Bayesian analysis in evaluating the hazards of three aftershock sequences following the Chi-Chi earthquake (ML = 7.6, 1999/9/21), the 331 event with ML = 6.8 on 2002/3/31 and the 1210 event with ML = 6.4 on 2004/12/10.

1. Introduction
Real-time information about the hazard of large aftershocks is often urgent for ongoing emergency services to reduce the loss of life, especially shortly after a disastrous mainshock in a region near the mainshock epicenter. To fulfill this purpose, Reasenberg and Jones (1989) proposed combining the modified Omori law (Utsu, 1961) and the frequency-magnitude relationship (Gutenberg and Richter, 1944) into a model, referred to as the RJ model hereafter, for describing the time-magnitude distribution of earthquakes after a mainshock in California. Reasenberg and Jones (1989) considered the maximum likelihood estimated (MLE) RJ model. Due to the limited current data available within a short time after the mainshock, Reasenberg and Jones (1989) further suggested a Bayesian analysis of the RJ model with a conjugate normal prior distribution. However, such a Bayesian analysis may not be appropriate, as indicated in Rydelek (1990) and Chen et al. (2004), since the time and magnitude distributions of aftershocks are, in general, skewed. Therefore, Chen (2003) proposed using a Markov chain Monte Carlo (MCMC) algorithm for the Bayesian analysis under arbitrary prior distributions of the parameters in the RJ model. The MCMC Bayesian analysis of the RJ model is then expected to be better than the Bayesian analysis with a conjugate normal prior if the pre-specified prior distributions are reasonable. For this, empirical prior distributions obtained from a sufficiently large number of previous aftershock sequences are needed. In practice, however, there are many areas, like Taiwan, that have complex tectonic structures but do not have enough previous aftershock sequences in each tectonic region to form an appropriate empirical prior distribution. Therefore, we propose a Bayesian analysis of the RJ model based on only one or a few previous aftershock sequences.
In the proposed Bayesian analysis, the prior information is used not to establish a prior distribution for the parameters of the RJ model, but as a reference, or contribution, to the current data through the likelihood of the available previous aftershock sequences raised to a certain power. The proposed Bayesian analysis is therefore termed a power prior Bayesian analysis hereafter (Chen et al., 2000). We demonstrate herein the use of the power prior Bayesian analysis in evaluating the hazards of three aftershock sequences; for each of them, one historical aftershock sequence is employed to provide the prior information.

2. Power prior Bayesian analysis

Suppose that we observe the current data {(t_{mi}, M_{mi}), i = 1, 2, ..., n_m} after a major event with magnitude M_m, and that we have a historical aftershock sequence {(t_{hi}, M_{hi}), i = 1, 2, ..., n_h} following a mainshock with magnitude M_h. Furthermore, suppose that both sequences of aftershocks can be well described by the RJ model:

p λαβ(tM , )=−−+ exp{ (M Mc )}/( t c ) for M c ≤ M , (1) where the parametersα , β , c and p are nonnegative constants and Mc is the cut-off magnitude for both earthquake sequences. Then, following the Ogata (1983), we have the likelihood functions of the parameters given different sets of data

Luu(α ,βαβ ,cp , )== L ( , , cp , | ( ti , Mui ), i 1,2,..., nu ) n u T = exp{ ln[λλ (tM , )]− ( tMdt , ) } for u= m or h. (2) ∑ ui ui ∫0 ui c i=1 We consider herein the power prior information from a historical data set as

a {(,,,)}Lcph αβ , 0 ≤ a ≤ 1. Therefore, the posterior distribution of α , β , c and p is

proportional to

a {(,,,)}(,,,)LcpLchmαβ αβ p. (3)

In this setting, we suggest finding the posterior modes of α, β, c and p, which are the most probable values of the parameters given the current data, taking the historical data as a reference. In other words, we find the values of α, β, c and p at which the expression in (3) reaches its maximum.
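The posterior-mode search can be sketched numerically. The fragment below is an illustration, not the authors' code: it keeps only the temporal part of the model (the modified Omori rate K/(t + c)^p), combines the historical and current log-likelihoods as a·ℓ_h + ℓ_m as in (3), and finds the mode by a crude grid search; all function names and the grid are hypothetical.

```python
import math

def omori_loglik(times, T, K, c, p):
    """Log-likelihood of the modified Omori rate K/(t + c)^p on [0, T] (Ogata, 1983)."""
    if p == 1.0:
        compensator = K * (math.log(T + c) - math.log(c))
    else:
        compensator = K * ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    return sum(math.log(K) - p * math.log(t + c) for t in times) - compensator

def log_power_posterior(current, hist, T_cur, T_hist, K, c, p, a):
    """Log of {L_h}^a * L_m, the kernel of the power prior posterior."""
    return (a * omori_loglik(hist, T_hist, K, c, p)
            + omori_loglik(current, T_cur, K, c, p))

def posterior_mode(current, hist, T_cur, T_hist, a, grid):
    """Crude grid search for the posterior mode; any optimizer could replace it."""
    return max(grid, key=lambda th: log_power_posterior(current, hist,
                                                        T_cur, T_hist, *th, a))
```

The point of the sketch is only that the historical likelihood enters the objective damped by the power a; with a = 0 the historical data are ignored, with a = 1 they count as fully as the current data.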

3. Data analysis

For illustrative purposes, we consider three current aftershock sequences following major events that occurred in central (M=7.6, 1999/9/21, event 921), northeastern (M=6.8, 2002/3/31, event 331) and southeastern (M=6.4, 2004/12/10, event 1210) Taiwan. As the historical reference for the aftershocks of the 921 event, the sequence following the event with ML=6.2 on 1998/7/17 is selected. The aftershock sequence following the event with ML=6.5 on 1994/6/5 is chosen as the historical data for the aftershocks of the 331 event. Finally, for the aftershock sequence following the 1210 event, we take the historical sequence after the event with ML=7.1 on 1996/9/5. The related epicenters are shown in Figure 1. Note that the background colors, showing the spatial variation of the minimum magnitude for the complete earthquake catalog (Wiemer and Wyss, 2000), indicate an appropriate overall cut-off magnitude of M_c = 3.0. Therefore, for simplicity, we demonstrate herein the application of the power prior Bayesian analysis of the modified Omori model (Utsu, 1961) for assessing the hazard of M ≥ 3.0 aftershocks. The powers under study are a = 0.1, 0.5 and 1. We investigate how well the occurrence of current aftershocks can be forecast using the modified Omori law with the power prior Bayesian analysis. To do so, we computed the forecast number of aftershocks during the week following the first N days after the current major event, based on the Omori law estimated from both the current N-day data and the associated historical data. We also study whether the power prior Bayesian analysis is preferable to the MLE-based analysis for forecasting, where the forecast number of current aftershocks is calculated from the Omori law with the MLE obtained from the first N-day current data only. The results are presented in Figure 2.
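Given fitted Omori parameters, the 7-day forecast count can be computed in closed form by integrating K/(t + c)^p over the forecast window. The parameter values below are purely illustrative, not the fitted values of this paper:

```python
import math

def expected_aftershocks(K, c, p, t1, t2):
    """Expected number of aftershocks in (t1, t2), in days after the
    mainshock, under the modified Omori law: the integral of K/(t + c)^p."""
    if p == 1.0:
        return K * (math.log(t2 + c) - math.log(t1 + c))
    return K * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)

# e.g. a 7-day forecast issued N = 10 days after the mainshock,
# with purely illustrative parameter values:
week_forecast = expected_aftershocks(K=200.0, c=0.3, p=1.1, t1=10.0, t2=17.0)
```

With p > 1 the same formula gives the finite total count as t2 → ∞, which is why the forecast shrinks as the issue time N grows.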

Figure 1 The epicenters of the current and historical major earthquakes. The rectangular regions indicate the areas of the epicenters of the current aftershocks under study.

Figure 2 The forecast number of aftershocks in the next 7 days based on the first 2-day (upper three panels) and 10-day (lower three panels) current data following the 921 event (left), 331 event (middle) and 1210 event (right). The black dots are the observed numbers, the black line is the forecast from the MLE, and the red, green and blue lines are the forecasts from the power prior Bayesian analysis with a = 0.1, 0.5 and 1.0, respectively.

4. Conclusions and Discussions

Figure 2 shows that, when the magnitude of the previous mainshock is much smaller than that of the current one, as for the 921 event, the power prior Bayesian analysis tends to give a lower aftershock hazard as the power of the prior information increases. On the other hand, if the magnitude of the previous mainshock is much greater than that of the current one, as for the 1210 event, the power prior Bayesian analysis produces a higher aftershock hazard for a larger prior power. When the magnitudes of the current and historical major events are about the same, as for the 331 event, the power prior Bayesian analysis gives a good forecast of the aftershock hazard in the following 2 days with a small prior power of about 0.1, but yields a better forecast for later days with a larger prior power. Figure 2 further indicates that, compared to the power prior Bayesian analysis, the MLE-based Omori model estimated from the 10-day current data gives a competitive or even better forecast of the aftershock hazard in the next week. Note that, in the power prior Bayesian analysis, the parameters of the RJ model are assumed to be the same for the historical and current aftershock sequences. In using the power prior Bayesian analysis it is therefore important to select a historical aftershock sequence whose time-decay and frequency-magnitude behavior is similar to that of the current one. In fact, the power prior Bayesian analysis can easily be extended to the situation with more than one historical aftershock sequence. In general, Bayesian analysis with reasonable prior information retains some benefit for the near-time assessment of the aftershock hazard. In our study, however, by about 10 days after the current mainshock, when more current data have become available, the MLE-based analysis becomes more competitive. For simplicity, the MLE-based analysis is then suggested.

5. Acknowledgement The work was supported by the Taiwan Ministry of Education, iSTEP 91-N-FA07-7-4.

6. References

Chen, M. H., Ibrahim, J. G., and Shao, Q. M. (2000), Power prior distributions for generalized linear models, J. Stat. Plann. Infer. 41, 121-137.
Chen, Y. I. (2003), Bayesian analysis of aftershock hazard, contributed paper in AGU Fall Meeting.
Chen, Y. I., Huang, C. S., and Hong, C. H. (2004), Statistical assessment for the hazard of large aftershocks post to the Chi-Chi mainshock, TAO, 15, 493-502.
Gutenberg, R. and Richter, C. F. (1944), Frequency of earthquakes in California, Bull. Seism. Soc. Am. 34, 185-188.
Ogata, Y. (1983), Estimation of the parameters in the modified Omori formula for aftershock sequences by the maximum likelihood procedure, J. Phys. Earth, 31, 115-124.
Reasenberg, P. A. and Jones, L. M. (1989), Earthquake hazard after a mainshock in California, Science 243, 1173-1176.
Rydelek, P. A. (1990), California aftershock model uncertainties, Science 247, 343.
Utsu, T. (1961), Statistical study on the occurrence of aftershocks, Geophys. Mag. 30, 521-605.

Time-dependent b value for aftershock sequences

Y. I .Chen and C. S. Huang

Institute of Statistics, National Central University, Jhongli, Taoyuan 32054, Taiwan

Corresponding Author: Y. I. Chen; [email protected]

Abstract: The b value in the Gutenberg-Richter law (Gutenberg and Richter, 1944) is found to drop suddenly right after a mainshock and then increase back to the background value for the aftershock sequence following the 1999, M=7.6, Chi-Chi, Taiwan, earthquake. This observation suggests a time-varying b value for the frequency-magnitude distribution of aftershocks. We therefore consider a modification of the Reasenberg-Jones (Reasenberg and Jones, 1989) model for describing the time-magnitude distribution of aftershocks in the Taiwan area. We also compare the performance of the modified Reasenberg-Jones model with that of the original model in describing the aftershock hazard following the Chi-Chi earthquake.

1. Introduction

It is well known that the frequency-magnitude distribution of earthquakes on a large scale over a long time follows the Gutenberg-Richter law (Gutenberg and Richter, 1944). However, it is also generally believed that large aftershocks are more likely to occur right after a mainshock, and that their chance of occurrence decreases as time goes by. This leads to the conjecture that aftershocks have a time-dependent frequency-magnitude distribution. The modified Omori power law (Utsu, 1961) gives a time-decaying aftershock occurrence rate. Assuming that magnitude and time of earthquakes after a mainshock are independent, Reasenberg and Jones (1989) suggested combining the Gutenberg-Richter law and the modified Omori power law to describe the time-magnitude distribution of aftershocks, referred to as the RJ model hereafter. However, if the magnitude distribution of aftershocks is time-dependent, then we should write the joint time-magnitude distribution of aftershocks as the product of its time distribution and the conditional magnitude distribution given time. This yields a modification of the original RJ model. In this study, we demonstrate that the b value in the Gutenberg-Richter law for the aftershock sequence following the 1999, M=7.6, Chi-Chi, Taiwan, earthquake is time-varying. We then consider a simple linear time-trend for the b value and suggest a modified RJ model. Finally, based on the Chi-Chi aftershock sequence, we compare the modified and original RJ models for the description of the aftershock hazard.

2. Time-dependent b value and the modified Reasenberg-Jones model

According to the Gutenberg-Richter law (Gutenberg and Richter, 1944), the magnitude of earthquakes follows a left-truncated exponential distribution with survival function

S(M) = Pr{Magnitude ≥ M} = exp{−β(M − M_c)},  for M ≥ M_c,   (1)

where M_c is the cut-off magnitude and β is a constant. Note that β = b ln 10, where b is the b value in the Gutenberg-Richter law and ln denotes the natural logarithm. In addition, the decay in time of the occurrence of aftershocks is described by the modified Omori law (Utsu, 1961):

λ(t) = K / (t + c)^p,   (2)

where λ(t) is the intensity, or hazard, of aftershocks at time t after the mainshock, and K, c and p are positive constants. Reasenberg and Jones (1989) combined the models in (1) and (2) into a hazard model for the intensity of aftershocks with magnitude M or larger at time t after the mainshock:

λ(t, M) = λ(t) S(M).   (3)

However, if the β or b value in (1) is time-dependent, as β(t), then the RJ model in (3) should be modified to

λ(t, M) = λ(t) × exp{−β(t) × (M − M_c)}.   (4)

We consider herein the simple linear function β(t) = α_0 + α_1 ln t. Following Ogata (1983), we obtain the likelihood function of the related parameters θ = (K, c, p, α_0, α_1):

L(θ | t, M) = ∏_{i=1}^N λ(t_i) exp{ −∫_{t_{i−1}}^{t_i} λ(s) ds } × ∏_{i=1}^N β(t_i) exp{ −β(t_i)(M_i − M_c) }.

The associated maximum likelihood estimates (MLE) of (K, c, p, α_0, α_1) can then be obtained by using an iterative algorithm.
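The log-likelihood above can be written down directly. The sketch below is my own illustration with hypothetical names: it uses the closed-form Omori compensator on [0, T], which equals the product of the per-interval integrals, and adds the time-dependent magnitude term; any standard optimizer applied to it yields the MLE mentioned in the text.

```python
import math

def modified_rj_loglik(times, mags, T, Mc, K, c, p, a0, a1):
    """Log-likelihood sketch for the modified RJ model: the modified Omori
    rate times the magnitude density beta(t) * exp{-beta(t) * (M - Mc)},
    with beta(t) = a0 + a1 * ln(t). Event times must be positive so that
    ln(t) and beta(t) > 0 are well defined."""
    # temporal (modified Omori) part, with its closed-form compensator on [0, T]
    if p == 1.0:
        compensator = K * (math.log(T + c) - math.log(c))
    else:
        compensator = K * ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    ll = sum(math.log(K) - p * math.log(t + c) for t in times) - compensator
    # magnitude part, conditional on the occurrence times
    for t, m in zip(times, mags):
        beta_t = a0 + a1 * math.log(t)
        ll += math.log(beta_t) - beta_t * (m - Mc)
    return ll
```

Maximizing this function over (K, c, p, α_0, α_1), e.g. by a quasi-Newton routine, is one way to implement the iterative algorithm referred to above.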

3. Data analysis and results

We consider the aftershock sequence within 200 days after the Chi-Chi earthquake in the study region (23.4°N, 24.4°N) × (120.5°E, 121.5°E). Since the minimum magnitude for complete recording of the aftershocks is M_c = 2.2 (Wiemer and Wyss, 2000), we illustrate the use of the modified RJ model based on the M ≥ 2.2 aftershocks only (Figure 1). We estimate both the single b value in the Gutenberg-Richter law and the time-dependent b value proposed herein. To see whether the b value varies in time, we also compute the b value based on windows of 500 time-ordered aftershocks, sliding by 100 events. Figure 2 shows clearly that a time-dependent b value is necessary for the evaluation of the aftershock hazard following the Chi-Chi earthquake.
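Windowed b values of this kind are commonly computed with the maximum-likelihood estimator b = log10(e) / (mean(M) − M_c) (Aki, 1965). The sketch below, with hypothetical names, applies it to 500-event windows sliding by 100 events as described above; the half-bin correction often applied to binned catalogue magnitudes is omitted here.

```python
import math

def aki_b_value(mags, Mc):
    """Maximum-likelihood b value, b = log10(e) / (mean(M) - Mc) (Aki, 1965).
    For binned catalogue magnitudes, Mc is often replaced by Mc - dM/2."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - Mc)

def sliding_b_values(mags, Mc, window=500, step=100):
    """b value over successive windows of time-ordered magnitudes,
    e.g. 500-event windows sliding by 100 events."""
    return [aki_b_value(mags[i:i + window], Mc)
            for i in range(0, len(mags) - window + 1, step)]
```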

Figure 1. The Chi-Chi aftershock sequence under study. The red triangle locates the epicenter of the Chi-Chi earthquake and the black dots in the rectangular study region represent the M ≥ 2.2 earthquakes that occurred within 200 days after the 1999 Chi-Chi, Taiwan, earthquake.

Figure 2. The observed and estimated b values. The observed b value is computed from 500 aftershocks sliding by 100 (dot-line), the blue line represents the estimated single b value, and the red line shows the estimated time-dependent b value.

To further confirm that the modified RJ model (4) outperforms the original RJ model (3) in describing the Chi-Chi aftershock hazard (Chen et al., 2004), we computed the MLE of the parameters K, c, p, α_0 and α_1 as 14962.5, 3.46, 1.39, 1.5 and 0.24, respectively. We also found the MLE of the single b value to be 0.82. The observed and estimated cumulative numbers of M ≥ 3.0 and M ≥ 4.0 aftershocks during the 200 days under study were then obtained (Figure 3). Note that a perfect fit of the cumulative numbers of aftershocks would lie on the 45° dashed line. It is obvious that the modified RJ model provides a better model for the time-magnitude distribution of the Chi-Chi aftershock sequence than does the original RJ model.
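The estimated cumulative curves come from integrating the modified RJ intensity over time at a fixed magnitude threshold. The sketch below uses the MLEs reported above with simple trapezoidal quadrature; the small start time t0 (to keep ln t finite) and the step count are my own numerical choices, and the output is not claimed to reproduce the figure exactly.

```python
import math

def cum_count_above(M, T, Mc=2.2, K=14962.5, c=3.46, p=1.39,
                    a0=1.5, a1=0.24, t0=0.01, n=20000):
    """Estimated cumulative number of M-or-larger aftershocks up to day T
    under the modified RJ model (4): trapezoidal quadrature of
    K/(t + c)^p * exp{-(a0 + a1*ln t)(M - Mc)}. Defaults are the MLEs
    reported in the text; t0 and n are numerical choices, not model values."""
    h = (T - t0) / n
    def rate(t):
        beta_t = a0 + a1 * math.log(t)
        return K / (t + c) ** p * math.exp(-beta_t * (M - Mc))
    s = 0.5 * (rate(t0) + rate(T))
    s += sum(rate(t0 + k * h) for k in range(1, n))
    return s * h
```

Because β(t) > 0 over the integration range, the M ≥ 4 count is always smaller than the M ≥ 3 count, as in the observed curves.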

Figure 3 The observed and estimated cumulative numbers of M ≥ 3 (left) and M ≥ 4 (right) aftershocks under study. The blue line shows the cumulative number of aftershocks computed from the original RJ model and the red line represents that computed from the modified RJ model.

4. Conclusions and Discussions

In this study, we explore the time-varying b value of the frequency-magnitude distribution of the earthquakes after the Chi-Chi mainshock. Based on the time-dependent b value, we then consider a modified RJ model. The modified RJ model gives a better description of the frequency-magnitude distribution of the Chi-Chi aftershock sequence. In fact, besides the simple linear time-trend of the b value, we could also consider a quadratic time-trend. Though not shown here, we also find that the time-dependent b value holds for the aftershock sequences following the two recent earthquakes in the northeastern and southeastern parts of the Taiwan area. It may deserve future study to explore, for aftershock sequences in different regions of the Taiwan area, how their frequency-magnitude distributions depend on the time elapsed since the associated mainshocks.

5. Acknowledgement The work was supported by the Taiwan Ministry of Education, iSTEP 91-N-FA07-7-4.

6. References

Chen, Y. I., Huang, C. S., and Hong, C. H. (2004), Statistical assessment for the hazard of large aftershocks post to the Chi-Chi mainshock, TAO, 15, 493-502.
Gutenberg, R. and Richter, C. F. (1944), Frequency of earthquakes in California, Bull. Seism. Soc. Am. 34, 185-188.
Ogata, Y. (1983), Estimation of the parameters in the modified Omori formula for aftershock sequences by the maximum likelihood procedure, J. Phys. Earth, 31, 115-124.
Reasenberg, P. A. and Jones, L. M. (1989), Earthquake hazard after a mainshock in California, Science 243, 1173-1176.
Utsu, T. (1961), Statistical study on the occurrence of aftershocks, Geophys. Mag. 30, 521-605.
Wiemer, S. and Wyss, M. (2000), Minimum magnitude of completeness in earthquake catalogs: Examples from Alaska, the western US and Japan, Bull. Seism. Soc. Am. 90, 859-869.

Particle filtering of a state-space model for seismic sequences

N. Chopin1, E. Varini2

1Department of Mathematics, University of Bristol, University Walk, Bristol BS8 1TW, UK
2Institute for Applied Mathematics and Information Technologies, CNR, via Bassini 15, 20133 Milan, Italy

Corresponding Author: E.Varini, [email protected]; N. Chopin, nicolas.chopin at bris.ac.uk

Abstract: A state-space model is proposed in order to analyse a sequence of earthquakes; the basic assumption is that, at each time, the physical process is in a hidden state and that each state corresponds to a different observable marked point process. We considered three possible states corresponding respectively to the following marked point processes for seismic sequences: the Poisson model, the stress release model and the Etas model. The statistical analysis is carried out by exploiting a Bayesian sequential Monte Carlo (or particle filtering) method. This recent statistical methodology allows us to estimate both the model parameters and the filtering probabilities (the probability that one of the considered marked point process is active at time t). Moreover, it allows us to update the present estimates as new information comes in.

1. A state-space model

Let {(t_i, m_i)}_{i=1}^n be the set of earthquakes that occurred in a region, where t_i ∈ [0, T] denotes the occurrence time and m_i ≥ M_0, with M_0 a fixed threshold. Many studies on marked point processes have been proposed in the literature; by exploiting some of these processes, I carried out a temporal analysis of the seismic phenomenon in which the magnitude is treated as a mark. A marked point process is completely and uniquely defined by its conditional intensity function λ(t, m), representing the instantaneous probability that an event of magnitude m occurs at time t conditionally on the history of the process up to t. The likelihood function is based on λ(t, m) and is given by

L = ∏_{i=1}^n λ(t_i, m_i) · exp{ −∫_0^T ∫_{M_0}^∞ λ(t, m) dm dt }.

I considered time and magnitude to be independent, so that the magnitude component can be omitted in the analysis of the temporal part and the conditional intensity function can simply be denoted by λ(t). In this context, I focused on three important, widely studied marked point processes: the Poisson model, the stress release model (Zheng and Vere-Jones, 1991) and the Etas model (Ogata, 1999). They are founded on different physical descriptions of the seismic phenomenon: the Poisson model assumes random occurrence times; the stress release model assumes that elastic stress gradually accumulates and is suddenly released when it exceeds the strength of the medium; the Etas model describes the tendency of earthquakes to occur in clusters. The most common and simple expressions of their conditional intensity functions are, respectively:

λ_1(t) = µ,    λ_2(t) = exp{α + β(ρt − R(t))},    λ_3(t) = Σ_{i: t_i < t} k · e^{γ(m_i − M_0)} / (t − t_i + c)^p,

where R(t) denotes the stress released by the events occurred up to time t. At each time t the physical process is assumed to be in a hidden state X_t, taking values in the set S indexing the three models above, so that the observable conditional intensity is

λ(t) = Σ_{j∈S} λ_j(t) · δ_j(X_t),

where δ_j(X_t) = 1 if X_t = j and δ_j(X_t) = 0 otherwise. I assumed that an earthquake does not occur exactly at a state change. In the following, θ denotes the vector of all the unknown parameters of the state-space model and X_{s:t} = {X_r : s ≤ r ≤ t}.
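The three intensities and the state indicator can be coded directly. In the sketch below the Poisson and Etas rates follow the formulas above with the parameter values of the simulation study in Section 3; the released-stress term R(t) in the stress release rate is a simplified magnitude-based sum of my own (see Zheng and Vere-Jones, 1991, for the proper formulation), and all names are illustrative.

```python
import math

M0 = 4.5  # magnitude threshold, as in the simulation study

def poisson_rate(t, history, mu=0.5):
    """lambda_1: constant background rate."""
    return mu

def stress_release_rate(t, history, alpha=-4.5, beta=0.02, rho=2.8):
    """lambda_2: exp{alpha + beta(rho*t - R(t))}. R(t) is modelled here as a
    simple magnitude-based sum over past events (an assumption)."""
    R = sum(10.0 ** (0.75 * (m - M0)) for ti, m in history if ti < t)
    return math.exp(alpha + beta * (rho * t - R))

def etas_rate(t, history, k=0.1, gamma=0.6, c=0.5, p=1.0):
    """lambda_3: sum over past events of k * e^{gamma(m_i - M0)} / (t - t_i + c)^p."""
    return sum(k * math.exp(gamma * (m - M0)) / (t - ti + c) ** p
               for ti, m in history if ti < t)

RATES = {1: poisson_rate, 2: stress_release_rate, 3: etas_rate}

def switching_intensity(t, state, history):
    """lambda(t) = sum_j lambda_j(t) * delta_j(X_t): only the intensity of
    the active hidden state contributes."""
    return RATES[state](t, history)
```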

2. The Bayesian sequential Monte Carlo method

The aim is to estimate the parameter vector θ and the filtering probabilities p(x_t | θ, y_{0:t}), that is, the probabilities that the system at time t is in state j given the past history of the observable process. The filtering probabilities are of interest because they are linked to the predictive distributions as follows:

p(x_{t'} | θ, y_{0:t}) = ∫ p(x_{t'} | θ, x_t) p(x_t | θ, y_{0:t}) dx_t,  for t < t'.

Since the filtering probabilities are related to the posterior distributions p(x_{0:t}, θ | y_{0:t}) (t ∈ [0, T]), I equivalently considered the latter as target distributions. The analysis is carried out using a sequential Monte Carlo method, whose details and advantages I briefly explain in this section, referring to the extensive research recently developed on this methodology (Doucet, Freitas and Gordon, 2001; Kitagawa, 1998).

Let p(x) be the density of a probability distribution on a measurable space and f(x) an integrable function with finite variance with respect to p(x). It is known that, if there is no analytical solution of ∫ f(x) p(dx) but a sample x_1, ..., x_m from p(x) is available, then the Monte Carlo approximation m^{−1} Σ_{i=1}^m f(x_i) is an unbiased estimator of the integral and converges to the true value as m → +∞. If sampling from p(x) is difficult, the importance sampling principle can be used, keeping the same properties of Monte Carlo integration: a sample x_1, ..., x_m is drawn from an "easy-to-sample" distribution π(x), called the importance distribution, and the integral is approximated by m^{−1} Σ_{i=1}^m ω_i f(x_i), where ω_i = p(x_i)/π(x_i). The set {x_i, ω_i}_{i=1}^m is called a particle set and the ω_i's are called importance weights. The importance distribution π(x) is usually heavy-tailed with respect to p(x) and its support contains the support of p(x); moreover, if π(x) is close to p(x), faster convergence of the importance sampling is observed. A simple choice is to take the importance distribution equal to the prior distribution.

The importance sampling principle is exploited in the sequential Monte Carlo method in order to approximate the sequence of posterior distributions {p(x_{0:t_k}, θ | y_{0:t_k})}_{k=0}^{n+1} (t_0 = 0, t_{n+1} = T). Each p(x_{0:t_k}, θ | y_{0:t_k}) is approximated by a particle set {x_i^{(k)}, θ^{(k)}, ω_i^{(k)}}_{i=1}^m, where x_i^{(k)} is a realization of the hidden process in [0, t_k] given the parameter vector θ^{(k)}. The particle set at time t_{k+1} is obtained from the particle set at time t_k as follows: each x_i^{(k+1)} equals x_i^{(k)} in [0, t_k] and the remaining states in (t_k, t_{k+1}] are drawn from π(x); θ^{(k)} = θ^{(k+1)} and, assuming that the prior and the importance distribution are equal, the following simple expression of the weights holds:

ω_i^{(k+1)} ∝ ω̃_i^{(k+1)} = ω_i^{(k)} · p(y_{t_k:t_{k+1}} | θ^{(k+1)}, x_{t_k:t_{k+1}}^{(k+1)}, y_{0:t_k}),

where the normalising constant is Σ_{i=1}^m ω̃_i^{(k+1)} and the updating factor is simply the likelihood function on the interval [t_k, t_{k+1}]. By the strong law of large numbers, an accurate approximation should be obtained for a sufficiently large number m of particles; in practice, however, the variance of the weights increases as new data are collected, so the sequential Monte Carlo method can quickly degenerate. To avoid this effect, a resampling step is sometimes applied, following the computationally advantageous (cost O(m)) technique proposed by Carpenter, Clifford and Fearnhead (1999).
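The weight update and an O(m) resampling pass can be sketched as follows. Systematic resampling is shown here as one O(m) scheme; the stratified algorithm of Carpenter, Clifford and Fearnhead (1999) differs in detail. The `likelihood` argument is a stand-in for the updating factor p(y_{t_k:t_{k+1}} | θ, x, y_{0:t_k}), and all names are illustrative.

```python
import random

def reweight(weights, particles, likelihood):
    """One weight update: w_i ∝ w_i * likelihood of the new data under
    particle i; returns the normalised weights."""
    w = [wi * likelihood(xi) for wi, xi in zip(weights, particles)]
    s = sum(w)
    return [wi / s for wi in w]

def systematic_resample(particles, weights, u0=None, rng=random):
    """O(m) systematic resampling: one uniform draw u0 in [0, 1/m) plus m
    evenly spaced points select the surviving particles."""
    m = len(particles)
    if u0 is None:
        u0 = rng.random() / m
    out, j, cum = [], 0, weights[0]
    for i in range(m):
        u = u0 + i / m
        while u > cum:
            j += 1
            cum += weights[j]
        out.append(particles[j])
    return out
```

After resampling, all weights are reset to 1/m; resampling is typically triggered only when the effective sample size drops below a threshold.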

3. Simulated data and results

In order to test the model and the statistical methodology, I analysed a simulated data set. The parameter values used in the simulation are: µ = 0.5, α = −4.5, β = 0.02, ρ = 2.8, k = 0.1, γ = 0.6, c = 0.5, p = 1, q_12 = 0.018, q_13 = 0.002, q_21 = 0.015, q_23 = 0.0135, q_31 = 0.054, q_32 = 0.006, and P(X_0 = j) = 1/3 for each j ∈ S. I simulated the marked point processes using the inverse transform method and the thinning method (Lewis and Shedler, 1976); the magnitude of the events is simulated from a truncated exponential distribution following the Gutenberg-Richter law. The simulated data set consists of 908 earthquakes and 72 state changes, with T = 2000 and M_0 = 4.5. Prior and importance distributions are chosen on the basis of the numerous available studies of the processes involved. The lower part of Figure 1 shows the simulated conditional intensity function, while the upper part shows the mixture of the plug-in estimates of the conditional intensity functions obtained using the posterior means of the parameters.
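The thinning method used in the simulation can be sketched as below; the function shape is my own illustration of Lewis and Shedler (1976), valid for any conditional intensity bounded above by lam_max on [0, T].

```python
import random

def thinning(intensity, T, lam_max, rng=None):
    """Lewis-Shedler thinning: simulate a point process on [0, T] whose
    conditional intensity intensity(t, events) is bounded by lam_max.
    Candidates come from a homogeneous Poisson process of rate lam_max
    and are accepted with probability intensity(t)/lam_max."""
    rng = rng or random.Random()
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)
        if t > T:
            return events
        if rng.uniform(0.0, lam_max) <= intensity(t, events):
            events.append(t)
```

Passing a history-dependent intensity (such as the Etas rate) works because the acceptance test sees the events simulated so far.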


Figure 1 Comparison of the true conditional intensity function (bottom) with the expected conditional intensity function (top).

4. Acknowledgement

This work is part of my Ph.D. thesis in Statistics at Bocconi University of Milan (Italy); I would like to thank my advisors Dr. Renata Rotondi and Prof. Nicolas Chopin.

5. References

Carpenter, J., Clifford, P., and Fearnhead, P. (1999), An improved particle filter for non-linear problems, Radar, Sonar and Navigation, IEE Proceedings 146, 1, 2-7.
Doucet, A., De Freitas, N., and Gordon, N. (2001), Sequential Monte Carlo Methods in Practice, Springer-Verlag, New York.
Kitagawa, G. (1998), A self-organizing state-space model, Journal of the American Statistical Association, 93, 443, 1203-1215.
Lewis, P. A. and Shedler, G. S. (1976), Simulation of nonhomogeneous Poisson processes with log linear rate function, Biometrika, 63, 3, 501-505.
Ogata, Y. (1999), Seismicity analysis through point-process modeling: a review, Pure and Applied Geophysics, 155, 471-507.
Zheng, X. G. and Vere-Jones, D. (1991), Application of stress release models to historical earthquakes from North China, Pure and Applied Geophysics, 135, 4, 559-576.

Are foreshocks really mainshocks with larger aftershocks?

A. Christophersen1, and E.G.C. Smith1

1The Institute of Geophysics, Victoria University of Wellington, PO Box 600, Wellington, New Zealand

Corresponding Author: [email protected]

Abstract: The question whether foreshocks behave like mainshocks that happen to have larger aftershocks is important for the understanding of earthquake nucleation and the modelling of earthquake clustering. A commonly used model for short-term earthquake occurrence combines empirical observations of aftershock occurrence and extends them to foreshocks assuming a single triggering mechanism for earthquake clusters. However, observed foreshock probabilities are significantly lower than predicted by this kind of model. We conclude that foreshocks differ in some way from mainshocks and look for statistical and physical causes in two earthquake catalogues.

1. Introduction

Earthquakes cluster in space and time. In retrospect, the largest earthquake in a cluster is called the mainshock. The earthquakes preceding a mainshock are named foreshocks, and the earthquakes following a mainshock aftershocks. Minor earthquakes of similar magnitude are referred to as swarms (e.g. Evison and Rhoades, 2004), and larger earthquakes of similar magnitude are sometimes called multiplets (e.g. Felzer et al., 2004). The earthquakes that occur outside a cluster are referred to as background. As no physical differences between these earthquakes are known, some of this labelling is arbitrary, and careful consideration needs to be given to potential biases introduced by arbitrary data selection. In a different approach to earthquake modelling, complete earthquake catalogues are described as one family of earthquakes which interact through an epidemic model (Console et al., 2003; Zhuang et al., 2004). We briefly review the most common empirical laws for earthquake clustering, which can be used to derive the probability for a foreshock to occur. We review prospective foreshock probabilities and compare them with values recalculated from a homogeneous global catalogue. We find the observed values to be significantly smaller than the model predictions and discuss reasons for the discrepancy between data and model.

2. Empirical laws for aftershock occurrence

Two empirical laws have been used successfully to model the short-term clustering of earthquakes. Omori's law describes the decay of earthquake activity with time t:

dN/dt = K / (t + c)^p,   (1)

where dN is the number of earthquakes in the time interval dt; K is a parameter that is proportional to the aftershock productivity; p describes the decay and takes values around 1.0; and c stands for a small time interval just after the mainshock (e.g. Utsu et al., 1995). The Gutenberg-Richter relation describes the magnitude-frequency distribution

log_10 N(M) = a − bM,   (2)

where N (M) is the number of earthquakes of magnitude M and a and b are parameters (e.g. Gutenberg and Richter, 1949).

Two less well explored empirical relations describe the increase of the average number N of aftershocks (3) and the aftershock area A (4) with mainshock magnitude M.

log_10 N = cM − d,  (e.g. Singh and Suárez, 1988)   (3)

log_10 A = 1.02 M − 4.01,  (Utsu and Seki, 1954)   (4)

3. Predicting mainshocks from aftershock models

Reasenberg and Jones (1989, 1994) studied 62 Californian aftershock sequences and combined equations (1) and (2) to derive the probability of the occurrence of earthquakes in a given time and magnitude interval following an initiating earthquake. They extended the model to magnitudes larger than the initiating earthquake, assuming that foreshocks trigger mainshocks in the same way that mainshocks trigger aftershocks. The Californian model predicts a probability of 10% that an earthquake outside an aftershock sequence is followed by one of equal or larger size. Another model formulation, combining equations (2) and (3) and assuming equal growth constants b and c, also finds a probability of 10% that an initial earthquake is followed by one of equal or larger size (Vere-Jones et al., 2005).

4. The earthquake catalogues and defining earthquake sequences

We used a global catalogue based on the reports of the International Seismological Centre (2003). The catalogue includes more than 45,000 earthquakes of magnitude 5.0 and above, shallower than 70 km, in the period 1964-2001 (Christophersen, 2000). We have also started to work with the ANSS (Advanced National Seismic System) catalogue for California covering the years 1984-2004 inclusive. We used a simple window method in time and space to search for related earthquakes. In space we used a search radius of four times the radius of a circular area given by Utsu and Seki's model (1954), resulting in a search radius of 8 km for M = 5.0 and 150 km for M = 7.5. In time we used a sliding window of 30 days, extending the sequence each time a new event was found. We separated our global earthquake sequences into different tectonic regimes, following a method suggested by Kagan (1999).
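The quoted search radii follow directly from equation (4): the aftershock area A = 10^{1.02M − 4.01} km², converted to the radius of a circle of that area and multiplied by four. A minimal check, with a hypothetical function name:

```python
import math

def search_radius_km(M, factor=4.0):
    """Search radius: 'factor' times the radius of a circle whose area
    follows the Utsu-Seki (1954) relation log10 A = 1.02*M - 4.01 (A in km^2)."""
    A = 10.0 ** (1.02 * M - 4.01)
    return factor * math.sqrt(A / math.pi)
```

search_radius_km(5.0) is about 8 km and search_radius_km(7.5) about 149 km, consistent with the rounded values used above.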

5. Review of prospective foreshock probabilities

Table 1 compares some previously reported foreshock probabilities with values recalculated from our global catalogue. We found good agreement with previous observations when matching their space, time and magnitude windows. All observations are significantly lower than predicted by the aftershock models; the only exception is one observation in a volcanic area (Merrifield et al., 2004). We note that we found significant variation in foreshock probabilities between different tectonic settings, which we do not show here; however, all these probabilities were still smaller than expected from the model.

6. The ETAS model

The ETAS (Epidemic Type Aftershock Sequence) model allows each earthquake to trigger its own offspring (e.g. Ogata, 1988). The ETAS model distinguishes between background earthquakes and earthquake clusters. Interestingly, Zhuang et al. (2004) observed that the background events in which the foreshocks occur have different triggering parameters than the clusters of aftershocks. This might be another indication that foreshocks require different modelling from aftershocks.

Table 1. Comparison of previously reported and recalculated foreshock probabilities in different magnitude, space and time windows.

Location, catalogue threshold, space and time window, author | Magnitude restriction | Previously reported foreshock probability | Recalculated foreshock probability | Comments
California (Jones, 1985) | | 6% | Not calculated yet |
Global catalogue, completeness for M ~ 5.6, 75 km and 7 days (Reasenberg, 1999) | MS ≥ 5.0 | 7.5% | 4.2% ± 0.2% | The discrepancy for MS ≥ 5.0 earthquakes can be explained by incomplete data in the Reasenberg catalogue.
 | MS ≥ 6.0 | 2.3% | 1.9% ± 0.1% |
 | MS ≥ 7.0 | 0.4% | 0.5% ± 0.1% |
New Zealand, 30 km and 5 days (Savage and Rupp, 2000) | MS ≥ 5.0 | 4.5 ± 0.7% | 4.3% ± 0.3% | The recalculated values of old subduction zones are compared. Old subduction zones have the highest foreshock and aftershock rates.
 | MS − FS ≥ 1.0 | 0.8 ± 0.3% | 0.8% ± 0.1% |
Taupo Volcanic Zone (Merrifield et al., 2004) | | 9.6 ± 1.7% | | No comparable zone in global catalogue.

7. Discussion The probability of an earthquake to be a foreshock, i.e. to be followed by an earthquake of equal or larger size, within 75 km and 7 days, is about 4% in a global earthquake catalogue. The result confirms earlier observations and gives us some confidence that our clustering methodology to define earthquake sequences is consistent with commonly used methods of declustering earthquake catalogues. The observed foreshock probability is significantly smaller than 10% as derived from models using aftershock parameters. Thus we conclude that there is some difference between foreshocks triggering mainshocks and mainshocks triggering the subsequent sequence. One possible cause for this difference could be the ‘cumulative nature’ of the aftershock parameters. The parameters are derived from aftershock sequences in which some large aftershocks might have triggered their own sequence of events while the foreshock analysis does not extend past the largest earthquake in a sequence. To investigate possible differences in foreshock and aftershock parameters we have looked at the distribution of time and magnitude differences between foreshock and mainshock pairs. The distribution of time differences between foreshocks and mainshocks is consistent with Omori’s law. The distribution of magnitude differences between foreshocks and mainshocks follows the Gutenberg-Richter relation. However, the b-value seems to be smaller than for aftershock sequences. We are still investigating in how far this is an artefact of the data selection. Physical differences between foreshocks and aftershocks could be another possible cause for the difference between observed and model foreshock probabilities. For example, foreshocks could be triggered by the nucleation of the mainshock rather than trigger the up-coming mainshock (e.g. Dodge et al., 1995). 
In this case, the average foreshock size, the area of foreshock occurrence, and the average number of foreshocks per mainshock should all increase with mainshock size. Felzer et al. (2004) could not find any evidence for this. Instead they confirmed that a single triggering mechanism explains the occurrence of foreshocks, aftershocks and multiplets. We will revisit this analysis in our global and Californian data sets and will report on our results in January.
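The b-value comparison above can be made concrete with the standard Aki (1965) maximum-likelihood estimator, b = log10(e) / (mean(M) − Mc + ΔM/2) for magnitudes binned at width ΔM. A minimal sketch, using hypothetical magnitude differences rather than the catalogue analysed here:

```python
import math

def b_value_mle(mags, m_c, dm=0.1):
    """Aki (1965) maximum-likelihood b-value estimate, with the usual
    half-bin correction for magnitudes binned at width dm."""
    sample = [m for m in mags if m >= m_c]
    mean_excess = sum(sample) / len(sample) - (m_c - dm / 2.0)
    return math.log10(math.e) / mean_excess

# Hypothetical foreshock-mainshock magnitude differences (illustration only).
diffs = [0.1, 0.3, 0.2, 0.5, 0.8, 0.4, 0.1, 0.6, 0.2, 1.1, 0.3, 0.7]
b = b_value_mle(diffs, m_c=0.1)
```

A smaller b-value for foreshock-mainshock magnitude differences than for aftershock sequences, as suggested above, would show up directly in this estimate.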

8. Conclusion Observed prospective foreshock probabilities are significantly lower than predicted by simple statistical models of aftershock occurrence. We conclude that foreshocks do not trigger mainshocks in the same manner as mainshocks trigger aftershocks. We consider two possible causes: statistical bias in data selection and physical differences between foreshocks and aftershocks. We are still looking for evidence of both causes in the two catalogues.

9. Acknowledgement The presenting author thanks Matt Gerstenberger and David Vere-Jones for stimulating discussions. The study is funded by the New Zealand EQC Research Foundation. The presenting author has gratefully received travel support from the conference organisers.

10. References
Christophersen, A. (2000), The probability of a damaging earthquake following a damaging earthquake, PhD Thesis, Victoria University of Wellington.
Console, R., M. Murru, and A. M. Lombardi (2003), Refining earthquake clustering models, J. Geophys. Res. 108, doi:10.1029/2002JB002130.
Evison, F. F., and D. A. Rhoades (2004), Demarcation and scaling of long-term seismogenesis, Pure Appl. Geophys. 161, doi:10.1007/s00024-003-2435-8.
Felzer, K. R., R. E. Abercrombie, and G. Ekstroem (2004), A common origin for aftershocks, foreshocks, and multiplets, Bull. Seism. Soc. Am. 94, 88-98.
Gutenberg, B., and C. F. Richter (1949), Seismicity of the earth and associated phenomena, Princeton University Press.
International Seismological Centre (2004), Bulletin Disks 1-12 [CD-ROM], Internatl. Seis. Cent., Thatcham, United Kingdom.
Jones, L. M. (1985), Foreshocks and time-dependent earthquake hazard assessment in Southern California, Bull. Seism. Soc. Am. 75, 1669-1679.
Kagan, Y. Y. (1999), Universality of the seismic moment-frequency relation, Pure Appl. Geophys. 155, 537-573.
Merrifield, A., M. K. Savage, and D. Vere-Jones (2004), Geographical distributions of prospective foreshock probabilities in New Zealand, New Zealand Journal of Geology & Geophysics 47, 327-339.
Ogata, Y. (1988), Statistical models for earthquake occurrences and residual analysis for point processes, J. Am. Stat. Ass. 83, 9-27.
Reasenberg, P. A. (1999), Foreshock occurrence before large earthquakes, J. Geophys. Res. 104 B3, 4755-4768.
Reasenberg, P. A., and L. M. Jones (1989), Earthquake hazard after a mainshock in California, Science 243, 1173-1176.
Reasenberg, P. A., and L. M. Jones (1994), Earthquake aftershocks: update, Science 265, 1251-1251.
Savage, M. K., and S. H. Rupp (2000), Foreshock probabilities in New Zealand, New Zealand Journal of Geology & Geophysics 43, 461-469.
Singh, S. K., and G. Suárez (1988), Regional variation in the number of aftershocks (mb ≥ 5) of large subduction zone earthquakes (Mw ≥ 7.0), Bull. Seism. Soc. Am. 78, 230-242.
Utsu, T., Y. Ogata, and R. S. Matsu’ura (1995), The centenary of the Omori formula for a decay law of aftershock activity, J. Phys. Earth 43, 1-33.
Utsu, T., and A. Seki (1955), A relation between the area of aftershock region and the energy of the mainshock, Zisin 7, 233-240.
Vere-Jones, D., J. Murakami, and A. Christophersen (2005), A further note on Bath’s law, Conference Proceedings, 4th International Workshop on Statistical Seismology, Japan, 2006.
Zhuang, J., Y. Ogata, and D. Vere-Jones (2004), Analyzing earthquake clustering features by using stochastic reconstruction, J. Geophys. Res. 109, B05301.

Seismicity migration and time-dependent stress transfer: supporting and conflicting evidence from the Umbria-Marche (Central Italy) seismic sequence.

Massimo Cocco, Lauro Chiaraluce and Flaminia Catalli

Istituto Nazionale di Geofisica e Vulcanologia, via di Vigna Murata 605, 00142, Rome, Italy

Corresponding Author: M. Cocco, [email protected]

Abstract: We study fault interaction by modeling the spatial and temporal pattern of seismicity through elastic stress transfer. The changes in the rates of earthquake production can be modeled both through time-dependent stress changes and by accounting for the frictional properties of fault populations. In this study we investigate the prolonged sequence of normal-faulting, moderate-magnitude earthquakes that struck central Italy (Umbria-Marche regions) in 1997-1998. We have modeled the aftershock patterns and the spatial and temporal migration of seismicity in terms of Coulomb stress changes and pore pressure relaxation in a fluid-saturated fractured medium. We use this case study to discuss the physical processes underlying earthquake triggering.

1. Introduction Earthquake triggering is currently investigated by modeling stress transfer to explain observations of measurable changes in earthquake occurrence rates following a causative event. The most common representation of elastic stress interaction is based on the calculation of Coulomb stress changes caused by earthquake dislocations (see Harris, 1998; King and Cocco, 2001, and references therein). Coulomb stress changes are widely used in the literature to investigate fault interaction between large magnitude earthquakes as well as to model aftershock patterns and seismicity rate changes over longer time windows. Despite numerous observations of earthquake triggering, many questions remain unanswered about the physical mechanisms that may explain earthquake occurrence and the spatio-temporal pattern of seismicity. In particular, it is well known that other time-dependent processes can modify the induced coseismic stress field, such as afterslip, viscoelastic relaxation of the lower crust and fluid flow. These stress perturbations occur at different temporal scales. Restricting the analysis to the aftershock duration, pore-fluid flow may play a dominant role in triggering seismicity. Diffusive processes of pore pressure relaxation in fractured and saturated rocks have been proposed to explain both earthquakes and induced seismicity (see Shapiro et al., 2003; Antonioli et al., 2005 and references therein). The efficiency of static stress change in triggering seismicity is supported by a large number of phenomenological studies (Stein, 1999; Toda and Stein, 2003; Steacy et al., 2005 and references therein). However, in order to answer questions about the underlying physical processes, frictional failure models and the fault constitutive behavior have to be taken into account.
This is necessary, for instance, to explain why very small stress changes lead to measurable changes in failure rates or why failure is often delayed from the occurrence of the stress change. To this goal the most common approach proposed in the literature is the Dieterich (1994) model, which uses rate-state friction theory to model failure rate changes. This model has been discussed in detail from a theoretical point of view (see Gomberg et al., 2005-a) and has been used to compare model predictions with observations (Gross, 2001; Toda and Stein, 2003; Toda et al., 2005 among others). The model predicts basic observations such as aftershock rates that decay inversely with time (Omori law). The delayed occurrence of triggered seismic events is explained in this model as the effect of the frictional response of the fault to the applied stress perturbations. It is important to point out that most frictional models only consider static stress changes to promote failure or to trigger faults that are close to failure. In the present study we will not address the problem of triggering of seismicity by dynamic stress perturbations. Nevertheless, it is important to emphasize that transient dynamic stress perturbations account for the time dependency of elastic coseismic stress changes. The Dieterich model is a key ingredient of recent approaches aimed at computing the change in probability of a large impending earthquake caused by a nearby seismic event (see Hardebeck, 2004; Gomberg et al., 2005-b). In this study we aim to discuss the modeling of seismicity patterns, and in particular the migration of seismicity, through time-dependent stress transfer. We apply our analyses to the seismic sequence that struck the Umbria-Marche region (central Italy) in 1997-1998. We first present the results of modeling fault interaction through elastic stress transfer among repeated moderate-magnitude main shocks (5

Figure 1. Map of seismicity and focal mechanisms of the main shocks. Colors refer to distinct time intervals.

2. The Umbria-Marche Seismic Sequence. The 1997 Umbria-Marche seismic sequence (central Italy) consists of six moderate magnitude earthquakes (5

Figure 2. Temporal evolution of seismicity.

Several features make the study of this sequence particularly interesting: (1) the availability of accurate earthquake locations (the resulting errors are of the order of tens of meters; Chiaraluce et al., 2003); (2) seismicity migrated toward the SE direction, progressively activating the main fault planes before the occurrence of their impending mainshocks (Chiaraluce et al., 2004); (3) Coulomb stress changes reveal that these normal faults interact with each other (Nostro et al., 2005), but they cannot explain the presence of seismicity in the hanging wall of the main faults (Miller et al., 2004) or the temporal migration of seismic events (Antonioli et al., 2005); (4) there is evidence of the presence of fluids in this area, revealed by previous studies (Chiodini et al., 2002).

3. Modeling Coulomb Stress Changes. Our modeling results show that 8 out of 9 main shocks of the sequence occur in areas of enhanced Coulomb stress, implying that elastic stress transfer may have promoted the occurrence of these moderate-magnitude events. Our calculations show that stress changes caused by normal faulting events re-activated and inverted the slip of a secondary NS-trending strike-slip fault (see event 8 in Figure 3), inherited from compressional tectonics, in its shallowest part (1-3 km). The adoption of an isotropic poroelastic model to compute the Coulomb stress perturbations (Cocco and Rice, 2002) performs better in explaining the locations of the main shocks of the 1997 seismic sequence. Of the 1517 available aftershocks, 82% are located in areas of positive stress changes on optimally oriented planes (OOPs) for Coulomb failure. Therefore, we might conclude that the spatial distribution of aftershocks is consistent with Coulomb stress changes, with seismicity primarily occurring in areas experiencing positive stress changes. However, our computations demonstrate that the number of promoted aftershocks decreases when stress changes are resolved onto the nodal planes of focal mechanisms computed from polarity data, and that the focal mechanisms associated with the OOPs do not coincide with the fault plane solutions computed from polarity data. Thus, we conclude that the joint prediction of both the location and the focal mechanism of an impending aftershock remains an extremely difficult task in the study area. It is important to note that most analyses of earthquake triggering relying on the correlation between seismicity rate changes and Coulomb stress perturbations do not account for the faulting mechanisms of promoted aftershocks. This approach has the advantage of providing a robust statistical measure of the increase in the rate of earthquake production in a seismogenic area.
However, it contains the same limitations we have experienced in our application to the Umbria-Marche 1997 seismic sequence. The comparison between 322 available real focal mechanisms and those inferred from Coulomb stress modeling reveals that no more than 45% of them are similar. The comparison does not improve if we compute the optimally oriented planes for Coulomb failure by fixing the strike orientation of OOPs using information derived from structural geology.
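The quantity being resolved in this analysis is the Coulomb failure stress change, ΔCFF = Δτ + μ′Δσn, computed on either OOPs or observed nodal planes. As a minimal sketch (not the authors' code; the stress tensor, fault angles and effective friction μ′ = 0.4 below are illustrative assumptions), a stress-change tensor can be resolved onto a plane defined by strike, dip and rake as follows:

```python
import math

def plane_vectors(strike_deg, dip_deg, rake_deg):
    """Unit fault-normal and slip vectors in (north, east, down) coordinates,
    using the standard Aki & Richards angle conventions."""
    s, d, r = (math.radians(a) for a in (strike_deg, dip_deg, rake_deg))
    n = (-math.sin(d) * math.sin(s), math.sin(d) * math.cos(s), -math.cos(d))
    u = (math.cos(r) * math.cos(s) + math.sin(r) * math.cos(d) * math.sin(s),
         math.cos(r) * math.sin(s) - math.sin(r) * math.cos(d) * math.cos(s),
         -math.sin(r) * math.sin(d))
    return n, u

def coulomb_change(dsigma, strike, dip, rake, mu_eff=0.4):
    """Delta CFF = delta tau (shear in slip direction) + mu' * delta sigma_n
    (tension positive), for a 3x3 stress-change tensor dsigma in Pa."""
    n, u = plane_vectors(strike, dip, rake)
    t = [sum(dsigma[i][j] * n[j] for j in range(3)) for i in range(3)]  # traction
    sigma_n = sum(t[i] * n[i] for i in range(3))  # normal stress change
    tau = sum(t[i] * u[i] for i in range(3))      # shear change along slip
    return tau + mu_eff * sigma_n

# Illustrative: 0.1 MPa of east-west extension resolved on a north-striking
# normal fault (strike 0, dip 60, rake -90); both terms promote failure here.
dsigma = [[0.0, 0.0, 0.0], [0.0, 1.0e5, 0.0], [0.0, 0.0, 0.0]]
dcff = coulomb_change(dsigma, strike=0.0, dip=60.0, rake=-90.0)
```

The sensitivity of the result to the chosen plane is exactly why resolving onto OOPs and onto observed nodal planes can give different aftershock scores.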

Figure 3. Main shocks, aftershocks and fault planes of the main events of the sequence.

We have tested in our study whether aftershock failure planes are controlled by geological structures by calculating different sets of OOPs with the strike angle fixed “a priori” by geological observations. Our results suggest that, despite the agreement between the average orientation of aftershock rupture planes and the orientation of the geological structures mapped at the surface, the structural constraints do not improve the agreement between OOPs and observed fault plane solutions. Our results show that a structural control does exist, even if it is not enough to allow the prediction of OOPs in agreement with focal mechanisms computed from polarity data. The modeling results presented in this study allow us to conclude that elastic stress transfer promoted the occurrence of the largest magnitude earthquakes of the Umbria-Marche seismic sequence. However, we confirm the findings of Kilb et al. (1997) concerning the inability to reliably reproduce aftershock mechanisms. This might be due, in our opinion, to: (1) the complex fault mesh and the 3D tectonic setting characterizing such a relatively small seismogenic volume; (2) the poor knowledge of the absolute stress amplitudes and of the depth dependence of the tectonic stress tensor; (3) the details of slip distributions at short wavelengths, which can affect the prediction of focal mechanisms for aftershocks located very close to the main rupture planes; (4) the presence of fluids and the pore pressure relaxation caused by coseismic (undrained) stress changes. Our interpretation of these outcomes is that, although several modeling results point out that elastic stress transfer contributes to promoting earthquakes and explains the interaction among the main faults where the largest magnitude events nucleated, Coulomb stress perturbations cannot explain the observed temporal migration of seismicity, nor can they correctly predict the aftershock faulting mechanisms.

4. Fluid Flow and seismicity pattern. We model the spatial and temporal evolution of seismicity during the 1997 Umbria-Marche (central Italy) seismic sequence in terms of subsequent failures promoted by fluid flow. The diffusion process of pore-pressure relaxation is represented as a pressure perturbation generated by coseismic stress changes and propagating through a fluid-saturated medium. The pore pressure relaxation is modeled through Darcy's law. Using this approach we have evaluated an average value of hydraulic diffusivity for the study area. The estimated values of isotropic diffusivity range between 22 and 90 m2/s, which is consistent with observations in other areas. We have also calculated a value of anisotropic diffusivity (see Antonioli et al., 2005): it is largest along the average strike (N140°) direction of the activated faults, yielding an anisotropic value of 250 m2/s. Our results suggest that the observed spatio-temporal migration of seismicity is consistent with fluid flow. Our interpretation is that pore-pressure relaxation and the drained response of a poro-elastic fluid-saturated medium are activated by the coseismic stress changes caused by earthquakes. The evidence of intermediate-deep fluid circulation and CO2 degassing in this portion of the Apennines (Chiodini et al., 2000) further corroborates our results. For this area, Miller et al. (2004) proposed that seismicity on the hanging wall of normal faults is promoted by a pressure pulse originating (co-seismically) from this known deep source of trapped high-pressure CO2 and propagating into the damage region created by the earthquake. Our results suggest that fluid flow and pore pressure relaxation explain the observed migration of seismicity along the fault strike in the southern portion of the seismogenic volume. We emphasize that in complex, highly fractured media both elastic and poro-elastic interactions act together in promoting seismicity and controlling its spatial and temporal pattern.
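The link between the diffusivity estimates above and the observed migration can be illustrated with the parabolic triggering-front relation r(t) = sqrt(4πDt) commonly used for pore-pressure diffusion models (Shapiro et al., 2003). The sketch below simply spans the isotropic range of 22-90 m2/s quoted above; it is an illustration, not the authors' computation:

```python
import math

def triggering_front_km(diffusivity, t_days):
    """Radius (km) of the pore-pressure triggering front r = sqrt(4*pi*D*t)
    for hydraulic diffusivity D in m^2/s after t_days days."""
    t_seconds = t_days * 86400.0
    return math.sqrt(4.0 * math.pi * diffusivity * t_seconds) / 1000.0

# Isotropic diffusivity range estimated for the Umbria-Marche area.
r_min = triggering_front_km(22.0, 30.0)  # slowest front after one month
r_max = triggering_front_km(90.0, 30.0)  # fastest front after one month
```

With these values the front migrates a few tens of kilometres within a month.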

5. Conclusive remarks Our numerical simulations allowed us to investigate earthquake interaction and to provide a physical model to explain main shock and aftershock occurrence. We have shown that elastic and poro-elastic interactions are both necessary to explain the evident spatial and temporal migration of seismicity. We are now trying to verify whether a frictional failure model, such as the Dieterich (1994) model, can predict the observed migration of seismicity. Our preliminary results are encouraging, and this raises the question of which physical parameters of this approach control the spatio-temporal migration. We aim to discuss these issues in this lecture.

6. References
Antonioli, A., D. Piccinini, L. Chiaraluce and M. Cocco (2005), Fluid flow and seismicity pattern: evidence from the 1997 Umbria-Marche (central Italy) seismic sequence, Geophys. Res. Lett., in press.
Chiaraluce, L., W. L. Ellsworth, C. Chiarabba and M. Cocco (2003), Imaging the complexity of an active normal fault system: The 1997 Colfiorito (central Italy) case study, J. Geophys. Res., 108, doi:10.1029/2002JB002166.
Chiaraluce, L., et al. (2004), Complex normal faulting in the Apennines thrust-and-fold belt: The 1997 seismic sequence in central Italy, Bull. Seism. Soc. Am., 94(1), 99-116.
Chiodini, G., F. Frondini, C. Cardellini and L. Peruzzi (2000), Rate of diffuse carbon dioxide Earth degassing estimated from carbon balance of regional aquifers; the case of central Apennine, Italy, J. Geophys. Res., 105, 8423-8434.
Cocco, M., and J. R. Rice (2002), Pore pressure and poroelasticity effects in Coulomb stress analysis of earthquake interactions, J. Geophys. Res., 107, B2, doi:10.1029/2000JB000138.
Dieterich, J. (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99, 2601-2618.
Gomberg, J., M. E. Belardinelli, M. Cocco, and P. A. Reasenberg (2005a), Time-dependent earthquake probabilities, J. Geophys. Res., 110, doi:10.1029/2004JB003404.
Gomberg, J., P. A. Reasenberg, M. Cocco, and M. E. Belardinelli (2005b), A frictional population model of seismicity rate change, J. Geophys. Res., 110, doi:10.1029/2004JB003405.
Gross, S. (2001), A model of tectonic stress state and rate using the 1994 Northridge earthquake sequence, Bull. Seismol. Soc. Am., 91, 263-275.
Hardebeck, J. L. (2004), Stress triggering and earthquake probability estimates, J. Geophys. Res., 109, B04310, doi:10.1029/2003JB002437.
Harris, R. A. (1998), Introduction to special section: stress triggers, stress shadows, and implications for seismic hazard, J. Geophys. Res., 103, 24347-23358.
Kilb, D., M. Ellis, J. Gomberg, and S. Davis (1997), On the origin of diverse aftershock mechanisms following the 1989 Loma Prieta earthquake, Geophys. J. Int., 128, 557-570.
King, G. C. P., and M. Cocco (2001), Fault interaction by elastic stress changes: new clues from earthquake sequences, Advances in Geophysics, 44, 1-38.
Miller, S. A., C. Collettini, L. Chiaraluce, M. Cocco, M. Barchi, J. Boris and P. Kraus (2004), Aftershocks driven by a high-pressure CO2 source at depth, Nature, 427, 724-727.
Nostro, C., L. Chiaraluce, M. Cocco, D. Baumont, and O. Scotti (2005), Coulomb stress changes caused by repeated normal faulting earthquakes during the 1997-1998 Umbria-Marche (Central Italy) seismic sequence, J. Geophys. Res., 110, doi:10.1029/2004JB003386.
Parsons, T. (2005), Significance of stress transfer in time-dependent earthquake probability calculations, J. Geophys. Res., 110, doi:10.1029/2004JB003190.
Shapiro, S. A., R. Patzig, E. Rothert and J. Rindshwentner (2003), Triggering of seismicity by pore-pressure perturbations: permeability-related signature of the phenomenon, Pure Appl. Geophys., 160, 1051-1066.
Steacy, S., J. Gomberg and M. Cocco (2005), Introduction to special section: stress transfer, earthquake triggering and time-dependent seismic hazard, J. Geophys. Res., 110, doi:10.1029/2005JB003692.
Stein, R. S. (1999), The role of stress transfer in earthquake occurrence, Nature, 402, 605-609.
Toda, S., and R. S. Stein (2003), Toggling of seismicity by the 1997 Kagoshima earthquake couplet: A demonstration of time-dependent stress transfer, J. Geophys. Res., 108, B12, 2567, doi:10.1029/2003JB002527.
Toda, S., R. Stein, K. Richards-Dinger, and S. Bozkurt (2005), Forecasting the evolution of seismicity in southern California: Animations built on earthquake stress transfer, J. Geophys. Res., doi:10.1029/2005JB003415.

Tidal Triggering of Earthquakes: Response to Fault Compliance?

E.S. Cochran1

1Institute of Geophysics and Planetary Physics, Scripps, La Jolla, CA 92093, USA

Corresponding Author: E. S. Cochran, [email protected]

Abstract: Clear tidal forcing of earthquakes has recently been observed by several local and global studies. Using tidal amplitudes in addition to tidal phase, the stress level required for tidal forcing can be determined. Cochran et al. (2004) find that natural faults are highly responsive to relatively small stress changes and that faults behave more weakly than would be expected from laboratory rate-and-state friction and stress corrosion measurements. Stress changes on the order of a tenth of a bar can modulate the timing of earthquakes, similar to what is measured for aftershock sequences. In addition, recent geodetic and seismologic studies suggest that faults are not simple planes of slip, but zones of highly damaged, sheared rock. These damage zones are highly compliant and may result in higher stress concentrations in the fault than in the surrounding intact rock. Thus, fault zone compliance may reduce the stress loads required for rupture.

1. Introduction Seismological data are often mined to determine the amplitude of stress perturbation, whether natural or induced, required to trigger an earthquake. One application of standard statistics to a basic seismological question is whether Earth tides affect the timing of earthquakes. For a century or longer, studies have been conducted to determine whether Earth tides modulate the timing of earthquakes. Each of these studies depends on the accuracy of the tidal calculations and the statistical tests applied to the data. Here, an additional step is taken to match recent seismologic and geodetic observations of fault zone properties with the notion that relatively low amplitude stresses trigger earthquakes. Tidal modulation of the timing of earthquakes is based on the assumption that faults are more likely to rupture when stress levels encourage failure. Computing tidal stresses is not trivial: it includes the direct solid Earth tide, caused by the gravitational pull of the Sun and the Moon, and the indirect Earth tide due to ocean loading. An accurate accounting of the tides is an imperative first step in a global study of tidal triggering (e.g. Tanaka et al., 2002; Cochran et al., 2004). Local studies have had greater success at showing a tidal correlation, as tidal stress can more easily be inferred from local tide gauge measurements (e.g. Tolstoy et al., 2002). Comparing tidal triggering rates with laboratory-derived measurements of triggering by stress loads suggests that natural faults are more responsive to low amplitude stress changes. Therefore, a mechanism is needed to explain failure of faults under relatively low applied stress. The answer may lie in recent seismological and geodetic measurements that suggest faults are significantly more compliant than the surrounding country rock. Seismological measurements include observations of velocity reduction within a 100-200 m fault core by fault zone trapped waves (e.g. Li et al., 2003; Li et al., 2004; Rubinstein and Beroza, 2004). Shear wave splitting measurements indicate a highly sheared material with a width that corresponds to the low-velocity trapping structure (Cochran et al., 2005; Boness and Zoback, 2004). InSAR provides further clues of a highly compliant fault zone that experiences larger strains than the surrounding intact rock (Fialko et al., 2002). We infer a link between the responsiveness of faults to low amplitude tidal stresses and the observed fault zone compliance.

2. Observations of Tidal Triggering Cochran et al. (2004) determined the amplitude of stress required to trigger shallow, thrust earthquakes by Earth tides. Two independent statistical tests were used to determine and compare significance levels: Schuster's test and a binomial test. The Schuster test checks for periodicity in a sequence and is used to determine whether the timing of the earthquakes is modulated by the periodic fluctuation in the tidal stress. Not only do we find a clear periodicity in the timing of events, but we also see a clear peak of events at the tidal phase (θ) defined to encourage failure. The vector sum over the tidal phase angles is:

D^2 = \left( \sum_{i=1}^{N} \cos\theta_i \right)^2 + \left( \sum_{i=1}^{N} \sin\theta_i \right)^2 ,  (1)

The resulting p-value gives a measure of how random (p ≈ 1) or non-random (p ≈ 0) the distribution is:

p = \exp\left( -\frac{D^2}{N} \right) ,  (2)

Schuster’s test indicates a greater than 99% probability that the events are non-randomly distributed. In addition, the peak of events is shown to be at nearly 0° tidal phase, the phase expected to promote failure. To confirm the results of the Schuster test, Cochran et al. (2004) also employ a binomial test. This simple statistic tests whether the earthquakes are divided equally across the tidal phase. The tidal phase is divided into two bins, each covering half of the tidal cycle. One bin contains tidal phases expected to most promote failure (-90° < θ < 90°) and the other bin

Figure 1 Percentage of excess events versus the peak tidal stress. Points are located at the mean peak tidal stress and solid horizontal lines indicate the range. Grey circles are global thrust data; solid triangle denotes California strike-slip data. Crosses and diamonds show the least squares fit to rate- and state- dependent friction and stress corrosion, respectively. From Cochran et al., (2004).

contains the remaining tidal phases. A probability of >99.99% that earthquakes correlate with the tides is found for large tidal stresses (> 0.01 MPa). The amplitude of tidal stress required to trigger an earthquake is determined by dividing the data into four bins based on the amplitude of the background peak tidal stress. A clear trend of increasing triggering of events with increasing peak tidal stress amplitude is observed (Figure 1). Laboratory experiments have been conducted to determine the relationship between the number of events triggered and the applied load. The data are fit to rate-and-state friction and stress corrosion. To fit both laboratory-derived trends we have to assume parameters for the fault viscosity or material properties that suggest natural faults are more responsive to stress loads than those measured in the lab.
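Equations (1) and (2) are straightforward to implement. A minimal sketch with hypothetical phase angles (not the Cochran et al. catalogue):

```python
import math

def schuster_p(phases_deg):
    """Schuster's test: p = exp(-D^2/N), where D is the length of the
    resultant of N unit phase vectors; small p indicates non-random
    clustering of events at a preferred tidal phase."""
    n = len(phases_deg)
    c = sum(math.cos(math.radians(t)) for t in phases_deg)
    s = sum(math.sin(math.radians(t)) for t in phases_deg)
    return math.exp(-(c * c + s * s) / n)

# Hypothetical catalogues: phases clustered near the failure-promoting
# phase (0 degrees) versus phases spread evenly over the cycle.
p_clustered = schuster_p([-30 + 6 * i for i in range(11)])  # small p
p_uniform = schuster_p([36.0 * i for i in range(10)])       # p near 1
```

For uniformly spread phases the resultant vanishes and p approaches 1, while clustering near a preferred phase drives p toward 0.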

3. Fault Zone Compliance The responsiveness of faults to relatively small stress loads suggests a dependence on a more complex fault structure than a simple slip plane. Several seismological and geodetic studies have found evidence for a complex zone of deformation near the main slip plane. These studies may explain the relative weakness of faults to applied stresses, whether tidal, seismic, or man-induced. I will briefly outline recent studies indicating that faults have reduced shear moduli reflecting a highly damaged zone with a strong shear fabric. Seismologic studies have been conducted near numerous active faults and a few common features have been observed. Clear indications of reduced P and S velocities have been observed, in addition to waves trapped in the low velocity fault core. The shear velocities in the San Andreas Fault are reduced by 30-40% compared to the nearby intact rock (Li et al., 2004). The velocity reduction has been shown to vary across the seismic cycle and may account for some of the variation in measured velocities. In addition, shear wave splitting studies conducted near active faults suggest faults have a shear fabric that reflects the highly damaged nature of the fault zone (Boness and Zoback, 2004; Cochran et al., 2005). Shear wave splitting is used to determine the anisotropic field near a fault and can be used to look for variation near a fault plane. Peng and Ben-Zion (2004) found a large spatial variation in splitting measurements that depended on whether the wave traveled through the main fault core. A recent study by Cochran et al. (2005) of shear wave splitting along the Parkfield segment of the San Andreas fault shows a clear spatial variation in splitting measurements. Fast directions preferentially oriented parallel to the fault within 100 m of the main fault plane correspond to the location of the main trapping structure detailed by Li et al. (2004) and discussed above.
InSAR studies clearly highlight the compliance of faults. Fialko et al. (2002) showed a clear coseismic concentration of strain on faults near the M7.1 Hector Mine earthquake. The strain is concentrated on a zone that is approximately 1 km wide with an inferred reduction in shear moduli of 50%. This compliant zone extends to at least 5 km depth (Fialko et al., 2002). While the width of the compliant zone observed with InSAR is wider than that seen with the seismologic data, the overall implications are the same. Likely the two data sets are sampling different aspects of the same structure with the damage grading from a highly sheared fault core out to intact rock (figure 2).

4. Discussion Clear evidence for tidal triggering has been observed in several recent studies, suggesting that faults are more highly responsive to applied stress loads than expected from laboratory results.

Seismic and geodetic data suggest faults are highly compliant and localize strain in the highly damaged fault core. The triggering of faults by relatively small stress loads (> 0.01 MPa) may be attributed to the physical structure of the fault itself.

Figure 2 Schematic cross-section of an active fault zone depicting a gradient in damage across the fault. The fault core is a zone of highly deformed rock that contains the main slip planes. Outside of the main fault core the cracks are mostly mode 1 opening fractures that decrease in density away from the fault (after Chester et al., 2003).

5. Acknowledgement The work presented here is a collaboration with many researchers including Y. Fialko, Y.-G. Li, P. Shearer, S. Tanaka, and J.E. Vidale. We also thank J. Dieterich and Z. Peng for useful discussions.

6. References
Boness, N.L. and M.D. Zoback, Stress-induced seismic velocity anisotropy and physical properties in the SAFOD Pilot Hole in Parkfield, CA, Geophys. Res. Lett. 31, L15S17, doi:10.1029/2003GL019020, 2004.
Cochran, E.S., J.E. Vidale, and Y.-G. Li, Near-fault anisotropy following the Hector Mine earthquake, J. Geophys. Res. 108, 2436, doi:10.1029/2002JB002352, 2003.
Cochran, E.S., J.E. Vidale and S. Tanaka, Earth tides can trigger shallow thrust fault earthquakes, Science 306, 1164-1166, 2004.
Cochran, E.S., Y.-G. Li, and J.E. Vidale, Anisotropy in the shallow crust observed around the San Andreas Fault before and after the 2004 M6 Parkfield earthquake, Bull. Seis. Soc. Am., in review, 2005.
Fialko, Y., D. Sandwell, D. Agnew, M. Simons, P. Shearer, B. Minster, Deformation on nearby faults induced by the 1999 Hector Mine Earthquake, Science 297, 1858-1862, 2002.
Li, Y.-G., J.E. Vidale, E.S. Cochran, Low-velocity damaged structure of the San Andreas Fault at Parkfield from fault zone trapped waves, Geophys. Res. Lett. 31, L12S06, doi:10.1029/2003GL019044, 2004.
Peng, Z., and Y. Ben-Zion, Systematic analysis of crustal anisotropy along the Karadere-Duzce branch of the North Anatolian Fault, Geophys. J. Int. 159, 253-274, 2004.
Tanaka, S., M. Ohtake, H. Sato, Evidence for tidal triggering of earthquakes as revealed from statistical analysis of global data, J. Geophys. Res. 107, 2211, doi:10.1029/2001JB001577, 2002.
Tolstoy, M., F.L. Vernon, J.A. Orcutt, F.K. Wyatt, Breathing of the seafloor: Tidal correlations of seismicity at Axial volcano, Geology 30, 503-506, 2002.

Applying the Rate-and-State friction law to an epidemic model of earthquake clustering

R. Console, M. Murru, and F. Catalli

Istituto Nazionale di Geofisica e Vulcanologia, Rome, Italy

Corresponding Author: R. Console, [email protected]

Abstract: Stochastic modeling provides a good description of earthquake clustering, usable for earthquake forecasting purposes. Nevertheless, it does not contain much information about the physical properties of the earthquake process. A possible physical explanation is given by the elastic stress-transfer model and the rate-and-state constitutive law, which in combination allow modeling of both the spatial and the temporal behavior of earthquake interaction. However, a complete application of these physical concepts to a stochastic model is difficult owing to the lack of the necessary information on the location and fault mechanism of all the moderate earthquakes.

1. Introduction Among the different physical models proposed to explain the phenomenon of earthquake interaction, the model of the static stress change produced by a dislocation in the elastic medium has become one of the most popular (Toda et al., 2005; Freed, 2005). Combined with the rate-and-state friction constitutive law (Dieterich, 1994), this model has been successfully applied both to sequences of large earthquakes and to aftershock series (Stein et al., 1997; Toda et al., 1998, 2005; Toda and Stein, 2003). The success of this model has suggested the idea of using it for forecast purposes. It is commonly understood that any claim of predictive value for an earthquake forecast model must be supported by a statistical test. A popular way to perform such a test is to use the concept of likelihood (Ogata, 1998; Console, 2001; Console and Murru, 2001; Schorlemmer et al., 2004). In order to implement a likelihood test of an earthquake forecast hypothesis, the model must be described quantitatively, so as to allow the computation of the occurrence rate density in the space-time-magnitude domain given the history of the previous seismic activity in the area concerned. In this presentation we review the steps to be followed and the problems encountered on the way towards the application of the stress transfer concept and the rate-and-state friction constitutive law to an algorithm of epidemic type for earthquake clustering.

2. A stochastic model for earthquake clustering and its likelihood Following Ogata (1998) and Console and Murru (2001), earthquakes are regarded as the realization of a point process modeled by a generalized Poisson distribution. It is assumed that the Gutenberg-Richter law describes the magnitude distribution of all the earthquakes in a sample, with a constant b-value. The occurrence rate density of earthquakes in space and time is modeled by the sum of two terms, one representing the independent, or spontaneous, activity, and the other representing the activity induced by previous earthquakes:

λ(x, y, t, m) = f_r λ0(x, y, m) + Σ_{i=1}^{N} H(t − t_i) λ_i(x, y, t, m)    (1)

The first term depends only on space, and is modeled by a continuous function of the geometrical co-ordinates, obtained by smoothing the discrete distribution of the past seismicity.

The second term depends also on time, and it is factorized into two terms that respectively depend on the spatial distance and on the time difference from the past earthquakes. A simple way to express these two terms is to assume an isotropic spatial distribution and a time dependence given by the modified Omori law. From the expected rate density, the log-likelihood of any realization of the process (actually represented by an earthquake catalog) can be computed straightforwardly through the following equation:
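The structure of equation (1) and the factorization just described can be sketched numerically. The kernel shapes and all parameter values below (mu, k, alpha, the Gaussian spatial kernel and the Omori constants) are illustrative assumptions for the sketch, not the fitted model of the paper:

```python
import numpy as np

def omori(dt, c=0.01, p=1.1):
    """Modified Omori time kernel (illustrative parameter values)."""
    return (dt + c) ** (-p)

def isotropic_kernel(r2, d=1.0):
    """Isotropic (azimuth-independent) Gaussian spatial kernel,
    as a function of squared epicentral distance r2 (assumed shape)."""
    return np.exp(-r2 / d) / (np.pi * d)

def rate_density(x, y, t, events, mu=1e-3, k=0.01, alpha=1.0):
    """Equation (1): spontaneous term plus triggered contributions.
    `events` holds rows (x_i, y_i, t_i, m_i); the `ti < t` check plays
    the role of the Heaviside factor H(t - t_i)."""
    lam = mu  # spontaneous (background) term, here taken spatially uniform
    for xi, yi, ti, mi in events:
        if ti < t:  # H(t - t_i)
            r2 = (x - xi) ** 2 + (y - yi) ** 2
            # productivity increasing exponentially with magnitude (ETAS-like)
            lam += k * np.exp(alpha * mi) * isotropic_kernel(r2) * omori(t - ti)
    return lam
```

Before the first event only the background term contributes; after it, the rate is raised by the space-time kernel of that event.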

ln L = Σ_{j=1}^{N} ln[λ(x_j, y_j, t_j, m_j) V0] − ∫_X ∫_Y ∫_T ∫_M λ(x, y, t, m) dx dy dt dm ,    (2)

where λ(x_j, y_j, t_j, m_j) is the occurrence rate density computed for the earthquakes of the catalog and V0 is a dimensional constant.
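Equation (2) can be evaluated for any candidate rate function; a minimal sketch in which the four-dimensional integral is approximated by plain Monte Carlo sampling (grid quadrature or analytic integration would be used in practice, and V0 is taken equal to one):

```python
import numpy as np

def log_likelihood(catalog, rate, volume_bounds, n_mc=20000, seed=0):
    """Point-process log-likelihood of equation (2): sum of log rates at
    the observed events minus the integral of the rate over the
    space-time-magnitude volume, here estimated by Monte Carlo.
    `rate(x, y, t, m)` is any rate-density function; `volume_bounds` is
    ((x0, x1), (y0, y1), (t0, t1), (m0, m1))."""
    rng = np.random.default_rng(seed)
    # first term: log rate at each observed earthquake
    term1 = sum(np.log(rate(x, y, t, m)) for x, y, t, m in catalog)
    # second term: Monte Carlo estimate of the 4-D integral
    lows = np.array([b[0] for b in volume_bounds], dtype=float)
    highs = np.array([b[1] for b in volume_bounds], dtype=float)
    pts = rng.uniform(lows, highs, size=(n_mc, 4))
    vol = np.prod(highs - lows)
    term2 = vol * np.mean([rate(*p) for p in pts])
    return term1 - term2
```

For a constant rate μ over a unit volume the result reduces to N ln μ − μ, which gives a quick sanity check of the implementation.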

3. The application of the stress transfer model Equation (1) does not contain any physical explanation for the phenomenon of earthquake triggering. Nevertheless, Console et al. (2005) tried to modify the form of equation (1) by taking into account some physical constraints that help reduce the number of free parameters in the model. They assumed that the occurrence rate density is described by the equation introduced by Dieterich (1994) as a consequence of the rate-and-state constitutive law:

R(t) = R0 / { [exp(−Δτ/(Aσ)) − 1] exp(−t/t_a) + 1 }    (3)

where R0 is the previous background rate density, Δτ is the stress change at the point where R(t) is estimated, and A, σ and t_a are parameters of the constitutive law. The latter three parameters are related to the tectonic stress rate τ̇ by the equation t_a = Aσ/τ̇. Under a number of very crude assumptions, Console et al. (2005) found an expression with only two free parameters for the occurrence rate R′(t) = R(t) − R0 of events triggered by a number of previous earthquakes. This expression can be used in place of λ(t) in equation (1). It can be applied to a catalog of earthquakes containing only information about the origin time, epicentral coordinates and magnitude of every event. The expression still preserves a circular spatial shape without any azimuth dependence, as in the purely stochastic models. In the applications to the seismic catalogs of Japan, Italy and California, the two free parameters were obtained by a maximum likelihood best fit. The comparison with the results of the purely stochastic model applied to the same catalogs has shown that the new model incorporating the rate-and-state physics performs better than the stochastic model only for the catalog of Japan.
A possible explanation of this circumstance is that the purely stochastic model contains a free parameter that adjusts the spatial distribution of triggered seismicity to the location error of the epicenters in the catalog (typically 2.5 km for Italy and 1.2 km for California). On the contrary, the physical model has a spatial distribution scaled with the magnitude of the events, implying a strong decay at distances smaller than the location error in the low magnitude range. Only for the Japanese seismicity, with a magnitude threshold of 4.5, is this effect not too relevant. All the models considered so far are characterized by an isotropic spatial distribution and by positive triggering, so that the effect of previous earthquakes can only be to increase the occurrence rate of the following activity. If we computed the stress change at any point around a given earthquake source, we would find that there exist areas where the Coulomb stress change is negative and the occurrence rate becomes smaller than the previous background rate. In this case equation (1) is not applicable any more, and a different algorithm should be adopted to compute the likelihood of the model.
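For reference, the Dieterich expression in equation (3) can be evaluated directly. The parameter values below are illustrative, not those fitted to the Japanese, Italian or Californian catalogs:

```python
import numpy as np

def dieterich_rate(t, dtau, R0=1.0, A_sigma=0.04, t_a=10.0):
    """Seismicity rate after a stress step `dtau` (equation (3),
    Dieterich 1994):
        R(t) = R0 / ((exp(-dtau/A_sigma) - 1) * exp(-t/t_a) + 1).
    Units are illustrative (stress in MPa, time in years). A positive
    stress step raises the rate above R0 (aftershocks); a negative one
    lowers it (stress shadow); both relax back to R0 over t_a."""
    return R0 / ((np.exp(-dtau / A_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)
```

The sign behaviour, rate increase for Δτ > 0 and a stress shadow for Δτ < 0, is exactly the property discussed in the paragraph above that equation (1) cannot represent.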

4. Developing a more realistic approach A new method described by Toda et al. (2005) consists in computing the Coulomb stress changes produced by the largest earthquakes in a catalog and plotting the respective occurrence rate density obtained from equation (3) on a dense grid of points distributed over the study area. We tried to apply this method to a seismic sequence that occurred in Umbria, Central Italy, between September 1997 and April 1998. In this sequence, the source parameters (including fault dimensions and focal mechanism) for nine earthquakes of magnitude equal to or larger than 5.0 had been studied in detail, and the respective fault parameters are available in the literature. In our algorithm we have reduced the number of free parameters to just one: Aσ in equation (3). To do so, we have hypothesized that the stress drop is equal to 2.5 MPa for all the earthquakes. Moreover, we have used an original algorithm for obtaining the spatially variable tectonic stress rate τ̇ from the background seismicity. The expected seismicity rate is compared with the real observations of seismic activity. The results are encouraging, and it is possible to observe how the Coulomb stress changes affect the spatial and temporal distribution of the seismicity as predicted by the theory. However, a complete match of the modeled earthquake distribution with the observations is not achievable, owing to uncertainties in the epicentral locations and to the lack of details about the stress heterogeneity on the faults. Moreover, the algorithm is applicable only to a limited number of major earthquakes as causative sources, while all the minor events play the role of triggered earthquakes only. This assumption fails if, as shown by Helmstetter (2003) and Helmstetter et al. (2005), small earthquakes play a relevant role in the stress transfer and triggering of subsequent seismicity.

5. Conclusion Stochastic modeling allows the computation of the expected earthquake rate density over a continuous space-time volume, suitable both for validating a model against others and for real-time forecasts. The significant steps made during the last decades in the physical modeling of earthquake clustering provide a tool for the refinement of these stochastic models. Jointly with the improvement of the seismological observations, these steps represent progress towards a practical application of the scientific research on earthquake clustering.

6. Acknowledgement We express our gratitude to Massimo Cocco for stimulating discussions that have improved our work.

7. References
Console, R. (2001), Testing earthquake forecast hypotheses, Tectonophysics, 338, 261-268.
Console, R., and Murru, M. (2001), A simple and testable model for earthquake clustering, J. Geophys. Res., 106, 8699-8711.
Console, R., Murru, M., and Catalli, F. (2005, in press), Physical and stochastic models of earthquake clustering, Tectonophysics.
Dieterich, J. H. (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99, 2601-2618.
Freed, A. M. (2005), Earthquake triggering by static, dynamic, and postseismic stress transfer, Annu. Rev. Earth Planet. Sci., 33, 335-367, doi:10.1146/annurev.earth.33.092203.122505.
Helmstetter, A. (2003), Is earthquake triggering driven by small earthquakes?, Phys. Rev. Lett., 91, 058501.
Helmstetter, A., Kagan, Y. Y., and Jackson, D. D. (2005), Importance of small earthquakes for stress transfer and earthquake triggering, J. Geophys. Res., 110, B05S08, doi:10.1029/2004JB003286.
Ogata, Y. (1998), Space-time point-process models for earthquake occurrences, Ann. Inst. Statist. Math., 50, 379-402.
Schorlemmer, D., Gerstenberger, M., Wiemer, S., Field, E., Jones, L., and Jackson, D. D. (2004), T-RELM Testing Center: Rigorous testing of probabilistic earthquake forecast models, SSA Meeting 2004, Lake Tahoe, USA.
Stein, R. S., Barka, A. A., and Dieterich, J. H. (1997), Progressive failure on the North Anatolian fault since 1939 by earthquake stress triggering, Geophys. J. Int., 128, 594-604.
Toda, S., Stein, R. S., Reasenberg, P. A., and Dieterich, J. H. (1998), Stress transferred by the MW=6.5 Kobe, Japan, shock: Effect on aftershocks and future earthquake probabilities, J. Geophys. Res., 103, 24,543-24,565.
Toda, S., and Stein, R. S. (2003), Toggling of seismicity by the 1997 Kagoshima earthquake couplet: A demonstration of time-dependent stress transfer, J. Geophys. Res., 109, B12, 2567, doi:10.1029/2003JB002527.
Toda, S., Stein, R. S., Richards-Dinger, K., and Bozkurt, S. (2005, in press), Forecasting the evolution of seismicity in southern California: Animations built on earthquake stress transfer, J. Geophys. Res.

Anomalies in the temporal decay of seismic sequences

S. D’Amico1, D. Caccamo2, F. Parrillo2, F. M. Barbieri2 and C. Laganà2

1Istituto Nazionale di Geofisica e Vulcanologia,via Di Vigna Murata, 60, 00143, Roma, Italy 2 Department of Earth Sciences, University of Messina, Salita Sperone 31, 98166 Messina, Italy

Corresponding Author: S. D’Amico, [email protected]; [email protected]

Abstract: The purpose of this work is to highlight some anomalies in the temporal decay of seismic sequences following a mainshock with magnitude M≥7. The decay can be modeled as a non-stationary Poissonian process whose intensity function is given by the Omori formula. The method used estimates the absolute value of the differences between the observed and the theoretical temporal trend. Several days before the occurrence of a large aftershock we can see some “seismic anomalies in the decay” that could be considered as precursors.

1. Introduction The employment of statistics is useful to estimate the probability of future earthquakes. These probabilities are very interesting from the point of view of earthquake physics, and crucial for attempts to forecast the hazards due to large, damaging earthquakes. At present no theoretical model successfully describes earthquake recurrence, so it is necessary to adapt probability distributions to the earthquake history. Aftershocks typically begin immediately after a mainshock and are distributed through the source volume. Usually the frequency of occurrence of aftershocks decays rapidly according to the Omori law:

n(t) = c / (k + t)^p    (1)

where n(t) is the frequency of aftershocks at time t after the mainshock; k, c and p are constants that depend on the size of the earthquake. The p-value is usually close to 1.0-1.4. The distribution in space of the aftershocks is often related to the fault area or to its length. There are many empirical relations to estimate the fault area (A) or the fault length (L). Utsu developed the empirical formula:

log A = 1.02 Ms + 6.0    (2)

Another of these relations was found by Utsu (1969), relating the fractured fault-segment length L to the magnitude of the seismic event:

log L = 0.5 Ms − 1.8    (3)
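The Omori decay (1) and the Utsu scaling relations (2)-(3) translate directly into code. The Omori parameters below are illustrative and must be fitted for each sequence; the units of A and L are those implied by the original relations:

```python
def omori_rate(t, c=1.0, k=0.1, p=1.1):
    """Omori decay n(t) = c / (k + t)**p (equation (1)); c, k and p are
    illustrative values, to be fitted sequence by sequence."""
    return c / (k + t) ** p

def utsu_area(ms):
    """Fault area from equation (2): log10 A = 1.02 Ms + 6.0
    (units as in the original relation)."""
    return 10 ** (1.02 * ms + 6.0)

def utsu_length_km(ms):
    """Fault length from equation (3): log10 L = 0.5 Ms - 1.8 (L in km)."""
    return 10 ** (0.5 * ms - 1.8)
```

For instance, equation (3) gives roughly 50 km of fault length for an Ms = 7 event, which sets the size of the data-selection square used later in the analysis.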

Another important formula is the Gutenberg-Richter relation (1954) between size and frequency of occurrence. They first proposed that the frequency of occurrence can be represented by the formula:

log N = a − bM (4)

in a given period of time and in a given region. In the previous formula N is the number of earthquakes with a fixed magnitude M, and a and b are constants characteristic of the given area. Usually the constant b is called the b-value. In general, b-values are between 2/3 and 1 and do not show much regional variability. Concerning the occurrence of large aftershocks in the first ten days after a mainshock with magnitude greater than 7, we analyzed 153 sequences all over the world from 1973 to 2004. We noticed that in the first ten days there is a probability of about 81% of having a large aftershock (with magnitude greater than 5.5).
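The b-value in equation (4) can be estimated from a catalog. The sketch below uses the standard Aki-Utsu maximum-likelihood estimator, which is our addition (the text only quotes the Gutenberg-Richter law itself):

```python
import math

def b_value_mle(mags, m_c, dm=0.1):
    """Maximum-likelihood b-value (Aki-Utsu estimator) for magnitudes at
    or above the completeness threshold m_c, with the usual binning
    correction dm/2 for magnitudes reported in bins of width dm:
        b = log10(e) / (mean(M) - (m_c - dm/2))."""
    m = [x for x in mags if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))
```

With exponentially distributed magnitudes above m_c the estimator recovers the b of the generating Gutenberg-Richter law; a mean excess magnitude of log10(e)/b maps back to that b.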

2. Data and analysis

The study is focused on the Loyalty Islands seismic sequence, S1, which began on 27 December 2003 (mainshock latitude and longitude -21.94N, 169.69E; Caccamo et al. 2005a); on the New Guinea seismic sequence, S2, which began on 29 April 1996 (mainshock latitude and longitude -6.52N, 155E; Caccamo et al. 2005b); and on the Colfiorito seismic sequence, S3, which began on 26 September 1997 (mainshock latitude and longitude 43.08N, 12.81E; Parrillo et al. 2005). Data come from the NEIC-USGS data bank (http://neic.usgs.gov/neis/epic/) and from INGV catalogues for the Italian data (www.ingv.it). We calculated “a priori” the dimensions of the involved area using the Utsu (1969) empirical relation (relation 3). We acquired data in a square with sides at a distance 3L from the mainshock epicenter and in the period of the first 10 days starting from the mainshock. Using these data, we calculated the “barycentre” of the aftershock sequence (D’Amico et al., 2004). The evaluation of the completeness threshold of a dataset is made using the Gutenberg-Richter diagram. For the temporal duration D of the seismic sequence, D = n1 + n2, where n1 is the number of days equal to the number of aftershocks in the first 24 hours starting from the occurrence of the mainshock, and n2 is the number of days, after n1, needed to reach and include 10 days in which there are no earthquakes. This is an “a priori” hypothesis to cut the sequence in an appropriate way. In mathematical terms, the observed temporal series of the shocks per day can be considered as the sum of a deterministic contribution and a stochastic contribution. These contributions are, respectively, the aftershock decay with a power law, i.e. the modified Omori formula (Utsu, 1995), and the random fluctuations around the mean value represented by the above-mentioned power law. The decay phenomenon can be modeled as a non-stationary Poissonian process whose intensity function is given by the Omori law (Matsu’ura, 1986); the number of aftershocks in a time interval ∆t is, as a mean value, n(t)·∆t, with a standard deviation σ = √(n(t)·∆t). We may expect the random fluctuations around the mean value n(t) to stay within a 2.5σ range (which represents approximately the 99% level). We find the anomalies by estimating the ratio ∆/σ, where ∆ = |n_real − n_theor| is the absolute value of the difference between the observed and the theoretical temporal trend and σ is the standard deviation of the theoretical values. We consider the condition ∆/σ > 2.5 as an “anomaly”. We also considered a term K1 to take into account the background seismicity. This parameter should be typical of the seismicity of an area. We assumed in this work K1 = 1 (Caccamo et al., 2005).
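The anomaly criterion described above (∆/σ > 2.5 with σ = √(n(t)∆t)) can be sketched as follows, assuming daily counts (∆t = 1 day) and already-fitted Omori parameters:

```python
import numpy as np

def decay_anomalies(observed_counts, c, k, p, threshold=2.5):
    """Flag days whose observed aftershock count deviates from the Omori
    prediction n(t) = c / (k + t)**p by more than `threshold` standard
    deviations, with sigma = sqrt(n(t) * dt) for a non-stationary
    Poisson process and dt = 1 day. Returns the anomalous day numbers
    (1-based), i.e. the days where Delta/sigma > threshold."""
    obs = np.asarray(observed_counts, dtype=float)
    days = np.arange(1, len(obs) + 1, dtype=float)
    n_theor = c / (k + days) ** p        # theoretical counts per day
    sigma = np.sqrt(n_theor)             # Poisson standard deviation
    ratio = np.abs(obs - n_theor) / sigma
    return [int(d) for d, r in zip(days, ratio) if r > threshold]
```

A day whose count exceeds the Omori prediction by three standard deviations is flagged, while days matching the prediction are not.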

3. Discussion

Using these data, the temporal trend n(t), the number of shocks per day for S1, S2 and S3, can be calculated as shown in figures 1a, 1d and 1g. The three plots of figure 1, for S1, S2 and S3 respectively, have the same temporal scale. In figures 1b, 1e and 1h the shocks with magnitude M>5.5 are shown. In figures 1c, 1f and 1i the ∆/σ values versus time t in days are shown, excluding the initial 10 days. Regarding S1, there is one ∆/σ value exceeding the threshold (2.5, as described above): ∆/σ=3.68 on the 12th day, with a shock of magnitude M=5.8 on the 17th day (plots 1b and 1c). Regarding S2, there are 3 ∆/σ values exceeding the threshold: ∆/σ=2.91 on the 10th day, ∆/σ=2.56 on the 11th day and ∆/σ=3.70 on the 13th day, with shocks of magnitude M=6.3 and M=5.6 on the 12th and 13th day respectively (plots 1e and 1f). Regarding S3, there are 5 ∆/σ values exceeding the threshold: ∆/σ=2.98 on the 11th day, ∆/σ=7.98 on the 17th day, ∆/σ=3.99 on the 18th day, ∆/σ=4.99 on the 19th day and ∆/σ=3.99 on the 20th day, with shocks of magnitude M=5.8 on the 11th day and M=5.7 on the 19th day (plots 1h and 1i).

Figure 1 (a) Temporal trend n(t) for the sequence S1; (b) temporal trend of the shocks with M>5.5 for S1; (c) ∆/σ values versus time for S1. (d) Temporal trend n(t) for the sequence S2; (e) temporal trend of the shocks with M>5.5 for S2; (f) ∆/σ values versus time for S2. (g) Temporal trend n(t) for the sequence S3; (h) temporal trend of the shocks with M>5.5 for S3; (i) ∆/σ values versus time for S3. For every sequence the three plots have the same temporal scale.

4. Conclusion It is observed that, several days before the occurrence of large aftershocks, it is possible to point out anomalies in the decay, defined as deviations from the mean value n(t) greater than 2.5σ, which likely are not random fluctuations but probably precursors. The data concerning the temporal series, checked according to completeness criteria, come from the NEIC-USGS data bank and from INGV catalogues for the Italian data. From the scientific point of view, this kind of analysis opens new horizons for research on seismic anomalies and precursors.

5. Acknowledgement The authors wish to thank Aybige Akinci from the Istituto Nazionale di Geofisica e Vulcanologia of Rome (Italy). Moreover, the authors would like to thank Prof. Yosi Ogata and the Organization of the 4th International Workshop on Statistical Seismology.

6. References
Caccamo, D., Barbieri, F. M., Laganà, C., D’Amico, S., and Parrillo, F. (2005a, in press), A study about the aftershocks sequence of 27 December 2003 in Loyalty Islands, Bollettino di Geofisica Teorica ed Applicata.
Caccamo, D., Barbieri, F. M., Laganà, C., D’Amico, S., and Parrillo, F. (2005b, in press), The temporal series of the New Guinea 29 April 1996 aftershock sequence, Physics of the Earth and Planetary Interiors.
D’Amico, S., Caccamo, D., Barbieri, F. M., and Laganà, C. (2004), Some methodological aspects related to the prediction of large aftershocks, ESC, book of abstracts, pp. 132-133.
Gutenberg, B., and Richter, C. F. (1954), Seismicity of the Earth, Princeton University Press, Princeton, N. J., 310 pp.
Matsu’ura, R. S. (1986), Precursory quiescence and recovery of aftershock activities before some large aftershocks, Bulletin of the Earthquake Research Institute, University of Tokyo, 61, 1-65.
Parrillo, F., Caccamo, D., D’Amico, S., Barbieri, F. M., and Laganà, C. (2005), Analisi della sequenza sismica di Colfiorito (Umbria) del 26 Settembre 1997 con l’applicazione del metodo delta/sigma, Convegno GNGTS.
Utsu, T., Ogata, Y., and Matsu’ura, R. S. (1995), The centenary of the Omori formula for a decay law of aftershock activity, J. Phys. Earth, 43, 1-33.
Utsu, T. (1969), Aftershocks and earthquake statistics (I): source parameters which characterize an aftershock sequence and their interrelations, J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 129-195.

Some analysis of seismicity in the Sumatra area

S. D’Amico1, D. Caccamo2, A. Zirilli3, F. Parrillo2, S.M.I. D’Amico3, A.Alibrandi3

1Istituto Nazionale di Geofisica e Vulcanologia,via Di Vigna Murata, 60, 00143, Roma, Italy 2 Department of Earth Sciences, University of Messina, Salita Sperone 31, 98166 Messina, Italy 3 Faculty of Statistic Sciences, University of Messina,Via Centonze, 98100 Messina, Italy

Corresponding Author: S. D’Amico, [email protected]; [email protected]

Abstract: Investigating both the aftershock behaviour and a wide spectrum of parameters may provide the key to better explain the mechanism of seismicity as a whole. In particular, the purpose of this work is to apply several different statistical approaches to the seismicity of the Sumatra area (Indonesia). We focused our attention on the seismicity from 1973 to 2004 and on different seismic sequences that occurred in this area. We estimated the variation of the p-value, the b-value, the energy involved in the decay process, and the fractal dimensions D0 and D2 to highlight the clustering of events in time.

1. Introduction Earthquakes are natural phenomena that occur in particular zones of the Earth’s surface and are related to complex processes. One of the classic problems in global seismology has been the mapping of the earthquake distribution. This mapping has played a key role in the evolution of the theory of plate tectonics, which describes the large-scale relative motion of a mosaic of lithospheric plates on the Earth’s surface. Usually the energy during an earthquake is released by the mainshock, which may be preceded by foreshocks and followed by many aftershocks. Statistical studies of seismicity and earthquake occurrence can give useful interpretations to better explain the mechanism of seismicity.

2. Data and analysis In particular, the purpose of this work is to apply several different statistical approaches to the seismicity of the Sumatra zone. We also analysed three different seismic sequences that occurred in this area on 13 February 2001, 2 November 2002 and 26 December 2004, with mainshock magnitudes of M=7.4, M=7.6 and M=9 respectively. In this case we also evaluated the b-value, the p-value, the cumulative energy, and the fractal dimensions D0 and D2 to highlight the clustering of events in time; everything is related to some anomalies in the temporal decay. We consider an anomaly to occur when, during the aftershock decay, the absolute value of the difference between the observed and the theoretical temporal trend is greater than 2.5σ, σ being the standard deviation (Caccamo et al., 2005a,b; D’Amico et al., 2005; Parrillo et al., 2005). The data used in this study come from the NEIC-USGS data bank (http://neic.usgs.gov/neis/epic/). A large earthquake causes a lot of damage, and it will likely be followed by several aftershocks of considerable size in a short time span. Earthquakes located within a characteristic distance from the main event can be considered aftershocks. Regarding the seismicity of the area and the behaviour of aftershocks, we are also trying to use other statistical methods, such as the non-parametric procedure “NPC Ranking” and the Cox-Stuart approach. The first is a statistical method that allows drawing up classifications for every observation. Its peculiarity is that it allows passing easily from a partial classification to a general one. This feature has an important statistical interest because it summarizes the effects of the single observations. In order to estimate the temporal aspects of the seismicity we used a non-parametric test, because it permits establishing the trend of a phenomenon in a given period.

The Cox-Stuart test permits verifying whether the variable considered shows a monotonic increasing or decreasing trend, either in its central tendency or in its variability.
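A minimal sketch of the Cox-Stuart sign test as commonly defined; the abstract names the test but gives no details, so the implementation below is our reconstruction:

```python
import math

def cox_stuart_trend(series):
    """Cox-Stuart sign test for a monotonic trend. Pairs each value in
    the first half with the corresponding value in the second half
    (dropping the middle element for odd lengths), counts positive and
    negative differences, and returns the two-sided p-value of the
    larger sign count under Binomial(m, 0.5)."""
    n = len(series) // 2
    offset = n + len(series) % 2
    diffs = [series[i + offset] - series[i] for i in range(n)]
    pos = sum(1 for d in diffs if d > 0)
    neg = sum(1 for d in diffs if d < 0)
    m = pos + neg  # ties are discarded
    k = max(pos, neg)
    tail = sum(math.comb(m, j) for j in range(k, m + 1)) / 2 ** m
    return min(1.0, 2.0 * tail)
```

A strictly increasing series yields a small p-value (evidence of a trend), while an alternating series with no drift yields a p-value of 1.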

3. Discussion Part of the analysis focused on the different seismic sequences that occurred in this area on 13 February 2001 (Latmain=-4.68N; Lonmain=102.56E), 2 November 2002 (Latmain=2.82N; Lonmain=96.08E) and 26 December 2004 (Latmain=3.3N; Lonmain=95.98E), with mainshock magnitudes of M=7.4, M=7.6 and M=9 respectively. To define a sequence in space we calculated “a priori” the dimension of the involved area using the Utsu (1969) empirical relation, and we also calculated the “barycentre” of the aftershock sequence (D’Amico et al., 2004). The completeness threshold was evaluated using the Gutenberg-Richter relation (1954). Typically, the frequency of occurrence of aftershocks decays rapidly. For the temporal duration d of a seismic sequence we empirically considered d = n1 + n2, where n1 is the number of days equal to the number of aftershocks in the first 24 hours starting from the occurrence of the mainshock, and n2 is the number of days, after n1, needed to reach and include 10 days in which there are no earthquakes. The time clustering of seismic activity is investigated by fractal dimensions. Here we used the fractal method based on the estimation of the box-counting dimension and of the correlation dimension for the seismic sequences located in the Sumatra area. The analyses are made before large aftershocks with M>5.5 occurring in the time sequence. In the analysed sequences we can notice a clustering in time of the box-counting dimension and of the correlation dimension, which take values very close to zero. Relating to the NPC Ranking, in order to classify the earthquakes registered in the investigated area, we divided the magnitude M into four intervals (5
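The box-counting dimension D0 mentioned above can be estimated from the event times as follows; this is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def box_counting_dimension(times, n_scales=8):
    """Estimate the box-counting (capacity) dimension D0 of a set of
    event times: normalize the times to [0, 1], count occupied boxes at
    dyadic scales eps = 2**-s, and fit log N(eps) against log(1/eps).
    A strongly clustered sequence gives D0 well below 1; a uniform one
    gives D0 close to 1."""
    t = np.asarray(times, dtype=float)
    t = (t - t.min()) / (t.max() - t.min())
    log_inv_eps, log_n = [], []
    for s in range(1, n_scales + 1):
        n_boxes = 2 ** s
        bins = np.minimum((t * n_boxes).astype(int), n_boxes - 1)
        occupied = len(np.unique(bins))  # N(eps): boxes containing events
        log_inv_eps.append(np.log(n_boxes))
        log_n.append(np.log(occupied))
    slope, _ = np.polyfit(log_inv_eps, log_n, 1)  # D0 is the slope
    return slope
```

Evenly spaced times fill every box at every scale and return a dimension near 1, while times collapsed onto two instants occupy two boxes at every scale and return a dimension near 0, which matches the "values very close to zero" reported for the clustered sequences.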

4. Conclusion Concerning the aftershock sequences, we notice that there are some anomalies in the temporal decay before the occurrence of large aftershocks. These anomalies are related to the variations, over a short period, of the parameters mentioned above.

5. Acknowledgement The authors wish to thank Aybige Akinci from the Istituto Nazionale di Geofisica e Vulcanologia of Rome (Italy). Moreover, the authors would like to thank Prof. Yosi Ogata and the Organization of the 4th International Workshop on Statistical Seismology.

6. References
Caccamo, D., Barbieri, F. M., Laganà, C., D’Amico, S., and Parrillo, F. (2005a, in press), A study about the aftershocks sequence of 27 December 2003 in Loyalty Islands, Bollettino di Geofisica Teorica ed Applicata.
Caccamo, D., Barbieri, F. M., Laganà, C., D’Amico, S., and Parrillo, F. (2005b, in press), The temporal series of the New Guinea 29 April 1996 aftershock sequence, Physics of the Earth and Planetary Interiors.
D’Amico, S., Caccamo, D., Barbieri, F. M., and Laganà, C. (2004), Some methodological aspects related to the prediction of large aftershocks, ESC, book of abstracts, pp. 132-133.
D’Amico, S., Caccamo, D., Barbieri, F. M., and Laganà, C. (2005), Use of fractal geometry for a study of seismic temporal series, EGU, Geophysical Research Abstracts, Vol. 7.
Gutenberg, B., and Richter, C. F. (1954), Seismicity of the Earth, Princeton University Press, Princeton, N. J., 310 pp.
Parrillo, F., Caccamo, D., D’Amico, S., Barbieri, F. M., and Laganà, C. (2005), Analisi della sequenza sismica di Colfiorito (Umbria) del 26 Settembre 1997 con l’applicazione del metodo delta/sigma, XXIV Convegno GNGTS.
Utsu, T. (1969), Aftershocks and earthquake statistics (I): source parameters which characterize an aftershock sequence and their interrelations, J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 129-195.

Stress- and State-Dependence of Earthquake Occurrence

James Dieterich

University of California, Riverside, Department of Earth Sciences, and Institute of Geophysics and Planetary Physics, Riverside, CA 92521, USA

Abstract: A general physics-based formulation has been developed that quantitatively relates earthquake rates and stressing history (Dieterich, 1994; Dieterich and Kilgore, 1996; Dieterich et al., 2003). Because earthquake nucleation processes determine the time and location of earthquakes, this model treats seismicity as a sequence of nucleation events. Using solutions for the nucleation of unstable fault slip on faults with rate- and state-dependent constitutive properties, a general constitutive law for the rate of earthquake production as a function of stressing history is derived. Forward modeling applications of this formulation include evaluations of the effect of stress changes on earthquake probabilities, earthquake triggering, foreshocks, and aftershocks. Used in a reverse mode, observed seismicity rate changes may be inverted to calculate the changes of stress that drive the earthquake process. Two topics are currently being investigated. The first involves seismicity and aftershocks associated with stress changes that arise from geometric incompatibilities for slip on non-planar faults. The second topic is the use of observed seismicity rate changes to solve for changes of stress in space and time. Application of this method at Kilauea Volcano, Hawaii, yields results that agree with forward calculations based on deformation observations. The solutions for stress changes at the time of moderate and large earthquakes at Kilauea consistently show stress shadows, but comparable effects are less evident for earthquakes in California.

References
Dieterich, J. H. (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99, 2601-2618.
Dieterich, J. H., and Kilgore, B. D. (1996), Implications of fault constitutive properties for earthquake prediction, Proc. Natl. Acad. Sci., 93, 3787-3794.
Dieterich, J. H., Cayol, V., and Okubo, P. (2003), Stress changes before and during the Puu Oo-Kupaianaha eruption, in: The Puu Oo-Kupaianaha Eruption of Kilauea Volcano, Hawaii: The First 20 Years, U.S. Geological Survey Professional Paper 1676, 187-202.

Tutorial on Stress- and State-Dependent Model of Earthquake Occurrence: Part 1 – Model and Formulation

James Dieterich

University of California, Riverside, Department of Earth Sciences, and Institute of Geophysics and Planetary Physics, Riverside, CA 92521, USA

Abstract: This tutorial will review the background and derivation of the stress- and state- dependent formulation for earthquake rates. This includes a review of rate- and state- dependent friction, and earthquake nucleation, that are the basis for the model. The derivation of the model will be examined, including discussion of approximations and limitations of this modeling approach. Finally, general characteristics of the formulation and modeling strategies with examples will be presented.

Tutorial on Stress- and State-Dependent Model of Earthquake Occurrence: Part 2 – Applications

James Dieterich

University of California, Riverside, Department of Earth Sciences, and Institute of Geophysics and Planetary Physics, Riverside, CA 92521, USA

Abstract: This tutorial will work through step-by-step examples of calculations with the stress- and state-dependent formulation for earthquake rates. These will include forward models of aftershocks, and earthquake triggering by tides or stress waves. Alternative strategies and parameter estimation for reverse calculations, which employ use of observed seismicity to infer changes of Coulomb stress will be covered as well.

Seismology in the Source: The San Andreas Fault Observatory at Depth

W. L. Ellsworth1, S. H. Hickman1, and M.D. Zoback2

1U.S. Geological Survey, 345 Middlefield Road, MS-977, Menlo Park, CA 94025, USA 2Department of Geophysics, Stanford University, Stanford, CA 94305, USA

Corresponding Author: W. L. Ellsworth, [email protected]

Abstract: The main drill hole of the San Andreas Fault Observatory at Depth successfully penetrated the fault in the summer of 2005, setting the stage for the deployment of a multi-component earthquake observatory within the core of this major plate boundary fault. Results obtained from instruments at the edge of the near field region provide a tantalizing glimpse of earthquake nucleation and propagation processes.

1. Introduction The San Andreas Fault Observatory at Depth (SAFOD) is a comprehensive project to establish a multi-stage geophysical observatory in the hypocentral zone of repeating M~2 earthquakes within the San Andreas Fault as part of the U.S. National Science Foundation’s EarthScope Program. The goals of SAFOD are to exhume rock and fluid samples for extensive laboratory studies, to conduct a comprehensive suite of downhole measurements in order to study the physical and chemical conditions under which earthquakes occur, and to monitor changes in deformation, fluid pressure and seismicity directly within the fault zone during multiple earthquake recurrence cycles (Hickman et al., 2004).

The main drilling target is a pair of M~2 earthquakes that have recurred multiple times over the past two decades (Figure 1). These target earthquakes are located at a depth of 3 km on the San Andreas Fault immediately to the northwest of the Parkfield segment (Waldhauser et al., 2004). Prior to the September 28, 2004 M 6.0 Parkfield earthquake, the target events occurred every 2-3 years, with the “S.F.” event typically preceding the “L.A.” event by minutes to a few days. Clearly, these two sources interact. Following the 2004 Parkfield earthquake, the recurrence rate dramatically increased, with these repeating earthquake sources contributing events to the Parkfield aftershock sequence. The interval between the events has been steadily lengthening since then, following Omori-like behavior. The close coupling between “S.F.” and “L.A.” has also been broken. Consequently, there is much to learn about earthquake interaction at SAFOD, in addition to earthquake nucleation and propagation.

2. Building the Observatory Extending 4 km into the Earth along a diagonal path that crosses the divide between Salinian basement accreted to the Pacific Plate and Cretaceous sediments of North America, the main hole at the San Andreas Fault Observatory at Depth is designed to provide a portal into the inner workings of a major plate boundary fault. The successful drilling and casing of the main hole in the summer of 2005 to a total vertical depth of 3.1 km make it possible to conduct spatially extensive and long-duration observations of active tectonic processes within the actively deforming core of the San Andreas Fault. The observatory consists of retrievable seismic, deformation and fluid pressure sensors deployed inside the casing in both the main hole (maximum temperature 135 °C) and the collocated pilot hole (1.1 km depth), and a fiber optic strainmeter installed behind casing in the main hole. By using retrievable systems deployed on either wire line or rigid tubing, each hole can be used for a wide range of scientific purposes, with instrumentation that takes maximum advantage of advances in sensor technology.

To meet the scientific and technical challenges of building the observatory, borehole instrumentation systems developed for use in the petroleum industry and by the academic community in other deep research boreholes have been deployed in the SAFOD pilot hole and main hole over the past year. These systems included 15 Hz omni-directional and 4.5 Hz gimbaled seismometers, micro-electro-mechanical accelerometers, tiltmeters, sigma-delta digitizers, and a fiber optic interferometric strainmeter. A 1200-m-long, 3-component, 80-level clamped seismic array was also operated in the main hole for 2 weeks of recording in May of 2005 by Paulsson Geophysical Services, collecting continuous seismic data at a rate of 4000 samples/sec.

Figure 1 Cross section in the plane of the San Andreas Fault showing repeating earthquakes of up to M2 in the drilling target, 1984-2005. The main trace of the San Andreas is associated with a northwestern event in red (San Francisco) and a southeastern event in blue (Los Angeles); the green events are associated with a subsidiary trace of the San Andreas located 230 m closer to the SAFOD wellhead. Earthquakes are shown as circles centered on their hypocentroids with a diameter scaled by their moment for a 9 MPa stress drop (Imanishi et al., 2004). The S.F. and L.A. repeating earthquake sources are M 1.8-2.2. Hypocentroid locations courtesy of F. Waldhauser (after Waldhauser et al., 2004).

3. Discussion Some of the observational highlights include capturing one of the M 2 SAFOD target repeating earthquakes (Figure 1) in the near field at a distance of 420 m, with accelerations of up to 200 cm/s² and a static displacement of a few microns. Numerous other local events were observed over the summer by the tilt and seismic instruments in the pilot hole, some of which produced strain offsets of several nanostrain on the fiber optic strainmeter. We were fortunate to observe several episodes of non-volcanic tremor on the 80-level seismic array in May 2005. These spatially unaliased recordings of the tremor wavefield reveal that the complex tremor time series is composed of up- and down-going shear waves that produce a spatially stationary interference pattern over time scales of 10s of seconds.

4. Conclusion The SAFOD observatory is now in operation. All data collected at SAFOD as part of the EarthScope project are open and freely available to all interested researchers. The Northern California Earthquake Data Center at U.C. Berkeley is the principal data repository for these SAFOD data. The more than 3 TB of 80-level array data are also available at the IRIS DMC as an assembled data collection.

5. Acknowledgement SAFOD is a component of the U.S. National Science Foundation’s EarthScope Project. SAFOD has also received major financial support from the International Continental Drilling Program and the U.S. Geological Survey. We thank the scores of researchers from around the world who have been instrumental to the success of the SAFOD program.

6. References
Hickman, S.H., Zoback, M.D., and Ellsworth, W.L. (2004), Introduction to special section: Preparing for the San Andreas Fault Observatory at Depth, Geophys. Res. Lett., 31, L12S01, doi:10.1029/2004GRL020688.
Imanishi, K., Ellsworth, W.L., and Prejean, S.G. (2004), Earthquake source parameters determined by the SAFOD Pilot Hole seismic array, Geophys. Res. Lett., 31, L12S09, doi:10.1029/2004GRL029420.
Waldhauser, F., Ellsworth, W.L., Schaff, D.P., and Cole, A.T. (2004), Streaks, multiplets and holes: High-resolution spatio-temporal behavior of Parkfield seismicity, Geophys. Res. Lett., 31, L18608, doi:10.1029/2004GRL020649.

Quantifying the early aftershock activity of the 2004 Mid Niigata Prefecture Earthquake (Mw6.6)

Bogdan Enescu1, Jim Mori1, Masatoshi Miyazawa1, Takuo Shibutani1, Kiyoshi Ito1, Yoshihisa Iio1, Takeshi Matsushima2 and Kenji Uehira2

1Disaster Prevention Research Institute, Kyoto University, Kyoto, Japan 2Inst. of Seismology and Volcanology, Faculty of Sciences, Kyushu University, Shimabara, Japan

Corresponding Author: Bogdan Enescu, e-mail: [email protected]

Abstract: The goal of this work is to determine an accurate image of the aftershock activity immediately following the 2004 Mid Niigata (Chuetsu) earthquake. For this purpose, we examined the JMA earthquake catalog and also the continuous seismograms recorded at six Hi-Net seismic stations located close to the aftershock distribution. By high-pass filtering the seismograms we were able to count aftershocks starting at about 35 sec. after the mainshock. The results show that the c-value, which expresses the saturation of the aftershock rate close to the mainshock, is very small (less than about 3 min.), but has a non-zero value of about 100 sec. that relates to the physics of the aftershock generation process.

1. Introduction The occurrence rate of aftershocks is empirically well described by the Modified Omori formula (Utsu, 1961): n(t) = k/(t + c)^p, where n(t) is the frequency of aftershocks per unit time, at time t after the mainshock, and k, c and p are constants. The parameter c, which relates to the rate of aftershock activity in the earliest part of an aftershock sequence, typically ranges from 0.5 to 20 hours in empirical studies (Utsu et al., 1995). Since the behaviour of aftershock sequences during the first minutes and hours after the mainshock is a significant component of theoretical models of seismicity (e.g., Dieterich, 1994; Rubin, 2002), it is essential to determine whether the coefficient c is a physical parameter (Vidale et al., 2003; Shcherbakov et al., 2004) or if it simply reflects limitations of instruments and analysis techniques (Kagan, 2004). We focus in this study on the decay of the early aftershock activity following the 2004 Niigata earthquake (M6.8) and try to estimate the c parameter by using a) JMA (Japanese Meteorological Agency) earthquake catalog data and b) continuous Hi-Net waveform data.
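As a quick numerical illustration, the Modified Omori formula can be evaluated directly; the parameter values below are purely illustrative, since in practice k, c and p must be fit to a real aftershock sequence:

```python
# Sketch: evaluating the Modified Omori formula n(t) = k/(t + c)^p.
# The parameter values are illustrative only, not fits to the Niigata data.

def omori_rate(t, k=100.0, c=0.01, p=1.1):
    """Aftershock frequency per unit time at t days after the mainshock."""
    return k / (t + c) ** p

# The c parameter caps the rate as t -> 0; for t >> c the decay is ~ t^-p.
print(omori_rate(0.001), omori_rate(1.0), omori_rate(10.0))
```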

2. Analysis of the JMA catalog data 2.1 Analysis of the decay of aftershock numbers We first analyse the decay of the aftershocks of the 2004 Niigata earthquake by using the JMA catalog data. By constructing the frequency-magnitude distribution of earthquakes at 0.01, 0.05, 0.2 and 0.5 days after the mainshock, for a 100-event window, we could easily estimate the magnitude of completeness (Mc) of the catalog at these times. By fitting these data, we could determine a semi-logarithmic relationship between the level of catalog completeness, expressed by Mc, and the time elapsed since the mainshock, t. The decay of aftershocks, for different threshold magnitudes, is fitted using the Modified Omori formula and the parameters p, c and k are determined in each case using a maximum likelihood algorithm (Ogata, 1983). For example, the c-values obtained by using threshold magnitudes of 3.0 and 4.0 are 0.027 and 0.012, respectively. These values, however, are comparable with the times at which the catalogue becomes complete for events above the corresponding threshold magnitude. This observation demonstrates that it is practically impossible to estimate a “real” c-value from earthquake catalog data.
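The maximum likelihood step can be sketched on a synthetic sequence: for a point process with Omori rate n(t), the log-likelihood is the sum of log n(t_i) over event times minus the integral of n(t) over the observation window (Ogata, 1983). The simulated "true" parameter values, the log-parameterization, and the use of SciPy's Nelder-Mead optimizer are assumptions of this sketch, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def lam_int(t, k, c, p):
    # Integral of k/(s + c)^p from 0 to t (valid for p != 1)
    return k * (c ** (1 - p) - (t + c) ** (1 - p)) / (p - 1)

# --- simulate a synthetic Omori sequence on [0, T] (assumed "true" values) ---
k0, c0, p0, T = 200.0, 0.02, 1.1, 30.0
n_events = rng.poisson(lam_int(T, k0, c0, p0))
u = np.sort(rng.uniform(0.0, lam_int(T, k0, c0, p0), n_events))
times = (c0 ** (1 - p0) - u * (p0 - 1) / k0) ** (1.0 / (1 - p0)) - c0

# --- maximum likelihood fit: logL = sum_i log n(t_i) - integral of n(t) ---
def neg_loglik(theta):
    k, c, p = np.exp(theta)  # log-parameters keep k, c, p positive
    return -(np.sum(np.log(k) - p * np.log(times + c)) - lam_int(T, k, c, p))

fit = minimize(neg_loglik, x0=np.log([100.0, 0.01, 1.2]), method="Nelder-Mead")
k_hat, c_hat, p_hat = np.exp(fit.x)
print(k_hat, c_hat, p_hat)  # estimates should land near k0, c0, p0
```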

2.2 Analysis of the decay of the moment release rate of aftershocks We also determine the decay of the seismic moment release rate of aftershocks, using the approach suggested by Kagan and Houston (2004). An advantage of using the moment summation of aftershocks instead of the usual counting of aftershock numbers is that the missing small events at the beginning of the catalog are weighted much less because their seismic moment is relatively small. Assuming that the aftershock size distribution follows the Gutenberg-Richter relation, we could also calculate the moment rate which is due to the missing small aftershocks and compensate for an incomplete catalogue record (Kagan and Houston, 2004). As one can notice in Fig. 1, there is little difference between the moment release rate calculated using the catalogue data and the “corrected” rates, which account for the under-reported small aftershocks. The only notable difference is at the very beginning of the aftershock sequence. The decay of the corrected moment release rate is well fitted by a power law with a slope of 1.2 in the log-log plot of Fig. 1. The graph suggests that there is no significant saturation of the moment release rate of aftershocks close to the mainshock. As the first aftershock recorded in the JMA catalogue occurred about 3 minutes after the mainshock, it is impossible to get information on the aftershock moments at earlier times. We can only speculate that the c-value is smaller than about 3 minutes (~0.002 days). We also show in the same figure the source time function of the mainshock, determined by Miyazawa et al. (2005). The extrapolation of the fit of the moment decay of aftershocks to the end of the mainshock rupture seems to suggest a smooth transition from the mainshock rupture to the aftershock process.
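The moment-summation idea can be sketched as follows; the conversion M0 = 10^(1.5 Mw + 9.1) N m is the standard moment-magnitude relation, while the log-spaced time binning is an assumption of this illustration rather than the exact procedure of Kagan and Houston (2004):

```python
import numpy as np

def moment_Nm(mw):
    # Standard moment-magnitude relation: M0 [N m] = 10^(1.5*Mw + 9.1)
    return 10.0 ** (1.5 * np.asarray(mw) + 9.1)

def moment_release_rate(times_days, mags, n_bins=20):
    """Summed aftershock moment per day in log-spaced time bins."""
    edges = np.logspace(np.log10(times_days.min()),
                        np.log10(times_days.max()), n_bins + 1)
    total, _ = np.histogram(times_days, bins=edges, weights=moment_Nm(mags))
    return edges, total / np.diff(edges)
```

Because moment grows by a factor of ~32 per magnitude unit, the missing small events near the start of the catalog barely affect the summed rate, which is the advantage noted in the text.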

Figure 1 Decay of the uncorrected and corrected moment release rates of aftershocks and a power-law fit to the data (slope of 1.2 in the log-log graph). The source time function of the mainshock is also shown.

3. Analysis of the Hi-Net continuous waveform data. As a next step in our analysis, we try to identify as many events as possible on the continuous seismograms recorded at six Hi-Net stations located close to the aftershock distribution. The waveforms were high-pass filtered at 7 Hz to attenuate the low-frequency coda of the mainshock. Clear early aftershocks can be identified on the filtered waveforms, starting at about 35 sec. after the mainshock (Fig. 2). Most of these early events are not recorded in the JMA catalog. We then select two representative stations (YNTH and NGOH), situated on opposite sides of and close to the aftershock distribution, and count the events with amplitudes above some threshold value which can be clearly seen on the continuously recorded waveforms. This threshold value corresponds to about a magnitude 3.3 earthquake. The identified events satisfy well the Modified Omori law, with a p-value close to 1.0. The c-value is about 100 sec (~0.001 days), i.e. the number of events is smaller than predicted by a simple power-law distribution at very short times after the mainshock.
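This detection step can be roughly sketched in Python; the 7 Hz corner follows the text, but the filter order, envelope smoothing, and default threshold rule are assumptions of this sketch, not the authors' procedure:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.ndimage import uniform_filter1d

def count_early_aftershocks(trace, fs, corner_hz=7.0, threshold=None):
    """High-pass filter a continuous trace and count bursts above a threshold.

    The high-pass attenuates the mainshock's low-frequency coda; bursts are
    counted as contiguous excursions of a smoothed envelope above `threshold`.
    """
    sos = butter(4, corner_hz, btype="highpass", fs=fs, output="sos")
    filt = sosfiltfilt(sos, trace)
    env = uniform_filter1d(np.abs(filt), size=max(1, int(0.5 * fs)))
    if threshold is None:  # assumed default: a multiple of the median amplitude
        threshold = 10.0 * np.median(np.abs(filt))
    above = env > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1])  # rising edges = events
    return int(above[0]) + len(onsets), filt
```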

Figure 2 Clear early aftershocks (indicated by arrows) can be identified on these high-pass filtered waveforms, starting at about 35 sec. after the mainshock. The names of the recording Hi-Net stations are also shown.

4. Conclusion By using three different approaches we tried to estimate the c-value of the Modified Omori law for the aftershocks of the 2004 Niigata earthquake. We showed that the value of the parameter estimated by the usual aftershock counting mainly reflects the under-reporting of small events at the beginning of the aftershock sequence. An alternative approach, using the decay of the moment release rate of aftershocks, indicates that the c-value is smaller than about 3 minutes (~0.002 days). An additional argument for a very small c-value is suggested by the smooth transition between the source-time function of the mainshock and the moment release rate of aftershocks. By analysing the high-pass filtered waveform data recorded at several Hi-Net stations, we confirmed that the c-value is very small, but has a non-zero value of about 100 sec (~0.001 days). This observation relates to the physics of the aftershock generation process.

5. Acknowledgements We would like to thank NIED and JMA for allowing us to use their earthquake data.

The Snowball Effect: Statistical Evidence that Big Earthquakes are Rapid Cascades of Small Aftershocks

K. R. Felzer1

1United States Geological Survey, 525 S. Wilson, Pasadena, CA 91106, USA

Corresponding Author: K. R. Felzer, [email protected]

Abstract: Subevents can be clearly seen in event seismograms, and subevent models are necessary to fit strong motion data. To what extent might earthquake subevents be made up of smaller subevents? And smaller subevents still? Vere-Jones (1976) and Kagan and Knopoff (1981) demonstrated that an ultimate subevent model, in which the smallest, tiny component subevents are of equal size, could recreate the Gutenberg-Richter earthquake magnitude-frequency distribution. In the Kagan and Knopoff (1981) model, mainshocks grow when tiny subevents rapidly trigger each other via a process identical to aftershock triggering. This model successfully reproduces realistic seismograms and realistic foreshock and aftershock sequences over longer time scales. Here I propose a model similar to Kagan and Knopoff (1981) except that equal faulting area, rather than the original equal seismic moment, characterizes the smallest subevents. This change is supported by physical and statistical considerations. My proposed model predicts that the number of aftershocks produced by an earthquake should scale with its faulting area and that no property of an aftershock sequence except for the total number of aftershocks should be influenced by mainshock size. These predictions are well supported by observations.

1. Introduction There are significant differences in the rupture of homogeneous and heterogeneous materials. In a homogeneous glass or metal a small crack may often grow quickly and catastrophically. In a heterogeneous material, however, such as rock, strength and elastic moduli may vary, and a crack that starts in one place may have difficulty in spreading through neighboring stronger material. The amount of growth difficulty will depend on the applied load. For a small load cracks will never propagate beyond the areas of lowest strength; for a high load unbounded growth will always occur. At the critical load growth is possible but jerky as the crack feels the small-scale heterogeneity. Dynamic elastic waves are emitted by this jerky motion, which may initiate slip in new areas. Scale-invariant (power-law) avalanche-type slip occurs, and the crack tip is self-affine (Ramanathan and Fisher, 1998). Earthquakes clearly rupture through heterogeneous materials, and it is likely that they occur at critical stress loads. Tectonic stresses accumulate slowly, and slow stress accumulation means that stresses can stabilize fairly precisely at the level just large enough to make slip possible. The observed power-law distribution of earthquake sizes provides further evidence for critical rupture dynamics. Physically, the avalanche-like behavior of critical point rupture is very difficult to model. Statistical stochastic models, however, may recreate the process quite easily and accurately. Kagan and Knopoff (1981) proposed a compelling stochastic age-dependent continuous-state branching process model for earthquake propagation. In the Kagan and Knopoff (1981) model small seismic subevents of equal size (hereafter referred to as “unit earthquakes”) trigger each other rapidly, via the aftershock triggering process, to create larger earthquakes.
The idea that aftershock triggering might occur over such a short time scale is supported by the fact that a minimum triggering time, while it must exist, has not yet been found (Peng et al., 2005; Kilb et al., 2004). Furthermore, Kagan and Knopoff (1981) find that allowing the unit earthquakes to trigger each other with time delays that adhere to Omori’s law for aftershock decay creates realistic seismograms. When the activity level of the model is tuned to the critical point, where sustained growth just becomes possible (the point where each unit earthquake triggers, on average, one other), Gutenberg-Richter magnitude-frequency statistics are also reproduced. In the Kagan and Knopoff (1981) model unit earthquakes are defined as having equal seismic moment, but the model does not reproduce a b-value of 1.0 for the Gutenberg-Richter relationship. Alternatively, I propose that the unit earthquakes should be of equal area. Equal-area subevents fit easily into a physical model. If we assume that points of high strength are randomly distributed in a heterogeneous medium, then this produces a characteristic length scale over which we expect initiated slip to grow before it starts to slow down. The “equal area” model does produce a Gutenberg-Richter relationship with b = 1.0. Three testable predictions can be made from the equal-area unit earthquake stochastic model of earthquake growth:

1) The propagation of single earthquakes and the production of aftershocks are one and the same process. Therefore the triggering of aftershocks, like the growth of mainshocks, must be accomplished by dynamic stress changes.

2) All aftershock triggering is accomplished by unit earthquakes, whose characteristics (other than final slip) and triggering ability are always the same, independent of final mainshock magnitude. Thus the characteristics of aftershock production by earthquakes of every magnitude should be the same except for the number of aftershocks, which should vary linearly as a function of faulting area (i.e., linearly with the number of unit earthquakes).

3) In the Kagan and Knopoff (1981) branching process, the activity of each branch is independent of the activity of all other branches. This means that each aftershock can be traced to a unique mainshock, and that subsequent earthquakes will not affect the timing of each aftershock.

In the following sections we will provide brief descriptions of tests of these predictions.
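As a toy numerical illustration of this branching picture (a sketch, not the implementation used in the cited work), a critical cascade of unit earthquakes that trigger each other produces power-law distributed cascade sizes, with cascade size playing the role of faulting area in the equal-area model:

```python
import numpy as np

rng = np.random.default_rng(1)

def cascade_size(branching_ratio=1.0, max_units=100_000):
    """Total number of unit earthquakes in one triggering cascade.

    Each unit earthquake triggers Poisson(branching_ratio) others; at the
    critical value of 1.0 cascade sizes follow a power-law distribution
    (capped here at max_units for safety).
    """
    total, active = 1, 1
    while active and total < max_units:
        offspring = int(rng.poisson(branching_ratio, size=active).sum())
        total += offspring
        active = offspring
    return total

sizes = np.array([cascade_size() for _ in range(5000)])
# With magnitude ~ log10(faulting area) + const, a power-law size
# distribution maps onto a Gutenberg-Richter-like frequency law.
print(np.mean(sizes > 10), sizes.max())
```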

2. Test 1: Are aftershocks triggered by dynamic stress changes? I have performed two tests with E. E. Brodsky (Felzer and Brodsky, 2005; Felzer and Brodsky, submitted) on whether aftershocks are triggered by static or dynamic stress changes. The first test was for stress shadows, or whether earthquake-induced stress changes are capable of slowing other earthquakes down. This is predicted to occur if earthquake timing is affected by static stress changes, but may not occur if dynamic stress changes are most important. We found no evidence for stress shadows in California, supporting the dynamic triggering hypothesis. A similar lack of stress shadowing has also been found by other authors, at least for the first several months following a mainshock (Parsons, 2002; Marsan, 2003; Mallman and Zoback, 2003). The second test performed for static vs. dynamic triggering was to inspect the rate of decay of aftershock density with distance from mainshocks in California. We found a constant, inverse power-law decay rate at distances ranging from a fraction of a fault length (of M 5-6 earthquakes) to over a hundred fault lengths (of M 2-3 mainshocks). The single, continuous decay rate indicates that a single aftershock triggering mechanism is operating over the entire distance range. Only dynamic stress changes are large enough to trigger at distances over several fault lengths. This suggests that dynamic stress changes must be the triggering agent at all distances (Figure 1).
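The distance-decay measurement can be sketched as a log-log fit to binned linear density; the synthetic distances, bin count, and true exponent below are assumptions of this illustration, not the California data:

```python
import numpy as np

rng = np.random.default_rng(2)

def decay_exponent(distances_km, n_bins=15):
    """Fit rho(r) ~ r^-gamma to binned linear aftershock density (counts/km)."""
    edges = np.logspace(np.log10(distances_km.min()),
                        np.log10(distances_km.max()), n_bins + 1)
    counts, _ = np.histogram(distances_km, bins=edges)
    density = counts / np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
    ok = counts > 0
    slope, _ = np.polyfit(np.log10(centers[ok]), np.log10(density[ok]), 1)
    return -slope  # gamma

# Synthetic distances drawn from rho(r) ~ r^-1.35 on [0.1, 100] km
# via inverse-transform sampling
gamma_true = 1.35
u = rng.uniform(size=20_000)
a, b = 0.1 ** (1 - gamma_true), 100.0 ** (1 - gamma_true)
r = (a + u * (b - a)) ** (1.0 / (1 - gamma_true))
print(decay_exponent(r))  # recovered exponent should be close to 1.35
```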

Figure 1 Distance from the mainshock vs. aftershock linear density (aftershocks/km) for combined aftershock data of M 2-3 (A) and M 3-4 (B) California mainshocks. All aftershocks are M>2 and occur in the first 5 minutes after their mainshock. The decay is well fit by r^(-1.30 ± 0.1) for the M 2-3 data and r^(-1.37 ± 0.07) for the M 3-4 data, where r is the distance from the mainshock hypocenter.

3. Test 2: Aftershock scaling and mainshock magnitude There are really two predictions that the model makes with respect to aftershock scaling with mainshock magnitude. The first is that the total number of aftershocks will scale linearly with mainshock faulting area, or as 10^(αM) where α = 1, since 10^M scales as faulting area. This scaling has been found by Yamanaka and Shimazaki (1990), Michael and Jones (1998), Felzer et al. (2004), and Helmstetter et al. (2005), among others. The second prediction is that no other property of aftershocks, besides number, will be affected by mainshock magnitude. Many authors have found that aftershock magnitude is independent of mainshock magnitude (Kagan, 1991; Michael and Jones, 1998; Felzer et al., 2004) (Figure 2). Felzer and Brodsky (submitted) demonstrated that the spatial distribution of aftershocks is independent of mainshock magnitude. Helmstetter et al. (2005), Gerstenberger et al. (2005), and others have found the temporal distribution of aftershocks to be independent of mainshock magnitude.

4. Test 3: Does each aftershock have a single parent? The last prediction of the model is that aftershock triggering is a binary process, such that each aftershock is triggered by one and only one mainshock. This implies that aftershock triggering is effectively an “on” or “off” process; a fault can be either triggered or left alone by another earthquake, and once triggered cannot be turned off again before its aftershock occurs. While this idea is counterintuitive with respect to many physical models (such as models of accelerating failure), it nonetheless finds support in the data. The density of aftershocks swiftly drops with distance from the mainshock fault. Yet Felzer (2005) found that the temporal distribution of aftershocks does not vary with mainshock distance. This indicates that stress change amplitude affects the total number of aftershocks triggered, but not the timing of the aftershocks that do occur, so the stress change must work as a simple “on” or “off” switch (Figure 3). While this finding does not prove that an aftershock cannot be multiply affected by different mainshocks, this “on” or “off” quality of aftershock triggering suggests that single-parent triggering is plausible.
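The Kolmogorov-Smirnov comparison described for Figure 3 can be sketched with SciPy on synthetic Omori-distributed times; the parameter values stand in for catalog data and are assumptions of this sketch:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)

def omori_times(n, c=0.01, p=1.1, t_max=30.0):
    """Draw n aftershock times (days) from a Modified Omori density."""
    u = rng.uniform(size=n)
    a, b = c ** (1 - p), (t_max + c) ** (1 - p)
    return (a + u * (b - a)) ** (1.0 / (1 - p)) - c

near = omori_times(200)  # standing in for a near-fault distance band
far = omori_times(200)   # standing in for a distant band
stat, p_value = ks_2samp(near, far)
# Same underlying decay, so the test should usually NOT reject at 95%
print(stat, p_value)
```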

Figure 2 The average number of aftershocks per mainshock for different relative mainshock-aftershock magnitude differences. The data set is composed of 101,680 M ≥ 2.2 California earthquakes (1975-2001). Mainshocks are used if they do not occur within the first 30 days of the aftershock sequence of a larger earthquake. Aftershocks occur within the first 2 days and 2 fault lengths of their mainshocks. The black line gives the relationship predicted if the aftershock magnitude distribution is simply the Gutenberg-Richter distribution (with b = 1) regardless of mainshock magnitude, and if the total number of aftershocks produced varies linearly with mainshock faulting area.

Figure 3 We inspect the first 30 days of M≥3 aftershocks of the 1992 M 7.3 Landers, California earthquake. Groups of 200 earthquakes are taken from three different distance ranges from the mainshock fault. Stress change amplitudes and resulting seismicity rate changes affecting each group are different. The ratio of the average daily seismicity rate in the first 30 days of aftershocks vs. the average since 1975 is 4,546 for 0-1 km, 649 for 1-14.4 km, and 613 for 14.3 – 43.7 km (there is little difference between the latter two groups because the Joshua Tree aftershock sequence created high prior seismicity rates in the 1-14.4 km zone). Despite the differences in stress amplitude and seismicity rate changes, the decay rates of the aftershocks in the three regions lie on top of each other: their temporal distribution is the same (Kolmogorov-Smirnov test, 95% confidence). This indicates that stress change amplitude does not affect aftershock timing, suggesting that aftershock triggering is a binary “on” or “off” process.

5. Conclusion I propose that earthquake propagation can be physically modeled as the propagation of a crack through a heterogeneous medium, with stress loading set to the critical point. I propose that the statistical characteristics of this model are well represented by the stochastic model of Kagan and Knopoff (1981), in which earthquake propagation is modeled as a branching process of rapid aftershock triggering by tiny, equal-sized subevents. I model the subevents, or unit earthquakes, as having equal area, and make several predictions from the model. These include that aftershocks should be triggered by dynamic stress changes (because aftershock triggering and mainshock propagation are the same process), that aftershock production should scale with mainshock faulting area, that all properties of aftershock production, besides their total number, should be independent of mainshock magnitude, and that each aftershock should have a unique mainshock. Support for all of these predictions can be found statistically in aftershock catalog data.

6. Acknowledgement I thank Emily Brodsky, Rachel Abercrombie, Göran Ekström, Jim Rice, Thorsten Becker, Lucy Jones, Alan Felzer, Mike Harrington, and many, many others for assistance with different components of this work.

7. References
Felzer, K.R., Abercrombie, R.E., and Ekström, G. (2004), A common origin for aftershocks, foreshocks, and multiplets, Bull. Seis. Soc. Am., 94, 88-99.
Felzer, K.R., and Brodsky, E.E. (2005), Testing the stress shadow hypothesis, J. Geophys. Res., 110, B05S09, doi:10.1029/2004JB003277.
Felzer, K.R., and Brodsky, E.E. (submitted), Decay rate of aftershocks with distance indicates triggering by dynamic stress change.
Gerstenberger, M.C., Wiemer, S., Jones, L.M., and Reasenberg, P.A. (2005), Real-time forecasts of tomorrow’s earthquakes in California, Nature, 435, 328-331, doi:10.1038/nature03622.
Kagan, Y.Y., and Knopoff, L. (1981), Stochastic synthesis of earthquake catalogs, J. Geophys. Res., 86, 2853-2862.
Kagan, Y.Y. (1991), Likelihood analysis of earthquake catalogs, Geophys. J. Int., 106, 135-148.
Kilb, D., Martynov, V., and Vernon, F. (2004), Examination of the temporal lag between the 31 October 2001 Anza, California M 5.1 mainshock and the first aftershocks recorded by the Anza Seismic Network, Seis. Res. Lett., 74, 254.
Mallman, E.P., and Zoback, M.D. (2003), Using seismicity rates to assess Coulomb stress transfer models, EOS Trans. AGU, 84(46), Fall Meet. Suppl., Abstract S32F-03.
Marsan, D. (2003), Triggering of seismicity at short timescales following Californian earthquakes, J. Geophys. Res., 108, 2266, doi:10.1029/2002JB001946.
Michael, A.J., and Jones, L.M. (1998), Seismicity alert probabilities at Parkfield, California, revisited, Bull. Seis. Soc. Am., 88, 117-130.
Parsons, T. (2002), Global Omori decay of triggered earthquakes: Large aftershocks outside the classical aftershock zone, J. Geophys. Res., 107, doi:10.1029/2001JB000646.
Peng, Z., Vidale, J.E., Ishii, M., and Helmstetter, A. (2005), Early aftershock decay rates, Abstract, Southern California Earthquake Center Annual Meeting.
Vere-Jones, D. (1976), A branching model for crack propagation, Pure Appl. Geophys., 114, 711-725.
Yamanaka, Y., and Shimazaki, K. (1990), Scaling relationship between the number of aftershocks and the size of the mainshock, J. Phys. Earth, 38, 305-324.

Earthquake Dynamic Triggering and Ground Motion Scaling

J. Gomberg1, K. Felzer2, E. Brodsky3

1US Geological Survey, 3876 Central Ave., Memphis, Tennessee, 38152, USA 2 US Geological Survey, 525 Wilson Ave., Pasadena, California, 91106, USA 3 Univ. of Calif. Los Angeles, 1708 Geology Bldg., Los Angeles, California, 90095, USA

Corresponding Author: J. Gomberg, [email protected]

Abstract: While a growing body of evidence indicates that dynamic deformations (seismic waves) trigger earthquakes at all distances, it is not clear what characteristics of the dynamic deformations are most relevant to the mechanisms that cause triggering. In addition, if we are ultimately to account for dynamic triggering in time-dependent hazard assessments, we need to know how the relevant deformations vary, or scale, with distance from and size of the earthquake that generates them. In this study we address these issues by assuming that the probability of triggering aftershocks is proportional to a power of the peak amplitude of dynamic deformations affecting a fault, or to the product of this power and the rupture duration. We then ask whether these probabilities are consistent with measurements of triggered seismicity, in particular, linear aftershock densities. Preliminary results suggest that the hypothesis that peak acceleration alone controls aftershock triggering can be rejected. The other mechanisms remain viable, but require different values of d, the effective dimension of the fault population.

1. Introduction
Testing the consistency of dynamic deformations with aftershock densities requires developing an analytic model relating the two. The model is constrained by measured scaling parameters and tested using numerical simulations. In this study ‘aftershocks’ are those events that occur within minutes to days after a larger mainshock (the triggering earthquake), without a priori limits on their distance. We consider not just the transfer of stresses from triggering to potentially triggered faults, but dynamic deformations that we are able to measure using recorded ground motions, which include displacements and their temporal rates of change at the surface. These serve as proxies for displacements, strains, and strain rates at seismogenic depths (Gomberg and Agnew, 1996).

2. Testing Our Hypothesis
We assume that the probability, P(r), of dynamic deformations triggering a single aftershock at distance r is proportional to some power, χ, of the peak deformation amplitude, G(r), or alternatively to the product of G(r)^χ and the rupture duration, which scales as 10^{M/2}. We measure peak ground motions to constrain a model of G(r),

G(r) = K D^m / (αD + r)^n,    (1)

where D is the rupture dimension. The constants K, m, n, and α are constrained observationally for velocities only in the near-field (r ≪ D). Linear aftershock density, ρ(r), the total number of aftershocks per unit distance, is the total number of faults, N_f(r), times the probability of each failing, or

ρ(r) = [N_f(r)/Δr] P(r).    (2)

Felzer and Brodsky (2005) show empirically that

ρ(r) = C 10^{M−M_min} r^{−γ},    (3)

with a constant distance decay rate, γ, to within ~10s of meters of the mainshock. To measure

N_f(r) we sum all the faults within a volume defined by a surface, S, surrounding the mainshock at distance r everywhere, and of width Δr, or

N_f(r) = [∫_S F(r′) ds] Δr,    (4)

where F(r′) describes the number of faults per unit volume and its variation throughout the crust. Numerical models show that F(r′) can be considered locally as A r^{d−3}, in which d is the effective dimensionality of the fault system. If we consider only planar, rectangular faults, S is simply defined and the analytic model is derived as

ρ̃(r) = 4πA K^{2/m̃} D^2 [1 + (D/r) + (1/2π)(D/r)^2] r^{d−1} (αD + r)^{−2n/m̃} = C D^2 10^{−M_min} r^{−γ}.    (5)

The tilde indicates m or m+1 if triggering depends on peak ground motions only or on the product of these and duration, respectively. We can now ask if measured ground motion scaling parameters are consistent with this model and measured aftershock densities. In the far-field, r ≫ D, this model applies to acceleration, velocity, and displacement (because we can constrain it observationally) and may be simplified to

ρ̃(r) = 4πA K^{2/m̃} r^{d−1−2n/m̃} = C 10^{−M_min} r^{−γ}.    (6)

This constrains d, noting that

d = 1 + 2n/m̃ − γ.    (7)

Viable triggering deformations must satisfy the limit d < 3, and according to previous studies, d < ~2. In the near-field our model applies only to velocities or their product with duration, and to be consistent they must have parameters that satisfy the following:

ρ̃(r) = [2A α^{−2n/m̃} K^{2/m̃}] D^{2−2n/m̃} r^{d−3} = C 10^{−M_min} r^{−γ}.    (8)

This is only satisfied if n = m̃.

We examine three datasets. The first includes aftershock locations and magnitudes, and PGAs, PGVs, and PGDs for 22 Japanese earthquakes with magnitudes of M3.0-6.1. Observations come from the National Research Institute for Earth Science and Disaster Prevention (NIED) Hi-net network in Japan, comprised of >700 seismic stations. The second dataset includes the catalog of the California Integrated Seismic Network (CISN) and PGAs, PGVs, and PGDs for the 2005 Mw5.2 Anza and 2005 Mw4.9 Yucaipa earthquakes in southern California, measured from waveforms recorded by the California Geological Survey CSMIP, the US Geological Survey NSMP, the University of California San Diego (UCSD) Anza networks, and the CISN. We chose these two earthquakes because of the abundance of data and their well-characterized aftershock sequences. Because of the depths and source dimensions of these earthquakes, almost all the measurements are in the far-field. We measure the closest distance between the fault and observation point for the California and global data (below) and hypocentral distance for the Japanese data. Our final dataset, of PGVs only, was compiled to examine ground motion scaling at both near- and far-field distances, which required data from large earthquakes that rupture near the surface. Measuring aftershock densities uniformly would be extremely difficult given the heterogeneity of available catalogs and was not done. The PGVs were measured at distances of 0.1 km to 5300 km from earthquakes with magnitudes M4.4 to M7.9. They include already measured values or waveforms from which we measured PGVs; data sources included the Pacific Earthquake Engineering Research Center, the Incorporated Research Institutions for Seismology, Japan's NIED K-NET, and the Seismology Lab at the University of Nevada Reno. We first constrain the ground motion model parameters.
The Japanese data provide the primary constraint on the amplitude scaling D^m, with values of m ~ 1.0, 1.5, and 2.0 for PGA, PGV, and PGD, respectively. The global and California datasets are consistent with these, but do not provide as strong a constraint. We find best-fitting far-field distance decay rates for PGA, PGV, and PGD of ~r^{−n}, with n ~ 2.0, 1.7, 1.5 and n ~ 2.1, 1.7, 1.4 for the Japanese and California data, respectively. The far-field global PGVs are best fit with n ~ 1.5; the higher estimates for the Japanese and California data fall within the scatter. The global data provide the only observational constraint on near-field ground motions, and show that as the triggering faults are approached, the decay rate decreases and PGVs converge to approximately the same value. This is a consequence of fault finiteness; as the triggering fault is approached, smaller areas contribute to the radiation (Brune, 1970; Anderson, 2000). In this particular case, in which m ~ n, a convenient scaling of r by D also removes all magnitude dependence (Fig. 1); this is evident from our ground motion model, which becomes G(r) ~ K/(α + r/D)^n when m ~ n.

Figure 1: PGVs for our global dataset versus distance scaled by rupture dimension D, and our ground motion model (red curve); deformations are ~the same for all earthquakes at the same scaled distance, and maximum at r/D < ~1, where triggering potential is greatest.
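When m ≈ n, the ground motion model of eq. (1) depends on distance only through the scaled distance r/D, which is why the Figure 1 collapse works. A minimal sketch (K and α here are illustrative placeholders, not the fitted values):

```python
def ground_motion(r_km, D_km, K=1.0, m=1.7, n=1.7, alpha=0.4):
    """Peak ground motion model of eq. (1): G(r) = K * D**m / (alpha*D + r)**n.
    K and alpha are illustrative placeholders, not fitted values."""
    return K * D_km**m / (alpha * D_km + r_km)**n

# When m == n, G depends on distance only through r/D:
small, large = 1.0, 10.0            # rupture dimensions (km) of two earthquakes
for scaled in (0.1, 1.0, 10.0):     # same scaled distance r/D for both
    g1 = ground_motion(scaled * small, small)
    g2 = ground_motion(scaled * large, large)
    assert abs(g1 - g2) / g1 < 1e-9  # amplitudes collapse onto one curve
```

With m ≠ n the collapse fails and a residual magnitude dependence remains, consistent with the text's observation that this scaling is convenient only because m ~ n for PGV.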

Observations of aftershock densities for the Japanese and California earthquakes suggest distance decay rates of γ ~ 1.1 and 1.3, respectively. These, together with the PGA, PGV, and PGD parameter estimates and equation (5), imply faulting dimensionalities of d ~ 3.9-4.2, 2.0-2.5, and 1.1-1.7, respectively, if we consider peak amplitudes alone. If we also consider the effect of duration, the dimensionalities become d ~ 1.8-2.2, 0.8-1.3, and 0.4-1.0, respectively. The large value of d for PGA (strain rate) alone suggests it does not control dynamic triggering. We use ground motion observations, G(r), and estimated scaling parameters to predict aftershock densities, to test our model and the consistency of the dynamic deformations with the densities (Fig. 2).

Figure 2: Anza and Yucaipa earthquake aftershock densities and those predicted directly from the ground motion observations, G(r), via our amplitude-only model (colored symbols); predictions agree within the scatter of the density measurements (triangles). Our analytic model applies to velocities at all distances, so we use it with scaling parameters measured in the far-field to predict aftershock densities in the near-field (red dashed curve); these are consistent with those observed.
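The faulting dimensionalities quoted above follow directly from the far-field constraint d = 1 + 2n/m̃ − γ of equation (7); a quick check, using the m and n exponents and γ values reported in the text (m̃ = m for amplitude-only triggering, m̃ = m + 1 when duration is included):

```python
def effective_dimension(n, m_tilde, gamma):
    """Eq. (7): effective fault-population dimension d = 1 + 2n/m_tilde - gamma."""
    return 1.0 + 2.0 * n / m_tilde - gamma

# (m, n) pairs for PGA, PGV, PGD from the Japanese data;
# gamma ~ 1.1 (Japan) and 1.3 (California) from the aftershock densities
scaling = {"PGA": (1.0, 2.0), "PGV": (1.5, 1.7), "PGD": (2.0, 1.5)}
for name, (m, n) in scaling.items():
    for gamma in (1.1, 1.3):
        d_amp = effective_dimension(n, m, gamma)        # peak amplitude alone
        d_dur = effective_dimension(n, m + 1.0, gamma)  # amplitude x duration
        print(f"{name} gamma={gamma}: d={d_amp:.1f} (amplitude), {d_dur:.1f} (with duration)")
```

For PGA with amplitude-only scaling this gives d near 3.9, above the physical limit d < 3, which is the basis for rejecting peak acceleration as the controlling deformation.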

The relationships between deformations and densities in the near-field potentially provide strong constraints on viable deformation characteristics, although additional numerical modeling and data acquisition and analyses are required to verify this.

3. Discussion & Conclusion
We test a suite of hypotheses that the probability of dynamic triggering of a fault at distance r from a triggering rupture is related directly to the maximum dynamic deformations (displacements, strains, or strain rates), or to the product of these with the rupture duration. To do this we have developed an analytic model that may be constrained observationally and with numerical experiments. Preliminary results suggest we can reject the hypothesis that aftershocks are triggered by maximum strain rates alone. Additional analyses of near-field data may permit us to rule out other deformation characteristics. Our model allows us to quantify the probability that a given deformation will trigger an earthquake on a particular fault at distance r, which is important for evaluating the viability of physical models of dynamic weakening and nucleation (Beeler and Lockner, 2003; Brodsky and Prejean, 2005; Johnson and Jia, 2005; Marone and Savage, in preparation). We show an example in Figure 3. Since the deformations approach a constant as the fault is approached, the probabilities very close to the fault are the same for all earthquakes. At larger distances they depend on the rupture dimension, such that by ~1 rupture dimension they have decreased to 10% of the maximum probability (corresponding to PGVs of ~15 cm/s or ~50 µstrain), except in cases with exceptionally large (or perhaps long duration) ground motions at remote distances, when the probabilities will be correspondingly elevated (Gomberg and Johnson, 2005). Aftershock densities also describe probabilities: that of an aftershock occurring at distance r anywhere surrounding the triggering fault. However, they fall off at a constant rate with distance, regardless of the rupture dimension. Numerical models show that all these behaviors may be explained if the deformations and triggering from a large rupture are simply the sums of those from smaller ones on the same fault.
Our results do not preclude the possibility that other characteristics of the deformation field may be important, some of which we will measure and test similarly. These include shaking duration, total radiated energy, and perhaps frequency content.

Figure 3: Left) Probabilities calculated as [G(r)/G(0)]^χ with m = 2, n = 1.75, α = 0.4 for hypothetical faults with the dimensions labeled. Right) The probability of an aftershock occurring at any azimuth within a distance Δr varies as ρ(r); γ = 1.5 in this example. The integral of this provides the probability of an aftershock occurring at any distance less than r.
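The ~10% falloff at one rupture dimension quoted above can be verified directly from the ground motion model; a sketch using the Figure 3 parameters (taking χ = 2/m = 1 for the amplitude-only case is our assumption, since eq. (5) pairs the exponent 2/m̃ with the amplitude):

```python
def trigger_probability(r, D, m=2.0, n=1.75, alpha=0.4, chi=1.0):
    """Relative triggering probability [G(r)/G(0)]**chi with
    G(r) = K * D**m / (alpha*D + r)**n; the constant K cancels in the ratio."""
    g = D**m / (alpha * D + r)**n
    g0 = D**m / (alpha * D)**n
    return (g / g0)**chi

p_at_one_dimension = trigger_probability(r=5.0, D=5.0)  # r equals 1 rupture dimension
assert 0.08 < p_at_one_dimension < 0.13  # ~10% of the on-fault maximum
```

Because the ratio depends only on r/D, the curves for different fault dimensions in Figure 3 coincide when plotted against scaled distance.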

4. Acknowledgements
This work is supported by the USGS National Earthquake Hazards Reduction Program. We also acknowledge Miaki Ishii and Frank Vernon (both at UCSD) for providing data.

5. References
Anderson, J.G. (2000), Expected shape of regressions for ground-motion parameters on rock, Bull. Seismo. Soc. Am., 90, S43-S52.
Beeler, N.M., and Lockner, D.A. (2003), Why earthquakes correlate weakly with the solid Earth tides: Effects of periodic stress on the rate and probability of earthquake occurrence, J. Geophys. Res., 108(B8), 2391, doi:10.1029/2001JB001518.
Brodsky, E.E., and Prejean, S.G. (2005), New constraints on mechanisms of remotely triggered seismicity at Long Valley Caldera, J. Geophys. Res., 110, B04302, doi:10.1029/2004JB003211.
Brune, J.N. (1970), Tectonic stress and the spectra of seismic shear waves from earthquakes, J. Geophys. Res., 75, 4997-5009.
Felzer, K.R. and Brodsky, E.E. (2005, in press), Evidence for dynamic aftershock triggering from earthquake densities, Nature, 14 p.
Gomberg, J. and Agnew, D.C. (1996), The accuracy of seismic estimates of dynamic strains from Pinyon Flat Observatory, California, strainmeter and seismograph data, Bull. Seismo. Soc. Am., 86, 212-220.
Gomberg, J. and Johnson, P. (2005), Dynamic triggering of earthquakes, Nature, 437, p. 830, doi:10.1038/nature04167.
Helmstetter, A., Kagan, Y.Y. and Jackson, D.D. (2005), Importance of small earthquakes for stress transfers and earthquake triggering, J. Geophys. Res., 110, B05S08, doi:10.1029/2004JB003286.
Johnson, P. and Jia, X. (2005), Nonlinear dynamics, granular media and dynamic earthquake triggering, Nature, 437, 871-874, doi:10.1038/nature04015.
Kanamori, H. and Anderson, D.L. (1975), Theoretical basis of some empirical relations in seismology, Bull. Seismo. Soc. Am., 65, 1073-1095.

RETAS: a restricted ETAS model inspired by Bath's law

D. Gospodinov1 and R. Rotondi2

1 Geophysical Institute of the Bulgarian Academy of Sciences, Centralna Posta PK 258, 4000 Plovdiv, Bulgaria 2 C.N.R. - Istituto di Matematica Applicata e Tecnologie Informatiche, Via Bassini 15, 20133 Milano, Italy

Corresponding Author: R. Rotondi, [email protected]

Abstract: A restricted trigger model is proposed to analyse the temporal behaviour of some aftershock sequences. The conditional intensity function of the model is similar to that of the Epidemic Type Aftershock-Sequence (ETAS) model, with the restriction that only the aftershocks of magnitude bigger than or equal to some threshold Mtr can trigger secondary events. For this reason we have named the model the restricted epidemic type aftershock-sequence (RETAS) model. By varying the triggering threshold we examine the variants of the RETAS model, which range from the Modified Omori Formula (MOF) to the ETAS model, including such models as limit cases.

1 Introduction
Bath's law states that the difference between the main shock magnitude and the magnitude of the strongest aftershock is constant, on average 1.2 for Richter and 1.4 for Utsu (Ogata, 2001). By extending this principle to the subsequences generated by primary events in a trigger model, we get that the difference between the weakest primary event and the weakest event in the aftershock sequence must be at least 1.2. This suggests choosing the primary events among those of magnitude larger than a suitable threshold Mtr. The aim of our work is to examine the class of trigger models obtained by varying Mtr between the cut-off M0 and the maximum magnitude Mmax. At first we simulated two data sets according to the ETAS and RETAS models, respectively, and checked that the true model was identified by the Akaike criterion. Then we analysed two aftershock sequences showing different behaviour: the first, generated by the strong earthquake of M = 7.8 on April 4, 1904 in the Kresna region, SW Bulgaria, is a typical mainshock-aftershock sequence, while the second is the seismic swarm that started on September 26, 1997 in the Umbria-Marche region, central Italy. Our idea is to look for the model that best fits the data according to the Akaike criterion, and hence to identify which physical interpretation can be given to each different behaviour, among the following: the aftershock sequence was triggered by the main shock only, by every event, or by just some selected events. Then we compare the results obtained through the stochastic modelling with the description of the seismic crisis provided by the seismological studies.

2 Trigger models
Let us consider the conditional intensity function of a generic trigger model:

λ(t|Ht) = μ + Σ_{m∈P; tm<t} Km / (t − tm + c)^p,

where P denotes the set of primary events.

The MOF and the ETAS model correspond to the conditional intensities

λO(t) = K / (t + c)^p,    λE(t|Ht) = μ + Σ_{ti<t} Ki / (t − ti + c)^p.

By borrowing from Bath's law the idea of a gap between the magnitude Mtr of the triggering event and that of the largest event generated, we have examined the model in which, as in Ogata (2001), the primary events are those having magnitude larger than or equal to a threshold Mtr, but in which, contrary to Ogata (2001), the parameters Ki obey the restriction Ki = K0 e^{α(Mi−M0)} (Gospodinov and Rotondi (2005)). The conditional intensity function of our model turns out to be

λ(t|Ht) = μ + Σ_{ti<t; Mi≥Mtr} K0 e^{α(Mi−M0)} / (t − ti + c)^p.

Moreover, we note that in this model, contrary to the original trigger model (Vere-Jones and Davies, 1966; Vere-Jones, 1970), each primary event belongs to the offspring of the preceding primary events. We have not fixed the size of the gap between Mtr and M0; rather, by varying Mtr we have examined all the models between the MOF and the ETAS model on the basis of the Akaike criterion. We recall that in the MOF, μ is generally set equal to 0 and K0 e^{α(M1−M0)} = K for the identifiability of the model. After having chosen the best model among those proposed, we have evaluated its goodness of fit through residual analysis.
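The RETAS conditional intensity function is straightforward to evaluate; a minimal sketch (parameter values in the example are illustrative, not the fitted ones):

```python
import math

def retas_intensity(t, events, mu, K0, alpha, c, p, M0, Mtr):
    """RETAS conditional intensity: lambda(t|H_t) = mu +
    sum over past events with M_i >= Mtr of K0*exp(alpha*(M_i-M0))/(t-t_i+c)**p.
    `events` is a list of (t_i, M_i) pairs; Mtr = M0 recovers the ETAS model."""
    rate = mu
    for t_i, M_i in events:
        if t_i < t and M_i >= Mtr:
            rate += K0 * math.exp(alpha * (M_i - M0)) / (t - t_i + c)**p
    return rate

# illustrative catalog: (time in days, magnitude)
catalog = [(0.0, 5.8), (0.5, 4.1), (2.0, 4.9)]
lam_etas  = retas_intensity(3.0, catalog, mu=0.1, K0=0.02, alpha=0.9,
                            c=0.01, p=1.1, M0=2.9, Mtr=2.9)  # every event triggers
lam_retas = retas_intensity(3.0, catalog, mu=0.1, K0=0.02, alpha=0.9,
                            c=0.01, p=1.1, M0=2.9, Mtr=5.0)  # only M >= 5.0 triggers
assert lam_retas < lam_etas  # restricting the triggers can only lower the intensity
```

Raising Mtr to the maximum magnitude leaves only the main shock as a trigger, recovering MOF-like behaviour (with μ = 0).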

3 Data analysis
We sketch only the analysis of the 1997 aftershock sequence in the Umbria-Marche region, because it best highlights the properties of the RETAS model. A seismic sequence struck the Umbria-Marche region during September-October 1997. It was characterized by the occurrence of six earthquakes of magnitude larger than 5 in a period of 20 days. More than 2000 events were recorded from September 26, 1997 up to November 3, 1997 (Waveforms, arrival times and locations of the 1997 Umbria-Marche aftershocks sequence). This data set can be considered complete above magnitude 2.9, and above this cut-off it contains 508 events. Studies of different features of the sequence reveal that it is an atypical compound aftershock sequence for a magnitude 6 main shock. Deschamps et al. (2000) suggest that the strongest events delineate three subzones and are spatially close to the rupture zones of the main shocks. By fitting the class of RETAS models to the aftershock sequence, we obtained that the best model is the ETAS model with Mtr = M0 = 2.9. The α parameter generally varies in the range 0.4-3.0 and, according to Ogata (1999, p. 487), its value is smaller in the case of swarm-type activity than in ordinary mainshock and aftershock activities. In our case α = 0.89 classifies the sequence as a swarm. The residual analysis reveals that the number of observed events exceeds that of the expected ones over nearly the whole time interval under study.


Figure 1: Umbria-Marche region: AIC value of the RETAS model for different Mtr triggering magnitudes with cut-off magnitude (a) M0 = 2.9 and (b) M0 = 3.6. The best model is (a) the ETAS and (b) the RETAS model with Mtr = 5.0.

Figure 1a shows the trend of the AIC values with respect to different triggering magnitudes. As in the analysis of the data set simulated from a RETAS model, we observe the presence of a local minimum at Mtr = 4.6. This observation led us to the idea of applying RETAS models with Mtr ≥ 4.6 to the subsequence with cut-off magnitude M0 = 3.6, so that the difference between Mtr and the new M0 is at least 1. By raising the threshold magnitude, secondary clusters appear, generated by the six strongest events. The AIC values versus the triggering magnitudes are plotted in Fig. 1b. The residual analysis shows that, apart from a slight decrease of the activity between the 13th and the 16th day, the observed cumulative number of events is always within the error bounds. Summarizing, the algorithm we have applied is composed of two stages. The first consists of the following steps:

i. order the set D = {Mi}_{i=1}^N in increasing order and build the set of distinct magnitude levels {Magk}_{k=1}^K, obtained by removing from D the repeated values, so that Magk < Magk+1 for k = 1, ..., K−1, ∀k Magk ∈ D, and ∀i ∃k: Mi = Magk; set k = 0;
ii. k ← k + 1, Mtr = Magk;
iii. given the conditional intensity function

λ(t|Ht) = μ + Σ_{ti<t; Mi≥Mtr} K0 e^{α(Mi−M0)} / (t − ti + c)^p,

maximize the log-likelihood and evaluate AICk;
iv. if k = 1, then set kbest = 1 and AICbest = AIC1; otherwise, if AICk < AICbest, then set kbest = k and AICbest = AICk;
v. if k < K then go to (ii), otherwise stop.
The final output, expressed by the value kbest, identifies the best model: ETAS if kbest = 1,

MOF if kbest = K, and RETAS with triggering magnitude Mag_kbest if 1 < kbest < K. If the ETAS model turns out to be the best model, it is interesting to examine whether particular correlations between the strongest aftershocks and the remaining seismicity emerge by raising the threshold M0. In this second stage the analysis can be carried out through the following steps:
i. plot the set of pairs {(Magk, AICk)}_{k=1}^K;
ii. if this set shows a monotonic trend, then stop; otherwise find the index of the local minimum, say j, set Mtr = Magj and M0 ≈ Mtr − 1, and go back to the first stage considering the subset formed by the events of size exceeding M0.
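Stage one of the algorithm can be sketched in code. This is a toy illustration: α, c, and p are held fixed and (μ, K0) are maximized over a coarse grid rather than by full maximum likelihood, so only the AIC bookkeeping of steps i-v is faithful, not the estimation itself:

```python
import math

def retas_loglik(events, T, mu, K0, alpha, c, p, M0, Mtr):
    """Point-process log-likelihood on [0, T]:
    sum_i log lambda(t_i | H_{t_i}) minus the integral of lambda over [0, T].
    `events` is a time-ordered list of (t_i, M_i) pairs."""
    logsum = 0.0
    for j, (t_j, _) in enumerate(events):
        lam = mu
        for t_i, M_i in events[:j]:
            if M_i >= Mtr:
                lam += K0 * math.exp(alpha * (M_i - M0)) / (t_j - t_i + c)**p
        logsum += math.log(lam)
    # closed-form integral of each Omori kernel (valid for p != 1), plus background
    integral = mu * T
    for t_i, M_i in events:
        if M_i >= Mtr:
            integral += (K0 * math.exp(alpha * (M_i - M0))
                         * (c**(1 - p) - (T - t_i + c)**(1 - p)) / (p - 1))
    return logsum - integral

def select_model(events, T, M0, alpha=0.9, c=0.01, p=1.2):
    """Steps i-v: scan the distinct magnitude levels as Mtr, crudely maximize
    over (mu, K0) on a grid, and keep the level with the smallest AIC."""
    levels = sorted({M for _, M in events})
    best = None
    for k, Mtr in enumerate(levels, start=1):
        ll = max(retas_loglik(events, T, mu, K0, alpha, c, p, M0, Mtr)
                 for mu in (0.05, 0.1, 0.5, 1.0)
                 for K0 in (0.005, 0.02, 0.1))
        aic = -2.0 * ll + 2 * 2   # two free parameters in this toy search
        if best is None or aic < best[1]:
            best = (k, aic, Mtr)
    return best  # (kbest, AICbest, triggering magnitude)

catalog = [(0.3, 5.6), (0.8, 3.1), (1.2, 4.2), (2.5, 3.0), (3.1, 3.5)]
kbest, aic_best, mtr = select_model(catalog, T=10.0, M0=2.9)
```

kbest = 1 corresponds to the ETAS model, kbest = K to the MOF-like limit, and intermediate values to RETAS with triggering magnitude Mag_kbest.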

4 Conclusions
We have found agreement between the physical interpretations of the results obtained through the stochastic modelling and the description of the seismic crisis provided by detailed seismological studies. It appears that the RETAS model can be applied to model compound seismic sequences in regions of complex tectonic structure with a highly fractured earth crust.

Acknowledgements
During this work D. Gospodinov was supported by Grant No. 219.34 of the N.A.T.O.-C.N.R. Outreach Fellowships Programme 2001.

References
Deschamps, A., Courboulex, F., Gaffet, S., Lomax, A., Virieux, J., Amato, A., Azzara, R., Castello, B., Chiarabba, C., Cimini, G.B., Cocco, M., Di Bona, M., Margheriti, L., Mele, F., Selvaggi, G., Chiaraluce, L., Piccinini, D. and Ripepe, M. (2000), Spatio-temporal distribution of seismic activity during the Umbria-Marche crisis, 1997, Journal of Seismology, 4, 377-386.
Gospodinov, D. and Rotondi, R. (2005), Statistical analysis of triggered seismicity in the Kresna region (SW Bulgaria, 1904) and the Umbria-Marche region (central Italy, 1997), Pure and Applied Geophysics, in press.
Ogata, Y. (1988), Statistical models for earthquake occurrences and residual analysis for point processes, J. Amer. Statist. Assoc., 83, 401, 9-27.
Ogata, Y. (1999), Seismicity analysis through point-process modeling: a review, in Seismicity Patterns, Their Statistical Significance and Physical Meaning (eds. M. Wyss, K. Shimazaki and A. Ito), Birkhäuser, Basel, Pure and Applied Geophysics, 155, 471-507.
Ogata, Y. (2001), Exploratory analysis of earthquake clusters by likelihood-based trigger models, J. Appl. Probab., 38A, 202-212.
Vere-Jones, D. (1970), Stochastic models for earthquake occurrence (with discussion), J. Roy. Statist. Soc., Ser. B, 32, 1-62.
Vere-Jones, D. and Davies, R.B. (1966), A statistical survey of earthquakes in the main seismic region of New Zealand, Part 2, Time series analyses, N.Z. J. Geol. Geophys., 9, 251-284.
Waveforms, arrival times and locations of the 1997 Umbria-Marche aftershocks sequence (Istituto Nazionale di Geofisica e Vulcanologia, Rome, Italy; UMR Géoscience Azur, CNRS/UNSA, Valbonne, France; Laboratorio di Geofisica, Università di Camerino, Camerino, Italy).

Statistical Measures of Dynamic Triggering

Susanna Gross
CIRES, University of Colorado, 216 UCB, Boulder, CO 80309, [email protected]

November 4, 2005

Abstract: A new theory of dynamic triggering based upon fault friction that responds to both slip rate and contact state will be presented, along with observational tests of the theory using aftershocks surrounding 3 southern California events, the 1992 Landers, 1994 Northridge and 1999 Hector Mine mainshocks. This theory of transient fault movement naturally explains the temporal distribution of dynamically triggered aftershock sequences lasting many months. A fast approximate dynamic stress transfer model is used to compute the dynamic failure function predicted by the theory, and 4 different statistical measures are used to evaluate it. Static, dynamic, and hybrid failure functions are quantitatively evaluated by fitting exactly the same data using functional forms that have exactly the same number of degrees of freedom. The temporal and the spatial structure of a 40,000 event southern California catalog is used to constrain these models. The results suggest that both static and dynamic triggering occur, with the dynamic triggering occurring through transient reversal of the resolved shear stress and consequent fault movement.

1 Introduction

Several careful studies (e.g. Gomberg et al. 1998) have found that when positive shear stresses are applied to a fault and then removed, almost all triggering ceases upon their removal. The shear stresses are positive in the sense that they increase the pre-existing tectonic stresses on the fault, driving it towards static failure. The scarcity of long-lasting effects from positive stress transients is a natural consequence of the approximate linearity of rate and state friction models: the unloading process nearly mirrors the loading process, so it returns the fault to almost the same state and produces few longer-term effects such as delayed aftershocks. In contrast, the dynamic triggering mechanism presented below produces delayed aftershocks as a result of transient shear stresses which act in a sense opposite to the tectonic stresses. Negative shear stresses can easily have effects that last long after they are gone, because the loading process does not mirror the unloading process. Dynamically triggered aftershocks result from the movement of faults that failed recently, and therefore have a relatively small amount of accumulated shear stress. These events consequently should have unusually low stress drops. This paper extends Dieterich seismicity theory to cases in which shear stresses on the fault are transiently reversed by dynamic stresses carried in the seismic wave train from a mainshock. Comparisons with observed earthquakes are made for failure stresses computed from dynamic and static stress transfer models that involve exactly the same number of free parameters. Finally, the models are evaluated with various statistical measures.

2 Dynamic and Static Triggering

Dynamic triggering of aftershock sequences could result from a new population of contacts being formed as the shear stress on the fault is transiently reversed and the surfaces are repositioned slightly in response to dynamic stresses from a mainshock. After the wave train has propagated away, the fault may not experience any residual static stress as a result of the mainshock, so the resolved shear stress returns to its original value, but the new contacts that form in response are not the same as the old ones. They are much younger, and therefore smaller, and there may also be a residual displacement, because the trajectory the fault follows in reloading need not be identical to the trajectory it followed in unloading. These new contacts will naturally give rise to an aftershock sequence as they mature, much like the situation following fault rupture in a mainshock. In the case of relatively young faults having small contact areas, which are particularly easily triggered, it is possible to derive an expression (please see http://strike.colorado.edu/~sjg/dynamic) for the number of aftershocks N as a function of the minimum dynamic shear stress τ_min; the expression involves 3 unknown constants. (1)

Dynamic Failure Stress
If we define the dynamic failure stress σ_f to be that stress which generates aftershocks, so that it is proportional to N, expression (1) can be rewritten in terms of new unknown constants so that σ_f is linearly proportional to N. (2) It is fortunate that only two constants are then needed, because an 8-parameter model, including 6 parameters to define the background stress state, can be used. The static failure stress model discussed below also involves 8 parameters. This dynamic failure stress is labeled "minfail" in the statistical tables below.

Static Coulomb Failure Stress
The static stress model computes a change in static failure stress at the hypocenter of each aftershock using the three-dimensional stress tensors from before and after the mainshock. The static failure stress σ_f is computed from the principal stresses as the resolved shear stress minus the normal stress multiplied by an effective coefficient of friction μ′, in which μ′ includes the effects of fluid pressures. (3) This failure stress is called "Coulomb" in the tables below. Static stress models involve 8 parameters, including 7 to define the background stress state, which includes variations of vertical normal stress with depth. Like the other 7 parameters, the effective coefficient of friction is a free parameter, adjusted to match the observed distribution of seismicity.

Dynamic failure stress (max Coulomb)
Most previous studies of dynamic triggering have used a failure stress closely related to the static failure stress discussed above, and assumed that dynamic triggering would be linearly proportional to the Coulomb failure stress maximized over time. (4) This makes some intuitive sense, because it is a measure of how far the triggered faults have been driven toward static failure. In the tables below it is labeled "max Coulomb".

Hybrid Triggering - both Static and Dynamic
I have constructed a hybrid aftershock model that includes terms representing both static and dynamic stress transfer triggering, but lacks other effects. This hybrid model provides an additional measure of how strongly the data support dynamic triggering, as contrasted with static triggering, in the generation of aftershocks. One of the free parameters in the hybrid model is a weight that combines the dynamic and static failure stresses and determines the relative importance of static and dynamic triggering. (5) In order to keep the number of free parameters down, a constant shear stress scaling was assumed. With the elimination of the effective density, a term that permits the vertical normal stress to increase with depth, only 8 free parameters are used, the same number as for the other failure stresses being evaluated. This model is labeled "min shear + Coulomb" in the tables below.

Modified Omori Aftershock decay
The modified Omori relation (Utsu 1961) with an additional background term (Gross and Kisslinger 1994) is

n(t) = K / (t + c)^p + μ,    (6)

where n(t) is the number of events occurring per unit time at time t and μ is the background rate.

Dieterich Aftershock decay
A paper by Dieterich (1994) explores the implications of rate and state dependent friction for the generation of aftershocks. If a population of faults in equilibrium with a steady tectonic stressing rate τ̇ and normal stress σ, so that it produces a rate of background seismicity r, is subjected to a step Δτ in shear stress, the new seismicity rate R will follow the following function of time t:

R(t) = r / {[exp(−Δτ/Aσ) − 1] exp(−t/t_a) + 1},    (7)

in which A is a constitutive parameter measurable for laboratory samples and t_a = Aσ/τ̇, with τ̇ the stress rate after the mainshock, is the time at which the sequence decays to the steady background rate of activity.
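The two decay laws of equations (6) and (7) can be sketched numerically; the Dieterich form below is the standard stress-step solution from Dieterich (1994), and the parameter values are illustrative only:

```python
import math

def omori_rate(t, K, c, p, mu):
    """Modified Omori rate with a background term: n(t) = K/(t+c)**p + mu."""
    return K / (t + c)**p + mu

def dieterich_rate(t, r, dtau_over_Asigma, t_a):
    """Dieterich (1994) seismicity rate after a shear stress step dtau:
    R(t) = r / ((exp(-dtau/(A*sigma)) - 1)*exp(-t/t_a) + 1)."""
    return r / ((math.exp(-dtau_over_Asigma) - 1.0) * math.exp(-t / t_a) + 1.0)

r, t_a = 0.5, 100.0                       # background rate, aftershock duration
R0 = dieterich_rate(0.0, r, 3.0, t_a)     # just after a positive stress step
R_late = dieterich_rate(10 * t_a, r, 3.0, t_a)
assert abs(R0 - r * math.exp(3.0)) < 1e-9   # initial jump by exp(dtau/(A*sigma))
assert abs(R_late - r) / r < 1e-3           # decays back to the background rate
```

A negative stress step (dtau < 0) initially suppresses the rate, which then recovers toward r over the same timescale t_a, the asymmetry exploited by the triggering mechanism described above.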

3 Data

Southern California catalogs
The catalog of seismicity used in this work was compiled by the Caltech/USGS southern California seismic network from January 1981 to December 2000. A minimum magnitude cut was applied, but no spatial cut. The catalog was divided into a spatial grid of boxes whose sizes reflect the level of seismic activity.

Dynamic source models
We have used inversions for the moment distribution of Wald and Heaton (1994) to define the Landers source, Wald et al. (1996) to define the Northridge source, and Ji et al. (2002) to define the Hector Mine source. All elements in the source models are retained in the stress models, each one represented with a point double-couple, with a slip history taken from the original inversion of waveform and geodetic data as published in the references cited above.

4 Models

Fast approximate numerical model for waveforms

Because of difficulties computing accurate static stresses using a reflectivity model, I developed a much simpler model for dynamic and static stresses based upon the equations for a point double couple presented in Aki and Richards (1980). This model neglects surface waves, but it is numerically stable and very fast.

Single event stress state inversions

The orientations and relative magnitudes of the principal stresses for a general background stress state were fit to the seismicity. The magnitude of the background stress and parameters relating to the failure function, such as the effective coefficient of friction, were determined by finding the spatial distribution of the failure stress which best separates the aftershocks from the background seismicity.



t = ( ⟨ΔΦ⟩_aft − ⟨ΔΦ⟩_bg ) / ( s_p √(1/n_aft + 1/n_bg) )   (8)

The average change in failure stress from the background seismicity, ⟨ΔΦ⟩_bg, is compared with the average change in failure function from the aftershocks, ⟨ΔΦ⟩_aft, and normalized by the pooled standard deviation s_p. To balance the effects of distant and near aftershocks upon the solution, t-statistics are computed for seismicity in 8 concentric shells, and summed. Summed t-statistics for 20,000 models were evaluated in the search for the best fitting 8-parameter model for each of the inversions listed in Table 1. Models are evaluated in 20 groups of 1000 each, following a customized genetic algorithm that has proven successful in previous work (Gross and Kisslinger 1997). The statistics in Table 2 are quite similar to the single-event summary t-statistic approach discussed above, but the robust KS statistic is used instead of the t-statistic. A major advantage of the KS statistic compared to the t-statistic is that it does not assume that the failure stresses are normally distributed.

Stress Transfer And Nucleation (STAN) model

The Stress Transfer and Nucleation (STAN) model is another way to evaluate the aftershock decay and stress transfer models. It compares seismicity models to the temporal distribution of seismicity in a large number of non-overlapping spatial subsets, and the best-fitting background stress rate and decay parameters are found. Candidate STAN models are evaluated by computing Kolmogorov-Smirnov (KS) statistics for the temporal distribution of seismicity in each subset. The probabilities of randomness computed from each KS statistic are combined by multiplying, which assumes the activity in each subset is independent. The background stress accumulation rate and a few aftershock decay parameters are fit to observations using a nested grid search technique, finding the largest combined KS probability.
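As a concrete sketch of the summed t-statistic, the snippet below computes a standard two-sample t-statistic with a pooled standard deviation for one shell and sums over shells; the shell partitioning and failure-stress values shown are placeholders, not the paper's data.

```python
import numpy as np

def pooled_t(aft, bkg):
    """Two-sample t-statistic: difference of mean failure-stress change
    (aftershocks minus background), normalized by the pooled standard deviation."""
    a = np.asarray(aft, float)
    b = np.asarray(bkg, float)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))

def summed_t(shells):
    """Sum the t-statistics over concentric shells: a list of (aft, bkg) pairs."""
    return sum(pooled_t(a, b) for a, b in shells)
```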
In addition, KS statistics from the subsets are summed, to compute a measure which is distributed between 0 and 1, providing a measure of absolute misfit reported in Table 4.
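A minimal sketch of this KS-based evaluation, assuming a known model CDF for event times in each subset; the asymptotic p-value uses the standard Kolmogorov series, and the subset data here are placeholders, not the catalog of the paper.

```python
import numpy as np

def ks_statistic(times, model_cdf):
    """Two-sided KS distance between the empirical CDF of `times` and a model CDF."""
    t = np.sort(np.asarray(times, float))
    n = len(t)
    cdf = model_cdf(t)
    d_plus = np.max(np.arange(1, n + 1) / n - cdf)
    d_minus = np.max(cdf - np.arange(0, n) / n)
    return max(d_plus, d_minus)

def ks_pvalue(d, n, terms=100):
    """Asymptotic probability of exceeding KS distance d (Kolmogorov series)."""
    lam = (np.sqrt(n) + 0.12 + 0.11 / np.sqrt(n)) * d
    k = np.arange(1, terms + 1)
    p = 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * (k * lam) ** 2))
    return float(np.clip(p, 0.0, 1.0))

def combined_log_prob(subsets, model_cdf):
    """Combine per-subset KS probabilities by multiplication (summing logs),
    i.e. the independence assumption used by the STAN model."""
    return sum(np.log(ks_pvalue(ks_statistic(s, model_cdf), len(s)))
               for s in subsets)
```

A grid search over decay parameters would then pick the model maximizing the combined log-probability.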

5 Results

Single mainshock fits

Table 1: Summed t-statistics

case                  Landers   Northridge   Hector Mine
Coulomb               -109.9    -20.0        -96.1
max Coulomb           -130.0    -16.4        -84.6
min shear             -157      -42          -129
min shear + Coulomb   -149.2    -18.2        -96.4

Table 2: Summed KS statistics

case                  Landers   Northridge   Hector Mine
Coulomb               4.53      3.62         4.79
max Coulomb           4.56      3.49         4.73
min shear             4.05      3.43         4.44
min shear + Coulomb   4.71      3.74         5.56

The largest t-statistics in Table 1 are associated with min shear, the purely dynamic stress state. But when this model is evaluated with other measures, like the rank correlations, it is revealed to be not nearly as successful a model as the t-statistics would suggest. The KS statistics in Table 2 are most positive for the best models, and they clearly show that the purely dynamic min shear model is not in good agreement with the observations. The problem is that the t-statistic is parametric: it assumes Gaussian distributions, and that assumption is violated for purely dynamic failure stresses like min shear.

STAN fits

Table 3: Pooled KS probabilities

case                  Dieterich   MOM     Omori
Coulomb               -5147       -2590   -2622
max Coulomb           -5134       -3427   -3509
min shear             -5096       -4950   -5135
min shear + Coulomb   -4796       -2553   -2571

Table 4: Summed KS statistics

case                  Dieterich   MOM     Omori
Coulomb               .371        .246    .248
max Coulomb           .368        .302    .308
min shear             .373        .365    .366
min shear + Coulomb   .357        .245    .246

Tables 3 and 4 use the same catalogs, the same triggering models and different statistical measures, but they tell a consistent story in terms of which models are most successful. The best model is min shear + Coulomb, which combines both static and dynamic triggering elements. The best fitting temporal decay is the Modified Omori model, with the simpler Omori model fitting almost as well. The second best spatial model is purely static stress transfer triggering. These results are consistent with the single-event statistics presented in Tables 1-2, which also found that the most successful triggering model was min shear + Coulomb, with the second best being the purely static Coulomb model.

6 Conclusions

Four different statistical measures have been constructed to test the dynamic triggering theory. Two different statistical measures were used to evaluate spatial distributions of aftershocks and background events surrounding three southern California sequences, 1992 Landers, 1994 Northridge and 1999 Hector Mine. Two more statistical measures were used to evaluate the temporal distribution of seismicity in a catalog containing the same three mainshocks. Purely static, purely dynamic, and hybrid triggering models were evaluated each time, with consistent results found from 3 of the 4 statistical measures. The best fitting triggering model is a hybrid, incorporating the transient stress reversal process presented here in combination with more familiar static Coulomb failure stresses which have been studied by many authors. This result suggests that both static and dynamic triggering occur, with static triggering being the strongest triggering mechanism for most aftershocks. The one failed statistical measure can be explained in terms of violated assumptions, because t-statistics were derived using the assumption that the data can be approximated with a Gaussian distribution. The purely dynamic stress reversal triggering mechanism is identically zero when it does not trigger, it has no "stress shadows", and therefore its distribution is not even approximately Gaussian.

Acknowledgements This material is based upon work supported by the National Science Foundation under Grant No. 0003505. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Research supported by the U.S. Geological Survey (USGS), Department of the Interior, under USGS award number 01HQGR0008. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Government.

7 References

Aki, K. and P. G. Richards (1980), Quantitative Seismology: Theory and Methods, W. H. Freeman and Company, NY, 913 pp.
Dieterich, J. H. (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99, 2601-2618.
Dieterich, J. H. (1999), State Dependence of Frictional Contacts: Micromechanical Observations and Role in Earthquake Processes, Materials Research Society Tribology Workshop, San Jose, California.
Gomberg, J., N. Beeler, M. L. Blanpied and P. Bodin (1998), Earthquake triggering by transient and static deformations, J. Geophys. Res., 103, 24411-24426.
Gross, S. J. and C. Kisslinger (1994), Tests of models of aftershock rate decay, Bull. Seismol. Soc. Am., 84, 1571-1579.
Gross, S. J. and C. Kisslinger (1997), Estimating tectonic stress rate and state with Landers aftershocks, J. Geophys. Res., 102, 7603-7612.
Ji, C., D. J. Wald and D. V. Helmberger (2002), Source Description of the 1999 Hector Mine, California, Earthquake, Part II: Complexity of Slip History, Bull. Seismol. Soc. Am., 92, 1208-1226.
Utsu, T. (1961), A statistical study on the occurrence of aftershocks, Geophys. Mag., 30, 521-605.
Wald, D. J., T. H. Heaton, and K. W. Hudnut (1996), The slip history of the 1994 Northridge, California earthquake determined from strong-motion, teleseismic, GPS, and leveling data, Bull. Seism. Soc. Am., 86.
Wald, D. J. and T. H. Heaton (1994), Spatial and temporal distribution of slip for the 1992 Landers, California earthquake, Bull. Seismol. Soc. Am., 84, 668-691.

Analysis of complex seismicity pattern generated by fluid diffusion and aftershock triggering

S. Hainzl1 and T. Kraft2 1Institute of Geosciences, University of Potsdam, 14415 Potsdam, Germany 2Department of Earth and Environmental Sciences, Ludwig-Maximilians-University, Germany

Corresponding Author: S. Hainzl, [email protected]

Abstract: According to the well-known Coulomb failure criterion, variations of both stress and pore pressure can result in earthquake rupture. Aftershock sequences characterized by the Omori law are often assumed to be the consequence of mainshock-induced stress changes, whereas earthquake swarms are supposed to be triggered by fluid intrusions. In practice, both types of seismicity are often not clearly distinguishable, which might result from an interplay of stress triggering and fluid flow in the earthquake generation process. We show that the statistical modeling of earthquake clustering by means of the Epidemic Type Aftershock Sequences (ETAS) model and simple pore pressure diffusion models can be an appropriate tool to extract the primary fluid signal from such complex seismicity patterns. This is demonstrated for two examples of natural swarm activity observed in Central Europe.

1. Introduction
Stress triggering has been identified as an important mechanism for earthquake generation. This mechanism is based on a well-known criterion for earthquake occurrence, the Coulomb failure criterion, CFS ≡ τ − µ(σ − P) ≥ 0, stating that a positive Coulomb failure stress (CFS) could promote failures. Here, τ defines the shear and σ the normal stress on the failure plane (positive for compression), µ is the coefficient of friction, and P is the pore pressure which reduces the effective normal stress. Thus earthquakes can be triggered by stress changes as well as by pore pressure changes. Aftershock activity, which is usually associated with the stress changes of mainshocks, generally decays according to the modified Omori law, λ(t) = K(c + t)^(−p), where K, c and p are constants, and t is the elapsed time since the main event (Utsu et al. 1995). The Epidemic Type Aftershock Sequences (ETAS) model is a stochastic point process incorporating the empirically observed characteristics of stress-triggered activity, where each earthquake has some magnitude-dependent ability to trigger its own Omori-law-type aftershocks (Ogata 1988). In particular, the rate of aftershocks induced by an earthquake that occurred at time ti with magnitude Mi is given by

λi(t) = K e^(α(Mi − Mmin)) / (c + t − ti)^p   (1)

for time t > ti, where α is a constant and Mmin is the lowest magnitude cutoff of the catalog. The total occurrence rate is

λ(t) = λ0(t) + Σ_{i: ti < t} λi(t) ,   (2)

where λ0(t) is the background forcing rate.
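Equations (1)-(2) translate directly into code. The sketch below evaluates the total ETAS rate for a toy catalog with a constant background rate; all parameter values are illustrative, not estimates from the data discussed here.

```python
import numpy as np

def etas_rate(t, events, mu, K, c, p, alpha, m_min):
    """Total ETAS intensity lambda(t) = mu + sum over past events (t_i, M_i)
    of K * exp(alpha * (M_i - m_min)) / (c + t - t_i)**p."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * np.exp(alpha * (m_i - m_min)) / (c + t - t_i) ** p
    return rate

# toy catalog: (occurrence time in days, magnitude)
catalog = [(0.0, 4.0), (0.5, 2.5)]
lam = etas_rate(1.0, catalog, mu=0.1, K=0.02, c=0.01, p=1.1, alpha=0.7, m_min=2.0)
```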

2. Vogtland Earthquake Swarms
The Vogtland region is well known for the episodic occurrence of earthquake swarms. The most recent and best documented strong earthquake swarm occurred between August and December 2000 and consisted of approximately 5000 earthquakes with local magnitudes exceeding the magnitude of completeness (mc = 0.2). The results presented here are an overview of previous investigations (Hainzl & Fischer 2002, Hainzl & Ogata 2005).
Method and Results
At first, we fit the ETAS model to the whole activity assuming a constant forcing rate λ0. The five parameters (λ0, K0, α, c, p) of the ETAS model are estimated by the maximum likelihood method. To explore non-stationary behavior, the ETAS model is then fitted in a moving time window. We find that the ETAS model fits the earthquake sequence very well. The great majority of earthquakes in this swarm is found to belong to self-triggered activity, in particular to embedded aftershock sequences. All estimated parameters are in agreement with values for tectonic earthquake activity except the α-value (α = 0.7), which is similar to previous findings for other earthquake swarm activity, but lower than typical values found for non-swarm activity (α ≈ 2). Although the background forcing is found to trigger only a small fraction of earthquakes, our analysis in moving time windows reveals a large systematic variation of the external forcing strength λ0(t). The earthquake swarm seems to be initiated by a strong fluid impulse which decreases within the first 10 days. Our results are in good agreement with the identified phases of diffusion-like hypocenter migration (Parotidis et al. 2003). The efficiency of the deconvolution procedure has also been proved by an analogous analysis of simulated earthquake activity, which is forced by pore pressure diffusion and incorporates stress field changes in a 3-dimensional elastic half-space (Hainzl 2004, Hainzl & Ogata 2005).
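The maximum likelihood estimation used above rests on the standard point-process log-likelihood, log L = Σ_i log λ(t_i) − ∫_0^T λ(t) dt, maximized numerically over the model parameters. A generic sketch (midpoint rule for the integral; the intensity function passed in is a placeholder):

```python
import numpy as np

def point_process_loglik(event_times, intensity, T, n_grid=4000):
    """log L = sum_i log(lambda(t_i)) - integral_0^T lambda(t) dt,
    with the integral approximated by the midpoint rule."""
    dt = T / n_grid
    grid = (np.arange(n_grid) + 0.5) * dt
    integral = np.sum(intensity(grid)) * dt
    return float(np.sum(np.log(intensity(np.asarray(event_times)))) - integral)
```

For a homogeneous Poisson process with rate λ0 this reduces to n log λ0 − λ0 T, a handy sanity check.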

3. Bad Reichenhall Earthquake Swarms
Reports of felt earthquakes in the area of the Staufen Massif near Bad Reichenhall, SE Germany, range back several hundred years, reaching maximum intensities of I0 = V on the macroseismic scale (Kraft et al. 2005). The underlying mechanism is not fully resolved so far. We analyze data of a network recently installed by the University of Munich and the Geological Survey of the Bavarian State. The data set consists of 1171 earthquakes recorded in the year 2002. The seismicity is characterized by two swarm-type phases in March and August; the latter occurred after intense rainfall which led to severe flooding in Central Europe.
Method and Results
We test the hypothesis proposed by Kraft et al. (2005) that rainfall triggered the earthquakes via the mechanism of pore pressure diffusion. According to the Coulomb failure criterion, earthquakes can be triggered in the absence of stress changes only if the pore pressure increases. Our statistical earthquake model therefore has the intensity function

λ0(z, t) = λ00 + c ṗ(z, t)  if ṗ(z, t) > 0;  else λ0(z, t) = λ00   (4)

Figure 1: Left: The rain rate (top) and the seismic rate (bottom) observed in the region of Reichenhall, SE Germany. The bold line indicates the mean pressure change in depth resulting from linear diffusion of the rain water. Right: The linear correlation coefficient between the daily rain amount (dashed line), respectively the mean daily pressure increase in depth (solid line), and the daily number of earthquakes, as a function of the time shift.

where c and λ00 are constants. The latter describes the part of the forcing rate which cannot be related to pore pressure changes. The pressure changes at depth z are calculated from the pressure changes at the surface P0(t) due to the rainfall by means of the Green's function G of the diffusion equation

ṗ(z, t) = d/dt ∫_{−∞}^{t} G(z, t − τ) P0(τ) dτ ,  with  G(z, t − τ) = H(t − τ)/√(4πD(t − τ)) · exp( −z² / (4D(t − τ)) )   (5)

where H is the Heaviside function. In our case, P0(t) is the average of the four meteorological stations in this region. The estimation of the three parameters (λ00, c, D) is carried out by the maximum likelihood method. To avoid complications near the surface, we focus on earthquakes in the depth interval 1-4 km. The maximization of the likelihood function yields the hydraulic diffusivity D = 0.3 m²/s. This value for the diffusivity corresponds well to results obtained from fluid pressure experiments at boreholes. With the estimated diffusivity, we are now able to calculate the effect of the time-dependent surface rain on the pore pressure at depth. The figure shows the calculated mean pressure increase in the depth interval 1-4 km as a function of time. This curve is compared with the observed daily number of earthquakes. Both curves are closely related. We calculate the linear correlation coefficient between both curves as well as between the time series of the daily earthquake number and the daily rain amount. While the seismicity is not correlated with the rain data at zero time delay, it shows significant correlation if the seismicity is shifted 8 days back (Rmax = 0.5). On the other hand, the pore pressure change at depth is strongly correlated without delay time, with a maximum correlation coefficient which almost doubles that of the rain data (Rmax = 0.8).
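Equation (5) can be discretized as a convolution of the surface pressure series with the diffusion Green's function, followed by a time derivative. The sketch below uses daily sampling and illustrative values for z and D; it ignores the surface-coupling details of the actual calculation.

```python
import numpy as np

def pressure_rate_at_depth(p0, dt, z, D):
    """p_dot(z, t): convolve the surface pressure history p0 with the 1-D
    diffusion Green's function G of eq. (5), then differentiate in time."""
    n = len(p0)
    lag = (np.arange(n) + 0.5) * dt                  # elapsed times t - tau > 0
    G = np.exp(-z * z / (4.0 * D * lag)) / np.sqrt(4.0 * np.pi * D * lag)
    p = np.convolve(p0, G, mode="full")[:n] * dt     # p(z, t) on the sample grid
    return np.gradient(p, dt)

day = 86400.0
rain_pressure = np.zeros(365)                        # surface load in Pa (toy series)
rain_pressure[200:210] = 1.0e3                       # a 10-day rain episode
pdot = pressure_rate_at_depth(rain_pressure, day, z=2000.0, D=0.3)
```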

4. Discussion
The study of natural swarm activity in the Vogtland region indicates that aftershock sequences are embedded in, and can even dominate, swarm activity (Hainzl & Ogata 2005). In this case, the ETAS model can explain most of the activity and the fluid signal is almost buried. However, the fluid signal can be reconstructed by means of the time-dependence of the background forcing rate. In the Mt. Hochstaufen region, aftershock activity makes up only a minor fraction of the activity and the shape of the extracted background rate is almost identical to the shape of the smoothed activity. In this case, however, we have additional information about the potential fluid source and can directly test the pore pressure diffusion model.

5. Conclusions
In natural seismicity, aftershocks are often embedded even in swarm-like activity. ETAS modeling is an effective method to deal with aftershocks. To deal with swarm-like seismicity, the hypothesis of a constant background activity has to be weakened, because fluid flows, silent earthquakes and other mechanisms enhance or reduce the earthquake probability. In cases where we have no knowledge about the source of earthquake generation, statistical modeling can reveal the time-dependence of the mechanism. This is the case for the swarm activity in Vogtland. The revealed signature is found to be in good agreement with the signal of fluid diffusion from a high pressure source. In cases where we have some information about the potential source function, we can go one step further and replace the background forcing by a functional form. This is done in the case of the Bad Reichenhall swarm activity, where we test the triggering mechanism of pore pressure diffusion due to rainfall. The resulting high correlation between model and observation suggests that rain water diffusion is really the responsible mechanism in this case.

6. References
Hainzl, S. (2004), Seismicity patterns of earthquake swarms due to fluid intrusion and stress triggering, Geophys. J. Int., 159, 1090-1096.
Hainzl, S. and Fischer, T. (2002), Indications for a successively triggered rupture growth underlying the 2000 earthquake swarm in Vogtland/NW Bohemia, J. Geophys. Res., 107, 2338, doi:10.1029/2002JB001865.
Hainzl, S. and Ogata, Y. (2005), Detecting fluid signals in seismicity data through statistical earthquake modeling, J. Geophys. Res., 110, B05S07, doi:10.1029/2004JB003247.
Kraft, T., Wassermann, J., Schmedes, E. and Igel, H. (2005, submitted), Meteorological triggering of earthquake swarms at Mt. Hochstaufen, SE Germany, Tectonophysics.
Ogata, Y. (1988), Statistical models of point occurrences and residual analysis for point processes, J. Am. Stat. Assoc., 83, 9-27.
Parotidis, M., Rothert, E., and Shapiro, S. A. (2003), Pore-pressure diffusion: A possible triggering mechanism for the earthquake swarms 2000 in Vogtland/NW-Bohemia, central Europe, Geophys. Res. Lett., 30, 2075, doi:10.1029/2003GL018110.
Utsu, T., Ogata, Y., and Matsu'ura, R. S. (1995), The centenary of the Omori formula for a decay law of aftershock activity, J. Phys. Earth, 43, 1-33.
Wang, H. F. (2000), Theory of Linear Poroelasticity with Applications to Geomechanics and Hydrogeology, Princeton University Press.

Spatial-Temporal Statistics Between Successive Earthquakes on 2D Burridge-Knopoff Model

T. Hasumi

Department of Physics, Faculty of Science and Engineering, Waseda University, Shinjyuku-ku, Tokyo 169-8555, Japan

Corresponding Author: T. Hasumi, [email protected]

Abstract: We numerically investigate the two-dimensional spring-block model for earthquakes, the Burridge-Knopoff (BK) model, and study the statistical properties of the spatio-temporal intervals between successive earthquakes. The cumulative distributions of the spatio-temporal intervals obey the q-exponential distribution, with qt > 1 for the time interval (waiting time) and qr < 1 for the spatial interval (distance between hypocenters). Only in the waiting time case does the distribution depend on the frictional property. These results are consistent with observations, and hence we conclude that the 2D BK model is realistic from the viewpoint of complex networks and non-additive statistical mechanics.

1. Introduction
Earthquakes (EQs) are complex phenomena. Their source mechanism is not yet clear; however, some empirical laws are known, such as the Gutenberg-Richter (GR) law and the Omori law. Very recently, Abe and Suzuki (2003, 2005) found new statistical properties of EQs, namely of the spatio-temporal intervals (hereafter waiting time and distance) between successive EQs, in terms of the non-additive statistical mechanics proposed by Tsallis (1988). They showed that the cumulative distributions of waiting time and distance obey the q-exponential distribution for Southern California and Japan. More precisely, the q-values of waiting time (qt) and distance (qr) satisfy qt > 1 and qr < 1, respectively. In the 1980s, Bak et al. (1987, 1988) proposed Self-organized Criticality (SOC) in non-equilibrium open systems. Following their idea, EQs were regarded as an instance of SOC, and many models have been proposed to reproduce the GR law. In particular, the 2D spring-block model (originally proposed by Burridge and Knopoff (1967)) was studied by Otsuka (1972) and Carlson (1991) to discuss the GR law. However, the waiting time and distance statistics of this model have not been reported. We report a numerical investigation of the 2D BK model to study the waiting time and distance statistics.

2. Model
The model is the 2D spring-block (BK) model for earthquakes shown in Fig. 1(a). We assume that a block's slip is restricted to the y-direction. The non-dimensional equation of motion of site (i, j) is then

Ü_{i,j} = l_x²(U_{i+1,j} − 2U_{i,j} + U_{i−1,j}) + l_y²(U_{i,j+1} − 2U_{i,j} + U_{i,j−1}) − U_{i,j} − φ(2α(ν + U̇_{i,j})) ,   (1)

where U_{i,j} is the scaled displacement. The other parameters are given by

l_x² = k_cx / k_p ,  l_y² = k_cy / k_p .   (2)

The 4th term on the right-hand side is the nonlinear friction term, of velocity-weakening type (see Fig. 1(b)). This system is characterized by four parameters: l_x, l_y, α and ν; l_x and l_y are stiffnesses, ν is a scaled plate velocity, and α governs the friction function. We numerically integrate eq. (1) on a (100, 25) lattice in the (x, y) plane under free boundary conditions with the 4th-order Runge-Kutta method. In the simulations, l_x² = 1, l_y² = 3 and ν = 0.01 are fixed and α is changed.
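A sketch of the right-hand side of eq. (1) on the block lattice. The stick-slip condition and the exact shape of the friction function φ (Fig. 1(b)) are simplified here: a generic velocity-weakening form 1/(1 + v) is used, so this illustrates the coupling structure only, not the paper's actual simulation.

```python
import numpy as np

def phi(v):
    """Illustrative velocity-weakening friction, monotonically decreasing in v."""
    return 1.0 / (1.0 + v)

def acceleration(U, V, lx2, ly2, alpha, nu):
    """RHS of eq. (1): x- and y-spring coupling, the loading-plate spring (-U),
    and the nonlinear friction term; free boundaries via edge replication."""
    Up = np.pad(U, 1, mode="edge")
    cx = Up[2:, 1:-1] - 2.0 * U + Up[:-2, 1:-1]   # U(i+1,j) - 2U(i,j) + U(i-1,j)
    cy = Up[1:-1, 2:] - 2.0 * U + Up[1:-1, :-2]   # U(i,j+1) - 2U(i,j) + U(i,j-1)
    return lx2 * cx + ly2 * cy - U - phi(2.0 * alpha * (nu + V))
```

This function would be the right-hand side fed to an RK4 time-stepper.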

Figure 1 (a) 2D spring-block model for EQs called BK model. (b) The nonlinear friction function, namely velocity-weakening type.

3. Results and Discussion

In this model, we regard a slip event of blocks as an EQ, whose seismic moment M0 and magnitude M are defined by

M0 = Σ_{i,j} δU_{i,j} ,  M = log M0 ,   (3)

where δU_{i,j} is the total displacement during an event and n is the number of slipping blocks. The block which slips first is regarded as the hypocenter. We then define the distance and waiting time as the spatio-temporal intervals between successive EQ-like events. In this study, on the order of 10^5 events are used to discuss the statistical properties; however, the initial ~10^3 events are discarded to remove the effect of the random initial configuration. Kumagai et al. (1999) reported that the magnitude-frequency distribution depends on the friction parameter α. More precisely, the cases α = 3.5, α = 1.5 and 2.5, and α = 4.5 and 5.5 correspond to the critical, subcritical and supercritical states, respectively. Hence, we change the parameter α from 2.5 to 4.5 as representatives of the three types of critical state and discuss the waiting time and distance statistics. Firstly, following Abe and Suzuki (2005), we fit the cumulative distribution of waiting time to the q-exponential distribution, defined by

e_q(x) = [1 + (1 − q)x]_+^{1/(1−q)} ,  where [a]_+ ≡ max[0, a] ;   (4)

if q > 1 this is equivalent to the Zipf-Mandelbrot power law. Its inverse, the q-logarithmic function, is written as

ln_q(x) = (x^{1−q} − 1) / (1 − q) .   (5)

The cumulative distribution of waiting time in the case α = 3.5 is shown in Fig. 2(a), (b) together with the fitting function. The simulations show that qt is always greater than 1 for any α, so we conclude that the waiting time distribution obeys the Zipf-Mandelbrot power law. This is

consistent with the observational result (Abe and Suzuki, 2005). We also note that this distribution depends on the frictional parameter α. Secondly, the cumulative distribution of distance is discussed in the same way as the waiting time statistics. Fig. 2(c), (d) shows an example for the case α = 3.5. In the distance statistics, 0 < qr < 1 for any α, so the distribution is equivalent to the modified Zipf-Mandelbrot law. This result also agrees with the observational result (Abe and Suzuki, 2003).
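The q-exponential and q-logarithm used for these fits, eqs. (4)-(5), in code form (a direct transcription; the q → 1 limit recovers the ordinary exponential and logarithm):

```python
import numpy as np

def q_exp(x, q):
    """q-exponential, eq. (4): [1 + (1-q)x]_+ ** (1/(1-q))."""
    if q == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

def q_log(x, q):
    """q-logarithm, eq. (5): (x**(1-q) - 1) / (1-q); inverse of q_exp for x > 0."""
    if q == 1.0:
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)
```

Fitting then amounts to finding the q for which ln_q of the empirical cumulative distribution is linear in the interval variable.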

Figure 2 The spatio-temporal interval statistics between successive EQ-like events with l_x² = 1, l_y² = 3, α = 3.5: (a) and (b) are waiting time statistics, (c) and (d) are distance statistics. In each figure, dots are the simulation result and the solid line is the fitting function, the q-exponential function defined by eq. (4). For the waiting time, (q, τ0) = (1.06, 4.50) with correlation coefficient ρ = 0.989; for the distance, (q, r0) = (0.497, 50.1) with correlation coefficient ρ = 0.9992.

4. Conclusion
We have reported a numerical investigation of the 2D BK model for EQs to discuss the waiting time and distance statistics. The cumulative distributions of waiting time and distance obey the q-exponential distribution, with qt > 1 for the waiting time and qr < 1 for the distance. These results are consistent with observations, and hence we conclude that the 2D BK model is realistic from the viewpoint of complex networks and non-additive statistical mechanics.

5. Acknowledgement The author thanks Dr. M. Kamogawa, Dr. M. Ohtaki and Dr. Y. Yamazaki for useful discussions and S. Komura for numerical suggestions. This work is partly supported by grant for the 21st Century COE Program, "Holistic Research and Education Center for Physics of Self-organization Systems" at Waseda University from Ministry of Education, Culture, Sports, Science and Technology, Japan.

6. References
Abe, S. and Suzuki, N. (2003), Law for the distance between successive earthquakes, J. Geophys. Res., 108, 2113-2116.
Abe, S. and Suzuki, N. (2005), Scale-free statistics between successive earthquakes, Physica A, 350, 588-596.
Bak, P., Tang, C. and Wiesenfeld, K. (1987), Self-organized criticality: an explanation of 1/f noise, Phys. Rev. Lett., 59, 381-384.
Bak, P., Tang, C. and Wiesenfeld, K. (1988), Self-organized criticality, Phys. Rev. A, 38, 364-374.
Burridge, R. and Knopoff, L. (1967), Model and theoretical seismicity, Bull. Seism. Soc. Am., 57, 341-371.
Carlson, J. M. (1991), Two-dimensional model of a fault, Phys. Rev. A, 15, 6266-6232.
Kumagai, H., Fukao, Y., Watanabe, S. and Baba, Y. (1999), A self-organized model of earthquakes with constant stress drops and the b-value of 1, Geophys. Res. Lett., 26, 2817-2820.
Otsuka, M. (1972), A simulation of earthquake occurrence, Phys. Earth Planet. Int., 6, 311-315.
Tsallis, C. (1988), Possible generalization of Boltzmann-Gibbs statistics, J. Stat. Phys., 52, 479-487.

Estimating stress heterogeneity from aftershock rate

Agnès Helmstetter and Bruce E. Shaw

Lamont-Doherty Earth Observatory, Columbia University, New York

Corresponding author: [email protected]

Abstract: We estimate the rate of aftershocks triggered by a heterogeneous stress change, using the rate-and-state model of Dieterich [1994]. We show that an exponential stress distribution P(τ) ∼ exp(−τ/τ0) gives an Omori law decay of aftershocks with time ∼ 1/t^p, with an exponent p = 1 − Aσn/τ0, where A is a parameter of the rate-and-state friction law and σn the normal stress. The Omori exponent p thus decreases if the stress "heterogeneity" τ0 decreases. We also invert the stress distribution P(τ) from the seismicity rate R(t), assuming that the stress does not change with time. We apply this method to a synthetic stress map, using the (modified) scale-invariant "k2" slip model [Herrero and Bernard, 1994]. We generate synthetic aftershock catalogs from this stress change. The seismicity rate on the rupture area shows a huge increase at short times, even if the stress decreases on average. Aftershocks are clustered in the regions of low slip, but the spatial distribution is more diffuse than for a simple slip dislocation. This stochastic slip model gives a Gaussian stress distribution, but nevertheless produces an aftershock rate which is very close to Omori's law, with an effective p ≤ 1 which increases slowly with time. The inversion of the full stress distribution P(τ) is badly constrained for negative stress values, and for very large positive values, if the time interval of the catalog is limited. However, constraining P(τ) to be a Gaussian distribution allows a good estimation of P(τ) even with a limited number of events and catalog duration. We show that stress shadows are very difficult to observe in a heterogeneous stress context.

1 Introduction

Much progress has been made in describing earthquake behavior, in particular the Omori law decay of aftershocks with time, based on the predictions of the rate-and-state friction law [Dieterich, 1994]. However, this model does not explain the abundance of aftershocks on the rupture surface, unless a strong heterogeneity of the slip and stress on the fault is introduced. A simple model of an earthquake source, with a uniform stress change on the rupture area, predicts a decrease of the seismicity rate on the rupture area. In order to explain the increase of the seismicity rate on the fault, we need to introduce stress heterogeneity in the rate-and-state model, which can be due to the variability of slip on the fault [Helmstetter and Shaw, 2005; Marsan, 2005], or to fault roughness [Dieterich, 2005]. Here we estimate the seismicity rate triggered by a heterogeneous stress change. We also develop a method to invert for the stress distribution on the fault from the seismicity rate, assuming that stress changes instantaneously after the mainshock.

2 Relation between stress distribution and seismicity rate

Dieterich [1994] estimated the seismicity rate R(t, τ) triggered by a single stress step τ, assuming a population of faults governed by rate-and-state friction:

R(t, τ) = Rr / { [e^{−τ/Aσn} − 1] e^{−t/ta} + 1 } ,   (1)

where ta = Aσn/τ̇r is the duration of the aftershock sequence, and τ̇r the tectonic loading stressing rate. For each positive stress value, the seismicity rate is constant for t ≪ ta e^{−τ/Aσn}, and then decreases with time as R(t) ∼ 1/t for t ≪ ta. For a negative stress change, the seismicity rate decreases after the mainshock. In both cases, the seismicity rate recovers its reference value R = Rr for t ≫ ta. For a heterogeneous stress field τ(r), with a distribution (probability density function) P(τ), the seismicity rate integrated over space is

R(t) = ∫ R(t, τ(r)) dr = ∫_{−∞}^{∞} R(t, τ) P(τ) dτ .   (2)
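Equation (2) can be approximated by a weighted sum over a discretized stress distribution. The sketch below does this for a Gaussian P(τ) with illustrative parameters, and reproduces the qualitative behavior discussed in the text: a strong rate increase at short times even though the mean stress change is negative.

```python
import numpy as np

def rate_step(t, tau, a_sigma, t_a, R_r=1.0):
    """Dieterich rate for a uniform stress step tau, eq. (1)."""
    return R_r / ((np.exp(-tau / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)

def rate_heterogeneous(t, taus, weights, a_sigma, t_a):
    """Eq. (2): P(tau)-weighted average of single-step rates."""
    total = sum(w * rate_step(t, tau, a_sigma, t_a)
                for tau, w in zip(taus, weights))
    return total / np.sum(weights)

taus = np.linspace(-30.0, 30.0, 601)                       # stress values (MPa)
weights = np.exp(-(taus + 3.0) ** 2 / (2.0 * 10.0 ** 2))   # Gaussian, mean -3 MPa
t = np.logspace(-6, 2, 100)                                # time in units of t_a
R = rate_heterogeneous(t, taus, weights, a_sigma=1.0, t_a=1.0)
```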


Figure 1: Slip generated with the modified k2 model (a), shear stress change (b) and (c) map of aftershocks generated with the rate-and-state model, using the stress change shown in (b) and Aσn = 1 MPa.

The rate-and-state model gives an Omori law decay of aftershocks with time, R(t) ∼ 1/t^p for t ≪ ta, if the stress distribution follows an exponential distribution P(τ) ∼ e^{−τ/τ0}. The Omori exponent p = 1 − Aσn/τ0 increases toward 1 as the stress heterogeneity τ0 increases. The rate-and-state model with a uniform stress step (1) cannot explain an Omori law decay with p > 1. The only solution in order to obtain p > 1 in the rate-and-state model, as observed for many aftershock sequences, is to have a variation of stress with time, which may be due to postseismic slip or viscous relaxation. We can invert for the complete stress distribution P(τ) from the temporal evolution of the seismicity rate using (2), provided we observe R(t) over a wide enough time interval. In practice, the estimation of P(τ) for large τ is limited by the minimum time tmin at which we can reliably estimate the seismicity rate. The largest stress we can resolve is of the order of τmax = −Aσn log(tmin/ta). For negative stress, we are limited by the maximum time tmax after the mainshock, and by our assumptions that secondary aftershocks are negligible, and that the stress does not change with time (e.g., neglecting post-seismic relaxation).

3 Application of the method to a stochastic slip model

We have tested the rate-and-state model on a realistic synthetic slip pattern. Herrero and Bernard [1994] proposed a kinematic, self-similar model of earthquakes. They assumed that the slip distribution at small scales, compared to the rupture length L, does not depend on L. This leads to a high-wavenumber slip power-spectrum u(k) = Cσ0 L k^{−2}/µ, where σ0 is the stress drop (typically 3 MPa), µ is the rigidity, and C is a shape factor close to 1. We have modified the k2 slip model in order to have a finite standard deviation of the stress distribution, using a slip power-spectrum u(k) = Cσ0 L³ (kL + 1)^{−η}/µ, with η = 2.3. We have then computed the stress change on the fault from this synthetic slip model, for a fault of 50 × 50 km, with a resolution dx = 0.1 km and a stress drop σ0 = 3 MPa. The stress field has large variations, from about −90 to 90 MPa, due to slip variability (see Fig. 1).
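A random field with the modified spectrum can be synthesized by assigning random phases to the prescribed amplitudes and inverse-transforming. The sketch below is schematic (arbitrary amplitude units, real part taken instead of full Hermitian-symmetry bookkeeping), not the exact procedure used for Fig. 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 256, 50.0                    # grid points per side, fault size (km)
eta = 2.3                           # spectral fall-off used in the text

# radial wavenumber grid (rad/km)
k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
k = np.hypot(k1d[:, None], k1d[None, :])

# modified k^2 spectrum ~ (kL + 1)^(-eta); amplitude units are schematic
amp = (k * L + 1.0) ** (-eta)
phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
slip = np.fft.ifft2(amp * np.exp(1j * phase)).real
slip -= slip.min()                  # shift so that slip >= 0 everywhere
```

The on-fault stress change would then follow by applying a static stress transfer kernel to this slip field; here only the slip synthesis is sketched.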

We have then estimated the seismicity rate predicted by the rate-and-state model (2) on the fault from the stress change, assuming Aσn = 1 MPa (see Fig. 2). While the stress on average decreases on the fault, the seismicity rate shows a huge increase after the mainshock. It then decays with time approximately according to the Omori law, with an apparent exponent p = 0.93. At large times t ≈ ta, the seismicity rate decreases below its reference rate due to the negative stress values. While the pure Omori law with p < 1 occurs for the exponential distribution of shear stress changes, we find that a Gaussian stress distribution (which the k2 model and many other models give) also gives realistic-looking p values over wide ranges of time scales.


Figure 2: (a) Seismicity rate predicted by the rate-and-state model (2), using the stress map shown in Fig. 1b (thin black line), and computed from the synthetic catalog (thick red line). The dashed black line is a fit by Omori’s law for t

We have generated synthetic earthquake catalogs according to the rate-and-state model (1), using the stress change shown in Fig. 1b, without including earthquake interaction. We found that, for a realistic catalog (less than 6 orders of magnitude in time), the limited time interval available does not allow the inversion of the complete stress distribution P(τ) from the seismicity rate R(t) (see Fig. 2b). However, we can estimate P(τ) with good accuracy if we constrain P(τ) to be a Gaussian distribution, as predicted by the k2 slip model, and if we know the stress drop and Aσn. We can measure the standard deviation τ* of the stress distribution and the aftershock duration ta from the observed aftershock rate.

4 Off-fault aftershocks

We can estimate the stress change and seismicity rate off the fault plane for the synthetic k2 slip model shown in Fig. 1. The power-spectrum of the shear stress change off the fault, at a distance y smaller than the rupture length L, is given by τ(k, y) ∼ e^{−ky} τ(k, 0). The standard deviation of the stress distribution decreases very rapidly with the distance to the fault, which produces a strong drop of the seismicity rate off the fault (see Fig. 3). The average stress decreases much more slowly with y. For y/L > 0.1, the stress field is very homogeneous and mostly negative (the standard deviation is smaller than the absolute mean stress). Therefore, the seismicity rate for y/L > 0.1 is smaller than the reference rate at all times t
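The effect of the e^{−ky} damping on the stress heterogeneity can be illustrated in one dimension: filter a schematic on-fault spectrum mode by mode and track the standard deviation as a function of y. The spectrum shape and distances below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
n, L = 4096, 50.0                               # samples, fault length (km)
k = 2.0 * np.pi * np.fft.rfftfreq(n, d=L / n)   # wavenumbers (rad/km)

# schematic zero-mean on-fault stress spectrum with random phases
spec0 = (k * L + 1.0) ** (-0.3) * np.exp(1j * rng.uniform(0, 2 * np.pi, k.size))
spec0[0] = 0.0

def stress_std(y):
    """Std of off-fault stress: each mode damped as tau(k, y) ~ exp(-k y) tau(k, 0)."""
    return np.fft.irfft(spec0 * np.exp(-k * y), n=n).std()

stds = [stress_std(y) for y in (0.0, 1.0, 2.5, 5.0)]   # distances in km
```

Since every mode is attenuated by e^{−ky}, the standard deviation decreases monotonically with y, reproducing the rapid loss of heterogeneity described above.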

5 Conclusion

We have shown how the rate-and-state seismicity model, with a heterogeneous stress field, can explain the spatial distribution of aftershocks on or close to the rupture area, and an Omori-law decay with exponent p ≤ 1.


Figure 3: (a) Seismicity rate for different values of the distance to the fault y/L decreasing from y/L = 0 (top) to y/L = 0.2 (bottom), using the slip model shown in Figure 1a, and assuming Aσn = 1 MPa. (b) Standard deviation (solid line) and absolute value of the mean (dashed line, the average stress change is always negative) of the stress distribution as a function of the distance to the fault.

Deviations from p = 1 are mapped onto measures of stress change heterogeneity on the fault. We have shown that modest catalogue lengths allow an accurate inversion for the standard deviation of the stress distribution and for the aftershock duration ta. The stress drop is, however, not constrained if the catalog is too short (tmax < ta). This model cannot explain why some aftershock sequences have p > 1. A possible explanation involves stress relaxation with time due to post-seismic deformation. Inverting the stress distribution from the aftershock rate requires knowing Aσn. We used Aσn = 1 MPa, based on laboratory experiments, which give A = 0.01 [Dieterich, 1994], and assuming σn = 100 MPa (lithostatic pressure at 5 km). Dieterich [1994] suggests Aσn = 0.1 MPa from the relation between aftershock duration, tectonic loading rate, and recurrence time. We have neglected in our study the effect of secondary aftershocks. Ziv and Rubin [2003] found, using a numerical and analytical study of the rate-and-state model, that secondary aftershocks increase the reference rate Rr but do not change p or τ*. We have also assumed that Aσn is uniform. But fluctuations of Aσn do not change the seismicity rate if Aσn is less heterogeneous than the coseismic shear stress change τ.

References

[1] Dieterich, J. (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99, 2601-2618.
[2] Dieterich, J. (2005), Manuscript in preparation, talk at http://online.itp.ucsb.edu/online/earthq c05.
[3] Helmstetter, A. and B. E. Shaw (2005), Estimating stress heterogeneity from aftershock rate, submitted to J. Geophys. Res. (http://www.arxiv.org/pdf/physics/0509252).
[4] Herrero, A. and P. Bernard (1994), A kinematic self-similar rupture process for earthquakes, Bull. Seism. Soc. Am., 84, 1216-1228.
[5] Marsan, D. (2005), Can co-seismic stress variability suppress seismicity shadows? Insights from a rate-and-state friction model, submitted to J. Geophys. Res.
[6] Ziv, A. and A. M. Rubin (2003), Implications of rate-and-state friction for properties of aftershock sequences: quasi-static inherently discrete simulations, J. Geophys. Res., 108, 2051, doi:10.1029/2001JB001219.


Earthquake Source Parameters of Microearthquakes at Parkfield, CA, Determined Using the SAFOD Pilot Hole Seismic Array

K.Imanishi1 and W. L. Ellsworth 2

1 Geological Survey of Japan, AIST, Tsukuba, Ibaraki 305-8567, Japan 2 U.S. Geological Survey, Menlo Park, CA 94025, USA

Corresponding Author: K. Imanishi, [email protected]

Abstract: We estimate the source parameters of 34 microearthquakes at Parkfield, CA, ranging in size from M -0.3 to M 2.1, by analyzing seismograms recorded by the 32-level, 3-component seismic array installed in the SAFOD Pilot Hole. We succeeded in obtaining stable spectral ratios by stacking the ratios calculated from moving windows taken along the record following the direct waves. These spectral ratios were inverted for seismic moments and corner frequencies using the multiple empirical Green's function approach. Both the static stress drops and the apparent stresses of microearthquakes at Parkfield do not differ greatly from those reported for moderate and large earthquakes. It is likely that the dynamics of microearthquakes at Parkfield is similar to that of natural tectonic earthquakes from a macroscopic viewpoint. Half of the static stress drops are above 50 MPa, which is near the strength of the rock. This suggests that locally there can be high levels of stress even in active mature fault zones such as the San Andreas Fault.

1. Introduction Static stress drop and radiated seismic energy are key parameters needed to understand the physics of earthquakes, investigate seismic source scaling relationships, and infer the local stress level in the crust. Recent work by Ide et al. (2003) suggests that propagation effects can contaminate measurements of source parameters even in the borehole recordings. One method for overcoming the problem is to use the spectral ratio method, which removes the propagation effects by deconvolving the smaller earthquake seismograms from those of the larger one. However, the deconvolution procedure is an unstable process, especially for higher frequencies, because small location differences result in the profound effects on the spectral ratio. This leads to large uncertainties in the estimations of corner frequencies. In this study, we propose a method to determine a stable spectral ratio by stacking the ratios calculated from the moving windows taken along the record following the direct waves. We apply the method to microearthquakes occurring at Parkfield, CA, using the SAFOD Pilot Hole seismic array.

2. Data In the summer of 2002, the Pilot Hole for the San Andreas Fault Observatory at Depth (SAFOD) was drilled to a depth of 2.2 km. The site lies just north of the rupture zone of the M 6 1966 Parkfield earthquake (Fig. 1). The seismic array consists of 32 levels of 3-component geophones (GS-20DM) at 40 meter spacing (856 to 2096 m depth). The sensors have a natural frequency of 15 Hz and a damping constant of 0.57. The data were recorded at a sampling rate of 1 kHz starting in July 2002, increased to 2 kHz on December 12, 2002. On the basis of waveform similarity, we detected 34 events, classified into 14 groups, ranging in size from M -0.3 to M 2.1 (Fig. 1). These events are included in cluster 9. Seismograms of cluster 10 recorded at 428 m depth below mean sea level are shown in Fig. 1.

Figure 1. Location of the SAFOD Pilot Hole (square) and earthquake clusters analyzed (circles). The gray line shows the surface trace of the San Andreas Fault. Seismograms of three clusters recorded at 428 m depth below mean sea level are shown.

3. Method We apply the spectral ratio method (Ide et al., 2003) to determine corner frequencies and seismic moments for the events within each cluster. As mentioned above, the deconvolution procedure is an unstable process, especially at higher frequencies, because small location differences have profound effects on the spectral ratio. According to Chavarria et al. (2003), the wavetrain recorded in the Pilot Hole is dominated by reflections and conversions, not random coda waves. We therefore expect that the spectral ratios of the waves between the P and S waves will also reflect the source, as will the waves following the S wave. In fact, we succeeded in obtaining stable spectral ratios by stacking the ratios calculated from successive windows taken along the record following the direct waves. In this study, we further stacked the ratios obtained from each level of the array and each component. This makes the ratio more stable (Fig. 2a) and permits us to estimate reliable source parameters. We determine the source radius from the corner frequency using the circular crack model of Sato and Hirasawa (1973). The static stress drop is then calculated with the formula of Eshelby (1957). Following Ide et al. (2003), we derived an attenuation function at each level of the array and each component by taking the ratio between the observed and calculated spectra. By averaging these functions over all events of the cluster, we obtain the average attenuation function, which is assumed to represent the propagation effects between the source and the station. We calculate the radiated seismic energy from spectra corrected by the attenuation function.
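The stabilization idea, averaging log spectral ratios over many moving windows, can be sketched as follows. This is a schematic of the approach, not the paper's exact processing: window length, taper, and the stacking over array levels and components are placeholders:

```python
import numpy as np

def spectral_ratio_stack(big, small, fs, win=256, step=128):
    """Stack spectral ratios over moving windows along two aligned records.

    big, small : seismograms of a larger and a smaller co-located event
                 (same station and component), aligned on the direct wave.
    Returns (frequencies, geometric-mean spectral ratio).
    """
    taper = np.hanning(win)
    log_ratios = []
    nmax = min(len(big), len(small)) - win
    for i0 in range(0, nmax + 1, step):
        B = np.abs(np.fft.rfft(big[i0:i0 + win] * taper))
        S = np.abs(np.fft.rfft(small[i0:i0 + win] * taper))
        log_ratios.append(np.log((B + 1e-20) / (S + 1e-20)))
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    return freqs, np.exp(np.mean(log_ratios, axis=0))   # geometric mean

# synthetic check: if the "big" record is 3x the "small" one,
# the stacked ratio is flat at 3 for every frequency
rng = np.random.default_rng(0)
small = rng.standard_normal(2048)
freqs, ratio = spectral_ratio_stack(3.0 * small, small, fs=1000.0)
```

Averaging the log ratios (geometric mean) keeps a few unstable windows from dominating the stack, which is the point of the windowed approach.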

Figure 2. (a) Seismograms of the cluster 10 recorded at 428 m depth below mean sea level. (b) Spectral ratios (solid curves) and fitting curves for the theoretical spectral ratios assuming an omega-square model (broken curves).

4. Results The observed spectral ratios were well modeled by the omega-square model (Fig. 2b). The estimated static stress drops range from approximately 0.5 to 120 MPa (Fig. 3a) and do not vary with seismic moment. Our result is consistent with previous studies using deep borehole data, which found no breakdown in the constant stress-drop scaling over a wide range of magnitudes. The apparent stresses, defined as the rigidity multiplied by the ratio between radiated energy and seismic moment, range from approximately 0.1 to 20 MPa (Fig. 3b). Each data set in previous studies shows that the apparent stress scales with seismic moment. However, Ide and Beroza (2001) and Ide et al. (2003) have shown that most of the observed size dependence can be attributed to artifacts arising from underestimation of energy due to a finite recording bandwidth or inadequate corrections for path effects, and concluded that the apparent stress is almost constant over a wide range of seismic moment. Our result is consistent with their conclusion. The estimated static stress drops lie at the high end of the compilation: half of them are in excess of 10 MPa and some are above 50 MPa. Assuming hydrostatic pore pressures, laboratory-derived coefficients of friction (0.6<μ<0.85), and a vertical stress SV equal to the overburden pressure, the Coulomb failure criterion predicts a maximum shear stress of 43-54 MPa in a strike-slip faulting environment at a depth of 5 km. If we consider that the maximum shear stress is a good approximation of the average shear stress acting on the fault plane to cause slip, the static stress drops of some events are near the strength of the rock. The San Andreas fault has long been thought to be weak, as evidenced by the lack of a heat flow anomaly adjacent to the fault, and the shear strength of the fault has been estimated at about 20 MPa or less (e.g., Brune et al., 1969).
High stress drop microearthquakes might suggest that locally there can be high levels of stress even in active mature fault zones.
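For reference, the chain from corner frequency to static stress drop and apparent stress takes only a few lines. The Eshelby (1957) relation Δσ = (7/16)·M0/r³ and the definition of apparent stress follow the text; the corner-frequency constant below is Brune's value, used only as an illustrative stand-in for the Sato and Hirasawa (1973) constant actually used in the paper, and the example event is hypothetical:

```python
import math

def source_radius(fc, beta=3500.0, k=2.34 / (2.0 * math.pi)):
    """Source radius (m) from corner frequency fc (Hz) for a circular crack.

    k is model dependent; 2.34/(2*pi) is Brune's constant (illustrative),
    beta is an assumed shear-wave speed in m/s.
    """
    return k * beta / fc

def static_stress_drop(M0, r):
    """Eshelby (1957) circular crack: delta_sigma = (7/16) * M0 / r^3 (Pa)."""
    return 7.0 * M0 / (16.0 * r ** 3)

def apparent_stress(Er, M0, mu=3.0e10):
    """Apparent stress: rigidity times radiated energy over seismic moment."""
    return mu * Er / M0

# hypothetical microearthquake: M0 = 2e11 N m (~M 1.5), fc = 50 Hz
r = source_radius(50.0)                 # a few tens of metres
dsig = static_stress_drop(2.0e11, r)    # ~5 MPa, within the observed range
```

The example lands comfortably inside the 0.5-120 MPa range reported above; pushing the stress drop toward 50 MPa at fixed moment requires roughly halving the source radius, i.e. a factor of two in corner frequency.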

Figure 3. (a) Static stress drop versus seismic moment. (b) Apparent stress versus seismic moment. The results of this study are plotted as solid circles. Gray circles represent lower limits of the apparent stress.

5. Discussion and Conclusions We have shown that both the static stress drops and the apparent stresses of microearthquakes at Parkfield do not depend on earthquake size, and are similar to those reported for moderate and large earthquakes. It is likely that natural tectonic earthquakes are self-similar over a wide range of earthquake size and that the dynamics of small and large natural tectonic earthquakes are similar from a macroscopic viewpoint. Bridging the gap between laboratory friction experiments and seismological measures of earthquake sources makes it possible to use the laboratory results as a basis for models of

the earthquake generation process. It should be noted that the earthquakes analyzed in this study include events with dimensions of less than 10 m, approaching those in laboratory studies (about 1 m). Further analysis of microearthquakes at Parkfield is expected to narrow the gap between laboratory experiments and natural tectonic earthquakes. Based on the recurrence intervals of repeating microearthquakes at Parkfield, Nadeau and Johnson (1998) derived scaling relations for displacement, source dimension, and static stress drop with seismic moment. The implied stress drops decrease from ~2000 MPa to 100 MPa for M from ~0 to 3. Their scaling relation is shown by a broken line in Fig. 3a and is much higher than our estimates. Failure of repeating earthquakes is often thought to occur on a strong fault patch (asperity) embedded in an aseismically creeping fault plane. According to the asperity model, the small strong asperity patch is surrounded by a much weaker fault that creeps under the influence of tectonic stress. When the asperity patch ruptures, the surrounding area slips as it is dynamically loaded by the stress release of the asperity patch. Based on this asperity model, our estimated source sizes correspond to the latter, while the source dimensions of Nadeau and Johnson (1998) can be considered as the former. If so, the stress drop averaged over the asperity patch should be higher than that over the entire slip surface. This might explain the discrepancy seen in Fig. 3a, but it is unlikely that the stress drops exceed the strength of the rock. Although several models have been proposed to resolve the stress drop question at Parkfield, it remains unresolved. We should re-examine the asperity model based on the results obtained in this study.

6. Acknowledgement We thank Eylon Shalev and Peter Malin of Duke University for their dedication to the operation of the SAFOD Pilot Hole seismic array and for distributing the data within the research community.

7. References
Brune, J. N., T. L. Henyey, and R. F. Roy (1969), Heat flow, stress, and rate of slip along the San Andreas Fault, California, J. Geophys. Res., 74, 3821-3827.
Chavarria, J. A., P. Malin, R. D. Catchings, and E. Shalev (2003), A look inside the San Andreas fault at Parkfield through vertical seismic profiling, Science, 302, 1746-1748.
Eshelby, J. D. (1957), The determination of the elastic field of an ellipsoidal inclusion and related problems, Proc. R. Soc. London, 241, 376-396.
Ide, S. and G. C. Beroza (2001), Does apparent stress vary with earthquake size?, Geophys. Res. Lett., 28, 3349-3352.
Ide, S., G. C. Beroza, S. G. Prejean and W. L. Ellsworth (2003), Apparent break in earthquake scaling due to path and site effects on deep borehole recordings, J. Geophys. Res., 108 (B5), 2271, doi:10.1029/2001JB001617.
Nadeau, R. M., and L. R. Johnson (1998), Seismological studies at Parkfield VI: Moment release rates and estimates of source parameters for small repeating earthquakes, Bull. Seismol. Soc. Am., 88, 790-814.
Sato, T., and T. Hirasawa (1973), Body wave spectra from propagating shear cracks, J. Phys. Earth, 21, 415-431.

Statistical Models Based on the Gutenberg-Richter a and b values for Estimating Probabilities of Moderate Earthquakes in Kanto, Japan

M. Imoto1

1NationalResearch Institute for Earth Science and Disaster Prevention Tennodai 3-1, Tsukuba-shi, Ibaraki-ken 305-0006, Japan

Corresponding Author: M. Imoto, [email protected]

Abstract: We attempt to construct models for estimating the probabilities of moderate-size earthquakes in Kanto, central Japan, with the recurrence times of target earthquakes considered in inverse relation to the hazard. In estimating the recurrence times, the a and b values of the Gutenberg-Richter relation for an area of interest are estimated and the relation is extrapolated to the magnitude range of the targets. We consider two alternative cases: the case of the b value varying from point to point, and the case of the b value remaining constant over the whole space under study. Retrospective analysis based on the catalogue of the NIED Kanto-Tokai network for the period from 1990 to 1999 indicates that the model with constant b performs better than that with varying b. This implies that the spatial variation in b will not work effectively for estimating earthquake probability. However, a comparison between the b value distributions during the background period and over the set of time-space points conditioned by target earthquakes suggests that some information could be obtained from the b value, since the Kullback-Leibler information statistic between the two distributions indicates a significant value. In order to incorporate variations in b into the model with b remaining constant, a hazard function with a single parameter, b, is considered, in which a normal distribution is adopted and optimized to the data. The b value is distributed independently from the distribution of the a value. Taking these two values as parameters for earthquake precursors, a formula for earthquake probabilities based on multi-disciplinary parameters could be applied. Thus, we obtain a model of earthquake probability using both the a and b values which performs better than either model based directly on the Gutenberg-Richter relation, with constant or variable b.

1. Introduction Utsu (1977, 1982), Aki (1981) and others have formulated an earthquake probability calculation based on multiple precursory phenomena. These calculations assume that different precursory phenomena occur independently of each other. In such a case, the probability expected from detecting multiple precursory phenomena is given by the product of the probability gains for respective observations and the probability calculated from secular seismicity. Therefore, the probability becomes large when multiple precursory phenomena are detected simultaneously. Only a small number of reports have been published based on this formula, perhaps due to a common understanding that no reliable precursor has been observed except for foreshocks and others. This formulation, in which phenomena are treated as two categorical values in the original form, could be reformulated in light of recent point-process modeling to apply to cases of a quantitative observed value. As an example, the a and b values of the Gutenberg-Richter relation could be treated as such parameters and are indeed being used for building probability models for moderate earthquakes in Kanto, central Japan.

2. Data and Models Hazard functions for a moderate-size earthquake (M≥5.0) occurring within the study volume (200 × 200 × 90 km³) for the period from January 1990 to December 1999 were obtained using the Gutenberg-Richter relation. The a and b values at a given space-time point, which measure the recurrence intervals of targets, are calculated from earthquakes within 20 km of the point occurring over a period of 3650 days prior to the assessment. The assessment was made at 2-km-spaced points and at 10-day intervals. We classified the points of assessment into two categories: points of target events and points of background. A point of target event is the grid point of assessment nearest to a target in space and just before it in time. The other grid points are classified as background. We thus obtained two distributions, the background distribution and the conditional distribution, for each of the a and b parameters. Figures 1a and 1b illustrate the background distribution with a blue dashed line and the conditional distribution with a red solid line for a and b, respectively. In Fig. 1a, the difference between the two distributions is quite obvious. On the other hand, in Fig. 1b, the difference between the two distributions is not extremely large but is still significant.

Figure 1a Cumulative distributions of the a value for background points (blue dashed line) and for points conditioned by target earthquakes (red solid line).

Figure 1b Same as Fig. 1a but for the b value.

A-model: Assuming a single constant b value over the study volume, only the spatial variation in a is taken into account when estimating the recurrence interval of a target event. The hazard function for this model is given as:

λa(s; M ≥ 5.0) = N(s; M ≥ 2.0) · 10^{−b·Δm} · vr ,   (1)

where the first term on the right-hand side is the number of earthquakes in a sample space-time volume, the common logarithm of which defines the a value; the second term is a constant in this case; and the last term, vr, is a normalizing factor between the sample volume and the unit volume for the hazard function.
B-model: The same as the A-model but with the b variation. The hazard for this case is given as:

λb(s; M ≥ 5.0) = N(s; M ≥ 2.0) · 10^{−b(s)·Δm} · vr .   (2)

AxB-model: Taking the two distributions of b in Fig. 1b into consideration, a hazard function is assumed in the following form:

λaxb(s; M ≥ 5.0) = N(s; M ≥ 2.0) · 10^{c1·b + c2·b²} · vr ,   (3)

where c1 and c2 are optimized to the data for the learning period (January 1990 to December 1999).
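The three hazard functions (1)-(3) reduce to one-liners once N(s; M ≥ 2.0) and b(s) are available on the assessment grid. In the sketch below, Δm = 3.0 (the extrapolation from M ≥ 2.0 counts to M ≥ 5.0 targets) and the normalizing factor vr are assumptions kept as explicit parameters:

```python
DM = 3.0   # magnitude extrapolation: M >= 2.0 counts to M >= 5.0 targets

def hazard_A(N, b, v_r=1.0):
    """A-model (eq. 1): single constant b over the study volume."""
    return N * 10.0 ** (-b * DM) * v_r

def hazard_B(N, b_s, v_r=1.0):
    """B-model (eq. 2): b value b_s estimated at each space point."""
    return N * 10.0 ** (-b_s * DM) * v_r

def hazard_AxB(N, b_s, c1, c2, v_r=1.0):
    """AxB-model (eq. 3): exponent c1*b + c2*b^2 with c1, c2 fitted to data."""
    return N * 10.0 ** (c1 * b_s + c2 * b_s ** 2) * v_r
```

As expected from the Gutenberg-Richter extrapolation, a lower local b value yields a higher extrapolated hazard for the M ≥ 5.0 targets.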

3. Performance To estimate the performance of the above-mentioned models, the difference in log likelihood between a proposed model and the stationary Poisson model, which is called the information gain (IG value; Daley and Vere-Jones, 2003), is calculated using data from both the learning period and a test period (January 2000 to September 2005). Table 1 lists the IG values for the A-, B-, and AxB-models for both periods. The IG values for the A-model are larger than those for the B-model for both periods. This suggests that the spatial variation in the b value does not work effectively for estimating probabilities. The IG values for the AxB-model, however, are larger than those for either the A- or the B-model for both periods. Thus, the b parameter of the Gutenberg-Richter relation is used more effectively in this model than in the models that use the Gutenberg-Richter relation directly, with the b value constant or varying in space.
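The IG value can be computed from discretized rates as the log-likelihood difference against a stationary Poisson model with the same mean rate; the toy example below is illustrative only:

```python
import numpy as np

def log_likelihood(lam, event_cells):
    """Poisson log-likelihood of a discretized rate model (unit cell measure)."""
    lam = np.asarray(lam, dtype=float)
    return float(np.sum(np.log(lam[event_cells])) - np.sum(lam))

def information_gain(lam, event_cells):
    """IG: model log-likelihood minus that of a stationary Poisson model
    with the same mean rate (cf. Daley and Vere-Jones, 2003)."""
    lam = np.asarray(lam, dtype=float)
    lam0 = np.full_like(lam, lam.mean())
    return log_likelihood(lam, event_cells) - log_likelihood(lam0, event_cells)

# toy example: a model that concentrates rate where the events actually are
lam = np.full(10, 0.1)
lam[[2, 5]] = 1.0
ig = information_gain(lam, [2, 5])   # positive: beats the Poisson baseline
```

A model that assigns no more rate to the event cells than anywhere else gains nothing; concentrating rate at the right cells yields a positive IG, which is what Table 1 measures for the A-, B-, and AxB-models.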

Information Gain
Period                       N    A-model   B-model   AxB-model
Learning (1990.1-1999.12)    38   38.07     36.49     44.07
Test (2000.1-2005.9)         19   33.05     31.61     37.23

Table 1. IG values for the A-, B-, and AxB-model for the learning and test periods.

4. References
Aki, K., A probabilistic synthesis of precursory phenomena, in Earthquake Prediction, edited by D. W. Simpson and P. G. Richards, pp. 566-574, AGU, 1981.
Daley, D. J. and Vere-Jones, D., An Introduction to the Theory of Point Processes, vol. 1, Elementary Theory and Methods, second edition, 469 pp., Springer, New York, 2003.
Utsu, T., Probabilities in earthquake prediction, Zisin II, 30, 179-185, 1977 (in Japanese).
Utsu, T., Probabilities in earthquake prediction (the second paper), Bull. Earthq. Res. Inst., 57, 499-524, 1982 (in Japanese).

Seismicity Cycle and Large Earthquake

S. Itaba 1 and K. Watanabe 2

1Geological Survey of Japan, AIST, 1-1-1, Higashi, Tsukuba, Ibaraki, 305-8567, Japan 2Disaster Prevention Research Institute, Kyoto University, Gokasho, Uji, Kyoto, 611-0011, Japan

Corresponding Author: S. Itaba, [email protected]

Abstract: There are seismicity cycles with repetitions of large earthquakes, with durations of hundreds to tens of thousands of years. However, the period of observation for modern seismology is about a hundred years, only a small portion of one cycle. We clarify the outline of the seismicity cycle through the relation between the lapsed rate (LR) since the latest large earthquake and the present seismic activity of large-earthquake occurrence zones (active faults and plate boundaries).

1. Introduction There are periodicities in seismicity, including large earthquakes occurring on plate boundaries or active faults as well as microearthquakes. For inland active faults, after the occurrence of a large earthquake, the aftershocks usually continue for several to tens of years, decreasing with time [Watanabe, 1989]. In the region of the 1891 Nobi Earthquake, it is known that aftershock activity has been continuing for more than a hundred years. Following the high level of aftershock activity, the seismicity cycle moves into a phase of much lower seismic activity [Toda, 2002], and then the next major earthquake occurs. For inland active faults, the seismicity cycle is thousands to tens of thousands of years; for plate boundaries, it is usually tens to hundreds of years. The duration of a cycle varies widely between faults and regions. The period of observation for modern seismology is about a hundred years, only a small portion of one cycle. Therefore, it is difficult to recognize what portion of the seismicity cycle the current seismic activity corresponds to. However, the current stage of the cycle can be estimated from geological data. By combining geological data and current seismicity for many different faults or plate boundaries, we can view a complete seismicity cycle. We evaluated quantitatively the seismic activity of 98 major active faults in Japan (98 faults), 17 segments of the San Andreas Fault System (SAFS), and 34 focal regions on plate boundaries or interplate zones in the seas adjacent to Japan (34 regions). Then, we investigated the relation between the lapsed time (LT) from the last large earthquake and the present seismic activity. Although there is large variation in the scales and recurrence times (RT) of faults, we developed a method to correct for these factors. We find a clear correlation between the LR from a large earthquake and the present seismic activity.

2. Method The LT and RT are extracted from the following data sources. For the 98 faults, the newest information available from the Earthquake Research Committee, The Headquarters for Earthquake Research Promotion [2002, 2003 and 2004] is used. For the SAFS, the report "Probabilities of Large Earthquakes Occurring in California on the San Andreas Fault" by the Working Group on California Earthquake Probabilities [1988] is used. For the 34 regions, the Japan Seismic Hazard Information Station (J-SHIS) by the National Research Institute for Earth Science and Disaster Prevention is used. The seismic activity of inland active faults is evaluated quantitatively using the method of Itaba et al. [2003]. Since there are large variations in RT between faults, the concept of LR is introduced. LR is defined from the RT and LT of a fault as

LR = (LT/RT) · 100 [%].   (1)

Moreover, it is thought that the pace of seismicity also changes with the length of the cycle period. Therefore, the following formula defines the earthquake number density per 1% of RT (NPR), which is compared with LR. Here N is the total number of earthquakes occurring within the target zone, A is the area of the target zone, and P is the observation period (years):

NPR = N · RT / (A · P · 100) [count / (km² · 1%RT)].   (2)
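Equations (1) and (2) translate directly into code; the fault parameters in the example below are hypothetical:

```python
def lapsed_rate(LT, RT):
    """LR = (LT / RT) * 100 [%], eq. (1)."""
    return LT / RT * 100.0

def npr(N, RT, A, P):
    """NPR = N * RT / (A * P * 100) [count / (km^2 * 1%RT)], eq. (2).

    N  : number of earthquakes in the target zone during the observation
    RT : recurrence time of the fault (yr)
    A  : area of the target zone (km^2)
    P  : observation period (yr)
    """
    return N * RT / (A * P * 100.0)

# hypothetical inland fault: last event 500 yr ago, RT = 2000 yr,
# 40 events observed over 10 yr in a 500 km^2 zone
lr = lapsed_rate(500.0, 2000.0)          # 25 % of the cycle elapsed
density = npr(40, 2000.0, 500.0, 10.0)   # rate normalized per 1% of RT
```

The RT factor in NPR is what makes faults with very different cycle lengths comparable on the same LR axis.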

3. Result and discussion The relations between LR and NPR are shown in Figures 1 to 3. In all examples, there is a clear correlation between LR and NPR. For inland faults, after the occurrence of the latest large earthquake, the seismicity decays smoothly, inversely proportional to the lapsed time, for over a thousand years. For inland faults, after a large earthquake, it is usually assumed that the aftershock activity decays to the background level in a few tens to about a hundred years. If this were the case, the correlation between LR and seismic activity should be recognized only for small values of LR. However, according to the results of this investigation, there is a dependence on LR over most of the seismicity cycle. Moreover, it turns out that the pattern is consistent with the modified Omori formula [Utsu, 1961]. The linear decay also continues for about one cycle in the 34 focal regions. From these results, we conclude that the low level of seismicity that is usually thought to be steady background activity may actually be decaying aftershocks that last over most of the seismicity cycle, which can be hundreds to thousands of years for the inland faults, or tens to hundreds of years around plate boundaries.

Figure 1. (Left) Relation between LR and NPR for Japan (98 faults). There is a relation that NPR falls linearly as LR increases. The p value of the modified Omori formula is 0.954. A brown round mark shows some examples before and after large earthquakes (for three years).

Figure 2. (Right) Relation between LR and NPR for SAFS. There is a relation that NPR falls linearly as LR increases. For the north and central part of SAFS, the p value of the modified Omori formula is 1.03, for the south part of SAFS, it is 1.18. A brown round mark shows some examples before and after large earthquakes (for three years).

[Figure omitted: log-log plots of NPR versus LR for focal regions in the periods Jan 1997 - Dec 1999 (p value = 0.780), Jan 2000 - Dec 2002 (p value = 0.743), Jan 2003 - Jul 2005 (p value = 0.527), and all periods combined (p value = 0.639).]

Figure 3. Relation between LR and NPR for focal regions in 3 periods (A-C) respectively, and all periods (lower right). There is a relation that NPR falls linearly as LR increases in all cases.

4. Conclusion
The hypocenters of present seismic activity and the geologic information from active-fault and trench investigations can be used together to study changes of seismicity through the seismicity cycle. Although there is large variation in the scale and RT of faults, we develop corrections for these factors, and a clear correlation is recognized between LR and the present seismicity. The rates of seismicity show a decay that lasts for almost the entire seismicity cycle. Using our methodology, we can identify where the present seismicity falls in the seismicity cycle. It is thought that some faults shift to a preparatory stage before a large earthquake: when LR has reached a large portion of the seismicity cycle and a fault shifts from aftershock activity to a different, preparatory stage, the seismicity may increase. Based on this idea, it may be useful to try to detect the shift to a preparatory or calm stage before a large earthquake in the seismicity pattern. Thus, this study contributes to the overall understanding of the seismicity cycle, which will contribute to ongoing efforts in earthquake prediction and disaster mitigation.

5. Acknowledgement
I used the hypocentral data of the Japan Meteorological Agency and the catalog of the Council of the National Seismic System; the fault trace data of the Research Group for Active Faults of Japan and the California Department of Conservation, Division of Mines and Geology; the position data and evaluation results for focal regions from the Japan Seismic Hazard Information Station (J-SHIS) of the National Research Institute for Earth Science and Disaster Prevention; and the evaluation results for active faults from the Earthquake Research Committee, Headquarters for Earthquake Research Promotion, and the Working Group on California Earthquake Probabilities. I am grateful for these indispensable data.

6. References
Earthquake Research Committee, Headquarters for Earthquake Research Promotion, Publications of Earthquake Research Committee (Evaluations on long-term and strong motion) -1995-2001- (in Japanese), 476p, 2002.
Earthquake Research Committee, Headquarters for Earthquake Research Promotion, Publications of Earthquake Research Committee -January-December 2002- (in Japanese), 916p, 2003.
Earthquake Research Committee, Headquarters for Earthquake Research Promotion, Publications of Earthquake Research Committee -January-December 2003- (in Japanese), 1, 793p, 2004.
Itaba, S., K. Watanabe, R. Nishida, T. Noguchi, and J. Mori, Quantitative evaluation of inland seismic activity in Japan using GIS -Part 2-, AGU 2003 Fall Meeting, San Francisco, F1138, 12 Dec., 2003.
Toda, S., and Y. Sugiyama, Implications of seismicity on active faults in Japan (in Japanese with English abstract), 2002 Japan Earth and Planetary Science Joint Meeting, Tokyo, 29 Mar., 2002.
Utsu, T., Aftershocks and earthquake statistics (1), J. Fac. Sci., Hokkaido Univ., Ser. 7, 2, 129-195, 1969.
Watanabe, K., On the duration time of aftershocks of the earthquake of the central part of Gifu Prefecture, September 9, 1969 (in Japanese), Bull. Disas. Prev. Inst., Kyoto Univ., 39, Parts 1, 2, 1-22, 1989.
Working Group on California Earthquake Probabilities, Probabilities of large earthquakes occurring in California on the San Andreas Fault, USGS Open-File Report 88-398, 1988.

The Correlation Between the Phase of the Moon and the Occurrences of Microearthquakes in the Tamba Region

T. Iwata 1 and H. Katao 2
1 The Institute of Statistical Mathematics, 4-6-7, Minato-ku, Tokyo 106-8569, Japan
2 Disaster Prevention Research Institute, Kyoto University, Gokasho, Uji, Kyoto 611-0011, Japan

Corresponding author: T. Iwata, [email protected]

Abstract: We study the correlation between the phase of the moon and the occurrence of microearthquakes in the Tamba region, which is close to the fault of the 1995 Kobe earthquake. Since the existence of such a correlation after the Kobe earthquake was suggested in a previous study, we investigate its statistical significance here. Using point-process modeling and the AIC (Akaike Information Criterion), we confirm that the correlation is statistically significant, that it is strong just after the Kobe earthquake, and that it then becomes weaker year by year.

1. Introduction
Previous studies have attempted to find a correlation between the occurrence of earthquakes and the phase of the moon, and Katao [2002] (referred to as K2002 hereafter) is one of them. K2002 reports that, after the 1995 Kobe earthquake, the occurrence of microearthquakes (MEs) correlated with the phase of the moon in the Tamba region, which is close to the focal region of the Kobe earthquake. K2002 examined the frequency distribution of the MEs for approximately two years after the occurrence of the Kobe earthquake and showed that the frequency of the MEs varies depending on the phase of the moon. In K2002, however, the statistical significance of this variation was not investigated. Thus, to confirm the significance, we apply a statistical analysis called "point-process modeling" [e.g., Ogata, 1983a] to the data examined in K2002. In the analysis, we investigate not only the significance of the correlation but also its temporal variation.

2. Data
The earthquakes used in this study occurred in the Tamba region and are listed in the catalogue compiled by the Disaster Prevention Research Institute (DPRI), Kyoto University, Japan. The Tamba region neighbors the focal region of the 1995 Kobe earthquake, and the seismicity in this region increased after the occurrence of the Kobe earthquake [e.g., Toda et al., 1998]. We make a dataset of earthquakes inside a rectangular area which is the same as that defined in K2002. Considering the detection capability, earthquakes of M ≥ 1.4 are used.

3. Statistical Analysis Using Point-Process Modeling
To examine the statistical significance of the correlation shown in K2002, we apply point-process modeling [e.g., Ogata, 1983a]. To investigate the periodicity of seismicity with point-process modeling, Ogata [1983a] suggested the following type of function as λ(t):

λ(t) = µ + (trend) + (cluster) + (periodicity),  (1)

where λ(t) is the intensity function, which gives the occurrence rate of earthquakes per unit time (one day, in this study). We use a polynomial function for the trend term, the ETAS model [Ogata, 1988] for the cluster term, and trigonometric functions for the periodicity term. Thus, the intensity function used in this study is as follows:

λ(t) = µ + Σ_{k=1}^{N} a_k t^k + Σ_{i: t_i < t} K exp(α(M_i − M_z)) / (t − t_i + c)^p
     + Σ_{k=0}^{L1} t^k [A_{1k} sin θ(t) + B_{1k} cos θ(t)]
     + Σ_{k=0}^{L2} t^k [A_{2k} sin 2θ(t) + B_{2k} cos 2θ(t)],  (2)

(µ, a_k, K, c, p, α, A_{1k}, B_{1k}, A_{2k}, B_{2k}: parameters)

where t_i and M_i are the occurrence time and magnitude of the i-th ME, respectively, M_z is the lower magnitude threshold of our dataset, and θ(t) is a function which converts actual time t into the lunar phase angle. As the phase angle, we assign 0° or 360° to the times of new moon and 180° to the times of full moon; between these, the phase angle is assigned linearly from 0° to 180° or from 180° to 360°. Note that θ(t) and 2θ(t) correspond to periodicities of a synodic and a half-synodic month, respectively. In equation (2), we consider four kinds of constraint on the orders L1 and L2: (i) L1 = L2 = 0; (ii) L2 = 0; (iii) L1 = 0; and (iv) no constraint. Each constraint corresponds to the case where we consider (i) no triggering effect related to the phase of the moon, (ii) only the effect related to a synodic month, (iii) only the effect related to a half-synodic month, and (iv) the effects related to both a synodic and a half-synodic month. For each of the four cases, we estimate the parameters which provide the best fit to the observed time series of the MEs using the maximum likelihood method [e.g., Ogata, 1983b], and we compute the AIC (Akaike Information Criterion) [Akaike, 1974] for each of them. Among the cases, the one whose AIC is smallest is considered to offer the best fit to the observed occurrences of the MEs. Note that the orders N, L1 and L2 are also determined using AIC. We analyze the MEs during approximately four years after the 1995 Kobe earthquake, from 3:00 on 17 January 1995 to 3:00 on 17 December 1999 (JST). The number of MEs used is 5438. The AICs and the orders N, L1 and L2 of the four cases are shown in Table 1. The AIC of case (iv) is the smallest among the four.
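A minimal numerical sketch of the intensity in equation (2) can be written as follows. The parameter values, the linear lunar-phase function with a hypothetical new-moon epoch at t = 0, and the low polynomial orders are illustrative, not the fitted model:

```python
import numpy as np

SYNODIC = 29.53  # mean synodic month in days

def theta(t):
    """Lunar phase angle in radians: 0 at new moon, pi at full moon.
    A hypothetical epoch with new moon at t = 0 is assumed."""
    return 2.0 * np.pi * (t % SYNODIC) / SYNODIC

def intensity(t, events, mu, a, K, c, p, alpha, Mz, A1, B1, A2, B2):
    """Equation (2): background + polynomial trend + ETAS cluster term
    + lunar periodicity. `events` is a list of (t_i, M_i); a, A1, B1,
    A2, B2 are polynomial coefficient sequences (index = power of t)."""
    trend = sum(ak * t ** k for k, ak in enumerate(a, start=1))
    cluster = sum(K * np.exp(alpha * (Mi - Mz)) / (t - ti + c) ** p
                  for ti, Mi in events if ti < t)
    th = theta(t)
    per1 = sum(t ** k * (A1[k] * np.sin(th) + B1[k] * np.cos(th))
               for k in range(len(A1)))
    per2 = sum(t ** k * (A2[k] * np.sin(2 * th) + B2[k] * np.cos(2 * th))
               for k in range(len(A2)))
    return mu + trend + cluster + per1 + per2
```

With all trend and periodic coefficients set to zero the intensity reduces to the background rate plus the ETAS cluster term, which is a convenient sanity check.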
Since cases (i), (ii), and (iii) are restricted versions of case (iv), the difference between the AIC of case (iv) and those of the other cases can be converted into a log-likelihood-ratio statistic [e.g., Ogata, 1983b]. Using this feature, we can estimate the probability that the fit of case (iii), whose AIC is the second smallest, is better than that of case (iv). From the values shown in Table 1, the estimated probability is 2.56%, which is too small to accept the null hypothesis. This implies that the correlation between the activity of the MEs and the phase of the moon is statistically significant, and that we should consider the triggering effects related to both a synodic and a half-synodic month. Figure 1(a) shows the intensity of the trigonometric part, which represents the periodicity. The intensity after the moments of new/full moon is generally higher than in the other periods. Since

g_i(t) = sqrt{ [Σ_{k=0}^{Li} A_{ik} t^k]^2 + [Σ_{k=0}^{Li} B_{ik} t^k]^2 }  (i = 1, 2)  (3)

gives the amplitudes of the intensity terms related to a synodic and a half-synodic month, we plot these in Figure 1(b). They are at their maximum just after the occurrence of the 1995 Kobe earthquake and generally decrease as time elapses. We also apply the same statistical test to the dataset from the years before the occurrence of the Kobe earthquake, from 0:00 on 17 January 1993 to 0:00 on 17 January 1995 (JST). Case (i) is chosen as the best one; no significant correlation is found.

4. Discussion
After the Kobe earthquake, the seismicity rate in the Tamba region increased compared with that before, and this increase is assumed to be caused by the increase of static stress due to the main rupture of the Kobe earthquake [e.g., Toda et al., 1998]. As mentioned in the previous sections, the significant correlation is found only after the Kobe earthquake and not before. Additionally, the correlation is most remarkable just after the Kobe earthquake and then becomes weaker. This feature supports our interpretation that the rupture of the Kobe earthquake brought the stress/strain state in the Tamba region close to critical and that the seismicity then correlated with the phase of the moon. Recent laboratory experiments imply that the occurrence of acoustic emissions/earthquakes is affected by the amplitudes of stress/strain changes [Lockner and Beeler, 1999]. Yamamura et al. [2003] examined the seismic velocity observed in a vault on the coast of Miura Bay, Japan, and found a temporal variation of the velocity with a 14-day (approximately half-synodic-month) periodicity. These studies suggest that the changes in the amplitudes of stress/strain caused by the phase of the moon could affect the seismicity or the physical properties of the earth, and they support our results. Note also that not only 2θ(t) but also θ(t) is correlated with the occurrence of the MEs.
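The AIC comparison in Table 1 can be illustrated with Akaike's relative-likelihood rule of thumb, exp((AIC_min − AIC_k)/2). This is a rough model-comparison heuristic, not the exact log-likelihood-ratio test that yields the 2.56% probability quoted in the text:

```python
import math

# AIC values for the four constraint cases (Table 1 of the text).
aic = {"i": -4933.03, "ii": -4938.84, "iii": -4936.18, "iv": -4942.14}

best = min(aic, key=aic.get)  # case with the smallest AIC

# Relative likelihood of each restricted case versus the best one:
# values well below 1 indicate substantially worse support.
rel_like = {k: math.exp((aic[best] - v) / 2.0) for k, v in aic.items()}
```

Under this heuristic the restricted cases all receive much less support than case (iv), in qualitative agreement with the formal test described above.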
A possible cause of this feature is that the maximum amplitude of the stress/strain changes differs between new and full moon. Using the program of Agnew [1997], we calculated the strain changes at the center of the focused area. In some periods, the maximum amplitude of the strain changes clearly differs between the times around new moon and those around full moon. Our results suggest that this variation may influence the occurrence of earthquakes and that it should be included directly for more precise modeling of earthquake occurrence.

5. Acknowledgement
This study was carried out under the ISM Cooperative Research Program (03-ISM-CRP-2026). Figures are generated using the GMT software [Wessel and Smith, 1998].

6. References
Akaike, H., A new look at the statistical model identification, IEEE Trans. Autom. Control, 19, 716-723, 1974.
Agnew, D. C., NLOADF: A program for computing ocean-tide loading, J. Geophys. Res., 102, 5109-5110, 1997.
Katao, H., Relation between moon phase and occurrences of micro-earthquakes in the Tamba Plateau (in Japanese with English abstract and figure captions), J. Geography, 111, 248-255, 2002.
Lockner, D. A., and N. M. Beeler, Premonitory and tidal triggering of earthquakes, J. Geophys. Res., 104, 20,133-20,151, 1999.
Ogata, Y., Likelihood analysis of point processes and its applications to seismological data, Bull. Int. Statist. Inst., 50, Book 2, 943-961, 1983a.
Ogata, Y., Estimation of the parameters in the modified Omori formula for aftershock frequencies by the maximum likelihood procedure, J. Phys. Earth, 31, 115-124, 1983b.
Ogata, Y., Statistical models for earthquake occurrences and residual analysis for point processes, J. Amer. Statist. Assoc., 83, 9-27, 1988.
Toda, S., R. S. Stein, P. A. Reasenberg, J. H. Dieterich, and A. Yoshida, Stress transferred by the 1995 Mw = 6.9 Kobe, Japan, shock: Effect on aftershocks and future earthquake probabilities, J. Geophys. Res., 103, 24,543-24,565, 1998.
Wessel, P., and W. H. F. Smith, New, improved version of the Generic Mapping Tools released, EOS, 79, 579, 1998.
Yamamura, K., O. Sano, H. Utada, Y. Takei, S. Nakao, and Y. Fukao, Long-term observation of in situ seismic velocity and attenuation, J. Geophys. Res., 108, 2317, doi:10.1029/2002JB002005, 2003.

[Figure omitted: panels (a) and (b); axes are phase angle (degrees), intensity (day^-1), and elapsed time (days).]

Figure 1. (a) Temporal variation of the intensity function of the trigonometric part in equation (2) obtained by the analyses of the MEs from 12:00 on 17 January 1995 to 12:00 on 17 January 1999. The vertical axis indicates the elapsed days since 12:00 on 17 January 1995. (b) Temporal variations of g_i(t) as shown in equation (3), which denotes the amplitudes of the intensity functions of the trigonometric part in equation (2). The solid and dotted lines correspond to the amplitudes related to a synodic month (i = 1 in equation (3)) and a half-synodic month (i = 2), respectively.

constraint      (i)        (ii)       (iii)      (iv)
(N, L1, L2)     (3, 0, 0)  (3, 3, 0)  (3, 0, 3)  (3, 3, 3)
AIC             -4933.03   -4938.84   -4936.18   -4942.14*
* The minimum value of AIC is obtained.

Table 1. The AICs and the orders N, L1 and L2 of the polynomial functions representing the trend and the temporal variations of the periodicities related to a synodic and a half-synodic month, using the intensity function shown in equation (2), for the analyses of the MEs from 12:00 on 17 January 1995 to 12:00 on 17 January 1999.

A New Multiple-Dimension Stress Release Statistical Model Based on Co-seismic Stress Triggering

Mingming Jiang and Shiyong Zhou
Geophysics Department, Peking University, Beijing 100871, China

Abstract
Based on the stress release model proposed by Vere-Jones (1978), we have developed a multiple-dimension stress release model which incorporates a quantitative model of static stress triggering. The model is applied to the historic catalog of North China. According to the AIC, our model is better than both the original model and a static Poisson process. We also give a reasonable physical interpretation of the risk function in the stress release model.

Introduction
Since the elastic rebound theory was presented by Reid (1910), various models, such as the spring-block model and the cellular automaton model, have been proposed to simulate earthquake occurrence. However, many features of earthquake generation are not captured by these physical models. These unknown features result in a degree of randomness, which can be factored into forecasts through the medium of stochastic models. Knopoff (1971) first treated the earthquake sequence as a Markov process and built the first stochastic model for earthquake occurrence. Vere-Jones (1978) proposed the stress release model, a stochastic version of the elastic rebound theory, incorporating a deterministic build-up of stress within a region and its stochastic release through earthquakes. Earthquake interaction by stress triggering and stress shadows is now well accepted. Liu et al. (1999) considered the spatial interaction of earthquake occurrence between seismic belts and developed the coupled stress release model. However, the coupled stress release model cannot assess the interaction between two single earthquakes quantitatively. The motivation of this paper is to incorporate a static stress triggering model into the stress release model and to formulate a new space-time-magnitude version of the stress release model.
A physical interpretation of the risk function
In the stress release model, the risk function, which relates the stress level in a region to the probability of an earthquake occurring, is as follows:

λ = exp(µ + νX),  (1)

where λ is the conditional intensity function, X is the stress level, and µ and ν are constants. This exponential relation has proved the best choice among alternative relations in numerous trials and analyses (Zheng & Vere-Jones, 1991, 1994). But what is the nature of this risk function? It is well known that the strength of most brittle materials, including rocks, is time dependent. If such a material is subjected to a constant load it will, in general, fracture after some time interval. This weakening in time is known as static fatigue. In laboratory studies of static fatigue, a good formula for most of the materials studied is given by (Scholz, 1968)

t = a exp[b(S* − σ)],  (2)

where t is the fracture time, S* is the 'no corrosion' strength, σ is the stress, and a and b are constants. Note that the reciprocal of t in equation (2) is equivalent to the conditional intensity function λ in equation (1): taking 1/t yields the same exponential form as equation (1). This laboratory result may be taken as the nature of the risk function. Moreover, the stress level X can then be identified with the real stress σ, which is important for our model.
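The link between the static-fatigue law and the exponential risk function can be checked numerically. Assuming the static-fatigue form t = a·exp[b(S* − σ)] (a reading of equation (2)), the reciprocal 1/t equals exp(µ + νσ) with µ = −ln a − bS* and ν = b. All parameter values below are illustrative:

```python
import math

def fracture_time(sigma, a, b, s_star):
    """Static-fatigue law (after Scholz, 1968): t = a * exp(b * (S* - sigma))."""
    return a * math.exp(b * (s_star - sigma))

# Illustrative material constants (not laboratory values).
a, b, s_star = 2.0, 0.3, 10.0

# Identify 1/t with the risk function of equation (1):
# lambda = exp(mu + nu * sigma), with mu = -ln(a) - b*S* and nu = b.
mu, nu = -math.log(a) - b * s_star, b

sigma = 4.0
risk = 1.0 / fracture_time(sigma, a, b, s_star)
```

The identity holds for any σ, which is the algebraic content of the "reciprocal" remark in the text.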

New physical model for the SRM
Elastic rebound theory is a simple time-dependent model. To extend the SRM to a space-time version, we construct a new 3-D physical model based on simple elastic rebound theory. Assuming every spatial point corresponds to a fault, we express the new physical model on the fault as

σ(x, y, t) = σ(0) + ρ(x, y)·t + σ_s(x, y, t),  (3)

where σ(x, y, t) is the shear stress in the slip direction of the fault, ρ(x, y) is the shear loading rate from the external tectonic force, which can be calculated in our loading model, and σ_s(x, y, t) is the accumulated shear stress change due to earthquakes, including the effect of stress triggering; stress release enters with a negative sign. To simplify the problem, we consider the region as a homogeneous three-dimensional elastic half-space. Assuming the external tectonic force is horizontal and time-independent, the loading rate tensor can be expressed as two orthogonal constant loading rates ρ1 and ρ2 (ρ3 = 0) in the principal coordinates and an azimuth θ between the direction of ρ1 and east; for further simplification we take ρ2 = −ρ1. Given the values of ρ1 and θ, we can project the loading rate tensor onto the slip direction on the earthquake faults and obtain the loading rate ρ(x, y) in equation (3). The accumulated stress change due to earthquakes includes not only the direct stress release of the local earthquake but also the influence of stress triggering from far away. It is

convenient to introduce the coordinate systems first (Figure 1). Okada's (1992) computer routines are used to derive the stress changes due to earthquakes. His formulation gives the displacements and their spatial derivatives anywhere in an elastic half-space due to slip of any direction and magnitude on a rectangular fault (here, the fault on which the earthquake occurs). The induced strain and stress tensors at an object point are then easily obtained, and the induced shear stress on the object fault is obtained by projecting the stress tensor onto the fault. To calculate this, we need to know the source mechanisms of the earthquakes (strike, dip, length, width and slip vector of the faults) in addition to the common data in earthquake catalogs.

Figure 1. The three coordinate systems used in the new physical model. E (east), N (north) and Up form the master coordinate system in which each fault is specified; x, y, and z are the fault-referenced coordinates used by Okada (1992); 1, 2 and 3 are our fault-referenced coordinates used for the calculation of shear stress on the fault plane. Here δ is the fault dip, φ is the fault strike, and L and W are the fault length and width.

Construction of the 4-D SRM
Based on the new physical model, the conditional intensity function can be written as

λ(x, y, t, m) = λ1(x, y, t)·f(m),   λ1(x, y, t) = exp{α + ν[ρ(x, y)·t + σ_s(x, y, t)]},  (4)

where f(m) is the probability distribution of magnitude. Obviously, σ(0) has been absorbed into the other parameters. The model parameters can be obtained by using the log-likelihood method,

log L = Σ_{i=1}^{N(T)} log λ(x_i, y_i, t_i) − ∫_0^T ∫∫_S λ(x, y, t) ds dt + Σ_{i=1}^{N(T)} log f(m_i),  (5)

where N(T) is the number of earthquakes within the interval (0, T). Because spatial points at which no earthquake occurred in the interval (0, T) may simply never have earthquakes, only the spatial points where earthquakes actually occurred in (0, T) are used in the inversion. To obtain the conditional intensity function at spatial points where no earthquakes occurred in our time interval, we introduce a weight function g(x, y), that is,

λ2(x, y, t, m) = g(x, y)·λ1(x, y, t, m).  (6)

The weight function g(x, y) can be obtained using the procedure of Stock & Smith (2002).

Application to the historic earthquake catalog of North China
In applying the statistical model to the historic earthquake catalog of North China, we select only MS ≥ 6.5 earthquakes from 1300 to 2000. The focus region extends from 108°E to 121°E in longitude and from 34°N to 42°N in latitude (Figure 2). Since there are no source mechanism solutions for the earthquakes before 1970, we obtain them indirectly: the strike and dip of an earthquake fault from geologic surveys; the rupture length, rupture width and slip distance from scaling relations; and the slip angle by projecting the mean tectonic stress onto the earthquake fault. Since the slip angle is calculated from the mean tectonic stress field, we take the azimuth θ between the direction of ρ1 and east as a constant. We compared the results with those of the original stress release model and a static Poisson process using the AIC (Akaike, 1977) (Table 1). The new model is much better than both. We show the risk function versus time in Figure 2.
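A schematic version of the log-likelihood (5), with the space-time integral supplied as a precomputed number and a Gutenberg-Richter magnitude density assumed for f(m) (both are illustrative simplifications, not the authors' implementation):

```python
import math

BETA = math.log(10.0) * 1.0  # Gutenberg-Richter b = 1, assumed for illustration
M0 = 6.5                     # magnitude threshold used in the application

def f_m(m):
    """Exponential (Gutenberg-Richter) magnitude density above M0."""
    return BETA * math.exp(-BETA * (m - M0))

def log_likelihood(events, lam, integral):
    """Equation (5): sum of log-intensities at the observed events
    (x_i, y_i, t_i, m_i), minus the space-time integral of the intensity,
    plus the log magnitude-density terms."""
    point_term = sum(math.log(lam(x, y, t)) for x, y, t, m in events)
    mag_term = sum(math.log(f_m(m)) for x, y, t, m in events)
    return point_term - integral + mag_term
```

In practice the integral term would itself be evaluated numerically over the study region and the interval (0, T), and the parameters inside `lam` would be varied to maximize log L.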

Table 1. The result of maximum likelihood method

α        ν         σ1 (Pa)   b-value   AIC_NSRM   AIC_OSRM   AIC_Poisson
-6.874   2.71e-6   1671.9    0.8176    535.1459   583.8810   586.0022


Figure 2. The risk function (events/year) versus time (year) calculated by the original SRM (dotted line) and the new SRM (solid line). The M-t plot of the earthquake catalog is also shown.

Conclusions
We interpreted the nature of the risk function used in stress release models based on laboratory results, and confirmed that the stress level in the SRM can be identified with the true stress. Based on this result, we incorporated a static stress triggering model into the SRM and extended the SRM to multiple dimensions. The new SRM was applied to North China using the historical catalog, and we showed that the new model is more effective than the original one.

References
Akaike, H., 1977, On entropy maximization principle, in Applications of Statistics, edited by P. R. Krishnaiah, pp. 27-41, North-Holland, New York.
Liu, J., Chen, Y., Shi, Y. & Vere-Jones, D., 1999, Coupled stress release model for time dependent seismicity, Pure Appl. Geophys., 155, 649-667.
Knopoff, L., 1971, A stochastic model for the occurrence of main sequence events, Rev. Geophys. Space Phys., 9, 175-188.
Okada, Y., 1992, Internal deformation due to shear and tensile faults in a half-space, Bull. Seismol. Soc. Am., 82, 1,018-1,040.
Reid, H. F., 1910, The mechanism of the earthquake, in The California Earthquake of April 18, 1906, Report of the State Earthquake Investigation Commission, vol. 2, pp. 16-28, Carnegie Institute of Washington, Washington, DC.
Scholz, C. H., 1968, Mechanism of creep in brittle rock, J. Geophys. Res., 73, 3,295-3,302.
Stock, C. & E. G. C. Smith, 2002, Adaptive kernel estimation and continuous probability representation of historical earthquake catalogs, Bull. Seismol. Soc. Am., 92, 904-912.
Vere-Jones, D., 1978, Earthquake prediction - a statistician's view, J. Phys. Earth, 26, 129-146.
Zheng, X. & Vere-Jones, D., 1991, Applications of stress release models to earthquakes from North China, Pure Appl. Geophys., 135, 559-576.
Zheng, X. & Vere-Jones, D., 1994, Further applications of the stochastic stress release model to historical earthquake data, Tectonophysics, 229, 101-121.

An Accelerating Moment Release Version of the Stress Release Model

Steven Johnston 1

1School of Mathematics, Statistics and Computer Science, Victoria University of Wellington, PO Box 600, Wellington, New Zealand

Corresponding Author: Steven Johnston, [email protected]

Abstract: We develop a new stochastic point process model for Accelerating Moment Release (AMR), by combining features of the Stress Release Model and a previous point process model for AMR. Simulated earthquake sequences are produced from the new model and tested for the AMR pattern in the lead-up to the largest events.

1. Accelerating moment release The concept of Accelerating Moment Release (AMR) is based on the observation that in the time leading up to some large earthquakes, the seismic activity in a region surrounding the epicentre of the large earthquake is not constant, but increases as the large earthquake approaches. In particular, it is often observed that the accumulation of (scalar) seismic moment released by earthquakes in the region can be well approximated by a power-law of the time remaining until the large earthquake. That is,

Σ_{t_i ≤ t} S_i^α = A − B(t_f − t)^m,  (1)

where:
• the t_i's and S_i's are the times and seismic moments of the earthquakes,
• t_f is the time of the large event,
• A, B and m are parameters to be fitted from the data (A > 0, B > 0 and 0 < m < 1),
• and α is usually taken to be 0.5.
See Jaume and Sykes (1999) for a review of the observational evidence and further discussion. Vere-Jones et al (2001) developed a marked point process model for AMR, in which the conditional intensity function (instantaneous rate of events) has the form

λ(t) = λ0 (1 − t/t_f)^{−k1}.  (2)

The mark associated with each event is assumed to be a seismic moment from a tapered Pareto distribution, whose upper "corner" parameter can change over time so that the mean moment has the form

µ(t) = µ0 (1 − t/t_f)^{−k2}.  (3)

The parameters k1 and k2 are such that k1 + k2 = 1− m . This allows for the acceleration in seismic moment release to be a result of either an increase in the number of earthquakes, or an increase in the sizes of the earthquakes, or a balance between the two factors. A limitation of this model is that it only describes the activity in one AMR “episode”. How should the model be modified to include the seismic activity after the large earthquake at tf? This is the main focus of the present study.
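The acceleration implied by equation (1) is easy to visualize: the cumulative quantity A − B(t_f − t)^m rises ever faster as t approaches t_f. A sketch with arbitrary illustrative parameters:

```python
import numpy as np

def amr_cumulative(t, A, B, m, t_f):
    """Right-hand side of equation (1): A - B*(t_f - t)**m, for t < t_f."""
    return A - B * (t_f - t) ** m

# Illustrative parameters satisfying A > 0, B > 0, 0 < m < 1.
A, B, m, t_f = 100.0, 5.0, 0.3, 10.0
t = np.linspace(0.0, 9.9, 100)
cum = amr_cumulative(t, A, B, m, t_f)

# The increments grow toward t_f: accelerating moment release.
increments = np.diff(cum)
```

Because 0 < m < 1, the slope of the curve diverges as t → t_f, which is what the AMR tests on the simulated catalogs look for.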

2. The stress release model The Stress Release Model (SRM) was proposed by Vere-Jones (1978) as a stochastic version of the elastic rebound theory of earthquakes. It assumes that the probability of occurrence of an earthquake at time t is related to the underlying “stress”, X(t) , of the region, which is given by

X(t) = X(0) + ρt − Σ_{t_i < t} S_i,  (4)

λ(t) = f(X(t)).  (5)

f is usually taken to be an exponential function.
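A minimal sketch of equations (4) and (5), with a hypothetical event history and illustrative loading-rate and exponential-f parameters:

```python
import math

def stress(t, x0, rho, events):
    """Equation (4): X(t) = X(0) + rho*t - sum of S_i over events with t_i < t."""
    return x0 + rho * t - sum(s for ti, s in events if ti < t)

def srm_intensity(t, x0, rho, events, mu=-1.0, nu=0.5):
    """Equation (5) with the usual exponential choice f(X) = exp(mu + nu*X).
    mu and nu are illustrative values, not fitted ones."""
    return math.exp(mu + nu * stress(t, x0, rho, events))

# Hypothetical history of (occurrence time, stress release).
events = [(2.0, 3.0), (5.0, 1.5)]
```

Between events the intensity rises exponentially with the linearly accumulating stress; each event knocks it down by a factor exp(−ν·S_i).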

3. A new model The key idea of this poster is to formulate a new model that combines elements of Vere-Jones et al’s (2001) AMR model and the SRM. Let

λ(t) = λ0 (X_c − X(t))^{−k},  (6)

where X_c is a critical stress level needed to invoke a large event, analogous to the failure time t_f in equations (1) to (3). The right-hand side of (6) is a function of the underlying stress X(t), and therefore the model is a form of SRM. The conditional intensity has the same form as in Vere-Jones et al's (2001) AMR model (equation (2)), although it is a function of the stress rather than of time. As in the SRM, the conditional intensity increases between events and drops back by an amount related to the size of the earthquake when an event occurs. In the new scenario, the increase in the conditional intensity between earthquakes follows the AMR-type power law, rather than the exponential shape of the usual SRM. The new approach allows the modeling of multiple episodes of AMR, i.e. a longer sequence of earthquakes, containing multiple large events, in which the AMR pattern can be seen in the lead-up to each of the large events. It is not restricted, as the Vere-Jones et al (2001) model is, to modeling the seismic activity before only one large event. This is demonstrated in the poster by simulating from the model and testing for the AMR pattern using the methods developed by Robinson et al (2005) and Zhou et al (2005).
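A sketch of the new intensity (6), with illustrative λ0 and k (the stress X would evolve as in equation (4)):

```python
def amr_srm_intensity(x, x_c, lam0=0.1, k=1.5):
    """Equation (6): lambda = lam0 * (X_c - X)**(-k), defined for X < X_c.
    lam0 and k are illustrative values, not fitted ones."""
    if x >= x_c:
        raise ValueError("stress at or above the critical level X_c")
    return lam0 * (x_c - x) ** (-k)
```

As X rises toward X_c between events, the intensity follows the AMR-type power law in (X_c − X); when an event releases stress, X drops and the intensity falls back, which is what permits repeated AMR episodes.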

4. References Jaume, S. C., and Sykes, L. R. (1999), Evolving towards a critical point: a review of accelerating seismic moment/energy release prior to large and great earthquakes , Pure Appl. Geophys. 155 , 279-306. Robinson, R., Zhou, S., Johnston, S., and Vere-Jones, D. (2005), Precursory accelerating seismic moment release (AMR) in a synthetic seismicity catalog: a preliminary study , Geophys. Res. Lett. 32 , L07309, doi:10.1029/2005GL022576. Vere-Jones, D. (1978), Earthquake prediction: a statistician’s view , J. Phys. Earth 26 , 129-146.

Vere-Jones, D., Robinson, R., and Yang, W. (2001), Remarks on the accelerated moment release model: problems of model formulation, simulation and estimation , Geophys. J. Int. 144 , 517-531. Zhou, S., Johnston, S., Robinson, R., and Vere-Jones, D. (2005), Tests of the precursory accelerating moment release (AMR) model using a synthetic seismicity model for Wellington, New Zealand , submitted to J. Geophys. Res.

Spatial distribution and time variation in seismicity around the Antarctic Plate before and after the M9.0 Sumatra Earthquake, 26 December 2004

Masaki Kanao 1, Yoshifumi Nogi 1 and Seiji Tsuboi 2

1 Department of Earth Science, National Institute of Polar Research, 1-9-10 Kaga, Itabashi-ku, Tokyo 173-8515, Japan [[email protected], [email protected] ] 2 Institute for Research on Earth Evolution, Japan Agency for Marine-Earth Science and Technology, 2-15 Natsushima-cho Yokosuka 237-0061, Japan [ [email protected] ]

A large earthquake (magnitude 9.0, 3.30N, 95.96E, depth = 30 km) occurred off the west coast of northern Sumatra on December 26, 2004. It is the fourth largest earthquake in the world since 1900 and the largest since the 1964 Alaska earthquake. In total, more than 280,000 people were killed, 14,000 are still listed as missing, and 1,100,000 were displaced by the earthquake and subsequent tsunami in 10 countries in South Asia and East Africa. The tsunami caused more casualties than any other in recorded history and was recorded nearly worldwide on tide gauges in the Indian, Pacific and Atlantic Oceans, including the coastal area of Antarctica. In this study, the spatial distribution and time variations in seismicity around the Antarctic Plate – Indian Ocean region (0-160E, 20-80S) are evaluated in relation to the occurrence of large earthquakes, such as the Sumatra event, based on data compiled since 1964 by the International Seismological Centre. The Antarctic continent and surrounding ocean were believed for many decades to be one of the aseismic regions of the Earth. However, the following four earthquakes with magnitude 8.0 and greater have been observed since 1990 around the Antarctic Plate – Indian Ocean region: 1) Balleny Islands region (Mw = 8.1, March 25, 1998), 2) north of Macquarie Island (Mw = 8.1, December 23, 2004), 3) off the west coast of northern Sumatra (Mw = 9.0, December 26, 2004), and 4) northern Sumatra, Indonesia (Mw = 8.7, March 28, 2005). Although only 12 events with magnitude 8.0 and greater have been recorded worldwide since 1990, the occurrence of these four large earthquakes implies a recent anomalous dynamic feature of the oceanic lithosphere in the surrounding region. Hypocentral data since 1994, however, indicate no significant events with magnitude 7.0 and greater inside the Antarctic Plate or along the oceanic ridges / transform faults in the Indian Ocean.

Moreover, fewer than 30 earthquakes with magnitude 6.0 and greater were observed in the region during these ten years. The spatial distribution and time variations in seismicity around the oceanic ridges between the Antarctic Plate and the Indian-Australian Plate show characteristic features before and after the large earthquakes. Seismicity of the aseismic ridges in the area eastward of the Australia-Antarctic Discordance (AAD) increased in the year before the occurrence of the Balleny Islands earthquake in March 1998, as well as before the off-Sumatra earthquake in December 2004. Seismicity of the oceanic ridge around 90E, moreover, shows similar activity associated with the occurrence of large events. The most notable area in the Indian Ocean is the Kerguelen Plateau, which has relatively high seismicity compared with the surrounding Indian Ocean. Hot-spot activity, together with high heat flow around the Plateau, is involved in the excitation of this high seismicity. Notwithstanding the development of global seismic networks and local seismic arrays, there is a possibility that several small earthquakes occurred undetected around the aseismic ridges, involving non-volcanic extension along the oceanic plate boundaries. Seismic activity in these regions reflects the far-field tectonic stress around the Antarctic Plate – Indian Ocean region.

An experiment in intermediate-term earthquake prediction: possibility of an M 8 earthquake occurring off the coast of eastern Hokkaido in 2005

K. Katsumata, and M. Kasahara

Institute of Seismology and Volcanology, Hokkaido University, Sapporo 060-0810, Japan

Corresponding Author: K. Katsumata, [email protected]

Abstract: We observed precursory seismicity rate changes lasting five years prior to the two recent great earthquakes off the coast of eastern Hokkaido: the Hokkaido-Toho-Oki earthquake in 1994 (M = 8.3) and the Tokachi-Oki earthquake in 2003 (M = 8.3). A seismographic network maintained by Hokkaido University has detected another significant seismicity rate change, and we have identified the signal as an intermediate-term precursor to a great interplate subduction earthquake in the near future. The seismicity rate decreased by 48% in June 2000 around the northeastern edge of the asperity ruptured by the Nemuro-Oki earthquake in 1973 (M = 7.4). Based on the above empirically supported information, we can state that if an M ≈ 8 earthquake follows the Nemuro-Oki quiescence with the same time interval as the Tokachi-Oki earthquake, the time of occurrence could be in 2005.

1. Introduction Several significant cases support the hypothesis that seismic quiescence precedes large earthquakes [e.g. Mogi, 1969; Ohtake et al., 1977; Wyss, 1985]. Wyss and Habermann [1988] summarized seventeen cases of precursory seismic quiescence to main shocks with magnitudes ranging from M = 4.7 to 8.0, and found that (1) the rate of decrease ranges from 45% to 90%, (2) the duration of the precursors ranges from 15 to 75 months, (3) if a false alarm is defined as a period of quiescence with a significance level larger than a precursory quiescence in the same tectonic area, the false alarm rate may be on the order of 50%, and (4) failure to predict may be expected in 50% of the main shocks, even in carefully monitored areas. Among the seventeen cases, three can be considered successful predictions in the sense that a quiescence anomaly was recognized and interpreted as a precursor before the occurrence of the main shock [Ohtake et al., 1977; Kisslinger, 1988; Wyss and Burford, 1985; Wyss and Burford, 1987]. Recently, more reliable examples of seismic quiescence were reported: the Spitak (M = 7.0) earthquake in 1988 [Wyss and Martirosyan, 1998], the Landers (M = 7.5) earthquake in 1992 [Wiemer and Wyss, 1994],

and the Hokkaido-Toho-Oki (M = 8.3) earthquake in 1994 [Katsumata and Kasahara, 1999]. Based on these previous studies, it is now possible to predict some, if not all, major earthquakes several years prior to the main shock by monitoring the seismicity rate. Unfortunately, almost all precursory seismicity rate changes were identified not before but after the main shock. The purpose of the present study is to locate significant seismicity rate changes and identify them as precursors to a main shock before its occurrence. This is an experiment in intermediate-term earthquake prediction in the Hokkaido subduction zone.

2. Data and Analysis A temporally homogeneous seismic catalog is the most important factor for the success of this experiment. Seismicity data sets are very simple lists, including the origin time, latitude, longitude, depth of hypocenter, and magnitude. However, they reflect the output of an extremely complex and highly variable detection and reporting system. These complexities

cause man-made changes in all seismicity data sets, whether local, regional, or global [Habermann, 1987]. In this paper, we have produced a new seismic catalog that is free from man-made changes, which allows for a more reliable analysis than was possible in previous studies. We manually reexamined all waveform data, one by one, from eight thousand earthquakes with M ≥ 3.0 located by the seismographic network of Hokkaido University from January 1994 to September 2004. We considered the following three points in the reexamination process. First, seventeen seismographic stations, which had already been installed in 1994, were selected. No station installed after 1994 was used, because the installation of new seismic stations can cause man-made changes such as a magnitude shift and magnitude-scale compression or stretching. In addition, the station combination was fixed while calculating magnitudes and reading P and S wave arrival times, so that the conditions of hypocenter determination remained constant. Finally, a single person carried out the reexamination of the waveform data, to avoid differences among individuals. From this new seismic catalog, we selected 4162 earthquakes within the subducting Pacific plate, i.e., located deeper than the upper boundary of the deep seismic zone modeled by Katsumata et al. [2003]. Clustered events such as aftershocks and earthquake swarms were removed by Reasenberg's algorithm [Reasenberg, 1985], and the aftershocks of the 2003 Tokachi-Oki earthquake were manually identified and deleted. From the declustered catalog, we selected 2887 earthquakes with M ≥ 3.3, which formed the basis of our analysis. Using the algorithm GENAS [Habermann, 1983], we found that the reporting in this catalog was homogeneous except for the period after the Tokachi-Oki earthquake.
The reasons for the heterogeneous reporting are: (1) some aftershocks remained in the catalog, and (2) seismicity was activated in the entire study area following the Tokachi-Oki earthquake. We used the gridding technique ZMAP [Wiemer and Wyss, 1994] to produce an image of the significance of rate changes in space and time. The study area was divided into grids spaced at 0.1° in latitude and longitude. A circle was drawn around each node and its radius r was increased until it included a number N = 100 earthquakes. The circle defines the resolution circle for a given N. The cumulative number of events vs. time was plotted for each node. The cumulative number plot started at a time t0 (January 1, 1994) and ended at a time te (September 30, 2004). We place a time window starting at t1 and ending at t2, where t0 ≤ t1 ≤ t2 ≤ te and Tw = t2 - t1 (here Tw = 4.0 years), and move t1 through time, stepping forward by one month. For each window position, we calculate the Z-value, generating the function LTA defined by Wiemer and Wyss [1994], which measures the significance of the difference between the mean seismicity rate within the window Tw, R2, and the background rate R1, defined here as the mean rate in the period between t0 and te, excluding Tw. The Z-value is defined as

Z = ( R1 - R2 ) / ( S1/n1 + S2/n2 )^(1/2) ,

where S1, S2 and n1, n2 are the variances and numbers of samples of the two rates, respectively.
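The Z-value calculation can be sketched as follows. This is a minimal illustration with monthly binning only, not a reproduction of the ZMAP grid search; the toy catalog and all numbers in it are invented for the example.

```python
import numpy as np

def z_value(r1, v1, n1, r2, v2, n2):
    """Z-statistic for the difference between the background rate R1
    and the windowed rate R2, as in Wiemer and Wyss [1994]."""
    return (r1 - r2) / np.sqrt(v1 / n1 + v2 / n2)

def lta(times, t0, te, t1, tw, bin_width=1.0 / 12.0):
    """Z-value of the window [t1, t1 + tw] against the rest of
    [t0, te]; rates are event counts in monthly bins."""
    edges = np.arange(t0, te + bin_width, bin_width)
    counts, _ = np.histogram(times, edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    in_win = (centers >= t1) & (centers < t1 + tw)
    w, bg = counts[in_win], counts[~in_win]
    return z_value(bg.mean(), bg.var(ddof=1), bg.size,
                   w.mean(), w.var(ddof=1), w.size)

# toy catalog: ~100 events/yr during 1994-2000, then a quiescence
rng = np.random.default_rng(0)
t = np.sort(np.concatenate([rng.uniform(1994, 2000, 600),
                            rng.uniform(2000, 2004, 100)]))
print(lta(t, 1994.0, 2004.0, 2000.0, 4.0))  # strongly positive Z
```

A strongly positive Z flags a rate deficit (quiescence) in the window relative to the background, which is the signal tracked in this study.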

3. Seismicity Rate Change As a result of scanning time and space using ZMAP [Wiemer and Wyss, 1994], we identified nine areas with significant rate changes. Two temporal synchronizations were observed. The first one started between August and October 1998 in Areas 1, 2, and 3. Seismicity rate decreased in Area 1 near the asperity ruptured by the 2003 Tokachi-Oki earthquake [Yamanaka and Kikuchi, 2003], and increased in Areas 2 and 3. Depths of hypocenter in Areas 2 and 3 are in the range of 120-270 km and 240-340 km, respectively.

Seismic quiescence in the shallow part appears to correlate with seismic activation in the deep part of the subducting Pacific plate. The second synchronization started between March and July 2000 in Areas 4, 5, 6, 7, and 9. Seismicity rate decreased in Areas 4 and 9, and increased in Areas 5, 6, and 7. Depths of hypocenter in Areas 5 and 6 are in the range of 110-250 km and 220-360 km, respectively. Except for Areas 7 and 9, seismic quiescence in the shallow part appears to correlate with seismic activation in the deep part, as in the case of the synchronization in 1998. Moreover, a triangle consisting of Areas 1, 2, and 3 moved eastward to a triangle consisting of Areas 4, 5, and 6. The synchronized change started in 1998, lasting five years; it was followed by the M = 8.3 Tokachi-Oki earthquake in September 2003. The change does not appear to be a man-made change because we have reexamined all waveform data and the seismic quiescence was observed homogeneously in the magnitude band from 3.3 to 4.5. The quiescence occurred on the edge of the asperity ruptured by the main shock. This is consistent with a numerical simulation of an earthquake cycle based on a friction law [Kato and Hirasawa, 1999]. The simulation suggested that (1) the stress changes due to aseismic sliding cause a decrease in the seismicity rate lasting several years before the main shock and (2) the decreasing areas are located around the edge rather than the center of the asperity. Mogi [2004] pointed out that the deep earthquake seismicity increased prior to a great shallow subduction earthquake. His studies focus on earthquakes with a magnitude larger than 5. Our results show that deep earthquakes smaller than M = 5 also became more frequent prior to a great shallow earthquake. 
In order to estimate the probability of these seismicity rate changes occurring by chance, we obtained the distribution of Zmax and Zmin for Tw = 4 years by calculating LTA functions for 2000 samples of 100 earthquakes randomly selected from the declustered catalog used in this study. The calculation was repeated 10000 times. The Z-values below which 50, 95, and 99 per cent of the results fall are 3.01, 3.48, and 3.70 for Zmax, respectively, and -3.08, -3.46 and -3.65 for Zmin, respectively. These percentages are shown in Table 1 as an indicator of reliability. Even though the statistical confidence level was not very high in Area 1 and the physical mechanism of the synchronization is unknown, we suggest that the first synchronized seismicity change is a candidate for an intermediate-term precursor to the 2003 Tokachi-Oki earthquake.
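The Monte Carlo calibration of Zmax can be sketched at reduced scale (500 random catalogs instead of 10000, and a simplified stationary-Poisson null with monthly binning instead of resampling from the declustered catalog), so the resulting percentiles are illustrative only:

```python
import numpy as np

def z_series(counts, win):
    """LTA: Z-value of each length-`win` window of monthly counts
    against the remaining months [Wiemer and Wyss, 1994]."""
    zs = []
    for i in range(counts.size - win + 1):
        w = counts[i:i + win]
        bg = np.concatenate([counts[:i], counts[i + win:]])
        se = np.sqrt(w.var(ddof=1) / w.size + bg.var(ddof=1) / bg.size)
        zs.append((bg.mean() - w.mean()) / se)
    return np.array(zs)

# null distribution of Zmax: 100 events scattered at random over
# ~129 months (1994-2004), window Tw = 48 months
rng = np.random.default_rng(1)
months, win = 129, 48
zmax = []
for _ in range(500):                       # 10000 repetitions in the text
    t = rng.uniform(0.0, months, 100)
    counts, _ = np.histogram(t, bins=np.arange(months + 1))
    zmax.append(z_series(counts, win).max())
print(np.percentile(zmax, [50, 95, 99]))
```

An observed Z exceeding the upper percentiles of this null distribution is then unlikely to be a chance fluctuation.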

4. Discussion The most important point to consider is whether the second synchronized change, which started in the first half of 2000, is a precursor to another great earthquake. Since large earthquakes occurred in 1894 and 1973 in the Nemuro-Oki area [Shimazaki, 1974], this area should not be considered aseismic; rather, crustal stress has been released there repeatedly by large earthquakes. There are two reasons why we believe that stress large enough to produce a great earthquake has accumulated in this area. First, the M = 7.4 earthquake in 1973 released only about half the energy of the M = 7.9 earthquake in 1894 [Shimazaki, 1974]. Second, since the seismic coupling is almost 100% in the region, as shown by data from the GPS network, a coseismic slip of 260 cm is expected 32 years after the 1973 earthquake. If the area of the seismic fault is 100 × 100 km2, the moment magnitude will be 7.9. In addition to the stress accumulation, the seismic quiescence in Area 4 shows a Z-value of +3.6, which is more significant than the seismic quiescence prior to the 2003 Tokachi-Oki earthquake, and the area is located on the deeper edge of the asperity ruptured by the 1973 event. It is possible that the seismic quiescence in Area 4 is caused by precursory aseismic sliding that begins several years before the main shock, as was the case with the 2003 Tokachi-Oki earthquake. The coseismic slip and the following after-slip of the 2003 Tokachi-Oki earthquake [Ozawa et al., 2004] could induce a low-angle-thrust-type reverse fault slip on the plate boundary in the Nemuro-Oki area. Another notable fact is that rapid subsidence has been recorded at HNS in the Nemuro peninsula. The rate of subsidence increased from 0.0 to 0.6 cm/year in 1999, as estimated from tide gauge data. Rapid subsidence had occurred twice before at HNS [Katsumata et al., 2002]. First, the subsidence rate increased from 0.3 to 1.5 cm/year four years before the 1973 Nemuro-Oki earthquake. After the earthquake, the rapid subsidence stopped and the rate was 0.3 cm/year up to 1990. Another rapid subsidence started in 1990, with a rate of 1.1 cm/year, five years before the 1994 M = 8.3 Hokkaido-Toho-Oki earthquake. After the earthquake, the rapid subsidence stopped and the rate was 0.0 cm/year up to 1999. A third rapid subsidence started in 1999 and has continued at HNS. The rapid subsidence was not recorded at KSR, 100 km southwest of HNS, during this period, indicating a signal source closer to the HNS tide gauge. Recently, Murakami at the Geographical Survey Institute found that a slow slip with a velocity of 10 cm/year started in mid-2004 on the plate boundary in Area 4, based on data from GEONET, a GPS network in Japan [Murakami, 2004, personal communication]. He interpreted the slip as the after-slip associated with the 2003 Tokachi-Oki earthquake. However, we suggest that the slip is a probable precursory slip to the Nemuro-Oki earthquake that is predicted in this paper.
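The back-of-the-envelope magnitude estimate above can be checked with the standard moment-magnitude relation Mw = (2/3)(log10 M0 − 9.1); the rigidity value is our assumption, not stated in the text:

```python
import math

# slip deficit accumulated since the 1973 Nemuro-Oki earthquake,
# assuming 100% seismic coupling (260 cm over 32 years)
slip_m = 2.60                  # expected coseismic slip in metres
area_m2 = 100e3 * 100e3        # 100 km x 100 km fault plane
mu = 3.0e10                    # rigidity in Pa (assumed typical value)

m0 = mu * area_m2 * slip_m     # scalar seismic moment M0 in N m
mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)
print(round(mw, 1))            # 7.9, consistent with the text
```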

5. Conclusion Based on the seismic quiescence hypothesis, the ongoing quiescence in the Nemuro-Oki area is interpreted as an intermediate-term precursor to a great earthquake. The seismic quiescence precursor to the Tokachi-Oki earthquake in September 2003 started in October 1998. The Nemuro-Oki quiescence started in June 2000, twenty months after the Tokachi-Oki quiescence started. Although there is approximately a fifty percent probability of a false alarm, we venture to state that if a great earthquake follows the Nemuro-Oki quiescence with the same time interval as the Tokachi-Oki earthquake, the time of occurrence could be in 2005.

6. Acknowledgement We thank Stefan Wiemer for providing ZMAP software.

Temporal Power-laws on Preseismic Activation and Aftershock Decay Affected by Transient Behavior of Rocks

Y. Kawada1, H. Nagahama1

1Department of Geoenvironmental Sciences, Graduate School of Science, Tohoku University, 6-3 Aramaki-Aza-Aoba, Aoba-ku, Sendai 980-8578, Japan

Corresponding Author: Y. Kawada, [email protected]

Abstract: A constitutive law for rock behavior, including the brittle, transient and steady-state behaviors, is derived in order to recognize the temporal seismicity patterns in surface displacement and prior and subsequent to mainshocks; it is formulated as a relaxation modulus (i.e., the ratio of stress to strain) following a temporal power-law relaxation. This constitutive law is transformable into the empirical power-law creep equation, and the exponent of the constitutive law is a variable reflecting the fractal structures of rocks in response to the different deformation mechanisms. By the time-scale invariance of our law, the temporal seismicity pattern is recognized in the surface displacement due to a major large earthquake with many small events, in the power-law increase in the cumulative Benioff strain release as preseismic activation, and in the power-law aftershock decay represented by the modified Omori law. The exponents of the temporal power laws for the cumulative Benioff strain release and the aftershock decay are linked to that of the constitutive law of rocks, and change coupled with the fractal structure of the crustal rocks.

1. Introduction We study the constitutive law for rock behavior in order to recognize temporal seismicity patterns in surface displacement and prior or subsequent to mainshocks. To this end, the law must include the brittle as well as the viscoelastic behavior of rocks, and must express the transient response of rocks, i.e., the response to sudden changes in stress and strain rate such as earthquakes. The brittle behavior has been formulated by damage mechanics and the fibre-bundle model (e.g., Turcotte et al., 2003; Nanjo et al., 2005), and the transient behavior is derived as a temporal power-law relaxation called the long time tail or temporal fractal relaxation (e.g., Nagahama, 1994; Kawada and Nagahama, 2004). However, a constitutive law satisfying both behaviors has not been derived. The preseismic activation given by the increase in cumulative Benioff strain release and the aftershock decay represented by the modified Omori law follow temporal power laws. Although these processes prior or subsequent to major earthquakes are kinds of behavior of the crustal rocks, they have not been related to the temporal fractal property of the transient behavior of rocks. On the other hand, in the percolation model of rock fracture processes (e.g., Chelidze, 1993) and the theory of power-law activation of spinodal instability (e.g., Rundle et al., 2000), the temporal power-law increase in cumulative Benioff strain release is formulated with a fixed exponent independent of the size or location of the events. However, the exponents obtained from data analyses do not take fixed values (e.g., Bowman et al., 1998), and what controls the change in the exponent has not been discussed. In this presentation, from the constitutive law for the transient behavior of rocks, we recognize the temporal seismicity pattern of surface displacement, aftershock decay and cumulative Benioff strain release. In particular, we state what property of the rocks affects the exponents of the temporal power laws for the cumulative Benioff strain release and aftershock decay, based on the constitutive law.

2. Constitutive law for transient behavior of rocks The transient behavior of viscoelastic materials is generally formulated based on the relaxation modulus E(t) (t is the deformation time), viz., dσ/dε. For analytical reasons, the secant modulus ES(t) = σ/ε is utilized instead of E(t). Analyses of the experimental data on the transient behaviors of halite (Shimamoto, 1987; Nagahama, 1994), and of marble and lherzolite (Kawada and Nagahama, 2004), yield the formulation:

σ/ε ≡ ES(ξ) = (E'/g(ε)) ξ^(-β) , ξ = (t/C) exp(-Q/RT) , (1)

where σ is the applied stress, ε is the strain, g(ε) is a strain-dependent function (g(ε) = 1 + aε), β is a positive exponent, Q is the activation energy, R is the universal gas constant, T is the absolute temperature, E' and a are material constants, and C is a constant. Moreover, ξ stands for the temperature-reduced time obtained by normalizing the behaviors at various temperatures. The power law in ξ (or t) in Eq. (1) shows the time-scale invariance of the rock behavior, called the temporal fractal property or long time tail (e.g., Nagahama, 1994). Equation (1) leads to E(ξ) ∝ ξ^(-β) (e.g., Kawada and Nagahama, 2004). The temporal power-law decay function can be considered as a superposition of exponential decay functions with different lifetimes λ. Then E(t) is derived also by the Laplace transformation of the distribution function of λ as follows:

E(t) = ∫0^∞ D(λ) exp(-t/λ) dλ , (2)

where D(λ) is the distribution function of λ. When D(λ) ∝ λ^(-β-1), E(t) shows the temporal power-law relaxation. This means that the structural fractal property of the rocks, given by D(λ) ∝ λ^(-β-1), yields the long-time-tail behavior of rocks, and β is related to the fractal distribution of the elements composing the rocks (e.g., Nagahama, 1994). When the strain rate is ε̇ = ε/t, Eq. (1) generates the relation (e.g., Kawada and Nagahama, 2004):

ε̇ = [ (g(ε)/(εE')) (ε/C)^β ]^(1/β) σ^(1/β) exp(-Q/RT) . (3)

Equation (3) is a stress power-law relation with a thermally activated process, and reduces to the empirical equation for the steady-state flow law. Then 1/β is the stress exponent reflecting the deformation mechanism, and the empirical steady-state creep equation is a special case of Eq. (3) from the mathematical perspective. In this sense, the power-law relation between E(ξ) and ξ (or E(t) and t) yields the relation ε̇ ∝ σ^(1/β), and represents the transient as well as the steady-state behavior (e.g., Kawada and Nagahama, 2004). The brittle behavior of rocks has been formulated by continuum damage mechanics and the fibre-bundle model (e.g., Turcotte et al., 2003). In particular, Nanjo and Turcotte (2005) pointed out that the brittle behavior of rocks is expressed by the stress power law ε̇ ∝ σ^(1/β) based on a 1D fibre-bundle model. Hence, the power law ε̇ ∝ σ^(1/β), or E(t) ∝ t^(-β), represents the brittle as well as the viscoelastic (transient and steady-state) behaviors of rocks. Based on this constitutive law, we investigate the temporal seismicity pattern in the next section.
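That D(λ) ∝ λ^(-β-1) in Eq. (2) produces E(t) ∝ t^(-β) can be verified numerically; the sketch below truncates the lifetime range for the quadrature, which is harmless well inside the scaling regime:

```python
import numpy as np

# numerical check that Eq. (2) with D(lambda) ~ lambda^(-beta-1)
# yields the power-law relaxation E(t) ~ t^(-beta)
beta = 0.5
lam = np.logspace(-6, 6, 20001)        # lifetimes lambda (truncated)

def relax_modulus(t):
    """E(t) = integral of D(lambda) exp(-t/lambda) d(lambda)."""
    f = lam ** (-beta - 1.0) * np.exp(-t / lam)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lam))  # trapezoid rule

# the log-log slope of E(t) between two well-separated times
t1, t2 = 1.0, 100.0
slope = np.log(relax_modulus(t2) / relax_modulus(t1)) / np.log(t2 / t1)
print(slope)                           # approximately -beta = -0.5
```

Analytically, substituting u = t/λ in Eq. (2) gives E(t) = Γ(β) t^(-β) for D(λ) = λ^(-β-1), which the numerical slope reproduces.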

3. Various power laws in the temporal seismicity patterns
3.1 Surface displacement time series The surface displacement time series symbolizes a temporal seismicity pattern as a summation of transient responses of the surface, on various time scales, to earthquakes. The transient responses on a scale of a few years to a single major large earthquake have often been analyzed by the experimental steady-state creep equation of rocks, ε̇ ∝ σ^(1/β) with exponent 1/β = 3.5 (β ≈ 0.29), with which the time series observed by GPS (e.g., Freed and Bürgmann, 2004) is analyzed. On the other hand, the time series due to major earthquakes with small events can be recognized by the temporal fractal property of our constitutive law of rocks in Eq. (1) with β = 0.95, with which the time series observed by creep meters (e.g., Wesson, 1987) is analyzed.
3.2 Modified Omori's law The temporal fractal decay of aftershocks is represented by the modified Omori law:

dN/dτ = B(τ + c)^(-p) , (4)

where τ is the time elapsed after the main shock, dN/dτ is the rate of occurrence of aftershocks greater than a given magnitude, p is a positive exponent, and B and c are constants. Based on continuum damage mechanics (e.g., Nanjo et al., 2005), the modified Omori law is related to the stress power law in Eq. (3) by β = 1 - 1/p.
3.3 Cumulative Benioff strain release Seismic activation has been observed prior to many major earthquakes, and has been quantified by the time series of the cumulative Benioff strain release Ω, i.e., the summation of the square roots of the energy released in an earthquake sequence:

Ω(t) = Ωc - K(tc - t)^s , (5)

where tc is the occurrence time of the earthquake, Ωc is the cumulative Benioff strain release at t = tc, s is a positive exponent, and K is a constant (e.g., Bowman et al., 1998). s is approximately equal to 0.25 (e.g., Rundle et al., 2000). Based on the fibre-bundle model and the continuum damage model (e.g., Turcotte et al., 2003), the power-law increase in the Benioff strain release is linked to the stress power law in Eq. (3) by β = 2(s - 1)/(3s - 2). In this section, we have shown that the temporal seismicity pattern in the surface displacement and prior or subsequent to mainshocks can be recognized by the long time tail of the transient behavior of rocks.
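The exponent relations quoted in Sections 3.2 and 3.3 can be collected in a few lines; the inverse mapping s(β) is our algebraic rearrangement of the relation from the text:

```python
def beta_from_p(p):
    """Modified Omori law, Section 3.2: beta = 1 - 1/p."""
    return 1.0 - 1.0 / p

def beta_from_s(s):
    """Benioff strain release, Section 3.3: beta = 2(s - 1)/(3s - 2)."""
    return 2.0 * (s - 1.0) / (3.0 * s - 2.0)

def s_from_beta(beta):
    """Inverse of beta_from_s; the same Moebius map, which is an
    involution: s = 2(beta - 1)/(3 beta - 2)."""
    return 2.0 * (beta - 1.0) / (3.0 * beta - 2.0)

# s ~ 0.25 (Rundle et al., 2000) corresponds to beta = 1.2, and
# applying the map twice returns the original s
print(round(beta_from_s(0.25), 6))               # 1.2
print(round(s_from_beta(beta_from_s(0.25)), 6))  # 0.25
```

These relations are what ties the seismicity exponents s and p to the rock-physics exponent β of Eqs. (1) and (3).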

4. Discussion In the percolation model of rock fracture processes (e.g., Chelidze, 1993) and the theory of power-law activation of spinodal instability (e.g., Rundle et al., 2000), the exponent of the temporal power law for the cumulative Benioff strain release is fixed regardless of the location or size of the target events, although the s-value obtained from data analyses is not fixed (e.g., Bowman et al., 1998). In Section 3.3, s in Eq. (5) is related to β in Eqs. (1) and (3). β is linked to the structural fractal property of rocks, as noted in Section 2, and the stress-sensitivity exponent 1/β depends on the deformation mechanism, such as brittle failure or dislocation creep, and ranges from 1 to 60 (Table 1). Therefore, a change in β produces a change in s (and in p in Eq. (4)). Namely, we point out that β is regulated by the fractal structures of rocks corresponding to the different deformation mechanisms, and that a change in these structures changes the exponents s and p.

Table 1. Experimental data of stress exponents 1/β and the corresponding deformation mechanisms (modified from Nagahama (1994) and Nakamura and Nagahama (1999)).

1/β    Mechanism          Material     |  1/β    Mechanism          Material
1.3    Diffusion creep    Anorthite    |  15.0   Primary creep      Halite
2.6    Dislocation        Quartzite    |  27.0   Brittle failure    Plagioclase
3.0    Dislocation        Dunite       |  32.0   Brittle failure    Granite
6.7    Dislocation        Halite       |  60.0   Brittle failure    Sandstone

5. Conclusion The temporal seismicity pattern on the surface displacement and prior or subsequent to the mainshocks can be recognized by the time-scale invariance of the transient behavior of rocks. The exponents of the temporal power-laws on the cumulative Benioff strain-release and modified Omori's law change linked with the fractal structure of crustal rocks in response to the different deformation mechanisms.

6. Acknowledgement We would like to thank I. Main and K.Z. Nanjo for their valuable comments. The first author (YK) was financially supported by the 21st Century Center-Of-Excellence program, "Advanced Science and Technology Center for the Dynamic Earth", of Tohoku University.

7. References
Bowman, D.D., Ouillon, G., Sammis, C.G., Sornette, A. and Sornette, D. (1998), An observational test of the critical earthquake concept, J. Geophys. Res. 103B, 24,359-24,372.
Chelidze, T.L. (1993), Fractal damage mechanics of geomaterials, Terra Nova 5, 421-437.
Freed, A.M. and Bürgmann, R. (2004), Evidence of power-law flow in the Mojave desert mantle, Nature 430, 548-551.
Kawada, Y. and Nagahama, H. (2004), Viscoelastic behaviour and temporal fractal properties of lherzolite and marble, Terra Nova 16, 128-132.
Nagahama, H. (1994), High-temperature viscoelastic behaviour and long time tail of rocks, in Fractal and Dynamical Systems in Geosciences, edited by J.H. Kruhl, Springer-Verlag, Berlin, pp. 121-129.
Nakamura, N. and Nagahama, H. (1999), Geomagnetic field perturbation and fault creep motion: a new tectonomagnetic model, in Atmospheric and Ionospheric Electromagnetic Phenomena Associated with Earthquakes, edited by M. Hayakawa, TERRAPUB, Tokyo, pp. 307-323.
Nanjo, K.Z. and Turcotte, D.L. (2005), Damage and rheology in a fibre-bundle model, Geophys. J. Int. 162, 859-866.
Nanjo, K.Z., Turcotte, D.L. and Shcherbakov, R. (2005), A model of damage mechanics for the deformation of the continental crust, J. Geophys. Res. 110, B07403, doi:10.1029/2004JB003438.
Rundle, J.B., Klein, W., Turcotte, D.L. and Malamud, B.D. (2000), Precursory seismic activation and critical-point phenomena, Pure Appl. Geophys. 157, 2165-2182.
Shimamoto, T. (1987), High-temperature viscoelastic behavior of rocks, in Proceedings of the 7th Japan Symposium on Rock Mechanics, Japanese Committee of Rock Mechanics, Tokyo, pp. 467-472 (in Japanese, with English abstract).
Turcotte, D.L., Newman, W.I. and Shcherbakov, R. (2003), Micro and macroscopic models of rock failure, Geophys. J. Int. 152, 718-728.
Wesson, R.L. (1987), Modelling aftershock migration and afterslip of the San Juan Bautista, California, earthquake of October 3, 1972, Tectonophysics 144, 215-229.
The effects of fault zone coupling on seismic cycling and aftershock generation

N. Kuehn1 and S. Hainzl1

1Institute of Geosciences, University of Potsdam, 14415 Potsdam, Germany

Corresponding Author: N. Kuehn, [email protected]

Abstract: A stochastic version of elastic rebound theory, the linked stress release model, is analyzed with respect to quasiperiodic behaviour and the generation of aftershocks. In particular, the effects of different coupling strengths are investigated. Simulation results indicate that large earthquakes do occur quasiperiodically in the stress release model. Coupling between seismogenic zones leads to a correlated occurrence of large earthquakes in adjacent regions. Moreover, large earthquakes generate aftershocks that follow the Omori law.

1. Introduction In seismic hazard analysis, the temporal evolution of seismicity is usually modeled by a Poisson process, based on the assumption that earthquakes occur independently of each other. The Poisson model, however, does not account for the correlated earthquake occurrence observed in nature, such as the foreshock and aftershock activity accompanying large earthquakes. Furthermore, an earthquake decreases the amount of strain present at the fault zone where rupture occurs (Reid, 1910). To account for these features, models more elaborate than the Poissonian have been proposed, such as the Epidemic Type Aftershock Sequence (ETAS) model (Ogata, 1988) for short-term clustering or the stress release model (Vere-Jones, 1978) for long-term clustering. The latter is a stochastic version of elastic rebound theory (Reid, 1910), which states that in a seismically active zone elastic strain energy, or stress, accumulates due to plate movement and is released by an earthquake that occurs when the stored energy exceeds a threshold. Here we analyze the stress release model with regard to its implications for seismic hazard. We study in particular the recurrence time distributions of large earthquakes and the impact of fault interactions.

2. The Model
The occurrence probability of an earthquake in the stress release model is governed by a conditional intensity function λ(t) = Ψ(S(t)) that depends on the actual stress level S(t) in the studied region. The stress level increases deterministically between two earthquakes due to tectonic loading and drops when an earthquake occurs. Its temporal evolution can be written as

S(t) = S(0) + ρt − Σ_{t_i < t} S_i,  (1)

where ρ is the loading rate and S_i is the stress released by the earthquake at time t_i.

In the linked stress release model, the stress level in region i evolves as

S_i(t) = S_i(0) + ρ_i t − Σ_j θ_ij S^(j)(t),  (3)

where S^(j)(t) is the accumulated stress release in region j for earthquakes that occurred prior to time t, and θ_ij is the fraction of stress transferred from region j to region i.

Figure 1: Left: Coefficient of variation in the case of the uncoupled (crosses), and for the seismicity of one region (triangles) and the whole fault network (rhombs) in the case of the coupled stress release model. The coefficient of variation was calculated for all events with a magnitude greater than mcut. Right: Aftershock activity in the case of the coupled (solid line) and uncoupled (dashed line) stress release model.

For simulation, the magnitude-frequency relationship as well as the conditional intensity function need to be specified. Magnitudes are chosen randomly from a truncated Gutenberg-Richter distribution, where the maximal magnitude depends on the actual stress level: the stress drop cannot be higher than the actual stress level. Our standard choice for the conditional intensity function is an exponential function

λ(t) = Ψ(S(t)) = exp[μ + νS(t)]  (4)

(Zheng and Vere-Jones, 1994). The simulations are carried out as follows: A grid of MxN regions is considered. Stress transfer is only allowed for neighbouring regions. The parameters of the truncated Gutenberg-Richter distribution are fixed by the desired frequency-magnitude distribution, and μ and ν remain as free parameters.
3. Simulation Results
Figure 1 shows the coefficient of variation CV, which is defined as the standard deviation divided by the mean of the interevent times, for the simple and the linked stress release model. Parameters were chosen to produce a maximum magnitude of 7.5 with a recurrence interval of 100a. In the case of no coupling, CV > 1 if all earthquakes are considered, whereas for large earthquakes CV < 1, which indicates that large earthquakes occur quasiperiodically. Correspondingly, the interevent time histogram is peaked. For the linked stress release model, this periodicity breaks down if we consider the whole system, whereas for each region the cycling is maintained. However, coupling leads to a correlated occurrence of large earthquakes in neighbouring regions. This can be seen by the peak in the distribution of the generalized phase difference in figure 2. The right plot of figure 1 shows the earthquake activity relative to mainshocks, which are defined as the largest event within one year. If the regions are coupled, the activity in adjacent regions clearly follows the Omori law.
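The coefficient of variation used in Figure 1 can be computed directly from a catalog of event times; a minimal sketch:

```python
def coefficient_of_variation(event_times):
    """CV of the interevent times: CV = 1 for a Poisson process,
    CV < 1 for quasiperiodic occurrence, CV > 1 for clustered activity."""
    dts = [b - a for a, b in zip(event_times, event_times[1:])]
    mean = sum(dts) / len(dts)
    var = sum((x - mean) ** 2 for x in dts) / len(dts)
    return var ** 0.5 / mean
```

Restricting `event_times` to events above a cutoff magnitude reproduces the CV(mcut) curves of the left panel of Figure 1.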

4. Discussion
We perform simulations both for h = 0.75 and h = 1.5. In the latter case, more stress is released by the largest events. This results in stronger effects for quasiperiodicity and aftershock generation. However, in both cases we find clear deviations from a Poisson process: The interaction between adjacent seismogenic zones does not randomize earthquake occurrence: Small earthquakes occur mostly in clusters and large earthquakes quasiperiodically in each region. In neighbouring regions, the occurrence of large earthquakes is correlated.

Figure 2: Generalized phase difference for two coupled regions; no coupling leads to a uniform distribution.

5. Conclusions
We investigated the simple and linked stress release model with regard to cycling and aftershock generation of large earthquakes. We show that cycling of large earthquakes is present in the simple stress release model. Nevertheless, large earthquakes in adjacent regions are correlated. Furthermore, large earthquakes generate aftershocks that reproduce Omori-type behaviour.

6. References
Bebbington, M., and Harte, D. (2003), The linked stress release model for spatio-temporal seismicity: formulations, procedures and applications, Geophys. J. Int., 22, 925-946.
Ogata, Y. (1988), Statistical models for earthquake occurrences and residual analysis for point processes, J. Amer. Statist. Assoc., 83, 9-27.
Reid, H. F. (1910), The mechanism of the earthquake, in The California Earthquake of April 18, 1906, Report of the State Earthquake Investigation Commission, Vol. 2, pp. 16-28, Carnegie Institute of Washington, Washington, DC.
Vere-Jones, D. (1978), Earthquake prediction - a statistician's view, J. Phys. Earth, 26, 129-146.
Zheng, X. and Vere-Jones, D. (1994), Further applications of the stochastic stress release model to historical earthquake data, Tectonophysics, 229, 101-121.

Exploratory Statistical Analysis with the Data of Various Observations in Tangshan Well

L. Ma1, T. Wang2, B. Yin1,3, W. Wang 1, J. Huang1, and C. Zhang1,4

1Institute of Earthquake Science, China Earthquake Administration, Beijing 100036, China 2 School of Mathematical Sciences, Beijing Normal University, Beijing 100875, China 3Tangshan Earthquake Administration, Tangshan 063000, China 4Laborary of Computational Geodynamics, Graduate School of the Chinese Academy of Sciences, Beijing 100049, China

Corresponding Author: L. Ma, [email protected]; [email protected]

Abstract: Nonlinear statistical methods are used in this paper to analyze and remove the influencing factors from the groundwater level records, and the relations among them are examined. The individual features of the Tangshan well are discussed.

1. Introduction
Many papers have reported research on the various observations recorded in groundwater wells, as shown in the references. Owing to geophysical and hydrologic processes, several kinds of factors influence and are mixed within the water level records. These effects are caused by atmospheric pressure, earth tide, rainfall, exploitation of underground water, earth deformation, seismically induced fluctuation, and so on. Several approaches for removing and analyzing these factors from the water level records have been studied in the references. In this paper, statistical analysis is explored with the data of the various observations in Tangshan well.

2. Data
Interesting data were found at the Tangshan well, whose location is shown in Figure 1; Figure 2 shows the columnar section of the well. The various observations at the Tangshan well started more than 30 years ago; the full information is given in Table 1. An especially notable phenomenon is that fluctuations of the water level in the Tangshan well were induced by 63 teleseisms (M≥7.0, NEIC catalogue) during 2001.10-2005.10, while at the same time the groundwater temperature was reduced. Three examples are shown in Figure 3.

Table 1. The time durations and sampling of the various observations in Tangshan well (columns: W, TW, TG, TA, P)

1971.01.01-1971.04.30: per day
1971.05.01-1973.12.31: per five days
1974.01.01-1982.12.31: per day
1983.01.01-2001.09.15: per hour
2001.09.16-2002.12.31: per minute / per minute / per minute
2003.01.01-2005: per minute / per minute

Note: W: groundwater level, TW: water temperature, TG: deep water temperature; TA: air temperature in room, P: air pressure in room.

3. Methods
Nonlinear statistical methods are used, such as Nonlinear Least Squares, the Seasonal Auto-Regression with Moving Averages model with a Unit Root Test, an earth tidal filter, the Fast Fourier Transform and spectral analysis, ... The seismic waves and the induced fluctuations of the groundwater observation data are analyzed, compared and filtered.

Figure 1. Historical seismicity in Tangshan Area (M≥4.7) and the location of the wells. Figure 2. Columnar section of Tangshan Well.

Figure 3. Three records of the fluctuation of the water level in Tangshan well.

The sequence of monthly data of the groundwater level X_t^w (Figure 4) is a Unit Root Process. A first difference is applied to the sequence, and the first-difference sequence shows a period of 12,

X_t^w − X_{t−1}^w = −0.8154 sin(2πt/12 + 14.5436) + η_t,  (1)

where η_t is the residual, which satisfies the following SARMA model,

η_t = 0.4985 η_{t−1} − 0.1855 η_{t−2} − 0.2035 η_{t−10} − 0.1015 η_{t−11} + 0.0377 η_{t−12} + ε_t,  (2)

where ε_t is white noise.
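The residual model of eq. (2) can be simulated as a plain autoregressive recursion driven by white noise; the coefficient values below follow the equation as printed and should be treated as illustrative:

```python
import random

def simulate_sarma_residual(n, coeffs, sigma=1.0, seed=0):
    """Simulate eta_t = sum over lags of coeffs[lag]*eta_{t-lag} + eps_t,
    with eps_t Gaussian white noise.  coeffs maps lag -> coefficient."""
    random.seed(seed)
    maxlag = max(coeffs)
    eta = [0.0] * maxlag                 # zero pre-sample values
    for _ in range(n):
        eta.append(sum(c * eta[-lag] for lag, c in coeffs.items())
                   + random.gauss(0.0, sigma))
    return eta[maxlag:]

# lag -> coefficient, as in eq. (2)
AR_COEFFS = {1: 0.4985, 2: -0.1855, 10: -0.2035, 11: -0.1015, 12: 0.0377}
```

The lags at 10-12 months carry the seasonal (period-12) structure left after the sinusoidal term of eq. (1).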

The Nonlinear Least Square method is used to fit atmospheric pressure response to the groundwater level (Figure 5).

Figure 4. Analysis of the monthly water level data. Figure 5. Atmospheric pressure response to the groundwater level.

The first-order difference was calculated for the data of two-month water levels to remove trending, and the result conserves the structure and the amplitude of the data from the theoretical earth tidal effect of volumetric strain (Figure 6).

Figure 6. The plot of the earth tide analysis of the Tangshan water level observation data.

4. Conclusion
Nonlinear statistical methods are considered and explored to analyze the influencing factors in the groundwater data of the Tangshan well based on the physical mechanisms, and the relations among them are discussed. Besides the common features of groundwater, there are individual features of the Tangshan well. Earthquake precursors may be found in the groundwater level if all the influencing factors of the groundwater level can be cleared up from the observation data.

5. Acknowledgement
The people of the Tangshan Earthquake Administration provided continuous support for the groundwater level data and other information at the Tangshan well. Prof. Qinwen Xi permitted us to use his computer programs for calculating earth tides. The software package SSLib was used for some of the calculations and figures. Prof. Hui Li and Yong Li provided detailed comments on the manuscript. Part of the support was provided by NSFC programs (10371012, 40574020).

6. References
Beavan, J., K. Evans, S. Mousa and D. Simpson (1991), Estimating aquifer parameters from analysis of forced fluctuations in well level: An example from the Nubian formation near Aswan, Egypt, 2, Poroelastic parameters, Journal of Geophysical Research, 96(B7), 12139-12160.
Blanchard, F. B., Byerly, P. (1935), A study of a well gauge as a seismograph, Bull. Seismol. Soc. Am., 25.
Box, G. E. P., Jenkins, G. M., Reinsel, G. C. (1994), Time series analysis: forecasting and control, Vol. 3, Prentice-Hall, Englewood Cliffs, NJ.
Brodsky, E. E., E. Roeloffs, D. Woodcock, I. Gall, and M. Manga (2003), A mechanism for sustained groundwater pressure changes induced by distant earthquakes, J. Geophys. Res., 108(B8), 2390.
Cooper, H. H. (1965), The response of well-aquifer systems to seismic waves, Journal of Geophysical Research, 70(16), 3915-3926.
Department of Technology and Monitoring, State Seismological Bureau (1988), Groundwater research-collection on methods of earthquake monitoring and prediction, Beijing: Seismological Press. (in Chinese)
Department of Technology and Monitoring, State Seismological Bureau (1990), Groundwater and hydrochemical books-collection on practical application for methods of earthquake predicting, Beijing: Seismological Press. (in Chinese)
Groundwater analyzing group of State Seismological Bureau (1985), Analysis to groundwater behavior and its affecting factors, Beijing: Seismological Press. (in Chinese)
Harte, D. (1998), Documentation for the statistical seismology library, School of Mathematical and Computing Sciences, UVW, Research Report, 10.
Igarashi, G. & Wakita, H. (1991), Tidal response and earthquake-related changes in the water level of deep wells, Journal of Geophysical Research, 96, 4269-4278.
Koizumi, N., Y. Kano, Y. Kitagawa, T. Sato, M. Takahashi, S. Nishimura, and R. Nishida (1996), Groundwater anomalies associated with the 1995 Hyogo-ken Nanbu earthquake, J. Phys. Earth., 44, 373-380.
Liu, L. B., E. Roeloffs and X. Y. Zheng (1989), Seismically induced water level fluctuations in the Wali well, Beijing, China, Journal of Geophysical Research, 94(B7), 9453-9462.
Marsaud, B., Mangin, A., Bel, F. (1993), Estimation des caractéristiques physiques d'aquifères profonds à partir de l'incidence barométrique et des marées terrestres, J. Hydrol., 144, 85-100.
Matsumoto, N., Kitagawa, G., Roeloffs, E. A. (2003), Hydrological response to earthquakes in the Haibara well, central Japan - I. Groundwater level changes revealed using state space decomposition of atmospheric pressure, rainfall and tidal responses, Geophysical Journal International, 155(3), 885-898.
Matsumoto, N., Roeloffs, E. A. (2003), Hydrological response to earthquakes in the Haibara well, central Japan - II. Possible mechanism inferred from time-varying hydraulic properties, Geophysical Journal International, 155(3), 899-913.
Quilty, E. G., and Roeloffs, E. A. (1991), Removal of barometric pressure response from water level data, J. Geophys. Research, 96, 10,209-10,218.
Quilty, E., and E. Roeloffs (1997), Water level changes in response to the December 20, 1994 earthquake near Parkfield, California, Bull. Seismol. Soc. Am., 87, 310-317.
Rexin, E. E., Oliver, J., Prentiss, D. (1962), Seismically-induced fluctuation of the water level in the Nunn-Bush well in Milwaukee, Bull. Seismol. Soc. Am., 52(1), 17-25.
Roeloffs, E. (1998), Persistent water level changes in a well near Parkfield, California, due to local and distant earthquakes, J. Geophys. Res., 103(B1), 869-889.

Wakita, H. (1975), Water wells as possible indicators of tectonic strain, Science, 189, 553-555.
Wang, C. M., Che, Y. T., Wan, D. K., et al. (1988), Study of groundwater micro-behavior, Beijing: Seismological Press. (in Chinese)
Wei, W. W. S. (1989), Time series analysis: univariate and multivariate methods, Addison-Wesley.
Yi, D. H. (2002), Data analysis and the application of Eviews, China Statistics Press. (in Chinese)

Estimation of the Fault Constitutive Parameter Aσ and Stress Accumulation Rate from Seismicity Response to a Large Earthquake

K. Maeda

Meteorological Research Institute, 1-1 Nagamine, Tsukuba, Ibaraki 305-0052, Japan

Corresponding Author: K. Maeda, [email protected]

Abstract: We estimate the fault constitutive parameter Aσ and stress accumulation rate for some regions in Hokkaido, northern part of Japan, by applying the theoretical seismicity model of Dieterich (1994) to the sudden change of seismicity caused by the 2003 Tokachi-oki earthquake (M8.0). To estimate the parameters the model has been approximated by Omori’s formula and applied to the declustered seismicity data.

1. Introduction
Dieterich (1994) has proposed a theoretical model that explains a seismicity rate change based on the rate- and state-dependent friction law as the physical basis. According to his model, we can predict a seismicity rate change caused by a stress change, and inversely we can infer a stress change by observing an earthquake rate change (e.g. Dieterich et al., 2000). Therefore this model is promising from the point of view that we may come to know the stress change in many locations where seismicity data are available. However, in order to get reliable results, it is important to check the model's applicability to the actual data and to estimate the parameters of the model appropriately. In this study, we introduce a method of estimating one of the fault constitutive parameters, Aσ, and the background stressing rate that appear in the model, by using the observed seismicity response to a step-like change of stress that is caused by a large earthquake and is independently known.

2. Method
From the model of Dieterich (1994), it follows that a step change of Coulomb failure stress, dS, causes an instantaneous seismicity rate change expressed by the following simple equation:

R_0 = r exp(dS/(Aσ))  (1)

where r and R_0 are earthquake occurrence rates immediately before and just after the stress change, respectively, A is a fault constitutive parameter and σ is normal stress (less pore fluid pressure). This equation indicates that we can estimate Aσ if we know the values of dS and R_0/r. Here we do not treat the parameters A and σ separately, but consider them as one parameter Aσ in the form of a product. In this study we estimate dS by evaluating an elastic stress change calculated for a known fault model of a large earthquake, and estimate r as an average seismicity rate before a large earthquake. Dieterich (1994) has also shown that the change of seismicity rate R after a stress step is approximately expressed by Omori's formula, and we modify the expression by adding a background rate B as;
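Equation (1) can be inverted in one line for the product Aσ; a minimal sketch (the example values are hypothetical):

```python
import math

def a_sigma_from_rate_jump(r_before, r0_after, dS):
    """Invert eq. (1), R0 = r * exp(dS/(A*sigma)), for the product A*sigma.
    dS is the Coulomb stress step (e.g. in MPa); the two rates are in
    events per unit time, and only their ratio matters."""
    return dS / math.log(r0_after / r_before)
```

For instance, a hypothetical stress step of 0.1 MPa that raises the rate by a factor exp(5) gives Aσ = 0.02 MPa.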

R(t) = B + K/(c + t), where K = Aσr/Ṡ_r, c = (Aσ/Ṡ_r) exp(−dS/(Aσ)) = K/R_0,  (2)

where Ṡ_r represents the reference stressing rate corresponding to r and is equivalent to the stress accumulation rate or the background stressing rate. Therefore R_0 can be evaluated by applying Omori's formula to the seismicity data just after a large earthquake. In the actual case we have used the declustered data instead of the original data to be fitted to Omori's formula, because the original data usually contain small clustering activities that may be caused by unknown local stress changes and make the fitting bad. When we use the declustered data, r is reduced in proportion to the reduction rate for the declustered data after the stress step. We can also estimate Ṡ_r by using the relation among the parameters in Omori's formula and Ṡ_r represented in equation (2). Once the parameters Aσ and Ṡ_r are estimated, we can calculate the stressing rate change from the seismicity data by applying the formulation of Dieterich et al. (2000), which can be written as

R = r/(γṠ_r), where dγ = (dt − γdS)/(Aσ)  (3)

where γ is a state variable.
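Given Omori parameters K and c fitted to the declustered post-step data, the relations in eq. (2) yield both Aσ and the background stressing rate; a sketch assuming mutually consistent units:

```python
import math

def invert_omori_params(K, c, r, dS):
    """Recover A*sigma and the background stressing rate S_r from the
    fitted Omori parameters K and c of eq. (2), using
    K/c = R0 = r*exp(dS/(A*sigma)) and K = A*sigma*r/S_r.
    Units must be consistent (e.g. stress in MPa, time in days)."""
    a_sigma = dS / math.log(K / (c * r))   # from K/c = r*exp(dS/(A*sigma))
    s_dot_r = a_sigma * r / K              # from K = A*sigma*r/S_r
    return a_sigma, s_dot_r
```

The self-consistency of the two relations can be checked by constructing K and c from chosen values of Aσ and Ṡ_r and inverting them back.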

3. Application to the Observed Seismicity
When we select the target area to apply the method, we consider the following conditions: the fault model of a large earthquake is well determined, the seismicity rate change caused by a stress step is clearly recognized, the area is far enough from the large earthquake not to be affected by slip inhomogeneity of the earthquake, and the rate change tends to follow Omori's formula. Consequently we try to apply the method to four inland regions (denoted by A to D) in Hokkaido, northern part of Japan, where the seismicity is clearly activated in relation to the 2003 Tokachi-oki earthquake (M8.0) (see Fig.2). We use the JMA integrated earthquake catalog data with magnitude larger than or equal to 0.5 and depth shallower than 15 km during the period from January 2003 through June 2005. The source fault model of the 2003 Tokachi-oki earthquake we adopted is the single fault plane model estimated by GSI (2004) using geodetic data, and the resulting Coulomb failure stress change is displayed in Fig.1. We assume vertical right-lateral faults with strike of N70°E as receivers, which represent the typical fault mechanism around the regions. The estimated stress changes around the regions A to D are about 0.07 to 0.16 MPa.

Figure 1: Distribution of Coulomb failure stress change caused by the 2003 Tokachi-oki earthquake. Pure right-lateral strike faults with direction of N70°E are assumed as receiver faults. The values corresponding to the regions A to D are 0.07~0.16 MPa.

The variation of seismicity in the region A, for example, is shown in Fig.3. We can see the activated seismicity after the Tokachi-oki earthquake and also notice many small clusters. This activated activity is well fitted by the modified Omori formula with p=1.8, but is not well expressed by Omori's formula, which is required to apply the previously mentioned method.
This is probably because small clusters are activated by unknown local stress changes and the total activity does not represent the pure step response to the Tokachi-oki earthquake. So, we applied a declustering process to the original data. The declustering method adopted here is that earthquakes located within some parameterized epicentral distance and time of each other are classified into the same cluster, including jointly classified earthquakes, and the cluster is represented by the largest earthquake in the cluster. Fig.4 shows the declustered seismicity in the region A with the declustering parameters of 1 km in epicentral distance and 1 day in time. It is clear that the clustering activity is significantly removed. The declustered seismicity is well approximated by Omori's formula, and we can estimate the parameters Aσ and Ṡ_r by using the estimated parameters of Omori's formula and eq. (2).
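The single-link declustering described above can be sketched as follows (the tuple layout and helper name are illustrative):

```python
import math

def decluster(events, d_km=1.0, t_days=1.0):
    """Window-based single-link declustering: earthquakes within t_days and
    d_km of any member of a cluster join that cluster (clusters linked by a
    common event are merged), and each cluster is then replaced by its
    largest event.  events: list of (time_days, x_km, y_km, magnitude)."""
    clusters = []
    for ev in sorted(events):
        t, x, y, _m = ev
        joined = None
        for cl in clusters:
            linked = any(abs(t - t2) <= t_days
                         and math.hypot(x - x2, y - y2) <= d_km
                         for t2, x2, y2, _ in cl)
            if linked:
                if joined is None:
                    cl.append(ev)
                    joined = cl
                else:                 # event links two clusters: merge them
                    joined.extend(cl)
                    cl.clear()
        if joined is None:
            clusters.append([ev])
        clusters = [cl for cl in clusters if cl]
    return sorted(max(cl, key=lambda e: e[3]) for cl in clusters)
```

With d_km = 1 and t_days = 1 this mirrors the parameter choice used for Fig.4.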

Figure 2: (Left) Seismicity before and after the Tokachi-oki earthquake, whose epicenter is indicated by a solid star. (Right) Space-time distribution of earthquakes in the area shown by the rectangle in the left. The seismicity is activated after the Tokachi-oki earthquake in the regions A to D.

Figure 3: Magnitude-time, cumulative number, space-time, and depth-time variations for the region A. The seismicity is activated after the Tokachi-oki earthquake and there are many small clustering activities.

Figure 4: The same figures as Fig.3, but the seismicity is declustered by the algorithm in which earthquakes within 1 km and 1 day are classified into the same cluster, represented by the largest earthquake in the cluster. Compared with Fig.3, it is clear that clustering activities are significantly reduced.

4. Results and Conclusion
The values of Aσ obtained for the four regions have relatively small variation, ranging from 0.02 to 0.03 MPa, but those of Ṡ_r vary widely: about 0.003, 0.005, 0.0002, and 0.01 MPa/year for the regions A, B, C, and D, respectively. The values of the declustering parameters do not much affect the estimated values. Fig.5 (top) shows how the estimated seismicity rate expressed by Omori's formula fits the declustered data in the region A. The line showing a step-like change of stress in Fig.5 (bottom) corresponds to the estimated value of Ṡ_r (= 0.0028 MPa/year), and the other lines in the same figure represent how the value of Ṡ_r affects the estimated stress change for the same seismicity rate shown in the top figure. Once the parameter values of Aσ and Ṡ_r are estimated, we can calculate the change of stress that may be actually loaded on the region by applying formulation (3) to the original seismicity data. Fig.6 represents the seismicity rate change and cumulative number in the region A based on the original data (top), and the corresponding estimated stress change (bottom). The first stress step shown in Fig.6 (bottom) is caused by the Tokachi-oki earthquake, and the following step-like changes are interpreted as caused by unknown local stress changes. The fact that the declustered seismicities in all four regions are well modeled by Omori's formula suggests that the declustering is effective for estimating the values of Aσ and Ṡ_r.

Figure 5: (Top) Cumulative number of declustered earthquakes (region A, declustered with 1 km and 1 day; receiver strike = 70°) and its fitted curve to Omori's formula as well as the seismicity rate (fit: Aσ = 0.016461 MPa, r = 0.027937/d, B = 0, K = 59.7123, c = 29.1805, p = 1). (Bottom) Stress change estimated by the seismicity rate change; each curve corresponds to a different value of the background stressing rate (Ṡ_r = 0.0008 to 0.0048 MPa/y).

Figure 6: (Top) Cumulative number of non-declustered earthquakes (region A) and its modeled seismicity rate and cumulative rate based on the modified Omori formula. (Bottom) Stress change estimated by the seismicity rate change using the estimated background stressing rate (Ṡ_r = 0.0028 MPa/y).

5. Acknowledgement The author is very grateful to H. Takayama for the help of declustering and to H. Naito, S. Yoshikawa, Y. Ishikawa and K. Nakamura for providing the programs to plot some figures. 6. References Dieterich, J. (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, JGR, 99, 2601-2618. Dieterich, J., Cayol, V. and Okubo, P. (2000), The use of earthquake rate changes as a stress meter at Kilauea volcano, Nature, 408, 457-460. Geographical Survey Institute,(2004), Crustal Movements in the Hokkaido District, Report of The Coordinating Committee for Earthquake Prediction,71,135. Seismicity shadows: observations and modelling

D. Marsan1, G. Daniel2, and M. Bouchon2

1Laboratoire de Géophysique Interne et Tectonophysique, Universite de Savoie, Le Bourget du Lac, France. 2 Laboratoire de Géophysique Interne et Tectonophysique, Observatoire de Grenoble, France.

Corresponding Author: D. Marsan, [email protected]

Abstract: Seismicity shadows caused by the occurrence of large mainshocks have been reported to be rare or even missing, for a number of case studies, while some clear instances of quiescence have also been reported. We review here this debate, emphasizing the difficulty of detecting seismicity shadows when the prior level of seismicity is low. Two sequences (Turkey and Taiwan 1999) are further examined, for which shadows are either absent or delayed by days to months. A model based on rate and state friction is used to show that heterogeneity in stress change can explain these observations.

1. Introduction
A number of recent studies have raised the question as to whether seismicity shadows exist following the occurrence of large earthquakes (Parsons, 2002; Marsan, 2003; Marsan and Nalbant, 2005; Felzer and Brodsky, 2005). While a very clear quiescence appears to have been triggered by the 2nd M6 Kagoshima shock in Japan, 1997 (Toda and Stein, 2003; Woessner et al., 2004), other previously reported seismicity shadows seem to be too few and too small compared to the Coulomb stress prediction (Marsan, 2003), to be delayed by several months (Ma et al., 2005), to occur in volcanic rather than purely tectonic contexts (Dieterich et al., 2000), or have been attributed to aseismic slip on neighbouring faults rather than to co-seismic slip (Ogata et al., 2003).

Figure 1: Seismicity rate changes for the time interval 2 days - 100 days after the occurrence of the Landers earthquake. (Left) Estimated rate change r (in log10), and (right) significance γ. Iso-contours are shown with a unit step, the zero contour being drawn with a thick line. Superimposed is the M2.2+ seismicity that occurred during this time interval. Three zones with clear (γ < -1) quiescence are observed, but the two in the middle and south of this zone can be shown to start about 30 days before Landers; hence the causality with this earthquake is not ensured. Only the northernmost quiescence appears to be triggered by the Landers earthquake (taken from Marsan and Nalbant, 2005).

Figure 2: log10 of the estimated seismicity rate change r (top graph) and significance γ (bottom graph) vs. observation duration Δ in years, for actual seismicity rate changes r = 100 (diamonds), r = 1 (squares), r = 0.01 (circles), and λ0 = 2 / year. For r = 100, the estimated r > 100 for 0 < Δ < 10 years. The 99% significance levels are indicated for γ (dashed lines), along with the two values log10(0.5) and -log10(0.5) (continuous lines) between which γ is excluded by construction, cf. Marsan and Nalbant (2005).

2. Measuring seismicity rate changes A general methodology for the estimation of rate changes was sketched by Marsan (2003) and Marsan and Nalbant (2005). The observed number n of earthquakes occurring in a given time span after a mainshock is compared to the rate λ0 extrapolated from a seismicity model that mimics the activity up to the time of the mainshock. This rate λ0 is believed to characterize the seismicity if the mainshock had not happened (for example, the 'background' rate if one assumes this rate to be the expected seismicity level in a 'normally active' period), and therefore must be estimated from a model of seismicity. Various such models have been studied in this context: single and multiple Omori's laws, the ETAS model, auto-regressive processes, declustering algorithms. A significance γ is calculated for Poisson rather than Gaussian statistics. Figure 1 shows the estimated rate change and its significance γ for the 1992 Mw6.1 Joshua Tree rupture zone, as caused by the occurrence of the Mw7.3 Landers earthquake.
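A Poisson-based significance along these lines can be sketched as follows; the sign convention is illustrative and not necessarily the exact γ of Marsan and Nalbant (2005):

```python
import math

def poisson_cdf(k, lam):
    """P(N <= k) for N ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(k + 1))

def significance(n_obs, lam0):
    """Sketch of a Poisson (rather than Gaussian) significance of a rate
    change: positive values flag triggering, negative values quiescence.
    lam0 is the expected number of events in the post-mainshock window,
    extrapolated from the pre-mainshock seismicity model."""
    if n_obs >= lam0:
        p = 1.0 - (poisson_cdf(n_obs - 1, lam0) if n_obs > 0 else 0.0)
        return -math.log10(p)            # tail P(N >= n_obs)
    return math.log10(poisson_cdf(n_obs, lam0))   # tail P(N <= n_obs)
```

With this convention, γ < -1 corresponds to quiescence significant at the 10% Poisson tail level, matching the way γ contours are read in Figure 1.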

3. Detectability of seismicity shadows
As pointed out by Marsan and Nalbant (2005) and Freed (2005), seismicity shadows can go undetected if the prior seismicity level is too low. This crucial detectability issue must therefore be closely examined to make sure that the observed lack of shadows is not a purely statistical effect. Also, in the case of a heterogeneous change in seismicity rate, the statistical bias towards the detection of triggering is likely to amount to the dissolving of shadows within larger triggering patches. Given an expected seismicity rate λ0, one would have to wait a time Δ(r,γ) before making sure that a given seismicity rate decrease r can be detected with significance γ. This Δ(r,γ) function, which also depends on λ0, was detailed in Marsan and Nalbant (2005), and is illustrated in Figure 2 in the rather extreme case of λ0 = 2 / year.
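The Δ(r,γ) idea can be illustrated numerically: scan increasing observation spans until the typical observed count under the decreased rate becomes significantly low under the prior rate. This is a reconstruction of the idea, not the published formula:

```python
import math

def poisson_cdf(k, lam):
    """P(N <= k) for N ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(k + 1))

def detection_delay(lam0, r, gamma=2.0, dt=0.1, t_max=1000.0):
    """Shortest observation span D (same time unit as 1/lam0) after which a
    true rate decrease from lam0 to r*lam0 yields, for a typical observed
    count, a Poisson tail probability below 10**-gamma."""
    D = dt
    while D <= t_max:
        n_typical = int(r * lam0 * D)          # typical post-mainshock count
        if poisson_cdf(n_typical, lam0 * D) <= 10.0 ** (-gamma):
            return D
        D += dt
    return None                                # not detectable within t_max
```

As in Figure 2, a near-total shutdown (small r) becomes detectable after a couple of mean interevent times, while a mild decrease requires a much longer wait.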

Such an argument needs to be extended to every area and time period for which a seismicity rate change is estimated. A way to facilitate the detectability of shadows is to study sequences of several large shocks, so that the later events modulate the seismicity patterns activated by the previous shocks, or to analyse regions with high levels of seismicity prior to the mainshock.

4. The 1999 Izmit-Duzce and Chi-Chi sequences
Here we review two studies of seismicity rate changes: the 1999 Izmit and Duzce sequence in western Turkey (Daniel et al., submitted manuscript) and the 1999 Chi-Chi earthquake in Taiwan (Ma et al., 2005; Marsan et al., in preparation). In the case of the Turkey earthquakes, we analyse how the Mw7.2 Duzce earthquake modified the seismicity triggered by the Mw7.4 Izmit earthquake. Owing to a relatively high completeness magnitude, only several large areas of the North Anatolian Fault are considered in this analysis. Apart from a zone extending to the east of the Izmit epicenter all the way to the Karadere segment, for which an observed mild quiescence lasting for several days could be due to a temporary redeployment of local stations, no shadows appear to be caused by the occurrence of the Duzce earthquake. Seismicity at the Yalova cluster, at the very western end of the Izmit rupture, is reactivated 18 hours after the Duzce earthquake, possibly as a result of pore fluid readjustments triggered by transient deformations propagating away from the mainshock. Fitting this activity with the ETAS model shows that this reactivation is triggered by a short (~ 1 day) transient, followed by self-sustained activity lasting for weeks. About 50 days after the Duzce earthquake, a clear decrease in seismicity is observed at Yalova, marking the end of this triggering phase.

Figure 3: Cumulative number of M2.7+ earthquakes at the Yalova cluster, following the Izmit earthquake (data courtesy of the Kandilli Observatory). The black line shows the best Omori fit in ln(t), with a cut-off time scale of about 2 days. A clear quiescence is observed 123 days after Izmit, i.e., 36 days after Duzce.

The 1999 Mw7.6 Chi-Chi earthquake in Taiwan was found by Ma et al. (2005) to have caused shadows in several locations, but with onsets that were delayed by months. We revisit this analysis in more detail, in particular by exploring the dependence of triggering on depth. Decreases in seismicity are observed to be more clear-cut at shallow depths, but occur in general after an initial period of days to months of triggering. This observation is strongly reminiscent of Ogata et al. (2003), albeit in a different tectonic setting.

5. Modelling seismicity changes following a heterogeneous stress change A model based on rate-and-state friction (Dieterich, 1994) is proposed, in conjunction with a strongly variable co-seismic, on-fault stress drop whose spatial heterogeneity is extrapolated to the nucleation scale (1 m to 10 m) based on scale-invariant models of slip distribution. It is found that stress variability can explain why triggering of aftershocks is ubiquitous on the ruptured fault plane even though stress has been relaxed overall. Triggering can also occur off-fault, in zones that are similarly unloaded at larger scales. Such an initial triggering phase is predicted to precede quiescence, see Figure 4. Seismicity shadows can therefore be delayed by months to years if the co-seismic stress change is heterogeneous, due to slip variability, fault roughness, geometrical complexity of fault networks, or crustal small-scale heterogeneity (e.g., lithology, material properties, damaged rock, etc.). This can help explain why shadows are relatively uncommon, and why they can develop later in the post-seismic phase rather than immediately after the occurrence of the mainshock.
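The delayed-shadow prediction can be sketched with the standard Dieterich (1994) rate response, averaged over a distribution of stress steps. This is only an illustration of the mechanism, not the authors' actual computation; the Gaussian parameters below are assumed values chosen to make the effect visible:

```python
import numpy as np

def dieterich_rate(dtau, t, ta=1.0):
    """Dieterich (1994) seismicity rate relative to the background rate,
    after a stress step dtau (in units of A*sigma), at time t (units of the
    aftershock duration ta):  R/r = 1 / (1 + (exp(-dtau) - 1) * exp(-t/ta))."""
    return 1.0 / (1.0 + (np.exp(-dtau) - 1.0) * np.exp(-t / ta))

# Heterogeneous co-seismic stress change: unloading on average, but with
# large scatter at the nucleation scale (mean -1, sigma 3, in units of
# A*sigma -- illustrative values only).
rng = np.random.default_rng(0)
dtau = rng.normal(-1.0, 3.0, 200_000)

rate_early = dieterich_rate(dtau, t=0.01).mean()  # shortly after the mainshock
rate_late = dieterich_rate(dtau, t=2.0).mean()    # well into the sequence
# rate_early > 1 (the loaded patches dominate: triggering), while
# rate_late < 1 (the unloaded majority takes over: delayed quiescence).
```

The positive tail of the stress distribution controls the early response, which is why triggering precedes the shadow even when the mean stress change is negative.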

Figure 4: Seismicity rate change vs time t in units of ta, the typical duration of the aftershock sequence, for a stress drop with mean -100 and varying standard deviation στ, in units of Aσ, a characteristic stress level for rate-and-state friction. Quiescence is observed to emerge after an initial triggering phase.

6. References Dieterich JH (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99, 2601.

Dieterich J, Cayol V, Okubo P (2000), The use of earthquake rate changes as a stress meter at Kilauea volcano, Nature, 408, 457.

Felzer KR, Brodsky EE (2005), Testing the stress shadow hypothesis, J. Geophys. Res., 110 (B5), B05S09.

Freed AM (2005), Earthquake triggering by static, dynamic, and postseismic stress transfer, Annu. Rev. Earth Planet. Sci., 33, 335.

Ma KF, Chan CH, Stein, RS (2005), Response of seismicity to Coulomb stress triggers and shadows of the 1999 Mw = 7.6 Chi-Chi, Taiwan, earthquake, J. Geophys. Res., 110 (B5), B05S19.

Marsan D (2003), Triggering of seismicity at short timescales following Californian earthquakes, J. Geophys. Res., 108 (B5), 2266.

Marsan D, Nalbant SS (2005), Methods for measuring seismicity rate changes: A review and a study of how the Mw 7.3 Landers earthquake affected the aftershock sequence of the Mw 6.1 Joshua Tree earthquake, PAGEOPH, 162 (6-7), 1151.

Ogata Y, Jones LM, and Toda S (2003), When and where the aftershock activity was depressed: Contrasting decay patterns of the proximate large earthquakes in southern California, J. Geophys. Res., 108 (B6), 2318.

Parsons T (2002), Global Omori law decay of triggered earthquakes: Large aftershocks outside the classical aftershock zone, J. Geophys. Res., 107, 2199.

Toda S, Stein RS (2003), Toggling of seismicity by the 1997 Kagoshima earthquake couplet: A demonstration of time-dependent stress transfer, J. Geophys. Res., 108 (B12), 2567.

Woessner J, Hauksson E, Wiemer S, and Neukomm S (2004), The 1997 Kagoshima (Japan) earthquake doublet: A quantitative analysis of aftershock rate changes, Geophys. Res. Lett., 31, 03605.

The late Prof. Tokuji Utsu: His career with Geophysics and Seismology

R. S. Matsu’ura1

1Association for the Development of Earthquake Prediction, 1-5-18, Sarugaku-cho, Chiyoda-ku, Tokyo 101-0064, Japan

Corresponding Author: R. S. Matsu’ura, [email protected]

Abstract: Dr. Tokuji Utsu, Professor Emeritus of the University of Tokyo, passed away on August 17, 2004. Among his various works on seismology and geophysics, his thorough research on aftershocks and his model of the structure under island arcs (the Utsu model) are the most notable. He devoted more than half a century to the study of seismology and geophysics, until he fell ill at the age of 76. He left us an enormous amount of information on seismicity up to the end of the 20th century in his best book, “Seismicity Studies: A Comprehensive Review.” Although he was usually a silent person, who seldom conversed with anybody, his book tells us a lot about what we have found out about seismicity so far and what we should attack from now on.

1. Introduction Prof. Utsu was born in a downtown area of Tokyo on April 13, 1928. After graduating from the Geophysical Institute, the University of Tokyo, he was employed in 1951 by the Central Meteorological Observatory (which became the Japan Meteorological Agency in 1956). Although his supervisor was Prof. Takeshi Nagata, and his first paper was on electromagnetic phenomena, he happened to be posted to the seismological section of the observatory. This was the beginning of his career as a seismologist. If he had been posted to the meteorological or geomagnetic section at that time, we would not have the Omori-Utsu formula as the decay law of aftershocks, nor the thorough catalog of the world's destructive earthquakes. Furthermore, he was lucky enough to have Dr. Win Inouye as his superior in the section. Here I review Tokuji's various achievements in seismology and geophysics, to praise his half-century of work until our sorrowful loss of him on August 17, 2004.

2. His major works During his thirteen years at the Central Meteorological Observatory, he derived the Utsu-Seki formula (Utsu and Seki, 1955) relating a shallow earthquake's magnitude M to its aftershock area S,

log S = 1.02 M − 4.01, (1) and the modified Omori formula (Utsu, 1961) as the decay law of aftershocks,

n(t) = K / (t + c)^p, (2) where n(t) is the rate of aftershock occurrence at time t after a main shock. K gives the level of the activity, p the degree of decay, and c determines the initial rate of aftershocks right after a main shock. Hereafter, we would like to call this formula the Omori-Utsu formula, since the addition of the parameter p to the original Omori formula is

not just a modification, as Utsu (1961) modestly named it, but a substantial derivation of the formula, which enabled us to represent clustering, the fundamental temporal feature of earthquakes, quantitatively with only a few parameters. He took his Ph.D. degree from the University of Tokyo in 1962 for his research on aftershocks. He also spent one year, from February 1961, at the United States Coast and Geodetic Survey in Washington, D.C. as a visiting scholar sponsored by the National Academy of Sciences of the United States. Furthermore, he introduced telemetered recording of seismograms at JMA. He also found, from observations of Lg waves, that the crust beneath the Japan Sea is oceanic. He was promoted to associate professor at Hokkaido University in 1964. There he experienced the Tokachi-oki earthquake of M7.9 in 1968. It was he who noticed, in 1972, the existence of a seismic gap off the Nemuro Peninsula in the southern part of the seismic zone along the Kurile trench (Utsu, 1972). In 1973, an M7.4 earthquake occurred off the Nemuro Peninsula as expected. He also proposed a method to obtain the b value of the G-R relation (Utsu, 1965), which was proved to be the maximum likelihood estimate by Aki (1965).
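Both statistical tools named in this passage are compact enough to state in a few lines of code. The sketch below (illustrative only, with synthetic data) evaluates the Omori-Utsu rate (2) and the Aki-Utsu b-value estimator:

```python
import numpy as np

def omori_utsu(t, K, c, p):
    """Omori-Utsu aftershock rate n(t) = K / (t + c)**p (Utsu, 1961).
    The parameter c keeps the rate finite at t = 0: n(0) = K / c**p."""
    return K / (t + c) ** p

def b_value_mle(mags, m_min):
    """Aki-Utsu maximum likelihood estimate of the Gutenberg-Richter b value:
    b = log10(e) / (mean(M) - m_min)  (Utsu, 1965; Aki, 1965)."""
    mags = np.asarray(mags, dtype=float)
    return np.log10(np.e) / (mags.mean() - m_min)

# Check on synthetic data: G-R magnitudes above m_min = 2.0 with true b = 1
# are exponential above the cut-off, with mean excess log10(e) / b.
rng = np.random.default_rng(1)
mags = 2.0 + rng.exponential(np.log10(np.e), 100_000)
b_hat = b_value_mle(mags, 2.0)   # recovers a value close to 1.0
```

The b-value estimator is exactly the form Aki (1965) derived as the maximum likelihood solution, which is why it recovers the generating b on synthetic Gutenberg-Richter data.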

Figure 1 Above: iso-depth contours of deep earthquakes along the Wadati zone in and around Japan. Below: the Utsu model, a vertical section of the northeast Japan island arc showing the anomalous structure [made after Utsu (1987); Utsu and Okada (1968) and Utsu (1971) were modified with contemporary plates and their names].

In Hokkaido, he also found the anomalous structure under the island arc (Fig. 1) and presented the Utsu model (Utsu, 1966, 1967, 1971, Utsu and Okada, 1968) at the dawn of plate tectonics. The Utsu model shows a high-Q, high-velocity slab subducting from a trench under an island arc. The Utsu model and the finding of the seismic gap off Nemuro were achieved by induction from a great deal of data. He had a superior ability to notice the substantial common features among data that look scattered at a glance. He became Professor of Nagoya University in 1972. In 1978, he escaped from the increasing administrative jobs there to the Earthquake Research Institute, University of Tokyo, where he was again forced to spend a lot of time on administration. I cannot forget his happy face on the last day of his term as director general of ERI. Even during these days, he wrote a textbook on seismology for undergraduate students (Utsu, 1977, 1984, 2001), which is still the standard textbook of seismology. He was also a good educator. He was fluent when a student asked a serious question. He was a good volunteer for the IASPEI, too. He made several kinds of software that work on cheap personal computers for developing countries. He also wrote three chapters in the handbook published to celebrate IASPEI's centennial (Utsu, 2002a, 2002b, 2002c). He also compiled the catalog of earthquakes in and around Japan from 1885 to 1925 (Utsu, 1979), and of the destructive earthquakes in the world (Utsu, 1990). His character of pursuing perfection in research was fully exhibited in his compilation of data. His catalogs have been utilized widely. He also served as the head in revising “Japanese Scientific Terms of Seismology” (Ministry of Edu. and JSS, 2000). After he retired from ERI in 1989, he spent his happiest days as a seismologist at the Institute of Statistical Mathematics, with the courtesy of Prof. Ogata.
He reviewed seismicity research thoroughly (Utsu, 1999), revised his former books, and did research, which was what he always wanted to do most. He spent the happiest retirement days. He continued doing research until cancer was found in his stomach in the late summer of 2002. After the operation, he spent most of his time listening to various kinds of music, which he loved next to seismology. However, he looked tired of life, without enough vitality for doing research. He passed away on August 17, 2004. According to his will, a small funeral was held by his close relatives, and he now sleeps in eternal peace at a temple near Shinjuku.

Figure 2 Prof. Utsu's photo, taken on the day of his retirement from ERI. He looked so happy.

3. Conclusion Prof. Utsu made several substantial findings in seismology and geophysics. He left us various compiled data and a comprehensive review of the study of seismicity. What we should do is find rules among them in order to forecast the seismicity process.

4. Acknowledgement We are grateful to all who participated in the fourth International Workshop of Statistical Seismology in the memory of Tokuji Utsu. I believe that nothing but progress in this field will most please his soul, although he had no interest in the world after death.

5. References Aki, K. (1965), Maximum Likelihood Estimate of b in the Formula log-N=a-bM and its Confidence Limits, Bull. Earthq. Res. Inst., 43, 237-239. Ministry of Education and Japan Seismological Society (2000), Japanese Scientific Terms, Seismology, Revised and Enlarged Ed., (Japan Soc. Prom. Science, Tokyo), pp.310. Utsu, T. and Seki, A. (1955), A Relation between the Area of After-shock Region and the Energy of Main-shock, Zisin (2), 7, 233-240 (in Japanese). Utsu, T. (1961), A statistical study on the occurrence of aftershocks, Geophys. Mag., 30, 521-605. Utsu, T. (1965), A method for determining the value of b in a formula log n = a – bM showing the magnitude-frequency relation for earthquakes, Geophys. Bull. Hokkaido Univ., 13, 99-103 (in Japanese). Utsu, T. (1966), Regional differences in absorption of seismic waves in the upper mantle as inferred from abnormal distributions of seismic intensities, J. Fac. Sci. Hokkaido Univ., Ser. VII, 2, 359-374. Utsu, T. (1967), Anomalies in seismic wave velocity and attenuation associated with a deep earthquake zone (I), J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 1-25. Utsu, T. (1972), Large earthquakes near Hokkaido and the expectancy of the occurrence of a large earthquake off Nemuro, Rep. Coord. Comm.Earthq. Pred., 7, 7-13 (in Japanese). Utsu, T. (1977), Seismology, (Kyoritsu Shuppan, Tokyo), pp. 286 (in Japanese). Utsu, T. (1979), Seismicity of Japan from 1885 through 1925 – a new catalog of earthquakes of M>=6 felt in Japan and smaller earthquakes which caused damage in Japan, Bull. Earthq. Res. Inst., 54, 253-308 (in Japanese). Utsu, T. (1984), Seismology 2nd Ed., (Kyoritsu Shuppan, Tokyo), pp. 310 (in Japanese). Utsu, T. (1987), Encyclopedia of Earthquakes, (Asakura Shoten, Tokyo, pp.568), 147. Utsu, T. (1990), Destructive earthquakes in the world - from ancient times to 1989), (Utsu Press, Tokyo), pp.243 (in Japanese). Utsu, T. (1999), Seismicity Studies: A Comprehensive Review, (Univ. 
Tokyo Press, Tokyo), pp. 876 (in Japanese). Utsu, T. (2001), Seismology 3rd Ed., (Kyoritsu Shuppan, Tokyo), pp. 376 (in Japanese). Utsu, T, (2002a), A list of deadly earthquakes in the world: 1500-2000, in International Handbook of Earthquake & Engineering Seismology Part A, Eds. W. H. K. Lee, H. Kanamori, P.C.Jennings, and C. Kisslinger (Academic Press, San Diego), 691-717. Utsu, T, (2002b), Statistical features of seismicity, in International Handbook of Earthquake & Engineering Seismology Part A, Eds. W. H. K. Lee, H. Kanamori, P.C.Jennings, and C. Kisslinger (Academic Press, San Diego), 719-732. Utsu, T, (2002c), Relationship between magnitude scales, in International Handbook of Earthquake & Engineering Seismology Part A, Eds. W. H. K. Lee, H. Kanamori,

P.C.Jennings, and C. Kisslinger (Academic Press, San Diego), 733-746. Utsu, T. and Okada, H. (1968), Anomalies in seismic wave velocity and attenuation associated with a deep earthquake zone (II), J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 65-84.

6. Appendix. Selected publications of Professor Tokuji Utsu Utsu, T. (1953), On energy and frequency of aftershocks, Quart. J. Seism., 18, 66-84 (in Japanese). Utsu, T. (1953), On energy and frequency of aftershocks (continued), Quart. J. Seism., 18, 165-175 (in Japanese). Utsu, T. and Seki, A. (1955), A Relation between the Area of After-shock Region and the Energy of Main-shock, Zisin (2), 7, 233-240 (in Japanese). Utsu, T. (1956), Some remarkable phases on seismograms of near earthquakes (part 1), Quart. J. Seism., 20, 141-144 (in Japanese). Utsu, T. (1956), On deflection of the direction of initial motion of P wave, Quart. J. Seism., 21, 13-20 (in Japanese). Utsu, T. (1956), On some remarkable phases on seismograms of near earthquakes (part 2), Quart. J. Seism., 21, 107-111 (in Japanese). Utsu, T. (1957), Response curves of electromagnetic seismograph (1), Quart. J. Seism., 22, 5-18 (in Japanese). Utsu, T. (1957), Response curves of electromagnetic seismograph (2), Quart. J. Seism., 22, 133-139 (in Japanese). Utsu, T. (1957), Magnitude of earthquakes and occurrence of their aftershocks, Zisin (2), 10, 35-45 (in Japanese). Utsu, T. (1958), On the Lg phase of seismic wave observed in Japan (1), Quart. J. Seism., 23, 61-76 (in Japanese). Utsu, T. (1960), Observations of the Lg waves and crustal structure in the vicinity of Japan, Geophys. Mag., 29, 499-508. Utsu, T. (1961), A statistical study on the occurrence of aftershocks, Geophys. Mag., 30, 521-605. Utsu, T. (1962), On the nature of three Alaskan aftershock sequences of 1957 and 1958, Bull. Seism. Soc. Amer., 52, 279-297. Utsu, T. (1962), On the time interval between two consecutive earthquakes, US Coast & Geodetic Survey Technical Bull., 17, 1-5. Utsu, T. (1964), On the relation between α and γ in a formula M = log A + α·log Δ + γ for calculating earthquake magnitude, Zisin (2), 17, 236-238 (in Japanese). Utsu, T.
(1964), On the statistical formula showing the magnitude-frequency relation of earthquakes (a preliminary report), Quart. J. Seism., 28, 79-88 (in Japanese). Utsu, T. (1964), Magnitude distribution of earthquakes with special consideration for aftershock activities, Quart. J. Seism., 28, 129-136 (in Japanese). Utsu, T. (1964), Characteristics of aftershocks in space, time, and magnitude, Proc. US- Japan Conf. Earthq. Prediction, 59-60. Utsu, T. (1965), A method for determining the value of b in a formula log n = a – bM showing the magnitude-frequency relation for earthquakes, Geophys. Bull. Hokkaido Univ., 13, 99-103 (in Japanese). Utsu, T. (1966), Variations in spectra of P waves recorded at Canadian Arctic seismograph stations, Can. J. Earth Sci., 3, 597-621. Utsu, T. (1966), Regional differences in absorption of seismic waves in the upper mantle as

inferred from abnormal distributions of seismic intensities, J. Fac. Sci. Hokkaido Univ., Ser. VII, 2, 359-374. Utsu, T. (1966), A statistical significance test for the difference in b-value between two earthquake groups, J. Phys. Earth, 14, 37-40. Utsu, T. (1966), Abnormal distributions of seismic intensities and regional differences in absorption of seismic waves, Zisin (2), 19, 226-227 (in Japanese). Utsu, T. (1967), Anomalies in seismic wave velocity and attenuation associated with a deep earthquake zone (I), J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 1-25. Utsu, T. (1967), Some problems of the frequency distribution of earthquakes in respect to magnitude (I), Geophys. Bull. Hokkaido Univ., 17, 85-112 (in Japanese). Utsu, T. (1967), Some problems of the frequency distribution of earthquakes in respect to magnitude (II), Geophys. Bull. Hokkaido Univ., 18, 53-69 (in Japanese). Utsu, T. and Okada, H. (1968), Anomalies in seismic wave velocity and attenuation associated with a deep earthquake zone (II), J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 65-84. Utsu, T. and Hirota, T. (1968), A note on the statistical nature of energy and strain release in earthquake sequences, J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 49-64. Utsu, T. (1968), Seismic activity in Hokkaido and its vicinity, Geophys. Bull. Hokkaido Univ., 20, 51-75 (in Japanese). Utsu, T. (1969), Abnormal distributions of seismic intensities in western Japan, Geophys. Bull. Hokkaido Univ., 21, 45-52 (in Japanese). Utsu, T. (1969), Note on seismic intensity scales, Geophys. Bull. Hokkaido Univ., 21, 53-62 (in Japanese). Utsu, T. (1969), Seismic investigation of rockbursts in Bibai Coal Mine, Hokkaido, Zisin (2), 22, 76-79 (in Japanese). Utsu, T. (1969), Ratio of Vp/Vs in the upper mantle beneath the island arcs of Japan, Zisin (2), 22, 41-53 (in Japanese). Utsu, T. (1969), Time and space distribution of deep earthquakes in Japan, J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 117-128. Utsu, T. 
(1969), Some problems of the distribution of earthquakes in time (Part 1) – distributions of frequency and time interval of earthquakes, Geophys. Bull. Hokkaido Univ., 22, 73-93 (in Japanese). Utsu, T. (1969), Some problems of the distribution of earthquakes in time (Part 2) – stationarity and randomness of earthquake occurrence, Geophys. Bull. Hokkaido Univ., 22, 95-108 (in Japanese). Utsu, T. (1969), Aftershocks and earthquake statistics (I) - some parameters which characterize an aftershock sequence and their interrelations, J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 129-195. Utsu, T. (1970), Aftershocks and earthquake statistics (II) – further investigation of aftershocks and other earthquake sequences based on a new classification of earthquake sequences, J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 197-266. Utsu, T. (1970), Aftershocks and earthquake statistics (III) – analyses of the distribution of earthquakes in magnitude, time, and space with special consideration to clustering characteristics of earthquake occurrence (1), J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 379-441. Utsu, T. (1970), Aftershocks and earthquake statistics (IV) - analyses of the distribution of earthquakes in magnitude, time, and space with special consideration to clustering characteristics of earthquake occurrence (2), J. Fac. Sci. Hokkaido Univ., Ser. VII, 4,

1-42. Utsu, T. (1970), Some problems of the distribution of earthquakes in time (Part 3) – characteristics of aftershocks, foreshocks, and earthquake swarms in time, Geophys. Bull. Hokkaido Univ., 23, 49-71 (in Japanese). Utsu, T. (1970), Some problems of the distribution of earthquakes in time (Part 4) – a stochastic model for earthquake occurrence with consideration to clustering of earthquakes, Geophys. Bull. Hokkaido Univ., 24, 107-123 (in Japanese). Utsu, T. (1970), Seismic activity and seismic observation in Hokkaido in recent years, Rep. Coord. Comm. Earthq. Pred., 2, 1-2 (in Japanese). Utsu, T. (1971), Anomalous structure of the upper mantle beneath the Japanese islands, Geophys. Bull. Hokkaido Univ., 25, 99-127 (in Japanese). Utsu, T. (1971), Seismological evidence for anomalous structure of island arcs with special reference to the Japanese region, Rev. Geophys. Space Phys., 9, 839-890. Utsu, T. (1971), Comments on the synodic monthly distribution of destructive earthquakes in Japan, Zisin (2), 24, 369-371 (in Japanese). Utsu, T. (1972), Large earthquakes near Hokkaido and the expectancy of the occurrence of a large earthquake off Nemuro, Rep. Coord. Comm.Earthq. Pred., 7, 7-13 (in Japanese). Utsu, T. (1973), Temporal variations in travel time residuals of P waves from Nevada sources, J. Phys. Earth, 21, 475-480. Utsu, T. (1974), A three-parameter formula for magnitude-distribution of earthquakes, J. Phys. Earth, 22, 71-85. Utsu, T. (1974), Correlation between great earthquakes along the Nankai trough and destructive earthquakes in western Japan, Rep. Coord. Comm.Earthq. Pred., 12, 120-122 (in Japanese). Utsu, T. (1974), Space-time pattern of large earthquakes occurring off the Pacific coast of the Japanese Islands, J. Phys. Earth, 22, 325-342. Utsu, T. (1975), Correlation between shallow earthquakes in Kwanto region and intermediate earthquakes in Hida region, central Japan, Zisin (2), 28, 303-311 (in Japanese). Utsu, T. 
(1975), Detection of a domain of decreased P-velocity prior to an earthquake, Zisin (2), 28, 435-448 (in Japanese). Utsu, T. (1975), Regional variation of travel-time residuals of P waves from nearby deep earthquakes in Japan and vicinity, J. Phys. Earth, 23, 367-380. Utsu, T. (1977), Probabilities in earthquake prediction, Zisin (2), 30, 179-185 (in Japanese). Utsu, T. (1977), Possibility of a great earthquake in the Tokai district, central Japan, J. Phys. Earth, 25, Suppl., s219-s230. Utsu, T. (1978), An investigation into the discrimination of foreshocks sequences from earthquake swarms, Zisin (2), 31, 129-135 (in Japanese). Utsu, T. (1978), Estimation of parameters in formulas for frequency-magnitude relation of earthquake occurrence - in cases involving a parameter c for the maximum magnitude, Zisin (2), 31, 367-382 (in Japanese). Utsu, T. (1979), Calculation of the probability of success of an earthquake prediction (in the case of Izu-Oshima-Kinkai earthquake of 1978), Rep. Coord. Comm. Earthq. Pred., 21, 65-166 (in Japanese). Utsu, T. (1979), The Nemuro-hanto-oki earthquake, Ten years of the Coodinating Committee for Earthquake Prediction, edited by Coord. Comm. Earthq. Prediction, 79-87 (in Japanese). Utsu, T. (1979), Seismicity of Japan from 1885 through 1925 – a new catalog of earthquakes

of M>=6 felt in Japan and smaller earthquakes which caused damage in Japan, Bull. Earthq. Res. Inst., 54, 253-308 (in Japanese). Utsu, T. (1979), Do large earthquakes in Japan correlate with the age of the moon?, Zisin (2), 32, 495-497 (in Japanese). Utsu, T. (1979), Some remarks on a seismic gap off Miyagi prefecture, Rep. Coord. Comm. Earthq. Pred., 21, 44-46 (in Japanese). Utsu, T. (1980), Spatial and temporal distribution of low-frequency earthquakes in Japan, J. Phys. Earth, 28, 361-384. Utsu, T. (1980), Low-Frequency Earthquake Prediction, Rep. Coord. Comm. Earthq. Pred., 24, 274-278 (in Japanese). Utsu, T. (1981), Seismicity of the Izu Peninsula and its vicinity from 1901 through 1980 with some remarks on the characteristics of foreshock activities, Bull. Earthq. Res. Inst., 56, 25-41. Utsu, T. (1981), Seismicity of central Japan from 1904 through 1925, Bull. Earthq. Res. Inst., 56, 111-137 (in Japanese). Utsu, T. (1982), Probabilities Related to Earthquake Prediction, Rep. Coord. Comm. Earthq. Pred., 28, 341-343 (in Japanese). Utsu, T. (1982), Seismicity of Japan from 1885 through 1925 (Correction and supplement), Bull. Earthq. Res. Inst., 57, 111-117 (in Japanese). Utsu, T. (1982), Catalog of large earthquakes in the region of Japan from 1885 through 1980, Bull. Earthq. Res. Inst., 57, 401-463 (in Japanese). Utsu, T. (1982), Relation between earthquake magnitude scales, Bull. Earthq. Res. Inst., 57, 465-497 (in Japanese). Utsu, T. (1982), Probabilities of earthquake prediction (the second paper), Bull. Earthq. Res. Inst., 57, 499-524 (in Japanese). Utsu, T. (1983), Probabilities associated with earthquake prediction and their relationships, Earthq. Predict. Res., 2, 105-114. Utsu, T. (1984), Estimation of parameters for recurrence models of earthquakes, Bull. Earthq. Res. Inst., 59, 53-66, 1984. Utsu, T. (1984), Long- and short-term seismic risk estimation from seismicity and precursors, Proc. Int. Conf. Continental Seismicity Earthq. Predic., 818-827. 
Utsu, T. (1984), Relation between seismic intensity, epicentral distance and magnitude, Part 1, empirical formula for shallow earthquakes in Japan excluding those occurring off the Pacific coast of eastern Japan, Bull. Earthq. Res. Inst., 59, 219-233 (in Japanese). Utsu, T. (1985), Catalog of large earthquakes in the region of Japan from 1885 through 1980 (Correction and supplement), Bull. Earthq. Res. Inst., 60, 639-642 (in Japanese). Utsu, T. (1986), Relation between seismic intensity, epicentral distance and magnitude, Part 2, empirical formula for uppermost mantle earthquakes in Japan excluding those occurring off the Pacific coast of eastern Japan, Bull. Earthq. Res. Inst., 61, 551-561 (in Japanese). Utsu, T. (1987), Relation between seismic intensity, epicentral distance and magnitude, Part 3, empirical formula for earthquakes occurring off the Pacific coast of eastern Japan, Bull. Earthq. Res. Inst., 62, 289-296 (in Japanese). Utsu, T. (1988), Relation between seismic intensity near the, epicenter, focal depth and magnitude, Bull. Earthq. Res. Inst., 63, 23-31 (in Japanese). Utsu, T. (1988), A catalog of large earthquakes (M>=6) and damaging earthquakes in Japan for the years 1885-1925 in Historical Seismograms and Earthquakes of the World, Eds. W. H. K. Lee, H. Meysrs and K. Shimazaki, (Academic Press, San Diego), 150-161.

Utsu, T. (1988), η-value for magnitude distribution and earthquake prediction, Rep. Coord. Comm. Earthq. Pred., 39, 380-386 (in Japanese). Utsu, T. (1990), Destructive earthquakes in the world - from ancient times to 1989, (Utsu Press, Tokyo), pp.243 (in Japanese). Utsu, T. (1994), Aftershock activity of the 1896 Sanriku Earthquake, Zisin (2), 47, 89-92 (in Japanese). Utsu, T. (1994), Recurrence of several earthquakes at almost equal time intervals, Zisin (2), 47, 93-95 (in Japanese). Utsu, T. (1994), An earthquake that has been considered to be a foreshock of the 1946 Nankai Earthquake, Zisin (2), 47, 183 (in Japanese). Utsu, T., Ogata, Y., and Matsu'ura, R.S. (1995), The centenary of the Omori formula for a decay law of aftershock activity, J. Phys. Earth, 43, 1-33. Ogata, Y., Utsu, T., and Katsura, K. (1995), Statistical features of foreshocks in comparison with other earthquake clusters, Geophys. J. Int., 121, 233-254. Utsu, T. (1999), Representation and analysis of the earthquake size distribution: a historical review and some new approaches, Pure Appl. Geophys., 155, 509-535. Utsu, T. (2002), A list of deadly earthquakes in the world: 1500-2000, in International Handbook of Earthquake & Engineering Seismology Part A, Eds. W. H. K. Lee, H. Kanamori, P. C. Jennings, and C. Kisslinger (Academic Press, San Diego), 691-717. Utsu, T. (2002), Statistical features of seismicity, in International Handbook of Earthquake & Engineering Seismology Part A, Eds. W. H. K. Lee, H. Kanamori, P. C. Jennings, and C. Kisslinger (Academic Press, San Diego), 719-732. Utsu, T. (2002), Relationship between magnitude scales, in International Handbook of Earthquake & Engineering Seismology Part A, Eds. W. H. K. Lee, H. Kanamori, P. C. Jennings, and C. Kisslinger (Academic Press, San Diego), 733-746.

Quasi-static slips before and after large interplate earthquakes inferred from small repeating earthquake data

T. Matsuzawa1, N. Uchida1, T. Okada1, K. Ariyoshi1, T. Igarashi2, and A. Hasegawa1

1Graduate School of Science, Tohoku University, Aoba-ku, Sendai 980-8578, Japan 2 Earthquake Research Institute, University of Tokyo, Bunkyo-ku, Tokyo 113-0032, Japan

Corresponding Author: T. Matsuzawa, [email protected]

Abstract: Many small repeating earthquakes have been found in the northeastern Japan subduction zone. These earthquakes are thought to be caused by repeated rupture of asperities on the plate boundary. Using the small repeating earthquake data, we can infer the space-time distribution of quasi-static slip on the plate boundary. The obtained distribution shows that most large interplate earthquakes are followed by large afterslip in the surrounding areas, and the afterslip sometimes fosters the occurrence of other large earthquakes. The quasi-static slip distribution is therefore very important for evaluating the possibility of rupture of large asperities.

1. Introduction Recent friction laws predict that there must be a nucleation process before the occurrence of a large earthquake (e.g., Scholz, 1998). If we could detect the nucleation phase, we would be able to make short-term earthquake predictions. However, the nucleation size is still open to dispute, and no modern geodetic instruments such as GPS have succeeded in detecting the nucleation process. Therefore, an earthquake prediction strategy relying on the nucleation process might be hopeless. In this paper, we discuss the possibility of predicting large interplate earthquakes based on recent progress in the study of the properties of plate boundaries.

2. Large asperities on the plate boundaries Yamanaka and Kikuchi (2004) investigated the source processes of large interplate earthquakes along the northeastern Japan subduction zone using regional seismic data. They found that in some areas large slips had been repeatedly observed for large earthquakes whose source regions overlapped. This means that asperities (seismic patches which are strongly coupled in the interseismic periods and show large slips when they are ruptured) are persistent on the plate boundaries, and that the asperity model proposed by Lay and Kanamori (1980, 1981) is basically correct.

3. Small asperities on the plate boundaries If asperities are persistent on the plate boundaries, there should be small earthquakes caused by repeated rupture of an identical small asperity. Such small repeating earthquakes have actually been found in California (Ellsworth and Dietz, 1990; Nadeau et al., 1995) and NE Japan (Matsuzawa et al., 2002; Igarashi et al., 2003). Most of the small repeating earthquakes occur periodically, but the intervals become shorter when nearby large asperities are ruptured as large earthquakes.

4. Modified asperity model From the observations mentioned above, the plate boundaries are thought to be composed of asperities (seismic patches) and aseismic areas. The asperities slip seismically as earthquakes but are strongly locked in the interseismic periods. The aseismic areas slip steadily and aseismically in the interseismic periods, but their slip is accelerated as afterslip when nearby large asperities are ruptured as large earthquakes. In terms of the rate- and state-dependent friction law, an asperity and an aseismic area correspond to rate-weakening and rate-hardening regions, respectively. In the original 'asperity model' proposed by Lay and Kanamori (1980, 1981), the regions other than asperities are treated as 'weak zones'. The model described here is very similar to the original asperity model but explicitly incorporates the areas where quasi-static slip (stable sliding; aseismic slip) is dominant.

5. Quasi-static slip distribution estimated from small repeating earthquakes Cumulative slip at a small asperity must coincide with the cumulative aseismic slip in the surrounding region. Thus we can estimate the cumulative aseismic slip using the rupture history of the small repeating earthquakes (Nadeau and McEvilly, 1999; Uchida et al., 2003). The quasi-static slip distribution obtained in this way clearly shows that most large interplate earthquakes are followed by large afterslips in the regions surrounding the main source areas (i.e., asperities). Matsuzawa et al. (2004) investigated the space-time distribution of the quasi-static slips in the earthquake swarms off Sanriku, NE Japan, and proposed a 'chain reaction model' in which seismic slip at a large asperity (an earthquake) accelerates the quasi-static slip in the surrounding area (afterslip) and a large afterslip fosters rupture of the nearby asperities.

6. Implications in long-term earthquake prediction The modified asperity model mentioned above predicts that if an asperity is isolated from other asperities of similar or larger size, it will generate earthquakes of the same size at a constant time interval. In such a case, long-term earthquake prediction is possible; indeed, Matsuzawa et al. (2002) successfully predicted a M4.7 earthquake off Kamaishi. Many small repeating earthquakes showing very regular occurrence are found in California and Japan, and these are also thought to be earthquakes occurring on isolated asperities. However, large isolated asperities are not common, and interactions between asperities cannot be neglected. Indeed, Yamanaka and Kikuchi (2004) revealed that M8-class events off Sanriku ruptured multiple asperities, but the combinations of asperities were not always the same, and the hypocenters (initial break points) were usually located away from the large asperities. Therefore it is very difficult to carry out deterministic long-term earthquake prediction. However, the asperities related to a target earthquake are usually limited in number; we need only consider a finite set of asperity combinations and evaluate the likelihood of each combination to assess the probability of each candidate for the impending large earthquake. In this assessment, we must also consider the possibility that a gigantic earthquake rupturing exceptionally many large asperities (such as the 2004 M9.0 Sumatra-Andaman Islands earthquake) may occur; the probability would be very small but should not be neglected.

7. Implications in short-term earthquake prediction As mentioned above, Matsuzawa et al. (2004) showed that earthquake swarms off Sanriku can be explained by a 'chain reaction model'. Uchida et al. (2005) revealed that the afterslip of the 2003 Tokachi-oki earthquake fostered the rupture of a large asperity off Kushiro, generating a M7.1 earthquake. These studies indicate that interactions among asperities and aseismic regions make deterministic earthquake prediction difficult, but the same interactions also make probabilistic prediction possible in some cases. Of course, we have to know the strain accumulation history of each asperity for at least one cycle to issue a reliable prediction.

8. Conclusion The possibility of earthquake prediction based on the 'modified asperity model' is discussed. The model predicts that isolated seismic patches (asperities; rate-weakening regions) surrounded by aseismic (rate-hardening) regions will rupture regularly with a constant size and repeating interval. In this case, long-term prediction with a small error is possible; for example, the standard error of the occurrence interval is only around 10% of the average repeating interval for the M4.8 off-Kamaishi sequence. In regions where asperities are closely located, the periodicity of the large earthquakes becomes vague because of the interactions among asperities and aseismic slips. In this case, however, we can issue a probabilistic prediction to some extent when large seismic and/or aseismic events occur close to the target region.

9. References Ellsworth and Dietz, USGS Open-file Rept., 90-98, 226-245, 1990. Igarashi et al., J. Geophys. Res., 108, 2249, doi:10.1029/2002JB001920, 2003. Lay and Kanamori, Phys. Earth Planet. Inter., 21, 283-304, 1980. Lay and Kanamori, Maurice Ewing Ser. 4, AGU, Washington, D.C., 579-592, 1981. Matsuzawa et al., Geophys. Res. Lett., 29, 1543, doi:10.1029/2001GL014632, 2002. Matsuzawa et al., Earth Planets Space, 56, 803-811, 2004. Nadeau et al., Science, 267, 503-507, 1995. Nadeau and McEvilly, Science, 285, 718-721, 1999. Scholz, Nature, 391, 37-42, 1998. Uchida et al., Geophys. Res. Lett., 30, 1801, doi:10.1029/2003GL017452, 2003. Uchida et al., Geophys. Res. Lett., submitted, 2005. Yamanaka and Kikuchi, J. Geophys. Res., 109, B07307, doi:10.1029/2003JB002683, 2004.

Dynamic triggering with no and/or little time-delay

Masatoshi Miyazawa1

1Disaster Prevention Research Institute, Kyoto University, Kyoto 611-0011, Japan

Corresponding Author: M. Miyazawa, [email protected]

Abstract: Remote triggering has been observed immediately after the arrival of seismic waves from large teleseismic events. Triggered earthquakes, usually reported as increased seismicity, appear to be excited with some time delay after the wave arrivals, in agreement with a model based on the friction law. Event triggering without time delay appears to have a fluid-related mechanism, and its triggering process should differ from that of the usual triggered earthquakes.

1. Introduction In recent years there have been many observations of dynamic triggering of earthquakes by large earthquakes. After the passage of the seismic waves, earthquakes occur actively and the seismicity remains relatively high after the excitation. The triggering occurs with some time delay after the onset of the stress perturbation, which has been explained by stress loading on the fault planes, assuming the rate- and state-dependent friction law. On the other hand, some event triggering is not accompanied by a time delay and is excited during and/or immediately after the passage of the seismic waves. The 1999 Chi-Chi, Taiwan earthquake (Mw 7.7) triggered volcanic tremors (Miyazawa et al., 2005), and the 2003 Tokachi-oki earthquake (Mw 8.1) triggered deep low-frequency earthquakes (Miyazawa and Mori, 2005). Considering the possible source processes of these events, we discuss why the triggering occurred without any significant time delay.

2. Volcanic tremors triggered by the 1999 Chi-Chi, Taiwan earthquake After the 1999 Chi-Chi earthquake, we observed frequent volcanic tremors at the active Aso volcano, Japan, located about 1400 km from the epicenter. Immediately after the onset of the P-waves, short-period tremor (SPT) began to occur more frequently, for about 2000 sec, and during and after the arrival of the surface waves, two micro-earthquakes were observed around the crater (Fig. 1). The frequent excitation of SPT indicates an increase of gas and fluids, which flowed up to the crater from the source regions of the long-period tremor (LPT) in the 1−1.5 km deep aquifer region. For seismic-wave arrivals from other large teleseismic events occurring between March 1995 and June 2002, triggering of the tremor was observed only from 1998 to 1999. During this period increased thermal supply to the crater was observed while the background activity was relatively low, and the volcanic system beneath the crater was more responsive to external stress perturbations.

Figure 1. Seismograms of the 1999 Chi-Chi, Taiwan earthquake filtered with a pass-band of 2 to 5 Hz at two seismic stations: MGR, located about 10 km south of the crater, and SUN, deployed a few hundred meters from the crater. P- and S-wave arrivals are indicated by black arrows, the two triggered micro-earthquakes by open arrows, and aftershocks of the Chi-Chi earthquake by gray arrows. Pulse-like waves at SUN are short-period tremor.

3. Remote triggering of deep low-frequency earthquakes By using bore-hole seismic data of the High Sensitivity Seismograph Network (Hi-net) deployed by NIED, we detected seismic events triggered in the main part of Japan by the 2003 Tokachi-oki, Japan earthquake (Mw 8.1). We constructed root-mean-square envelopes from velocity waveforms filtered with a pass-band of 5−20 Hz, and statistically compared the amplitudes before and after the passage of the body waves. During the arrivals of the surface waves, we found excitation of deep low-frequency (DLF) earthquakes in western Japan (Δ = 1000–1400 km). These DLF tremors were recently discovered in this region and have predominant frequencies of 1–10 Hz, with magnitudes less than about 2.0 at depths of 30–40 km. A possible source of DLF events is hydro-fracturing in the region where the oceanic crust is subducting. The surface waves from other large teleseismic events (M > 7) have also triggered DLF earthquakes. During the arrivals of surface waves from the 2004 Sumatra-Andaman earthquake (Mw 9.1−9.3), DLF events were observed periodically, correlating especially with the Rayleigh waves. The results indicate that the DLF events were triggered by the surface waves without significant time delay compared with the dominant periods of the surface waves (20−30 sec).
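The envelope construction described above can be sketched in a few lines. As a minimal illustration, a crude FFT mask stands in for the 5−20 Hz band-pass filter actually applied to the Hi-net records, and the sampling rate, window length, and synthetic trace are our own illustrative assumptions:

```python
import numpy as np

def bandpass(x, fs, f_lo, f_hi):
    """Zero-phase band-pass via a crude FFT mask (a stand-in for the
    filter actually applied to the seismograms)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def rms_envelope(x, win):
    """Root-mean-square envelope over a sliding window of `win` samples."""
    return np.sqrt(np.convolve(x ** 2, np.ones(win) / win, mode="same"))

fs = 100.0                          # assumed sampling rate (Hz)
t = np.arange(0.0, 60.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 10.0 * t)  # synthetic in-band (10 Hz) trace
x[len(x) // 2:] *= 3.0              # amplitude step mimicking a triggered burst
env = rms_envelope(bandpass(x, fs, 5.0, 20.0), win=200)
```

Comparing the envelope level before and after a reference time (here the midpoint) is the kind of statistical amplitude comparison the text refers to.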

4. Discussion and Conclusions The SPT at Aso volcano and the DLF earthquakes in western Japan are triggered with little or no time delay. This observation suggests that for such triggering cases with a fluid-related mechanism, models based on the rate-state friction law may not be appropriate, and other mechanisms involving fluid-related sources are responsible. We cannot discount the possibility that our findings reflect only cases in which the physical state of the source regions was close to critical, so that events were easily and immediately triggered. We may not yet have enough circumstantial evidence to conclude that fluids play an important role in triggering without significant time delay.

5. Acknowledgements We used Hi-net data by NIED.

6. References Miyazawa, M. and Mori, J. (2005), Detection of triggered deep low-frequency events from the 2003 Tokachi-oki earthquake, Geophys. Res. Lett., 32, L10307, doi:10.1029/2005GL022539. Miyazawa, M., Nakanishi, I., Sudo, Y., and Ohkura, T. (2005), Dynamic response of frequent tremors at Aso volcano to teleseismic waves from the 1999 Chi-Chi, Taiwan earthquake, J. Volcanol. Geotherm. Res., 147, 173-186.

Application of physical and stochastic models of earthquake clustering to regional catalogs

M. Murru1, R. Console1, F. Catalli1, and G. Falcone1,2

1Istituto Nazionale di Geofisica e Vulcanologia, Rome, Italy 2Dipartimento di Scienze della Terra, Messina University, Italy

Corresponding Author: M. Murru, [email protected]

We show how a previously existing clustering model can be modified by the application of the rate-and-state theory. This application reduces the number of free parameters of the model while giving a physical meaning to each of them.

1. Introduction The increase in occurrence probability for seismic events close in space and time to previous earthquakes has been modeled by both statistical and physical processes. From a statistical viewpoint, the so-called epidemic model (ETAS) introduced by Ogata in 1988 and its variations have become fairly well known in the seismological community (Ogata, 1998; Console and Murru, 2001; Console, Murru, and Lombardi, 2003). Tests on real seismicity and comparisons with a plain time-independent Poisson model through likelihood-based methods have reliably demonstrated their validity. On the other hand, in the last decade many papers have been published on the so-called Coulomb stress change principle, based on the theory of elasticity, showing qualitatively that an increase of the Coulomb stress in a given area is usually associated with an increase of seismic activity. More specifically, the rate-and-state theory developed by Dieterich in the 1990s has given a physical justification to the phenomenon known as the Omori law, according to which a mainshock is followed by a series of aftershocks whose frequency decreases in time as an inverse power law.
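As a minimal sketch of the epidemic clustering just described, the conditional intensity of a basic ETAS-type model (background rate plus magnitude-scaled Omori-law contributions from past events) can be evaluated as follows; all parameter values are illustrative placeholders, not fitted values from the cited studies:

```python
import math

def etas_intensity(t, history, mu=0.01, K=0.02, alpha=1.5, c=0.01, p=1.2, m0=3.0):
    """Conditional intensity of a basic ETAS model: a background rate mu
    plus an Omori-type power-law contribution from every past event,
    scaled exponentially by its magnitude above the threshold m0."""
    rate = mu
    for t_i, m_i in history:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

# one M6.5 "mainshock" at t = 0: the intensity jumps, then decays as a power law
history = [(0.0, 6.5)]
```

Before the event only the background rate mu is active; afterwards the intensity decays toward mu as an inverse power of elapsed time, which is the Omori-law behavior the rate-and-state theory seeks to justify physically.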

2. Application to Japanese and Italian earthquakes

In this study we give an outline of the above-mentioned stochastic and physical models and build an approach by which these models can be merged into a single algorithm and statistically tested (Console, Murru, and Catalli, 2005; Console, Murru, Catalli, and Falcone, 2005). The application to the JMA catalog from 1970 to 2003 shows that the new model, which incorporates the physical concept of the rate-and-state theory and has only two free parameters, performs no worse than the purely stochastic model; the application to the Italian seismic catalog collected by the National Institute of Geophysics and Volcanology (INGV) for the period 1987−2005 shows that it performs no better than the purely stochastic model.

3. Conclusion The better performance obtained with the new model on the Japanese dataset with respect to the Italian catalog can be explained by the different magnitude thresholds of the two catalogs (larger for Japan than for Italy) and by the fact that the location errors in the Italian catalog are larger than the physical dimensions of the earthquake sources in the low magnitude range.

4. References Console, R., Murru, M. (2001), A simple and testable model for earthquake clustering, J. Geophys. Res., 106, 8699-8711. Console, R., Murru, M., Lombardi, A.M. (2003), Refining earthquake clustering models, J. Geophys. Res., 108, 2468, doi:10.1029/2002JB002130. Console, R., Murru, M., and Catalli, F. (2005, in press), Physical and stochastic models of earthquake clustering, Tectonophysics. Console, R., Murru, M., Catalli, F., and Falcone, G. (2005, submitted), Real time forecasts through an earthquake clustering model constrained by the rate-and-state constitutive law: Comparison with a purely stochastic ETAS model, Seismol. Res. Lett. Ogata, Y. (1988), Statistical models for earthquake occurrences and residual analysis for point processes, J. Am. Stat. Assoc., 83, 9-27. Ogata, Y. (1998), Space-time point-process models for earthquake occurrences, Ann. Inst. Statist. Math., 50, 379-402.

Real-time Application of Coulomb Stress Modelling and Related Issues

Suleyman S. Nalbant, Sandy Steacy and John McCloskey

University of Ulster, School of Environmental Sciences, Geophysics Res. Group, Cromore Road, Coleraine, BT52 1SA, Northern Ireland

Corresponding Author: S. S. Nalbant, [email protected]

Abstract: There is a growing demand from the public, governmental decision-making agencies and civil protection groups for an updated seismic hazard assessment following a major earthquake, in real or near real time. Given that there is strong evidence of first-order stress relationships between earthquakes occurring close in space and time (minutes to several years), can we calculate the stress perturbation reliably in near real time and thereby help decision makers and civil protection agencies? Coulomb stress perturbations due to a preceding event influence the location and probably the timing of subsequent events. Is the Coulomb stress technique a feasible method for updating seismic hazard, and can it be applied in near real time? What are the challenges and conditions for making such a calculation and assessment reliable? What would be the protocols for passing the results to end-users and informing the public? These issues will be discussed in the light of results obtained from a number of Californian, Turkish and Sumatran earthquakes.

Aftershock Relaxation for Japanese and Sumatra Earthquakes

K. Z. Nanjo1, B. Enescu2, R. Shcherbakov3, D. L. Turcotte4, T. Iwata1, and Y. Ogata1

1The Institute of Statistical Mathematics, Minato-ku, Tokyo 106-8569, Japan 2Earthquake Hazards Division, Disaster Prevention Research Institute, Kyoto University, Gokasho, Uji, Kyoto 611-0011, Japan 3Center for Computational Science and Engineering, c/o Department of Physics, University of California at Davis, One Shields Avenue, Davis, CA 95616, USA 4Department of Geology, University of California at Davis, One Shields Avenue, Davis, CA 95616, USA

Corresponding Author: K. Z. Nanjo, [email protected]; [email protected]

Abstract: We discuss the decay of aftershock activity for Japanese and Sumatra earthquakes. One approach is the modified Omori law, which represents the rate of aftershock occurrence dN/dt with time t in the form dN/dt = τ^-1 (1 + t/c)^-p, where τ, c, and p are parameters. Our alternative approach is to combine three empirical scaling laws that aftershock sequences approximately satisfy: the modified Omori law for the rate of aftershock occurrence, the Gutenberg-Richter law for the frequency-magnitude relation, and Båth's law for the difference between the magnitude of a main shock and its largest aftershock. This combination results in a so-called generalized Omori law, under which the parameter c is no longer a constant but scales with the lower cutoff magnitude m, while the parameter τ is a constant independent of m. The proposed law is tested using the aftershocks of six relatively recent and large earthquakes that occurred in Japan and Sumatra.

1. Introduction It is a remarkable fact that an earthquake of large magnitude is usually followed by a sequence of smaller earthquakes called aftershocks. These aftershocks occur close to the focus of the main earthquake, and their activity gradually decreases with time, lasting over many months. The modified Omori law proposed by Utsu (1961) describes the temporal decay of aftershock activity and is given in the form

dN/dt = τ^-1 (1 + t/c)^-p, (1) where dN/dt is the rate of occurrence of aftershocks greater than m, t is the time since the main shock, c and τ are characteristic times, and p is the exponent called the “p-value”. Shcherbakov et al. (2004, 2005) combined three empirical scaling laws that aftershock sequences approximately satisfy: the modified Omori law for the rate of aftershock occurrence in eq. (1), the Gutenberg-Richter law (Gutenberg and Richter, 1954) for the frequency-magnitude relation, log10 N = A − bm (A and b are constants and N is the number of earthquakes with magnitude larger than or equal to m), and Båth's law for the difference between the magnitude of a main shock and its largest aftershock (Båth, 1965; Richter, 1958). This combination results in a so-called generalized Omori law. The main difference from the original form of the modified Omori law in eq. (1) is that the characteristic time c is no longer a constant but scales with the lower cutoff magnitude m. The law takes the form given below:

dN/dt = τ0^-1 {1 + t/c(m)}^-p, (2)

with

c(m) = τ0 (p − 1) 10^(b(mms − Δm* − m)), (3) where τ0 is a constant, mms is the main shock magnitude, and Δm* = mms − A/b. The primary objective of this paper is to test the validity of the aftershock statistics proposed by Shcherbakov et al. (2004, 2005), using six earthquakes: the Jan. 17, 1995 Kobe earthquake (mms = 7.3), the Oct. 6, 2000 Tottori earthquake (mms = 7.3), the May 26, 2003 Miyagi offshore earthquake (mms = 7.1), the Oct. 23, 2004 Niigata Chuetsu earthquake (mms = 6.8), the Dec. 26, 2004 Northern Sumatra earthquake (mms = 9.0), and the Mar. 20, 2005 Fukuoka earthquake (mms = 7.0). For the Japanese earthquakes, we use the data catalog maintained by the Japan Meteorological Agency (JMA). For the Sumatra earthquake, we use the ANSS (Advanced National Seismic System) composite catalog, a world-wide earthquake catalog (http://quake.geo.berkeley.edu/anss/).
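Eqs. (2) and (3) can be evaluated directly. The sketch below plugs in the Kobe-sequence values quoted in Section 2 (b = 0.782, A = 4.846, τ0 = 0.000508 days, p = 1.16); the function names are ours and the snippet only illustrates the magnitude scaling of c, not a new fit:

```python
def c_of_m(m, mms, dm_star, b, tau0, p):
    """Characteristic time c(m) of eq. (3): c grows as the cutoff m is lowered."""
    return tau0 * (p - 1.0) * 10.0 ** (b * (mms - dm_star - m))

def rate(t, m, mms, dm_star, b, tau0, p):
    """Aftershock rate dN/dt of eq. (2) for events above cutoff magnitude m."""
    return (1.0 / tau0) * (1.0 + t / c_of_m(m, mms, dm_star, b, tau0, p)) ** (-p)

# Kobe-sequence values quoted in Section 2 (t in days; illustrative only)
mms, b, A, tau0, p = 7.3, 0.782, 4.846, 0.000508, 1.16
dm_star = mms - A / b  # ~ 1.10
```

Integrating eq. (2) over all time gives a total count c(m)/(τ0(p − 1)) = 10^(A − bm), so this form of c(m) is exactly what makes the generalized law consistent with the Gutenberg-Richter relation.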

2. Application to Japanese and Sumatra earthquakes We consider the Kobe earthquake first. We study the cumulative number of aftershocks with magnitude larger than or equal to m as a function of magnitude m on a semi-logarithmic graph. A fit to the Gutenberg-Richter law gives b = 0.782 and A = 4.846, so that Δm* = mms − A/b = 1.10. The rate of occurrence of aftershocks dN/dt in number per day is plotted in Fig. 1 as a function of time t for the magnitude cutoffs m1 = 2.0 (■), m2 = 2.5 (●), m3 = 3.0 (▲), and m4 = 3.5 (▼). The occurrence rates for m1 = 2.0 and m2 = 2.5 in the time period 0.01 < t (days) ≤ 0.1 are indicated by unfilled symbols. These rates may partially reflect the effect of missing aftershocks, so we do not include them in the test of the generalized Omori law in eqs. (2) and (3).

Figure 1 The rates of occurrence of aftershocks with magnitudes greater than m in number per day, dN/dt, are given for 1000 days after the Kobe earthquake. Lower cutoff magnitudes are

taken to be m1 = 2.0, m2 = 2.5, m3 = 3.0, and m4 = 3.5. The best fit of eqs. (2) and (3) is obtained with τ0 = 0.000508 days and p = 1.16.

Figure 2 The rates of occurrence of aftershocks with magnitudes greater than m in number per day, dN/dt, are given for 1000 days after the Tottori earthquake (left) and for 251 days after the Sumatra earthquake (right).

Using the maximum likelihood method, we perform a statistical analysis known as point-process modeling (e.g., Ogata, 1999) to obtain the optimal fit of the prediction in eqs. (2) and (3) to the data. This analysis is carried out with the values mms = 7.3, Δm* = 1.10, and b = 0.782. The pair τ0 = 0.000508 days and p = 1.16 gives the optimal fit. Using these parameter values, we add the predicted time dependence of the aftershock occurrence rate (solid lines) to Fig. 1; the predictions agree well with the data. The same analysis has been performed for the other earthquakes. Two of them are shown in Fig. 2 (left) for the Tottori earthquake and Fig. 2 (right) for the Sumatra earthquake. The occurrence rates for m1 = 2.0 in the time period 0.01 ≤ t (days) < 0.1 for the Tottori earthquake (left) are indicated by unfilled symbols, as are the rates for m1 = 4.5 in the period 0.01 ≤ t (days) < 10 and for m1 = 5.0 in the period 0.01 ≤ t (days) < 1 for the Sumatra earthquake (right). As in the Kobe case, these rates may partially reflect the effect of missing aftershocks, so they are not included in our test. Also for the Sumatra earthquake (right), the rates denoted by unfilled symbols after t = 16 days are not used in our analysis because they are strongly disturbed by aftershocks triggered by the largest aftershock, of magnitude 8.7. The parameter values shown in the figures give the optimal fit; using them, we show the predictions of eqs. (2) and (3) as solid lines, which agree well with the data. For the Miyagi, Niigata, and Fukuoka cases, we obtain results consistent with those for the Kobe, Tottori, and Sumatra cases.
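The point-process fitting step maximizes the log-likelihood log L = Σi log λ(ti) − ∫0^T λ(t) dt of a non-stationary Poisson process. A simplified sketch is given below, with a crude grid search standing in for the numerical optimizer actually used, and with c treated as known for brevity (in the real analysis c follows from eq. (3)):

```python
import math

def log_likelihood(times, T, tau0, p, c):
    """Point-process log-likelihood log L = sum_i log lam(t_i) - int_0^T lam dt
    for the non-stationary Poisson rate lam(t) = tau0^-1 (1 + t/c)^-p."""
    s = sum(-math.log(tau0) - p * math.log(1.0 + t / c) for t in times)
    # closed-form integral of lam over [0, T] (valid for p != 1)
    integral = (c / (tau0 * (p - 1.0))) * (1.0 - (1.0 + T / c) ** (1.0 - p))
    return s - integral

def fit(times, T, c):
    """Crude grid search over (tau0, p); a stand-in for a proper optimizer."""
    best = None
    for p in (1.0 + 0.02 * k for k in range(1, 40)):
        for tau0 in (10.0 ** (-e / 4.0) for e in range(4, 20)):
            ll = log_likelihood(times, T, tau0, p, c)
            if best is None or ll > best[0]:
                best = (ll, tau0, p)
    return best

# toy sequence of aftershock occurrence times (days), for illustration only
best = fit([0.05, 0.1, 0.3, 0.8, 2.0, 6.0], 10.0, 0.05)
```

The returned triple holds the maximized log-likelihood and the corresponding (τ0, p); real analyses use continuous optimization rather than a grid.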

3. Conclusion In this paper, the applicability of a model for the aftershock decay rate has been tested for Japanese and Sumatra earthquakes. This model, given in eqs. (2) and (3), is the generalized Omori law proposed by Shcherbakov et al. (2004, 2005), who combined three laws: the Gutenberg-Richter law, Båth's law, and the modified Omori law in eq. (1). The main advantage of this new law is that it describes the entire sequence for different cutoff magnitudes from a state defined immediately after the main shock. The proposed law shows that τ is a constant, τ = τ0, and that c scales with m. We used point-process modeling and the maximum likelihood method to find the optimal fit of the prediction in eqs. (2) and (3) to the data. With this optimal fit, our analyses of the Japanese and Sumatra earthquakes show that the law provides a good approximation for the decay of aftershock activity.

4. Acknowledgement This work is supported by the National Science Foundation (USA) under grant NSF ATM-03-27571 (DLT, RS); NASA/JPL Grant 1247848 (RS); and JSPS Research Fellowships (BE, KZN). We would like to acknowledge JMA and ANSS for the use of their catalogs.

5. References Båth, M. (1965), Lateral inhomogeneities in the upper mantle, Tectonophysics, 2, 483-514. Gutenberg, B., and Richter, C.F., Seismicity of the earth and associated phenomena, 2nd ed., (Princeton Univ. Press, Princeton, 1954). Ogata, Y. (1999), Seismicity analysis through point-process modeling: A review, Pure Appl. Geophys., 155, 471-507. Richter, C. F., Elementary seismology, (Freeman, San Francisco, 1958). Shcherbakov, R., Turcotte, D. L., and Rundle, J. B. (2004), A generalized Omori’s law for earthquake aftershock decay, Geophys. Res. Lett., 31, L11613, doi:10.1029/2004GL019808. Shcherbakov, R., Turcotte, D. L., and Rundle, J. B. (2005), Aftershock statistics, Pure Appl. Geophys., 162(6-7), 1051-1076, doi:10.1007/s00024-004-2661-8. Utsu, T. (1961), A statistical study on the occurrence of aftershocks, Geophys. Mag., 30, 521-605.

COULOMB STRESS CHANGES FROM GROUPED MAGNITUDE 7+ EARTHQUAKES IN SOUTH ITALY

G. Neri, B. Orecchio, D. Presti

Earth Science Department, Messina University, Salita Sperone, 31, 98166 Messina (ME) - Italy

Corresponding Author: G. Neri, [email protected]

Neri et al. (2005) investigated the seismic, geologic and geodynamic information available for the Calabro-Sicilian region in South Italy and proposed on these grounds a unifying view of the magnitude 7+ earthquakes that have occurred in the region over the last millennium. These earthquakes define a sort of space-time megacluster: they occurred in a relatively short interval of about three centuries and were located in a narrow curvilinear extensional belt passing through Western Calabria and Eastern Sicily (the WCES belt), which includes a nearly continuous north-to-south succession of east-dipping normal faults. In Neri et al.'s (2005) reconstruction of the seismogenic process producing the WCES megacluster, faulting is activated by WNW-ESE extension induced by residual rollback of the Ionian subduction slab.

In the present work, we first estimated the Poisson Dispersion Coefficient (PDC) of the time series of M7+ earthquakes occurring in the study region over the millennium. PDC is given by

PDC = (<ni^2> − <ni>^2) / <ni>, where ni is the number of events in the i-th period of assigned length Δt, and the symbol < > denotes the average over all intervals of length Δt that can be extracted from the complete period under investigation. PDC is 1 for a Poisson process, and the departure from 1 of the PDC of a given sequence measures the level of interdependence between its events. By this procedure the occurrence of M7+ earthquakes in the WCES belt was shown to be highly non-random. Then, we estimated the Coulomb stress changes induced by the individual events of the cluster on the subsequent events of the same cluster. Relatively small Coulomb stress changes were found, leading us to suggest that the clustering tendency of the events is not due to direct interaction between the events but to other factors. We tentatively propose that, in the framework of the extensional process generated by the subduction trench retreat, the nearly continuous north-to-south succession of east-dipping normal faults in the belt (comparable to a sort of megafault consisting of partially linked segments) activates with clustered-in-time failures, giving rise to the observed magnitude 7+ earthquakes.
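The dispersion-coefficient screening used above can be sketched as follows; the window length and the toy catalogues are illustrative only:

```python
def pdc(event_times, t0, t1, dt):
    """Poisson Dispersion Coefficient: variance/mean of the event counts in
    consecutive windows of length dt.  It is ~1 for a Poisson process, well
    above 1 for clustered occurrence, and below 1 for quasi-periodic events."""
    windows = []
    t = t0
    while t + dt <= t1:
        windows.append((t, t + dt))
        t += dt
    counts = [sum(1 for e in event_times if a <= e < b) for a, b in windows]
    mean = sum(counts) / len(counts)
    var = sum((n - mean) ** 2 for n in counts) / len(counts)
    return var / mean

# toy catalogues over one "millennium": ten events bunched into a single
# century versus ten events spread evenly, counted in Δt = 100 yr windows
clustered = [500.0 + k for k in range(10)]
regular = [50.0 + 100.0 * k for k in range(10)]
```

For the clustered toy catalogue the coefficient is well above 1, for the evenly spread one well below 1, which is the contrast the test exploits.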

Finally, the space distribution of the earthquakes under investigation indicates a silent zone along the megafault, corresponding to the 30 km-long Scaletta-Fiumefreddo segment in Eastern Sicily (Neri et al., 2005). Since recurrence times are on the order of a millennium for M7 earthquakes along different parts of the WCES belt, and historical data for destructive earthquakes in the 1st millennium AD are not detailed enough to allow reliable identification of the source zones, it cannot be stated whether or not there is a late-stage seismic gap for a large earthquake in eastern Sicily. Other workers are currently performing tomographic investigations of the Gutenberg-Richter b-value in this region, which could provide evidence of asperity or aseismic-creep mechanisms in the Scaletta-Fiumefreddo silent zone.

References Neri, G., Oliva, G., Orecchio, B., and Presti, D. (2005). A Possible Seismic Gap Within a Highly Seismogenic Belt Crossing Calabria and Eastern Sicily, Italy. Submitted to BSSA

Contributions of Professor Tokuji Utsu to Statistical Seismology and Recent Developments

Yosihiko Ogata1,2

1The Institute of Statistical Mathematics, Minato-ku, Tokyo 106-8569, Japan 2 Graduate University of Advanced Study, Minato-ku, Tokyo 106-8569, Japan

Corresponding Author: Y. Ogata, [email protected]

Abstract and Introduction: The late Professor Tokuji Utsu made many contributions to seismology, especially to the study of seismicity. It is beyond my ability to introduce all of his important contributions; his extensive works have been among the most frequently cited in the seismological community in Japan. I can only discuss the works that are related to statistical seismology and that have been important for my own research. In particular, I have been influenced very much by the series of his extensive studies on aftershocks and earthquake statistics [Utsu, 1969, 1970, 1971 and 1972].

Omori-Utsu formula. One of his prominent works is the empirical formula for the decay rate of aftershocks, which has been called the modified Omori formula, λ(t) = K(t + c)^-p (K, c, p constants), where t is the elapsed time from the mainshock, usually in days. The formula was modestly named by Utsu [1961]; however, it is a quite substantial extension of the original Omori formula [Omori, 1894], which corresponds to the case p = 1. Although Utsu [1957] first used the form λ(t) = Kt^-p, he became confident of the form including the scale coefficient c (see below for the reason). On this occasion, I would like to call it the “Omori-Utsu formula”, after Marsan and Nalbant [2005]. Utsu [1961, 1969] analyzed more than 100 aftershock sequences in Japan and worldwide using doubly logarithmic diagrams of occurrence rate against the elapsed time from the mainshock, and he showed that the characterizing parameter, the p-value, differed from place to place, varying from 0.9 to 1.8, and is thus a geophysical quantity. In fact, the p-value is shown to be correlated with the crustal temperature and related to the stress relaxation of the main rupture [Mogi, 1967; Kisslinger and Jones, 1991; Creamer and Kisslinger]. The p-value may also correlate with the degree of heterogeneity of the mainshock's fault zone. The p-value is usually independent of the threshold magnitude [Utsu, 1962], which is equivalent to the fact that the b-value of the Gutenberg-Richter (G-R) magnitude-frequency law is constant throughout the aftershock activity. The p-value and other parameters of the Omori-Utsu formula are nowadays estimated by the maximum likelihood procedure directly from the series of occurrence times, assuming a non-stationary Poisson process with intensity function λ(t) [Ogata, 1983].
The duration of aftershock activity can exceed a century [Utsu, 1969; Utsu et al., 1995] before the activity merges into the normal activity of the region; the time at which this occurs is independent of the magnitude threshold of the sequence [Ogata and Shimazaki, 1984].

Further studies on empirical laws of aftershocks. Utsu [1970] discovered that the secondary aftershock sequence following a large aftershock also obeys the Omori-Utsu law. Utsu [1970] provided the standard formula of aftershock occurrence rate, whose size is proportional to the exponential of the magnitude of the mainshock; this is in fact the numerical basis of aftershock probability forecasting [Reasenberg and Jones, 1989] in California and Japan. These works by Utsu were integrated into the ETAS point-process model [Ogata, 1986, 1988, 1989]. His study of the aftershock area as a function of the magnitude of the mainshock became well known as the Utsu-Seki law [Utsu and Seki, 1954]. This is probably the first seismic scaling law related to the moment magnitude. One interesting discovery is that the scaling law for inland earthquakes is clearly distinguished from that for offshore earthquakes [Utsu, 1969]. His scaling law is a basis for modeling the spatial aftershock distribution in the space-time ETAS models [Ogata, 1998, 2003; Ogata et al., 2004]. A comprehensive review of the studies of the decay law of aftershock activity and their development is given by Utsu et al. [1995].

Remarks on the c-value of the Omori-Utsu formula. True c-values are difficult to estimate accurately unless very careful observation is started immediately after the mainshock. When data are taken from ordinary earthquake catalogs, the estimated c-value may partially reflect the effect of incomplete detection of small aftershocks shortly after the mainshock. There is an opinion that the c-value is essentially 0 and that all reported positive c-values result from the above-mentioned incompleteness. If c = 0, λ(t) above diverges at t = 0. Kagan and Knopoff [1981] explained this difficulty by considering the mainshock to be a multiple occurrence of numerous sub-events in a very short time interval; thus the stochastic seismicity model of Kagan and Knopoff [1987] uses this transfer function after certain dead times following the earthquakes, which is actually different from the Omori-Utsu transfer function of the ETAS model, though both are extensions of the Hawkes self-exciting point-process model [Hawkes and Adamopoulos, 1973]. In fact, positive c-values have been obtained for a number of adequately observed and selected datasets. Moreover, consider a case where the aftershock activity obeys the law λ(t) = K / t^p, but the observational data fit the relation λ(t) = K / (t + c)^p owing to the above-mentioned incompleteness. In this case, 100 (1 - 2^(-p)) percent of the aftershocks would have to be missing at t = c (50% if p = 1 and about 60% if p = 1.3). Such a large percentage of missing events seems unlikely for many of the aftershock sequences for which relatively large c-values have been estimated.
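The missing-fraction arithmetic in the last paragraph is easy to check: if the true rate is K / t^p but the data fit K / (t + c)^p, the apparent-to-true ratio at t = c is (2c)^(-p) / c^(-p) = 2^(-p). A minimal sketch (illustrative only):

```python
def missing_percent_at_c(p):
    """Percent of aftershocks missing at elapsed time t = c, if the true
    rate is K / t^p but observations follow K / (t + c)^p.
    Apparent/true ratio at t = c is (2c)^(-p) / c^(-p) = 2^(-p)."""
    return 100.0 * (1.0 - 2.0 ** (-p))

print(missing_percent_at_c(1.0))   # 50.0
print(missing_percent_at_c(1.3))   # about 59.4, i.e. roughly 60%
```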

Magnitude distributions. The b-value of the G-R magnitude-frequency law was originally estimated graphically, by measuring the slope of a straight line fitted to the plot of log-frequency against magnitude. Assuming the exponential distribution implied by the G-R law, Utsu [1965] derived an estimate of the b-value by the method of moments, which is nothing but the MLE [Aki, 1965]. A compensation must be applied to the MLE when discrete (rounded) magnitude values are used [Utsu, 1970]. Utsu [1967] provided a testing procedure for the difference of b-values between two regions. For example, the b-value of mainshocks is seen to differ from that of aftershocks in and near Japan [Utsu, 1969]. To measure a deviation from the G-R law, Utsu [1988] proposed the η-estimate, based on the second-order moment of the empirical magnitude distribution.

Discrimination of foreshocks from earthquake swarms is a difficult problem, because no easily recognizable statistical differences are found between them. Utsu [1978] searched for the sequential pattern of the three largest magnitude values in a cluster that provides the highest probability of being foreshocks, using the JMA hypocenter data. Utsu [1988] showed that the η-value in foreshocks tends to be smaller than that of aftershocks in the same sequence. Ogata, Utsu and Katsura [1995, 1996] investigated data sets of earthquake clusters to discriminate features of foreshocks from earthquakes of other cluster types in a statistical sense, and found several features of some predictive value: foreshocks are more closely spaced in time than either swarms or aftershocks, foreshocks are closer together in space than other types of events, and foreshock magnitudes are more likely to increase chronologically than in the other types. By modeling such discriminating features, Ogata, Utsu and Katsura [1996] developed probability forecasts of an earthquake cluster being of foreshock type.

Multi-element prediction formula. The occurrence probability of an earthquake is enhanced when we assess the probability using composite anomalies from independent records. Utsu [1977, 1978b] calculated such a probability and applied it retrospectively to the several reported long-, intermediate- and short-term anomalies preceding the Izu-Oshima Kinkai Earthquake of 1978, showing a high probability. Aki [1981] provided a simple, approximate version of this formula, assuming that the probability due to each anomaly is small. Ogata, Utsu and Katsura [1996] made use of a logit model, as an extended version of this formula, to assess the probability of foreshocks based on the location of the first event and the inter-event features (time, distance and magnitude) of a cluster.

Causal relation of seismic activity between two regions. Relatively clear correlation is seen in subduction zones. Utsu [1975] found such a significant correlation between shallow and intermediate-depth earthquakes. Since earthquakes have a tendency to cluster, the correlation coefficient between two regions easily exceeds the significance level expected for Poissonian occurrence, even if there is no correlation. This paper was my first occasion to learn of Professor Utsu's work, and I was motivated to apply a point-process model to his compiled dataset to show a one-way causal relationship [Ogata, 1982; Ogata and Katsura, 1986; see also the LINLIN program of the IASPEI Software by Utsu and Ogata, 1997].

Utsu catalog of large earthquakes in Japan 1885-1980. A homogeneous catalog of earthquakes of M >= 6 in and near Japan was compiled by Utsu [1979, 1982, 1985], based on both instrumental and macroseismic data in the determination of focal parameters. Based on the Utsu catalog, Ogata [1986, 1988] compared the goodness-of-fit of the ETAS model with that of other models for the earthquakes from a certain active offshore region, showed its best fit, and revealed relative quiescence before some great earthquakes.

Selected references of Utsu's works
Utsu, T. and Seki, A. (1954) Relation between the area of aftershock region and the energy of the main shock, Zisin(2) (J. Seismol. Soc. Jap.), 10, 233-240 (in Japanese).
Utsu, T. (1957) Magnitude of earthquakes and occurrence of their aftershocks, Zisin(2) (J. Seismol. Soc. Jap.), 10, 35-45 (in Japanese).
Utsu, T. (1961) A statistical study on the occurrence of aftershocks, Geophys. Mag., 30, 521-605.
Utsu, T. (1962) On the nature of three Alaskan aftershock sequences of 1957 and 1958, Bull. Seism. Soc. Amer., 52, 279-297.
Utsu, T. (1965) A method for determining the value of b in a formula log n = a - bM showing the magnitude-frequency relation for earthquakes, Geophys. Bull. Hokkaido Univ., 13, 99-103 (in Japanese).
Utsu, T. (1967) Statistical significance test of the difference in b-value between two earthquake groups, J. Phys. Earth, 14, 37-40.
Utsu, T. (1969) Aftershocks and earthquake statistics (I) - Some parameters which characterize an aftershock sequence and their interrelations, J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 121-195.
Utsu, T. (1970) Aftershocks and earthquake statistics (II) - Further investigation of aftershocks and other earthquake sequences based on a new classification of earthquake sequences, J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 197-266.
Utsu, T. (1971) Aftershocks and earthquake statistics (III) - Analyses of the distribution of earthquakes in magnitude, time, and space with special consideration to clustering characteristics of earthquake occurrence (1), J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 379-441.
Utsu, T. (1972) Aftershocks and earthquake statistics (IV) - Analyses of the distribution of earthquakes in magnitude, time, and space with special consideration to clustering characteristics of earthquake occurrence (2), J. Fac. Sci. Hokkaido Univ., Ser. VII, 4, 1-42.
Utsu, T. (1974) Space-time pattern of large earthquakes occurring off the Pacific coast of the Japanese Islands, J. Phys. Earth, 22, 325-342.
Utsu, T. (1975) Correlation between shallow earthquakes in Kwanto region and intermediate earthquakes in Hida region, central Japan, Zisin(2) (J. Seismol. Soc. Jap.), 28, 303-311 (in Japanese).
Utsu, T. (1977) Probabilities in earthquake prediction, Zisin(2) (J. Seismol. Soc. Jap.), 30, 179-185 (in Japanese).
Utsu, T. (1978a) An investigation into the discrimination of foreshock sequences from earthquake swarms, Zisin(2) (J. Seismol. Soc. Jap.), 31, 129-135 (in Japanese).
Utsu, T. (1978b) Calculation of the probability of success of an earthquake prediction (In the case of Izu-Oshima Kinkai Earthquake of 1978), Report of the Coordinating Committee for Earthquake Prediction, 31, 129-135, Geographical Survey Institute of Japan (in Japanese).
Utsu, T. (1979) Seismicity of Japan from 1885 through 1925 - a new catalog of earthquakes of M >= 6 felt in Japan and smaller earthquakes which caused damage in Japan, Bull. Earthq. Res. Inst., 54, 253-308 (in Japanese).
Utsu, T. (1982, 1985) Catalog of large earthquakes in the region of Japan from 1885 through 1980, Bull. Earthq. Res. Inst., 57, 253-308 (in Japanese); Correction and supplement, ibid., 60, 639-642 (in Japanese).
Utsu, T. (1983) Probabilities associated with earthquake prediction and their relationships, Earthq. Predict. Res., 2, 105-114.

Utsu, T. (1988) η-value for magnitude distribution and earthquake prediction, Report of the Coordinating Committee for Earthquake Prediction, 39, 380-386, Geographical Survey Institute of Japan (in Japanese).
Utsu, T., Y. Ogata, and R. S. Matsu'ura (1995) The Centenary of the Omori formula for a decay law of aftershock activity, J. Phys. Earth, 43, 1-133.
Ogata, Y., T. Utsu, and K. Katsura (1995) Statistical features of foreshocks in comparison with other earthquake clusters, Geophys. J. Int., 121, 233-254.
Ogata, Y., T. Utsu, and K. Katsura (1996) Statistical discrimination of foreshocks from other earthquake clusters, Geophys. J. Int., 127, 17-30.
Utsu, T. and Y. Ogata (1997) Statistical analysis of seismicity, in Algorithms for Earthquake Statistics and Prediction (Healy, J. H., V. I. Keilis-Borok, and W. H. K. Lee, Eds.), IASPEI Software Library 6, 13-94.
Ogata, Y. and Utsu, T. (1999) Real time statistical discrimination of foreshocks from other earthquake clusters, Tokei-Suri (Proc. Inst. Statist. Math.), 47, No. 1, 223-241 (in Japanese).
Utsu, T. (2002) A list of deadly earthquakes in the world: 1500-2000, in International Handbook of Earthquake and Engineering Seismology (W. H. K. Lee et al., Eds.), Part A, 691-717, Academic Press.
Utsu, T. (2002) Statistical features of seismicity, in International Handbook of Earthquake and Engineering Seismology (W. H. K. Lee et al., Eds.), Part A, 719-732, Academic Press.
Utsu, T. (2003) Historical development of seismology in Japan, in International Handbook of Earthquake and Engineering Seismology (W. H. K. Lee et al., Eds.), Part B, Academic Press, to be published.
Utsu, T. (1999) Seismicity Studies: A Comprehensive Review, University of Tokyo Press, 876 pp.

Related references
Aki, K. (1965) Maximum likelihood estimate of b in the formula log N = a - bM and its confidence limits, Bull. Earthq. Res. Inst., Univ. Tokyo, 43, 237-239.
Aki, K. (1981) A probabilistic synthesis of precursory phenomena, in Earthquake Prediction: An International Review (D. W. Simpson and P. G. Richards, Eds.), A.G.U., Washington, D.C., 566-574.
Creamer, F. H. and Kisslinger, C. (1993) The relation between temperature and the Omori decay parameter for aftershock sequences near Japan, EOS, 74, No. 43, Supplement, 417.
Hawkes, A. G. and Adamopoulos, L. (1973) Cluster models for earthquakes - Regional comparisons, Bull. Int. Stat. Inst., 45, 3, 454-461.
Kagan, Y. Y. and Knopoff, L. (1981) Stochastic synthesis of earthquake catalogs, J. Geophys. Res., 86, 2853-2862.
Kagan, Y. Y. and Knopoff, L. (1987) Statistical short-term earthquake prediction, Science, 236, 1563-1567.
Kisslinger, C. and L. M. Jones (1991) Properties of aftershocks in southern California, J. Geophys. Res., 96, 11947-11958.
Marsan, D. and Nalbant, S. S. (2005) Methods for measuring seismicity rate changes: A review and a study of how the Mw 7.3 Landers earthquake affected the aftershock sequence of the Mw 6.1 Joshua Tree earthquake, Pure and Applied Geophysics, 162, 1151-1186.
Mogi, K. (1967) Earthquakes and fractures, Tectonophysics, 5, 35-55.
Ogata, Y. (1983) Estimation of the parameters in the modified Omori formula for aftershock frequencies by the maximum likelihood procedure, J. Phys. Earth, 31, 115-124.
Ogata, Y. (1986) Statistical models for earthquake occurrences and residual analysis for point processes, Mathematical Seismology (I), 228-281, Institute of Statistical Mathematics, Tokyo.
Ogata, Y. (1988) Statistical models for earthquake occurrences and residual analysis for point processes, Journal of the American Statistical Association, 83, No. 401, 9-27.
Ogata, Y. (1989) Statistical model for standard seismicity and detection of anomalies by residual analysis, Tectonophysics, 169, 159-174.
Ogata, Y. (1998) Space-time point-process models for earthquake occurrences, Ann. Inst. Statist. Math., 50, 379-402.
Ogata, Y. (2001) Exploratory analysis of earthquake clusters by likelihood-based trigger models, Festschrift Volume for Professor Vere-Jones, J. Applied Probability, 38A, 202-212.
Ogata, Y. (2004) Space-time model for regional seismicity and detection of crustal stress changes, J. Geophys. Res., 109, No. B3, B03308, doi:10.1029/2003JB002621.
Ogata, Y. and Shimazaki, K. (1984) Transition from aftershock to normal activity, Bull. Seismol. Soc. Am., 74, 5, 1757-1765.
Ogata, Y., Katsura, K. and Tanemura, M. (2003) Modelling of heterogeneous space-time earthquake occurrences and its residual analysis, Applied Statistics (J. Roy. Stat. Soc. Ser. C), 52, Part 4, 499-509.
Omori, F. (1894) On the aftershocks of earthquakes, J. Coll. Sci. Imp. Univ. Tokyo, 7, 111-200.
Reasenberg, P. A. and Jones, L. M. (1989) Earthquake hazard after a mainshock in California, Science, 243, 1173-1176.

Relative quiescence reported before the occurrence of the largest aftershock (M5.8) with likely scenarios of precursory slips considered for the stress-shadow covering the aftershock area

Yosihiko Ogata1,2

1The Institute of Statistical Mathematics, Minato-ku, Tokyo 106-8569, Japan 2 Graduate University of Advanced Study, Minato-ku, Tokyo 106-8569, Japan

Corresponding Author: Y. Ogata, [email protected]

Abstract: Monitoring of aftershock sequences to detect lowered activity relative to the modeled rate (the relative quiescence) is becoming realistic and practical for predicting an enhanced likelihood of a substantially large aftershock, or even of another earthquake of similar size or larger. A significant relative quiescence in the aftershock sequence of the March 2005 earthquake of M7.0 off the western coast of Fukuoka, Japan, was reported two weeks before the largest aftershock of M5.8, which also hit the Kyushu District. The relative quiescence is discussed in relation to the stress-shadowing that inhibits the activity, due to probable precursory slips, which are speculated upon retrospectively in more detail.

1. Introduction
On 20th March 2005, a strong earthquake of M7.0 took place off the coast of western Fukuoka Prefecture, Kyushu District (cf. Figure 1). The probability forecast of the occurrence of M ≥ 5.5 aftershocks, announced on March 21st by the Japan Meteorological Agency (JMA), was less than 10% for the next three-day period, with a much smaller probability for the following few months. In fact, the largest aftershock of M5.8 occurred one month later, which would suggest the failure of the forecast. On the other hand, the statistical result from Japanese data [Ogata, 2001] suggests that the probability of having another large earthquake of a size similar to the mainshock becomes several times greater than such a normal probability if the aftershock activity of the first event shows anomalous decay. Therefore, we need to pay careful attention to anomalous activity in the sequence. For predicting large aftershocks, Matsu'ura [1986] noted the utility of quiescence in aftershock occurrences relative to the Omori-Utsu decay [Omori, 1894; Utsu, 1961]. Ogata [1988, 1992, 2001] studied the quiescence relative to the epidemic-type aftershock sequence (ETAS) model, a natural extension of the Omori-Utsu model that is useful for the detailed description of aftershock sequences as well as of general seismicity; the reader is referred to Ogata [1992, 1999, 2001, 2005a, b, c] and Utsu and Ogata [1997] for the statistical method using the ETAS model, including the significance test of the change of seismicity pattern. Using the model, some retrospective case studies [Ogata et al., 2003; Ogata, 2005a, b] are concerned with the phenomenon that a stress-shadow [e.g., Harris, 1998] inhibits the normal decay of aftershock activity.
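The temporal ETAS model referred to here has the standard conditional intensity λ(t) = μ + Σ_{t_i < t} K exp{α(M_i - m0)} / (t - t_i + c)^p, summing over past events (t_i, M_i). The sketch below evaluates it; the parameter values and the reference magnitude m0 are arbitrary placeholders, not fitted values:

```python
import math

def etas_intensity(t, history, mu, K, alpha, c, p, m0):
    """Conditional intensity of the temporal ETAS model:
    lambda(t) = mu + sum over past events (t_i, M_i) of
                K * exp(alpha * (M_i - m0)) / (t - t_i + c)^p."""
    rate = mu
    for t_i, m_i in history:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

# hypothetical mainshock (M7.0) and one large aftershock (M5.8)
history = [(0.0, 7.0), (0.5, 5.8)]   # (time in days, magnitude)
lam = etas_intensity(1.0, history, mu=0.02, K=0.05, alpha=1.5, c=0.01, p=1.1, m0=4.0)
```

Relative quiescence is declared when the observed occurrence rate falls significantly below λ(t) predicted from the fitted parameters.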

2. The relative quiescence and speculations based on the stress-shadowing
Here I present a monitoring experiment to detect the aftershock anomaly of the March 20th earthquake of M7.0 in the Kyushu District, Japan. At the meeting of the Coordinating Committee for Earthquake Prediction, Japan, held two weeks after the mainshock, I reported significantly lower aftershock activity than that predicted by the ETAS model (relative quiescence) [Ogata, 2005c]. This result was based on the data for the period up until 5th April (vertical solid lines on the right side of Figure 2). Furthermore, in order to predict the likely location of a large aftershock or of another proximate large earthquake, it is assumed that a significant slip may occur within and near the source of the suspected earthquake, due to the acceleration of quasi-static (slow) slips on the fault as the time of rupture of the major asperity approaches. Such a process is indicated, for example, by the analysis of small repeating-earthquake data [Uchida et al., 2004]. Such a scenario for the prediction can be useful for explaining the anomalous features. Thus, we should look carefully at the activity in the stress-shadow [Ogata, 2005a-d], to explain it as transferred from the slip. Nevertheless, given an anomaly of seismicity rate change, the difficulty in fact lies in identifying the slip location and its imminence to a major rupture. These are usually unknown unless other data or constraints, such as geodetic ones, are available. Even if the slip is identified, it may be in an area of habitual slipping or intermittent creeping. Thus, in order to make a probability forecast of a likely large earthquake, we need to construct and examine all possible scenarios based on the available knowledge in seismology, tectonics, etc.
Actually, for probable predictions, Ogata [2005c] explored and reported several scenarios of thinkable slow slips that could have been triggered by the main M7.0 rupture and that, in turn, transferred the stress-shadows covering the majority of the aftershock volume. Such a slip could have occurred within a conjugate fault of the main rupture fault, or within one of several known active faults near the rupture fault. However, it is unlikely to have occurred within the Kego Fault (cf. Figure 1) that runs through the urban area of the city of Fukuoka, since that would result in a stress-increase in the entire aftershock region; namely, no part of the aftershock volume could then be in the stress-shadow. One month after the mainshock, on April 20th, the largest aftershock of M5.8 took place in the south-western extension of the main aftershock volume, but the strike angle of its rupture fault is significantly different from that of the main fault. The hypocenter and depth-against-time plots of the aftershocks indicate the secondary aftershocks following the largest aftershock, which reveal the outline of the ruptured fault of M5.8 (the area with densely populated black circles around 130.3°E). In fact, the above-mentioned report failed to predict this rupture fault, but the scenario itself appears to be still valid. Based on the hypocenter distribution, we may have the following explanation for the observed relative quiescence in the aftershock activity.

Assuming that there is a precursory slip on the strike-slip fault surface (the area covering the seismicity gap between the main aftershock volume and the secondary aftershock volume), Coulomb failure stress-changes are calculated at depths of 5 km and 10 km, where ΔCFS = Δ(shear stress) - μ′ Δ(normal stress), with the apparent friction coefficient μ′ = 0.4 assumed and positive normal stress meaning compression. The ΔCFS values indicate that the stress increased in the shallower eastern part of the aftershock volume, while the aftershock volume was mostly in the stress-shadow at the deeper part, the shadowing being especially strong in the eastern part of the aftershock volume. These features are consistent with the fact that the aftershocks migrated from deep to shallow in the eastern region, whereas no such migration is seen in the western region. Likewise, the same speculated slip should transfer the stress-shadow covering the active off-fault clusters in the depth range 6-8.5 km beneath Hakata Bay, whose activity was triggered by the main rupture with a half day's delay. The shadowing due to the preslip before the largest aftershock then seems to have caused a lowering in the seismicity about two weeks prior to the occurrence of the largest aftershock. Here one may argue that the lowering can be understood as the seismicity rate-change of Dieterich [1994, Figures 2 and 5], which represents a transition from a stable rate increase to a delayed decay off the fault source. However, although the rate decays gradually after mid-May, the rate decrease at the turning point appears too drastic for the characteristic time of the natural transition to the delayed decay. Indeed, the rate jumped down, reducing to 20% of the previous rate, as seen from the slopes of the cumulative function for the period before the largest aftershock.
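The ΔCFS bookkeeping above is simple enough to state in code; this sketch only encodes the sign convention used here (compression positive, μ′ = 0.4), not any elastic dislocation modeling, and the numbers are hypothetical:

```python
def delta_cfs(d_shear, d_normal, mu_apparent=0.4):
    """Coulomb failure stress change on a receiver fault:
    dCFS = d(shear stress) - mu' * d(normal stress),
    with positive normal stress meaning compression.
    Negative dCFS marks a stress-shadow (inhibited activity)."""
    return d_shear - mu_apparent * d_normal

# hypothetical numbers, in bars
loaded = delta_cfs(0.5, 1.0)       # shear gain outweighs added compression
shadow = delta_cfs(-0.02, 0.05)    # small shear drop plus compression: shadow
```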
A similar stress-shadowing scenario also applies well to the relative quiescence in the secondary aftershock sequence following the largest aftershock of M5.8, before the M5.0 event that occurred on 2nd May 2005 [Ogata, 2005c].

3. Concluding Remarks
Since aftershocks are triggered by complex mechanisms in fractal random media, their interactions are too complicated for the stress-changes within and near the field to be calculated. Thus, the ETAS model, based on the empirical laws of aftershocks, has been used as a practical representation of locally triggered (self-exciting) seismicity. The diagnostic analysis based on fitting the ETAS model is then helpful in detecting exogenous stress-changes. Indeed, these changes can be so slight compared to the self-exciting changes that even current state-of-the-art methods and the geodetic records from the GPS network can barely recognize systematic anomalies in the time series of displacement records. The main results in the present manuscript do not agree with the claim that there should be a threshold value of ΔCFS capable of affecting seismicity changes. Although the stress-decrease due to precursory slips can be small, of the order of a few tens of millibars or less, the number of occurred and potential earthquakes of moderate or small sizes in a receiver region can be large enough to make a statistically significant detection of the relative quiescence. The scenarios of the relative quiescence rely on the seismicity rate change based on the rate-and-state friction law [Dieterich, 1994] as the quantitative basis of the triggering [Toda and Stein, 2003; Ogata, 2004]. The reader is referred to Ogata [2005d] for some reasons for the empirical fact that the relative activation is much less sensitive than the relative quiescence. In summary, the present manuscript supports the idea that aftershock activity is sensitive enough to a slight Coulomb stress-decrease, depending on the degree of homogeneity of aftershock fault mechanisms, caused by a small slip elsewhere. Thus it will be useful to monitor aftershock activity carefully using the ETAS model, to detect the relative quiescence, to identify the stress-shadow areas, and eventually to consider the likelihood of preslips in the suspected nearby faults.

References
Dieterich, J., A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99, 2601-2618, 1994.
Geographical Survey Institute of Japan, Crustal movement in the Kyushu District (in Japanese), Report of the Coordinating Committee for Earthquake Prediction, 74, in press, 2005.
Harris, R. A., Introduction to special section: Stress triggers, stress shadows, and implications for seismic hazard, J. Geophys. Res., 103, 24347-24358, 1998.
Matsu'ura, R. S., Precursory quiescence and recovery of aftershock activities before some large aftershocks, Bull. Earthq. Res. Inst., Univ. Tokyo, 61, 1-65, 1986.
Ogata, Y., Statistical models for earthquake occurrences and residual analysis for point processes, J. Amer. Statist. Assoc., 83, 9-27, 1988.
---, Detection of precursory relative quiescence before great earthquakes through a statistical model, J. Geophys. Res., 97, 19845-19871, 1992.
---, Seismicity analyses through point-process modelling: A review, in Seismicity Patterns, Their Statistical Significance and Physical Meaning (M. Wyss, K. Shimazaki and A. Ito, Eds.), Birkhauser Verlag, Basel, Pure and Applied Geophysics, 155, 471-507, 1999.
---, Increased probability of large earthquakes near aftershock regions with relative quiescence, J. Geophys. Res., 106, 8729-8744, 2001.
---, Detection of anomalous seismicity as a stress change sensor, J. Geophys. Res., 110(B5), B05S06, doi:10.1029/2004JB003245, 2005a.
---, Seismicity changes and stress changes in and around northern Japan relating to the 2003 Tokachi earthquake of M8.0, J. Geophys. Res., 110, in press, 2005b.
---, Relative quiescence reported before the occurrence of the largest aftershock (M5.8) in the aftershocks of the 2005 earthquake of M7.0 at western Fukuoka, Kyushu, and possible scenarios of precursory slips considered for the stress-shadow covering the aftershock area (in Japanese), Report of the Coordinating Committee for Earthquake Prediction, 74, in press, 2005c.
---, Seismicity anomaly scenario prior to the major recurrent earthquakes off the east coast of Miyagi Prefecture, northern Japan, submitted to the special issue on Dynamics of Seismicity Patterns and Earthquake Triggering (Guest editors: S. Hainzl, G. Zoller and I. Main), Tectonophysics, 2005d.
Ogata, Y., L. M. Jones, and S. Toda, When and where the aftershock activity was depressed: Contrasting decay patterns of the proximate large earthquakes in southern California, J. Geophys. Res., 108(B6), 2318, doi:10.1029/2002JB002009, 2003.
Reasenberg, P., and L. M. Jones, Earthquake hazard after a mainshock in California, Science, 243, 1173-1176, 1989.
Toda, S., and R. Stein, Toggling of seismicity by the 1997 Kagoshima earthquake couplet: A demonstration of time-dependent stress transfer, J. Geophys. Res., 108(B12), 2567, doi:10.1029/2003JB002527, 2003.
Uchida, N., A. Hasegawa, T. Matsuzawa, and T. Igarashi, Pre- and post-seismic slow slip on the plate boundary off Sanriku, NE Japan, associated with three interplate earthquakes as estimated from small repeating earthquake data, Tectonophysics, 385, 1-15, 2004.
Utsu, T., and Y. Ogata, Statistical Analysis of point processes for Seismicity (SASeis), IASPEI Software Library, 6, 13-94, International Association of Seismology and Physics of the Earth's Interior, 1997.

Toward urgent forecasting of aftershock hazard: Simultaneous estimation of b-value of the Gutenberg-Richter’s law of the magnitude frequency and changing detection rates of aftershocks immediately after the mainshock

Yosihiko Ogata1,2

1The Institute of Statistical Mathematics, Minato-ku, Tokyo 106-8569, Japan 2 Graduate University of Advanced Study, Minato-ku, Tokyo 106-8569, Japan

Corresponding Author: Y. Ogata, [email protected]

Abstract: It is known that the detection rate of aftershocks is extremely low during the period immediately after a large earthquake, due to the contamination of arriving seismic waves. This has made it considerably difficult to estimate the empirical laws of aftershock decay and magnitude frequency immediately after the main shock. This paper presents an estimation method for predicting the underlying occurrence rate of aftershocks of any magnitude range, based on a magnitude-frequency model that combines the Gutenberg-Richter law with a detection-rate function. This procedure enables real-time probability forecasting of aftershocks immediately after the mainshock, when a majority of the large aftershocks are likely to occur.

1. Introduction
Aftershock probability forecasting [Reasenberg and Jones, 1989] was proposed based on the joint intensity rate [Utsu, 1970]

λ(t, M) = m(M) n(t) = 10^{a + b(M0 - M)} / (t + c)^p, (a, b, c, p; constant) (1)

of aftershocks with magnitude M at time t following a mainshock of magnitude M0, where the parameters are independently estimated by the maximum likelihood method [Utsu, 1965; Aki, 1965; Ogata, 1983] for the respective empirical laws of Omori-Utsu and Gutenberg-Richter. Then, the probability of expecting one or more aftershocks is calculated for a given magnitude range and time interval. The public forecast has been implemented by the responsible agencies in California and Japan. However, a considerable difficulty in the above procedure is that, due to the contamination of arriving seismic waves, the detection rate of aftershocks is extremely low during the period immediately after the main shock, especially during the first day, when the forecasting is most critical for the public in the affected area. To avoid this difficulty, the agencies adopt a generic model with a set of standard parameter values for California or Japan for the forecast of a probability during such a period.
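Given equation (1), the expected number of aftershocks with M ≥ Mmin in a time window [t1, t2], and hence the probability of one or more such events under the Poisson assumption, follows by integration over magnitude and time. A hedged sketch (the parameter values below are arbitrary placeholders, not the generic California/Japan values):

```python
import math

def expected_count(a, b, c, p, m0, m_min, t1, t2):
    """Integrate eq. (1) over M >= m_min and t in [t1, t2]:
    N = 10^(a + b*(m0 - m_min)) / (b ln 10) * int_{t1}^{t2} (t + c)^(-p) dt."""
    mag_part = 10.0 ** (a + b * (m0 - m_min)) / (b * math.log(10.0))
    if abs(p - 1.0) < 1e-12:
        time_part = math.log((t2 + c) / (t1 + c))
    else:
        time_part = ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    return mag_part * time_part

def prob_one_or_more(n_expected):
    """Poisson probability of at least one aftershock in the window."""
    return 1.0 - math.exp(-n_expected)

# hypothetical case: M7.0 mainshock, M >= 5.5 aftershocks, days 1 to 4
n = expected_count(a=-1.67, b=0.9, c=0.05, p=1.1, m0=7.0, m_min=5.5, t1=1.0, t2=4.0)
forecast_prob = prob_one_or_more(n)
```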

In fact, the parameters a, b, c and p are estimated by using completely detected aftershocks above a threshold magnitude, such that the aftershocks are almost all detected during the estimation period. However, the detected earthquakes below this threshold magnitude are not a minor portion of all detected earthquakes. In particular, the threshold magnitude for complete detection is very high during the early period after the mainshock. From the viewpoint of effective use of data, it is quite wasteful for the statistical analysis of seismic activity not to utilize all detected aftershocks. This is particularly the case where we have to use a data set which is inhomogeneous, in the sense that the detection rate at each magnitude varies depending on time or location. In particular, the a-value in equation (1), or equivalently 10^a, called the aftershock productivity, takes values in a considerably wide range depending on the aftershock sequence [Reasenberg and Jones, 1989], and therefore it is crucial to estimate the a-value suitably in the early stage immediately after the mainshock. For an effective and practical estimation, Ogata and Katsura [1993] proposed utilizing a statistical model formulated by taking the detection rate of earthquakes into account, for the simultaneous estimation of the b-value in equation (1) together with the detection rate (probability) of earthquakes of each magnitude band, from the data of all detected events. The detection rate of earthquakes clearly depends on their magnitudes, in such a way that large shocks are almost all detected but smaller ones are detected at lower rates. Some authors [see Ogata and Katsura, 1993, for the references] studied the detection capability of a seismic station or network for earthquake signal amplitudes observed above the seismic noise level. Consider, as a possible detection-rate function, the cumulative function of the Normal distribution

x−μ )( 2 M − q(M | μ, σ) = 1 2σ 2 dxe (μ, σ; constant). (2) ∫ ∞− 2 σπ The parameter μ represents the magnitude where earthquakes are detected in the rate of 50%, and σ represents a range of magnitudes where earthquakes are more or less partially detected. The shape of the product function m(M)q(M | μ, σ) agrees very well with histograms of magnitude frequency for homogeneously detected earthquakes in a seismic area and a period from the Hypocenter Catalog of Japan Meteorological Agency as well as Harvard catalog of

global seismicity. Figure 1 illustrates schematic diagrams of the present model. Therefore, if we fit the magnitude data of all detected earthquakes to the normalized version of m(M) q(M | μ, σ), taken as a probability density, by the maximum likelihood procedure, we can simultaneously estimate the parameters b, μ and σ. This function of magnitude, with its coefficients set to the maximum likelihood estimates (b̂, μ̂, σ̂) shown in Figure 1, fits very well the empirical frequency of all detected aftershocks of the 2003 Miyagi-Ken-Oki earthquake of M7.0, northern Japan, during the period of 350 days starting from the 20th day after the mainshock, when homogeneous detection of events in every magnitude band is expected. Moreover, these parameters can be extended to functions of time, b(t), μ(t) and σ(t), estimated under a prior distribution constraining them to change smoothly. Given the estimates of the changing parameters, b̂(t), μ̂(t) and σ̂(t), we have the following intensity rate function

λ̂(t, M) = 10^{a + b̂(t)(M₀ − M)} (t + c)^{−p} q(M | μ̂(t), σ̂(t))    (a, c, p: constant).    (3)

Here, as mentioned above, estimation of the productivity coefficient a is crucial in the early stage of the aftershock activity. This intensity rate describes the occurrence rate of all detected aftershocks. For real-time probability forecasting of a future period, we may well assume that b̂(t) and σ̂(t) are constant during the estimation period from the mainshock, so as to extrapolate them for the forecast. As regards μ(t), we consider using a certain parametric form. Equation (3) is then used to obtain the maximum likelihood estimates of the parameters c, p and K̂ = 10^{a + b̂M₀} / (b̂ ln 10), where 'ln' denotes the natural logarithm, of the intensity in (1) for the underlying aftershock activity, including undetected aftershocks, by fitting to all detected aftershocks from the beginning of the period.
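As a concrete illustration of this simultaneous estimation step, the following sketch (ours, not the authors' implementation) maximizes the likelihood of the density f(M) ∝ 10^(−bM) q(M | μ, σ) over a synthetic catalog of detected magnitudes; the catalog, starting values and grid limits are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, mags, grid):
    """Negative log-likelihood under f(M) proportional to 10^(-b M) * q(M | mu, sigma)."""
    b, mu, sigma = params
    if b <= 0 or sigma <= 0:
        return np.inf
    def unnorm(m):
        return 10.0 ** (-b * m) * norm.cdf((m - mu) / sigma)
    y = unnorm(grid)
    # trapezoidal normalization constant over the magnitude grid
    Z = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(grid))
    return -(np.sum(np.log(np.maximum(unnorm(mags), 1e-300))) - mags.size * np.log(Z))

def fit_detection_model(mags, m_lo=-2.0, m_hi=9.0):
    """Simultaneous MLE of (b, mu, sigma) from all detected magnitudes."""
    grid = np.linspace(m_lo, m_hi, 2000)
    start = np.array([1.0, np.median(mags), 0.3])  # rough starting values
    res = minimize(neg_log_likelihood, start, args=(mags, grid), method="Nelder-Mead")
    return res.x  # (b_hat, mu_hat, sigma_hat)

# synthetic "detected" catalog: Gutenberg-Richter magnitudes (b = 1) thinned by
# a normal-CDF detection rate with mu = 1.5, sigma = 0.3
rng = np.random.default_rng(0)
m_all = rng.exponential(scale=1.0 / np.log(10), size=200_000)
detected = m_all[rng.random(m_all.size) < norm.cdf((m_all - 1.5) / 0.3)]
b_hat, mu_hat, sigma_hat = fit_detection_model(detected)
```

Because the detection-rate factor is estimated jointly with the b-value, the fit can use all detected events rather than only those above a completeness threshold, which is the point of the approach described above.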

Eventually, we obtain the estimates â, b̂, ĉ and p̂ needed to evaluate the joint intensity (1) of the underlying aftershocks of any magnitude. Table 1 and Figure 2 show each set of updated estimates, using the data of detected aftershocks over the whole observed magnitude range, for the early periods of the aftershock activity of the 2005 Miyagi-Ken-Oki earthquake. Figure 3 shows the estimated decay curve of underlying aftershocks of M > 0, obtained from (1) with the coefficients K̂ = 10^{â + b̂M₀} / (b̂ ln 10), ĉ and p̂ estimated by applying (3) to the events in the respective periods from the mainshock up until the updating times. From the figure we see that the rate of the underlying aftershocks is predicted well by the observations of detected aftershocks up until only 1/4 day (6 hours) from the mainshock.

On the other hand, the expected magnitude frequency of the detected aftershocks in a time interval [S, T] is given from (3) by

Λ_{[S,T]}(M) = ∫_S^T λ̂(t, M) dt ≈ [ ∫_S^T K̂ / (s + ĉ)^p̂ ds ] · f(M | b̂, μ̂, σ̂),    S ≤ t ≤ T,    (4)

where f(M | b, μ, σ) is the probability density normalized from m(M) q(M | μ, σ). From this we can predict the magnitude frequency of aftershocks during some period ahead of the updating time of the observation. Figure 4 shows such Λ(M) curves predicted at t = S and t = T for each panel of the figure, together with the actual frequency in the considered period [S, T]. This figure demonstrates the feasibility and practical utility of such forecasts in the early stage, for the case of the aftershock activity of the 2005 Miyagi-Ken-Oki earthquake.

References

Aki, K. (1965), Maximum likelihood estimate of b in the formula log N = a − bM and its confidence limits, Bull. Earthq. Res. Inst., Univ. Tokyo, 43, 237-239.
Ogata, Y. (1983), Estimation of the parameters in the modified Omori formula for aftershock frequencies by the maximum likelihood procedure, J. Phys. Earth, 31, 115-124.
Ogata, Y. (2001), Increased probability of large earthquakes near aftershock regions with relative quiescence, J. Geophys. Res., 106, 8729-8744.
Ogata, Y. and Katsura, K. (1993), Analysis of temporal and spatial heterogeneity of magnitude frequency distribution inferred from earthquake catalogs, Geophys. J. Int., 113, 727-738.
Reasenberg, P.A. and Jones, L.M. (1989), Earthquake hazard after a mainshock in California, Science, 243, 1173-1176.
Utsu, T. (1965), A method for determining the value of b in a formula log n = a − bM showing the magnitude-frequency relations for earthquakes, Geophys. Bull. Hokkaido Univ., 13, 99-103.
Utsu, T. (1970), Aftershocks and earthquake statistics (II): Further investigation of aftershocks and other earthquake sequences based on a new classification of earthquake sequences, J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 197-266.

Seismogenesis, scaling and the EEPAS model

D. A. Rhoades

Institute of Geological & Nuclear Sciences, P. O. Box 30-368, Lower Hutt, New Zealand

Corresponding Author: D. A. Rhoades, [email protected]

Abstract: Predictive scaling relations derived from many examples of the precursory scale increase phenomenon are the basis for the EEPAS forecasting model, in which every earthquake is regarded as a precursor of larger earthquakes to follow it in the long term. This model has been applied, with relatively high magnitude thresholds, to the catalogues of New Zealand, all of California, all of Japan, and Greece, and with relatively low thresholds to Kanto and S. California. The information value of the model does not vary appreciably with magnitude threshold. However, differences in the fitted parameters suggest that, at low magnitude thresholds, the values are affected by the need to accommodate aftershocks in the targeted magnitude range. A modification is proposed to the magnitude distribution in the EEPAS model in order to accommodate such aftershocks. The modification should lead to more consistent parameter values and improved performance of the model.

1. Introduction Major shallow earthquakes in well-catalogued regions typically have a long-term preparation process, signaled by an increase in the magnitude and rate of occurrence of minor earthquakes within an envelope of time and space which scales with the magnitude of the major earthquake. This is the precursory scale increase (Ψ) phenomenon (Evison and Rhoades, 2002, 2004). It has been explained in terms of a three-stage faulting model (Evison and Rhoades, 2001), in which the formation of a major crack induces the formation of minor cracks around it. The fracture of the minor cracks prior to that of the major cracks produces the precursory increase in seismicity. A typical example of the Ψ-phenomenon is shown in Figure 1. From many such examples, precursory scaling relations were derived, in the form of linear regressions of mainshock magnitude Mm, logarithm of precursor time logTP, and logarithm of precursory area logAP on precursor magnitude MP. These relations form the basis of the EEPAS model, in which every earthquake is regarded as a precursor, according to scale, of larger earthquakes to follow it in the long term.

2. EEPAS model

In the EEPAS model, each earthquake (ti,mi,xi,yi) contributes a transient increment λi(t,m,x,y) to the future earthquake occurrence rate density in its vicinity, given by

λi(t, m, x, y) = wi fi(t) gi(m) hi(x, y),    (1)

where wi is a weighting factor, and fi, gi and hi are densities of the probability distributions for time, magnitude and location, which take the form

fi(t) = H(t − ti) · [1 / ((t − ti) σT √(2π))] exp{ −½ [(log(t − ti) − aT − bT mi) / σT]² },    (2)

gi(m) = [1 / (σM √(2π))] exp{ −½ [(m − aM − bM mi) / σM]² },    (3)

where H denotes the Heaviside step function, and

hi(x, y) = [1 / (2π σA² 10^{bA mi})] exp{ −[(x − xi)² + (y − yi)²] / (2 σA² 10^{bA mi}) }.    (4)

The total rate density is

λ(t, m, x, y) = μ λ0(t, m, x, y) + Σ_{ti ≥ t0; mi ≥ m0} η(mi) λi(t, m, x, y),    (5)

where λ0 is a baseline rate density, and η is a normalising function. Two different weighting strategies have been considered: equal weighting, and down-weighting of aftershocks.
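The EEPAS rate density of equations (1)-(5) is straightforward to evaluate numerically. The sketch below is our own illustration (not the author's code); all parameter values are placeholders, and `total_rate`, `mu`, `lam0` and `eta` are simplified stand-ins for the fitted baseline density and normalising function.

```python
import math

def eepas_increment(t, m, x, y, quake, params, w=1.0):
    """lambda_i(t,m,x,y) = w_i * f_i(t) * g_i(m) * h_i(x,y), cf. equations (1)-(4)."""
    ti, mi, xi, yi = quake
    if t <= ti:
        return 0.0  # Heaviside factor H(t - t_i): no contribution before the event
    aT, bT, sT = params["aT"], params["bT"], params["sigmaT"]
    aM, bM, sM = params["aM"], params["bM"], params["sigmaM"]
    bA, sA = params["bA"], params["sigmaA"]
    # f_i(t): lognormal time kernel centred on a_T + b_T * m_i in log10(t - t_i)
    zt = (math.log10(t - ti) - aT - bT * mi) / sT
    f = math.exp(-0.5 * zt * zt) / ((t - ti) * sT * math.sqrt(2.0 * math.pi))
    # g_i(m): normal magnitude kernel centred on a_M + b_M * m_i
    zm = (m - aM - bM * mi) / sM
    g = math.exp(-0.5 * zm * zm) / (sM * math.sqrt(2.0 * math.pi))
    # h_i(x,y): bivariate normal location kernel, variance sigma_A^2 * 10^(b_A m_i)
    var = sA * sA * 10.0 ** (bA * mi)
    r2 = (x - xi) ** 2 + (y - yi) ** 2
    h = math.exp(-r2 / (2.0 * var)) / (2.0 * math.pi * var)
    return w * f * g * h

def total_rate(t, m, x, y, catalog, params, mu=0.1, lam0=1e-6, eta=lambda mi: 1.0):
    """Total rate density of equation (5); mu, lam0 and eta are placeholder stand-ins."""
    return mu * lam0 + sum(eta(q[1]) * eepas_increment(t, m, x, y, q, params)
                           for q in catalog)

# example: contribution of a hypothetical m = 5 event at (10, 20), evaluated 50 days later
params = dict(aT=1.0, bT=0.4, sigmaT=0.3, aM=1.0, bM=1.0, sigmaM=0.3, bA=0.35, sigmaA=5.0)
quake = (0.0, 5.0, 10.0, 20.0)
rate = total_rate(50.0, 6.0, 10.0, 20.0, [quake], params)
```

Note how the time and location scales grow with the magnitude mi of the contributing earthquake, which is exactly how the predictive scaling relations enter the model.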

Figure 1 Example of the Ψ phenomenon. (a) Epicentres of precursory earthquakes, mainshock and aftershocks. Dashed lines enclose the precursory area AP. (b) Magnitudes versus time of prior and precursory earthquakes, also mainshock and aftershocks. Dashed lines show precursory increase in magnitude level. Mm is mainshock magnitude; MP is precursor magnitude. (c) Cumulative magnitude anomaly (Cumag) versus time. Dashed lines show precursory increase in seismicity rate. Protractor translates cumag slope into seismicity rate in magnitude units per year (M.U./yr), for times before the mainshock. Cumag values at the right hand ordinate refer to times beginning with the mainshock.

3. Scale-invariance of performance and universality of parameters Two questions of interest are: (1) Does the EEPAS model work equally well at all magnitude scales? (2) Are the parameter values universal across different regions and magnitude thresholds? The model has been fitted to, or tested on, the catalogues of New Zealand (m0=3.95, mc=5.75), California (m0=3.95, mc=5.75), Japan (m0=4.95, mc=6.75), Greece (m0=3.95, mc=5.95), Kanto (m0=2.45, mc=4.75) and Southern California (m0=2.45, mc=4.95). In all cases it has been found to be more informative than a spatially varying Poisson model (PPE) similar to that of Jackson and Kagan (1999), and much more informative than a stationary and spatially uniform Poisson model (SUP). The information rate per earthquake relative to PPE and SUP in each region (Rhoades and Evison, 2004, 2005, in press; Rhoades, submitted; Console et al., submitted) is shown in Fig. 2. This gives no indication that the advantage of EEPAS over PPE depends on magnitude, but does suggest that the high-quality catalogues of the southern California and Kanto regions give better information on where earthquakes occur.

In all cases, the “failure rate” parameter μ is either 0 or close to it, suggesting that nearly all earthquakes with M > mc display the Ψ phenomenon.

Figure 2 Information rate per earthquake ΔlnL/N of the EEPAS model relative to SUP (a stationary and spatially uniform Poisson model), and to PPE (a spatially varying Poisson model similar to that of Jackson and Kagan (1999)).

The distributions fi(t), gi(m), and hi(x,y) fitted in applications to different catalogues are illustrated in Fig. 3. Note that only in S. California were all parameters optimized; in the other cases the values of the slope parameters bM, bT and bA were fixed in advance. The S. California fit appears to be unusual. The data set in this case included mainshocks ranging widely in magnitude from 5.0 to 7.3, and some of the larger mainshocks had numerous aftershocks with M > 4.95. The optimal EEPAS parameters evidently compromised between forecasting the mainshocks and the aftershocks. Also, the two applications with relatively low magnitude threshold, Kanto and S. California, were the only ones where the equal weighting strategy was superior to that of down-weighting aftershocks. This suggests that in these cases the aftershocks of precursory earthquakes were needed to improve forecasting of aftershocks above the threshold mc. The same would be expected to occur occasionally with higher magnitude thresholds, although it did not occur much in the short period over which the model was fitted in New Zealand, Japan and Greece.

In light of the above, a modified magnitude distribution gi′(m), which explicitly allows for aftershocks of forecasted events, is proposed:

gi′(m) ∝ gi(m) + Gi(m) β exp[−β(m − aM − bM mi − δ)],    (6)

where Gi is the survivor function of gi. In view of Båth's law, an optimal value of δ close to 1 is expected. The use of this revised distribution is expected to reduce the dependence of parameter estimates on the magnitude threshold, and perhaps also to improve the performance of the model.
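Numerically, the effect of the proposed modification (6) can be previewed as follows; this is our own illustration, with invented parameter values (aM, bM, σM, β, δ) rather than fitted ones, and a numerical renormalization over a magnitude grid.

```python
import numpy as np
from scipy.stats import norm

def modified_g(mgrid, mi, aM=1.0, bM=1.0, sigmaM=0.35, beta=2.3, delta=1.0):
    """g'_i(m) of equation (6), renormalized numerically on mgrid."""
    mean = aM + bM * mi                           # central forecast magnitude a_M + b_M m_i
    g = norm.pdf(mgrid, loc=mean, scale=sigmaM)   # g_i(m)
    G = norm.sf(mgrid, loc=mean, scale=sigmaM)    # survivor function G_i(m)
    gp = g + G * beta * np.exp(-beta * (mgrid - mean - delta))
    # trapezoid-rule normalization so the modified density integrates to 1
    Z = np.sum(0.5 * (gp[1:] + gp[:-1]) * np.diff(mgrid))
    return gp / Z

# density over target magnitudes above a low threshold, for a precursor of m_i = 6
mgrid = np.linspace(4.95, 9.0, 2000)
dens = modified_g(mgrid, mi=6.0)
```

The aftershock term adds an exponentially decaying (Gutenberg-Richter-like) tail below the forecast magnitude, so the modified density assigns appreciable mass to magnitudes where the plain normal kernel gi would assign essentially none.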

4. Acknowledgement This work was supported by the Foundation for Research, Science and Technology under Contract No. CO5X0402.

Figure 3 Fitted distributions for time, magnitude, and location in applications of the EEPAS model to various regions. (a) Median of gi(m) versus mi. (b) Median of fi(t) versus mi. Error bars are +1 standard deviation. (c) Standard deviation of hi(x,y) versus mi.

5. References

Console, R., Rhoades, D.A., Murru, M., Evison, F.F., Papadimitriou, E.E., and Karakostas, V.G. (submitted), Comparative performance of time-invariant, long-term and short-term forecasting models on the earthquake catalogue of Greece, J. Geophys. Res.
Evison, F.F. and Rhoades, D.A. (2001), Model of long-term seismogenesis, Ann. Geofis., 44, 81-93.
Evison, F.F. and Rhoades, D.A. (2002), Precursory scale increase and long-term seismogenesis in California and northern Mexico, Ann. Geophys., 45, 479-495.
Evison, F.F. and Rhoades, D.A. (2004), Demarcation and scaling of long-term seismogenesis, Pure Appl. Geophys., 161, 21-45.
Jackson, D.D. and Kagan, Y.Y. (1999), Testable earthquake forecasts for 1999, Seismol. Res. Lett., 70, 393-403.
Rhoades, D.A. (submitted), Application of the EEPAS model to forecasting earthquakes of moderate magnitude in southern California, Seismol. Res. Lett.
Rhoades, D.A. and Evison, F.F. (2004), Long-range earthquake forecasting with every earthquake a precursor according to scale, Pure Appl. Geophys., 161, 47-71.
Rhoades, D.A. and Evison, F.F. (2005), Test of the EEPAS forecasting model on the Japan earthquake catalogue, Pure Appl. Geophys., 162, 1271-1290.
Rhoades, D.A. and Evison, F.F. (in press), The EEPAS forecasting model and the probability of moderate-to-large earthquakes in central Japan, Tectonophysics.

Bayesian analysis of the local intensity attenuation

R. Rotondi1 and G. Zonno2

1 C.N.R. - Istituto di Matematica Applicata e Tecnologie Informatiche, Via Bassini 15, 20133 Milano, Italy 2 I.N.G.V. - Milano Department “Applied Seismology”, Via Bassini 15, 20133 Milano, Italy

Corresponding Author: R. Rotondi, [email protected]

Abstract: We present a method that allows us to incorporate additional information from the historical earthquake felt reports in the probability estimation of local intensity attenuation. The approach is based on two ideas: a) standard intensity versus epicentral distance relationships constitute an unnecessary filter between observations and estimates; and b) the intensity decay process is affected by many, scarcely known elements; hence intensity decay should be treated as a random variable, as is the macroseismic intensity. The observations related to earthquakes with their epicenter outside the area concerned, but belonging to homogeneous zones, are used as prior knowledge of the phenomenon, while the data points of events inside the area are used to update the estimates through the posterior means of the quantities involved.

1 Introduction Where long historical catalogues are available, as in Italy, it is quite natural to take the macroseismic intensity as a measure of the size of earthquakes. This ordinal quantity, often measured on the 12 degrees of the MCS scale, is in practice treated, as best one can, as an integer variable on {1, . . . , 12}. In order to assess the seismic hazard in terms of intensity we have to address the problem of its attenuation. Many studies on this topic have appeared in the literature; in the large majority of these the key role is played by a deterministic function expressing the link between the intensity decay ∆I and factors such as epicentral intensity, site-epicenter distance, depth, site type, and style of faulting. In some cases a normally distributed random error is added to take into account the scatter of the observations around the site intensity value Is predicted through the attenuation relationship. More emphasis is given to the uncertainty when the decay is considered an aleatory variable: for the intensity decay normalized on I0, a Beta distribution with mean proportional to an attenuation law and varying deviation was first proposed by Zonno et al. (1995), while a logistic model was used by Magri et al. (Pageoph., 43, 1994) to estimate the probability that the attenuation exceeds a threshold value. We present a complete probabilistic analysis of the attenuation issue, avoiding the use of any deterministic attenuation relationship (Rotondi and Zonno, 2004). Rather, the emphasis here is on exploiting information from seismogenically homogeneous zones, that is, on assigning and updating prior parameters in the Bayesian framework. We give the predictive distribution of the intensity at site, conditioned on I0 and on the distance d from the epicenter, with both discrete and continuous d.
We present some validation criteria for these two classes of distributions, and then apply these criteria first to an historical event, the Camerino 1799 earthquake, to explore the goodness of fit of the model, and then to a recent event, the Colfiorito 1997 earthquake, to validate the procedure. We compare our approach with that of Magri et al. (1994), estimating the decay through the mode of the respective probability distributions.

2 Description of the model

Let us assume that the variable ∆I is discrete and belongs to the domain {0, . . . , I0}; it is reasonable to choose for Is = I0 − ∆I, at a fixed distance, the binomial distribution Bin(I0, p) conditioned on I0 and p:

Pr{Is = i | I0 = i0, p} = Pr{∆I = i0 − i | I0 = i0, p} = C(i0, i) p^i (1 − p)^{i0 − i},

where C(i0, i) is the binomial coefficient and p ∈ [0, 1]. To account for the variability of the ground shaking even among sites located at the same distance, the parameter p has been considered a Beta distributed random variable in the Bayesian paradigm

Be(p; α, β) = [Γ(α + β) / (Γ(α)Γ(β))] ∫₀^p x^{α−1} (1 − x)^{β−1} dx.

2.1 Assigning prior parameters and their updating

The procedure described here draws initial knowledge of the phenomenon from the intensity data points of earthquakes occurring in zones that are homogeneous with the area under study, from the viewpoint of seismotectonics and seismic-wave propagation.

Let us draw L distance bins {R1, R2, . . . , RL} of width ∆r around the epicenter of any earthquake of intensity I0 in those zones. Taking a sufficiently small step, we may assume that the decay process behaves in the same way within each band Rj; moreover, we denote by D_{j,0} = {i_s^{(n)}}_{n=1}^{N_{j,0}} the set of the N_{j,0} felt intensities in the j-th band. As the probability of null decay (Is = I0) is p_j^{i0}, we assign the initial mean value of pj simply from the frequency of null decay, N_{j,0}(∆I = 0)/N_{j,0}, and deduce from this value the hyperparameters α_{j,0} and β_{j,0}. Where there is no report of null decay, we have smoothed the available values of pj through the function f(d) = (c1/d)^{c2} and estimated the coefficients c1, c2 by the method of least squares.

Now let us consider all the earthquakes of intensity I0 with epicenter within the area under study; D = ∪_{j=1}^{L} Dj = ∪_{j=1}^{L} {i_s^{(n)}}_{n=1}^{Nj} denotes the set of their intensity data points, subdivided into L subsets. On the basis of this new information we update our knowledge of the attenuation process. We use Bayes' theorem to compute the posterior distribution Be(pj | Dj), and estimate pj through its posterior mean p̂j = αj / (αj + βj), where

αj = α_{j,0} + Σ_{n=1}^{Nj} i_s^{(n)},    βj = β_{j,0} + Σ_{n=1}^{Nj} (i0 − i_s^{(n)}).
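The bin-wise conjugate updating above can be sketched in a few lines; the following is our illustration, with invented prior values and data.

```python
def update_bin(alpha0, beta0, i0, site_intensities):
    """Posterior hyperparameters and posterior mean of p_j for one distance bin."""
    alpha = alpha0 + sum(site_intensities)                # alpha_j0 + sum_n i_s^(n)
    beta = beta0 + sum(i0 - i for i in site_intensities)  # beta_j0 + sum_n (i0 - i_s^(n))
    return alpha, beta, alpha / (alpha + beta)

# hypothetical bin: epicentral intensity I0 = 9 (IX), a prior mean of 0.8 encoded
# as (alpha_j0, beta_j0) = (8, 2), and three observed felt intensities
alpha_j, beta_j, p_hat = update_bin(8.0, 2.0, 9, [8, 7, 9])
```

Each felt intensity acts as i0 Bernoulli "successes and failures", so bins with many observations quickly dominate the prior, while sparsely observed bins stay close to the smoothed prior mean.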

In order to let the parameter p of the binomial distribution for the intensity Is at site vary with continuity, we smooth the estimates p̂j, j = 1, . . . , L by the method of least squares, again using an inverse power function g(d) = (γ1/d)^{γ2}. In this way it is possible to assign the probability of the intensity decay Pr{∆I | I0, g(d)} at any distance from the epicenter.

3 Validation

To predict the intensity points an earthquake will generate, on the basis of the knowledge accumulated before its occurrence, we can apply either a predictive probability function for all the points within every band Rj,

Pr{Is = i | I0 = i0, Dj} = C(i0, i) · [Γ(αj + βj) / (Γ(αj) Γ(βj))] · [Γ(αj + i) Γ(βj + i0 − i) / Γ(αj + βj + i0)],

or use a different binomial Bin(I0, g(d)) probability function for the points at distance d from the epicenter, where g(·) denotes the smoothing inverse power function. We propose here some measures of the degree to which a model predicts the data; for lack of space we refer just to the predictive distribution.

• Logarithmic scoring rule, based on the logarithm of a posterior probability (Lindley (Stat. Science, 2, 1987); Winkler (Test, 5, 1996)). We use the marginal likelihood to evaluate this measure, obtaining the following expression:

−(1/N_s⁰) log ∏_{j=1}^{L} ∏_{n0: i_s^{(n0)} ∈ Dj} C(i0, i_s^{(n0)}) · [Γ(αj + βj) / (Γ(αj) Γ(βj))] · [Γ(αj + i_s^{(n0)}) Γ(βj + i0 − i_s^{(n0)}) / Γ(αj + βj + i0)],    n0 = 1, . . . , N_s⁰,

where N_s⁰ is the number of sites, within the L bands Rj, at which the future event will be felt.

• Ratio p(A)/p(B) between the probability that the fitted model assigns to the realization A and the probability of the predicted value B. The idea behind this measure is borrowed from the concept of deviance (Read and Cressie, 1988) and is based on a consideration of how much is gained from having predicted B when A occurs. If we indicate the mode of the posterior probability as the predicted value,

i_pred = arg max_{i_s = 0, . . . , i0} Pr_pred(i_s | ·),

the error, expressed in probabilistic terms, is given by the geometric mean of the corresponding odds on a logarithmic scale:

odds_pred = −(1/N_s⁰) log ∏_{n0=1}^{N_s⁰} [ Pr_pred(i_s^{(n0)} | Dj) / Pr_pred(i_pred | Dj) ].

• Deterministic absolute discrepancy between observed and estimated intensities at site

diff_pred = (1/N_s⁰) Σ_{n0=1}^{N_s⁰} | i_s^{(n0)} − i_pred |.
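The three measures can be computed directly from the beta-binomial predictive distribution; the sketch below is our own illustration with invented hyperparameters and observations, not the authors' code.

```python
import math

def predictive_pmf(i, i0, alpha, beta):
    """Pr{Is = i | I0 = i0, Dj}: beta-binomial probability, via log-gamma for stability."""
    lg = math.lgamma
    return math.exp(lg(i0 + 1) - lg(i + 1) - lg(i0 - i + 1)      # binomial coefficient C(i0, i)
                    + lg(alpha + beta) - lg(alpha) - lg(beta)
                    + lg(alpha + i) + lg(beta + i0 - i) - lg(alpha + beta + i0))

def validation_scores(observed, i0, alpha, beta):
    """Logarithmic score, odds_pred and absolute discrepancy for one distance bin."""
    n = len(observed)
    i_pred = max(range(i0 + 1), key=lambda i: predictive_pmf(i, i0, alpha, beta))
    scoring = -sum(math.log(predictive_pmf(i, i0, alpha, beta)) for i in observed) / n
    odds = -sum(math.log(predictive_pmf(i, i0, alpha, beta)
                         / predictive_pmf(i_pred, i0, alpha, beta)) for i in observed) / n
    discrepancy = sum(abs(i - i_pred) for i in observed) / n
    return scoring, odds, discrepancy

# hypothetical bin: I0 = 9 with posterior hyperparameters (32, 5) and three felt sites
alpha_j, beta_j, i0 = 32.0, 5.0, 9
scoring, odds, discrepancy = validation_scores([8, 7, 9], i0, alpha_j, beta_j)
```

By construction odds_pred is non-negative (the mode is the most probable value) and never exceeds the logarithmic score, which also penalizes how peaked the predictive distribution is.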

4 Some results

Let us consider the ZS4 zonation of central Italy represented in Figure 1. We have studied two earthquakes of intensity IX that occurred in zone 47: the Camerino earthquake of 28 July 1799 and the Colfiorito earthquake of 26 September 1997.

  year        criterion      pred.     binom.    logist.
  1799 (B)    scoring       *1.205     1.405     1.395
              odds          *0.218     0.648     0.541
              discrepancy   *0.543     0.696     0.696
  1997 (F)    scoring        1.575     1.789    *1.337
              odds          *0.301     0.650     0.308
              discrepancy    0.941    *0.838     0.847
  1997 (B)    scoring        1.402     1.610    *1.171
              odds          *0.118     0.432     0.259
              discrepancy   *0.441     0.670     0.667

Table: validation criteria for Pr{Is | ·} under the predictive, binomial and logistic models (* marks the best, i.e. lowest, value in each row; B = backward, F = forward validation).

Figure 1: The ZS4 zonation of central Italy (courtesy of the Italian Group for Defence against Earthquakes).

We have assumed that the shaded

zones, homogeneous from the viewpoint of kinematic context and expected rupture mechanism according to the zonation, are also homogeneous from the seismic-wave propagation point of view. Figure 2 shows the estimated predictive probability function for Is, while Figure 3 depicts the probability function produced by the logistic model. The estimation does not include the 1997 earthquake. The Table summarizes the results of the validation performed backward for the 1799 earthquake, and first forward and then backward for the Colfiorito earthquake, respectively. The predictive distribution can be indicated as the all-around best of the probabilistic models examined.

Figure 2: Estimate of the predictive probability function for the intensity Is (I0 = IX).

Figure 3: Estimate of the probability function for the intensity Is drawn from the logistic model (I0 = IX).

References

Read, T.R.C. and Cressie, N.A.C. (1988), Goodness-of-fit statistics for discrete multivariate data, Springer-Verlag, New York.
Rotondi, R. and Zonno, G. (2004), Bayesian analysis of a probability distribution for local intensity attenuation, Annals of Geophysics, 47, 5, 1521-1540.
Zonno, G., Meroni, F., Rotondi, R. and Petrini, V. (1995), Bayesian estimation of the local intensity probability for seismic hazard assessment, Proceedings of the Fifth International Conference on Seismic Zonation, Nice, October 17-19, 1995, 1723-1729.

From the Testing Center of Regional Earthquake Likelihood Models to the Collaboratory for the Study of Earthquake Predictability

D. Schorlemmer1, M. Gerstenberger2,3, T. Jordan4, E. Field2, S. Wiemer1, L. Jones2, D. D. Jackson5

1Swiss Seismological Service, ETH Zurich, Schafmattstr. 30, 8093 Zurich, Switzerland 2United States Geological Survey, 525 S. Wilson Ave., Pasadena, CA 91106, USA 3Institute of Geological and Nuclear Sciences, 69 Gracefield Road, Lower Hutt, New Zealand 4Department of Earth Sciences, University of Southern California, Los Angeles, California 90089-7042, USA 5Department of Earth and Space Sciences, University of California Los Angeles, 595 Young Drive East, Los Angeles, CA 90095-1567, USA

Corresponding Author: D. Schorlemmer, [email protected]

Abstract: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternative models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated into seismic hazard and risk evaluation. During the years of the RELM effort, multiple earthquake forecast models have been developed and prepared to comply with the rules of the T-RELM Testing Center. Although this development can be considered a successful project, several drawbacks of the testing setup have been revealed. With the new CSEP Testing Center, we seek to fill these gaps, namely by extending the testing methods to alarm-based and fault-based models and by expanding the model space.

1. Introduction

Earthquake forecasting is a societal demand on Earth science. A first step in satisfying this demand is quasi-stationary earthquake hazard assessment, e.g., Frankel et al. (1996). During the last ten years, the scientific community has been moving from quasi-stationary models to fully time-dependent models, encompassing both probabilistic and alarm-based models. Still, the challenge remains that all available models need to be tested comparatively in a rigorous and transparent way. Furthermore, forecasts made publicly available, sometimes with optimistic claims of success, have increased the need for authoritative evaluation of earthquake forecasts. This second step in satisfying the public demand has already started with the RELM and Testing RELM (T-RELM) initiative: within T-RELM, all submitted RELM models will be tested in classes against each other and for consistency with observed data (Schorlemmer et al., submitted). The testing procedures developed within T-RELM are designed for probabilistic grid-based models for California.
Although all submitted RELM models will be tested during the next 5 years, many existing models are neither probabilistic nor grid-based (e.g., alarm-based or fault-based models) and are thus not part of T-RELM. An expansion of the possibilities of T-RELM into the CSEP Testing Center is therefore strongly needed. The design of T-RELM has set a new standard in transparency, rigor, and community acceptance (Schorlemmer and Gerstenberger, submitted). Not only have the statistical rules been developed, but all constraints have also been defined to make every test fully reproducible and unbiased. It will be the first time that a large number of modelers adopt a common testing framework, a significant milestone for advancing forecasting-related research as a whole. However, several testing procedures, as well as an expansion to other areas, are still missing in order to allow testing of all available models. With these new testing procedures, the CSEP Testing Center will provide all the necessary tools for scientists, authorities, and the public to deal with public forecasts or official forecast evaluation.

2. T-RELM

The testing suite used in T-RELM consists of three different tests: the L-Test, N-Test, and R-Test (Schorlemmer et al., submitted). These tests are defined similarly to those of Kagan and Jackson (1995). The first two tests examine the consistency of the models with the observations, while the last test compares the spatial performances of the models. None of the observed earthquake parameters (location, focal time, etc.) can be estimated without uncertainties; therefore, each parameter uncertainty is included in the testing. Additionally, for testing stationary models, every observed event is assigned an

independence probability pI of not being associated with an earthquake cluster; we calculate these probabilities by applying a Monte Carlo approach to the Reasenberg (1985) declustering algorithm. If we were to define the best-performing model, this model must never be rejected in the R-Test and must show consistency with the observations in the L- and N-Tests. This would be a statement about the model's performance over the full magnitude range and the entire testing area. Additionally, we propose more detailed investigations of a model's performance by testing smaller spatial and magnitude ranges. Therefore, we compute and retain the significances for each model for each bin. This results in maps from which areas can be identified in which certain models show strong or weak performance. Secondary tests of this kind can help in understanding how and why models perform as they do.

In order to compare models with different characteristics within one test, the models need to comply with one set of rules, or a testing class (Table 1). The testing classes defined for RELM reflect both scientific and public interests and needs, including applications of forecasts. The classes correspond to quasi-stationary (long-term) and time-dependent (short-term) models. Quasi-stationary models assume that earthquake rates are relatively stable over about a year or longer. Short-term models assume that rates vary from day to day because of stress changes and other variations possibly resulting from past earthquakes.

  Class   Forecast period   Aftershocks    Magnitude range   Submission
  I       5 years           Not included   [5, 9]            Numbers
  II      5 years           Included       [5, 9]            Numbers
  III     24 hours          Included       [4, 9]            Code
  IV      1 year            Not included   [5, 9]            Code
  V       1 year            Included       [5, 9]            Code

Table 1. Classes of models in the T-RELM testing.
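As a simplified illustration of the number-consistency idea behind the N-Test (not the exact RELM implementation, which also handles parameter uncertainties and independence probabilities), one can compare the observed event count with the summed forecast rate under a Poisson assumption:

```python
import math

def poisson_cdf(k, lam):
    """P(N <= k) for a Poisson(lam) count."""
    return math.exp(-lam) * sum(lam ** i / math.factorial(i) for i in range(k + 1))

def n_test(forecast_rates, n_observed, significance=0.05):
    """Two-sided check that the total forecast number is consistent with the observation."""
    lam = sum(forecast_rates)          # total expected number of events over all bins
    delta = poisson_cdf(n_observed, lam)
    consistent = significance / 2 < delta < 1 - significance / 2
    return delta, consistent

# hypothetical forecast: 100 spatial-magnitude bins with 0.1 expected events each,
# i.e. 10 events expected in total, against 9 observed events
delta, consistent = n_test([0.1] * 100, 9)
```

A forecast is rejected when the observed count falls in either extreme tail of the forecast number distribution, i.e. when the model predicts far too many or far too few earthquakes.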

Quasi-stationary models (Figure 1) are relevant for public policy, construction planning, and setting insurance rates and priorities for remediation, all of which require seismic hazard estimates valid for years or more. Some quasi-stationary models are fundamentally time-dependent. For example, some renewal models assert that large-earthquake probability decreases substantially after a large event and only gradually recovers over a period of decades or centuries.

Figure 1 Forecast of events of magnitude M ≥ 5 for an example model.

Short-term models are needed for emergency response and public information announcements. These models incorporate probabilities that depend on time and distance from previous earthquakes, usually exploiting the power-law time dependence evident in aftershock sequences. It is difficult to apply these models in any fixed time interval because of the implicit scale invariance. Because earthquake rates change so fast after an earthquake, only an automatic algorithm for updating rates is adequate to implement the full time-dependence of these models.

3. CSEP

With CSEP, we seek to continue the development of probabilistic earthquake forecast testing and to greatly expand the model space to be considered. While T-RELM currently covers only grid-based models, CSEP will additionally cover testing of fault-based models as well as alarm-based models. This requires examination of applicability and the definition of all necessary rules, and thus the setup of testing classes. New testing methods will be integrated, e.g., Receiver Operating Characteristics (ROC), contingency tables, and the Molchan error diagram (Jolliffe and Stephenson, 2003; Molchan, 1990, 1997; Molchan and Kagan, 1992), with the entropy score (Harte and Vere-Jones, 2005) as a potential candidate. CSEP is also designed to expand to other regions and other forecast periods. Tests of models forecasting only M7+ events, but worldwide, will become possible. This requires the integration of additional authorized data sources (earthquake catalogs). To enhance testing in California, the testing grid definition will be expanded to include focal mechanisms, offering higher resolution for more sophisticated models. For fault-based testing, the community fault model for southern California will be integrated, and thus the rules of grid-based testing in T-RELM can be transferred to fault-based models. CSEP also aims to include external data sources needed by time-dependent models for their forecast generation, e.g., automatic slip distributions.

3. Conclusion With the recent advances and the multitude of models becoming available in earthquake forecasting and prediction, it has become clear that a standard for forecast testing is necessary. Through the design of this testing procedure we have learned that a test must not only be statistically robust, but must also be conducted in the spirit of the forecast and be practical; these requirements are not easily combined.

4. References

Frankel, A. et al. (1996), National seismic hazard maps, USGS Open-File Report 96-532.
Harte, D. and Vere-Jones, D. (2005), The entropy score and its uses in earthquake forecasting, Pure Appl. Geophys., 162, 1229-1253.
Jolliffe, I. T. and Stephenson, D. B. (2003), Forecast Verification: A Practitioner's Guide in Atmospheric Sciences, John Wiley & Sons, 240 pp.
Kagan, Y. Y. and Jackson, D. D. (1995), New seismic gap hypothesis: Five years after, J. Geophys. Res., 100, 3943-3959.
Molchan, G. M. (1990), Strategies in strong earthquake prediction, Phys. Earth Planet. Inter., 61, 84-98.
Molchan, G. M. (1997), Earthquake prediction as a decision-making problem, Pure Appl. Geophys., 149, 233-248.
Molchan, G. M. and Kagan, Y. Y. (1992), Earthquake prediction and its optimization, J. Geophys. Res., 97, 4823-4838.
Reasenberg, P. (1985), Second-order moment of central California seismicity, 1969-1982, J. Geophys. Res., 90, 5479-5495.
Schorlemmer, D. and Gerstenberger, M. (submitted), RELM Testing Center, Seismol. Res. Lett.
Schorlemmer, D., Gerstenberger, M., Wiemer, S. and Jackson, D. D. (submitted), Earthquake Likelihood Model Testing, Seismol. Res. Lett.

Earthquake recurrence models for faults: the crucial role of intermediate magnitude earthquakes in seismic hazard assessment.

O. Scotti, C. Clement and F. Bonilla

Institut de Radioprotection et Sûreté Nucléaire, Bureau d'Evaluation du Risque Sismique BP 17 - 92262 FONTENAY-AUX-ROSES - FRANCE

Corresponding Author: O. Scotti, [email protected]

Abstract: Seismic hazard results (levels and scenarios) obtained for a site located close to a potentially active fault segment of the Durance fault system, southeast France, depend strongly on the assumed seismicity rates of intermediate magnitude events. We consider here three time-independent recurrence models for the probability density function on magnitude (Gutenberg-Richter, Youngs-Coppersmith and Maximum Earthquake) and use different slip rate estimates and the historical seismicity to estimate earthquake occurrence rates. We consider only M≥5 events in the hazard calculations. The model producing the highest rate of intermediate magnitude events systematically leads to higher hazard levels and lower magnitude scenarios. The integration of the aleatory variability of ground motion prediction equations in the hazard calculation leads naturally to this result. Additional geological criteria indicative of the earthquake frequency-size potential of faults are necessary in order to reduce uncertainty in the modeling of intermediate magnitude events, potentially important contributors to seismic hazard estimates close to faults.

1. Introduction In many regions of the world fault characterization is difficult: paleoseismological studies are scarce and it is often difficult to establish clear relationships between faults and seismicity. Yet potentially active faults have been mapped and recognized in high-risk regions, and they need to be considered in seismic hazard assessments. To describe fault activity in seismic hazard studies we appeal to recurrence models that describe the relative likelihood of earthquakes of different magnitudes on the fault and the rate of occurrence of earthquakes. Three probability density functions on magnitude, f(m), are currently used in the literature (Figure 1). (a) The truncated exponential model (GR), based on the results of Gutenberg and Richter (1944), with a lower and upper magnitude cut-off. (b) The maximum magnitude model (CE), based on seismological data compiled by Schwartz and Coppersmith (1984) and Wesnousky (1994), which suggest that some individual faults and fault segments tend to repeatedly generate earthquakes of comparable magnitudes. It consists of a truncated normal distribution for f(m) centered on the maximum magnitude. (c) The characteristic earthquake model (YC) for f(m), proposed by Youngs and Coppersmith (1985) to adjust the truncated exponential model to allow for the increased likelihood of characteristic events. The characteristic earthquake is uniformly distributed in the magnitude range mu-Δmc to mu (in this study we centred this interval on the maximum magnitude of the CE model) whereas lower magnitude earthquakes follow an

exponential distribution.
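The three density models can be sketched numerically on a magnitude grid. This is an illustrative reconstruction, not the authors' code: the b-value, the CE standard deviation, and the Youngs-Coppersmith box-height convention are assumptions, and each density is normalized numerically.

```python
import numpy as np

m0, m_max = 5.0, 6.5               # lower/upper magnitude bounds (assumed)
beta = 1.0 * np.log(10.0)          # b = 1 assumed; beta = b ln(10)
m = np.linspace(m0, m_max, 1001)
dm = m[1] - m[0]

def normalize(f):
    """Normalize a density sampled on the grid so it integrates to 1."""
    return f / (f.sum() * dm)

# (a) truncated exponential (GR)
f_gr = normalize(np.exp(-beta * (m - m0)))

# (b) maximum magnitude (CE): truncated normal centered on m_max, sigma assumed
sigma = 0.25
f_ce = normalize(np.exp(-0.5 * ((m - m_max) / sigma) ** 2))

# (c) characteristic (YC): exponential below m_max - Dmc, a uniform box above;
# the box height is set from the exponential evaluated 1 unit below the box
# (one common reading of Youngs & Coppersmith; an assumption here)
Dmc = 0.5
f = np.exp(-beta * (m - m0))
box = m >= m_max - Dmc
f[box] = np.exp(-beta * (m_max - Dmc - 1.0 - m0))
f_yc = normalize(f)

for name, f in (("GR", f_gr), ("CE", f_ce), ("YC", f_yc)):
    print(f"{name}: P(M >= 6.0) = {f[m >= 6.0].sum() * dm:.3f}")
```

The printout makes the paper's point concrete: the three models, even with the same magnitude bounds, put very different probability mass on the largest events.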

Figure 1 Probability density functions for magnitude: (a) truncated exponential and characteristic, (b) maximum magnitude (from Stewart et al., 2001).

Two philosophies are generally used to model the rate of earthquake occurrence: time-independent and time-dependent models. The time-independent, or Poisson, model is the recurrence model commonly used in probabilistic seismic hazard analysis. The implication of this model is that the likelihood of an earthquake occurring in any time interval is independent of when the previous event occurred. However, the physics of stress accumulation followed by release in characteristic earthquake ruptures on faults has led a number of investigators to consider time-dependent recurrence models for these sources. In their simplest form, time-dependent recurrence models are cast as renewal models in which the likelihood of the next characteristic event occurring in a specified time interval depends only on the elapsed time since the previous characteristic event (e.g., Cornell and Winterstein, 1988). Here we assume time-independent models only. The development of such models must be constrained so that the moment release from earthquakes balances the moment build-up. For a given fault, moment build-up is proportional to slip rate, the long-term, time-averaged relative velocity of block movements on opposite sides of the fault. Slip rates can be derived from the amount of slip that has occurred over a geologically defined interval, from the measurement of strains across a fault, or from the seismic catalogue of the region through some simplified assumptions. We calculate hazard at a site close to a single fault segment using two different approaches to estimate seismicity rates: one based on deformation data and a second based on the historical seismicity catalogue.
We then discuss the impact of each approach and each recurrence model on the seismic hazard and point out the need for additional data and numerical models to help reduce uncertainties in the modelling of intermediate magnitude events, potentially important contributors to seismic hazard estimates close to faults.

2. Model parameters and slip rate estimates The Durance fault system is a potentially active structure of the southeastern France tectonic province. Although the segmentation of this fault system is reasonably well characterized, the details of the surface projection of this fault are still debated (Aochi et al., 2005). Here we consider a 10x10 km2 vertical fault segment and compute hazard for a site located 10 km away. Slip rate estimates (Scotti et al., 2005) along the Durance fault system vary between 0.07 mm/yr (GPS) and 0.2 mm/yr (geological markers). To compute hazard estimates we first compute, for each slip rate, the equivalent moment rate, assuming the maximum magnitude is imposed by the fault geometry. This moment rate is then divided among the different

magnitude bins following the three recurrence models. The resulting return periods for M≥5 events range between 475 and 26,000 years. Historical seismicity records, on the other hand, show evidence for an event of M5 along the Durance fault system almost every century, two of which are located along the southern fault segment modeled here. An additional hypothesis therefore assumes that the fault segment may be better characterized by a maximum magnitude recurrence model of M5 events. In this case, based on the historical seismicity and assuming the event can occur anywhere along the 60-km-long fault, we estimate the return period of M=5 events for the modeled fault segment at between 400 and 800 years. There is also paleoseismological evidence for the occurrence of an important seismic event between 9 ky and 27 ky ago along this fault segment, which thus provides rough estimates of the recurrence periods that can be associated with M>6 events along this fault segment.
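The moment-balance bookkeeping behind these return periods can be sketched as follows. The rigidity value and the Hanks-Kanamori moment-magnitude relation are standard; the fault area follows the text, but the maximum magnitude and the resulting numbers are only illustrative order-of-magnitude checks, not the paper's values.

```python
MU = 3.0e10                      # rigidity (Pa), assumed
AREA = 10e3 * 10e3               # 10 x 10 km^2 fault segment, in m^2

def moment_rate(slip_rate_mm_yr):
    """Moment build-up rate (N m / yr): mu * area * slip rate."""
    return MU * AREA * slip_rate_mm_yr * 1e-3

def moment(mw):
    """Seismic moment (N m) from moment magnitude (Hanks & Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.1)

# maximum-earthquake end member: the whole moment budget goes into Mmax events
MMAX = 6.3                       # assumed from the 10 x 10 km^2 fault geometry
T_gps = moment(MMAX) / moment_rate(0.07)   # GPS slip rate (mm/yr)
T_geo = moment(MMAX) / moment_rate(0.2)    # geological slip rate (mm/yr)
print(f"return period of an M{MMAX} event: "
      f"~{T_gps:.0f} yr (GPS) to ~{T_geo:.0f} yr (geological)")
```

Under the GR or YC models the same moment rate is instead spread over many magnitude bins, which is why the implied M≥5 rate, and hence the hazard, differs so strongly between models.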

3. Results, discussion and questions The resulting seismic hazard estimates and seismic scenarios (only M≥5 events are modeled) depend strongly on the assumed seismicity rates of the intermediate magnitude events. Assuming the maximum magnitude is imposed by the fault geometry, the GR recurrence model leads to higher hazard values and a lower magnitude seismic scenario (Figure 2) than the CE and YC models. It is the intermediate magnitude range that contributes most to the hazard, owing to the tails of the ground motion prediction equation distributions (epsilon > 1) integrated in these calculations. For this same reason, hazard computed under the Mmax=5 hypothesis may produce even greater hazard levels, depending on the assumed return period and the annual probability of exceedance of interest. The choice of an appropriate recurrence model for predicting seismic events along a given fault system is very often a difficult task. In this paper we have seen that the scarce data available for the Durance fault system, for example, do not allow us to favor one model over another: the GR, CE and YC recurrence models perform equally well (or poorly) depending on the data one prefers to fit. We have also seen that intermediate magnitude events dominate the hazard if the GR model is used or if we assume a maximum magnitude M5 that reflects the historical seismicity rates. Thus, while the search for strong paleoseismic events remains a priority, it is also important to pursue research that will help decision makers choose among existing recurrence models, given their drastically different predictions for moderate magnitude events. In a recent publication Zoeller et al. (2005) suggest, on the basis of numerical modeling, that spatial heterogeneities significantly influence rupture propagation and thus the frequency-size statistics and other aspects of earthquake behavior.
According to the authors, immature faults may be characterized by strongly heterogeneous, non-planar fault zones with a broad spatial distribution of heterogeneities. Progressive slip tends to regularize the disorder and localize it along approximately planar fault zones, thereby narrowing the range of sizes of the heterogeneities. Is the Durance fault system a mature or an immature fault? What are the measurable field parameters that can be injected into numerical models to help define the maturity of a fault? Does the tectonic strain rate play a role? Answers to these questions may help in choosing appropriate recurrence models and reduce uncertainties in seismic hazard assessments.

Figure 2 (panels GR, CE, YC) Magnitude, distance and epsilon contributions to a given seismic hazard level at a site located 10 km away from a 10x10 km2 segment of the Durance fault system. The dominant seismic scenario depends strongly on the recurrence model used and the assumed slip rates (the moment rate budget is exactly the same for the three recurrence models).

4. Acknowledgement This work is part of the Seismic-PSA (Probabilistic Safety Assessment) research program currently conducted at IRSN.

5. References
Aochi, H., Cushing, M., Scotti, O. and Berge-Thierry, C. (2005), Estimating rupture scenario likelihood based on dynamic rupture simulations: the example of the segmented Middle Durance fault, southeastern France, Geophys. J. Int., in press.
Cornell, C. A. and Winterstein, S. R. (1988), Temporal and magnitude dependence in earthquake recurrence models, Bull. Seism. Soc. Am., 78, 1522-1537.
Gutenberg, B. and Richter, C. F. (1944), Frequency of earthquakes in California, Bull. Seism. Soc. Am., 34, 185-188.
Schwartz, D. P. and Coppersmith, K. J. (1984), Fault behaviour and characteristic earthquakes: examples from the Wasatch and San Andreas fault zones, J. Geophys. Res., 89, 5681-5698.
Scotti, O., Clément, C., Bonilla, L. F. and Baize, S. (2005), Évaluation probabiliste de l'aléa sismique spécifique au site de Tricastin, Rapport IRSN/DEI/SARG n° 2005-008.
Stewart, J. P., Chiou, S.-J., Bray, J. D., Graves, R. W., Somerville, P. G. and Abrahamson, N. (2001), Ground Motion Evaluation Procedures for Performance-Based Design, PEER Report 2001/09.
Youngs, R. R. and Coppersmith, K. J. (1985), Implications of fault slip rates and earthquake recurrence models to probabilistic seismic hazard estimates, Bull. Seism. Soc. Am., 75, 939-964.
Wesnousky, S. G. (1994), The Gutenberg-Richter or characteristic earthquake distribution, which is it?, Bull. Seism. Soc. Am., 84, 1940-1959.
Zoeller, G., Holschneider, M. and Ben-Zion, Y. (2005), The role of heterogeneities as a tuning parameter of earthquake dynamics, Pure Appl. Geophys., 162, 1077-1111, doi:10.1007/s00024-004-2662-7.

Long-term Earthquake Forecasts in Japan (1996-2005)

Kunihiko Shimazaki

The Earthquake Research Institute, University of Tokyo Yayoi 1-1-1, Bunkyo-ku, Tokyo 113-0032, Japan

Corresponding Author: K. Shimazaki, [email protected]

Abstract: The first large earthquake for which a long-term forecast had been publicized in Japan took place in the Tokachi-Oki area off the southeastern coast of Hokkaido, the northernmost of the four major islands, on 26 September 2003. The forecasted magnitude was M=8.0-8.2 and that of the 2003 event was 8.0. The forecast, issued in March 2003, gave a 60% probability of occurrence of a large interplate earthquake in the area along the Kuril trench in the coming 30 years. Detailed studies show that the focal region of the 2003 Tokachi-Oki earthquake was slightly smaller than forecast. Although the forecast was not perfect, it is generally accepted as a successful one. The fundamental information on which the forecast was based comes from historical earthquake data, and the paucity of written documents in the 19th century in eastern Hokkaido limits the accuracy of long-term forecasts in this region. All the sites recording strong shaking coincided well with areas where the probability of strong shaking had been evaluated as high on the preliminary seismic hazard map of northern Japan based on the long-term forecast. Long-term forecasts have been publicized not only for other great earthquakes along the oceanic trenches surrounding the Japanese islands, but also for many large shallow inland earthquakes in Japan. The first long-term forecast was made in September 1996 for an earthquake on the Itoigawa-Shizuoka Tectonic Line active fault system, which almost cuts across the central part of Japan. The forecasts are made public by the Headquarters for Earthquake Research Promotion, which was established after the devastating Kobe earthquake of 1995. One of the organization's major tasks was to complete national seismic hazard maps on the basis of the long-term forecasts; the first such maps were completed and made public on 23 March 2005.
All damaging seismic sources in Japan are evaluated: identifiable sources, and sources whose location and occurrence time cannot be specified in advance. Even for identifiable sources, the occurrence time cannot be predicted. At first the expression 'probability of earthquake occurrence in the coming several hundred years is high' was used, but later a much shorter time range was recommended, and now the occurrence probability in 30 years is used for the forecast. For great earthquakes

along oceanic trenches, whose repeat times are mostly on the order of decades, calculated probabilities may amount to several tens of percent. On the other hand, 30-year probabilities evaluated for shallow inland earthquakes, whose recurrence intervals are on the order of 1,000 to 10,000 years, are at most 10% or so. The 30-year probability for the Kobe earthquake of 1995, which could have been calculated prior to its occurrence, amounts to only 0.02-8%. A small probability does not necessarily mean safety; the expressions 'probability is high' and 'slightly high' are now used in a relative sense for the forecasts of shallow inland earthquakes. Historical data are fully used for the evaluation of earthquakes along the oceanic trenches because of their short recurrence intervals. On the other hand, geomorphological and geological data are mainly used for shallow inland earthquakes. The size of a future earthquake is estimated from the empirical relationship between earthquake magnitude and fault length/source area. The location of the event is fairly precisely known from active fault data and historical earthquake catalogs, except for certain types of earthquakes such as deep events. In the Nankai and Tonankai regions in western Japan, where geodetic data on the recurrence of great interplate earthquakes are available, the time-predictable model is combined with the BPT (Brownian Passage Time) model for the estimation of earthquake occurrence probability. In fifteen other regions along the oceanic trenches, the BPT model is used. In two cases where the occurrence time of the last event along the oceanic trenches is unknown, a Poisson process is assumed. On land, the 98 major active fault zones are classified into 159 seismic zones. The BPT model was applied for 96 zones and the Poisson model for 20 zones. No forecast was given for 39 zones because of the lack of paleoseismic data. Four zones were evaluated as inactive.
A relatively small aperiodicity parameter for the BPT model, 0.24, is estimated from the recurrence data of four major fault zones and is used for the evaluation.
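The BPT conditional probabilities described above can be sketched as follows. The BPT distribution is the inverse Gaussian, whose CDF has a closed form in terms of the normal CDF. The aperiodicity 0.24 is the value quoted in the abstract, but the mean recurrence, elapsed time, and 30-year window below are invented illustrative numbers.

```python
from math import sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bpt_cdf(t, mean, alpha):
    """CDF of the Brownian Passage Time (inverse Gaussian) distribution
    with mean recurrence `mean` and aperiodicity `alpha`."""
    lam = mean / alpha ** 2                    # inverse-Gaussian shape
    a = sqrt(lam / t)
    return (norm_cdf(a * (t / mean - 1.0))
            + exp(2.0 * lam / mean) * norm_cdf(-a * (t / mean + 1.0)))

def cond_prob(elapsed, window, mean, alpha):
    """P(event within `window` years | `elapsed` years since last event)."""
    num = bpt_cdf(elapsed + window, mean, alpha) - bpt_cdf(elapsed, mean, alpha)
    den = 1.0 - bpt_cdf(elapsed, mean, alpha)
    return num / den

# illustrative: mean recurrence 2000 yr, aperiodicity 0.24, 1500 yr elapsed
p = cond_prob(1500.0, 30.0, 2000.0, 0.24)
print(f"30-yr conditional probability: {p:.2%}")
```

With a small aperiodicity the hazard is strongly time-dependent: the same 30-year window gives a tiny probability early in the cycle and a much larger one as the elapsed time approaches the mean recurrence, which is why inland 30-year probabilities of "only" a few percent can still be meaningful.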

Tidal Triggering of Earthquakes Precursory to the 2004 Mw = 9.0 Off Sumatra Earthquake

S. Tanaka

National Research Institute for Earth Science and Disaster Prevention, Tsukuba 305-0006, Japan

Corresponding Author: S. Tanaka, [email protected]

Abstract: We observed tidal triggering of earthquakes precursory to the 2004 Off Sumatra earthquake (Mw = 9.0). We precisely measured the correlation between the earth tide and earthquake occurrence in and around the focal region of this large megathrust earthquake. Statistical analysis shows that a high correlation was concentrated in the area around the initial rupture point of the future large earthquake for about ten years preceding the occurrence of the Off Sumatra earthquake. The frequency distribution of tidal phase angles in the pre-event period exhibited a peak near the phase angle where the tidal shear stress is at its maximum to accelerate the fault slip. This indicates that the observed high correlation is not a stochastic chance but a physical consequence of the tidal triggering effect.

1. Introduction The earth tide produces periodic stress variations of the order of 10^3 Pa in the earth. Although these stress variations are much smaller than typical stress drops of earthquakes, their rates are generally much larger than those of tectonic stress accumulation. It has therefore been postulated that tidal stress may act as a triggering factor for earthquakes (e.g., Emter, 1997). Recently, a clear correlation between the earth tide and earthquake occurrence was verified by carefully examining global earthquake data (Tanaka et al., 2002b; Cochran et al., 2004). These analyses remedied a weakness of previous studies: the indirect term of the earth tide due to ocean tide loading was properly incorporated into the theoretical calculation of the earth tide by using a reliable ocean tide model. Applying these new tidal synthetics, recent studies have further found that a high correlation between the earth tide and background seismicity appeared for several years preceding the occurrence of several large earthquakes (Tanaka et al., 2002a, 2004, 2005). These results suggest that tidal triggering may appear when the stress in the focal region is close to a critical condition for releasing a large earthquake. In the present study, we examine the correlation between the earth tide and earthquake occurrence in and around the focal region of the recent catastrophic earthquake off Sumatra. We focus on the space-time pattern of the tidal triggering effect associated with the occurrence of this great earthquake.

2. Data The devastating earthquake with Mw = 9.0 occurred off the west coast of northern Sumatra on December 26, 2004, as thrust faulting on the interface of the India and Burma plates. The aftershock zone of this great earthquake is over 1300 km long, which may include the focal region of the main rupture (e.g., Lay et al., 2005). We investigate the correlation between the earth tide and earthquake occurrence in and around the focal region of this large megathrust earthquake. As the study area, we set a rectangular area of 1600 km × 800 km that covers the aftershock zone of the Off Sumatra earthquake. From the Harvard centroid moment tensor (CMT) catalog, we use the origin times, hypocenter locations, and focal mechanism solutions of the 275 earthquakes (Mw ≥ 5.0, H ≤ 70 km)

occurring within the study area for the period from 1977 to 2004. The epicentral distribution of the earthquakes used in this study is shown in Figure 1 by black dots.

3. Method of Statistical Test We statistically investigate the correlation between the earth tide and earthquake occurrence following the method of Tanaka et al. (2002b). We theoretically calculate the tidal shear stress on the fault plane (positive in the same sense as the fault slip) for each earthquake. This calculation includes both the direct solid earth tide and the indirect term due to ocean tide loading, where the ocean tide model NAO.99b (Matsumoto et al., 2000; Takanezawa et al., 2001) is used. From the calculated tidal shear stress, we assign a tidal phase angle to the occurrence time of each earthquake. The tidal phase angle, taking a value between −180° and 180°, is assigned by linearly dividing the time interval from 0° to 180° or from −180° to 0°, where 0° and ±180° correspond to the maximum and the minimum of the tidal shear stress in the slip direction, respectively. Having determined the phase angles for all the earthquakes, we statistically test whether they concentrate near some particular angle using Schuster's test (e.g., Emter, 1997). In this test, the result is evaluated by a p-value. The p-value, ranging between 0 and 1, represents the significance level at which the null hypothesis that earthquakes occur randomly irrespective of the tidal phase angle can be rejected. A smaller p-value indicates a higher correlation between the earth tide and earthquake occurrence.
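A minimal sketch of Schuster's test as described above, using the standard large-N approximation p ≈ exp(-D²/N), where D is the length of the vector sum of unit phasors. The synthetic phase data below are invented for illustration; they are not the catalog phases.

```python
import numpy as np

def schuster_p(phases_deg):
    """Schuster's test p-value (large-N approximation p = exp(-D^2/N)).

    phases_deg: tidal phase angles in degrees. A small p-value means the
    phases cluster near some angle instead of being uniformly distributed.
    """
    th = np.radians(np.asarray(phases_deg, dtype=float))
    n = th.size
    # squared length of the resultant of N unit vectors
    d2 = np.cos(th).sum() ** 2 + np.sin(th).sum() ** 2
    return float(np.exp(-d2 / n))

rng = np.random.default_rng(0)
uniform = rng.uniform(-180.0, 180.0, 275)                 # no tidal preference
clustered = (rng.normal(0.0, 60.0, 85) + 180.0) % 360.0 - 180.0  # peak near 0°
print("uniform   p =", schuster_p(uniform))
print("clustered p =", schuster_p(clustered))
```

Phases concentrated near 0° (maximum slip-encouraging tidal shear stress) yield a tiny p-value, mirroring the 1.7% the paper reports for the pre-event decade, while uniformly scattered phases give a non-significant p.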

4. Result Using all 275 earthquakes, we obtain a large p-value of 48%. This indicates that no significant correlation between the earth tide and earthquake occurrence is seen for the data set including all the earthquakes used in this study. However, we find a characteristic tidal effect by precisely examining the space-time pattern of the p-value. Figure 2a shows the temporal variation of the p-value, where a time

Figure 1 Epicentral distribution of the earthquakes used in this study (black dots); the study area (rectangle) covers the aftershock zone. Star is the epicenter of the Off Sumatra earthquake (2004/12/26, Mw 9.0). The focal mechanism solution of this earthquake is shown after the Harvard CMT catalog.

Figure 2 (a) Temporal variation of the p-value up to the Mw 9.0 mainshock. A time window of five years, represented by the horizontal bar, is shifted by one year. (b) Frequency distribution of tidal phase angles in the ten years prior to the Off Sumatra earthquake (p = 1.7%, N = 85). The solid curve represents a sinusoidal function fitted to the distribution.

window of five years is shifted by one year. The p-value had remained larger than 20% for about 20 years from 1977. However, a clear decrease appeared in the late 1990s. The p-value decreased until the occurrence of the Off Sumatra earthquake, attaining a significantly small value of 2.4% just prior to it. Figure 2b shows the frequency distribution of tidal phase angles for the period of ten years preceding the occurrence of the Off Sumatra earthquake. Using all 85 earthquakes occurring in this period, we obtain a very small p-value of 1.7%. The phase distribution in this period has a peak near the angle 0°, where the tidal shear stress is at its maximum to accelerate the fault slip. This indicates that the observed small p-value is not a stochastic chance but a physical consequence of the tidal effect. We further investigate the spatial distribution of the p-value in the pre-seismic low-p period. Figure 3 shows the spatial distribution of the p-value in the ten years preceding the occurrence of the Off Sumatra earthquake, where a spatial window of 400 km in the NNW-SSE direction is shifted by 100 km. We find that small p-values appeared only in the southern part of the study area, which includes the initial rupture point of the Off Sumatra main shock. This indicates that the precursory tidal effect was concentrated only in the area around the initial rupture point of the impending large earthquake.

5. Conclusion We investigated the correlation between the earth tide and earthquake occurrence in and around the focal region of the 2004 Mw = 9.0 Off Sumatra earthquake. As a result of statistical analysis, a high correlation, which concentrated in the area around the initial

Figure 3 Spatial distribution of the p-value along the NNW-SSE direction in the ten years prior to the Off Sumatra earthquake; the position of the Mw 9.0 initial rupture point is marked. A spatial window of 400 km, represented by the horizontal bar, is shifted by 100 km from NNW to SSE.

rupture point of the future large earthquake, was found for about ten years preceding the occurrence of the Off Sumatra earthquake. These observations suggest that a small stress change due to the earth tide can trigger an earthquake when the tectonic stress is close to the critical condition for releasing a large earthquake. Among possible earthquake triggering factors, the earth tide is of particular interest since it exists universally and is theoretically predictable with sufficient accuracy. Systematic monitoring of the tidal triggering effect on earthquakes may be feasible for detecting the final stage of tectonic stress accumulation prior to the occurrence of a main rupture.

6. Acknowledgement This study was partially supported by a Grant-in-Aid for JSPS Fellows (No. 02489).

7. References
Cochran, E. S., Vidale, J. E. and Tanaka, S. (2004), Earth tides can trigger shallow thrust fault earthquakes, Science, 306, 1164-1166, doi:10.1126/science.1103961.
Emter, D. (1997), Tidal triggering of earthquakes and volcanic events, in Tidal Phenomena, Lect. Notes Earth Sci., vol. 66, edited by H. Wilhelm, W. Zurn and H.-G. Wenzel, Springer-Verlag, New York, pp. 293-309.
Lay, T., Kanamori, H., Ammon, C. J., Nettles, M., Ward, S. N., Aster, R. C., Beck, S. L., Bilek, S. L., Brudzinski, M. R., Butler, R., DeShon, H. R., Ekstrom, G., Satake, K. and Sipkin, S. (2005), The great Sumatra-Andaman earthquake of 26 December 2004, Science, 308, 1127-1133, doi:10.1126/science.1112250.
Matsumoto, K., Takanezawa, T. and Ooe, M. (2000), Ocean tide models developed by assimilating TOPEX/POSEIDON altimeter data into hydrodynamical model: A global model and a regional model around Japan, J. Oceanogr., 56, 567-581.
Takanezawa, T., Matsumoto, K., Ooe, M. and Naito, I. (2001), Effects of the long-period ocean tides on earth rotation, gravity and crustal deformation predicted by global barotropic model –periods from Mtm to Sa–, J. Geod. Soc. Jpn., 47, 545-550.
Tanaka, S., Ohtake, M. and Sato, H. (2002a), Spatio-temporal variation of the tidal triggering effect on earthquake occurrence associated with the 1982 South Tonga earthquake of Mw 7.5, Geophys. Res. Lett., 29(16), 1756, doi:10.1029/2002GL015386.
Tanaka, S., Ohtake, M. and Sato, H. (2002b), Evidence for tidal triggering of earthquakes as revealed from statistical analysis of global data, J. Geophys. Res., 107(B10), 2211, doi:10.1029/2001JB001577.
Tanaka, S., Ohtake, M. and Sato, H. (2004), Tidal triggering of earthquakes in Japan related to the regional tectonic stress, Earth Planets Space, 56, 511-515.
Tanaka, S., Sato, H., Matsumura, S. and Ohtake, M. (2005), Tidal triggering of earthquakes in the subducting Philippine Sea plate beneath the locked zone of the plate interface in the Tokai region, Japan, Tectonophysics, in press.

Forecasting the evolution of seismicity in southern California: Animations built on earthquake stress transfer

S. Toda1, R. S. Stein2, K. Richards-Dinger3, and S. Bozkurt2

1Active Fault Research Center, AIST, Tsukuba 305-8567, Japan 2 U.S. Geological Survey, Menlo Park, CA, USA, 3 Geothermal Program Office, Naval Air Weapons Station, China Lake, CA, USA.

Corresponding Author: S. Toda, [email protected]

Abstract: We develop a forecast model to reproduce the distribution of mainshocks, aftershocks and surrounding seismicity observed during 1986-2003 in a 300 x 310 km area centered on the 1992 M=7.3 Landers earthquake. To parse the catalog into frames with equal numbers of aftershocks, we animate seismicity in log-time increments that lengthen after each mainshock; this reveals aftershock zone migration, expansion and densification. We implement a rate/state algorithm that incorporates the static stress transferred by each M≥6 shock and then evolves the seismicity rate in time. Coulomb stress changes amplify the background seismicity, so small stress changes produce large changes in seismicity rate in areas of high background seismicity. Similarly, seismicity rate declines in the stress shadows are evident only in areas with previously high seismicity rates. Thus a key constituent of the model is the background seismicity rate, which we smooth from 1981-1986 seismicity. The mean spatial correlation coefficient between observed and predicted M≥1.4 shocks (the minimum magnitude of completeness) is 0.52 for 1986-2003, and 0.63 for 1992-2003; a control standard aftershock model yields 0.54 and 0.52 for the same periods. We also evaluate the ability of the model to forecast the total number of aftershocks as a function of time, and as a function of radial distance from each mainshock. Four M≥6.0 shocks struck during the test period; three occurred at sites where the expected seismicity rate falls above the 92nd percentile, and one at a site above the 75th percentile. The model thus reproduces much, but certainly not all, of the observed spatial and temporal seismicity, from which we infer that the decaying effect of stress transferred by successive mainshocks influences seismicity for decades. Finally, we offer an M≥5 earthquake forecast for 2005-2015, assigning probabilities to 324 10 x 10-km cells.
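The rate/state ingredient the abstract describes can be illustrated with the standard Dieterich-type response of seismicity rate to a static Coulomb stress step. This is a generic sketch, not the authors' implementation: the constitutive parameter A·sigma, the stressing rate, and the stress steps below are invented illustrative values.

```python
import numpy as np

A_SIGMA = 0.04            # MPa; constitutive parameter times normal stress (assumed)
TAU_DOT = 0.004           # MPa/yr; background stressing rate (assumed)
T_A = A_SIGMA / TAU_DOT   # aftershock decay time, 10 yr with these values

def rate(t, dcff, r_background=1.0):
    """Seismicity rate at time t (yr) after a Coulomb stress step dcff (MPa),
    relative to the background rate, following the Dieterich-type response."""
    gamma = (np.exp(-dcff / A_SIGMA) - 1.0) * np.exp(-t / T_A) + 1.0
    return r_background / gamma

t = np.array([0.01, 0.1, 1.0, 10.0, 50.0])
print("stress increase (+0.1 MPa):", np.round(rate(t, +0.1), 2))
print("stress shadow   (-0.1 MPa):", np.round(rate(t, -0.1), 3))
```

Two properties the abstract relies on fall out directly: the rate change is multiplicative on the background rate (so the same stress step produces a much larger absolute seismicity change where background rates are high, and shadows are visible only there), and the perturbation decays back to background over the aftershock duration, here a decade-scale T_A.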

Development of Seismicity Analysis Software on Unix, Windows and Mac OS X

Hiroshi Tsuruoka Earthquake Information Center, Earthquake Research Institute, University of Tokyo

Corresponding author: H. Tsuruoka, [email protected]

A tool for seismicity analysis has been revised from the original software, which ran only on Unix workstations, into a new version that runs under Unix, Windows and Mac OS X. The new software is written in standard g77 Fortran, which is widely distributed and operates under many kinds of operating systems. We have added 1) macro commands, 2) a customizable menu and 3) a web interface to the new system. The new system, with its graphical user interface, is more convenient and powerful for users, and should be welcome to researchers in the field of seismicity.

Testing the Precursory Seismic Quiescence Hypothesis

T. van Stiphout1, S. Wiemer1

1 Institute of Geophysics, ETH Zurich, 8093 Zurich, Switzerland

Corresponding Author: T. van Stiphout, [email protected]

Abstract: Precursory seismic quiescence (PSQ) has been investigated in numerous studies in the past; nevertheless it is not widely accepted in the seismological community. We believe that in order to make progress we need to move from retrospective testing to prospective testing, using well defined and community accepted testing approaches. As a first step, we are currently re-investigating selected case studies of PSQ (e.g., Landers, Big Bear, Adak, Utah, and Nisqually), using an improved quiescence mapping approach as well as re-located earthquake catalogs when available. From these case studies we will derive a model to forecast seismicity for the next 24 hours for selected regions, to be implemented as a prospective test in the upcoming collaboratory for the study of predictability activities.

1. Introduction
Earthquake rates and earthquake size distribution are sensitive to stress and strain build-up in the Earth’s crust. The occurrence patterns of micro-earthquakes, therefore, might contain information on upcoming large events. One of the patterns that has been consistently reported as a precursor is seismic quiescence in the months to years before a mainshock. Precursory seismic quiescence (PSQ) is generally defined as a significant decrease in the rate of micro-earthquakes in the months to years preceding a mainshock. PSQ has been investigated in more than 70 publications in the past 25 years; nevertheless it is not widely accepted in the seismological community. Many case studies have been carried out using data analysis in a retrospective sense with different methods, such as calculating z-values using the AS(t) function (Habermann 1983; Habermann 1987; Habermann 1991) or the LTA(t) function (Habermann 1988; Habermann 1991), β-values (Reasenberg and Matthews 1988), or the RTL algorithm (Sobolev 2003). Two examples of PSQ are shown in Figure 1.

Figure 1: Two examples of precursory seismic quiescence observed prior to a M5.3 mainshock in Utah (Arabasz and Wyss 1996) (left) and the 1986 M8.0 Adak (Alaska) mainshock (Kisslinger 1986; Kisslinger and Kindel 1994; Wyss and Wiemer 1999). Plotted is the cumulative number of small earthquakes as a function of time near the hypocenter area.

Increased computational power allows the application of processor-intensive methods such as Monte Carlo simulations to scan the six-dimensional parameter space (x, y, z, window length, time, sample size) and determine the probability of a solution. This, and the fact that data quality and availability have improved dramatically along with the analysis techniques for data mining, makes us believe that we can make significant progress in the systematic evaluation of the PSQ hypothesis. Within our anticipated test areas (Japan, California, Utah, Alaska, and Switzerland), more than 300,000 micro-earthquakes are reported annually.

2. Case study: Systematic investigation of the parameter space
To improve our understanding of how the rate-change value depends on the six-dimensional parameter space (x, y, z, time, duration, and sampling area), we first conduct a case study of the anomalies preceding the 2001 Feb. 28, M6.8 Nisqually earthquake in the subducting slab of the Pacific Plate. We have calculated the rate changes using different definitions of the z-value (AS-, LTA-, Rubberband methods and percent change) and β-values as a function of window length and sampling area, and have translated them into probabilities with respect to a uniform distribution. While the choice of how we calculate the z-value is not a critical issue, the choice of the window length and the size of the sampling area is much more critical. One of the critical issues when dealing with seismic rate changes is declustering. Declustering – or the modeling of clustered seismicity – is generally needed in some form in order to deal with the fact that a large fraction of seismicity stems from dependent events. This process, however, is non-unique, since no unique definition of an aftershock, or of clustered seismicity in general, exists. We strive to incorporate this uncertainty by applying a Monte Carlo simulation that covers a realistic parameter space as input to declustering using the method of Reasenberg (Reasenberg 1985).
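The standard-deviate comparison behind the z-value methods can be sketched as follows. This is a minimal illustration, not the authors' code; it assumes seismicity rates have already been binned into equal time intervals for a background period and a candidate quiescence window.

```python
import math

def z_value(rates_background, rates_window):
    """Standard deviate z comparing the mean seismicity rate in a
    candidate quiescence window against a background period.
    A pronounced rate drop in the window yields a large positive z."""
    n1, n2 = len(rates_background), len(rates_window)
    m1 = sum(rates_background) / n1
    m2 = sum(rates_window) / n2
    v1 = sum((r - m1) ** 2 for r in rates_background) / n1
    v2 = sum((r - m2) ** 2 for r in rates_window) / n2
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Hypothetical binned rates: a drop from ~10-12 to ~4-6 events per bin
bg = [10, 12, 11, 9, 10, 11, 12, 10]
win = [5, 6, 4, 5]
print(round(z_value(bg, win), 2))
```

In a mapping application this statistic would be evaluated over the full (x, y, z, time, duration, sampling area) grid and translated into probabilities by Monte Carlo simulation, as described above.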

3. Parameter Search and Testing
We will test the forecasting ability of PSQ in a fully prospective sense. In our test, we will map out the occurrence of quiescence in a six-dimensional parameter space volume (x, y, z, time, duration, sampling area), following the approaches outlined by Wiemer and Wyss (1994), Dieterich and Okubo (1996), Joswig (2001), Ogata (2001), and Wyss, Hasegawa et al. (1999). The significance of each rate decay is estimated using a Monte Carlo type simulation over the parameter space (Figure 2). An illustration of our method for the Landers/Big Bear case is shown in Figures 3, 4, and 5.

Figure 2: Schematic illustration of the translation of z-values into probabilities, using Monte Carlo simulations.

Figure 3: Three maps, which represent snapshots through the six-dimensional parameter space. Color-coded is the log of the probability that a rate change is anomalous.

Once we have re-evaluated several case studies of PSQ using our new mapping approach in multi-dimensional space, we plan to derive a quantitative forecast model: a time-dependent model that forecasts the seismicity rates for the next 24 hours based on the recent seismicity. This model will be built on top of the STEP model (Gerstenberger, Wiemer et al. 2005), with the clustered seismicity forecast of STEP modulated based on the observation of PSQ. We will thus also be able to systematically test for precursory quiescence before larger aftershocks (REFS).

To test the performance of our model, we will integrate it into the RELM (Regional Earthquake Likelihood Model) testing initiative (Schorlemmer, Gerstenberger et al. 2005) of SCEC and the future collaboratory on the study of earthquake predictability, CSEP. Forecast models will be created for several regions with good network coverage and an established history of PSQ: Alaska, California, Japan, Utah, Greece, and Switzerland.

Figure 4: Measuring the optimized duration and volume of the rate decrease preceding the 1992 Big Bear mainshock.

Figure 5: Ranking anomalies according to significance. The height of each bar represents the area an anomaly covers.

4. Outlook
Our goal is to complete, within a time frame of 2 years, a re-evaluation of the PSQ hypothesis and the implementation of prospective tests for selected regions. We believe that within about 5 years we would thus be able to report the first results of the prospective testing.

5. References
Arabasz, W. T. and M. Wyss (1996). "Significant precursory seismic quiescences in the extensional Wasatch front region, Utah." EOS 77: F455.
Dieterich, J. H. and P. G. Okubo (1996). "An unusual pattern of seismic quiescence at Kalapana, Hawaii." Geophysical Research Letters 23: 447-450.
Gerstenberger, M. C., S. Wiemer, et al. (2005). "Real-time forecasts of tomorrow's earthquakes in California." Nature 435(7040): 328-331.
Habermann, R. E. (1983). "Teleseismic detection in the Aleutian Island arc." J. Geophys. Res. 88: 5056-5064.
Habermann, R. E. (1987). "Man-made changes of seismicity rates." Bull. Seism. Soc. Am. 77: 141-159.
Habermann, R. E. (1988). "Precursory seismic quiescence: Past, present and future." Pageoph 126: 279-318.
Habermann, R. E. (1991). "Seismicity rate variations and systematic changes in magnitudes in teleseismic catalogs." Tectonophysics 193: 277-289.
Joswig, M. (2001). "Mapping seismic quiescence in California." Bulletin of the Seismological Society of America 91(1): 64-81.
Kisslinger, C. (1986). Seismicity patterns in the Adak seismic zone and the short-term outlook for a major earthquake. Meeting of the National Earthquake Prediction Evaluation Council, Anchorage, Alaska.
Kisslinger, C. and B. Kindel (1994). "A comparison of seismicity rates near Adak Island, Alaska, September 1988 through May 1990 with rates before the 1982 to 1986 apparent quiescence." Bull. Seism. Soc. Am. 84: 1560-1570.
Ogata, Y. (2001). "Increased probability of large earthquakes near aftershock regions with relative quiescence." Journal of Geophysical Research-Solid Earth 106(B5): 8729-8744.
Reasenberg, P. A. (1985). "Second-order moment of central California seismicity." Journal of Geophysical Research 90: 5479-5495.
Reasenberg, P. A. and M. V. Matthews (1988). "Precursory seismic quiescence: A preliminary assessment of the hypothesis." Pageoph 126: 373-406.
Schorlemmer, D., M. Gerstenberger, et al. (2005). "Earthquake likelihood model testing." Seism. Res. Lett., submitted.
Sobolev, G. A. (2003). "Application of the RTL algorithm to the analysis of preseismic processes before strong earthquakes in California." Izvestiya-Physics of the Solid Earth 39(3): 179-188.
Wiemer, S. and M. Wyss (1994). "Seismic quiescence before the Landers (M=7.5) and Big Bear (M=6.5) 1992 earthquakes." Bulletin of the Seismological Society of America 84: 900-916.
Wyss, M., A. Hasegawa, et al. (1999). "Quantitative mapping of precursory seismic quiescence before the 1989, M7.1, off-Sanriku earthquake, Japan." Annali di Geofisica 42: 851-869.
Wyss, M. and S. Wiemer (1999). "How can one test the seismic gap hypothesis? The case of repeated ruptures in the Aleutians." Pure and Applied Geophysics 155: 259-278.

Estimation of spatial-temporal point process models using the (stochastic) Expectation Maximization algorithm and its application to California earthquakes

Alejandro Veen1 and Frederic Paik Schoenberg1

1UCLA Department of Statistics, 8125 Math Sciences Building, Box 951554, Los Angeles, CA 90095-1554, USA

Corresponding Author: Alejandro Veen, [email protected]

Abstract: We explore the use of the Expectation Maximization (EM) algorithm as well as its stochastic version for estimating the parameters of spatial-temporal point process models, in particular the Epidemic-Type Aftershock Sequence (ETAS) model [Ogata 1988, 1998]. We present situations in which this algorithm performs better than directly maximizing the log-likelihood function and show an application to earthquake occurrence data from Southern California using a spatial-temporal ETAS model.

1. Introduction The Expectation Maximization (EM) algorithm was first introduced in 1977 by Dempster, Laird, and Rubin and has since proven to be an efficient algorithm for finding Maximum Likelihood estimates in settings one can describe as incomplete data problems. In this work, we will extend this methodology to the realm of spatial-temporal point process models. The Epidemic-Type Aftershock Sequence (ETAS) model [Ogata 1988, 1998] can in fact be formulated as an incomplete data problem, since it can be interpreted as a sub-critical branching process with immigration, in which some earthquakes can be considered to be background events while others are treated as being triggered by a preceding event. In this framework, the observable quantities are the time, location, and magnitude of an earthquake and there is an unobservable quantity, which is the information on which earthquake triggered which aftershock or whether an earthquake is a background event. The so-called complete data would be the combination of the observable and unobservable quantities, if it were possible to measure them. Note that we do not intend to make a statement about the geophysical processes of how earthquakes trigger one another. In particular, the assumption that earthquakes can be assigned to single triggering events can be disputed – at least from a geophysical point of view. However, the ETAS model allows for such an interpretation. Alternatively, the ETAS model also allows for an interpretation in which multiple preceding events trigger subsequent earthquakes as their combined effect. For a more extensive explanation of alternative interpretations of the ETAS model, see for instance the beginning of section 2.2 in Sornette and Werner, 2005. For computational reasons, it may be favorable to think of the ETAS model as a branching process with immigration. 
In this case, the EM algorithm or its stochastic version can be applied with the unobservable quantity being the information on which earthquakes triggered which aftershocks or whether an earthquake is a background event.

2. The EM algorithm and its stochastic version
Let $\theta$ denote the parameter vector of a spatial-temporal ETAS model and let $l(\theta \mid D)$ be the log-likelihood function for a given data set $D$. Then the maximum likelihood estimate $\hat\theta$ can be expressed as $\hat\theta = \arg\max_\theta\, l(\theta \mid D)$.

The EM algorithm provides results that are asymptotically equivalent to the direct maximization of the log-likelihood function. However, it has computational advantages, does not depend very much on starting values, and is often more robust than non-linear maximization (or minimization) algorithms. The EM algorithm works in two steps, which ought to be repeated until convergence.

• Expectation step (E-step): Using the current estimate $\hat\theta^{(n)}$ for the parameter vector $\theta$, find the expectation of the complete data log-likelihood $l(\theta \mid D, U)$, where $U$ is the set of unobservable quantities. This expectation can be written as

$$E_{\hat\theta^{(n)}}\left[l(\theta \mid D, U)\right] = \sum_{u \in \mathcal{U}} p(u \mid D, \hat\theta^{(n)})\, l(\theta \mid D, u).$$

• Maximization step (M-step): Update the current estimate by finding the maximum likelihood estimate of $\theta$ using the expectation derived in the E-step, rather than the incomplete data log-likelihood. Denote the new estimate as $\hat\theta^{(n+1)}$.

In short, one can write the EM algorithm as

$$\hat\theta^{(n+1)} = \arg\max_\theta\, E_{\hat\theta^{(n)}}\left[l(\theta \mid D, U)\right].$$

Finding the maximum likelihood estimate of $\theta$ in the M-step can be computationally simpler than the direct maximization of the incomplete data log-likelihood $l(\theta \mid D)$. This situation arises in cases where the partial derivatives of the complete data log-likelihood $l(\theta \mid D, U)$ are simpler than those of the incomplete data log-likelihood $l(\theta \mid D)$.

The stochastic EM algorithm differs in the sense that the E-step is replaced by a “Random Assignment” step. Using the current estimate $\hat\theta^{(n)}$, the probability that a given earthquake $i$ was triggered by a preceding earthquake $j$, as well as the probability that earthquake $i$ is a background event, can be computed. Using this probability vector, earthquake $i$ is then randomly assigned to a triggering “parent” earthquake or classified as a background event. Note that this idea is similar to what is called “stochastic reconstruction” in Zhuang, Ogata, and Vere-Jones (2004). The M-step then maximizes the complete data log-likelihood function, using the generated random assignment of earthquakes to their triggering events in place of the unobservable quantity $U$.
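The two variants of the step can be sketched for a purely temporal ETAS model as follows. This is an illustrative sketch under assumed parameter names, not the authors' implementation; the modified Omori kernel $K e^{\alpha(m_j - m_0)}/(t - t_j + c)^p$ is used for the triggering rate.

```python
import math
import random

def trigger_probs(times, mags, mu, K, c, p, alpha, m0):
    """E-step: for each event i (times sorted ascending), the probability
    that it is a background event (parent None) or was triggered by an
    earlier event j, given the current parameter estimates."""
    probs = []
    for i, ti in enumerate(times):
        rates = [mu]          # background contribution
        parents = [None]
        for j in range(i):
            g = K * math.exp(alpha * (mags[j] - m0)) / (ti - times[j] + c) ** p
            rates.append(g)
            parents.append(j)
        total = sum(rates)
        probs.append([(parent, r / total) for parent, r in zip(parents, rates)])
    return probs

def stochastic_assignment(probs, rng=random):
    """Stochastic E-step: randomly assign each event to a parent (or to
    the background, encoded as None) according to its probability vector."""
    assignment = []
    for vec in probs:
        u, acc = rng.random(), 0.0
        chosen = vec[-1][0]
        for parent, pr in vec:
            acc += pr
            if u <= acc:
                chosen = parent
                break
        assignment.append(chosen)
    return assignment
```

In an actual M-step, $\mu$ would then be re-estimated from the expected (or sampled) number of background events divided by the observation length, and the kernel parameters from the events assigned as triggered.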

3. Application of EM techniques to Southern California earthquake data
In a simulation study, we will present the properties of the EM algorithm for the estimation of a spatial-temporal ETAS model. This methodology is then applied to the estimation of such a model using earthquake occurrence data from Southern California.

4. Concluding Remarks
Simulation studies show that EM-type algorithms are not only computationally efficient and robust, but also seem to yield estimates closer to the true parameters than those derived from directly maximizing the log-likelihood function.

5. Acknowledgements
This material is based upon work supported by the National Science Foundation under Grant No. 0306526. We thank Yan Kagan and Ilya Zaliapin for helpful comments and the Southern California Earthquake Center for its generosity in sharing their data.

6. References
Dempster, A., Laird, N., and Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39/1, 1-38.
Ogata, Y. (1988). Statistical models for earthquake occurrences and residual analysis for point processes. Journal of the American Statistical Association, Vol. 83, 9-27.
Ogata, Y. (1998). Space-time point-process models for earthquake occurrences. Annals of the Institute of Statistical Mathematics, Vol. 50, 379-402.
Sornette, D. and Werner, M.J. (2005). Apparent clustering and apparent background earthquakes biased by undetected seismicity. J. Geophys. Res., Vol. 110, No. B9, B09303, doi:10.1029/2005JB003621.
Zhuang, J., Ogata, Y., and Vere-Jones, D. (2002). Stochastic declustering of space-time earthquake occurrences. Journal of the American Statistical Association, Vol. 97, No. 458, 369-380.
Zhuang, J., Ogata, Y., and Vere-Jones, D. (2004). Analyzing earthquake clustering features by using stochastic reconstruction. J. Geophys. Res., 109, B05301, doi:10.1029/2003JB002879.

STOCHASTIC MODELS FOR THE ANALYSIS OF SEISMICITY DATA.

David Vere-Jones Victoria University and Statistical Research Associates, Ltd, Wellington, New Zealand email: [email protected]

Abstract: This tutorial introduces stochastic modelling as an element in the description, understanding, and forecasting of events recorded in an earthquake catalogue. The three main sections are on model simulation, model estimation, and the computation and assessment of probability forecasts. A common linking theme is the use of simulation methods to clarify the behaviour and assessment of models. Models considered start with models for basic features of seismicity, such as Poisson and cluster processes, the Gutenberg-Richter relation, and Omori’s law, and then proceed to more complex models in which the dynamics is captured through a conditional intensity function, such as the ETAS and stress release models.

1 Introduction

In this tutorial I want to start by making a few general comments about the role of stochastic models, in particular the importance of modelling the uncertainties in the values we observe, as well as the physical processes underlying the observations. Then I shall emphasize the development and use of model simulations as an aid to understanding the models as well as testing them, and also as a tool for fitting them and predicting from them. The tutorial will require some familiarity with elementary probability and statistics, and some experience of using and interpreting simulation algorithms. Many of the procedures to be discussed are available in special purpose packages, such as Utsu and Ogata’s IASPEI software, or David Harte’s SSLib package. However, I shall not attempt to provide assistance in using the packages, as time is limited, my skills are limited, and in any case the packages have good documentation of their own.

2 Simulation of seismicity models: basic ingredients

Here I shall outline and illustrate a sequence of simulation algorithms of increasing complexity for:

1. Simple (constant rate) Poisson processes;

2. Poisson processes with varying rate (in time or space or both);

3. The Gutenberg-Richter law and variants;

4. Aftershock sequences;

5. Conditional intensity models;

6. Extension to space-time models.
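As a one-dimensional illustration of the last items, conditional intensity models can be simulated by Ogata's thinning method. The sketch below (ours, not part of the tutorial software) does this for a self-exciting (Hawkes) intensity with exponential decay, exploiting the fact that the intensity is non-increasing between events, so its current value serves as a valid local upper bound.

```python
import math
import random

def simulate_hawkes(mu, a, b, t_end, rng=random):
    """Ogata-style thinning for a self-exciting process with conditional
    intensity mu + sum_i a*exp(-b*(t - t_i)). Between events the intensity
    decays, so lambda evaluated at the current time dominates it until the
    next accepted event."""
    def lam(t, history):
        return mu + sum(a * math.exp(-b * (t - ti)) for ti in history)

    t, history = 0.0, []
    while True:
        bound = lam(t, history)          # local upper bound, valid until next event
        t += rng.expovariate(bound)      # candidate from dominating Poisson process
        if t > t_end:
            return history
        if rng.random() < lam(t, history) / bound:
            history.append(t)            # accept with probability lambda(t)/bound
```

The constant-rate and varying-rate Poisson cases are recovered by dropping the history-dependent term; the space-time extension thins a dominating process in space and time jointly.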

3 Prediction, assessment and estimation

Simulation methods can not only be revealing in themselves, but also lead to effective procedures for prediction (providing probability forecasts), assessment, and even model selection. My aim in this section is to suggest some ways in which this can be done. Prediction is a natural extension of basic simulation techniques: given the model, updated and adjusted as necessary to take into account current information, one simply simulates the model into the future, many times, and bases probabilities or expected values on observed frequencies or averages from the simulations. To score a probability prediction, one can use the logarithm of the probability gain (the ratio of the forecast probability to the probability from the simplest background model), and use the record of these over a sequence of forecasts to make model comparisons or to examine the circumstances where the model performs well or poorly. Moreover, the average of the log probability gains can be interpreted as just the log-likelihood of the observed outcomes. In a rather similar way, testing involves repeated simulation of the model, and comparing some relevant characteristic of the data (typically a mean value or probability), as computed from the real data, with the population of values computed from a large number of simulations. Model estimation or selection is more troublesome to fit into this framework, but it can be done by scoring the model performance for different selections of parameter values and choosing those values which give the best forecasting performance. This is tantamount to selecting on the basis of the best likelihood behaviour. In principle Bayesian methods can be incorporated by allowing an initial simulation of model parameters from the prior distributions before commencing the simulation of data from the model itself.
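The log probability gain scoring described above can be sketched for a sequence of binary-outcome forecasts; this is an illustrative sketch in which the per-interval forecast and reference probabilities are assumed given.

```python
import math

def log_probability_gain(forecast_probs, reference_probs, outcomes):
    """Cumulative log probability gain of a forecast over a reference
    (e.g. Poisson background) model for a sequence of binary outcomes.
    Each term is the log-likelihood contribution of the forecast minus
    that of the reference; positive totals favour the forecast."""
    total = 0.0
    for p, p0, x in zip(forecast_probs, reference_probs, outcomes):
        lp = math.log(p) if x else math.log(1.0 - p)
        lp0 = math.log(p0) if x else math.log(1.0 - p0)
        total += lp - lp0
    return total
```

A forecast identical to the reference scores exactly zero; a forecast that sharpens the probabilities in the right direction scores positively, which is the sense in which the record of gains supports model comparison.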

4 Discussion and conclusions

Depending on time available and audience interests, I shall spend the remaining time extending the discussion of probability forecasts, their uses and assessment, or expanding on some other section of the previous discussion.

A Further Note on Båth's Law

David Vere-Jones1, Junko Murakami2 and Annemarie Christophersen2

1Victoria University and Statistical Research Associates, Wellington, New Zealand 2Victoria University, Wellington, New Zealand Corresponding Author: David Vere-Jones, [email protected]

Abstract: A simple model for the number and magnitudes of events in an aftershock sequence is explored. Two components are distinguished: the growth of observed aftershock numbers as the magnitude threshold is lowered, and the growth of aftershock numbers as the magnitude of the initiating event is increased. For a self-similar regime, the two growth rates should be the same. The distribution of the difference between the magnitude of the initiating event and that of the largest aftershock in the sequence is explored, and shown to provide a close approximation to Båth's law, as well as a simple form for the probability of an initiating event being followed by an aftershock of larger magnitude.

Introduction

The question of whether the so-called Båth's law has a physical basis, or is simply the consequence of basic statistical features of aftershock sequences, has remained controversial. In an early study, Vere-Jones (1969) pointed out that even without any physical basis, the distribution of the difference, say Δ, between the magnitudes of the largest and second largest events in an arbitrary sequence of events should be independent of the number of events in the sequence, thus suggesting some form of statistical origin for the law. Recent papers by Lombardi (2002) and Console et al. (2003) explored this possibility in much greater depth, and concluded it was still possible that some additional, physically-based component was needed to explain the observational results. Even more recently, Sornette and coworkers (see e.g. Helmstetter and Sornette (2003), Saichev et al. (2004), Zhuang (2005)) have made detailed studies of the generation of aftershocks in the ETAS model and established conditions under which Båth's law appears, while Shcherbakov et al. (2004, 2005) have proposed modified versions of the law from a more physical viewpoint. The present note explores what seems to us the simplest model for the numbers and magnitudes of the events in an aftershock sequence, and was prompted by the desire to interpret the modelling suggestions of some of the earlier papers in a simpler general setting.

Model description

We suppose first that aftershock magnitudes are independently and identically distributed above some threshold $M_0$ with common distribution

$$1 - F(M) = \Pr\{\mathrm{Mag} > M\} = e^{-\beta(M - M_0)} = 10^{-b(M - M_0)}; \qquad \beta = 2.30\,b. \quad (1)$$

(Note: we shall use logarithms to base $e$ in the remainder of the discussion, and hence $\beta$ rather than the more familiar $b$-value.) Denoting the magnitude of the initiating event by $M^*$, we suppose next that the expected number $\mu(M^*)$ of aftershocks with magnitudes over the threshold $M_0$ satisfies a scaling relation of the form

$$\mu(M^*) = A e^{\alpha(M^* - M_0)}; \qquad \log[\mu(M^*)] = \log A + \alpha(M^* - M_0). \quad (2)$$

Here $M_0$ plays the role of a scaling origin, and is not necessarily related to any catalogue completeness threshold, while $A$ represents the expected number of aftershock events with $M \ge M_0$ if the initiating event itself has magnitude $M_0$. From these assumptions alone it follows that, if we now take some higher threshold $M_c$ determined by considerations of catalogue completeness or other reasons, the expected number of events above $M_c$, given an initiating event of size $M^*$, is given by

$$\mu(M^* \mid M_c) = p\,\mu(M^*) = e^{-\beta(M_c - M_0)}\, A\, e^{\alpha(M^* - M_0)}, \quad (3)$$

since each of the initial events has the same probability $p = e^{-\beta(M_c - M_0)}$ of reaching the threshold $M_c$. Setting $\delta = \alpha - \beta$, we rewrite this in the form

$$\mu(M^* \mid M_c) = A\, e^{\beta(M^* - M_c)}\, e^{(\alpha - \beta)(M^* - M_0)} = A\, e^{\beta(M^* - M_c)}\, e^{\delta(M^* - M_0)}, \quad (4)$$

which shows that the expected number depends on two separate features: the parameter $\beta$ (or $b$) of the G-R law, and the discrepancy $\delta = \alpha - \beta$ of the two growth parameters.

If the threshold $M_c$ is chosen relative to $M^*$ itself, so that $M_c = M^* - D$, the equation becomes

$$\mu(M^* \mid M^* - D) = A\, e^{\beta D}\, e^{\delta(M^* - M_0)}. \quad (5)$$

From this form it is clear that the constraint $\alpha = \beta$ is essential if the aftershock process is to be self-similar in regard to the magnitude of the initiating event. It also provides a basis for treating an initial event as the foreshock of a larger event on a minimum of assumptions concerning the earthquake model, for setting $D = 0$ we obtain

$$\mu(M^* \mid M^*) = A\, e^{\delta(M^* - M_0)}, \quad (6)$$

which provides an approximation to the probability that an initiating event of magnitude $M^*$ generates an aftershock larger than itself.
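Under an additional Poisson assumption on the number of larger events (the same assumption used for illustration in the next section), equation (6) translates into a probability as in this small sketch; the function name and arguments are illustrative.

```python
import math

def prob_larger_aftershock(A, delta, m_star, m0):
    """Probability that an initiating event of magnitude m_star is followed
    by a larger aftershock, from equation (6): the expected number of larger
    events is mu = A*exp(delta*(m_star - m0)); under a Poisson assumption
    the probability of at least one such event is 1 - exp(-mu)."""
    mu = A * math.exp(delta * (m_star - m0))
    return 1.0 - math.exp(-mu)
```

For the self-similar case delta = 0 with A = exp(-2.3), this gives roughly the 10% chance quoted later in the text (more precisely 1 - exp(-A), about 0.095).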

Distribution of the discrepancy between $M^*$ and the largest aftershock

More detailed results can be obtained by inserting specific assumptions for the distribution of the number of aftershocks. Let $G_N(z) = E(z^N)$ be the probability generating function (p.g.f.) of the number $N \equiv N(M^*)$ of aftershock events with $M > M_0$ initiated by an event of magnitude $M^*$. Conditional on $N = n$, and assuming independent magnitudes following the G-R law, the distribution of the largest of these events is given by

$$\Pr(M_{\max} \le M \mid N(M^*) = n) = \left(1 - e^{-\beta(M - M_0)}\right)^n, \quad (7)$$

with the convention that $M_{\max} = M_0$ if $N = 0$. Taking expectations over $n$ then leads to the expression

$$F_{\max}(M) = G_N\left[1 - e^{-\beta(M - M_0)}\right] \quad (8)$$

for the overall distribution of the maximum. Specific examples can now be studied by selecting a particular form for the distribution of $N(M^*)$. One of the easiest to handle, which we use here for illustration, is to suppose $N$ has a Poisson distribution with parameter $\mu(M^*)$, in which case (8) becomes

$$\Pr(M_{\max} \le M \mid M^*) = e^{-\mu(M^*)\, e^{-\beta(M - M_0)}}. \quad (9)$$

Substituting for $\mu(M^*)$, we obtain for the distribution of $\Delta = M^* - M_{\max}$,

$$1 - F_\Delta(x) = \Pr(\Delta \ge x) = \Pr(M_{\max} \le M^* - x) = e^{-A\, e^{\delta(M^* - M_0)}\, e^{\beta x}}.$$

Note that the distribution is limited to the region $\Delta \le M^* - M_0$, with negative values corresponding to situations where the largest aftershock has a magnitude above that of the initiating event, and we need to take note of the convention $1 - F_\Delta(M^* - M_0) = \Pr\{M_{\max} = M_0\} = \Pr\{N(M^*) = 0\}$.

The density $f_\Delta(x)$ attains its maximum at $x = -\frac{1}{\beta}\left(\log A + \delta(M^* - M_0)\right)$; this mode is around 1 (i.e., the Båth's law value) when $\log A = -\delta(M^* - M_0) - \beta$. When $\delta = 0$ and $\beta = 2.3$, this gives $A = e^{-2.3} \approx 10\%$ for the chance of an initiating event being followed by a larger aftershock. Graphs of the density $f_\Delta(x)$ are shown in Figure 1 for a few choices of the parameters $A$, $\delta$ and $M_0$.
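The closed form for $\Pr(\Delta \ge x)$ can be checked by direct simulation; the sketch below draws $\Delta$ under the model assumptions (Poisson $N$ with mean $\mu(M^*)$, exponential magnitudes above $M_0$ as in equation (1)) using only the standard library.

```python
import math
import random

def simulate_delta(m_star, m0, A, alpha, beta, rng=random):
    """Draw one value of Delta = M* - M_max: N is Poisson with mean
    mu(M*) = A*exp(alpha*(M* - M0)) and aftershock magnitudes are
    M0 plus Exp(beta) draws; M_max = M0 by convention when N = 0."""
    mu = A * math.exp(alpha * (m_star - m0))
    # Poisson draw by inversion of the cumulative probabilities
    n, term, u = 0, math.exp(-mu), rng.random()
    cum = term
    while u > cum:
        n += 1
        term *= mu / n
        cum += term
    m_max = m0 if n == 0 else max(m0 + rng.expovariate(beta) for _ in range(n))
    return m_star - m_max
```

For the self-similar case $\alpha = \beta$ the empirical survivor function of the simulated $\Delta$ values agrees with $\exp(-A e^{\delta(M^* - M_0)} e^{\beta x})$ to within sampling error.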

[Figure 1: p.d.f. of $\Delta = M^* - M_{\max}$ with $N \sim \mathrm{Poi}(\mu(M^*))$; two panels ($\beta = 2.2$, left, and $\beta = 2.3$, right) with $M^* = 6.5$, $A = 0.075$, $M_c = 3.5$ and several choices of $\alpha$; legend values give $\mu = \mu(M_c \mid M^*)$.]

Discussion and Conclusions

The simple model described here can be compared with those given by Console et al. (2003) and Shcherbakov (2005). The difference in appearance between the distributions described here and those of Console et al. may be attributable to the smoothing effect introduced by conditioning on M ∗ and allowing an averaging over N(M ∗). The equations can also be considered a reinterpretation of the ideas put forward in the papers by Shcherbakov et al., with the emphasis shifted to variations in the distribution of N(M ∗) as explaining the behaviour of ∆ rather than vice-versa. Although the derivations are straightforward, and the tools are clearly well-known to the writers cited earlier, as far as we can ascertain, and somewhat surprisingly, they have not previously been presented as a unified model. In fact the model supplies a general basis for anticipating a phenomenon closely similar to B˚ath’slaw without the need for a special physical mechanism, and at the same time gives a simple, general formula which can be used for estimating the probability that a given event will be followed by larger aftershocks. We are currently examining the extent to which it is supported by real data, and checking for any insight the parameters A, M0, β and δ may provide into generating mechanisms.

References

Console, R., Lombardi, A.M. and Murru, M. (2003). Båth's law and the self-similarity of earthquakes. Journal of Geophysical Research, 108, No. B2, 2128, doi:10.1029/2001JB001651.
Felzer, K.R., Abercrombie, R.E., and Ekström, G. (2004). A common origin for aftershocks, foreshocks and multiplets. Bulletin of the Seismological Society of America 94, 88-98.
Helmstetter, A. and Sornette, D. (2003). Båth's law derived from the Gutenberg-Richter law and from aftershock properties. Preprint.
Lombardi, A.M. (2002). Probabilistic interpretation of Båth's law. Ann. Geophys. 45, 455-472.
Saichev, A. and Sornette, D. (2005). Distribution of the largest aftershocks in branching models of triggered seismicity: theory of Båth's law. (APS preprint)
Shcherbakov, R. and Turcotte, D.L. (2004). A modified form of Båth's law. Bulletin of the Seismological Society of America 94, 1968-1975.
Shcherbakov, R., Turcotte, D.L., and Rundle, J.B. (2004). Aftershock statistics. Pure and Applied Geophysics 162, 1051-1076.
Zhuang, J. and Ogata, Y. (2005). Properties of the probability distributions associated with the largest event in an earthquake cluster and their implications to foreshocks. (preprint)


A Study on the Limited Power Law Model

T. Wang1, L. Ma2, and Y. Li1

1School of Mathematical Sciences, Statistical Data Analysis Laboratory, Beijing Normal University, Beijing 100875, China 2Institute of Earthquake Science, China Earthquake Administration, Beijing 100036, China

Corresponding Author: T. Wang, [email protected]; [email protected]

Abstract: This paper modifies the ETAS model for the aftershock rate by using the Limited Power Law (LPL) formula in place of the modified Omori law. The parameters are estimated by the maximum likelihood method. The model is then applied to the Chi-Chi sequence and the resulting AIC values show that the new model performs better than the original ETAS model for the earliest aftershocks.

1. Introduction
From the Omori law (Omori, 1894) to the modified Omori law (Utsu, 1961), the aftershock decay rate has been well described as a power function (Zhuang and Ma, 2000). This provided the basis of the Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, 1988). The modified Omori law suggests that the aftershock decay rate is

$$\Lambda(t) = \frac{K}{(t + c)^p} \quad (1)$$

where $\Lambda(t)$ is the aftershock rate at time $t$; $K$ depends on the lower bound of the magnitude of aftershocks considered, but $c$ and $p$ are usually regarded as independent of this bound (Ogata, 1983). However, the earliest aftershocks are always less frequent than predicted by both the Omori law and the modified Omori law. Narteau et al. (2002) investigated a great number of aftershock sequences in a seismic region and proposed the Limited Power Law (LPL) to relate this phenomenon to stress characteristics in the aftershock zone, resulting in an aftershock rate of

Λ(t) = A [γ(q, λ_b t) − γ(q, λ_a t)] / t^q,    (2)

where A and q are constants and γ(ρ, x) = ∫_0^x τ^{ρ−1} e^{−τ} dτ is the lower incomplete Gamma function; λ_b = λ(s) describes the stress relaxation decay rate at the maximum overload s, under the assumption that the overload (stress minus strength) is uniformly distributed on [0, s], while λ_a is a characteristic rate associated with the threshold of crack growth (Narteau et al., 2002; Narteau et al., 2003). The present paper uses the LPL formula (2) in place of the modified Omori law (1) to model the rate of aftershocks in the ETAS model. The maximum likelihood method is used to estimate the parameters, and the model is applied to the Chi-Chi sequence to illustrate its performance.
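For orientation, the two decay laws (1) and (2) can be evaluated side by side. The sketch below is our own illustration (not code from the paper); it builds γ(ρ, x) from SciPy's regularized incomplete gamma and borrows the Table 1 parameter values purely as examples:

```python
import numpy as np
from scipy.special import gammainc, gamma as gamma_fn

def omori_rate(t, K=0.0638, c=0.0107, p=1.0841):
    """Modified Omori law, Eq. (1): Lambda(t) = K / (t + c)^p."""
    return K / (t + c) ** p

def lpl_rate(t, A=0.0257, q=1.0, lam_a=0.3579, lam_b=2820.69):
    """Limited Power Law, Eq. (2). SciPy's gammainc is the *regularized*
    lower incomplete gamma P(q, x), so multiply by Gamma(q) to recover
    the unregularized gamma(q, x) used in the text."""
    g = lambda x: gammainc(q, x) * gamma_fn(q)
    return A * (g(lam_b * t) - g(lam_a * t)) / t ** q

t = np.array([0.001, 0.01, 0.1, 1.0, 10.0])  # days after the main shock
# For q = 1 the LPL reduces to Eq. (5): A (e^{-lam_a t} - e^{-lam_b t}) / t
lpl_closed = 0.0257 * (np.exp(-0.3579 * t) - np.exp(-2820.69 * t)) / t
assert np.allclose(lpl_rate(t), lpl_closed)
```

Note how `lpl_rate` flattens toward a finite level as t → 0 while `omori_rate` keeps growing, which is exactly the early-aftershock deficit the LPL is meant to capture.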

2. ETAS model with LPL formula
The conditional intensity function of the original ETAS model (Ogata, 1988) is given by

λ(t | H_t) = μ + K Σ_{t_i < t} e^{α(M_i − M_c)} / (t − t_i + c)^p,  M_i ≥ M_c,    (3)

where μ is the background rate; K, c and p are the parameters corresponding to (1); α is the parameter “measuring the effect of magnitude in the production of aftershocks”; shocks with magnitude M_i occur at times t_i; and M_c is the magnitude threshold. Each earthquake produces an inhomogeneous field of increased stress in individual parts of a region in and around the earthquake source. This stress is then gradually released in the form of aftershocks. Under the same assumptions as in Shebalin (2004), we can use a stationary decay rate λ(σ_i) to describe this relaxation, where σ_i is the overload. The present paper is concerned with an exponential expression for the transition rates (Shebalin, 2004),

λ(σ_0) = λ_a e^{σ_0 / σ_a},    (4)

where σ_a > 0 is a scaling parameter. Thus, taking q = 1 (following Shebalin, 2004), (2) becomes

Λ(t) = A (e^{−λ_a t} − e^{−λ_b t}) / t.    (5)

Therefore, the conditional intensity function of the ETAS model is modified to

λ(t | H_t) = μ + A Σ_{t_i < t} [e^{α(M_i − M_c)} / (t − t_i)] (e^{−λ_a (t − t_i)} − e^{−λ_b (t − t_i)}),  M_i ≥ M_c.    (6)
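Equation (6) can be evaluated directly from an event history. The sketch below is our own illustration, with the Table 1 parameter estimates used as defaults purely for concreteness:

```python
import numpy as np

def etas_lpl_intensity(t, times, mags, mu=0.0150, A=0.0257,
                       alpha=0.4739, lam_a=0.3579, lam_b=2820.69, Mc=1.3):
    """Conditional intensity of the LPL-modified ETAS model, Eq. (6):
    background rate mu plus one LPL-shaped triggering term per past event."""
    past = times < t                       # only earlier shocks contribute
    dt = t - times[past]
    trig = (np.exp(alpha * (mags[past] - Mc)) / dt
            * (np.exp(-lam_a * dt) - np.exp(-lam_b * dt)))
    return mu + A * trig.sum()

# toy history: a large main shock followed by one aftershock
times = np.array([0.0, 0.5])               # days
mags = np.array([7.3, 4.0])
rate = etas_lpl_intensity(1.0, times, mags)
assert rate > 0.0150                       # triggering adds to the background
```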

3. Estimation of the parameters and the goodness-of-fit test
We use the maximum likelihood method (Ogata, 1983) to estimate the parameters μ, A, α, λ_a and λ_b from the observed aftershocks. For the modified ETAS model, the likelihood function is

L(μ, A, α, λ_a, λ_b) = { ∏_{j=1}^{n} λ(t_j | H_{t_j}) } exp{ −∫_{T_1}^{T_2} λ(t | H_t) dt },    (7)

where t_1, t_2, …, t_n are the times at which the aftershocks occur within the time interval [T_1, T_2]. Thus the log-likelihood function is

ln L(μ, A, α, λ_a, λ_b) = Σ_{j=1}^{n} ln( μ + A Σ_{t_i < t_j} [e^{α(M_i − M_c)} / (t_j − t_i)] (e^{−λ_a (t_j − t_i)} − e^{−λ_b (t_j − t_i)}) )
    − μ(T_2 − T_1) − A Σ_{t_i < T_2} e^{α(M_i − M_c)} ∫_{t_i}^{T_2} [1 / (t − t_i)] (e^{−λ_a (t − t_i)} − e^{−λ_b (t − t_i)}) dt.    (8)

To compare the fits we use Akaike's Information Criterion (Akaike, 1974),

AIC = −2 ln L + 2k,    (9)

where k is the number of parameters in the model. A smaller AIC indicates a better fit.
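For q = 1 the time integral in (8) has a closed form in terms of the exponential integral E1 (via the Frullani identity ∫_0^x (e^{−a u} − e^{−b u})/u du = ln(b/a) + E1(b x) − E1(a x)), which avoids numerical quadrature. The following sketch is our implementation, not the authors' code; parameter values are the Table 1 LPL estimates and the event history is a toy example:

```python
import numpy as np
from scipy.special import exp1

Mc = 1.3  # magnitude threshold used for the Chi-Chi data

def intensity(t, times, mags, mu, A, alpha, lam_a, lam_b):
    """LPL-ETAS conditional intensity, Eq. (6)."""
    dt = t - times[times < t]
    if dt.size == 0:
        return mu
    w = np.exp(alpha * (mags[times < t] - Mc))
    return mu + A * np.sum(w / dt * (np.exp(-lam_a * dt) - np.exp(-lam_b * dt)))

def log_lik(theta, times, mags, T1, T2):
    """Log-likelihood, Eq. (8); events assumed strictly inside [T1, T2)."""
    mu, A, alpha, lam_a, lam_b = theta
    ll = sum(np.log(intensity(tj, times, mags, mu, A, alpha, lam_a, lam_b))
             for tj in times)
    x = T2 - times
    # closed-form integral of each triggering term (Frullani + E1)
    trig_int = np.log(lam_b / lam_a) + exp1(lam_b * x) - exp1(lam_a * x)
    return ll - mu * (T2 - T1) - A * np.sum(np.exp(alpha * (mags - Mc)) * trig_int)

def aic(theta, times, mags, T1, T2):
    """AIC, Eq. (9): smaller is better."""
    return -2.0 * log_lik(theta, times, mags, T1, T2) + 2 * len(theta)

theta = (0.0150, 0.0257, 0.4739, 0.3579, 2820.69)   # Table 1 LPL estimates
times = np.array([0.10, 0.40, 0.90])                # toy aftershock times (days)
mags = np.array([7.3, 4.5, 4.0])
ll = log_lik(theta, times, mags, T1=0.0, T2=1.0)
```

In practice the estimates in Table 1 would be obtained by maximizing `log_lik` over θ with a numerical optimizer.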

4. Application to the Chi-Chi sequence
The data are taken from SSLib (Harte, 1998) in the region with latitude between 23.3N and 24.24N and longitude between 120.2E and 121.2E, during the period 1999.09.20 to 2001.03.31. This includes the well-known Chi-Chi sequence, whose main shock occurred on September 20, 1999 with magnitude 7.3. The Gutenberg-Richter law is used to determine the cut-off magnitude, which is M1.3 (Liu and Ma, 2004). We select the first 150 aftershocks following the main shock to examine the performance of the ETAS model using the LPL formula and compare it with the original ETAS model. The parameter estimates (Healy et al., 1997; Ma and Vere-Jones, 1997) and the AIC values are shown in Table 1, from which we see that the ETAS model using the LPL formula fits the earliest aftershocks better than the original ETAS model.

Table 1. The parameter estimates and AIC values for the original ETAS model, and the ETAS model using LPL formula.

M_c ≥ 1.3    μ        K (A)       α        c (λ_a)     p (λ_b)        AIC
Original     0.0150   0.0638      1.4855   0.0107      1.0841         -1055.232
LPL          0.0150   0.0257      0.4739   0.3579      2820.6899      -1067.752

5. Conclusion
Since the earliest aftershocks are always less frequent than predicted by the modified Omori law, for purely physical reasons of aftershock deficiency, and since the LPL formula performs better than the Omori law for the earliest aftershocks, this paper replaces the Omori law in the original ETAS model with the LPL formula. The analysis of the Chi-Chi sequence shows that the modified ETAS model using the LPL formula describes the decay of the earliest aftershocks much better than the original ETAS model. Further work comparing the two models on more sequences is needed to strengthen this conclusion. It would also be worthwhile to investigate the relationship between the parameters and strong aftershocks in different periods.

6. Acknowledgement
The authors would like to thank Mark Bebbington for patiently reviewing this paper and providing many helpful suggestions. We are grateful to Hengjian Cui, P. N. Shebalin, Jianping Huang, Wenbing Liu, Shengjun Dang, Shaochuan Lv, Jiao Jin, Yuan Zhou and all others who have shared their knowledge and suggestions with this research. The work was initiated with the assistance of grant No. 10371012 of the National Natural Science Foundation of China. This research was also partially supported by NSFC programs No. 40574020 and No. 40134010. All this financial assistance is gratefully acknowledged. All calculations were done with the software package SSLib.

7. References
Akaike, H. (1974), A New Look at the Statistical Model Identification, IEEE Trans. on Automatic Control, 19, pp. 716-723.
Deng, Y. L., Liang, Z. S. (1992), Stochastic Point Process and Its Application (in Chinese), Science Press.
Harte, D. (1998), Documentation for the Statistical Seismology Library, School of Mathematical and Computing Sciences, VUW, Research Report, 10.
Healy, J. H., Keilis-Borok, V. I., Lee, W. H. K. (1997), IASPEI Software Library, Volume 6.
Liu, W. B., Ma, L. (2004), The Analysis Between Two Seismic Sequences of Chi-Chi, Taiwan and Datong, North China With the Quantitative Model (in Chinese with English abstract), Earthquake, 24, pp. 155-162.
Ma, L., Vere-Jones, D. (1997), Application of M8 and Lin-Lin Algorithms to New Zealand Earthquake Data, New Zealand J. Geology and Geophys., 40, pp. 77-89.
Narteau, C., Shebalin, P., Hainzl, S., Zoller, H., Holschneider, M. (2003), Emergence of a Band-limited Power Law in the Aftershock Decay Rate of a Slider-block Model, Geophys. Res. Letters, 30, No. 11, 1568, doi:10.1029/2003GL017110.
Narteau, C., Shebalin, P., Holschneider, M. (2002), Temporal Limits of the Power Law Aftershock Decay Rate, J. Geophys. Res., 107, No. 0, XXXX, doi:10.1029/2002JB001868.
Ogata, Y. (1983), Estimation of the Parameters in the Modified Omori Formula for Aftershock Frequencies by the Maximum Likelihood Procedure, J. Phys. Earth, 31, pp. 115-124.
Ogata, Y. (1988), Statistical Models for Earthquake Occurrences and Residual Analysis for Point Processes, J. Amer. Statist. Assoc., 83, pp. 9-27.
Omori, F. (1894), On Aftershocks of Earthquakes, J. Coll. Sci. Imp. Univ. Tokyo, 7, pp. 11-200.
Shebalin, P. N. (2004), Aftershocks as Indicators of the State of Stress in a Fault System, Doklady Earth Sciences, 398, No. 7, pp. 978-982.
Utsu, T. (1961), Statistical Study on the Occurrence of Aftershocks, Geophys. Mag., 30, pp. 521-605.
Zhuang, J. C., Ma, L. (2000), Main Shock and Aftershock -- From the Omori Law Formula to the ETAS Model (in Chinese with English abstract), Recent Development in World Seismology, 5, pp. 12-18.

Towards an European Framework for Testing Earthquake Forecasts

S. Wiemer1, M. Cocco2, G. Gruenthal3, I. Main4, W. Marzocchi5, M. Gerstenberger6 and D. Schorlemmer1

1ETH, Zurich, Switzerland. 2INGV, Rome, Italy. 3GFZ Potsdam, Germany. 4University of Edinburgh, UK. 5INGV, Bologna, Italy. 6GNS, Wellington, New Zealand.

Corresponding author: S. Wiemer: [email protected]

Abstract Efforts are currently under way to establish community accepted prospective tests of earthquake likelihood models for California, Europe, New Zealand as well as global models. Coordinating these initiatives into a sustainable network for model testing, spanning a multitude of spatial and temporal domains, is highly desirable. I will focus on ongoing efforts in Europe but promote the

1. Introduction
Seismologists' primary societal contribution over the past 30 years has been the development of detailed seismic hazard maps, used as input for seismic engineering. Such maps portray, for example, the expected average level of ground shaking for a given time period, largely based on empirical observations of past seismicity and fault activity. Current earthquake hazard models (e.g., Field et al., 1999; Giardini et al., 1999; Jiménez et al., 2003) are mostly static; they do not change with time unless exchanged every few years for a new generation of maps. Seismologists, however, increasingly realize that in order to respond to changing societal needs and emerging capabilities in Earth science, a new system-level, physics-based and increasingly time-dependent approach to hazard assessment is needed.

To make such a development possible in the European and Mediterranean region, a sustainable and integrated approach to earthquake-forecasting research in the Euro-Med region needs to be developed. It is urgently needed in order to advance seismic hazard assessment and earthquake forecasting research, and to participate in and benefit from similar research activities being planned right now for California by the US Geological Survey and the Southern California Earthquake Center. Key components are:

1) An integrated approach to hazard and forecasting spanning a multitude of spatial and temporal scales (static hazard maps – time-dependent hazard mapping – earthquake forecasting – earthquake prediction);
2) Development of community-accepted testing and model validation;
3) Development of a suitable cyber-infrastructure that satisfies the computational and information access needs of the research and user community.

We have recently received funding from the EU to create a sustainable community based testing and modeling environment, networking a diverse set of scientists and users in Europe, and hopefully rejuvenating research into earthquake forecasting.

2. Time-dependent earthquake hazard assessment

The next frontier in hazard assessment is to move away from static (stationary Poissonian) models towards increasingly time-dependent ones, based on validated statistical or physical models. This transition, however, is a perilous one, as opinions on the applicability of time-dependent models vary widely in the seismological community. A number of such models already exist in Europe and elsewhere (e.g., www.relm.org); however, none is generally accepted and used in hazard assessment. Therefore, future efforts need to be built on the principles of community based testing and validation of models, instead of individual researchers designing their own tests (Schorlemmer et al., 2005).

For Europe, we have formed a collaboratory to facilitate such testing, which will work closely with the California focussed RELM (Regional Earthquake Likelihood Model) and CSEP (Collaboratory for the study of Earthquake Predictability) efforts in the US. In Europe, we plan to develop two major components, as outlined below.

2.1 A Euro-Med clustering model

We will develop a reference time-dependent earthquake model for the entire Euro-Med region, a model that reflects the known fact that earthquakes cluster in space and time. A statistical time-dependent earthquake hazard model will be created that takes the contributions of aftershocks, potential foreshocks and earthquake swarms into account in hazard computations. Their contribution to short-term hazard exceeds the hazard from the stationary background in some locations by orders of magnitude for periods of days to decades. The group at ETHZ, in partnership with the USGS, has recently developed such a model for California (http://pasadena.wr.usgs.gov/step/; Figure 1). This model (Gerstenberger et al., 2004; Gerstenberger et al., 2005) is, we believe, the most accurate seismicity forecast model for California to date and thus the null hypothesis for all other proposed models. The first similar models for other regions of the world are under development (e.g., Japan, Figure 2).

Other time-dependent models that need to be implemented include, specifically, the Epidemic Type Aftershock Sequence model (ETAS; Helmstetter et al., 2004; Helmstetter and Sornette, 2003; Ogata, 1999), Coulomb-based rate-and-state models (Parsons et al., 2000; Stein, 2003), strain-model-based approaches (Jackson et al., 1997), seismicity-based models (Kagan and Jackson, 2000; Ogata, 2004; Schorlemmer et al., 2004; Wiemer and Schorlemmer, 2005; Wyss and Wiemer, 2000), M8 (Kossobokov et al., 1999) and many others. These types of models provide an important resource for emergency response workers and decision makers after a mainshock, a comprehensive platform for communicating time-dependent hazard to the public, and a benchmark for seismologists in the development of more sophisticated models.

Figure 1: Snapshot of the time-dependent hazard model for California http://pasadena.wr.usgs.gov/step. Portrayed is the probability of experiencing a modified Mercalli Intensity of VI or more on Nov. 14, 2005.

2.2 A Euro-Med platform for testing earthquake forecast models
Validation and quantitative hypothesis testing are key to advancing earthquake science (Geller et al., 1997; Jackson, 1996; Mulargia, 1997; Wyss, 1997). Any hazard map or earthquake forecast is a likelihood model and should be rigorously tested against future seismicity. We will create a new type of environment for forecasting experiments: a virtual, distributed laboratory with an infrastructure adequate to support a European and global program of research on earthquake forecasting and predictability, based on the following principles:
- Rigorous, community-accepted procedures for registering forecasting procedures, including the delivery and maintenance of documented code for making forecasts.
- Community-endorsed standards for assessing probabilistic forecasts, including probability gains relative to well-defined reference procedures.
- Access to data sets and monitoring products, certified by the agencies that produce them, for use in calibrating procedures and testing models.
- Software support to allow individual researchers and groups to participate in the framework and access testing results.
The EU testing center will be set up in close collaboration with SCEC and CSEP, using the same testing algorithms (Schorlemmer et al., 2005). The models to be tested, however, will be region specific.
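Probability gains relative to a reference procedure are typically computed from a joint Poisson likelihood of the observed event counts over space-magnitude bins, as in the RELM-style tests of Schorlemmer et al. (2005). The sketch below is a generic illustration of that scoring, not the testing-center code; the function names and inputs are ours:

```python
import numpy as np
from scipy.stats import poisson

def log_likelihood(forecast, observed):
    """Joint Poisson log-likelihood of observed counts per space-magnitude
    bin under a gridded rate forecast (bin layout is illustrative)."""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed)
    return poisson.logpmf(observed, forecast).sum()

def probability_gain(model_ll, reference_ll, n_events):
    """Average probability gain per earthquake of a model over a reference."""
    return np.exp((model_ll - reference_ll) / n_events)

# toy example: three bins, expected rates vs. observed counts
fc = np.array([0.5, 0.5, 1.0])
obs = np.array([0, 1, 2])
ll = log_likelihood(fc, obs)
```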

Figure 2: Same time-dependent model as shown in Figure 1, developed for Japan.

3. Conclusion and Outlook
The next three years are, in our opinion, a critical period for advancing research into earthquake forecasting and the predictability of earthquakes. Because of several funded or proposed projects, seismologists have a window of opportunity to create a widely accepted and sustainable framework for testing earthquake forecasts. Coordination, however, is critical to ensure that the testing algorithms and 'rules of the game' applied in regional testing centers are identical or at least compatible, in essence creating a single virtual testing center.

Within Europe, we strive to create a first generation of time-dependent forecast and hazard models within the next two years. We also will create a first testing center implementation and invite modelers to contribute their specific models for the center.

4. Acknowledgement
We thank D. Giardini, N. Field, L. Jones, T. Jordan and M. Wyss for stimulating discussions and input. We thank the ANSS and Japan Meteorological Agency for providing data used to construct the time-dependent models shown in Figures 1 and 2.

References
Field, E.H., Jackson, D.D., and Dolan, J.F., 1999, A mutually consistent seismic-hazard source model for southern California: Bulletin of the Seismological Society of America, v. 89, p. 559-578.
Geller, R.J., Jackson, D.D., Kagan, Y.Y., and Mulargia, F., 1997, Earthquakes cannot be predicted: Science, v. 275, p. 1616-1617.
Gerstenberger, M., Wiemer, S., and Jones, L., 2004, Real-time forecasts of tomorrow's earthquakes in California: a new mapping tool: United States Geological Survey Open-File Report, v. 2004-1390.
Gerstenberger, M.C., Wiemer, S., Jones, L.M., and Reasenberg, P.A., 2005, Real-time forecasts of tomorrow's earthquakes in California: Nature, v. 435, p. 328-331.
Giardini, D., Grunthal, G., Shedlock, K.M., and Zhang, P.Z., 1999, The GSHAP Global Seismic Hazard Map: Annali Di Geofisica, v. 42, p. 1225-1230.
Helmstetter, A., Hergarten, S., and Sornette, D., 2004, Properties of foreshocks and aftershocks of the nonconservative self-organized critical Olami-Feder-Christensen model: Physical Review E, v. 70.
Helmstetter, A., and Sornette, D., 2003, Predictability in the epidemic-type aftershock sequence model of interacting triggered seismicity: Journal of Geophysical Research-Solid Earth, v. 108.
Jackson, D.D., 1996, Hypothesis testing and earthquake prediction: Proceedings of the National Academy of Sciences of the United States of America, v. 93, p. 3772-3775.
Jackson, D.D., Shen, Z.K., Potter, D., Ge, X.B., and Sung, L.Y., 1997, Southern California deformation: Science, v. 277, p. 1621-1622.
Jiménez, M.-J., Giardini, D., and Grünthal, G., 2003, The ESC-SESAME unified hazard model for the European-Mediterranean region: EMSC/CSEM Newsletter, v. 19, p. 2-4.
Kagan, Y., and Jackson, D., 2000, Probabilistic forecasting of earthquakes: Geophysical Journal International, v. 143, p. 438-453.
Kossobokov, V.G., Romashkova, L.L., Keilis-Borok, V.I., and Healy, J.H., 1999, Testing earthquake prediction algorithms: statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997: Physics of the Earth and Planetary Interiors, v. 111, p. 187-196.
Mulargia, F., 1997, Retrospective validation of the time association of precursors: Geophysical Journal International, v. 131, p. 500-504.
Ogata, Y., 1999, Seismicity analysis through point-process modeling: a review: Pure and Applied Geophysics, v. 155, p. 471-507.
Ogata, Y., 2004, Seismicity quiescence and activation in western Japan associated with the 1944 and 1946 great earthquakes near the Nankai trough: Journal of Geophysical Research-Solid Earth, v. 109.
Parsons, T., Toda, S., Stein, R.S., Barka, A., and Dieterich, J.H., 2000, Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation: Science, v. 288, p. 661-665.
Schorlemmer, D., Gerstenberger, M., Jackson, D., and Wiemer, S., 2005, Earthquake likelihood model testing: Seism. Res. Lett., submitted.
Schorlemmer, D., Wiemer, S., Wyss, M., and Jackson, D.D., 2004, Earthquake statistics at Parkfield: 2. Probabilistic forecasting and testing: Journal of Geophysical Research-Solid Earth, v. 109.
Stein, R.S., 2003, Earthquake conversations: Scientific American, v. 288, p. 72-79.
Wiemer, S., and Schorlemmer, D., 2005, ALM: the asperity-based likelihood model: Seism. Res. Lett., submitted.
Wyss, M., 1997, Second round of evaluations of proposed earthquake precursors: Pure and Applied Geophysics, v. 149, p. 3-16.
Wyss, M., and Wiemer, S., 2000, Change in the probability for earthquakes in Southern California due to the Landers magnitude 7.3 earthquake: Science, v. 290, p. 1334-1338.

The 4th International Workshop on Statistical Seismology (Statsei4)

Correlating properties of aftershock sequences with earthquake physics

J.Woessner1, S. Wiemer2, S. Toda3

1California Institute of Technology, 1200 E. California Blvd, Pasadena, CA, 91125, USA 2ETH Zurich, Swiss Seismological Service, Schafmattstr. 30, HPP P4, 8093 Zurich, Switzerland 3Active Fault Research Center, Geological Survey of Japan, Higashi 1-1-1, Tsukuba, 305-8767, Japan

Corresponding Author: J. Woessner, [email protected]

Abstract: We investigate the spatial and temporal properties of aftershock sequences in the immediate vicinity of seismogenic faults to improve the understanding of the link between main shock slip, resulting stress changes, effects on the local stress field and aftershock occurrence. The results obtained for several Californian earthquake sequences demonstrate 1) that the largest heterogeneities in the stress field and rotations of the maximum compressive stress axis are in general concentrated in areas of highest coseismic slip, which is in agreement with the heterogeneous post-seismic stress field hypothesis (HPSSF) and 2) that the heterogeneity of the stress tensor decreases with time on a shorter time-scale than the aftershock sequence decays. We present a conceptual Coulomb stress transfer model that partly explains the results of our observational study.

1. Introduction
Aftershock zones present an ideal environment for studying geophysical mechanisms that influence the earthquake size distribution (Gutenberg and Richter, 1944) and the temporal evolution of aftershock sequences (Utsu, 1961), because of their high seismic activity and the sudden changes in the stress field caused by a main shock. The stress field orientation can be determined using focal mechanisms (Michael, 1984; Michael, 1987), and the local stress field heterogeneity can be inferred from the misfit angle ß to an assumed homogeneous background stress tensor. Several studies have documented strong temporal and spatial heterogeneity of b-values, p-values, and the post-seismic stress field within aftershock sequences of large events such as Loma Prieta, Landers, Hector Mine, and Denali (Michael et al., 1990; Wiemer et al., 2002; Ratchkovski et al., 2004). To improve our understanding of the link between main shock slip, resulting stress changes and aftershock occurrence, we investigate the fine-scale properties of aftershock sequences. We compare the results of the aftershock sequence analysis to parameters of stress tensor inversions. Focusing on the immediate vicinity of the earthquake rupture allows us to evaluate the heterogeneous post-seismic stress-field hypothesis (HPSSF; Michael et al., 1990).

2. Results
2.1 Spatial comparison: the 1992 Landers earthquake
We spatially and temporally map the parameters of the frequency-magnitude distribution, the modified Omori law, the stress tensor heterogeneity ß and the orientation of the maximum compressive stress axis S1 for the Landers aftershock sequence. We analyse the seismicity in map view (Figure 1) and projected onto cross-sections according to the finite-fault source model of Wald and Heaton (1994). Throughout the Landers rupture zone, the misfit values are relatively high (ß ≥ 40°), which emphasizes the disturbance of the local stress field due to the large earthquake. In agreement with Hauksson (1992), the stress tensor

heterogeneity measure ß increases from south to north, with the largest misfits found along the Emerson fault and the cross-over between the Emerson and Homestead Valley faults, where the largest slip also occurred (Figure 1). The superimposed lower-hemisphere stereonet plots illustrate the differences in stress tensor heterogeneity for two locations, one along the Emerson fault in a region of complex fault structure and one along the Johnson Valley fault with a simpler fault system. A similar spatial b-value pattern is obtained: high b-values (b ≥ 1.1) in the northern section (Camp Rock/Emerson fault) and lower b-values along the Homestead and Johnson Valley faults, decreasing southward. The same results are obtained for detailed mapping of these parameters along the cross-section. Similar complexity differences are found for various other Californian earthquakes.

Figure 1: a) Misfit angle ß (colored background) superimposed with the orientation of the maximum compressive stress axis S1 (black lines) determined from stress tensor inversion, and b) b-value map of the first year of the aftershock sequence of the 28 June 1992 Landers earthquake. Example stereonet plots of the stress tensor inversion show low and high heterogeneity, and two frequency-magnitude distributions illustrate low and high b-values.

In summary, we find evidence for a correspondence between high slip and high stress-heterogeneity values, especially when considering the uncertainties in the slip distribution. Throughout the main shock rupture zone, we find ß-values above 45° in support of the HPSSF-hypothesis. ß- and b-values are positively correlated. However, a relation to the geometrical complexity of the fault system, or a more fundamental relation of the b-value to differential stresses, cannot be excluded with this approach.
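The b-values mapped here come from the Gutenberg-Richter frequency-magnitude distribution. The abstract does not say which estimator was used; a standard choice, shown purely as an illustration and not as the authors' method, is the Aki/Utsu maximum-likelihood formula with the usual half-bin correction for binned magnitudes:

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= mc.
    dm is the magnitude binning width; use dm = 0 for continuous magnitudes."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# synthetic Gutenberg-Richter sample with b = 1 (continuous magnitudes)
rng = np.random.default_rng(0)
mags = 1.0 + rng.exponential(1.0 / np.log(10.0), 20000)
b_hat = b_value_mle(mags, mc=1.0, dm=0.0)
```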

2.2 Temporal dependence of the stress tensor heterogeneity ß
The HPSSF-hypothesis predicts a sudden increase of the stress tensor heterogeneity ß in the immediate vicinity of the main shock rupture following the main shock. We determine the temporal evolution of ß(t) under the hypothesis that the stress field recovers with time back to its level before the main shock. All time series investigated demonstrate that the misfit angle decays rapidly from the observed high ß-values shortly after the main shock, and that the decay then slows down (Figure 2). This suggests that the stress regime recovers from the stress anomaly as the frequency of aftershocks decreases, and hints at a causal link between the decay of the anomaly and the occurrence of aftershocks. We estimated the aftershock duration of the Coalinga sequence: the background rate is reached again about 22.07 years after the 1983 mainshock (17.18-27.20 years when considering the uncertainties in the modified Omori law parameters, determined by a bootstrap approach). Accordingly, even considering uncertainties in the background rates, the aftershock duration would be five to ten times longer than the time it takes the ß-value to decrease to around 37°, the state before the main shock (Figure 2), which is on the order of 2 to 4 years.

Figure 2: Temporal dependence of the stress tensor misfit angle ß for the 1983 Coalinga and the 1989 Loma Prieta earthquakes. Stereonet plots show stress conditions before and after these events.
In summary, the decay times of the ß-values are much shorter than the aftershock sequence durations. Remarkably, a similar time dependence is seen for the rotation of the S1 axis (Michael, 1987); the author argues that the return to the previous state of stress due to elastic rebound may already have been completed 2 years after the Coalinga event. The temporal stress anomaly and its decay with time could also be attributed to elastic creep. Unfortunately, the time scales to recover the stress field and to return to the background rate are largely different, and for most events the elapsed time since the main shock is sufficient.

3. Conclusion and Outlook
We propose a conceptual model based on modeling the stress rotations resulting from stress transfer in a Coulomb model. We calculated coseismic rotations of the principal stress axes in an elastic half space under the assumption of a uniaxial N7°E compression with varying regional stress levels. Qualitatively, we obtain good agreement between the observed stress tensor axis rotations and the modeled rotations when assuming a 30-50 bar uniform regional stress field (Figure 3), in support of the HPSSF-hypothesis. In this case, the Landers earthquake would locally cause a complete stress drop, triggering a diverse set of aftershocks and resulting in a temporarily heterogeneous stress field. Our conceptual model not only bears the potential of modeling the stress rotations following a main shock, but may also be used to estimate the absolute state of stress in the Earth's crust in a quantitative study, and possibly to provide constraints on the time decay of the stress tensor heterogeneity. In addition, including a variable background stress field or different stress loading mechanisms in the conceptual model, e.g., tectonic loading, viscoelastic relaxation, or fluid flow, might also lead to a better understanding of the stress field prior to the occurrence of strong earthquakes.

4. Acknowledgement
We thank P. M. Mai, E. Hauksson, and D. Schorlemmer for stimulating discussions. We also thank the Southern California Earthquake Data Center (SCEDC) and the Northern California Earthquake Data Center (NCEDC) for providing the catalogs of the Southern and Northern California Seismic Networks (SCSN, NCSN).

Figure 3: Conceptual model: orientation of S1 prior to and after (red) the Landers, Big Bear and Joshua Tree main shocks, resolved on optimally oriented planes (depth of 8 km). Rotations are calculated for a 50 bar uniaxial background stress field (N7°E) using the slip distribution of Wald and Heaton (1994). The zoom-in highlights rotations in regions of highest slip during the 1992 Landers earthquake.

5. References
Gutenberg, B. and Richter, C. F. (1944). Frequency of earthquakes in California. Bull. Seism. Soc. Am. 34: 185-188.
Hauksson, E. (1992). State of stress from focal mechanisms before and after the 1992 Landers earthquake sequence. Bull. Seism. Soc. Am. 84: 917-934.
Michael, A. J. (1984). Determination of stress from slip data: faults and folds. J. Geophy. Res. 89: 11517-11526.
Michael, A. J. (1987). Stress rotation during the Coalinga aftershock sequence. J. Geophy. Res. 92: 7963-7979.
Michael, A. J. (1987). Use of focal mechanisms to determine stress. J. Geophy. Res. 92: 357-368.
Michael, A. J., Ellsworth, W. L., et al. (1990). Coseismic stress changes induced by the 1989 Loma Prieta, California earthquake. Geophys. Res. Let. 17: 1441-1444.
Ratchkovski, N. A., Wiemer, S., et al. (2004). Seismotectonics of the Central Denali Fault, Alaska, and the 2002 Denali Fault earthquake sequence. Bull. Seism. Soc. Am. 94: S156-S174.
Wiemer, S., Gerstenberger, M. C., et al. (2002). Properties of the 1999, MW 7.1, Hector Mine earthquake: implications for aftershock hazard. Bull. Seismol. Soc. Am. 92: 1227-1240.
Utsu, T. (1961). A statistical study of the occurrence of aftershocks. Geophys. Mag. 3: 521-605.
Wald, D. J. and Heaton, T. H. (1994). Spatial and temporal distribution of slip for the 1992 Landers, California, earthquake. Bull. Seism. Soc. Am. 84: 668-691.

Enhancement and diffusion of seismic activity around major intraplate earthquakes in Japan

A. Yoshida1 and G. Aoki 2

1National Institute of Polar Research, Itabashi-ku, Tokyo 173-8515, Japan 2Sendai District Meteorological Observatory, Japan Meteorological Agency, Miyagino-ku, Sendai 983-0842, Japan

Corresponding Author: A. Yoshida, [email protected]; [email protected]

Abstract: The spatio-temporal characteristics of large earthquake occurrence around disastrous intraplate earthquakes in Japan are investigated statistically by stacking data on the time lags and epicentral distances of earthquakes with M≧5 from the origin times and foci of earthquakes of M≧6 since 1926, when a modern seismograph network was deployed in the Japanese islands. It is found that the occurrence rate of large earthquakes is increased significantly in a rather broad area around the foci of disastrous earthquakes. The decrease of the enhanced activity in the proximal area near the foci can be described by the revised Omori law. It is shown further that the area of enhanced seismic potential spreads outward, with a diffusion velocity estimated at about 10 km/year.

1. Introduction
Preceding studies on the spatio-temporal distribution of large intraplate earthquakes in Japan have shown that it is very rare for no other earthquake to be found within an epicentral distance of 100 km and a time interval of 10 years of any given event (Yoshida and Ito, 1995; Yoshida and Takayama, 1993). Enhancement or successiveness of the occurrence of large earthquakes is often observed (e.g., Yoshida, 198; Kagan and Knopoff, 1976; Kagan and Jackson, 1991). The objective of this study is to investigate such spatio-temporal characteristics of large earthquakes in the Japanese islands quantitatively.

2. Data We use the JMA (Japan Meteorological Agency) catalog since 1926, when a modern seismograph network was deployed in the Japanese islands. From the catalog we make a file of shallow (depth ≤ 40 km) intraplate earthquakes of M5 or larger and decluster the file by removing events that occurred within a time period of 100 days and within an epicentral distance of 50 km of each other, except for the largest one in each cluster.
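This window-based declustering step can be sketched as follows (a minimal illustration, not the authors' code; the event fields, the flat-Earth distance approximation, and the transitive linking of windowed pairs into clusters are our assumptions):

```python
import math

def decluster(events, t_window=100.0, r_window=50.0):
    """Keep only the largest event from each space-time cluster.

    events: list of dicts with keys 't' (days), 'lat', 'lon' (deg), 'mag'.
    Two events are linked if they lie within t_window days and r_window km
    of each other; linked components (union-find) form clusters.
    """
    def dist_km(a, b):
        # crude flat-Earth approximation, adequate for ~50 km windows
        dy = (a['lat'] - b['lat']) * 111.0
        dx = (a['lon'] - b['lon']) * 111.0 * math.cos(math.radians(a['lat']))
        return math.hypot(dx, dy)

    n = len(events)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if (abs(events[i]['t'] - events[j]['t']) <= t_window
                    and dist_km(events[i], events[j]) <= r_window):
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(events[i])
    return [max(c, key=lambda e: e['mag']) for c in clusters.values()]
```

Applied to a catalog, this returns one representative (the largest event) per cluster plus all isolated events.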

3. Analysis First we stack the distributions of M5 or larger earthquakes around the foci of earthquakes of M6 or larger. Next we calculate the normalized occurrence frequency in each doughnut-shaped zone centered on the focus of an M ≥ 6 earthquake, by dividing the number of earthquakes in each 10 km-wide zone by its area. Last, we average the normalized frequencies in two adjacent zones to obtain the smoothed occurrence rate at distances of 10 km, 20 km, 30 km, and so on. For example, the smoothed frequency at a distance of 30 km is the average of the normalized frequency in the doughnut zone from 20 to 30 km and that from 30 to 40 km.
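The annulus normalization and two-ring smoothing described above can be sketched as follows (a hypothetical helper; the input is the stacked list of epicentral distances, and the ring width and maximum distance are the values used in the text):

```python
import math

def smoothed_rates(distances_km, ring_width=10.0, max_dist=100.0):
    """Area-normalized occurrence frequency per annulus, then
    two-ring smoothing.

    distances_km: stacked epicentral distances of M>=5 events from
    M>=6 foci. Returns {distance: smoothed rate}, where the value at
    distance r is the average of the area-normalized frequencies of
    the rings (r - w, r) and (r, r + w)."""
    n_rings = int(max_dist / ring_width)
    norm = []
    for k in range(n_rings):
        lo, hi = k * ring_width, (k + 1) * ring_width
        count = sum(lo <= d < hi for d in distances_km)
        area = math.pi * (hi ** 2 - lo ** 2)   # annulus area, km^2
        norm.append(count / area)
    out = {}
    for k in range(1, n_rings):
        out[k * ring_width] = 0.5 * (norm[k - 1] + norm[k])
    return out
```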

4. Result It was found that the normalized occurrence frequency of M ≥ 5 earthquakes increases significantly in a rather broad area around the foci of M ≥ 6 earthquakes. The increase is most conspicuous in the area near the focus. The decay of the enhanced occurrence rate can be described by the revised Omori's law, with a p-value of about 1.4 when the change in the number of earthquakes per year is fitted to the formula. The peak time of the increased occurrence rate is delayed for distant areas: the more distant the area, the larger the delay time, showing the diffusive spread of the enhanced activity outward. The diffusion velocity is estimated at about 10 km/year.
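A decay of the revised Omori type, rate(t) = K/(t + c)^p, can be fitted to yearly counts with a crude log-log regression; the sketch below is illustrative only (holding c fixed rather than estimating it jointly is our simplification):

```python
import math

def omori_rate(t, K, c, p):
    """Revised (modified) Omori law: rate of triggered events at time t."""
    return K / (t + c) ** p

def fit_p(times, counts, c=0.1):
    """Crude estimate of the Omori p-value from yearly event counts:
    least-squares slope of log(count) against log(t + c), with c held
    fixed for simplicity. p is minus the slope."""
    xs = [math.log(t + c) for t in times]
    ys = [math.log(n) for n in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope
```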

5. References Kagan, Y. and Knopoff, L. (1976), Statistical search for non-random features of the seismicity of strong earthquakes, Phys. Earth Planet. Inter. 12, 291-348. Kagan, Y. and Jackson, D. D. (1991), Long-term earthquake clustering, Geophys. J. Int. 104, 117-133. Yoshida, A. (1995), Characteristic space-time patterns in seismic activity in the northwest Chubu district of Honshu Island, Japan, and the 1984 Nagano-ken Seibu earthquake, Tectonophysics 167, 93-102. Yoshida, A. and Ito, H. (1995), Precursory activity for intraplate large earthquakes, J. Geography 104, 544-550. Yoshida, A. and Takayama, H. (1993), Seismic activity around the focal region before and after occurrence of large intraplate earthquakes, J. Seismol. Soc. Japan 46, 17-24 (in Japanese with English abstract).

Comparative testing of alarm-based earthquake prediction strategies

Jeremy D. Zechar and Thomas H. Jordan

University of Southern California, Department of Earth Sciences, Los Angeles, CA 90089 USA

Corresponding Author: Jeremy D. Zechar, [email protected]

Abstract:

When earthquake forecasts are stated in terms of conditional intensity, a number of well-known methods are available for hypothesis testing. For example, the Regional Earthquake Likelihood Models (RELM) group of the Southern California Earthquake Center (SCEC) will soon begin systematic, simultaneous testing of a number of algorithms that forecast earthquake intensity (Schorlemmer et al., 2005). The likelihood framework, however, cannot be directly applied to prediction methods that generate alarms rather than probabilistic forecasts. Because a number of investigators are using alarm-based methods in earthquake prediction experiments, it is important that these methods be evaluated and, when possible, compared. We present an extension of Molchan's error diagram, a plot of miss rate against the fraction of space-time occupied by alarms (Molchan 1991, 1997), that enables the establishment of statistical significance. We apply our methodology to strategies that predict M ≥ 5 events in California in the period 2000-2010 and compare the performance of forecasts derived from the USGS National Seismic Hazard Mapping Project (Frankel et al., 2002) and from the Pattern Informatics and Relative Intensity methods of Rundle et al. (2002).
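The Molchan error-diagram trajectory for a gridded alarm function can be computed as follows (a minimal sketch under assumed inputs; the significance test against a reference model is not shown):

```python
def molchan_trajectory(alarm_scores, cell_weights, outcomes):
    """Molchan error diagram: miss rate vs. fraction of space-time on alarm.

    alarm_scores: per-cell score (higher = more alarming);
    cell_weights: per-cell fraction of total space-time (sums to 1);
    outcomes: per-cell number of target earthquakes.
    Cells are turned on in decreasing score order; each threshold gives
    one (tau, nu) point, tau = alarm fraction, nu = miss rate."""
    total_quakes = sum(outcomes)
    order = sorted(range(len(alarm_scores)),
                   key=lambda i: alarm_scores[i], reverse=True)
    points = [(0.0, 1.0)]            # no alarm: everything is missed
    tau, hits = 0.0, 0
    for i in order:
        tau += cell_weights[i]
        hits += outcomes[i]
        points.append((tau, 1.0 - hits / total_quakes))
    return points
```

A strategy is useful to the extent that its trajectory dips below the diagonal nu = 1 - tau expected of random alarms.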

References Frankel, A., M. Petersen, C. Mueller, K. Haller, R. Wheeler, E. Leyendecker, R. Wesson, S. Harmsen, C. Cramer, D. Perkins, and K. Rukstales (2002). “Documentation for the 2002 update of the national seismic hazard maps.” U.S. Geol. Surv. Open-file report 02-420, 33 pp.

Molchan G.M. (1991) “Structure of optimal strategies in earthquake prediction.” Tectonophysics 193: 267-276.

Molchan, G. M. (1997). "Earthquake Prediction as a Decision-making Problem." Pure and applied geophysics 149(1): 233 -248.

Rundle, J., K. Tiampo, W. Klein, and J. Sa Martins (2002). “Self-organization in leaky threshold systems: The influence of near-mean field dynamics and its implications for earthquakes, neurobiology, and forecasting.” Proc. Natl. Aca. Sci. USA 99: 2514-2521.

Schorlemmer, D., M. Gerstenberger, S. Wiemer and D. Jackson (2005). “Earthquake likelihood model testing.” Submitted to Seismol. Res. Lett.

The Role of Fault Interactions in the Generation of the 1997 Jiashi Strong Earthquake Swarm, Xinjiang, China

S. Zhou1, R. Robinson2, M. Jiang1

1Geophysics Department, Peking University, Beijing 100871, China 2Institute of Geological & Nuclear Sciences, Box 30368, Lower Hutt, New Zealand

Abstract: A swarm of 15 large earthquakes occurred near Jiashi, western China, in 1997. This is a geologically stable area with very little previous seismicity. Using results from detailed relocations of events and focal mechanisms, we develop a synthetic seismicity model for the region involving two NNW-striking en echelon faults. The purpose is to see whether elastic stress interactions can reproduce the swarm-like nature of the seismicity. The results indicate that they can. Some associated normal-faulting mechanisms can be explained as well. We speculate that the rapid change in crustal thickness under the swarm region may have caused the localization of the seismicity.

1. Introduction A sequence of strong earthquakes occurred in Jiashi County, Xinjiang, western China in 1997 (Figures 1 and 2). From 21 January 1997 through 18 October 1997, 15 earthquakes occurred with Ms ≥ 5.0, seven of which had Ms ≥ 6.0. This earthquake swarm was located in the marginal region of the Tarim basin, which is the second largest inland sedimentary basin in the world and is thought by most geologists to be a tectonically stable block. Since instrumental seismograph records began in the 20th century, it has been a rare phenomenon worldwide for so many strong earthquakes to cluster within a small area of a stable intraplate region. Another remarkable fact is that seismicity had been quite inactive in the swarm area: only 6 small earthquakes, all with magnitude ML ≤ 3.0, had been recorded there from 1970 (when a local seismic network was established) to the date when the swarm occurred in 1997. In recent years it has been shown that elastic interactions within fault networks can play an important role in determining the features of seismicity, such as event triggering, clustering of events, and inhibition of events. Here we study the Jiashi earthquake sequence in the light of a synthetic seismicity simulation using a modified version of the methods of Robinson & Benites (1996). Comparing the seismic features simulated with and without fault interactions, we find that the elastic interactions between the seismogenic fault segments associated with the Jiashi swarm may play an important role in determining the swarm's characteristics.

2. Data and Simulation of the Synthetic Seismicity in the Jiashi Swarm Region Based on the focal mechanisms and the relocations of the Jiashi swarm earthquakes, a pair of en echelon strike-slip fault segments with NNW strike and a pair of en echelon normal-slip fault segments with NE strike (Figures 1 and 2) are inferred as the seismogenic fault network of the swarm in this study. Considering that the thickness of Quaternary sedimentation in the swarm region reaches 4,000-7,000 meters and that almost all earthquakes of the swarm occurred below a depth of 6.5 km, it is reasonable to take 6.5 km as the burial depth of the fault segments associated with the Jiashi swarm. In any case, the results are not sensitive to this choice, judging from numerous numerical tests. The dip angles of the fault segments are taken from the average fault-plane solutions of the focal mechanisms of the swarm earthquakes. All pertinent parameters of the seismogenic faults are given in Table 1.

Table 1. Fault Parameters

Fault                          Length (km)  Width (km)  Strike (deg)  Dip (deg)  Depth (km)  End 1 (Lat, Lon)    End 2 (Lat, Lon)
Northern Strike-Slip Segment   20           20          -34.97        73.0       6.5         39.65°N, 77.05°E    39.78°N, 76.91°E
Northern Normal-Slip Segment   15           15          60.0          45.0       6.5         39.65°N, 77.05°E    39.70°N, 77.18°E
Southern Strike-Slip Segment   25           20          -52.46        76.5       6.5         39.51°N, 77.08°E    39.63°N, 76.92°E
Southern Normal-Slip Segment   15           15          65.0          45.0       6.5         39.51°N, 77.08°E    39.55°N, 77.20°E

The synthetic seismicity model used here is developed from the model presented by Robinson & Benites (1996, 2001). The basic idea is that a network of faults is defined geometrically, a loading mechanism is applied, and the physics of fault failure and elastic dislocation theory are then used to track the evolution of failure and slip. Each fault is subdivided into smaller patches; when the first patch of a slip episode fails, it induces stress changes on all other patches. Thus the slip can be confined to the single initial failure or cascade onto other patches, resulting in a larger earthquake. The end result is a synthetic catalog of seismicity. The main modification we introduce here is in the way the loading is specified: we take the loading to be defined by strain increments, constant along the fault and in the same direction as the desired slip, instead of constant stress increments. Thus we can use GPS observation data directly as a constraint on the model.
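The threshold-failure-and-cascade idea can be illustrated with a toy model (our own simplification, not the Robinson & Benites code: a single set of patches, uniform loading, complete stress drop, and a fixed stress-transfer fraction):

```python
import random

def run_cascades(n_patches=20, steps=2000, strength=1.0,
                 load=0.01, transfer=0.15, seed=1):
    """Toy slip-cascade model in the spirit of synthetic-seismicity
    simulators: constant loading, threshold failure, and stress
    transfer from each failed patch to the others, which may cascade.
    Returns the event sizes (patches slipping per episode)."""
    rng = random.Random(seed)
    stress = [rng.uniform(0.0, strength) for _ in range(n_patches)]
    events = []
    for _ in range(steps):
        for i in range(n_patches):          # uniform tectonic loading
            stress[i] += load
        failed = set()
        frontier = [i for i in range(n_patches) if stress[i] >= strength]
        while frontier:                     # cascade of failures
            i = frontier.pop()
            if i in failed:
                continue
            failed.add(i)
            drop = stress[i]
            stress[i] = 0.0                 # complete stress drop
            for j in range(n_patches):      # redistribute a fraction
                if j != i and j not in failed:
                    stress[j] += transfer * drop / (n_patches - 1)
                    if stress[j] >= strength:
                        frontier.append(j)
        if failed:
            events.append(len(failed))
    return events
```

Switching the transfer fraction to zero suppresses interactions, mimicking the comparison made below.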

Figure 1. Details of the positions and mechanisms of the Jiashi swarm events.

The epicenter distribution, magnitude vs. time plot, and b-value vs. time plot of a resulting synthetic catalogue of 5,000 years duration are shown in Figure 2. We can see that the seismicity versus time is quite inhomogeneous, and that the earthquakes with Ms ≥ 6.0 in particular form swarm-like clusters. Moreover, there are few earthquakes in the 2,000-3,000 years before 7 of the 8 strong earthquake clusters, which is very consistent with the observation that seismicity had been quite inactive in the Jiashi swarm area before the swarm started in 1997. However, can we say that such seismicity clusters are caused by the fault interactions?

Figure 2: Results of the synthetic seismicity model. (a) The epicenters of the synthetic events; (b) the frequency-magnitude distribution, indicating a b-value of 1.05; (c) fluctuation of the b-value with time, each value based on 100 events; (d) magnitude vs. time for 5,000 years; (e) an expanded section of 1,000 years. To examine the effect of the fault interactions, we have also simulated synthetic catalogues with inter-fault elastic interactions suppressed; all other model parameters are the same as above. We present our results in terms of the distributions of interevent times, dT, for "large" events (Ms ≥ 6.0) in the region. We then compare these distributions with the corresponding distributions when interactions have been suppressed. The ratio of these numbers will be about 1.0 for all dT if interactions have no effect. Clustering of events, represented by ratios > 1.0 for small dT, would occur if the interactions promoted the formation of secondary fractures. Conversely, mutual inhibition of large events would be represented by a ratio < 1.0 for small dT. In order to get a sufficient number of "large" events for statistical analysis, 50 different models with the same fault parameters, some fixed mechanical parameters, and some random mechanical parameters

chosen within reasonable ranges listed in Table 2, are used to generate catalogues of 5,000 years duration each. Figure 3 shows the comparison of the distribution of inter-event times dT of Ms ≥ 6.0 events, taking all 50 models together. The dT bin size is taken as 1.5 years in the plot, based on the consideration that the 9 large earthquakes of the Jiashi swarm with Ms ≥ 6.0 occurred within 1.5 years. Figure 3 indicates that there is an enhancement in the risk of multiple events within a short time (dT < 1.0 year), as compared to the corresponding cases with no interactions. The case where interaction is suppressed has a distribution of dT that closely matches that for a Poisson random process with the same mean dT.
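The interevent-time comparison can be sketched as a histogram ratio between the interacting and non-interacting catalogs (a minimal illustration; the bin edges and example inputs are hypothetical):

```python
def interevent_ratio(times_a, times_b, bins):
    """Ratio of interevent-time histograms for two catalogs
    (e.g., with vs. without fault interactions). A ratio > 1 in the
    shortest bin indicates clustering promoted by interactions."""
    def hist(times, bins):
        dts = [t2 - t1 for t1, t2 in zip(times, times[1:])]
        counts = [0] * (len(bins) - 1)
        for dt in dts:
            for k in range(len(bins) - 1):
                if bins[k] <= dt < bins[k + 1]:
                    counts[k] += 1
                    break
        return counts
    ha, hb = hist(times_a, bins), hist(times_b, bins)
    return [a / b if b else float('inf') for a, b in zip(ha, hb)]
```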

Figure 3. Comparison of the distribution of inter-event times (dT) of large earthquakes with Ms ≥ 6.0, taking all 50 models together. The solid line is for the case with interactions; the dashed line is for the case of no interactions. The open circles are the expected distribution for a random process with the same mean dT as the case without interactions.

3. Conclusion Considering the above results, it seems likely that elastic fault interactions played an important role in determining the character of the 1997 Jiashi strong earthquake swarm. This study thus adds further to the strong evidence for the importance of static stress interactions in general.

4. Acknowledgement This work was jointly supported by the National Basic Research Program of China under grant number 2004CB418401, and the New Zealand Foundation for Research, Science and Technology.

5. References Okada, Y., 1992. Internal deformation due to shear and tensile faults in a half-space, Bull. Seis. Soc. Am., 82, 1018-1040. Robinson, R. and Benites, R., 1996. Synthetic seismicity models for the Wellington region, New Zealand: Implications for the temporal distribution of large events, J. Geophys. Res., 101, 27,833-27,844. Robinson, R. and Benites, R., 2001. Upgrading a synthetic seismicity model for more realistic fault ruptures, Geophys. Res. Lett., 28, 1843-1846.

Improved Stress Release Model and its Application to Earthquake Prediction in Taiwan

Shoubiao ZHU1,2, Yaolin SHI2

1The Institute of Crustal Dynamics, Chinese Earthquake Administration, Beijing, China, 100085 2Laboratory of Computational Geodynamics, Graduate University of Chinese Academy of Science, Beijing, China, 100049

Corresponding Author: Yaolin Shi, [email protected]

Abstract: The stress release model (SRM) and the coupled stress release model (CSRM) have been applied to seismicity studies. However, it is found that the peaks of the earthquake conditional probability intensity often occur before the seismicity highs in the time domain. In this paper, we modify the SRM and CSRM with respect to the relation between stress and earthquake probability. We assume that the highest seismicity does not occur at the time of highest stress; instead, there exists a time delay from the time of highest stress to the time of highest seismicity. The modified model is applied to the study of earthquakes in Taiwan. The results show that the modified models, both SRM and CSRM, can be applied at smaller scales of time (100 years) and space (~300 km). The accuracy of earthquake occurrence times predicted by the modified coupled stress release model is higher than that of the old model in a retrospective earthquake prediction test.

1. Introduction The stress release model (SRM) was proposed by Vere-Jones in 1978 for the statistical study of seismicity. Physically, it is a stochastic version of the elastic rebound theory of earthquake genesis. The classical elastic rebound model suggests that stress accumulates slowly until it is released in a burst by an earthquake. This can be simulated by a jump Markov process, and the SRM was developed on the basis of Knopoff's Markov model. Vere-Jones (1988) applied the SRM to historical earthquakes of North China and obtained some interesting results. Zheng and Vere-Jones (1991, 1994) further studied the SRM in detail, provided a computational algorithm, and achieved good results in practical use. Although Zheng and Vere-Jones (1991) divided North China into 4 seismic zones and calculated the parameters of the SRM with the zones combined, they did not take into account the interaction between seismic zones and still adopted the simple SRM. On the basis of their study, Shi et al. (1998) investigated the application of the SRM to synthetic earthquakes, and found that the SRM is a good model when applied to the entire system, but that its performance degrades when applied to a region that is only part of the entire system, because it neglects the influence of stress changes produced by earthquakes occurring outside the region. They therefore proposed the coupled stress release model (CSRM) as an improvement, including terms that account for the stress interactions among earthquakes occurring in different regions. Liu et al. (1998) applied the CSRM to historical M > 6.0 earthquakes from 1480 to 2000 in North China, and compared the results obtained from the CSRM and SRM by the AIC criterion. They found that the CSRM is superior to the SRM, and can significantly raise the earthquake occurrence probability before some main earthquakes.
Zhu and Shi (2002) found that both the SRM and CSRM remain applicable at smaller time and space scales, and that the accuracy of earthquake prediction by the SRM is higher than that of the Poisson model. Even so, both the SRM and CSRM still have a flaw: in these models, the earthquake conditional probability intensity and the seismicity do not always vary consistently. For example, the conditional probability intensity may already have dropped to low values while the seismicity remains very active. In this study, we modify the stress release model and apply the new model to seismicity analysis in Taiwan.

2. Modification of SRM and CSRM

In the stress release model, it is assumed that the earthquake conditional probability intensity is proportional to the exponential of the stress, so that peak seismicity would occur at the time of peak stress. However, this assumption sometimes contradicts observations. The earthquake M-t plot and the variation of the conditional probability intensity with time in North China from 1480 to 2000 are shown in Figure 1 (Liu et al., 1998). Although the conditional probability intensity reached its peak around the year 1660 and was reduced to low values from 1670 to 1800, the seismicity was very active, both in frequency and in magnitude, from 1670 to 1740 (the arrow denotes the inconsistent period) in North China. This suggests that there exists a time delay between the stress high and the seismicity high.

Figure 1. Earthquake M-t plot and variation of the conditional probability intensity with time in North China from 1480 to 2000 (Liu et al., 1998); the arrow denotes the inconsistent period.

Considering this time delay between the conditional probability intensity and the seismicity, strong or moderate-strong events may still occur after a strong earthquake, even though the latter may have significantly reduced the stress. Therefore, the equation for the conditional intensity of earthquakes is modified as: λ(t) = exp{a + b[t' − cS(t')]} (1) The conditional probability intensity λ at time t is related to the stress at time t', i.e., the linearly increasing stress a + bt' minus cS(t'), where S(t') is the total released stress at time t' = t − Δt. In this case, not only are a, b, c model parameters to be determined by the maximum likelihood method; Δt is also a parameter, to be found by a search for the optimum. Figure 2 shows the comparison of the intensity curves of the SRM and the modified SRM in North China from the year 1450 to 2000. It is apparent that the modified SRM is better than the SRM in two respects: first, the seismicity almost keeps abreast of the conditional probability intensity; second, the AIC of the modified SRM (AIC = 395.9) is less than that of the SRM (AIC = 402.3). In the same way, the conditional probability intensity λ(t) in the CSRM is modified as:

λ(t) = exp{a + b[t' − c1S1(t') − c2S2(t')]} (2)

where S1(t') and S2(t') are the sums of the released stresses at time t' in the inner and outer regions, respectively, t' is the same as in eqn. (1), and a, b, c1 and c2 are model parameters fitted from catalog data.
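Equation (1) with the delay Δt can be evaluated as follows (a sketch; how the released stress of each event is computed from its magnitude is a modelling choice outside this fragment):

```python
import math

def srm_intensity(t, quakes, a, b, c, dt):
    """Modified SRM conditional intensity, eq. (1): the stress state
    is evaluated at the delayed time t' = t - dt.

    quakes: list of (time, released_stress) pairs; the released stress
    is taken as given here."""
    t_prime = t - dt
    S = sum(s for (ti, s) in quakes if ti <= t_prime)
    return math.exp(a + b * (t_prime - c * S))
```

With dt > 0, a large stress release at time T lowers the intensity only after T + dt, which is exactly the delayed response the modification introduces.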

Figure 2. Comparison of the SRM and the modified SRM in North China from the year 1450 to 2000 (top: M-t plot; middle: conditional probability intensity, in events/year, versus time in the SRM, AIC = 402.3; bottom: the corresponding curve in the modified SRM, AIC = 395.9).

3. Comparison of the SRM with the MSRM in the Taiwan area In order to examine the modified stress release model (MSRM) and the modified coupled stress release model (MCSRM), we re-calculate the AIC value and R-score with the MSRM and the MCSRM in the Taiwan region. The R-score is a parameter defined as the success rate minus the false alarm rate, popularly used in China to evaluate earthquake predictions (Shi et al., 2000). A table of AIC values and R-scores of 10 sub-regions was calculated for the SRM, MSRM, CSRM and MCSRM in the Taiwan area. The table suggests that the MSRM behaves better than the SRM in AIC value, but not in R-score. However, the MCSRM is better than the CSRM and SRM both in AIC value and in R-score.


Figure 3. Difference between the earthquake occurrence times predicted by the ISRM and by the Poisson model, and the number of earthquakes predicted. Figure 3 shows the difference between the actual earthquake occurrence time and the occurrence time predicted by the SRM, MSRM, MCSRM, and Poisson model, respectively. The square root of the mean squared difference between the earthquake occurrence time predicted by the Poisson model and the actual occurrence time is 1.40 years, whereas it is 0.74 years for the SRM, 0.86 years for the MSRM and 0.61 years for the MCSRM. Obviously, the MCSRM has the highest accuracy in earthquake prediction. The parameter Δt can, in principle, be obtained by an algorithm similar to that for a, b and c. In practice, Δt has so far been found by trial and error in this work. In Taiwan, Δt is usually assigned 2-4 years for trial. We found that a larger Δt can usually improve the AIC value, but may reduce the R-score.
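The trial-and-error search for Δt amounts to maximizing the point-process log-likelihood over a small set of candidate delays; a generic sketch (the intensity constructor make_lam is a hypothetical stand-in for an intensity of the form (1) or (2)):

```python
import math

def log_likelihood(event_times, lam, t0, t1, n_grid=2000):
    """Point-process log-likelihood on [t0, t1]: sum of log-intensities
    at the events minus the integrated intensity (trapezoidal rule)."""
    ll = sum(math.log(lam(t)) for t in event_times)
    h = (t1 - t0) / n_grid
    integral = sum(0.5 * (lam(t0 + k * h) + lam(t0 + (k + 1) * h)) * h
                   for k in range(n_grid))
    return ll - integral

def search_dt(event_times, make_lam, t0, t1, candidates):
    """Trial-and-error search for the delay parameter: pick the dt whose
    intensity maximizes the log-likelihood. make_lam(dt) must return an
    intensity function lam(t)."""
    return max(candidates,
               key=lambda dt: log_likelihood(event_times, make_lam(dt), t0, t1))
```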

4. Conclusion From this study we conclude that the modified stress release models (MSRM, MCSRM), which allow the peak seismicity to appear after a time delay from the peak stress, perform better than the SRM and CSRM in studies of regional seismicity. Moreover, the MCSRM gives better estimates when stress release models are applied to earthquake prediction. This research was supported by the Beijing Municipal Natural Science Foundation (8053020).

References Liu Jie, Vere-Jones D, Ma Li, et al. (1998). The principle of coupled stress release model and its application. Acta Seismologica Sinica, 11(3): 273-281. Shi Yaolin, Liu Jie, Vere-Jones D, et al. (1998). Application of mechanical and statistical models to the study of seismicity and of synthetic earthquakes and the prediction of natural ones. Acta Seismologica Sinica, 11(4): 421-430. Shi Yaolin, Liu Jie, Zhang Guomin (2001). Performance of Chinese annual earthquake predictions in the nineties. Journal of Applied Probability, 38A. Vere-Jones D. (1988). On the variance properties of stress release models. Austral. J. Statist., 30A: 123-135. Zheng X G, Vere-Jones D. (1991). Application of stress release models to historical earthquakes from North China. Pure Appl. Geophys., 135: 559-576. Zheng X G, Vere-Jones D. (1994). Further application of stress release models to historical earthquake data. Tectonophysics, 229: 101-121. Zhu S, Shi Y. (2002). Improved stress release model: Application to the study of earthquake prediction in Taiwan area. Acta Seismologica Sinica, 15(2): 171-178.

Properties of the probability distribution associated with the largest event in an earthquake cluster

Jiancang Zhuang and Yosihiko Ogata Institute of Statistical Mathematics, 4-6-7 Minami Azabu, Minato-Ku, Tokyo 106-8659, Japan

Corresponding Author: Jiancang Zhuang, [email protected]

Abstract: The space-time epidemic-type aftershock sequence (ETAS) model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster, and their limit properties for all three cases where the process is subcritical, critical or supercritical, and uses them to evaluate the probability of an earthquake being a foreshock, and to derive magnitude distributions of foreshocks and non-foreshock earthquakes. To verify these theoretical results, the JMA (Japan Meteorological Agency) earthquake catalog is analyzed, and we found no discrepancy between the analytic results and the inversion results obtained using the stochastic reconstruction method. We found that the proportion of foreshocks among background events is around 8%.

1. Introduction Foreshocks have been one of the most important topics in the research of seismicity patterns and earthquake prediction for decades. To understand foreshocks, we need good modelling of earthquake clustering. In seismicity studies, the ETAS models are generally adopted (see, e.g., Ogata, 1998). These models assume that the seismicity can be divided into a background component and a clustering component, and that each event, whether it belongs to the background or is directly triggered by another event, triggers its own offspring according to some general rules.

2. Definition of the ETAS model In the ETAS model, the time-varying seismicity rate (mathematically termed the conditional intensity) takes the form

$\lambda(t, x, y, m) = s(m)\,\lambda(t, x, y) = s(m)\left[\mu(x, y) + \sum_{i:\,t_i < t} \xi(t, x, y;\, t_i, x_i, y_i, m_i)\right],$ (1)

where

$\xi(t, x, y;\, t_i, x_i, y_i, m_i) = \kappa(m_i)\, g(t - t_i)\, f(x - x_i, y - y_i;\, m_i).$ (2)

In (2), $\kappa(m) = A\, e^{\alpha(m - m_c)}$, $m \ge m_c$, (3) is the mean number of direct offspring of an event of magnitude m, and

$g(t) = \frac{p-1}{c}\left(1 + \frac{t}{c}\right)^{-p}, \quad t > 0,$ (4)

and $f(x, y; m) = \frac{q-1}{\pi D\, e^{\gamma(m - m_c)}}\left(1 + \frac{x^2 + y^2}{D\, e^{\gamma(m - m_c)}}\right)^{-q}$ (5) are the p.d.f.s for the occurrence times and locations of direct offspring, respectively.
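Equations (3)-(5) translate directly into code; the sketch below assumes the symbol reading used here (α in κ; c, p in g; D, q and an exponential spatial scaling with rate γ in f):

```python
import math

def kappa(m, A, alpha, mc):
    """Eq. (3): mean number of direct offspring of a magnitude-m event."""
    return A * math.exp(alpha * (m - mc))

def g(t, c, p):
    """Eq. (4): p.d.f. of the occurrence times of direct offspring."""
    return (p - 1) / c * (1 + t / c) ** (-p) if t > 0 else 0.0

def f(x, y, m, D, q, gamma, mc):
    """Eq. (5): p.d.f. of the locations of direct offspring; the spatial
    scale grows exponentially with the parent magnitude."""
    d = D * math.exp(gamma * (m - mc))
    return (q - 1) / (math.pi * d) * (1 + (x * x + y * y) / d) ** (-q)
```

As a sanity check, g integrates to 1 over (0, ∞) and κ reduces to A at the magnitude threshold.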

3. Theoretical distributions associated with the largest magnitude of all the descendants from a given event The probability that an event of magnitude m has no offspring greater than $m^0$ can be derived from the model analytically (Zhuang and Ogata, 2004), i.e.,

$\zeta(m, m^0) = \Pr\{\text{an event of magnitude } m \text{ has no offspring, direct or indirect, greater than } m^0\} = \exp\left\{-\kappa(m)\left[1 - \int_{m_c}^{m^0} s(m^*)\,\zeta(m^*, m^0)\, dm^*\right]\right\}.$ (6)

It is evident that $\zeta(m, m^0)$ has the form

$\zeta(m, m^0) = \exp[-\kappa(m)\, F(m^0)],$ (7)

where

$F(m^0) = 1 - \int_{m_c}^{m^0} s(m^*)\, \exp[-\kappa(m^*)\, F(m^0)]\, dm^* = 1 - \int_{m_c}^{m^0} \beta e^{-\beta(m^* - m_c)} \exp[-A\, e^{\alpha(m^* - m_c)}\, F(m^0)]\, dm^* = 1 - \frac{\beta}{\alpha}\,[A\, F(m^0)]^{\beta/\alpha} \int_{A F(m^0)}^{A F(m^0)\, e^{\alpha(m^0 - m_c)}} u^{-\beta/\alpha - 1}\, e^{-u}\, du$ (8)

Here $F(m^0)$ represents the probability that the largest earthquake in an arbitrary cluster, including the initial event and all its descendants, is greater than $m^0$. Based on an analysis with generating functions, Saichev and Sornette (2005) give similar forms of the above equations, but they do not discuss the case when the process is supercritical. In this article, we discuss the properties of ζ in all three cases. The extinction probability $P_c(m)$ can be derived as

$P_c(m) = \Pr\{\text{the family tree from an event of magnitude } m \text{ becomes extinct}\} = \exp\left\{-\kappa(m)\left[1 - \int_{m_c}^{+\infty} s(m^*)\, P_c(m^*)\, dm^*\right]\right\}.$ (9)

Substituting $P_c(m) = \exp[-C\,\kappa(m)]$ into (9), we have

$C = 1 - \int_{m_c}^{+\infty} s(m^*)\, \exp[-C\,\kappa(m^*)]\, dm^*.$ (10)

Function F (m) is closely related to Pc(m) by

$\lim_{m \to +\infty} F(m) = C = -\frac{\log P_c(m)}{\kappa(m)}.$ (11)

Thus, when the process is supercritical, C > 0 yields

$\lim_{m^0 \to +\infty} \zeta(m, m^0) = \lim_{m^0 \to +\infty} e^{-\kappa(m)\, F(m^0)} = e^{-C\,\kappa(m)} = P_c(m).$ (12)

For the subcritical case, which requires that $\beta > \alpha$ and $\rho = A\beta/(\beta - \alpha) < 1$, it is easy to see that $\zeta(m, m^0) \to 1$ as $m^0 \to +\infty$, because $C = 0$. In this case,

$\lim_{m \to +\infty} \frac{F(m)}{\bar{s}(m)} = \frac{1}{1 - \rho},$ (13) where $\bar{s}(m) = \int_m^{+\infty} s(u)\, du$.

Furthermore, in the subcritical case, according to (7), for a fixed m, when $m^0$ is large enough,

$\zeta(m, m^0) \approx \exp\left[-\frac{1}{1 - \rho}\, \bar{s}(m^0)\, \kappa(m)\right] = \exp\left[-\frac{A'}{1 - \rho}\, e^{\alpha m - \beta m^0}\right],$ (14) where $A' = A\, e^{(\beta - \alpha) m_c}$. For the critical case,

$F(m) \propto e^{-\alpha(m - m_c)} \propto [\kappa(m)]^{-1},$ (15) and $\zeta(m, m^0) \approx \exp\left[-A\, e^{\alpha(m - m^0)}\right].$ (16) In conventional studies, foreshocks are defined as non-aftershock earthquakes that are followed by another larger earthquake occurring nearby. Here we define a foreshock as a background event that has at least one offspring, direct or indirect, with a larger magnitude. We refer to Zhuang et al. (2004) on how to determine whether an earthquake is a background event, a triggered event or a foreshock by using the


Figure 1: Seismicity in and near the Japan region during 1926-1999 (MJ ≥ 4.2). (a) Epicenter locations; (b) latitudes of epicenter locations against occurrence times. The shaded region represents the study space-time range.


Figure 2: Comparison between the theoretical ζ(m, m0) and the reconstructed ζ̂(m, m0) for the JMA catalogue. Left panel: ζ(m, m0); right panel: ζ̂(m, m0).

stochastic declustering method. From (6), the probability that an event of magnitude m is a foreshock is $H(m) = 1 - \zeta(m, m)$.
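H(m) can be computed through the scalar fixed point F of eq. (8), giving H(m) = 1 − exp[−κ(m) F]; a sketch under stated assumptions (exponential magnitude density s(m) = β exp[−β(m − m_c)] and the overall productivity κ, with defaults that are our reading of the JMA estimates in the text, so the result approximates the ~15% figure for all events rather than the ~7-8% for background events with their lower productivity):

```python
import math

def foreshock_prob(m, A=0.1692, alpha=1.5014, beta=1.9585, mc=4.2,
                   n_grid=2000, n_iter=100):
    """H(m) = 1 - zeta(m, m): probability that an event of magnitude m
    has at least one larger descendant. The scalar F = F(m) of eq. (8)
    is found by fixed-point iteration (contraction in the subcritical
    regime), then zeta(m, m) = exp[-kappa(m) F]."""
    kap = lambda x: A * math.exp(alpha * (x - mc))
    s = lambda x: beta * math.exp(-beta * (x - mc))
    h = (m - mc) / n_grid
    ms = [mc + (k + 0.5) * h for k in range(n_grid)]   # midpoint grid
    F = 1.0
    for _ in range(n_iter):
        F = 1.0 - sum(s(x) * math.exp(-kap(x) * F) * h for x in ms)
    return 1.0 - math.exp(-kap(m) * F)
```

At m = m_c any offspring is larger, so H(m_c) reduces to 1 − exp(−A), the probability of having at least one direct offspring.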

4. Data analysis and results The dataset used in this study is the JMA catalogue in the range of longitude 121°-155°E, latitude 21°-48°N, depth 0-100 km, time 1926/January/1-1999/December/31 and magnitude MJ ≥ 4.2. We choose a polygon as shown in Figure 1 as the target region and a time period of 10,000-26,814 days after 1926/Jan/1, with the same depth and magnitude ranges as the whole dataset, for model fitting. The events outside of this study space-time range are used as complementary events for calculating the boundary effect. The model parameters estimated by maximizing the likelihood function are $\hat{A} = 0.1692$, $\hat{c} = 0.0189$ day, $\hat{\alpha} = 1.5014$, $\hat{p} = 1.1181$, $\hat{D}^2 = 0.0008640$, $\hat{q} = 1.9107$, $\hat{\gamma} = 1.0761$ and $\hat{\beta} = 1.9585$. There are two ways to evaluate ζ(m, m0): one is based on solving (7) with the parameters from the fitting results; the other is based on stochastic reconstruction. The theoretical ζ(m, m0) and the reconstructed ζ̂(m, m0) for the JMA catalogue are shown in Figure 2. In calculating the theoretical one in the left panel, we use the empirical magnitude distribution of the events falling in the study space-time range as s(m). The overall impression of these two images is their similarity to each other, even though the theoretical one is smoother than the reconstructed one. Plotting H(m) and Ĥ(m) in Figure 3, we can see that, even if there is no big discrepancy between these two functions, the probability for an event to produce at least one larger descendant is about 15%. Such a high probability may be explained in the following way. According to Zhuang et al. (2004), the characteristics of background events and their descendants are different. For example, the magnitude distributions of background and triggered events differ; we denote them here by sb(m) and sc(m). Also, a triggered event triggers more direct offspring on average than a background event of the same magnitude.
Here we denote the productivity of a background event and of a triggered event by


Figure 3: Foreshock probabilities in the JMA catalogue. Circles, triangles and crosses represent the reconstructed Ĥ(m), Ĥc(m) and Ĥb(m) for all the events, background events and triggered events, respectively. The black, red and orange solid lines are the theoretical results for all the events, background events and triggered events, respectively.

κb(m) and κc(m), respectively. For a triggered event,

ζc(m, m0) = exp{ −κc(m) [ 1 − ∫_{mc}^{m0} sc(m′) ζc(m′, m0) dm′ ] } .   (17)

For a background event of magnitude m,

ζb(m, m0) = exp{ −κb(m) [ 1 − ∫_{mc}^{m0} sc(m′) ζc(m′, m0) dm′ ] } .   (18)

Using the stochastic reconstruction techniques introduced by Zhuang et al. (2004), we reconstruct κb, κc and sc, and then use them to calculate Hb and Hc, as shown in Figure 3. As we can see, the three theoretical curves remain close to the reconstructed ones after the above corrections. Basically, the probability that a background earthquake of magnitude about 4.2 to 5 triggers a larger earthquake is around 7%.

5. Conclusions
In this paper, based on the space-time ETAS model, we obtained the key equation for the probability distribution of the magnitude of the largest descendant from a given ancestor, ζ(m, m0) in (6), together with its asymptotic properties. We define foreshocks naturally as the background events that have at least one larger descendant. The probability that a background event is a foreshock can then easily be obtained. We also analyzed the JMA catalogue to verify our theory. To obtain ζ(m, m0), two methods are used: one computes it directly from the integral equations, (7) and (8), and the other uses stochastic reconstruction. Because background events and triggered events behave differently in triggering offspring, we evaluate the corresponding ζ-function for background events and triggered events separately. For a background event of a magnitude from 4.2 to 6.0, the probability that it is a foreshock is about 8%, which is very close to the results obtained by using a conventional foreshock definition and a conventional declustering method.

6. Acknowledgement
Helpful discussions with Annemarie Christophersen, Kazuyoshi Nanjo, Martha Savage, Euan Smith, Didier Sornette and David Vere-Jones are gratefully acknowledged.

7. Main references
Ogata, Y. (1998), Space-time point-process models for earthquake occurrences, Ann. Inst. Stat. Math., 50, 379-402.
Saichev, A. and Sornette, D. (2005), Distribution of the Largest Aftershocks in Branching Models of Triggered Seismicity: Theory of the Universal Båth's Law, Phys. Rev. E, 71, 056127.
Zhuang, J., Y. Ogata and D. Vere-Jones (2004), Analyzing earthquake clustering features by using stochastic reconstruction, J. Geophys. Res., 109, B05301, doi:10.1029/2003JB002879.
Zhuang, J. and Y. Ogata (2004), Analyzing earthquake clustering features by stochastic reconstruction – foreshocks, Eos Trans. AGU, 85(47), Fall Meet. Suppl., Abstract S23A-0300.

Interevent times within aftershock sequences as a reflection of different processes

F. R. Zuniga1

1 Centro de Geociencias, Universidad Nacional Autónoma de México, UNAM Juriquilla, Querétaro, México 76230

Corresponding Author: R. Zuniga, [email protected]

Abstract: We have looked into the interevent time distribution of aftershock sequences from different tectonic regimes with the aim of gaining information on the aftershock process. To this end we examined some well defined aftershock sequences from Alaska, New Zealand and Mexico and identified the distribution patterns of interevent times (also called waiting times) in both time and space. Several results emanate from this exercise. In most cases, more than one coherent process can be discerned, and those acting after the main process occur, surprisingly, not in the volume surrounding the main rupture but rather within it.

1. Introduction
The aftershock process has been the subject of many studies throughout the history of seismological research (for a review see, for example, Utsu et al., 1995; Gross, 2003). Anyone who has been exposed to earthquakes is familiar with the notion that a sequence of (usually) smaller events follows the occurrence of a medium to large event. Intuition tells us that they must occur due to the perturbation introduced by the main rupture. However, the questions of why and where they take place are far from settled. In the simplest models, the main dislocation produces regions of increased stress in neighboring volumes where other events can be expected to occur. Other models attempt to explain the temporal characteristics of aftershocks as a hierarchical structure in which each aftershock can produce its own aftershock sequence. However, careful examination of the decay rate of aftershocks in different regions around the world for main shocks of similar size, by means of, for example, the number of aftershocks of a prescribed magnitude (Singh and Suarez, 1988) or the p-value in Omori's law (Wiemer and Katsumata, 1999), clearly shows differences that cannot be attributed to a single process and highlights the importance of the tectonic regime. Based on recent studies which looked at the probability distribution of interevent times of earthquakes occurring in a region, some authors have proposed a single universal scaling law which can account for the observed distributions, appealing to self-organized criticality as the inherent process (Bak et al., 2002). Based partly on their observations, they claim that there is no inherent difference between mainshocks, foreshocks and aftershocks. These results, however, when examined in detail, may stem from the combination of events from different tectonic regimes, which precludes any conclusion related to the individual earthquake process.
In this work we have analyzed the interevent time distribution of events belonging to well defined aftershock sequences from different regions, with the objective of identifying possible different processes within each sequence and their causes.

2. Data and method Our data comes from selected seismicity catalogs from Alaska, New Zealand and Mexico which have been the focus of detailed studies regarding their homogeneity and completeness. Using those time intervals for which the best quality data is available, we extracted aftershock sequences by means of temporal and spatial associations. We looked at events with magnitude larger than 6.5 occurring in the upper crust since these are the most efficient with regards to aftershock generation.

Our selection criteria also help avoid problems related to depth artifacts introduced in the locations. For each main shock a spatial criterion is employed, following the relation proposed by Kagan (2002) for the maximum distance of expected aftershocks with respect to magnitude. With this first set, a temporal criterion is used based on the rate of events following the main shock. Curves of cumulative number of events versus time are used to select times of significant changes in seismicity rate. The interevent time distributions are then analyzed as a function of time and as a set of normal scores.
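The interevent-time and normal-scores computation described above can be sketched as follows. The synthetic Omori-type catalogue, its parameters, and the rank-based normal-score formula r/(N+1) are illustrative assumptions, not the actual catalogues or scoring used in the study.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

# Synthetic aftershock times (days after the mainshock) with a modified-Omori
# rate K/(t+c)^p, generated by inverse-transform sampling of the cumulative rate.
K, c, p, t_max = 50.0, 0.05, 1.1, 365.0
lam_span = (t_max + c)**(1 - p) - c**(1 - p)      # proportional to total count
n = rng.poisson(K * lam_span / (1 - p))           # expected ~400 events
u = rng.random(n)
t = (u * lam_span + c**(1 - p))**(1.0 / (1 - p)) - c
t.sort()

# Interevent (waiting) times and their normal scores:
dt = np.diff(t)
ranks = dt.argsort().argsort() + 1                # ranks 1..N (1 = smallest)
nd = NormalDist()
scores = np.array([nd.inv_cdf(r / (len(dt) + 1)) for r in ranks])

print(f"{len(dt)} interevent times, median {np.median(dt):.4f} d, "
      f"scores in [{scores.min():.2f}, {scores.max():.2f}]")
```

Plotting dt against t on log axes reproduces the kind of temporal trends shown in Figures 1 and 3, while the normal scores summarize the distribution shape independently of scale.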

3. Discussion
Different patterns can be clearly discerned for some of the sequences analyzed, while others show an apparently single process. In Figure 1 we show the temporal characteristics of interevent times for an aftershock sequence in Alaska for different magnitude cut-offs. Figure 2 shows the distribution of normal scores as a function of magnitude. In Figure 1 we can see different trends which appear to act at certain times after the mainshock.

Fig. 1 Temporal behavior of interevent times for an aftershock sequence in Alaska

Another key point to note from the curves in Figure 1 is that the different trends are easier to see in the smaller events than in the larger ones, although this may be due to the limited number of larger events. The data in Figure 2, on the other hand, show an apparently single regime dominating the process.

Fig. 2 Frequency of occurrence of normalized interevent times for the Alaska aftershock sequence in Figure 1.

As another example, the aftershock sequence shown in Figure 3 corresponds to a mainshock which occurred in the South Island of New Zealand where again several trends can be discerned.

Fig. 3 Temporal behavior of interevent times for an aftershock sequence in the South Island of New Zealand

The normalized scores, again, do not show any particular change in tendency, as seen in Figure 4.

Fig. 4 Frequency of occurrence of normalized interevent times for the New Zealand aftershock sequence of Figure 3.

The locations of the separate sequences are shown in the following maps, where the spatial dependence of each set is apparent.

Fig. 5 Epicenters of events located within the time intervals discerned from the interevent times of Figure 3.

4. Conclusion
The analysis of the interevent time distribution of aftershock sequences reveals that different processes start to act following the main shock. These processes can be due to stress concentrations induced by both the main shock and other aftershocks in the vicinity. The locations of events belonging to a particular process indicate the spatial dependence of that process, which in turn can be traced to stress shadows. Normalized scores do not reflect such differences, which may be the reason why previous studies call for a single universal behavior of interevent time properties.

5. Acknowledgement This work was carried out with support from UNAM’s grant IN122602.

6. References
Bak, P., K. Christensen, L. Danon and T. Scanlon (2002), Unified Scaling Law for Earthquakes, Phys. Rev. Lett., 88, 178501.
Gross, S. (2003), Failure-time Remapping in Compound Aftershock Sequences, Bull. Seism. Soc. Am., 93, 4, 1449-1457.
Kagan, Y. (2002), Aftershock Zone Scaling, Bull. Seism. Soc. Am., 92, 2, 641-655.
Singh, S. K. and Suarez, G. (1988), Regional Variation in the Number of Aftershocks (mb > 5) of Large Subduction Zone Earthquakes (Mw > 7.0), Bull. Seism. Soc. Am., 78, 1, 230-242.
Utsu, T., Ogata, Y., and Matsu'ura, R. S. (1995), The Centenary of the Omori Formula for a Decay Law of Aftershock Activity, J. Phys. Earth, 43, 1-33.
Wiemer, S. and Katsumata, K. (1999), Spatial Variability of Seismicity Parameters in Aftershock Zones, J. Geophys. Res., 104, 13135-13151.

Properties of seismicity and surface deformation generated by earthquakes on a heterogeneous strike slip fault in elastic half space

G. Zöller1, Y. Ben-Zion2, S. Hainzl3, and M. Holschneider4

1Institute of Physics, University of Potsdam, POB 60 15 53, 14415 Potsdam, Germany 2 Department of Earth Sciences, University of Southern California, Los Angeles, CA 90089-0740 USA 3 Institute of Geosciences, University of Potsdam, POB 60 15 53, 14415 Potsdam, Germany 4Institute of Mathematics, University of Potsdam, POB 60 15 53, 14415 Potsdam, Germany

Corresponding Author: G. Zöller, [email protected]

Abstract: We attempt to develop a physical basis for earthquake dynamics and seismic hazard on large strike-slip fault zones by joint analysis of model realizations, with parameters representing specific fault zones, and multi-disciplinary observations of deformation phenomena. The model consists of discrete slip patches (representing structural segmentation) on a vertical plane in a 3-D solid, and it accounts for brittle slip, creep slip, realistic boundary conditions and 3-D elastic stress transfer (Ben-Zion and Rice, 1993; Ben-Zion, 1996). Recent developments extended the framework to incorporate quasi-dynamic rupture propagation, gradual healing, and creeping barriers along the fault (Zöller et al., 2004, 2005). The model produces, for ranges of input parameters, several realistic features of seismicity including frequency-size and temporal statistics, hypocenter distributions, realistic foreshock-mainshock-aftershock sequences, approach to and retreat from criticality, and accelerating seismic release. Our current efforts are directed in part toward extending the calculated observable quantities to include surface deformation, and toward developing a better understanding of recurrence intervals of large earthquakes on faults with different levels of structural heterogeneity. Previous works have shown that the model behavior can be mapped onto phase diagrams that span, as a function of input parameters, several different dynamic regimes (e.g., Dahmen et al., 1998; Zöller et al., 2004). This may allow us to use various observables associated with a natural fault zone to classify the dynamic regime of the fault in terms of governing parameters, and then employ corresponding model realizations to produce long synthetic data sets of deformation phenomena. Analysis of such synthetic data sets can provide a better statistical characterization of the fault's response than the limited available records.

1. Introduction The understanding of spatiotemporal earthquake occurrence on natural fault systems is an important scientific challenge. While some seismicity patterns show an almost universal behavior, others are observed less frequently and follow no obvious law. For example, aftershocks following the modified Omori law are observed after almost all large continental earthquakes, whereas foreshocks are a rare phenomenon (Utsu et al., 1995). The Parkfield experiment in California demonstrates that even if a fault segment shows regular behavior over several decades, a prediction of future activity can fail. One reason for this is that the available data sets are too limited to provide enough information for the understanding of the complex relationships between the various physical mechanisms in the earth's crust. This highlights the importance of model simulations which may cover many seismic cycles and enable a study of the dependency of seismicity and other deformation phenomena on the underlying parameters.

In this study, we analyze various quantities in data generated by a discrete model of a vertical strike-slip fault in a 3-D elastic half-space (Ben-Zion and Rice, 1993; Zöller et al., 2005). The model combines computational efficiency with a realistic representation of a large heterogeneous fault zone. The simulations cover thousands of years and allow us to reproduce seismicity patterns as well as postseismic and interseismic creep deformation. The model ingredients, including laws for brittle and creep deformation, stress transfer, and boundary conditions, are compatible with empirical macroscopic knowledge. Several relationships between imposed model parameters (e.g., frictional parameters, creep velocities, spatial heterogeneities) and observed seismicity quantities like frequency-size distributions, hypocenter profiles, and aftershock clustering have been previously quantified using analytical and numerical parameter-space studies (Ben-Zion, 1996; Fisher et al., 1997; Ben-Zion et al., 2003; Zöller et al., 2005). In this paper we consider end-member cases representing a relatively homogeneous fault, a strongly heterogeneous fault, and a fault without dynamic weakening during rupture ("instantaneous healing"). Since the underlying properties of natural faults are generally not known, we examine whether observable parameters based on seismic catalogs and the surface deformation field can serve as proxies to identify the governing properties and evolving dynamics of a fault. We also analyze, in the framework of the model, the statistics of interevent and recurrence times of large earthquakes, which are of particular interest for seismic hazard assessment. The overall goal is an attempt to classify the dynamic regime of large individual fault zones like the San Andreas and San Jacinto faults by means of a small number of observable parameters, and then use this knowledge to provide refined estimates of the seismic hazard associated with the faults.

2. The model

Our model includes a single rectangular fault embedded in a 3-D elastic half space. A fault region of 70 km length and 17.5 km depth is covered by a computational grid, divided into 128x32 uniform cells (see Fig. 1), where deformational processes are calculated. Tectonic loading is imposed by motion, at a constant velocity of 35 mm/year, of the regions around the computational grid. The space-dependent loading rate provides realistic boundary conditions. Using the static stress transfer function for slip in an elastic solid, the continuous tectonic loading for each cell on the computational grid is a linear function of time and plate velocity. Additional loadings on a given cell occur due to brittle and creep failures on the fault. In a recent study (Zöller et al., 2005), creeping barriers (black in Fig. 1) have been introduced to account for aseismic regions between seismic fault segments.
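The loading scheme can be illustrated with a small sketch. The grid size, fault dimensions and plate velocity come from the text; the coupling factor and the stress units are hypothetical placeholders standing in for the static stress-transfer kernel, which is not specified here.

```python
import numpy as np

NX, NZ = 128, 32                  # cells along strike / along depth
LENGTH_KM, DEPTH_KM = 70.0, 17.5  # fault dimensions from the text
V_PLATE = 35.0                    # loading velocity, mm/year

# Hypothetical space-dependent coupling factor: cells near the moving
# boundary regions accumulate loading stress faster (illustrative shape only).
x = np.linspace(0.0, 1.0, NX)     # normalized along-strike coordinate
z = np.linspace(0.0, 1.0, NZ)     # normalized depth coordinate
X, Z = np.meshgrid(x, z, indexing="ij")
coupling = 1.0 + 0.5 * (np.abs(X - 0.5) + Z)

def tectonic_stress(t_years, k=1.0):
    """Loading stress on each cell after t years: linear in time and plate
    velocity, as stated in the text; k converts slip deficit (mm) to stress
    in arbitrary units (placeholder for the stress-transfer kernel)."""
    return k * coupling * V_PLATE * t_years

tau = tectonic_stress(10.0)
print(tau.shape, float(tau.min()), float(tau.max()))
```

The point of the sketch is only the structure: a (128, 32) array of loading stresses that grows linearly in time, onto which brittle and creep failure interactions would be superposed.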

Figure 1 Sketch of the fault model: A single rectangular fault is embedded in a 3D elastic half space.

3. Results and Discussion
In previous works, it has been shown that the model produces realistic frequency-size and temporal statistics of earthquakes, realistic hypocenter distributions, realistic foreshock-mainshock-aftershock sequences, realistic slip histories, approach to and retreat from criticality, and accelerating seismic release (e.g., Ben-Zion et al., 2003; Zöller et al., 2004, 2005). Systematic analytical and numerical parameter-space studies indicate the existence of three basic dynamic regimes. The first is associated with strong fault heterogeneities, power law frequency-size statistics of earthquakes, and random or clustered temporal statistics of intermediate and large events. The second is associated with homogeneous or relatively regular faults, peaked frequency-size statistics (compatible with the characteristic earthquake distribution), and quasi-periodic temporal occurrence of large events. For a range of parameters, there is a third regime in which the response switches back and forth between the foregoing two modes of behavior. The results can be understood in terms of phase diagrams and intermittent criticality (Dahmen et al., 1998; Ben-Zion et al., 2003; Zöller et al., 2004).

Fig. 2 illustrates recent results which are closely related to the critical point concept for large earthquakes:

Figure 2: The frequency-magnitude distribution of all earthquakes, foreshocks and aftershocks, respectively. The dotted lines refer to b-values of 1 and 2.

While the frequency-size distribution for an entire synthetic earthquake catalog shows characteristic earthquake behavior, the foreshocks follow a Gutenberg-Richter law. In view of the critical point concept, this gradual change of the frequency-size statistics can be interpreted as an evolution from a non-critical state to a critical (scale-free) state, in which earthquakes of all magnitudes can occur. Therefore, the size distribution of earthquakes can serve as a proxy for the distance from the critical point. Additional seismicity parameters that may serve as proxies for the evolution toward a critical state include the depth distribution of hypocenters and other spatial parameters (Eneva and Ben-Zion, 1997; Zöller et al., 2001).
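A standard way to quantify such frequency-size statistics from a catalog is the Aki/Utsu maximum-likelihood b-value estimator, b = log10(e) / (mean(M) − (Mc − ΔM/2)). The sketch below applies it to synthetic Gutenberg-Richter magnitudes; all numerical values are illustrative.

```python
import math
import random

def b_value_mle(mags, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= m_c,
    with dm the magnitude binning width (0 for continuous magnitudes)."""
    m = [x for x in mags if x >= m_c]
    mean_excess = sum(m) / len(m) - (m_c - dm / 2.0)
    return math.log10(math.e) / mean_excess

# Synthetic Gutenberg-Richter catalog with true b = 1: magnitudes above the
# completeness threshold are exponential with rate beta = b * ln(10).
random.seed(1)
m_c, b_true = 2.0, 1.0
beta = b_true * math.log(10.0)
mags = [m_c + random.expovariate(beta) for _ in range(5000)]
print(f"estimated b = {b_value_mle(mags, m_c, dm=0.0):.3f}")
```

Tracking this estimator in sliding time windows is one concrete way to monitor the gradual change from characteristic to Gutenberg-Richter statistics described in the text.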

In the present study, we follow this approach of linking model parameters to observational properties by investigating the surface deformation field for several end-member cases of faults. In a first step, we consider surface deformation fields produced by mainshocks and focus on the degree of disorder of these fields. Initial results indicate that the disorder of the stress field correlates with the disorder of the surface deformation field.

4. Conclusion

We attempt to combine systematic theoretical parameter-space studies with observational results of multi-disciplinary deformation signals. Phase diagrams spanned by key controlling parameters summarize the expected behavior of various observables in different dynamic regimes. Analysis of observed data associated with large individual fault zones may be used, together with the phase diagrams, to classify the dynamic regime of the faults. If this can be done, it will improve the understanding of earthquake dynamics and provide a contribution to the problem of predictability of large events.

5. References
Ben-Zion, Y. and Rice, J. R. (1993). Earthquake failure sequences along a cellular fault zone in a three-dimensional elastic solid containing asperity and nonasperity regions. J. Geophys. Res., 98, 14109-14131.
Ben-Zion, Y. (1996). Stress, slip, and earthquakes in models of complex single-fault systems incorporating brittle and creep deformations. J. Geophys. Res., 101, 5677-5706.
Ben-Zion, Y., M. Eneva, and Y. Liu (2003). Large Earthquake Cycles and Intermittent Criticality on Heterogeneous Faults Due to Evolving Stress and Seismicity. J. Geophys. Res., 108, B6, 2307, doi:10.1029/2002JB002121.
Dahmen, K., Ertas, D. and Ben-Zion, Y. (1998). Gutenberg-Richter and characteristic earthquake behavior in simple mean-field models of heterogeneous faults. Phys. Rev. E, 58, 1494-1501.
Eneva, M. and Ben-Zion, Y. (1997). Application of pattern recognition techniques to earthquake catalogs generated by model of segmented fault systems in three-dimensional elastic solids. J. Geophys. Res., 102, 24513-24528.
Fisher, D. S., K. Dahmen, S. Ramanathan and Y. Ben-Zion (1997). Statistics of Earthquakes in Simple Models of Heterogeneous Faults. Phys. Rev. Lett., 78, 4885-4888.
Utsu, T., Y. Ogata, and R. S. Matsu'ura (1995). The centenary of the Omori formula for a decay law of aftershock activity. J. Phys. Earth, 43, 1-33.
Zöller, G., Hainzl, S., and Kurths, J. (2001). Observation of growing correlation length as an indicator for critical point behavior prior to large earthquakes. J. Geophys. Res., 106, 2167-2175.
Zöller, G., Holschneider, M., and Ben-Zion, Y. (2004). Quasi-static and quasi-dynamic modeling of earthquake failure at intermediate scales. Pure Appl. Geophys., 161, 2103-2118, doi:10.1007/s00024-004-2551-0.
Zöller, G., Holschneider, M., and Ben-Zion, Y. (2005a). The role of heterogeneities as a tuning parameter of earthquake dynamics. Pure Appl. Geophys., 162, 1027-1049, doi:10.1007/s00024-004-2660-9.
Zöller, G., Hainzl, S., Holschneider, M., and Ben-Zion, Y. (2005b). Aftershocks resulting from creeping sections in a heterogeneous fault. Geophys. Res. Lett., 32, L03308, doi:10.1029/2004GL021871.
Zöller, G., Hainzl, S., Y. Ben-Zion, and M. Holschneider (2006). Earthquake activity related to seismic cycles in a model for a heterogeneous strike-slip fault. Tectonophysics, in press.