Earthquakes clustering based on maximum likelihood estimation of point process conditional intensity function
Giada Adelfio and Marcello Chiodi
Dipartimento di Scienze Statistiche e Matematiche “Silvio Vianelli” (DSSM), University of Palermo, Italy
Abstract In this paper we propose a clustering technique to separate and identify the two main components of seismicity: the background seismicity and the triggered one. The proposed method assigns each earthquake either to a cluster of earthquakes or to the set of isolated events according to the intensity function. This function is estimated by maximum likelihood methods following the theory of point processes, iteratively changing the assignment of the events. The technique combines nonparametric estimation methods with a computational procedure for the maximization of the point-process likelihood function.
1 Introduction
Because different seismogenic features control the seismic release of clustered and background seismicity (Adelfio et al. (2005)), in describing the seismicity of an area in the space, time and magnitude domains it can be useful to study the features of independent events and of strongly correlated ones separately. The two kinds of events give different information on the seismicity of an area. For the short-term (or real-time) prediction of seismicity, and to estimate parameters of phenomenological laws, a good definition of earthquake clusters is needed. Furthermore, the prediction of the occurrence of large earthquakes (related to the assessment of seismic risk in space and time) is complicated by the presence of clusters of aftershocks, which are superimposed on the background seismicity and mask its principal characteristics. For these purposes a preliminary subdivision of a seismic catalog into background seismicity and clustered events is required. In this regard, a seismic sequence detection technique is presented; it is based on maximum likelihood estimation of the parameters that identify the conditional intensity function of a model describing seismic activity as a clustering process, a slight modification of the ETAS model (Epidemic Type Aftershock-Sequences model; Ogata (1988), Ogata (1998)).
2 Previous seismic clustering methods
Several methods have been proposed to decluster a catalog. Two main classes can be identified: methods that require the definition of a rectangular space-time window around each mainshock, with size depending on the mainshock magnitude (Gardner and Knopoff (1974)), and methods close to single linkage: these identify aftershocks by modelling an interaction zone about each earthquake, assuming that any earthquake occurring within the interaction zone of a prior earthquake is an aftershock and should be considered statistically dependent on it (Reasenberg (1985)). The second class of methods has the advantage of not imposing a window on the final size or shape of the cluster, but the choice of the coefficients defining the link distances in space and time may be influenced by the researcher's experience.
To avoid such difficulties, Ogata et al. (2002) proposed a stochastic method associating with each event a probability of being either a background event or an offspring generated by other events, based on the ETAS model for clustering patterns; a random assignment of events generates a thinned catalog, in which events with a higher probability of being mainshocks are more likely to be included, and a nonhomogeneous Poisson process is used to model their spatial intensity. This procedure identifies the two complementary subprocesses of the seismic process: the background subprocess and the cluster (or offspring) subprocess.
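As an illustration of the first class, a single window-based declustering pass can be sketched as follows. The window-size functions `window_days` and `window_km` are user-supplied placeholders; the empirical window tables of Gardner and Knopoff (1974) would be plugged in there. This sketch is ours, not the authors' code.

```python
import numpy as np

def decluster_window(times, mags, window_days, window_km, xs, ys):
    """Window-based declustering sketch: any smaller event inside the
    space-time window of a larger event is flagged as an aftershock.
    `window_days(m)` and `window_km(m)` give the window duration and
    radius for a mainshock of magnitude m."""
    order = np.argsort(-mags)              # treat larger events first
    is_aftershock = np.zeros(len(times), dtype=bool)
    for i in order:
        if is_aftershock[i]:
            continue                       # flagged events cannot be mainshocks
        dt = times - times[i]              # days after candidate mainshock
        dist = np.hypot(xs - xs[i], ys - ys[i])
        inside = (dt > 0) & (dt <= window_days(mags[i])) \
                 & (dist <= window_km(mags[i])) & (mags < mags[i])
        is_aftershock |= inside
    return ~is_aftershock                  # True marks background events
```

Events far from any mainshock in space or time survive the pass and form the background catalog.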
3 Conditional intensity function in point processes
To provide a quantitative evaluation of future seismic activity the conditional intensity function is crucial. It is proportional to the probability that an event with magnitude M takes place at time t at a point in space with coordinates (x, y). The conditional intensity function of a space-time point process can be defined as:
λ(t, x, y|H_t) = lim_{Δt,Δx,Δy→0} Pr_{ΔtΔxΔy}(t, x, y|H_t) / (Δt Δx Δy)    (1)

where H_t is the space-time occurrence history of the process up to time t; Δt, Δx, Δy are infinitesimal time and space increments; and Pr_{ΔtΔxΔy}(t, x, y|H_t) represents the history-dependent probability that an event occurs in the volume {[t, t + Δt) × [x, x + Δx) × [y, y + Δy)}. The conditional intensity function completely identifies the features of the associated point process: if it is independent of the history and depends only on the current time and the spatial location, (1) yields a nonhomogeneous Poisson process, while a constant conditional intensity yields a stationary Poisson process. Other interesting point processes are defined by different conditional intensity functions. For example, one can consider probability models describing earthquake catalogs as realizations of branching or epidemic-type point processes, and models belonging to the wider class of Markov point processes, which assume that previous events have an inhibiting effect on the following ones. Models of the first type can be identified with self-exciting point processes, while the second are represented by self-correcting processes such as the strain-release model (Schoenberg and Bolt (2000)). The ETAS model is a self-exciting point process and represents the activity of earthquakes in a region during a period of time, following a branching structure. In particular, it can be considered an extension of the Hawkes model (Hawkes (1971)), a generalized Poisson cluster process that associates with cluster centers a branching process of descendants.
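A process with a given intensity can be simulated by thinning; for the history-independent case of (1), i.e. a nonhomogeneous Poisson process, a minimal sketch (ours, for illustration only) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_nhpp(lam, lam_max, t_max, x_max, y_max):
    """Simulate a space-time nonhomogeneous Poisson process on
    [0,t_max] x [0,x_max] x [0,y_max] by thinning a homogeneous
    process of rate lam_max, which must dominate lam(t, x, y)
    everywhere on the domain."""
    n = rng.poisson(lam_max * t_max * x_max * y_max)
    t = rng.uniform(0, t_max, n)
    x = rng.uniform(0, x_max, n)
    y = rng.uniform(0, y_max, n)
    keep = rng.uniform(0, lam_max, n) < lam(t, x, y)  # accept with prob lam/lam_max
    order = np.argsort(t[keep])                       # return in time order
    return t[keep][order], x[keep][order], y[keep][order]
```

For history-dependent intensities such as ETAS, the same acceptance-rejection idea applies but candidate points must be generated sequentially, conditioning on the events already accepted.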
4 The proposed clustering method
The clustering technique that we propose leads to a computationally intensive procedure, implemented in R (R Development Core Team (2005)). It identifies a partition of events P_{k+1} formed by k + 1 sets: the set of background seismicity and the k disjoint sets of clustered events. It iteratively changes the partition, assigning events either to the background seismicity or to the j-th cluster (j = 1, ..., k), given the current partition, on the basis of the variation in the likelihood function due to moving an event from one set to another. At each step, maximum likelihood estimation of the seismicity parameters is performed.
4.1 Main steps
First, a starting classification is needed; it is obtained by a single-linkage-like procedure. Then the parameters of the intensity function of the ETAS model, which describes clustering phenomena in seismic activity, are estimated iteratively by maximum likelihood. Unlike the ETAS model, in our approach the space densities are estimated inside each cluster by a bivariate kernel estimator, with smoothing constant evaluated by Silverman's formula (Silverman (1986)). Let [T_0, T_max] and Ω_xy be the time and space domains of observation, respectively; according to the theory of point processes, the log-likelihood function to be maximized is:

log L = Σ_{i=1}^{n} log λ(x_i, y_i, t_i) − ∫_{T_0}^{T_max} ∫_{Ω_xy} λ(x, y, t) dx dy dt    (2)

where:

λ(x, y, t) = λ_t μ(x, y) + Σ_{j=1, t_j<t}^{k} g_j(x, y) K_0 exp[α(m_j − m_0)] / (t − t_j + c_j)^{p_j}    (3)
In (3), t_j and m_j are the time and magnitude of the mainshock of cluster j, g_j(x, y) is the space intensity function of cluster j (computed using the n_j points belonging to the j-th cluster, including the j-th mainshock), and μ(x, y) is the background space intensity function (computed using the n_0 isolated points and the k mainshocks). In the evaluation of (3) two different parametrizations are possible: one with a common parameter c and a common parameter p for the whole partition, and one with k parameters c_j and k parameters p_j, varying over the clusters, for each found partition. The two choices can be compared at the end of the procedure. To move events from their current position, the change in the likelihood function (2) for each event with coordinates (x_i, y_i, t_i, M_i) is needed: indeed, our iterative procedure moves seismic events, assigning each earthquake either to the set of background seismicity or to the j-th cluster, according to which term of (3) is maximized. The iterative procedure stops when the current classification does not change after a whole loop of reassignment.
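The two nonparametric ingredients, the kernel space densities with a Silverman-type bandwidth and the reassignment rule based on the largest term of (3), can be sketched as follows. The helper names are ours, the univariate Silverman rule is applied per coordinate (one common variant of his formula), and the simplified common-c, common-p parametrization of (3) is used:

```python
import numpy as np

def silverman_h(z):
    """Rule-of-thumb bandwidth for one coordinate (univariate rule)."""
    return 1.06 * np.std(z, ddof=1) * len(z) ** (-1 / 5)

def kde2d(xs, ys):
    """Return a callable estimating the space density of the points
    (xs, ys) with a bivariate Gaussian product kernel."""
    hx, hy = silverman_h(xs), silverman_h(ys)

    def density(x, y):
        k = np.exp(-0.5 * (((x - xs) / hx) ** 2 + ((y - ys) / hy) ** 2))
        return k.sum() / (len(xs) * 2 * np.pi * hx * hy)

    return density

def best_assignment(ev, mainshocks, mu_bg, g, K0, alpha, c, p, m0):
    """Index of the largest component of intensity (3) at event
    ev = (t, x, y): -1 for the background term, j >= 0 for cluster j.
    `mu_bg` and `g[j]` are density callables (e.g. from kde2d);
    `mainshocks` is a list of (t_j, m_j) pairs."""
    t, x, y = ev
    best, best_val = -1, mu_bg(x, y)
    for j, (tj, mj) in enumerate(mainshocks):
        if tj >= t:
            continue                  # only earlier mainshocks can trigger
        val = K0 * np.exp(alpha * (mj - m0)) * g[j](x, y) / (t - tj + c) ** p
        if val > best_val:
            best, best_val = j, val
    return best
```

A full iteration would call `best_assignment` for every event, rebuild the densities from the new partition, re-estimate the ETAS parameters, and repeat until no event changes set.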
5 Conclusion
Starting from a seismic catalog, the procedure proposed here returns a plausible separation of the components of seismicity, with clusters that have a good interpretability. This method can be the basis for analyzing the complexity of the seismogenetic processes relative to each sequence and to the background seismicity separately. Indeed, the parameters that control the way in which strain energy is released could differ strongly between clusters and isolated events; this could also be observed as significant differences in the parameter estimates of a phenomenological model applied to sets of earthquakes from the two different processes.
References
Adelfio, G., Chiodi, M., De Luca, L., Luzio, D., and Vitale, M. (2005). Southern-Tyrrhenian seismicity in space-time-magnitude domain. Submitted for publication.
Gardner, J. and Knopoff, L. (1974). Is the sequence of earthquakes in southern California, with aftershocks removed, Poissonian? Bulletin of the Seismological Society of America, 64, pages 1363–1367.
Hawkes, A. (1971). Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1), pages 83–90.
Ogata, Y. (1988). Statistical models for earthquake occurrences and residual analysis for point processes. Journal of the American Statistical Association, 83, 401, pages 9–27.
Ogata, Y. (1998). Space-time point-process models for earthquake occurrences. Ann. Inst. Statist. Math, vol. 50, no. 2, pages 379–402.
Ogata, Y., Zhuang, J., and Vere-Jones, D. (2002). Stochastic declustering of space-time earthquake occurrences. Journal of the American Statistical Association, 97, 458, pages 369–379.
R Development Core Team (2005). R: A language and environment for statistical com- puting. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07- 0.
Reasenberg, P. (1985). Second-order moment of central California seismicity, 1969–1982. Journal of Geophysical Research, vol. 90, no. B7, pages 5479–5495.
Schoenberg, F. and Bolt, B. (2000). Short-term exciting, long-term correcting models for earthquake catalogs.
Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Chapman and Hall, London.
The impact of the spatial uniform distribution of seismicity on probabilistic seismic hazard estimation
C. Beauval(*), S. Hainzl, and F. Scherbaum
Institute of Geosciences, University of Potsdam, Potsdam, Germany (*) now at IRD, Géosciences Azur, Sophia-Antipolis, France
Corresponding Author: C. Beauval, [email protected]
Abstract: In order to compute probabilistic seismic hazard, seismic sources must be characterized. In low-seismicity regions, seismicity is rather diffuse and faults are difficult to identify, thus large areal source zones are mostly used. The hypothesis is that seismicity is uniformly distributed inside each areal seismic source zone. In this study, the impact of this hypothesis on probabilistic hazard estimation is quantified. The D-value (fractal dimension) quantifies the degree of clustering of earthquakes. Fractal seismicity distributions are generated, and probabilistic hazard is computed for each fractal distribution; the impact quantifies the deviation from the uniform hazard value. Correlations between D-values and impacts are derived, so that in real cases uncertainty bounds on hazard can be deduced from D-value estimates.
1. Introduction
The first step towards the establishment of a seismic building code in a country is the evaluation of probabilistic seismic hazard. The method for estimating probabilistic hazard was initiated more than 30 years ago (Cornell, 1968; McGuire, 1976). Since then, several variants of the method have been proposed and many aspects of the probabilistic computation have been studied and improved, but one hypothesis dealing with the characterisation of seismicity in space is still widely in use in low-seismicity regions: seismicity is assumed to be uniformly distributed inside large areal source zones. Indeed, because low-seismicity regions usually display diffuse seismicity and active faults are very difficult to identify, large areal source zones are defined according to different geophysical and geological criteria. However, it is well known that the distribution of seismicity in space is not uniform but clustered. One way of analyzing the spatial distribution of seismicity is to determine the fractal dimension (D-value). The D-value is an extension of the Euclidean dimension and measures the degree of clustering of earthquakes: in a 2D space, D can be a decimal number and ranges from 0 (point) to 2.0 (uniform distribution in space). This study aims at characterizing the uncertainty on probabilistic hazard depending on the knowledge of the degree of clustering of the seismicity distribution. The fractal dimension considered in this study is the correlation dimension (Grassberger and Procaccia, 1983). We first establish a correlation between D and the associated uncertainty on hazard, through the generation of synthetic seismicity distributions. Then we apply this approach in two regions of France and Germany and deduce, from the D-value estimates, the corresponding uncertainty bounds on probabilistic hazard estimations.
2. Synthetic source zone: which impact for which D?
A square seismic source zone is considered and the hazard is estimated for a grid of sites located inside the source zone (Fig. 1a,b). Synthetic seismicity distributions are generated over the source zone, increasing the clustering of the seismicity from a line (D≅1.0) to a uniform distribution over the area (D≅2.0). The D-value is computed for each seismicity distribution by estimating the slope of the linear part of the correlation integral (Grassberger and Procaccia, 1983; Goltz, 1998). The correlation integral is established from 3000 events generated over the zone according to the spatial probability density, and D is computed over the interval [5-30] km.
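The D-value estimation from the correlation integral can be sketched as follows (a brute-force pairwise version, ours, for illustration; scaling range as in the text):

```python
import numpy as np

def correlation_dimension(x, y, r_min, r_max, n_r=20):
    """Grassberger-Procaccia correlation dimension: slope of
    log C(r) versus log r, where C(r) is the fraction of event pairs
    separated by less than r, fitted over the range [r_min, r_max]."""
    pts = np.column_stack([x, y])
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    dist = dist[np.triu_indices(len(pts), k=1)]   # unique pairs only
    rs = np.logspace(np.log10(r_min), np.log10(r_max), n_r)
    C = np.array([(dist < r).mean() for r in rs])
    slope, _ = np.polyfit(np.log(rs), np.log(C), 1)
    return slope
```

For a uniform cloud the slope approaches 2, for events aligned on a fault it approaches 1; boundary effects of a finite zone bias the estimate slightly downward.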
Figure 1. Upper row: D=1.05, lower row: D=1.45. (a) Synthetic seismicity distribution, number of M≥3 events per year per 5×5 km² cell; square: grid of sites. (b) Impact of the uniform hypothesis on probabilistic hazard (T=475 y), see Eq. 1; (c) corresponding distribution of impacts.
A uniform distribution of seismicity over the source zone (i.e. D=2.0) results in identical acceleration values all over the grid. The impact on probabilistic hazard is defined as the difference between the uniform acceleration Aunif and the estimated one A, normalized by the uniform value and expressed in percentage:
I = (A_unif − A) / A_unif · 100    (1)
Therefore, positive impacts correspond to sites where the uniform distribution of seismicity results in an increase of hazard. The probabilistic seismic hazard is estimated according to the classical methodology and earthquakes are assumed to follow a Poissonian process in time (Cornell, 1968; McGuire, 1976). An acceleration determined for a return period of e.g. 475 yrs has a probability of 10% of being exceeded at least once over a time window of 50 years. Once the spatial probability density distribution is obtained, the seismic rate is distributed over the source zone. The overall seismicity rate is fixed to 100 events with M≥3.0 per year. For each unit, the truncated Gutenberg-Richter recurrence curve for magnitudes is modeled with a slope b=1.0 and a maximum magnitude fixed to 6.0. Furthermore, the minimum magnitude considered in the probabilistic computation is 4.5 and the ground-motion predictions of the attenuation relationship are truncated at +3σ above the median. When the seismicity is distributed uniformly, the resulting PGA (Peak Ground Acceleration) is 0.29 g at 475 years, which corresponds to sites of highest seismic hazard in countries like France or Germany.
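The Poissonian link between return period and exceedance probability quoted above can be checked directly; a small sketch:

```python
import math

def exceedance_prob(return_period, exposure_time):
    """Probability of at least one exceedance in `exposure_time` years
    for a Poissonian event with mean return period `return_period`."""
    return 1.0 - math.exp(-exposure_time / return_period)

def return_period(prob, exposure_time):
    """Return period whose exceedance probability over `exposure_time`
    equals `prob` (inverse of the function above)."""
    return -exposure_time / math.log(1.0 - prob)
```

For example, `exceedance_prob(475, 50)` is about 0.10, i.e. the 475-year return period corresponds to the conventional 10% in 50 years.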
However, the results of this study do not depend on absolute values since impacts correspond to a normalized difference with respect to the uniform hazard value. Examples of impact estimation are displayed in Fig. 1 for two synthetic seismicity distributions characterized by two different D-values (1.05 and 1.45). The impact distribution (Fig. 1b,c) can be considered as the uncertainty distribution characterizing the uniform hazard estimated at a site anywhere inside the source zone.
3. Impact on probabilistic hazard versus D-value of source zones
For a fixed theoretical D-value, different spatial patterns and slightly different computed D-values may result. Impacts are determined for 30 runs per theoretical D-value. From each distribution, the 15% and 85% percentiles are computed. Then, for each percentile, mean values are determined within a 0.1-interval. The results are displayed in Fig. 2a for the return period of 475 yrs. Sites where the uniform distribution of seismicity results in an increase of hazard are more numerous than sites where it results in a decrease. Therefore, the uniform distribution of seismicity within a large areal source zone leads globally to an over-estimation of hazard. If the seismicity embedded in an areal source zone is in reality distributed along a line, the impact of the uniform hypothesis on hazard can be as high as 70% for the 85% percentile. As expected, the 15% and 85% percentiles tend to zero when the D-value increases, as the seismicity becomes less and less clustered.
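The aggregation just described (per-run percentiles, then averaging within D-bins) can be sketched as follows (array shapes and names are ours, for illustration):

```python
import numpy as np

def aggregate_impacts(d_values, impacts, bin_width=0.1):
    """For each run, take the 15% and 85% percentiles of the site
    impacts; then average each percentile over runs whose computed
    D-value falls in the same bin of width `bin_width`.
    `impacts` holds one row of site impacts per run."""
    p15 = np.percentile(impacts, 15, axis=1)
    p85 = np.percentile(impacts, 85, axis=1)
    bins = np.floor(np.asarray(d_values) / bin_width) * bin_width
    centers = np.unique(bins)
    mean15 = np.array([p15[bins == b].mean() for b in centers])
    mean85 = np.array([p85[bins == b].mean() for b in centers])
    return centers + bin_width / 2, mean15, mean85
```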
Figure 2. Impacts of the spatial uniform distribution of seismicity on probabilistic hazard. (a) Results at 475 yrs; crosses: 15% and 85% percentiles of distributions from 30 runs per D-value; curves: mean percentiles computed over a 0.1-interval. (b) 15% and 85% percentiles at 475, 10^4 and 10^5 yrs; the thicker the line, the longer the return period. (c) Medians of impact distributions.
In Fig. 2b, the percentile curves for the return periods 10^4 and 10^5 years are superimposed on the results at 475 years. The median values of the impact distributions for the three return periods are also displayed (Fig. 2c). When the return period increases, the over-estimation increases slightly: the percentiles shift to higher values.
4. Application to two regions: according to D, what is the uncertainty on hazard?
The approach is applied in two regions of France and Germany. In the Alps, D-values are computed from the instrumental LDG catalog (homogeneous magnitude ML; Nicolas et al., 1998); all magnitudes higher than 3.0 are taken into account over the period [1975-1999]. For the Lower Rhine Area, D-values are computed from the instrumental part of the German catalogue (Leydecker, 2004); all magnitudes higher than 2.5 over the period 1975-2004 are included. As depth determinations bear large uncertainties, distances are estimated in 2D only. A spatial mapping of D-values in these two regions indicates that the D-value ranges roughly between 1.3 and 1.6 (see Fig. 3 for the German part). According to the results on synthetic
seismic distributions, this range of D-values corresponds to an over-estimation of hazard of up to 60% for the 85% percentile, or an under-estimation of hazard of ~25% for the 15% percentile.
5. Conclusions
If minimum and maximum bounds for the fractal D-value can be estimated for a region, the uncertainties on probabilistic hazard due to the uniform hypothesis in space can be bounded accordingly. A correlation between the impacts on hazard and the D-values of the source zones is derived from the generation of synthetic seismicity distributions. The results show that a uniform distribution of seismicity inside an areal source zone leads on average to conservative hazard estimates. The approach applied in the Alps and in the Lower Rhine Area yields an over-estimation of the probabilistic hazard of up to 60% for the 85% percentile. The only way to better constrain the spatial distribution of seismicity is to identify active faults; more geophysical studies and longer instrumental catalogues are required in low-seismicity countries such as France or Germany.
Figure 3. Application in the Lower Rhine Area. (a) Mapping of D: for each grid point, all earthquakes M≥2.5 closer than 70 km to the grid point are taken into account; D is computed from the correlation integral over the range [8-50] km; dashed lines define the seismotectonic source zone 'Lower Rhine Area' (zoning by Leydecker and Aichele, 1998). (b) Misfit (least squares); only D-values with δD<0.1 are displayed. (c) Deduction of the impact of the uniform distribution of seismicity on the probabilistic hazard.
6. References
Cornell, C.A. (1968), Engineering seismic risk analysis, Bull. Seism. Soc. Am., 58, 1583-1606.
Goltz, C. (1998), Fractal and chaotic properties of earthquakes, Lecture Notes in Earth Sciences, Springer, 175 p.
Grassberger, P., and Procaccia, I. (1983), Measuring the strangeness of strange attractors, Physica, 9D, 189-208.
Leydecker, G. (2004), Earthquake catalogue for the Federal Republic of Germany and adjacent areas, http://www.bgr.de/quakecat.
Leydecker, G., and Aichele, H. (1998), The seismogeographical regionalisation for Germany: the prime example of third-level regionalisation, Geologisches Jahrbuch, E55, 85-98.
McGuire, R.K. (1976), Fortran computer program for seismic risk analysis, USGS Open-File Report 76-67.
Nicolas, M., Bethoux, N., and Madeddu, B. (1998), Instrumental seismicity of the Western Alps: a revised catalogue, Pageoph, 152, 707-731.
Producing Omori’s law from stochastic stress transfer and release
M. Bebbington1, and K. Borovkov2
1Massey University, Private Bag 11222, Palmerston North, New Zealand 2 University of Melbourne, Parkville 3052, Australia
Corresponding Author: M. Bebbington, [email protected]
Abstract: We present an alternative to the ETAS model for aftershocks. The continuous-time two-node Markov stress release/transfer model has one (mainshock) node which transfers 'stress' to the second (aftershock) node. When the hazard function at the second node is an exponential function of the stress, the model reproduces Omori's law. When the second node is also allowed tectonic input, which may be negative, i.e., aseismic slip, the frequency of events decays in a form that parallels the constitutive law derived by Dieterich (1994). This fits the modified Omori law very well, with a wide possible variation in the p-value. We illustrate the model by fitting it to aftershock data from the Valparaiso earthquake of March 3, 1985, and show how it can be used in place of conventional 'stacking' procedures to determine regional p-values.
1. Introduction
The modified Omori formula for the frequency of aftershocks is
n(t) = K (t + c)^{−p},    (1)
where usually p ∈(0.9,1.8) , differing from sequence to sequence. Stochastic models for temporal shallow seismicity must include the existence of aftershocks, catering for this variation in p . An example is the ETAS model (Ogata, 1988), incorporating (1) together with a constant background rate. We construct an alternative model, which besides reproducing (1), also allows for more general main sequence behaviour.
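For concreteness, the constants of (1) can be estimated by maximizing the point-process log-likelihood Σ log n(t_i) − ∫ n(t) dt; a minimal profile-likelihood sketch over a (c, p) grid (ours, not the authors' fitting code):

```python
import numpy as np

def fit_omori(times, t_end, c_grid=None, p_grid=None):
    """Fit n(t) = K (t + c)^(-p) to aftershock times in (0, t_end].
    For each (c, p) the ML estimate of K is n / I(c, p), with
    I = integral of (t + c)^(-p) over (0, t_end]; the best (c, p)
    maximizes the profiled log-likelihood."""
    t = np.asarray(times)
    n = len(t)
    c_grid = np.geomspace(1e-3, 10.0, 60) if c_grid is None else c_grid
    p_grid = np.linspace(0.5, 2.0, 76) if p_grid is None else p_grid
    best_ll, best = -np.inf, None
    for c in c_grid:
        slog = np.sum(np.log(t + c))
        for p in p_grid:
            if abs(p - 1.0) < 1e-9:
                I = np.log(t_end + c) - np.log(c)
            else:
                I = ((t_end + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
            K = n / I
            ll = n * np.log(K) - p * slog - K * I
            if ll > best_ll:
                best_ll, best = ll, (K, c, p)
    return best
```

A synthetic sequence generated from known (K, c, p) by inverting the cumulative intensity recovers p to within grid resolution plus sampling noise.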
2. Linked Stress Release Model (Liu et al., 1999; see also Bebbington and Harte, 2003)
The model is based on a stochastic version of elastic rebound. The seismic area is divided into "regions", with the stress (Benioff strain) in region i evolving as
X_i(t) = X_i(0) + ρ_i t − Σ_{j=1}^{n} θ_ij S^{(j)}(t)
where S^{(j)}(t) is the cumulative stress release in region j, and θ_ij is the proportion of stress drop transferred from region j to region i (with θ_ii = 1), which may be positive (damping) or negative (excitation). The {ρ_i} are the rates of tectonic input. The probability of an earthquake occurrence is controlled by a hazard function Ψ_i(x) = exp(μ_i + ν_i x). This results in a point process with conditional intensity

λ_i(t) = Ψ_i(X_i(t)) = exp( α_i + ν_i [ ρ_i t − Σ_j θ_ij S^{(j)}(t) ] ), with α_i = μ_i + ν_i X_i(0),    (2)
for each region i, which can be numerically fitted to data using maximum likelihood. Using a two-region model, we will represent mainshocks by one region (henceforth node), denoted A, and aftershocks by the second node (B), as shown in Figure 1. We assume that there is no transfer of stress from B to A, i.e., θ_AB = 0.
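Equation (2) can be evaluated directly from an event history; a minimal sketch with illustrative (not fitted) parameter values in the spirit of the two-node setup:

```python
import numpy as np

def srm_intensity(t, i, events, alpha, nu, rho, theta):
    """Conditional intensity (2) of the linked stress release model
    for region i at time t. `events` is a list of
    (time, region, stress_drop) tuples with times < t;
    theta[i][j] is the transfer coefficient from region j to i."""
    transfer = sum(theta[i][j] * s for (tk, j, s) in events if tk < t)
    return np.exp(alpha[i] + nu[i] * (rho[i] * t - transfer))
```

With θ_AB = 0 the intensity at node A is unaffected by node-B events, while a negative θ_BA makes mainshocks at A raise the intensity at B (excitation).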
Figure 1 Two-node stress transfer aftershock model
3. Dieterich's Formula
The two-node model produces (Borovkov and Bebbington, 2003) a decay formula
f(t) = ψ / [ e^{−st} + a ψ (1 − e^{−st}) / s ],    (3)

where s = ν_B ρ_B, ψ = Ψ_B(0), a = E[e^{ν_B ξ_B}] − 1, and ξ_B is the 'aftershock size' (stress). Dieterich (1994) created a general analytic model of earthquake nucleation and time to instability that leads to an aftershock rate

R = r / [ ( exp(−Δτ/(Aσ)) − 1 ) exp(−t/t_a) + 1 ],    (4)

where r is the steady background rate, Δτ the stress step, Aσ a constitutive parameter, and t_a the aftershock relaxation time.
By setting

s = 1/t_a,    ψ = r exp(Δτ/(Aσ)),    a = 1/(r t_a),
(3) and (4) are equivalent, and hence, provided s > 0, (3) gives the Omori law for t < t_a, after which seismicity merges into the background rate given by ρ > 0. Note that the two-node model is fitted to observed event data, rather than to physical quantities as in formula (4), and provides a (stochastic) mechanism for variation around the mean rate.
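Under this substitution, the equivalence of the reconstructed forms of (3) and (4) can be verified numerically; a short sketch with hypothetical parameter values:

```python
import numpy as np

# Hypothetical Dieterich-style parameters (illustrative only)
r = 0.05                  # background rate
dtau_over_Asigma = 5.0    # stress step over A*sigma
t_a = 200.0               # aftershock relaxation time

# Substitution relating the two formulas
s = 1.0 / t_a
psi = r * np.exp(dtau_over_Asigma)
a = 1.0 / (r * t_a)

t = np.linspace(0.0, 1000.0, 2001)

# Two-node decay formula (3)
f = psi / (np.exp(-s * t) + a * psi * (1.0 - np.exp(-s * t)) / s)

# Dieterich rate (4)
R = r / ((np.exp(-dtau_over_Asigma) - 1.0) * np.exp(-t / t_a) + 1.0)

assert np.allclose(f, R)   # identical curves under the substitution
```

At t = 0 both expressions equal ψ = r·exp(Δτ/(Aσ)), and for t » t_a both relax to the background rate r, as expected.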
4. Variation in p While (3) gives the Omori law if s > 0 , it is also possible for ρ (and hence s ) to be negative, representing aseismic stress decrease. Numerically fitting (3) to the modified Omori law (1) produces 0.5 < p < 1 (s > 0) and 1 < p < 1.5 (s < 0) for reasonable values of s, ψ and a (see Figure 2).
Figure 2 Variation in decay rate with s
5. The Valparaiso (Chile) Earthquake of March 3, 1985
The Valparaiso aftershock sequence has 88 events of magnitude 5.0 or greater in the 802 days following the mainshock. The decay rate fits the modified Omori formula very well with p = 1.038 (Kisslinger, 1988). Fitting the intensity (2) to the sequence of events as shown in Figure 3 produces

s = νρ = 0.0248 × (−0.0069) = −0.000172,
ψ = Ψ(0) = e^α = e^{4.9677} = 143.7,

and, averaging over the events,

a = E[e^{νξ}] − 1 = (1/88) Σ_k exp( 0.0248 × 10^{0.75(M_k − 5.0)} ) − 1 = 0.153.

Numerically matching the two-node decay equation (3) to the modified Omori formula (1) then gives p = 1.046.
Figure 3 Valparaiso aftershock sequence and fitted two-node intensity
6. Aftershock Stacking
Felzer et al. (2003) estimate the p-value for California by scaling and then stacking aftershock sequences. Considering the 9 sequences (1983-1999, M ≥ 5.8) identified in that paper, we find (from the PDE catalog) 68 aftershocks of M ≥ 4.8 within (0.02, 180) days of the mainshocks. Fitting the two-node (7-parameter) model, we get a ΔAIC of 1.85 over the full (8-parameter, i.e., θ_AB ≠ 0) model, with estimates

α = (−6.25, −4.27),   ν = (0.016, 0.089),   ρ = (0.041, −0.027),   Θ = [1, 0; −0.983, 1].
Now

a = ave_{aftershocks} [ exp( 0.089 × 10^{0.75(Mag − 4.8)} ) ] − 1 = 0.619,

ψ = ave_{mainshocks} [ exp( 0.089 × 0.983 × 10^{0.75(Mag − 4.8)} ) ] = 7216.4,

and

s = νρ = 0.089 × (−0.027) = −0.0024.
This gives p = 1.074, compared to p = 1.08 from Felzer et al. (2003). Kisslinger and Jones (1991) estimate p = 1.11 for Southern California.
7. Summary
The two-node model can reproduce both the Omori formula and Dieterich's formula. Through the latter, it achieves a range of at least 0.5 < p < 1.7. Fitting the SRM to an aftershock sequence confirms that the model decays correctly. The two-node model provides a regional-scale model for both main sequence and aftershock events, and provides an alternative to existing methods of 'stacking' aftershocks for regional analyses.
8. References
Bebbington, M. and Harte, D. (2003), The linked stress release model for spatio-temporal seismicity: formulations, procedures and applications, Geophys. J. Int., 154, 925-946.
Borovkov, K. and Bebbington, M. (2003), A stochastic two-node stress transfer model reproducing Omori's law, Pure Appl. Geophys., 160, 1429-1455.
Dieterich, J. (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99, 2601-2618.
Felzer, K.R., Abercrombie, R.E. and Ekstrom, G. (2003), Secondary aftershocks and their importance for aftershock forecasting, Bull. Seismol. Soc. Amer., 93, 1433-1448.
Kisslinger, C. and Jones, L.M. (1991), Properties of aftershock sequences in southern California, J. Geophys. Res., 96, 11947-11958.
Liu, J., Chen, Y., Shi, Y. and Vere-Jones, D. (1999), Coupled stress release model for time-dependent seismicity, Pure Appl. Geophys., 155, 649-667.
Ogata, Y. (1988), Statistical models for earthquake occurrences and residual analysis for point processes, J. Amer. Statist. Assoc., 83, 9-27.
Analysis of Aftershocks in a Lithospheric Model with Seismogenic Zone Governed by Damage Rheology
Yehuda Ben-Zion1, and Vladimir Lyakhovsky2
1Department of Earth Sciences, University of Southern California Los Angeles, CA, 90089-0740, USA 2 Geological Survey of Israel, Jerusalem 95501, Israel
Corresponding Author: Yehuda Ben-Zion, [email protected]
Abstract: We perform analytical and numerical studies of aftershock sequences following abrupt steps of strain in a rheologically-layered model of the lithosphere. The model consists of a weak sedimentary layer, over a seismogenic zone governed by a visco-elastic damage rheology, underlain by a visco-elastic upper mantle. The damage rheology accounts for fundamental irreversible aspects of brittle rock deformation and is constrained by laboratory data of fracture and friction experiments. A 1-D version of the visco-elastic damage rheology leads to an exponential analytical solution for aftershock rates. The corresponding solution for a 3-D volume is expected to be a sum of exponentials. The exponential solution depends primarily on a material parameter R given by the ratio of the timescale for damage increase to the timescale for gradual inelastic deformation, and to a lesser extent on the initial damage and a threshold strain-state for material degradation. The parameter R is also inversely proportional to the degree of seismic coupling across the fault. Simplifying the governing equations leads to a solution following the modified Omori power law decay with an analytical exponent p = 1. In addition, the results associated with the general exponential expression can be fitted for various values of R with the modified Omori law. The same holds for the decay rates of aftershocks simulated numerically using the 3-D layered lithospheric model. The results indicate that low R-values (e.g., R ≤ 1) corresponding to cold brittle material produce long Omori-type aftershock sequences with high event productivity, while high R-values (e.g., R ≥ 5) corresponding to hot viscous material produce a short diffuse response with low event productivity. The frequency-size statistics of aftershocks simulated in 3-D cases with low R-values follow the Gutenberg-Richter power law relation, while events simulated for high R-values are concentrated in a narrow magnitude range.
Increasing the thickness of the weak sedimentary cover produces results that are similar to those associated with higher R-values. Increasing the assumed geothermal gradient reduces the depth extent of the simulated earthquakes. The magnitude of the largest simulated aftershocks is compatible with the Båth law for a range of the dynamic damage-weakening parameter. The results provide a physical basis for interpreting the main observed features of aftershock sequences in terms of basic structural and material properties.
1. Introduction
Large earthquakes are typically followed by increased earthquake activity referred to as aftershocks, with decay rates that can be described (e.g., Utsu et al., 1995) by the modified Omori law

ΔN/Δt = K (c + t)^{−p},    (1)

where N is the cumulative number of events with magnitude larger than a prescribed cutoff, t is the time after the mainshock, and K, c, and p are empirical constants. However, aftershock decay rates can also be fitted (e.g., Gross and Kisslinger, 1994; Narteau et al., 2002) with exponential and other functions. The occurrence and properties of aftershocks exhibit a number of spatio-temporal variations that are unlikely to be associated with statistical fluctuations. The depth extent of aftershocks is correlated with the geothermal gradient (Magistrale, 2002) and the early parts of aftershock sequences may extend deeper than the regular ongoing seismicity (Rolandone et al., 2004). Cold continental regions with low heat flow have high aftershock productivity and long event sequences, while hot continental regions and oceanic lithosphere have low aftershock productivity and short event sequences with fast decay (e.g., Kisslinger and Jones, 1991; Utsu et al., 1995; Utsu, 2002; McGuire et al., 2005). Geothermal areas and volcanic regions with high fluid (magma, water) activity and high heat flow often have swarms of events without a clear separation between mainshocks and aftershocks (Mogi, 1967). In this work we attempt to establish a physical basis for aftershock behavior using a lithospheric model with a seismogenic crust governed by a continuum damage rheology that accounts for key observed features of irreversible rock deformation (Lyakhovsky et al., 1997; Hamiel et al., 2004).
2. Results A detailed background on the employed damage rheology and full derivation of the results are given by Ben-Zion and Lyakhovsky (2006). Here we outline the main analytical results for a 1-D version
of the model, and show illustrative examples associated with the analytical results and 3-D numerical simulations. For a 1-D version of the model, associated with uniform damage evolution, the stress-strain relation is

σ = 2μ₀(1 − α)ε, (2)

where μ₀(1 − α) is the effective elastic modulus, with μ₀ being the initial rigidity of the undamaged solid.
The 1-D version of the kinetic equation for damage evolution is

dα/dt = C_d(ε² − ε₀²), (3)

where ε₀ is a threshold value separating states of strain associated with material degradation and healing, and C_d is a rate constant for material degradation. For positive damage evolution (ε > ε₀), the damage-related inelastic strain rate before the final macroscopic failure is given by

e = C_v (dα/dt) σ, (4)

where C_v (dα/dt) is the inverse of an effective damage-related viscosity. Since we are interested in the relaxation process in a region following the occurrence of a large event, we assume a constant total strain boundary condition. This implies that the rate of elastic strain relaxation is equal to the viscous strain rate, i.e., 2 dε/dt = −e (the factor 2 stems from the common definitions of the strain and strain-rate tensors). With this set of equations and boundary conditions, the result for damage evolution is
dα/dt = C_d {ε_s² exp[R((1 − α)² − (1 − α_s)²)] − ε₀²}, (5)

where α_s and ε_s are the initial levels of damage and strain, respectively, and R = μ₀ C_v is a material parameter. As demonstrated below analytically and numerically, the parameter R plays a dominant role in the aftershock behavior. Ben-Zion and Lyakhovsky (2006) show that R characterizes the ratio between a brittle damage timescale and a viscous relaxation timescale, and that R is inversely proportional to the degree of seismic coupling across the fault. To convert the evolution of the damage state variable α to evolution of the aftershock number N, we assume that α increases linearly with the aftershock number, α = α_s + φN. We thus get
φ dN/dt = C_d {ε_s² exp[R((1 − α_s − φN)² − (1 − α_s)²)] − ε₀²}. (6)

The corresponding solution for a 3-D volume with many interacting elements is expected to be associated with a sum of exponentials similar to (6). Figure 1 illustrates results based on equation (6) for several values of R and the simplifying limit values α_s = 0 and ε₀ = 0. Small R-values, corresponding to highly brittle cases with little viscous relaxation, produce long aftershock sequences with slow decay and high aftershock productivity. In contrast, high R-values, corresponding to relatively viscous cases, produce short aftershock sequences with fast decay and low aftershock productivity. While the aftershock rates are generated using the exponential equation (6), the results can be fitted well by the modified Omori power law relation K(c + t)^(−p). The good fits obtained with the modified Omori law suggest that the leading term of equation (6) is associated with a power law. This can be derived explicitly by making two simplifications. Assuming that φN is sufficiently small, so that the (φN)² term in (6) can be neglected, gives

φ dN/dt = C_d {ε_s² exp[−2φNR(1 − α_s)] − ε₀²}. (7)
If we assume further that the initial strain εs induced by the mainshock is large enough so that
ε₀² << ε_s² exp[−2φNR(1 − α_s)], we get a decay rate following the modified Omori law
dN/dt = Ṅ₀/[2φR(1 − α_s)Ṅ₀ t + 1], (8a)
where Ṅ₀ = C_d ε_s²/φ is the initial aftershock rate. Equation (8a) provides a mapping between the parameters of the modified Omori law and the parameters of our damage rheology model (in a 1-D analysis of uniform deformation):

K = 1/[2φR(1 − α_s)], c = 1/[2φR(1 − α_s)Ṅ₀] = K/Ṅ₀, p = 1. (8b)
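The mapping (8b) can be verified algebraically: substituting K and c into K/(c + t) reproduces the rate (8a) exactly. A short sketch (the parameter values in the check are arbitrary):

```python
def damage_rate(t, N0_dot, phi, R, alpha_s):
    """Aftershock rate from the damage-model solution (8a)."""
    return N0_dot / (2.0 * phi * R * (1.0 - alpha_s) * N0_dot * t + 1.0)

def omori_rate(t, K, c, p):
    """Modified Omori law K / (c + t)**p."""
    return K / (c + t) ** p

def omori_params(N0_dot, phi, R, alpha_s):
    """Mapping (8b) from damage-model parameters to Omori parameters (K, c, p)."""
    K = 1.0 / (2.0 * phi * R * (1.0 - alpha_s))
    return K, K / N0_dot, 1.0
```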
Numerical simulations in a 3-D lithospheric structure confirm that the material parameter R is the major factor controlling the decay rate, duration and productivity of aftershocks, as well as their frequency-size statistics. The simulations also show that a sedimentary layer thicker than a few km produces weaker aftershock sequences with a smaller number of events and faster decay, similar to what is produced by high R-values. Increasing the assumed temperature gradient leads to a smaller number of events, but the aftershock decay rate and duration remain largely unaffected (if the R-value is kept the same). However, increasing the temperature gradient reduces the overall depth extent of the brittle seismogenic zone and the simulated aftershocks. In addition, the maximum hypocenter depth decreases with time from the mainshock due to a transient deepening of the brittle-ductile transition depth produced by the high strain rates generated by the mainshock (Figure 2). These results are compatible with observed correlations between the depth of seismicity and temperature gradient (Magistrale, 2002), and the temporal evolution of the depth of aftershocks following the 1992 Landers, CA, earthquake (Rolandone et al., 2004).
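The behaviour shown in Figure 1 can be reproduced by integrating the rate equation (6) numerically. The sketch below uses simple forward-Euler integration with illustrative, uncalibrated parameter values (the defaults for Cd, eps_s, eps_0 and phi are our assumptions); integration stops if damage saturates (α = α_s + φN reaching 1).

```python
import math

def aftershock_counts(R, Cd=1.0, eps_s=1.0, eps_0=0.35, phi=0.001,
                      alpha_s=0.0, t_max=100.0, dt=0.001):
    """Forward-Euler integration of the damage-model rate equation (6).
    Returns the total number of simulated aftershocks up to t_max (days).
    Parameter values are illustrative, not calibrated to real sequences."""
    N = 0.0
    for _ in range(int(t_max / dt)):
        x = 1.0 - alpha_s - phi * N
        rate = (Cd / phi) * (eps_s ** 2 * math.exp(R * (x ** 2 - (1.0 - alpha_s) ** 2))
                             - eps_0 ** 2)
        # Stop when the sequence dies out or damage saturates (alpha -> 1).
        if rate <= 0.0 or alpha_s + phi * N >= 1.0:
            break
        N += rate * dt
    return N
```

Consistent with the text, a small R (brittle, weak viscous relaxation) yields a much more productive sequence than a large R over the same time window.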
References
Ben-Zion, Y. and V. Lyakhovsky, 2006. Analysis of aftershocks in a lithospheric model with seismogenic zone governed by damage rheology, Geophys. J. Int., in press.
Gross, S. J., and C. Kisslinger, 1994. Tests of models of aftershock rate decay, Bull. Seismol. Soc. Am., 84, 1571-1579.
Hamiel, Y., Y. Liu, V. Lyakhovsky, Y. Ben-Zion and D. Lockner, 2004. A visco-elastic damage model with applications to stable and unstable fracturing, Geophys. J. Int., 159, 1155-1165, doi:10.1111/j.1365-246X.2004.02452.x.
Lyakhovsky, V., Y. Ben-Zion and A. Agnon, 1997. Distributed damage, faulting and friction, J. Geophys. Res., 102, 27,635-27,649.
Magistrale, H., 2002. Relative contributions of crustal temperature and composition to controlling the depth of earthquakes in southern California, Geophys. Res. Lett., 29(10), 1447, doi:10.1029/2001GL014375.
McGuire, J.J., M.S. Boettcher, and T.H. Jordan, 2005. Foreshock sequences and short-term earthquake predictability on East Pacific Rise transform faults, Nature, 434, 457-461.
Mogi, K., 1967. Earthquakes and fractures, Tectonophysics, 5, 35-55.
Narteau, C., P. Shebalin and M. Holschneider, 2002. Temporal limits of the power law aftershock decay rate, J. Geophys. Res., 107, Art. No. 2359.
Rolandone, F., R. Burgmann and R. M. Nadeau, 2004. The evolution of the seismic-aseismic transition during the earthquake cycle: Constraints from the time-dependent depth distribution of aftershocks, Geophys. Res. Lett., 31, L23610, doi:10.1029/2004GL021379.
Utsu, T., Y. Ogata, and R. S. Matsu'ura, 1995. The centenary of the Omori formula for a decay law of aftershock activity, J. Phys. Earth, 43, 1-33.
Utsu, T., 2002. Statistical features of seismicity, in International Handbook of Earthquake and Engineering Seismology, Part A.
Fig. 1. Analytical results based on equation (6) for aftershock decay rates in a 1-D version of the damage model for different R-values (R = 0.1, 1, 10, with R the ratio of the timescale of fracturing to the timescale of stress relaxation). Small R: long, active aftershock sequences are expected; large R: short, diffuse sequences. The dotted line is a least-squares fit of the exponential analytical solution with the modified Omori power law (p = 1).
Fig. 2. Depth of aftershock hypocenters in a 3-D model realization with R = 0.1, sedimentary cover of 1 km, and temperature gradient of 20 °C/km. Increasing the thermal gradient and/or R leads to a thinner seismogenic zone. The maximum event depth decreases with time from the mainshock.
A STATISTICAL SEISMOLOGY SOFTWARE SUITE
Ray Brownrigg Statistical Computing Manager, School of Mathematics, Statistics and Computer Science, Victoria University of Wellington, Wellington, NEW ZEALAND
Corresponding author: R. Brownrigg, [email protected] Statistical Seismology is a relatively new field which applies statistical methodology to earthquake data in an attempt to raise new questions about earthquake mechanisms, to characterise aftershock sequences and to help make some progress towards earthquake prediction. A suite of R packages, known as SSLib (Statistical Seismology Library), is available for use in statistical seismology in general, but also for some applications outside this field. This poster session will introduce R and the SSLib packages and demonstrate some of the exploratory data analysis functions available within the packages.
An application of the Dieterich model for seismicity rate change to the 1997 Umbria-Marche, central Italy, seismic sequence.
Flaminia Catalli and Rodolfo Console
Istituto Nazionale di Geofisica e Vulcanologia, via di Vigna Murata 605, 00142, Rome, Italy Corresponding Author: F. Catalli, catalli@ingv.
Abstract: We extend the model of Dieterich (1994) for the seismicity rate change after a stress perturbation to a model that can reproduce the distribution of events during and after a seismic sequence (i.e. after a few stress perturbations). The major events of the sequence are thus treated as interacting with each other, although the secondary events are not. A first important result of our effort to model this forecast algorithm is that it has just one free parameter to be evaluated; all the others can be assessed from observations or physical relations. The model is implemented in a code that tests its validity and determines the best value of the free parameter on real seismicity. The code also compares the model with a plain time-independent Poissonian model through likelihood-based methods. We applied the model to the 1997 Umbria-Marche (central Italy) seismic sequence with the aim of investigating whether it is possible to explain the migration of seismicity from NW toward SE. The results show a qualitatively good agreement between the expected rates and the data. Moreover, the simulations show a movement of the expected seismicity in the NW-SE direction. We aim to test these observations objectively in the continuation of this work.
1. Introduction The spatial evolution of seismicity depends on the stress transfer and on the static friction coefficient. The Coulomb failure model is founded exactly on the hypothesis that stress increases promote, and stress decreases inhibit, fault failure, depending on the static friction coefficient. The validity of this model has been extensively demonstrated in numerous published papers (Stein et al., 1997; Gomberg et al., 2000; King and Cocco, 2001; Stein, 1999; Kilb et al., 2002; Toda and Stein, 2003; Nostro et al., 2005; Toda et al., 2005; and many others). All these papers show a correlation of the Coulomb stress change with the locations of most events and with increased seismicity rates. But stress change alone cannot explain the time behavior of seismicity, such as the Omori decay of aftershocks, or phenomena like the migration of seismicity away from the fault. The Coulomb model is a static model of stress transfer, and it is essential to introduce time dependence in the modeling to explain such phenomena. It is also important to consider that seismicity depends not only on the stress transfer or friction coefficient, but also on the background seismicity of the studied area and on the physical characteristics of the fault. This is why we coupled the concept of Coulomb stress transfer with the rate-state model of Dieterich (1994). This link is considered in some current papers (Toda and Stein, 2003; Toda et al., 2005) the starting point for more useful and accurate earthquake forecast models. This type of modeling is more complicated than the simple Coulomb stress transfer model, and some parameters are poorly constrained. Nevertheless, it appears to explain many fault behaviors and can justify the time delay between mainshocks and later events. Furthermore, it gives concrete meaning to the notion of "promoting" (or "inhibiting") fault failure, because very small stresses may unleash, or repress, earthquakes in a population of faults.
To model repeated stress perturbations ΔCFF we used the expression of the seismicity rate R as a function of the state variable γ at the reference shear stressing rate τ̇_r (Dieterich, 1994; Toda et al., 2005):

R = r/(γ τ̇_r), (1)

where r is the background seismicity rate of the considered area. In this way, it is the state variable that evolves with the stressing history:

γ = 1/τ̇_r, (2)

in the steady state;

γ_n = γ_{n−1} exp(−ΔCFF/Aσ), (3)

when an earthquake strikes; and

γ_{n+1} = 1/τ̇_r + (γ_n − 1/τ̇_r) exp(−Δt τ̇_r/Aσ), (4)

during a time step Δt, where Aσ is a physical parameter of the fault.
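Equations (1)-(4) translate directly into a small state-update sketch (the numerical values in the check are arbitrary): a positive ΔCFF lowers γ and therefore raises the rate, which then relaxes back toward the background value r over time.

```python
import math

def rate(gamma, r, tau_dot_r):
    """Equation (1): seismicity rate R = r / (gamma * tau_dot_r)."""
    return r / (gamma * tau_dot_r)

def gamma_after_step(gamma, dCFF, A_sigma):
    """Equation (3): state-variable jump when an earthquake imposes a
    Coulomb stress change dCFF."""
    return gamma * math.exp(-dCFF / A_sigma)

def gamma_after_dt(gamma, dt, tau_dot_r, A_sigma):
    """Equation (4): relaxation of gamma toward the steady state 1/tau_dot_r
    during a time step dt under the reference stressing rate."""
    return 1.0 / tau_dot_r + (gamma - 1.0 / tau_dot_r) * math.exp(-dt * tau_dot_r / A_sigma)
```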
2. Application to the 1997 Umbria-Marche (central Italy) sequence We applied the model to the 1997 sequence of Umbria-Marche, central Italy. For this purpose we adopt the source parameters for the nine mainshocks listed in Table 1 of Nostro et al. (2005). One of the final goals of this application is to answer the question whether it is possible to interpret the migration of the Colfiorito sequence toward SE with a rate-state type model. We believe that this model could have the capability to explain the phenomenon. The choice of the reference seismicity rate plays an important role in the Dieterich model, as already observed by Toda et al. (2005). In this regard, we represented the background seismicity rate r in the model by smoothing, over the considered area, the seismicity of the ten years preceding 1997 (Figure 1-a). The background seismicity rate and the reference shear stress rate are not independent in a specific area. Under a number of assumptions, it can be shown that these two parameters are related by the following relationship (Console et al., (2005)):
τ̇_r = π r [b/(1.5 − b)] (7M₀*/16)^(2/3) Δσ^(1/3) 10^(b(1 − (m_max − m₀))), (5)
where b is the Gutenberg-Richter slope, M₀* is the seismic moment of an earthquake of magnitude m₀, Δσ is the stress drop (assumed constant for all the earthquakes), and m₀ and m_max are the minimum and maximum magnitudes, respectively. Defining r in each point (i, j) of a grid, we can thus determine τ̇_r(i, j) and finally t_a(i, j) by the relation

t_a = Aσ/τ̇_r, (6)
just by selecting a value for Aσ, the unique free parameter of our model. This is an important improvement of the model with respect to previous works such as that of Toda et al. (2005).
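The background-rate smoothing step described above can be sketched, for example, as Gaussian kernel smoothing of the pre-1997 epicenters onto a grid. This is a schematic flat-earth sketch only; the bandwidth and normalisation are our assumptions, not the authors' actual procedure.

```python
import math

def smoothed_rate(epicenters, grid_points, bandwidth_km=10.0, years=10.0):
    """Gaussian kernel smoothing of past epicenters (x, y in km) onto grid
    points, returning a background rate r in events per unit area per year.
    Bandwidth and time window are illustrative assumptions."""
    norm = 1.0 / (2.0 * math.pi * bandwidth_km ** 2 * years)
    rates = []
    for gx, gy in grid_points:
        total = sum(math.exp(-((gx - ex) ** 2 + (gy - ey) ** 2)
                             / (2.0 * bandwidth_km ** 2))
                    for ex, ey in epicenters)
        rates.append(norm * total)
    return rates
```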
Figure 1: For all the maps: depth = 5 km and Aσ = 0.0021 MPa. The color scale represents the number of events expected per 1000 km² per year. (a) Smoothed seismicity of the years 1986-1996 in the studied area; the superimposed epicenters are from Chiarabba et al., (2005). (b) Expected seismicity rate before the 10/03/97 event, marked with the star; epicenters are from Chiarabba et al., (2005). (c) As (b), before the 10/14/97 event; epicenters are from Chiarabba and Amato, (2003). (d) As (b) and (c), 8 years after 01/01/97; black stars mark the occurred mainshocks (the ninth is outside the area).
3. Discussion and conclusion We observe a qualitatively good agreement between the expected seismicity rate changes and the real data (Figure 1-b,c). Moreover, this correlation seems to improve when more accurate earthquake locations are used (catalogue from Chiarabba and Amato, (2003)). This highlights the importance of the choice of the catalogue when testing the match between the model and real observations. It appears that the model can explain the direction of the observed seismicity migration: the NW-SE directivity is correctly pointed out by the maps (Figure 1-b,c,d). However, with our model, and especially with all the crude assumptions we made, in a few cases we cannot yet justify the unilateral direction the seismicity prefers. We believe that a more accurate choice of the reference seismicity could really improve the results of the model; for example, it could be very important to establish if and where there were stress shadow zones in the studied area that could inhibit the expected seismicity we now observe at some unexpected points. In the continuation of this work we are going to quantify objectively the agreement between expectations and data and to optimize the free parameter on real seismicity. We believe that the background seismicity is an essential ingredient of such a model and a choice to be assessed more accurately.
5. Acknowledgement Improvements to this work were made thanks to some stimulating discussions with Massimo Cocco, whom we would like to thank gratefully. We are thankful to Claudio Chiarabba, Lauro Chiaraluce and Concetta Nostro for providing some useful data. We have really appreciated stimulating comments from Lauro Chiaraluce and Anna Maria Lombardi during the course of this work.
6. References
Chiarabba, C., and Amato, A., (2003), Vp and Vp/Vs images in the Mw 6.0 Colfiorito fault region (central Italy): a contribution to the understanding of seismotectonic and seismogenic processes, J. Geophys. Res., 105(B5), 2248, doi:10.1029/2002JB002166.
Chiarabba, C., Jovane, L., and Di Stefano, R., (2005), A new view of Italian seismicity using 20 years of instrumental recordings, Tectonophysics, 395, 251-268.
Console, R., Murru, M., and Catalli, F., (2005, in press), Physical and stochastic models of earthquake clustering, Tectonophysics.
Dieterich, J.H., (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99(18), 2601-2618.
Gomberg, J., Beeler, N.M., and Blanpied, M.L., (2000), On rate-state and Coulomb failure models, J. Geophys. Res., 105(14), 7857-7872.
Kilb, D., Gomberg, J., and Bodin, P., (2002), Aftershock triggering by complete Coulomb stress changes, J. Geophys. Res., 107, doi:10.1029/2001JB0002002.
King, G.C.P., and Cocco, M., (2001), Fault interaction by elastic stress changes: new clues from earthquake sequences, Adv. Geophys., 44, 1-39.
Nostro, C., Chiaraluce, L., Cocco, M., Baumont, D., and Scotti, O., (2005), Coulomb stress changes caused by repeated normal faulting earthquakes during the 1997 Umbria-Marche (central Italy) seismic sequence, J. Geophys. Res., 110, B05S20, doi:10.1029/2004JB003386.
Stein, R.S., (1999), The role of stress transfer in earthquake occurrence, Nature, 402, 605-609.
Stein, R.S., Barka, A.A., and Dieterich, J.H., (1997), Progressive failure on the North Anatolian fault since 1939 by earthquake stress triggering, Geophys. J. Int., 128, 594-604.
Toda, S., and Stein, R.S., (2003), Toggling of seismicity by the 1997 Kagoshima earthquake couplet: A demonstration of time-dependent stress transfer, J. Geophys. Res., 109(B12, 2567), doi:10.1029/2003JB002527.
Toda, S., Stein, R.S., Richards-Dinger, K., and Bozkurt, S.B., (2005), Forecasting the evolution of seismicity in southern California: animations built on earthquake stress transfer, J. Geophys. Res., 110, B05S16, doi:10.1029/2004JB003415.
The relationship between the earthquake rate change and the gradient of stress change
Y. I. Chen 1, C. S. Chen1, W. H. Yang1, and K. F. Ma2
1Institute of Statistics, National Central University, Jhongli, Tauyen 32054, Taiwan 2Department of Earth Science, National Central University, Jhongli, Tauyen 32054, Taiwan
Corresponding Author: Y. I. Chen; [email protected]
Abstract: It has been widely discussed that the seismicity rate change before and after a mainshock is related to the mainshock-induced stress change. To see whether this holds for the complex fault rupture of the 1999 Chi-Chi, Taiwan, earthquake, we use a universal Kriging method to explore how well the depth-dependent Coulomb stress change explains the seismicity rate change. We also make such an exploration by correlating the rate change with the magnitude of the gradient of the stress change. The results demonstrate that the gradient of the stress change is preferable to the stress change itself for describing the seismicity rate change.
1. Introduction One problem of interest regarding stress transfer leading to changes in the probability of earthquake occurrence (Steacy et al., 2005) is how the mainshock-induced Coulomb stress change is related to the seismicity rate change. Toda et al. (1998) found that the rate change is related to the stress change induced by the 1995 Kobe, Japan, earthquake. However, how well the stress change explains the spatial variation of the rate change remains unknown. Ma et al. (2005) calculated the depth-dependent Coulomb stress change induced by the 1999 Chi-Chi, Taiwan, earthquake based on strike-slip fault models. They found that, even for the complex fault rupture of the Chi-Chi earthquake, the spatial distribution of the seismicity rate change is still consistent with the calculated Coulomb stress change. However, again, they did not investigate how well these two changes are correlated in space. To investigate the contribution of the stress change induced by the Chi-Chi earthquake to the explanation of the associated seismicity rate change, we employ a universal Kriging method (Cressie, 1993) to correlate the stress change and the seismicity rate change. Note that the stress changes calculated from heterogeneous-slip fault models (Ma et al., 2005) are not only depth-dependent, but also quite variable in space over the study region. Therefore, we also measure the associated magnitude of the gradient of the stress change and then consider a spatial model relating the gradient magnitude to the seismicity rate change.
2. Spatial statistical model In this study, we consider non-overlapping grids in the study region. In each grid, let R_A and R_B be the rate of earthquakes after the Chi-Chi earthquake and the associated background earthquake rate, respectively. The seismicity rate change in the grid is then defined as R_B/R_A, and the response of interest is Z(·) = log(R_B/R_A). For modeling the spatial variation of the response Z(·), we consider a spatial process {S(s): s ∈ D}, where

S(s) = μ(s) + δ(s), s ∈ D, (1)

where μ(·) is the overall trend related to the covariate, denoted by X(·), through μ(s) ≡ β₀ + β₁X(s), and δ(·) is a zero-mean Gaussian process with a spatially dependent covariance function. Note that, in general, the correlation of Z(·) at two different locations becomes weaker as the distance h between the locations grows. Therefore, we employ the covariance function suggested in Matern (1986):

C_δ(h) = cov(δ(s + h), δ(s)) = σ_δ² · 2(a||h||/2)^ν K_ν(a||h||)/Γ(ν) if h ≠ 0, and C_δ(h) = σ_δ² if h = 0,

where K_ν(·) is the modified Bessel function of the second kind of order ν > 0, ν is a smoothness parameter, a > 0 is a scaling parameter, and σ_δ² = var(δ(s)).
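For ν = 1/2 and ν = 3/2 the Bessel function has closed forms, so the Matérn covariance can be sketched without special-function libraries; note that 2(a||h||/2)^ν/Γ(ν) equals the common normalisation 2^(1−ν)(a||h||)^ν/Γ(ν).

```python
import math

def matern_cov(h, sigma2, a, nu):
    """Matern covariance C(h) = sigma2 * 2**(1-nu) * (a*h)**nu * K_nu(a*h) / Gamma(nu).
    Closed forms for nu = 0.5 and nu = 1.5 avoid a Bessel-function dependency."""
    if h == 0.0:
        return sigma2
    x = a * h
    if nu == 0.5:
        rho = math.exp(-x)            # exponential covariance
    elif nu == 1.5:
        rho = (1.0 + x) * math.exp(-x)  # once-differentiable field
    else:
        raise NotImplementedError("general nu needs scipy.special.kv")
    return sigma2 * rho
```

Larger ν gives a smoother process: the covariance stays closer to σ² at short distances.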
Note that the related covariate under study is either the mean stress change or the magnitude of its gradient. Denoting the mean stress change in grid (i, j) by SC_{i,j}, the magnitude of the gradient of the related stress change is then given by

MG_{i,j} = sqrt[(SC_{i+1,j} − SC_{i,j})² + (SC_{i,j+1} − SC_{i,j})²],

which reflects the heterogeneity of the stress change in space. For modeling the spatial variation of the seismicity rate change, we consider the following two models:
log(R_B/R_A) = α₀ + α₁ sgn(SC_{i,j}) log(|SC_{i,j}|) + δ_{i,j} + ε_{i,j}, (2)

where sgn(a) = +1 for a > 0, 0 for a = 0, and −1 otherwise, and

log(R_B/R_A) = β₀ + β₁ log(MG_{i,j}) + δ_{i,j} + ε_{i,j}, (3)

where the errors ε_{i,j} are independent normal white noises with variance σ_ε². We employ the universal Kriging method (Cressie, 1993) to find the maximum likelihood estimates (MLE) of the parameters in the two spatial models.
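The gradient-magnitude covariate MG can be computed from the grid of mean stress changes by differencing neighbouring cells. One plausible reading of the (garbled) formula in the text is a forward-difference scheme, which is the assumption used here:

```python
import math

def gradient_magnitude(sc, i, j):
    """Magnitude of the stress-change gradient at grid cell (i, j), using
    forward differences of the mean stress changes (an assumed discretisation)."""
    dx = sc[i + 1][j] - sc[i][j]
    dy = sc[i][j + 1] - sc[i][j]
    return math.sqrt(dx * dx + dy * dy)
```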
3. Data analysis and results To correlate the seismicity rate change with the stress change induced by the 1999, M = 7.6, Chi-Chi, Taiwan, earthquake, we consider 900 (= 30 × 30) grids over the region (119.5°E, 122.5°E) × (22.5°N, 25.5°N), so that each grid covers an area of 0.1° × 0.1°. The M ≥ 3.0 earthquakes in the 50 months before the Chi-Chi earthquake provide the background, and the M ≥ 3.0 earthquakes in the 50 months after the Chi-Chi earthquake are under study. Since the hypocenter of the Chi-Chi earthquake is at a depth of 8 km, the seismicity rate changes, the mean stress changes and the associated gradient magnitudes for the layer at depth 3-9 km are computed and presented in Figure 1.
Applying the universal Kriging method, we obtain the fitted model (2) as

μ = 0.088 − 0.004 sgn(SC) log(|SC|) with (ν, a, σ_δ², σ_ε²) = (0.227, 0.579, 0.453, 0.001), (4)

while the fitted model (3) is obtained as

μ = 0.511 + 0.607 log(MG) with (ν, a, σ_δ², σ_ε²) = (1.628, 0.044, 0.290, 0.000). (5)

Since the fitted model (4), showing a negative correlation between the stress change and the seismicity rate change, contradicts the general knowledge, we only present, in Figure 2, the overall trend and the unexplained spatial variation of the fitted model (5).
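As a simplified illustration of the trend in model (3), one can fit β₀ and β₁ by ordinary least squares, ignoring the spatially correlated component δ that the universal-Kriging ML analysis above accounts for. This is a sketch of the trend estimation only, not the authors' procedure:

```python
def fit_trend(log_mg, log_rate_change):
    """Ordinary-least-squares fit of log(rate change) = b0 + b1 * log(MG).
    Simplification: the spatially correlated component delta is ignored,
    so this is only the trend part of the spatial model."""
    n = len(log_mg)
    mx = sum(log_mg) / n
    my = sum(log_rate_change) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log_mg, log_rate_change))
    sxx = sum((x - mx) ** 2 for x in log_mg)
    b1 = sxy / sxx
    return my - b1 * mx, b1
```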
Figure 1. The seismicity rate change (left), the mean stress change (middle) and the magnitude of the associated gradient (right) for the Chi-Chi earthquake.
Figure 2. The fitted overall trend (left) and spatial variation (right) for the model with the magnitude of the gradient of the stress change as the covariate.
4. Conclusions and Discussions The fitted model (4) implies that a grid with a larger positive stress change has a more pronounced decrease in the seismicity rate after the Chi-Chi earthquake. Meanwhile, a grid with a smaller negative stress change, or stress shadow, has a larger increase in the seismicity rate after the Chi-Chi earthquake. This result is somewhat at odds with our knowledge of the relationship between the stress change and the seismicity rate change. On the other hand, the fitted model (5) shows that a grid located in an area with a more heterogeneous stress change in space tends to have a larger seismicity rate after the Chi-Chi earthquake. Figure 2 indicates that the overall trend, which depends on the stability of the stress change, contributes about 50% of the variation of the seismicity rate change. The result therefore sheds light on the study of earthquake probability changes by taking into account the homogeneity of the stress change in space. Note that we also find, though not shown here, that the unexplained spatial variation is unrelated to changes in the b value (Gutenberg and Richter, 1944) or the p value (Utsu, 1961; Ogata, 1983). It therefore remains for a future study to explore other factors that might be related to the seismicity rate change.
5. Acknowledgement The work of this paper was supported by the Taiwan Ministry of Education, iSTEP 91-N-FA07-7-4.
6. References
Cressie, N., Statistics for Spatial Data, (John Wiley & Sons, New York, 1993).
Gutenberg, B. and Richter, C. F. (1944), Frequency of earthquakes in California, Bull. Seism. Soc. Am., 34, 185-188.
Ma, K. F., Chan, C. H., and Stein, R. S. (2005), Response of seismicity to Coulomb stress triggers and shadows of the 1999 Mw=7.6 Chi-Chi, Taiwan, earthquake, J. Geophys. Res., B05S19, doi:10.1029/2004JB003157.
Matern, B., Spatial Variation, (Springer, New York, 1986).
Ogata, Y. (1983), Estimation of the parameters in the modified Omori formula for aftershock sequences by the maximum likelihood procedure, J. Phys. Earth, 31, 115-124.
Steacy, S., Gomberg, J., and Cocco, M. (2005), Introduction to special section: Stress transfer, earthquake triggering and time-dependent seismic hazard, J. Geophys. Res., B05S01, doi:10.1029/2005JB003692.
Toda, S., Stein, R. S., Reasenberg, P. A., Dieterich, J. H., and Yoshida, A. (1998), Stress transferred by the Mw=6.9 Kobe, Japan, shock: Effect on aftershocks and future earthquake probabilities, J. Geophys. Res., 103, 24,543-24,565.
Utsu, T. (1961), Statistical study on the occurrence of aftershocks, Geophys. Mag., 30, 521-605.
Analysis of Aftershock Hazard Using Power Prior Information
Y. I. Chen and C. S. Huang
Institute of Statistics, National Central University, Jhongli, Tauyen 32054, Taiwan
Corresponding Author: Y. I. Chen; [email protected]
Abstract: Near real-time hazard assessment of large aftershocks following a major earthquake is usually urgent for disaster relief and rescue efforts. For regions that lack enough previous aftershock sequences to build an appropriate empirical prior distribution, we suggest implementing a Bayesian analysis based on only one or a few available previous aftershock sequences and the limited current data available shortly after the mainshock. The prior information in this Bayesian analysis is obtained directly from the likelihood of the previous aftershock sequence raised to a certain power. We illustrate the use of the power prior Bayesian analysis in evaluating the hazards of three aftershock sequences, following the Chi-Chi earthquake (ML=7.6, 1999/9/21), the 331 event with ML=6.8 on 2002/3/31, and the 1210 event with ML=6.4 on 2004/12/10.
1. Introduction Real-time information about the hazard of large aftershocks is often urgent for ongoing emergency services to reduce the loss of life, especially shortly after a disastrous mainshock in the region near its epicenter. To fulfill this purpose, Reasenberg and Jones (1989) proposed combining the modified Omori law (Utsu, 1961) and the frequency-magnitude relationship (Gutenberg and Richter, 1944) into a model, referred to as the RJ model hereafter, describing the time-magnitude distribution of earthquakes after a mainshock in California, and considered the maximum likelihood estimates (MLE) of its parameters. Due to the limited current data available within a short time after the mainshock, Reasenberg and Jones (1989) further suggested a Bayesian analysis of the RJ model with a conjugate normal prior distribution. However, such a Bayesian analysis may not be appropriate, as indicated in Rydelek (1990) and Chen et al. (2004), since the time and magnitude distributions of aftershocks are, in general, skewed. Therefore, Chen (2003) proposed a Markov chain Monte Carlo (MCMC) algorithm for the Bayesian analysis of the RJ model under arbitrary prior distributions of the parameters. The MCMC Bayesian analysis of the RJ model is then expected to outperform the Bayesian analysis with a conjugate normal prior if the pre-specified prior distributions are reasonable. This, however, requires empirical prior distributions obtained from a sufficiently large number of previous aftershock sequences. In practice, many areas, like Taiwan, have complex tectonic structures but do not have enough previous aftershock sequences in each tectonic region to build an appropriate empirical prior distribution. Therefore, we propose a Bayesian analysis of the RJ model based on only one or a few previous aftershock sequences.
In the proposed Bayesian analysis, the prior information is used not to establish a prior distribution for the parameters of the RJ model, but as a reference or contribution to the current data through the likelihood of the available previous aftershock sequences raised to a certain power. The proposed analysis is therefore termed the power prior Bayesian analysis hereafter (Chen et al., 2000). We demonstrate, herein, the use of the power prior Bayesian analysis in evaluating the hazards of three aftershock sequences; for each of them, one historical aftershock sequence is employed to provide the prior information.
2. Power prior Bayesian analysis
Suppose that we observe the current data {(t_mi, M_mi), i = 1, 2, ..., n_m} after a major event with magnitude M_m and have a historical aftershock sequence {(t_hi, M_hi), i = 1, 2, ..., n_h} following a mainshock with magnitude M_h. Furthermore, suppose that both sequences of aftershocks can be well described by the RJ model:

λ(t, M) = α exp{−β(M − M_c)}/(t + c)^p for M_c ≤ M, (1)

where the parameters α, β, c and p are nonnegative constants and M_c is the cut-off magnitude for both earthquake sequences. Then, following Ogata (1983), we have the likelihood functions of the parameters given the different sets of data:
L_u(α, β, c, p) = L(α, β, c, p | (t_ui, M_ui), i = 1, 2, ..., n_u)
= exp{ Σ_{i=1}^{n_u} ln[λ(t_ui, M_ui)] − ∫_0^T λ(t, M_c) dt } for u = m or h. (2)

We consider herein the power prior information from a historical data set as
{L_h(α, β, c, p)}^a, 0 ≤ a ≤ 1. Therefore, the posterior distribution of α, β, c and p is proportional to

{L_h(α, β, c, p)}^a · L_m(α, β, c, p). (3)
In this setting, we suggest finding the posterior modes of α, β, c and p, which correspond to the most probable values of α, β, c and p given the current data, taking the historical data as reference. In other words, we find the values of α, β, c and p at which the expression in (3) reaches its maximum.
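The posterior-mode search described above can be sketched numerically as maximizing a·ln L_h + ln L_m. The sketch below is not the authors' code: the function names, the closed-form integral (valid for p ≠ 1), and the Nelder-Mead optimizer are my own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def log_lik(theta, t, M, Mc, T):
    """Log-likelihood of the RJ model in Eq. (2), with
    lambda(t, M) = exp{alpha - beta*(M - Mc)} / (t + c)^p."""
    alpha, beta, c, p = theta
    if beta <= 0 or c <= 0 or p <= 0 or p == 1:
        return -np.inf
    point_term = np.sum(alpha - beta * (M - Mc) - p * np.log(t + c))
    # integral of lambda(t, Mc) over [0, T], closed form for p != 1
    integral = np.exp(alpha) * ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    return point_term - integral

def power_prior_mode(theta0, hist, curr, Mc, a):
    """Posterior mode: maximize a*log L_h + log L_m (the log of Eq. 3)."""
    (th, Mh, Th), (tm, Mm, Tm) = hist, curr
    neg = lambda th_: -(a * log_lik(th_, th, Mh, Mc, Th)
                        + log_lik(th_, tm, Mm, Mc, Tm))
    return minimize(neg, theta0, method="Nelder-Mead").x
```

Setting a = 0 recovers the MLE from the current data alone, and a = 1 pools the two sequences with equal weight, which matches the role of the power a in (3).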
3. Data analysis
For illustrative purposes, we consider three current aftershock sequences following the major events that occurred in central (M=7.6, 1999/9/21, event 921), northeastern (M=6.8, 2002/3/31, event 331) and southeastern (M=6.4, 2004/12/10, event 1210) Taiwan. The historical sequence selected as a reference for the aftershocks of the 921 event is that following the event with ML=6.2 on 1998/7/17. The aftershock sequence following the event with ML=6.5 on 1994/6/5 is chosen as the historical data for the aftershocks of the 331 event. Finally, for the aftershock sequence following the 1210 event, we take the historical sequence after the event with ML=7.1 on 1996/9/5. The related epicenters are shown in Figure 1. Note that the background colors, showing the spatial variation of the minimum magnitude for the complete earthquake catalog (Wiemer and Wyss, 2000), indicate an appropriate overall
cut-off magnitude of M_c = 3.0. Therefore, for simplicity, we demonstrate herein the application of the power prior Bayesian analysis of the modified Omori model (Utsu, 1961) for assessing the hazard of M ≥ 3.0 aftershocks. The powers under study are a = 0.1, 0.5 and 1. We investigate how well the occurrence of current aftershocks can be forecasted by using the modified Omori law based on the power prior Bayesian analysis. To do so, we computed the number of aftershocks during the week following the first N days after the current major event, based on the Omori law estimated from both the first N days of current data and the associated historical data. We also study whether the power prior Bayesian analysis is preferable to the MLE-based analysis for forecasting, where the forecasted number of current aftershocks is calculated from the Omori law with MLE obtained from the first N days of current data only. The results are presented in Figure 2.
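The 7-day forecast described above amounts to integrating the fitted Omori law over the forecast window. A minimal sketch, assuming the parameterization λ(t) = K/(t + c)^p with p ≠ 1 (the function name is mine):

```python
def expected_count(K, c, p, t1, t2):
    """Expected number of aftershocks in [t1, t2] under the modified
    Omori law lambda(t) = K/(t + c)^p (closed form; assumes p != 1)."""
    return K * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)

# e.g. a 7-day forecast issued N = 10 days after the mainshock:
# n7 = expected_count(K_hat, c_hat, p_hat, 10.0, 17.0)
```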
Figure 1 The epicenters of the current and historical major earthquakes. The rectangular regions indicate the areas of the epicenters of the current aftershocks under study.
Figure 2 The forecasted number of aftershocks in the next 7 days based on the first 2-day (upper three) and 10-day (lower three) current data following the 921 event (left), 331 event (middle) and 1210 event (right). The black dots are the observed numbers, the black line is the forecast from the MLE, and the red, green and blue lines are the forecasts from the power prior Bayesian analysis with a = 0.1, 0.5 and 1.0.
4. Conclusions and Discussions
Figure 2 shows that, when the magnitude of the previous mainshock is much smaller than that of the current one, as in the case of the 921 event, the power prior Bayesian analysis tends to give a lower aftershock hazard for a larger power of the prior information. On the other hand, if the magnitude of the previous mainshock is much greater than that of the current one, as in the case of the 1210 event, then the power prior Bayesian analysis produces a higher aftershock hazard for a larger prior power. However, when the magnitudes of the current and historical major events are about the same, as in the case of the 331 event, the power prior Bayesian analysis gives a good forecast of the aftershock hazard for the next 2 days with a small prior power of about 0.1, but yields a better forecast for the later days with a larger prior power. Figure 2 further indicates that, compared with the power prior Bayesian analysis, the MLE-based Omori model estimated from 10-day current data gives a competitive or even better forecast of the aftershock hazard in the next week. Note that, in the power prior Bayesian analysis, the related parameters in the RJ model are assumed the same for both the historical and current aftershock sequences. Therefore, when using the power prior Bayesian analysis, it is important to select a historical aftershock sequence whose time-decaying behavior and frequency-magnitude distribution are similar to those of the current one. In fact, the power prior Bayesian analysis can easily be extended to the situation with more than one historical aftershock sequence. In general, the Bayesian analysis with reasonable prior information is beneficial for the near-time assessment of the aftershock hazard. However, in our study, about 10 days after the current mainshock, when more current data become available, the MLE-based analysis becomes more competitive. For simplicity, the MLE-based analysis is then suggested.
5. Acknowledgement
The work was supported by the Taiwan Ministry of Education, iSTEP 91-N-FA07-7-4.
6. References
Chen, M. H., Ibrahim, J. G., and Shao, Q. M. (2000), Power prior distributions for generalized linear models, J. Stat. Plann. Infer. 41, 121-137.
Chen, Y. I. (2003), Bayesian analysis of aftershock hazard, contributed paper, AGU Fall Meeting.
Chen, Y. I., Huang, C. S., and Hong, C. H. (2004), Statistical assessment for the hazard of large aftershocks post to the Chi-Chi mainshock, TAO 15, 493-502.
Gutenberg, B., and Richter, C. F. (1944), Frequency of earthquakes in California, Bull. Seism. Soc. Am. 34, 185-188.
Ogata, Y. (1983), Estimation of the parameters in the modified Omori formula for aftershock sequences by the maximum likelihood procedure, J. Phys. Earth 31, 115-124.
Reasenberg, P. A., and Jones, L. M. (1989), Earthquake hazard after a mainshock in California, Science 243, 1173-1176.
Rydelek, P. A. (1990), California aftershock model uncertainties, Science 247, 343.
Utsu, T. (1961), Statistical study on the occurrence of aftershocks, Geophys. Mag. 30, 521-605.
Time-dependent b value for aftershock sequences
Y. I .Chen and C. S. Huang
Institute of Statistics, National Central University, Jhongli, Taoyuan 32054, Taiwan
Corresponding Author: Y. I. Chen; [email protected]
Abstract: The b value in the Gutenberg-Richter law (Gutenberg and Richter, 1944) is found to drop suddenly right after a mainshock and then to increase back to the background value for the aftershock sequence following the 1999, M=7.6, Chi-Chi, Taiwan, earthquake. This observation suggests a time-varying b value for the frequency-magnitude distribution of aftershocks. Therefore, we consider a modification of the Reasenberg-Jones (Reasenberg and Jones, 1989) model for describing the time-magnitude distribution of aftershocks in the Taiwan area. We also compare the performance of the modified Reasenberg-Jones model with that of the original Reasenberg-Jones model in describing the aftershock hazard following the Chi-Chi earthquake.
1. Introduction
It is well known that the frequency-magnitude distribution of earthquakes on a large scale and over a long time follows the Gutenberg-Richter law (Gutenberg and Richter, 1944). However, it is also generally believed that large aftershocks are more likely to occur right after a mainshock, and that their chance of occurrence decreases as time goes by. This suggests that aftershocks have a time-dependent frequency-magnitude distribution. The modified Omori power law (Utsu, 1961) gives a time-decaying aftershock occurrence rate. Assuming that the magnitude and time of earthquakes following a mainshock are independent, Reasenberg and Jones (1989) suggested combining the Gutenberg-Richter law and the modified Omori power law to describe the time-magnitude distribution of aftershocks, referred to as the RJ model hereafter. However, if the magnitude distribution of aftershocks is time-dependent, then we should consider the joint time-magnitude distribution of aftershocks as the product of its time distribution and the conditional magnitude distribution given time. This provides a modification of the original RJ model. In this study, we demonstrate that the b value in the Gutenberg-Richter law for the aftershock sequence following the 1999, M=7.6, Chi-Chi, Taiwan, earthquake is time-varying. We then consider a simple linear time-trend for the b value and suggest a modified RJ model. Finally, based on the Chi-Chi aftershock sequence, we compare the modified and original RJ models for the description of the aftershock hazard.
2. Time-dependent b value and the modified Reasenberg-Jones model
According to the Gutenberg-Richter law (Gutenberg and Richter, 1944), the magnitude of earthquakes is distributed according to a left-truncated exponential distribution with survival function

S(M) = Pr{Magnitude > M} = exp{−β(M − M_c)} for M_c ≤ M, (1)

where M_c is the cut-off magnitude and β is a constant. Note that β = b ln 10, where b is the b value in the Gutenberg-Richter law and ln denotes the natural logarithm. In addition, the occurrence of aftershocks decaying in time is described by the modified Omori law (Utsu, 1961):

λ(t) = K/(t + c)^p, (2)
where λ(t) is the intensity, or hazard, of aftershocks at time t after the mainshock, and K, c, and p are positive constants. Reasenberg and Jones (1989) combined the models in (1) and (2) and suggested a hazard model for the intensity of aftershocks with magnitude M or larger at time t after the mainshock:

λ(t, M) = λ(t) S(M). (3)

However, if the β or b value in (1) is time-dependent as β(t), then the RJ model in (3) should be modified to

λ(t, M) = λ(t) × exp{−β(t) × (M − M_c)}. (4)
We consider, herein, the simple linear function β(t) = α_0 + α_1 ln t. Following Ogata (1983), we obtain the likelihood function of the related parameters θ = (K, c, p, α_0, α_1):

L(θ | t, M) = Π_{i=1}^N λ(t_i) exp{−∫_{t_{i−1}}^{t_i} λ(s) ds} × Π_{i=1}^N β(t_i) exp{−β(t_i) × (M_i − M_c)}.

The associated maximum likelihood estimates (MLE) of (K, c, p, α_0, α_1) can then be obtained by using an iterative algorithm.
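The iterative maximization described above can be sketched as minimizing the negative log-likelihood of the modified RJ model numerically. The sketch below is an assumption of mine, not the authors' implementation: it integrates λ(s) over [0, T] (the telescoped product of interval integrals) in closed form for p ≠ 1, and a generic optimizer such as scipy's Nelder-Mead would play the role of the iteration algorithm.

```python
import numpy as np
from scipy.optimize import minimize  # noqa: F401  (used as suggested below)

def neg_log_lik(theta, t, M, Mc, T):
    """Negative log-likelihood of the modified RJ model:
    lambda(t) = K/(t+c)^p for the occurrence times, and
    beta(t) = a0 + a1*ln(t) for the conditional magnitude density
    beta(t)*exp{-beta(t)*(M - Mc)}."""
    K, c, p, a0, a1 = theta
    beta = a0 + a1 * np.log(t)
    if K <= 0 or c <= 0 or p <= 0 or p == 1 or np.any(beta <= 0):
        return np.inf
    ll_time = (np.log(K) - p * np.log(t + c)).sum() \
              - K * ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    ll_mag = (np.log(beta) - beta * (M - Mc)).sum()
    return -(ll_time + ll_mag)

# theta_hat = minimize(neg_log_lik, theta0, args=(t, M, Mc, T),
#                      method="Nelder-Mead").x
```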
3. Data analysis and results
We consider the aftershock sequence within 200 days following the Chi-Chi earthquake in the study region (23.4°N, 24.4°N) × (120.5°E, 121.5°E). Since the minimum magnitude for complete recording of the aftershocks is M_c = 2.2 (Wiemer and Wyss, 2000), we illustrate the use of the modified RJ model based on only the M ≥ 2.2 aftershocks (Figure 1). We estimate both the single b value in the Gutenberg-Richter law and the time-dependent b value proposed herein. To see whether the b value varies in time, we also compute the b value based on 500 time-ordered aftershocks, sliding by 100. Figure 2 shows clearly that a time-dependent b value is necessary for the evaluation of the aftershock hazard following the Chi-Chi earthquake.
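The paper does not state which b-value estimator is used for the sliding windows; a standard choice (an assumption on my part) is Aki's maximum-likelihood formula, which the sliding 500-event window computation could use as sketched here:

```python
import numpy as np

def b_value(M, Mc):
    """Aki's maximum-likelihood b-value estimator (a standard choice,
    not stated in the paper): b = log10(e) / (mean(M) - Mc)."""
    return np.log10(np.e) / (np.mean(M) - Mc)

def sliding_b(M, Mc, window=500, step=100):
    """b value over time-ordered magnitudes: 500-event windows sliding by 100."""
    return [b_value(M[i:i + window], Mc)
            for i in range(0, len(M) - window + 1, step)]
```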
Figure 1. The Chi-Chi aftershock sequence under study. The red triangle marks the epicenter of the Chi-Chi earthquake and the black dots in the rectangular study region represent the M ≥ 2.2 earthquakes that occurred within 200 days after the 1999 Chi-Chi, Taiwan, earthquake.
Figure 2. The observed and estimated b values. The observed b value is computed from 500 aftershocks sliding by 100 (dotted line), the blue line represents the estimated single b value, and the red line shows the estimated time-dependent b value.
To further confirm whether the modified RJ model (4) outperforms the original RJ model (3) in describing the Chi-Chi aftershock hazard (Chen et al., 2004), we computed the MLE of the parameters K, c, p, α_0 and α_1 as 14962.5, 3.46, 1.39, 1.5 and 0.24, respectively. We also found the MLE of the single b value to be 0.82. The observed and estimated cumulative numbers of M ≥ 3.0 and M ≥ 4.0 aftershocks during the 200 days under study are then obtained (Figure 3). Note that a perfect fit of the cumulative numbers of aftershocks would follow the 45° dashed line. It is evident that the modified RJ model provides a better model for the time-magnitude distribution of the Chi-Chi aftershock sequence than does the original RJ model.
Figure 3 The observed and estimated cumulative numbers of M ≥ 3 (left) and M ≥ 4 (right) aftershocks under study. The blue line shows the cumulative number of aftershocks computed from the original RJ model and the red line represents that computed from the modified RJ model.
4. Conclusions and Discussions
In this study, we explore the time-varying b value of the frequency-magnitude distribution of the earthquakes after the Chi-Chi mainshock. Based on the time-dependent b value, we then consider a modified RJ model. The modified RJ model gives a better description of the frequency-magnitude distribution of the Chi-Chi aftershock sequence. In fact, other than the simple linear time-trend of the b value, we could also consider a quadratic time-trend. Though not shown here, we have also demonstrated that the time-dependent b value holds for the aftershock sequences following the two recent earthquakes in the northeastern and southeastern parts of the Taiwan area. It may deserve a future study to explore, for aftershock sequences in different regions of the Taiwan area, how their frequency-magnitude distributions are related to the time elapsed since the associated mainshocks.
5. Acknowledgement
The work was supported by the Taiwan Ministry of Education, iSTEP 91-N-FA07-7-4.
6. References
Chen, Y. I., Huang, C. S., and Hong, C. H. (2004), Statistical assessment for the hazard of large aftershocks post to the Chi-Chi mainshock, TAO 15, 493-502.
Gutenberg, B., and Richter, C. F. (1944), Frequency of earthquakes in California, Bull. Seism. Soc. Am. 34, 185-188.
Ogata, Y. (1983), Estimation of the parameters in the modified Omori formula for aftershock sequences by the maximum likelihood procedure, J. Phys. Earth 31, 115-124.
Reasenberg, P. A., and Jones, L. M. (1989), Earthquake hazard after a mainshock in California, Science 243, 1173-1176.
Utsu, T. (1961), Statistical study on the occurrence of aftershocks, Geophys. Mag. 30, 521-605.
Wiemer, S., and Wyss, M. (2000), Minimum magnitude of completeness in earthquake catalogs: examples from Alaska, the western US and Japan, Bull. Seism. Soc. Am. 90, 859-869.

Particle filtering of a state-space model for seismic sequences
N. Chopin1, E. Varini2
1 Department of Mathematics, University of Bristol, University Walk, Bristol BS8 1TW, UK; 2 Institute for Applied Mathematics and Information Technologies, CNR, via Bassini 15, 20133 Milan, Italy
Corresponding Author: E.Varini, [email protected]; N. Chopin, nicolas.chopin at bris.ac.uk
Abstract: A state-space model is proposed in order to analyse a sequence of earthquakes; the basic assumption is that, at each time, the physical process is in a hidden state and that each state corresponds to a different observable marked point process. We considered three possible states corresponding, respectively, to the following marked point processes for seismic sequences: the Poisson model, the stress release model and the Etas model. The statistical analysis is carried out by exploiting a Bayesian sequential Monte Carlo (or particle filtering) method. This recent statistical methodology allows us to estimate both the model parameters and the filtering probabilities (the probability that one of the considered marked point processes is active at time t). Moreover, it allows us to update the present estimates as new information comes in.
1. A state-space model
Let {(t_i, m_i)}_{i=1}^n be the set of earthquakes that occurred in a region, where t_i ∈ [0, T] denotes the occurrence time and m_i ≥ M_0, with M_0 a fixed threshold. Many studies on marked point processes have been proposed in the literature; by exploiting some of these processes, I carried out a temporal analysis of the seismic phenomenon in which the magnitude is considered as a mark. A marked point process is completely and uniquely defined by its conditional intensity function λ(t, m), representing the instantaneous probability that an event of magnitude m occurs at time t conditionally on the history of the process up to t; the expression of the likelihood function is based on λ(t, m) and is given by

L = Π_{i=1}^n λ(t_i, m_i) · exp{−∫_0^T ∫_{M_0}^∞ λ(t, m) dm dt}.

I considered time and magnitude to be independent, so that the magnitude component can be omitted in the analysis of the temporal part and the conditional intensity function can simply be denoted by λ(t). In this context, I focused on three important, widely studied marked point processes: the Poisson, the stress release (Zheng and Vere-Jones, 1991) and the Etas models (Ogata, 1999). They are founded on different physical behaviours of the seismic phenomenon: the Poisson model assumes random time occurrences; the stress release model assumes that elastic stress gradually accumulates and is suddenly released when it exceeds the strength of the medium; the Etas model describes the tendency of earthquakes to occur in clusters. The most common and simple functional expressions of their conditional intensity functions are, respectively, the following:
λ_1(t) = μ,
λ_2(t) = exp{α + β(ρt − R(t))},
λ_3(t) = Σ_{i: t_i < t} k · e^{γ(m_i − M_0)} / (t − t_i + c)^p,

where R(t) denotes the stress released by the events occurred before t. The hidden state process X_t takes values in the set S of the three states, and the conditional intensity of the observable process is

λ(t) = Σ_{j∈S} λ_j(t) · δ_j(X_t),

where δ_j(X_t) = 1 if X_t = j and δ_j(X_t) = 0 otherwise. I assumed that an earthquake does not occur in correspondence with a state change. Later on, θ denotes the vector of all the unknown parameters of the state-space model and X_{s:t} = {X_r : s ≤ r ≤ t}.
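The three conditional intensities of the candidate states can be sketched directly from their definitions. This is an illustrative sketch with function names of my own choosing; R_t stands for the stress released by past events, which in a full implementation would be accumulated from the catalogue.

```python
import numpy as np

def lam_poisson(mu):
    """State 1: homogeneous Poisson intensity."""
    return mu

def lam_stress_release(t, alpha, beta, rho, R_t):
    """State 2: stress release intensity; R_t is the stress released
    by the events that occurred before t."""
    return np.exp(alpha + beta * (rho * t - R_t))

def lam_etas(t, times, mags, M0, k, gamma, c, p):
    """State 3: Etas intensity, summing the triggering contributions
    of all events with t_i < t."""
    past = times < t
    return np.sum(k * np.exp(gamma * (mags[past] - M0))
                  / (t - times[past] + c) ** p)
```

Given a realization of the hidden state X_t, the observable intensity λ(t) is simply whichever of the three applies at time t, matching the indicator-sum formula above.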
2. The Bayesian sequential Monte Carlo method
The aim is to estimate the parameter vector θ and the filtering probabilities p(x_t | θ, y_{0:t}), that is, the probabilities that the system at time t is in state j given the past history of the observable process. The filtering probabilities are interesting because they are linked to the predictive distributions as follows:

p(x_s | θ, y_{0:t}) = ∫ p(x_s | θ, x_t) p(x_t | θ, y_{0:t}) dx_t for t < s.
Since the filtering probabilities are related to the posterior distributions p(x_{0:t}, θ | y_{0:t}) (t ∈ [0, T]), I equivalently considered the latter as target distributions. The analysis is carried out by using a sequential Monte Carlo method, whose details and advantages I briefly explain in this section, referring to the extensive research recently developed on this statistical methodology (Doucet, De Freitas and Gordon, 2001; Kitagawa, 1998). Let p(x) be the density of a probability distribution on a measurable space and f(x) an integrable function having finite variance with respect to p(x). It is known that, if there exists no analytical solution of ∫ f(x) p(x) dx, but a sample x_1, ..., x_m from p(x) is available, then the Monte Carlo approximation m^{-1} Σ_{i=1}^m f(x_i) is an unbiased estimator of the integral and converges to the true value as m → +∞. If sampling from p(x) is complex, then the importance sampling principle can be used, keeping the same properties of the Monte Carlo integration: a sample x_1, ..., x_m is drawn from an "easy-to-sample" distribution π(x), called the importance distribution, and the integral is approximated by m^{-1} Σ_{i=1}^m ω_i f(x_i), where ω_i = p(x_i)/π(x_i). The set {x_i, ω_i}_{i=1}^m is called a particle set and the ω_i's are called importance weights. The importance distribution π(x) is usually heavy-tailed with respect to p(x) and its support contains the support of p(x); moreover, if π(x) is close to p(x), faster convergence of the importance sampling is observed. A simple choice is to take the importance distribution equal to the prior distribution. The importance sampling principle is exploited in the sequential Monte Carlo method in order to approximate the sequence of posterior distributions {p(x_{0:t_k}, θ | y_{0:t_k})}_{k=0}^{n+1} (t_0 = 0, t_{n+1} = T). Each p(x_{0:t_k}, θ | y_{0:t_k}) is approximated by a particle set {x_i^{(k)}, θ^{(k)}, ω_i^{(k)}}_{i=1}^m such that x_i^{(k)} is a realization of the hidden process in [0, t_k] given the parameter vector θ^{(k)}.
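The importance sampling principle described above can be sketched in a few lines; the toy check (my own, not from the paper) estimates E[X²] = 1 under N(0, 1) by sampling from the heavier-tailed N(0, 2):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def is_estimate(f, p_pdf, pi_pdf, pi_sample, m=100_000):
    """Importance-sampling estimate of the integral of f(x)p(x)dx:
    draw from the importance distribution pi, weight each draw
    by w_i = p(x_i)/pi(x_i)."""
    x = pi_sample(m)
    w = p_pdf(x) / pi_pdf(x)
    return np.mean(w * f(x))

# toy check: E[X^2] = 1 under N(0,1), sampled from the heavier-tailed N(0,2)
est = is_estimate(lambda x: x ** 2, norm(0, 1).pdf, norm(0, 2).pdf,
                  lambda m: rng.normal(0.0, 2.0, m))
```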
The particle set at time t_{k+1} is obtained from the particle set at time t_k as follows: each x_i^{(k+1)} is equal to x_i^{(k)} in [0, t_k] and the remaining states in (t_k, t_{k+1}] are drawn from π(x); θ^{(k+1)} = θ^{(k)} and, assuming that prior and importance distribution are equal, the following simple expression of the weights holds:

ω_i^{(k+1)} ∝ ω̃_i^{(k+1)} = ω_i^{(k)} · p(y_{t_k:t_{k+1}} | θ^{(k+1)}, x_{t_k:t_{k+1}}^{(k+1)}, y_{0:t_k}),

where the normalising constant is Σ_{i=1}^m ω̃_i^{(k+1)} and the updating factor is simply the likelihood function on the interval [t_k, t_{k+1}]. By the strong law of large numbers, an accurate approximation should be obtained for a sufficiently large number m of particles; but, as in practice the variance of the weights increases as new data are collected, the sequential Monte Carlo method can quickly degenerate. To avoid this effect, a resampling step is sometimes applied, following the computationally advantageous (cost O(m)) technique proposed by Carpenter, Clifford and Fearnhead (1999).
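The O(m) resampling step can be sketched as follows. This is not the paper's code: it uses a systematic (stratified-position) scheme in the spirit of Carpenter, Clifford and Fearnhead (1999), together with the effective sample size as an assumed trigger for when to resample.

```python
import numpy as np

def systematic_resample(w, rng):
    """O(m) resampling in the spirit of Carpenter, Clifford and
    Fearnhead (1999): one uniform draw placed in m strata, then a
    single pass through the cumulative normalised weights."""
    m = len(w)
    positions = (rng.random() + np.arange(m)) / m
    return np.searchsorted(np.cumsum(w), positions)

def ess(w):
    """Effective sample size of normalised weights; a common (assumed)
    rule is to resample when this drops below, say, m/2."""
    return 1.0 / np.sum(np.asarray(w) ** 2)
```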
3. Simulated data and results
In order to test the model and the statistical methodology, I analysed a simulated data set. The parameter values used in the simulation procedure are fixed as follows: μ = 0.5, α = −4.5, β = 0.02, ρ = 2.8, k = 0.1, γ = 0.6, c = 0.5, p = 1, q_12 = 0.018, q_13 = 0.002, q_21 = 0.015, q_23 = 0.0135, q_31 = 0.054, q_32 = 0.006, P(X_0 = j) = 1/3 for each j ∈ S. I simulated the marked point processes by using the inverse transform method and the thinning method (Lewis and Shedler, 1976); the magnitude of the events is simulated from a truncated exponential distribution following the Gutenberg-Richter law. The simulated data set is composed of 908 earthquakes and 72 state changes, setting T = 2000 and M_0 = 4.5. Prior and importance distributions are chosen on the basis of the numerous available studies on the processes involved. The lower part of Figure 1 represents the simulated conditional intensity function, while the upper part represents the mixture of the plug-in estimates of the conditional intensity functions obtained by using the posterior means of the parameters.
Figure 1 Comparison of the true conditional intensity function (bottom) with the expected conditional intensity function (top).
4. Acknowledgement
This work is part of my Ph.D. thesis in Statistics at Bocconi University of Milan (Italy); I would like to thank my advisors Dr. Renata Rotondi and Prof. Nicolas Chopin.
5. References
Carpenter, J., Clifford, P., and Fearnhead, P. (1999), An improved particle filter for non-linear problems, Radar, Sonar and Navigation, IEE Proceedings 146(1), 2-7.
Doucet, A., De Freitas, N., and Gordon, N. (2001), Sequential Monte Carlo methods in practice, Springer-Verlag, New York.
Kitagawa, G. (1998), A self-organizing state-space model, Journal of the American Statistical Association 93(443), 1203-1215.
Lewis, P. A., and Shedler, G. S. (1976), Simulation of nonhomogeneous Poisson processes with log linear rate function, Biometrika 63(3), 501-505.
Ogata, Y. (1999), Seismicity analysis through point-process modeling: a review, Pure and Applied Geophysics 155, 471-507.
Zheng, X. G., and Vere-Jones, D. (1991), Application of stress release models to historical earthquakes from North China, Pure and Applied Geophysics 135(4), 559-576.
Are foreshocks really mainshocks with larger aftershocks?
A. Christophersen1, and E.G.C. Smith1
1The Institute of Geophysics, Victoria University of Wellington, PO Box 600, Wellington, New Zealand
Corresponding Author: [email protected]
Abstract: The question of whether foreshocks behave like mainshocks that happen to have larger aftershocks is important for the understanding of earthquake nucleation and the modelling of earthquake clustering. A commonly used model for short-term earthquake occurrence combines empirical observations of aftershock occurrence and extends them to foreshocks, assuming a single triggering mechanism for earthquake clusters. However, observed foreshock probabilities are significantly lower than predicted by this kind of model. We conclude that foreshocks differ in some way from mainshocks and look for statistical and physical causes in two earthquake catalogues.
1. Introduction
Earthquakes cluster in space and time. In retrospect, the largest earthquake in a cluster is called the mainshock. The earthquakes preceding a mainshock are named foreshocks, and the earthquakes following a mainshock aftershocks. Minor earthquakes of similar magnitude are referred to as swarms (e.g. Evison and Rhoades, 2004), and larger earthquakes of similar magnitude are sometimes called multiplets (e.g. Felzer et al., 2004). The earthquakes that occur outside a cluster are referred to as background. As no physical differences between these earthquakes are known, some of this labelling is arbitrary. Careful consideration needs to be given to potential biases introduced by arbitrary data selection. In a different approach to earthquake modelling, complete earthquake catalogues are described as one family of earthquakes, which interact through an epidemic model (Console et al., 2003; Zhuang et al., 2004). We briefly review the most common empirical laws for earthquake clustering. They can be used to derive the probability that a foreshock occurs. We review prospective foreshock probabilities and compare them with recalculated values from a homogeneous global catalogue. We find the observed values to be significantly smaller than the model predictions. We discuss reasons for the discrepancy between data and model.
2. Empirical laws for aftershock occurrence
Two empirical laws have been used successfully to model the short-term clustering of earthquakes. Omori's law describes the decay of earthquake activity with time t:

dN/dt = K/(c + t)^p, (1)

where dN is the number of earthquakes in the time interval dt; K is a parameter proportional to the aftershock productivity; p describes the decay and takes values around 1.0; and c stands for a small time interval just after the mainshock (e.g. Utsu et al., 1995). The Gutenberg-Richter relation describes the magnitude-frequency distribution:
log10 N(M) = a − bM, (2)

where N(M) is the number of earthquakes of magnitude M and a and b are parameters (e.g. Gutenberg and Richter, 1949).
Two less well explored empirical relations describe the increase of the average number N of aftershocks (3) and the aftershock area A (4) with mainshock magnitude M.
log10 N(M) = cM − d, (e.g. Singh and Suárez, 1988) (3)

log10 A = 1.02M − 4.01. (Utsu and Seki, 1954) (4)
3. Predicting mainshocks from aftershock models
Reasenberg and Jones (1989, 1994) studied 62 Californian aftershock sequences and combined equations (1) and (2) to derive the probability of the occurrence of earthquakes in a given time and magnitude interval following an initiating earthquake. They extended the model to magnitudes larger than the initiating earthquake, assuming that foreshocks trigger mainshocks in the same way as mainshocks trigger aftershocks. The Californian model predicts a probability of 10% that an earthquake outside an aftershock sequence is followed by one of equal or larger size. Another model formulation, combining equations (2) and (3) and assuming equal growth constants b and c, also finds a probability of 10% that an initial earthquake is followed by one of equal or larger size (Vere-Jones et al., 2005).
4. The earthquake catalogues and defining earthquake sequences
We used a global catalogue based on the reports of the International Seismological Centre (2004). The catalogue includes more than 45,000 earthquakes of magnitude 5.0 and above, shallower than 70 km, in the period 1964-2001 (Christophersen, 2000). We have also started to work with the ANSS (Advanced National Seismic System) catalogue for California covering the years 1984-2004 inclusive. We used a simple window method in time and space to search for related earthquakes. In space we used a search radius of four times the radius of a circular area given by Utsu and Seki's (1954) model, resulting in a search radius of 8 km for M = 5.0 and 150 km for M = 7.5. In time we used a sliding time window of 30 days, extending the sequence each time a new event is found. We separated our global earthquake sequences into different tectonic regimes, following a method suggested by Kagan (1999).
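The spatial search radius described above follows directly from equation (4): take the circular-area radius implied by Utsu and Seki's relation and multiply it by four. A minimal sketch (the function name is mine) that reproduces the quoted 8 km and 150 km values:

```python
import math

def search_radius_km(M):
    """Search radius used to link events into sequences: four times the
    radius of a circle whose area A (km^2) follows Utsu and Seki's (1954)
    relation log10 A = 1.02*M - 4.01."""
    A = 10.0 ** (1.02 * M - 4.01)          # aftershock area in km^2
    return 4.0 * math.sqrt(A / math.pi)    # 4 x radius of circle of area A
```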
5. Review of prospective foreshock probabilities
Table 1 compares some previously reported foreshock probabilities with recalculated values from our global catalogue. We found good agreement with previous observations when matching their space, time and magnitude windows. All observations are significantly lower than predicted by the aftershock models; the only exception is one observation in a volcanic area (Merrifield et al., 2004). We note that we found some significant variation in foreshock probabilities between different tectonic settings, though we do not show it here. However, all these probabilities were still smaller than expected from the model.
6. The ETAS model
The ETAS (Epidemic Type Aftershock Sequence) model allows each earthquake to trigger its own offspring (e.g. Ogata, 1988). The ETAS model distinguishes between background earthquakes and earthquake clusters. Interestingly, Zhuang et al. (2004) observed that the background events among which the foreshocks occur have different triggering parameters than the clusters of aftershocks. This might be another indication that foreshocks require different modelling from aftershocks.
Table 1. Comparison of previously reported and recalculated foreshock probabilities in different magnitude, space and time windows.

California (Jones, 1985): reported 6%; not recalculated yet.

Global catalogue, completeness for M ~ 5.6, 75 km and 7 days (Reasenberg, 1999):
  MS ≥ 5.0: reported 7.5%, recalculated 4.2% ± 0.2%
  MS ≥ 6.0: reported 2.3%, recalculated 1.9% ± 0.1%
  MS ≥ 7.0: reported 0.4%, recalculated 0.5% ± 0.1%
  Comment: the discrepancy for MS ≥ 5.0 earthquakes can be explained by incomplete data in the Reasenberg catalogue.

New Zealand, 30 km and 5 days (Savage and Rupp, 2000):
  MS ≥ 5.0: reported 4.5 ± 0.7%, recalculated 4.3% ± 0.3%
  MS − FS ≥ 1.0: reported 0.8 ± 0.3%, recalculated 0.8% ± 0.1%
  Comment: the recalculated values of old subduction zones are compared; old subduction zones have the highest foreshock and aftershock rates.

Taupo Volcanic Zone (Merrifield et al., 2004): reported 9.6 +1.7%; no comparable zone in the global catalogue.
7. Discussion The probability of an earthquake to be a foreshock, i.e. to be followed by an earthquake of equal or larger size, within 75 km and 7 days, is about 4% in a global earthquake catalogue. The result confirms earlier observations and gives us some confidence that our clustering methodology to define earthquake sequences is consistent with commonly used methods of declustering earthquake catalogues. The observed foreshock probability is significantly smaller than 10% as derived from models using aftershock parameters. Thus we conclude that there is some difference between foreshocks triggering mainshocks and mainshocks triggering the subsequent sequence. One possible cause for this difference could be the ‘cumulative nature’ of the aftershock parameters. The parameters are derived from aftershock sequences in which some large aftershocks might have triggered their own sequence of events while the foreshock analysis does not extend past the largest earthquake in a sequence. To investigate possible differences in foreshock and aftershock parameters we have looked at the distribution of time and magnitude differences between foreshock and mainshock pairs. The distribution of time differences between foreshocks and mainshocks is consistent with Omori’s law. The distribution of magnitude differences between foreshocks and mainshocks follows the Gutenberg-Richter relation. However, the b-value seems to be smaller than for aftershock sequences. We are still investigating in how far this is an artefact of the data selection. Physical differences between foreshocks and aftershocks could be another possible cause for the difference between observed and model foreshock probabilities. For example, foreshocks could be triggered by the nucleation of the mainshock rather than trigger the up-coming mainshock (e.g. Dodge et al., 1995). 
In this case, the average foreshock size, the area of foreshock occurrence, and the average number of foreshocks per mainshock should all increase with mainshock size. Felzer et al. (2004) could not find any evidence for this. Instead they confirmed that a single triggering mechanism explains the occurrence of foreshocks, aftershocks and multiplets. We will revisit this analysis in our global and Californian data sets and will report on our results in January.
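The b-value comparison discussed above is usually made with Aki's maximum-likelihood estimator, b = log10(e) / (mean(M) − Mc). A minimal sketch, assuming no correction for magnitude binning and using invented magnitudes:

```python
import math

def b_value_mle(magnitudes, m_c):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter b-value
    for magnitudes at or above the completeness magnitude m_c
    (no correction for magnitude binning applied)."""
    mags = [m for m in magnitudes if m >= m_c]
    mean_excess = sum(mags) / len(mags) - m_c
    return math.log10(math.e) / mean_excess

# Invented sample of magnitudes above a completeness level of 5.0:
sample = [5.0, 5.2, 5.9, 5.1, 5.5, 5.3, 5.4, 5.7]
print(f"b = {b_value_mle(sample, 5.0):.2f}")
```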
8. Conclusion
Observed prospective foreshock probabilities are significantly lower than predicted by simple statistical models of aftershock occurrence. We conclude that foreshocks do not trigger mainshocks in the same manner as mainshocks trigger aftershocks. We consider two possible causes: statistical bias in data selection and physical differences between foreshocks and aftershocks. We are still looking for evidence of both causes in two catalogues.
9. Acknowledgement
The presenting author thanks Matt Gerstenberger and David Vere-Jones for stimulating discussions. The study is funded by the New Zealand EQC Research Foundation. The presenting author has gratefully received travel support from the conference organisers.
10. References
Christophersen, A. (2000), The probability of a damaging earthquake following a damaging earthquake, PhD Thesis, Victoria University of Wellington.
Console, R., M. Murru, and A. M. Lombardi (2003), Refining earthquake clustering models, J. Geophys. Res. 108, doi:10.1029/2002JB002130.
Evison, F. F., and D. A. Rhoades (2004), Demarcation and scaling of long-term seismogenesis, Pure Appl. Geophys. 161, doi:10.1007/s00024-003-2435-8.
Felzer, K. R., R. E. Abercrombie, and G. Ekstroem (2004), A common origin for aftershocks, foreshocks, and multiplets, Bull. Seism. Soc. Am. 94, 88-98.
Gutenberg, B., and C. F. Richter (1949), Seismicity of the earth and associated phenomena, Princeton University Press.
International Seismological Centre (2004), Bulletin Disks 1-12 [CD-ROM], Internatl. Seis. Cent., Thatcham, United Kingdom.
Jones, L. M. (1985), Foreshocks and time-dependent earthquake hazard assessment in Southern California, Bull. Seism. Soc. Am. 75, 1669-1679.
Kagan, Y. Y. (1999), Universality of the seismic moment-frequency relation, Pure Appl. Geophys. 155, 537-573.
Merrifield, A., M. K. Savage, and D. Vere-Jones (2004), Geographical distributions of prospective foreshock probabilities in New Zealand, New Zealand Journal of Geology & Geophysics 47, 327-339.
Ogata, Y. (1988), Statistical models for earthquake occurrences and residual analysis for point processes, J. Am. Stat. Ass. 83, 9-27.
Reasenberg, P. A. (1999), Foreshock occurrence before large earthquakes, J. Geophys. Res. 104 B3, 4755-4768.
Reasenberg, P. A., and L. M. Jones (1989), Earthquake hazard after a mainshock in California, Science 243, 1173-1176.
Reasenberg, P. A., and L. M. Jones (1994), Earthquake aftershocks: update, Science 265, 1251-1251.
Savage, M. K., and S. H. Rupp (2000), Foreshock probabilities in New Zealand, New Zealand Journal of Geology & Geophysics 43, 461-469.
Singh, S. K., and G.
Suárez (1988), Regional variation in the number of aftershocks (mb ≥ 5) of large subduction zone earthquakes (Mw ≥ 7.0), Bull. Seism. Soc. Am. 78, 230-242.
Utsu, T., Y. Ogata, and R. S. Matsu’ura (1995), The centenary of the Omori formula for a decay law of aftershock activity, J. Phys. Earth 43, 1-33.
Utsu, T., and A. Seki (1955), A relation between the area of aftershock region and the energy of the mainshock, Zisin 7, 233-240.
Vere-Jones, D., J. Murakami, and A. Christophersen (2005), A further note on Bath’s law, Conference Proceedings, 4th International Workshop on Statistical Seismology, Japan, 2006.
Zhuang, J., Y. Ogata, and D. Vere-Jones (2004), Analyzing earthquake clustering features by using stochastic reconstruction, J. Geophys. Res. 109, B05301.

Seismicity migration and time-dependent stress transfer: supporting and conflicting evidence from the Umbria-Marche (Central Italy) seismic sequence.
Massimo Cocco, Lauro Chiaraluce and Flaminia Catalli
Istituto Nazionale di Geofisica e Vulcanologia, via di Vigna Murata 605, 00142, Rome, Italy
Corresponding Author: M. Cocco, [email protected]
Abstract: We study fault interaction by modeling the spatial and temporal pattern of seismicity through elastic stress transfer. The changes in the rates of earthquake production can be modeled both through time-dependent stress changes and by accounting for the frictional properties of fault populations. In this study we investigate the prolonged sequence of normal-faulting moderate-magnitude earthquakes that struck central Italy (Umbria-Marche regions) in 1997-1998. We have modeled the aftershock patterns and the spatial and temporal migration of seismicity in terms of Coulomb stress changes and pore pressure relaxation in a fluid-saturated fractured medium. We use this case study to discuss the physical processes underlying earthquake triggering.
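The analysis that follows rests on the Coulomb failure stress change, ΔCFF = Δτs + μ′Δσn. A minimal sketch of this bookkeeping, assuming a tension-positive sign convention and an illustrative effective friction coefficient of 0.4 (the abstract itself does not state these values):

```python
def coulomb_stress_change(delta_shear, delta_normal, mu_eff=0.4):
    """Coulomb failure stress change (MPa) on a receiver fault:
    dCFF = d(shear stress resolved in the slip direction)
           + mu_eff * d(normal stress),
    with normal stress positive in extension, so unclamping
    (positive delta_normal) promotes failure."""
    return delta_shear + mu_eff * delta_normal

# A receiver plane loaded by +0.05 MPa of shear stress and 0.10 MPa of
# unclamping is brought closer to failure (positive dCFF):
print(coulomb_stress_change(0.05, 0.10))
```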
1. Introduction
Earthquake triggering is currently investigated by modeling stress transfer to explain observations of measurable changes in earthquake occurrence rates following a causative event. The most common representation of elastic stress interaction is based on the calculation of Coulomb stress changes caused by earthquake dislocations (see Harris, 1998; King and Cocco, 2001, and references therein). Coulomb stress changes are widely used in the literature to investigate fault interaction between large-magnitude earthquakes as well as to model aftershock patterns and seismicity rate changes over longer time windows. Despite numerous observations of earthquake triggering, many questions remain unanswered about the physical mechanisms that may explain earthquake occurrence and the spatio-temporal pattern of seismicity. In particular, it is well known that other time-dependent processes, such as afterslip, viscoelastic relaxation of the lower crust and fluid flow, can modify the induced coseismic stress field. These stress perturbations occur on different temporal scales. If the analysis is restricted to the aftershock duration, pore-fluid flow may play a dominant role in triggering seismicity. Diffusive processes of pore pressure relaxation in fractured and saturated rocks have been proposed to explain both earthquakes and induced seismicity (see Shapiro et al., 2003; Antonioli et al., 2005, and references therein). The efficiency of static stress changes in triggering seismicity is supported by a large number of phenomenological studies (Stein, 1999; Toda and Stein, 2003; Steacy et al., 2005, and references therein). However, in order to answer questions related to the underlying physical processes, frictional failure models or the fault constitutive behavior have to be taken into account.
This is necessary, for instance, to explain why very small stress changes lead to measurable changes in failure rates or why failure is often delayed from the occurrence of the stress change. To this goal the most common approach proposed in the literature is the Dieterich (1994) model, which uses rate-state friction theory to model failure rate changes. This model has been discussed in detail from a theoretical point of view (see Gomberg et al., 2005-a) and has been used to compare model predictions with observations (Gross, 2001; Toda and Stein, 2003; Toda et al., 2005, among others). The model predicts basic observations such as aftershock rates that decay inversely with time (Omori law). The delayed occurrence of triggered seismic events is explained in this model as the effect of the frictional response of the fault to the applied stress perturbations. It is important to point out that most frictional models only consider static stress changes to promote failure or trigger faults that are close to failure. In the present study we will not address the problem of triggering of seismicity by dynamic stress perturbations. Nevertheless, it is important to emphasize that transient dynamic stress perturbations account for the time dependency of elastic coseismic stress changes. The Dieterich model is a key ingredient of recent approaches aimed at computing the change in probability of a large impending earthquake caused by a nearby seismic event (see Hardebeck, 2004; Gomberg et al., 2005-b). In this study we aim to discuss the modeling of seismicity patterns, and in particular the migration of seismicity, through time-dependent stress transfer. We apply our analyses to the seismic sequence that struck the Umbria-Marche region (central Italy) in 1997-1998. We first present the results of modeling fault interaction through elastic stress transfer among repeated moderate-magnitude main shocks (5
Figure 1. Map of seismicity and focal mechanisms of the main shocks.
Colors refer to distinct time intervals.

2. The Umbria-Marche Seismic Sequence.
The 1997 Umbria-Marche seismic sequence (central Italy) consists of six moderate-magnitude earthquakes (5
Figure 2. Temporal evolution of seismicity.
Several features make the study of this sequence particularly interesting: (1) the availability of accurate earthquake locations (the resulting errors are of the order of tens of meters; Chiaraluce et al., 2003); (2) seismicity migrated toward the SE direction, progressively activating the main fault planes before the occurrence of their impending mainshocks (Chiaraluce et al., 2004); (3) Coulomb stress changes reveal that these normal faults interact with each other (Nostro et al., 2005), but they cannot explain the presence of seismicity in the hanging wall of the main faults (Miller et al., 2004) or the temporal migration of seismic events (Antonioli et al., 2005); (4) there is evidence of the presence of fluids in this area, revealed by previous studies (Chiodini et al., 2002).

3. Modeling Coulomb Stress Changes.
Our modeling results show that 8 out of 9 main shocks of the sequence occurred in areas of enhanced Coulomb stress, implying that elastic stress transfer may have promoted the occurrence of these moderate-magnitude events. Our calculations show that stress changes caused by normal-faulting events re-activated, and inverted the slip of, a secondary NS-trending strike-slip fault (see event 8 in Figure 3) inherited from compressional tectonics, in its shallowest part (1-3 km). The adoption of an isotropic poroelastic model to compute the Coulomb stress perturbations (Cocco and Rice, 2002) performs better in explaining the locations of the main shocks of the 1997 seismic sequence. Of the 1517 available aftershocks, 82% are located in areas of positive stress changes for optimally oriented planes (OOPs) for Coulomb failure.
Therefore, we might conclude that the spatial distribution of aftershocks is consistent with Coulomb stress changes, with seismicity primarily occurring in areas experiencing positive stress changes. However, our computations demonstrate that the number of promoted aftershocks decreases when stress changes are resolved onto the nodal planes of focal mechanisms computed from polarity data, and that the focal mechanisms associated with the OOPs do not coincide with the fault plane solutions computed from polarity data. Thus, we can conclude that the joint prediction of both the location and the focal mechanism of an impending aftershock remains an extremely difficult task in the study area. It is important to note that most analyses of earthquake triggering relying on the correlation between seismicity rate changes and Coulomb stress perturbations do not account for the faulting mechanisms of promoted aftershocks. This approach has the advantage of providing a robust statistical measure of the increase in the rate of earthquake production in a seismogenic area. However, it shares the limitations we have experienced in our application to the Umbria-Marche 1997 seismic sequence. The comparison between the 322 available real focal mechanisms and those inferred from Coulomb stress modeling reveals that no more than 45% of them are similar. The comparison does not improve if we compute the optimally oriented planes for Coulomb failure by fixing the strike orientation of the OOPs using information derived from structural geology.
Figure 3. Main shocks, aftershocks and fault planes of the main events of the sequence.
We have tested whether aftershock failure planes are controlled by geological structures by calculating different sets of OOPs with the strike angle fixed “a priori” by geological observations.
Our results suggest that, despite the agreement between the average orientation of aftershock rupture planes and the orientation of the geological structures mapped at the surface, the structural constraints do not improve the agreement between OOPs and observed fault plane solutions. Our results show that a structural control does exist, even if it is not enough to allow the prediction of OOPs in agreement with focal mechanisms computed from polarity data. The modeling results presented in this study allow us to conclude that elastic stress transfer promoted the occurrence of the largest-magnitude earthquakes of the Umbria-Marche seismic sequence. However, we confirm the findings of Kilb et al. (1997) concerning the inability to reliably reproduce aftershock mechanisms. In our opinion this might be due to: (1) the complex fault mesh and the 3D tectonic setting characterizing such a relatively small seismogenic volume; (2) the poor knowledge of the absolute stress amplitudes and the depth dependence of the tectonic stress tensor; (3) the details of slip distributions at short wavelengths, which can affect the prediction of focal mechanisms for aftershocks located very close to the main rupture planes; (4) the presence of fluids and the pore pressure relaxation caused by coseismic (undrained) stress changes. Our interpretation of these outcomes is that, although several modeling results indicate that elastic stress transfer contributes to promoting earthquakes and explains the interaction among the main faults where the largest-magnitude events nucleated, Coulomb stress perturbations can neither explain the observed temporal migration of seismicity nor correctly predict the aftershock faulting mechanisms.

4. Fluid Flow and seismicity pattern.
We model the spatial and temporal evolution of seismicity during the 1997 Umbria-Marche (central Italy) seismic sequence in terms of subsequent failures promoted by fluid flow.
The diffusion process of pore-pressure relaxation is represented as a pressure perturbation generated by coseismic stress changes and propagating through a fluid-saturated medium. The pore pressure relaxation is modeled through Darcy's law. Using this approach we have evaluated an average value of hydraulic diffusivity for the study area. The estimated values of isotropic diffusivity range between 22 and 90 m2/s, which is consistent with observations in other areas. We have also calculated a value of anisotropic diffusivity (see Antonioli et al., 2005): it is largest along the average strike (N140°) direction of the activated faults, yielding an anisotropic value of 250 m2/s. Our results suggest that the observed spatio-temporal migration of seismicity is consistent with fluid flow. Our interpretation is that pore-pressure relaxation and the drained response of a poro-elastic fluid-saturated medium are activated by the coseismic stress changes caused by earthquakes. The evidence of intermediate-deep fluid circulation and CO2 degassing in this portion of the Apennines (Chiodini et al., 2000) further corroborates our results. For this area, Miller et al. (2004) proposed that seismicity in the hanging wall of normal faults is promoted by a pressure pulse originating (co-seismically) from this known deep source of trapped high-pressure CO2 and propagating into the damage region created by the earthquake. Our results suggest that fluid flow and pore pressure relaxation explain the observed migration of seismicity along the fault strike in the southern portion of the seismogenic volume. We emphasize that in complex, highly fractured media both elastic and poro-elastic interactions act together in promoting seismicity and controlling its spatial and temporal pattern.

5. Conclusive remarks
Our numerical simulations allowed us to investigate earthquake interaction and to provide a physical model to explain main shock and aftershock occurrence.
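The diffusivity estimates in Section 4 can be translated into characteristic migration distances through the commonly used triggering-front relation r(t) = sqrt(4πDt) (after Shapiro and co-workers); a rough sketch, not taken from the paper itself:

```python
import math

def triggering_front_km(diffusivity_m2_s, t_days):
    """Radius (km) of the pore-pressure 'triggering front' r = sqrt(4*pi*D*t)
    after t_days of diffusion with hydraulic diffusivity D (m^2/s)."""
    t_s = t_days * 86400.0
    return math.sqrt(4.0 * math.pi * diffusivity_m2_s * t_s) / 1000.0

# The isotropic diffusivities estimated for Umbria-Marche (22-90 m^2/s)
# give migration distances of a few tens of km over a month:
for D in (22.0, 90.0):
    print(f"D = {D:g} m^2/s -> front after 30 days: "
          f"{triggering_front_km(D, 30):.0f} km")
```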
We have shown that elastic and poro-elastic interactions are necessary to explain the evident spatial and temporal migration of seismicity. We are now trying to verify whether a frictional failure model, such as the Dieterich (1994) model, can predict the observed migration of seismicity. Our preliminary results are encouraging, and this raises the question of which physical parameters of this approach control the spatio-temporal migration. We aim to discuss these issues in this lecture.

6. References
Antonioli, A., D. Piccinini, L. Chiaraluce and M. Cocco (2005), Fluid flow and seismicity pattern: evidence from the 1997 Umbria-Marche (central Italy) seismic sequence, Geophys. Res. Lett., in press.
Chiaraluce, L., W. L. Ellsworth, C. Chiarabba and M. Cocco (2003), Imaging the complexity of an active normal fault system: The 1997 Colfiorito (central Italy) case study, J. Geophys. Res., 108, doi:10.1029/2002JB002166.
Chiaraluce, L., et al. (2004), Complex normal faulting in the Apennines thrust-and-fold belt: The 1997 seismic sequence in central Italy, Bull. Seism. Soc. Am. 94(1), 99-116.
Chiodini, G., F. Frondini, C. Cardellini and L. Peruzzi (2000), Rate of diffuse carbon dioxide Earth degassing estimated from carbon balance of regional aquifers: the case of central Apennine, Italy, J. Geophys. Res., 105, 8423-8434.
Cocco, M., and J. R. Rice (2002), Pore pressure and poroelasticity effects in Coulomb stress analysis of earthquake interactions, J. Geophys. Res., 107, B2, doi:10.1029/2000JB000138.
Dieterich, J. (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99, 2601-2618.
Gomberg, J., M. E. Belardinelli, M. Cocco, and P. A. Reasenberg (2005a), Time-dependent earthquake probabilities, J. Geophys. Res., 110, doi:10.1029/2004JB003404.
Gomberg, J., P. A. Reasenberg, M. Cocco, and M. E. Belardinelli (2005b), A frictional population model of seismicity rate change, J. Geophys.
Res., 110, doi:10.1029/2004JB003405.
Gross, S. (2001), A model of tectonic stress state and rate using the 1994 Northridge earthquake sequence, Bull. Seismol. Soc. Am., 91, 263-275.
Hardebeck, J. L. (2004), Stress triggering and earthquake probability estimates, J. Geophys. Res., 109, B04310, doi:10.1029/2003JB002437.
Harris, R. A. (1998), Introduction to special section: stress triggers, stress shadows, and implications for seismic hazard, J. Geophys. Res., 103, 24347-24358.
Kilb, D., M. Ellis, J. Gomberg and S. Davis (1997), On the origin of diverse aftershock mechanisms following the 1989 Loma Prieta earthquake, Geophys. J. Int., 128, 557-570.
King, G. C. P., and M. Cocco (2001), Fault interaction by elastic stress changes: new clues from earthquake sequences, Advances in Geophysics, 44, 1-38.
Miller, S. A., C. Collettini, L. Chiaraluce, M. Cocco, M. Barchi, J. Boris and P. Kraus (2004), Aftershocks driven by a high-pressure CO2 source at depth, Nature, 427, 724-727.
Nostro, C., L. Chiaraluce, M. Cocco, D. Baumont, and O. Scotti (2005), Coulomb stress changes caused by repeated normal faulting earthquakes during the 1997-1998 Umbria-Marche (Central Italy) seismic sequence, J. Geophys. Res., 110, doi:10.1029/2004JB003386.
Parsons, T. (2005), Significance of stress transfer in time-dependent earthquake probability calculations, J. Geophys. Res., 110, doi:10.1029/2004JB003190.
Shapiro, S. A., R. Patzig, E. Rothert and J. Rindshwentner (2003), Triggering of seismicity by pore-pressure perturbations: permeability-related signature of the phenomenon, Pure Appl. Geophys., 160, 1051-1066.
Steacy, S., J. Gomberg and M. Cocco (2005), Introduction to special section: stress transfer, earthquake triggering and time-dependent seismic hazard, J. Geophys. Res., 110, doi:10.1029/2005JB003692.
Stein, R. S. (1999), The role of stress transfer in earthquake occurrence, Nature, 402, 605-609.
Toda, S., and R. S. Stein
(2003), Toggling of seismicity by the 1997 Kagoshima earthquake couplet: A demonstration of time-dependent stress transfer, J. Geophys. Res., 108, B12, 2567, doi:10.1029/2003JB002527.
Toda, S., R. Stein, K. Richards-Dinger, and S. Bozkurt (2005), Forecasting the evolution of seismicity in southern California: Animations built on earthquake stress transfer, J. Geophys. Res., doi:10.1029/2005JB003415.

Tidal Triggering of Earthquakes: Response to Fault Compliance?
E. S. Cochran1
1Institute of Geophysics and Planetary Physics, Scripps, La Jolla, CA 92093, USA
Corresponding Author: E. S. Cochran, [email protected]
Abstract: Clear tidal forcing of earthquakes has recently been observed by several local and global studies. Using tidal amplitudes in addition to tidal phase, the stress level required for tidal forcing is determined. Cochran et al. (2004) find that natural faults are highly responsive to relatively small stress changes and that faults behave more weakly than would be expected from laboratory rate-and-state friction and stress corrosion measurements. Stress changes on the order of a tenth of a bar can modulate the timing of earthquakes, similar to what is measured for aftershock sequences. In addition, recent geodetic and seismologic studies suggest that faults are not simple planes of slip, but zones of highly damaged, sheared rock. These damage zones are highly compliant and may result in higher stress concentration in the fault than in the surrounding intact rock. Thus, fault zone compliance may reduce the stress loads required for rupture.

1. Introduction
Seismological data are often mined to determine the amplitude of stress perturbation, whether natural or induced, required to trigger an earthquake. One application of standard statistics to a basic seismological question is the question of whether Earth tides affect the timing of earthquakes.
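One such standard statistic, applied in Section 2 below, is Schuster's test for periodicity. A minimal sketch with invented tidal phase angles (in degrees):

```python
import math

def schuster_test(phases_deg):
    """Schuster's test: p-value that the observed clustering of phase angles
    could arise from a uniform (random) distribution; a small p suggests
    tidal modulation of event times."""
    rad = [math.radians(t) for t in phases_deg]
    d2 = sum(math.cos(t) for t in rad) ** 2 + sum(math.sin(t) for t in rad) ** 2
    return math.exp(-d2 / len(rad))

# Phases tightly clustered near 0 degrees give a tiny p-value,
# while evenly spread phases give p near 1:
print(schuster_test([-10, -5, 0, 4, 8, 12, -7, 3]))
print(schuster_test([0, 45, 90, 135, 180, 225, 270, 315]))
```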
For a century or longer, studies have been conducted to determine whether Earth tides modulate the timing of earthquakes. Each of these studies depends on the accuracy of the tidal calculations and the statistical tests applied to the data. Here, an additional step is taken to match recent seismologic and geodetic observations of fault zone properties with the notion that relatively low-amplitude stresses trigger earthquakes. Tidal modulation of the timing of earthquakes is based on the assumption that faults are more likely to rupture when stress levels encourage failure. Computing tidal stresses is not trivial: the calculation includes the direct solid Earth tide, caused by the gravitational pull of the Sun and the Moon, and the indirect Earth tide due to ocean loading. An accurate accounting for the tides is an imperative first step in a global study of tidal triggering (e.g. Tanaka et al., 2002; Cochran et al., 2004). Local studies have had greater success at showing a tidal correlation, as tidal stress can more easily be inferred from local tide gauge measurements (e.g. Tolstoy et al., 2002). Comparing tidal triggering rates with laboratory-derived measurements of triggering by stress loads suggests that natural faults are more responsive to low-amplitude stress changes. Therefore, a mechanism is needed to explain the failure of faults under relatively low applied stress. The answer may lie in recent seismological and geodetic measurements that suggest faults are significantly more compliant than the surrounding country rock. Seismological measurements include observations of velocity reduction within a 100-200 m fault core by fault zone trapped waves (e.g. Li et al., 2003; Li et al., 2004; Rubinstein and Beroza, 2004). Shear wave splitting measurements indicate a highly sheared material with a width that corresponds to the low-velocity trapping structure (Cochran et al., 2005; Boness and Zoback, 2004).
InSAR provides further clues of a highly compliant fault zone that experiences larger strains than the surrounding intact rock (Fialko et al., 2002). We infer a link between the responsiveness of faults to low-amplitude tidal stresses and the observed fault zone compliance.

2. Observations of Tidal Triggering
Cochran et al. (2004) determined the amplitude of stress required to trigger shallow thrust earthquakes by Earth tides. The two statistical tests used to determine significance levels are Schuster's test and a binomial test. Two independent tests were used so that the resulting significance levels could be compared. The Schuster test is a test for periodicity in a sequence and is used to determine whether the timing of the earthquakes is modulated by the periodic fluctuation in the tidal stress. Not only do we find a clear periodicity in the timing of events, but we also see a clear peak of events at the tidal phase (θ) defined to encourage failure. The vector sum over the tidal phase angles is:

D^2 = \left( \sum_{i=1}^{N} \cos\theta_i \right)^2 + \left( \sum_{i=1}^{N} \sin\theta_i \right)^2 ,  (1)

The resulting p-value gives a measure of how random (p ~ 1) or non-random (p ~ 0) the distribution is:

p = \exp\left( -\frac{D^2}{N} \right) ,  (2)

Schuster's test indicates a greater than 99% probability that the events are non-randomly distributed. In addition, the peak of events is shown to be at nearly 0° tidal phase, the phase expected to promote failure. To confirm the results of the Schuster test, Cochran et al. (2004) also employ a binomial test. This simple statistic tests whether the earthquakes are divided equally across the tidal phase. The tidal phase is divided into two tidal phase bins, each covering 180 degrees. One bin contains tidal phases expected to most promote failure (-90 < θ < 90) and the other bin
Figure 1. Percentage of excess events versus the peak tidal stress. Points are located at the mean peak tidal stress and solid horizontal lines indicate the range.
Grey circles are global thrust data; the solid triangle denotes California strike-slip data. Crosses and diamonds show the least-squares fits to rate- and state-dependent friction and stress corrosion, respectively. From Cochran et al. (2004).
contains the remaining tidal phases. A probability of >99.99% that earthquakes correlate with the tides is found for large tidal stresses (> 0.01 MPa). The amplitude of tidal stress required to trigger an earthquake is determined by dividing the data into four bins based on the amplitude of the background peak tidal stress. A clear trend of increased triggering of events with increasing peak tidal stress amplitude is observed (figure 1). Laboratory experiments have been conducted to determine the relationship between the number of events triggered and the applied load. The data are fit to rate-and-state friction and stress corrosion. To fit both laboratory-derived trends we have to assume parameters for the fault viscosity or material properties that suggest natural faults are more responsive to stress loads than those measured in the lab.

3. Fault Zone Compliance
The responsive nature of faults to relatively small stress loads is suggestive of a dependence on a more complex fault structure than a simple slip plane. Several seismological and geodetic studies have found evidence for a complex zone of deformation near the main slip plane. These studies may explain the relative weakness of faults to applied stresses, whether tidal, seismic, or any number of man-induced stress changes. I will briefly outline recent studies indicating that faults have reduced shear moduli reflecting a highly damaged zone with a strong shear fabric. Seismologic studies have been conducted near numerous active faults and a few common features have been observed. Clear indications of reduced P and S velocities have been observed, in addition to waves trapped in the low-velocity fault core.
The shear velocities in the San Andreas Fault are reduced by 30-40% compared to the nearby intact rock (Li et al., 2004). The velocity reduction has been shown to vary across the seismic cycle and may account for some of the variation in measured velocities. In addition, shear wave splitting studies conducted near active faults suggest faults have a shear fabric that reflects the highly damaged nature of the fault zone (Boness and Zoback, 2004; Cochran et al., 2005). Shear wave splitting is used to determine the anisotropic field near a fault and can be used to look for variation near a fault plane. Peng and Ben-Zion (2004) found a large spatial variation in splitting measurements, dependent on whether the wave traveled through the main fault core. A recent study by Cochran et al. (2005) of shear wave splitting along the Parkfield segment of the San Andreas fault shows a clear spatial variation in splitting measurements. Fast directions preferentially oriented parallel to the fault within 100 m of the main fault plane correspond to the location of the main trapping structure detailed by Li et al. (2004) and discussed above. InSAR studies clearly highlight the compliance of faults. Fialko et al. (2002) showed a clear coseismic concentration of strain on faults near the M7.1 Hector Mine earthquake. The strain is concentrated in a zone that is approximately 1 km wide, with an inferred reduction in shear moduli of 50%. This compliant zone extends to at least 5 km depth (Fialko et al., 2002). While the width of the compliant zone observed with InSAR is wider than that seen with the seismologic data, the overall implications are the same. Likely the two data sets are sampling different aspects of the same structure, with the damage grading from a highly sheared fault core out to intact rock (figure 2).

4.
Discussion
Clear evidence for tidal triggering has been observed in several recent studies, suggesting that faults are more responsive to applied stress loads than expected from laboratory results. Seismic and geodetic data suggest faults are highly compliant and localize strain in the highly damaged fault core. The triggering of faults by relatively small stress loads (> 0.01 MPa) may be attributed to the physical structure of the fault itself.
Figure 2. Schematic cross-section of an active fault zone depicting a gradient in damage across the fault. The fault core is a zone of highly deformed rock that contains the main slip planes. Outside of the main fault core the cracks are mostly mode I opening fractures that decrease in density away from the fault (after Chester et al., 2003).

5. Acknowledgement
The work presented here is a collaboration with many researchers including Y. Fialko, Y.-G. Li, P. Shearer, S. Tanaka, and J. E. Vidale. We also thank J. Dieterich and Z. Peng for useful discussions.

6. References
Boness, N. L., and M. D. Zoback (2004), Stress-induced seismic velocity anisotropy and physical properties in the SAFOD Pilot Hole in Parkfield, CA, Geophys. Res. Lett. 31, L15S17, doi:10.1029/2003GL019020.
Cochran, E. S., J. E. Vidale, and Y.-G. Li (2003), Near-fault anisotropy following the Hector Mine earthquake, J. Geophys. Res. 108, 2436, doi:10.1029/2002JB002352.
Cochran, E. S., J. E. Vidale and S. Tanaka (2004), Earth tides can trigger shallow thrust fault earthquakes, Science 306, 1164-1166.
Cochran, E. S., Y.-G. Li, and J. E. Vidale (2005), Anisotropy in the shallow crust observed around the San Andreas Fault before and after the 2004 M6 Parkfield earthquake, Bull. Seis. Soc. Am., in review.
Fialko, Y., D. Sandwell, D. Agnew, M. Simons, P. Shearer and B. Minster (2002), Deformation on nearby faults induced by the 1999 Hector Mine Earthquake, Science 297, 1858-1862.
Li, Y.-G., J. E. Vidale, E. S.
Cochran (2004), Low-velocity damaged structure of the San Andreas Fault at Parkfield from fault zone trapped waves, Geophys. Res. Lett. 31, L12S06, doi:10.1029/2003GL019044.
Peng, Z., and Y. Ben-Zion (2004), Systematic analysis of crustal anisotropy along the Karadere-Duzce branch of the North Anatolian Fault, Geophys. J. Int. 159, 253-274.
Tanaka, S., M. Ohtake and H. Sato (2002), Evidence for tidal triggering of earthquakes as revealed from statistical analysis of global data, J. Geophys. Res. 107, 2211, doi:10.1029/2001JB001577.
Tolstoy, M., F. L. Vernon, J. A. Orcutt and F. K. Wyatt (2002), Breathing of the seafloor: Tidal correlations of seismicity at Axial volcano, Geology 30, 503-506.

Applying the Rate-and-State friction law to an epidemic model of earthquake clustering
R. Console, M. Murru, and F. Catalli
Istituto Nazionale di Geofisica e Vulcanologia, Rome, Italy
Corresponding Author: R. Console, [email protected]
Abstract: Stochastic modeling provides a good description of earthquake clustering, usable for earthquake forecasting purposes. Nevertheless, it does not contain much information about the physical properties of the earthquake process. A possible physical explanation is given by the elastic stress transfer model and the rate-and-state constitutive law, which in combination make it possible to model both the spatial and the temporal behavior of earthquake interaction. Nevertheless, a complete application of these physical concepts to a stochastic model is difficult because of the lack of the necessary information concerning the location and fault mechanism of all the moderate earthquakes.

1. Introduction
Among the different physical models proposed to explain the phenomenon of earthquake interaction, the model of the static stress change produced by a dislocation in the elastic medium has become one of the most popular (Toda et al., 2005; Freed, 2005).
Combined with the rate-and-state friction constitutive law (Dieterich, 1994), this model has been successfully applied both to sequences of large earthquakes and to aftershock series (Stein et al., 1997; Toda et al., 1998, 2005; Toda and Stein, 2003). The success of this model has suggested the idea of using it for forecast purposes. It is commonly understood that any claim of predictive value for an earthquake forecast model must be supported by a statistical test. A popular way to perform such a test uses the concept of likelihood (Ogata, 1998; Console, 2001; Console and Murru, 2001; Schorlemmer et al., 2004). In order to implement a likelihood test of an earthquake forecast hypothesis it is necessary that the model be quantitatively described, so as to allow the computation of the occurrence rate density in the space-time-magnitude domain given the history of the previous seismic activity in the area concerned. In this presentation we review the steps to be followed and the problems encountered on the way towards the application of the stress transfer concept and the rate-and-state friction constitutive law to an algorithm of epidemic type for earthquake clustering.

2. A stochastic model for earthquake clustering and its likelihood

Following Ogata (1998) and Console and Murru (2001), earthquakes are regarded as the realization of a point process modeled by a generalized Poisson distribution. It is assumed that the Gutenberg-Richter law describes the magnitude distribution of all the earthquakes in a sample, with a constant b-value.
The occurrence rate density of earthquakes in space and time is modeled by the sum of two terms, one representing the independent, or spontaneous, activity, and the other representing the activity induced by previous earthquakes:

λ(x, y, t, m) = f · r0(x, y, m) + Σ_{i=1..N} H(t − t_i) · λ_i(x, y, t, m)    (1)

The first term depends only on space, and is modeled by a continuous function of the geometrical co-ordinates, obtained by smoothing the discrete distribution of the past seismicity. The second term depends also on time, and it is factorized in two terms that respectively depend on the space distance and on the time difference from the past earthquakes. A simple way to express these two terms is assuming the spatial distribution as an isotropic function and the time dependence as the modified Omori law. From the expected rate density, the log-likelihood of any realization of the process (actually represented by an earthquake catalog) can be computed straightforwardly through the following equation:

ln L = Σ_{j=1..N} ln[λ(x_j, y_j, t_j, m_j) · V_0] − ∫_X ∫_Y ∫_T ∫_M λ(x, y, t, m) dx dy dt dm    (2)

where λ(x_j, y_j, t_j, m_j) is the occurrence rate density computed for the earthquakes of the catalog and V_0 is a dimensional constant.

3. The application of the stress transfer model

Equation (1) does not contain any physical explanation for the phenomenon of earthquake triggering. Nevertheless Console et al. (2005) tried to modify the form of equation (1) by taking into account some physical constraints that help reduce the number of free parameters in the model. They assumed that the occurrence rate density is described by the equation introduced by Dieterich (1994) as a consequence of the rate-and-state constitutive law:

R(t) = R_0 / {1 + [exp(−Δτ/(Aσ)) − 1] · exp(−t/t_a)}    (3)

where R_0 is the previous background rate density, Δτ is the stress change at the point where R(t) is estimated, and A, σ and t_a are parameters of the constitutive law.
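As a concrete illustration of equations (1) and (2), the sketch below evaluates a two-term conditional rate density and the resulting log-likelihood for a toy catalog. The uniform background, the Gaussian-in-space and Omori-in-time triggered kernel, and all parameter values are illustrative assumptions, not the parameterization fitted by the authors.

```python
import math

# Hypothetical smoothed background rate: uniform over a unit study region.
def background(x, y, mu=0.5):
    return mu

# Hypothetical triggered kernel: isotropic Gaussian in space times a
# magnitude-scaled modified Omori decay in time.
def triggered(dx, dy, dt, m, K=0.1, d=0.05, c=0.01, p=1.1):
    spatial = math.exp(-(dx * dx + dy * dy) / (2 * d * d)) / (2 * math.pi * d * d)
    temporal = K * 10 ** (0.8 * m) / (dt + c) ** p
    return spatial * temporal

def rate_density(x, y, t, history):
    """Equation (1): spontaneous term plus contributions from all
    earthquakes occurring before t (the Heaviside factor H(t - t_i))."""
    lam = background(x, y)
    for xi, yi, ti, mi in history:
        if ti < t:
            lam += triggered(x - xi, y - yi, t - ti, mi)
    return lam

def log_likelihood(catalog, integral, v0=1.0):
    """Equation (2): sum of the log rate densities at the observed events
    minus the integral of the rate over the space-time-magnitude volume
    (supplied here precomputed). The catalog must be time-ordered."""
    total = 0.0
    for j, (xj, yj, tj, mj) in enumerate(catalog):
        total += math.log(rate_density(xj, yj, tj, catalog[:j]) * v0)
    return total - integral
```

Maximizing this quantity over the kernel parameters gives the maximum likelihood fit discussed in the text; in practice the integral term must itself be evaluated numerically.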
The latter three parameters are related to the tectonic stress rate τ̇ by the relation t_a = Aσ/τ̇. Under a number of very crude assumptions, Console et al. (2005) found an expression with only two free parameters for the occurrence rate R'(t) = R(t) − R_0 of events triggered by a number of previous earthquakes. This expression can be used in place of λ(t) in equation (1). It can be applied to a catalog of earthquakes containing only information about the origin time, epicentral coordinates and magnitude of every event. This expression still preserves the circular shape, without any azimuth dependence, as in the purely stochastic models. In the applications to the seismic catalogs of Japan, Italy and California, the two free parameters were obtained by a maximum likelihood best fit. The comparison with the results of the purely stochastic model applied to the same catalogs has shown that the new model incorporating the rate-and-state physical model performs better than the stochastic model only for the catalog of Japan. A possible explanation of this circumstance is that the purely stochastic model contains a free parameter that adjusts the spatial distribution of triggered seismicity to the location error of the epicenters in the catalog (typically 2.5 km for Italy and 1.2 km for California). On the contrary, the physical model has a spatial distribution scaled with the magnitude of the events, implying a strong decay at distances smaller than the location error in the low magnitude range. So, only for the Japanese seismicity, with a magnitude threshold of 4.5, is this effect not too relevant. All the models considered so far are characterized by an isotropic spatial distribution and by positive triggering, so that the effect of previous earthquakes can only be that of increasing the occurrence rate of the following activity.
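A minimal sketch of the Dieterich rate equation (3) makes its behaviour easy to check numerically: a positive stress step raises the rate above the background, a negative one lowers it, and both relax back over the characteristic time t_a. The parameter values used here are arbitrary illustrations.

```python
import math

def dieterich_rate(t, dtau, a_sigma, t_a, r0=1.0):
    """Dieterich (1994) seismicity rate after a stress step dtau (eq. 3):
    R(t) = R0 / (1 + (exp(-dtau / (A*sigma)) - 1) * exp(-t / t_a)),
    where t_a = A*sigma / tau_dot is the aftershock relaxation time."""
    gamma = (math.exp(-dtau / a_sigma) - 1.0) * math.exp(-t / t_a)
    return r0 / (1.0 + gamma)

# A positive step multiplies the initial rate by exp(dtau / (A*sigma));
# a negative step (a stress shadow) divides it; both decay back to r0.
```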
If we computed the stress change at any point around a given earthquake source, we would find that there exist areas where the Coulomb stress change is negative and the occurrence rate becomes smaller than the previous background rate. In this case equation (1) is no longer applicable and a different algorithm should be adopted to compute the likelihood of the model.

4. Developing a more realistic approach

A new method described by Toda et al. (2005) consists in computing the Coulomb stress changes produced by the largest earthquakes in a catalog and plotting the respective occurrence rate density obtained from equation (3) on a dense grid of points distributed over the study area. We tried to apply this method to a seismic sequence that occurred in Umbria, Central Italy, between September 1997 and April 1998. In this sequence, the source parameters (including fault dimensions and focal mechanism) for nine earthquakes of magnitude equal to or larger than 5.0 had been studied in detail, and the respective fault parameters are available in the literature. In our algorithm we have reduced the number of free parameters to just one: Aσ in equation (3). To do so, we have hypothesized that the stress drop is equal to 2.5 MPa for all the earthquakes. Moreover, we have used an original algorithm for obtaining the spatially variable tectonic stress rate τ̇ from the background seismicity. The expected seismicity rate is compared with the real observations of seismic activity. The results are encouraging, and it is possible to observe how the Coulomb stress changes affect the spatial and temporal distribution of the seismicity as predicted by the theory. However, a complete match of the modeled earthquake distribution with the observations is not achievable, due to uncertainties in the epicentral locations and the lack of details about the stress heterogeneity on the faults.
Moreover, the algorithm is applicable only to a limited number of major earthquakes as causative sources, while all the minor events play the role of triggered earthquakes only. This is inadequate if, as shown by Helmstetter (2003) and Helmstetter et al. (2005), small earthquakes play a relevant role in the stress transfer and in the triggering of subsequent seismicity.

5. Conclusion

Stochastic modeling allows the computation of the expected earthquake rate density on a continuous space-time volume, suitable for the validation of a model against others and for real-time forecasts. The significant steps made during the last decades in the physical modeling of earthquake clustering provide a tool for the refinement of these stochastic models. Jointly with the improvement of the seismological observations, these steps appear as progress towards the possible practical application of the scientific research on earthquake clustering.

6. Acknowledgement

We express our gratitude to Massimo Cocco for stimulating discussions that have improved our work.

7. References
Console, R. (2001), Testing earthquake forecast hypotheses, Tectonophysics, 338, 261-268.
Console, R., and Murru, M. (2001), A simple and testable model for earthquake clustering, J. Geophys. Res., 106, 8699-8711.
Console, R., Murru, M., and Catalli, F. (2005, in press), Physical and stochastic models of earthquake clustering, Tectonophysics.
Dieterich, J.H. (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99, 2601-2618.
Freed, A. M. (2005), Earthquake triggering by static, dynamic, and postseismic stress transfer, Annu. Rev. Earth Planet. Sci., 33, 335-367, doi:10.1146/annurev.earth.33.092203.122505.
Helmstetter, A. (2003), Is earthquake triggering driven by small earthquakes?, Phys. Rev. Lett., 91, 058501.
Helmstetter, A., Kagan, Y. Y., and Jackson, D.
D. (2005), Importance of small earthquakes for stress transfer and earthquake triggering, J. Geophys. Res., 110, B05S08, doi:10.1029/2004JB003286.
Ogata, Y. (1998), Space-time point-process models for earthquake occurrences, Ann. Inst. Statist. Math., 50, 379-402.
Schorlemmer, D., Gerstenberger, M., Wiemer, S., Field, E., Jones, L., and Jackson, D. D. (2004), T-RELM Testing Center: Rigorous testing of probabilistic earthquake forecast models, SSA Meeting 2004, Lake Tahoe, USA.
Stein, R. S., Barka, A. A., and Dieterich, J. H. (1997), Progressive failure on the North Anatolian fault since 1939 by earthquake stress triggering, Geophys. J. Int., 128, 594-604.
Toda, S., Stein, R. S., Reasenberg, P. A., and Dieterich, J. H. (1998), Stress transferred by the MW=6.5 Kobe, Japan, shock: Effect on aftershocks and future earthquake probabilities, J. Geophys. Res., 103, 24,543-24,565.
Toda, S., and Stein, R. S. (2003), Toggling of seismicity by the 1997 Kagoshima earthquake couplet: A demonstration of time-dependent stress transfer, J. Geophys. Res., 109, B12, 2567, doi:10.1029/2003JB002527.
Toda, S., Stein, R. S., Richards-Dinger, K., and Bozkurt, S. (2005, in press), Forecasting the evolution of seismicity in southern California: Animations built on earthquake stress transfer, J. Geophys. Res.

Anomalies in the temporal decay of seismic sequences

S. D’Amico1, D. Caccamo2, F. Parrillo2, F. M. Barbieri2 and C. Laganà2
1 Istituto Nazionale di Geofisica e Vulcanologia, via Di Vigna Murata, 60, 00143, Roma, Italy
2 Department of Earth Sciences, University of Messina, Salita Sperone 31, 98166 Messina, Italy
Corresponding Author: S. D’Amico, [email protected]; [email protected]

Abstract: The purpose of this work is to highlight some anomalies in the temporal decay of seismic sequences following a mainshock with magnitude M≥7. The decay can be modeled as a non-stationary Poissonian process whose intensity function is given by Omori’s formula.
The method used estimates the absolute values of the differences between the observed and theoretical temporal trends. Several days before the occurrence of a large aftershock we can see some “seismic anomalies in the decay” that could be considered as precursors.

1. Introduction

The employment of statistics is useful to estimate the probability of future earthquakes. These probabilities are very interesting from the point of view of earthquake physics, and crucial for attempts to forecast the hazards due to large, damaging earthquakes. At present no theoretical model successfully describes earthquake recurrence, so it is necessary to adapt probability distributions based on the earthquake history. Aftershocks typically begin immediately after a mainshock and are distributed through the source volume. Usually the frequency of occurrence of aftershocks decays rapidly according to Omori’s law:

n(t) = k (c + t)^(−p)    (1)

where n(t) is the frequency of aftershocks at time t after the mainshock; k, c and p are constants that depend on the size of the earthquake. The p-value is usually close to 1.0-1.4. The distribution in space of the aftershocks is often related to the fault area or its length. There are many empirical relations to estimate the fault area (A) or the fault length (L). Utsu developed the empirical formula:

log A = 1.02 Ms + 6.0    (2)

Another of these relations, found by Utsu (1969), relates the fractured fault segment length L to the magnitude of the seismic event:

log L = 0.5 Ms − 1.8    (3)

Another important formula is the Gutenberg-Richter relation (1954) between size and frequency of occurrence. They first proposed that the frequency of occurrence in a given period of time and in a given region can be represented by the formula:

log N = a − b M    (4)

where N is the number of earthquakes with magnitude greater than or equal to M, and a and b are constants characteristic of the given area. Usually the constant b is called the b-value.
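Relations (1)-(4) above can be sketched in a few lines of code. The constants in `omori` are placeholders, and the b-value estimator uses the standard Aki (1965) maximum likelihood formula, which is consistent with relation (4) but not explicitly stated by the authors.

```python
import math

def omori(t, k=100.0, c=0.05, p=1.1):
    """Modified Omori law (relation 1): aftershock frequency at t days
    after the mainshock, with placeholder constants k, c, p."""
    return k / (c + t) ** p

def fault_length(ms):
    """Utsu (1969) relation (3): fractured fault segment length L (km)
    from the surface-wave magnitude Ms."""
    return 10 ** (0.5 * ms - 1.8)

def b_value(magnitudes, m_min):
    """Maximum likelihood b-value for the Gutenberg-Richter relation (4)
    (Aki's estimator), using magnitudes at or above the completeness
    threshold m_min."""
    mags = [m for m in magnitudes if m >= m_min]
    mean = sum(mags) / len(mags)
    return math.log10(math.e) / (mean - m_min)
```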
In general, b-values are between 2/3 and 1 and do not show much regional variability. Concerning the occurrence of large aftershocks in the first ten days after a mainshock with magnitude greater than 7, we analyzed 153 sequences from all over the world from 1973 to 2004. We noticed that in the first ten days there is a probability of about 81% of having a large aftershock (with magnitude greater than 5.5).

2. Data and analysis

The study is focused on the Loyalty Islands seismic sequence, S1, which occurred on 27 December 2003 (mainshock latitude and longitude −21.94N, 169.69E; Caccamo et al. 2005a); on the New Guinea seismic sequence, S2, which occurred on 29 April 1996 (mainshock latitude and longitude −6.52N, 155E; Caccamo et al. 2005b); and on the Colfiorito seismic sequence, S3, which occurred on 26 September 1997 (mainshock latitude and longitude 43.08N, 12.81E; Parrillo et al. 2005). Data come from the NEIC-USGS data bank (http://neic.usgs.gov/neis/epic/) and from INGV catalogues for Italian data (www.ingv.it). We calculated “a priori” the dimensions of the involved area using the Utsu (1969) empirical relation (relation 3). We acquired data in a square with sides at a distance 3L from the mainshock epicenter and in the period of the first 10 days starting from the mainshock. Using these data, we calculated the “barycentre” of the aftershock sequence (D’Amico et al., 2004). The completeness threshold of a dataset is evaluated using the Gutenberg-Richter diagram. For the temporal duration D of the seismic sequence, D = n1 + n2, where n1 is the number of days equal to the number of aftershocks in the first 24 hours starting from the occurrence of the mainshock, and n2 is the number of days, after n1, needed to reach and include 10 days in which there are no earthquakes. This is an “a priori” hypothesis used to cut the sequence in an appropriate way.
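The empirical duration rule D = n1 + n2 described above can be sketched as follows, assuming the sequence is summarized as a list of daily aftershock counts starting on the mainshock day (a hypothetical input format, not the authors' code):

```python
def sequence_duration(daily_counts):
    """Duration rule D = n1 + n2: n1 equals the number of aftershocks in
    the first 24 hours; n2 counts the days after n1 needed to reach and
    include 10 consecutive days without earthquakes. If no such quiet
    window occurs, the whole remaining span is counted."""
    n1 = daily_counts[0]
    quiet, n2 = 0, 0
    for day, count in enumerate(daily_counts[n1:], start=1):
        n2 = day
        quiet = quiet + 1 if count == 0 else 0
        if quiet == 10:
            break
    return n1 + n2
```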
In mathematical terms, the observed temporal series of the shocks per day can be considered as a sum of a deterministic contribution and a stochastic contribution. These contributions are, respectively, the aftershock decay with a power law, i.e. the modified Omori formula (Utsu et al., 1995), and the random fluctuations around the mean value represented by the above-mentioned power law. The decay phenomenon can be modeled as a non-stationary Poissonian process whose intensity function is given by Omori’s law (Matsu’ura, 1986); the number of aftershocks in a time interval ∆t is, as a mean value, n(t)·∆t, with standard deviation σ = √(n(t)·∆t). We may expect that the random fluctuations around the mean value n(t) will be within a 2.5σ range (which represents approximately the 99% confidence level). We find the anomalies by estimating the ratios between the absolute values of the differences between the observed and theoretical temporal trends (∆) and the standard deviations of the theoretical values (σ). We consider the quantity ∆/σ > 2.5 an “anomaly”. We considered a term K1 to take into account the background seismicity. This parameter should be typical of the seismicity of an area. We assumed in this work K1 = 1 (Caccamo et al., 2005).

3. Discussion

Using these data the temporal trends n(t), number of shocks per day, for S1, S2 and S3 can be calculated as shown in figures 1a, 1d and 1g. The three plots of figure 1, for S1, S2 and S3 respectively, have the same temporal scale. In figures 1b, 1e and 1h shocks with magnitude M>5.5 are shown. In figures 1c, 1f and 1i the ∆/σ values versus time t in days are shown, excluding the initial 10 days. Regarding S1 there is one ∆/σ value exceeding this threshold (2.5, as described before): ∆/σ = 3.68 on the 12th day, with a shock with magnitude M=5.8 on the 17th day (plots 1b and 1c).
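The ∆/σ criterion described above can be sketched as follows: under the Poisson model the theoretical daily count n(t)·∆t (with ∆t = 1 day) has standard deviation √(n(t)·∆t), and days where |observed − theoretical|/σ exceeds 2.5 are flagged. The input lists of daily counts are hypothetical.

```python
import math

def anomalies(observed, theoretical, threshold=2.5):
    """Return (day, delta/sigma) pairs where the normalized deviation of
    the observed daily count from the Omori-law prediction exceeds the
    threshold, using the Poisson sigma = sqrt(n(t) * dt) with dt = 1 day."""
    flagged = []
    for day, (obs, theo) in enumerate(zip(observed, theoretical), start=1):
        sigma = math.sqrt(theo)
        if sigma > 0:
            ratio = abs(obs - theo) / sigma
            if ratio > threshold:
                flagged.append((day, ratio))
    return flagged
```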
Regarding S2 there are 3 ∆/σ values exceeding the threshold: ∆/σ = 2.91 on the 10th day, ∆/σ = 2.56 on the 11th day and ∆/σ = 3.70 on the 13th day, with shocks with magnitude M=6.3 and M=5.6 on the 12th and 13th day respectively (plots 1e and 1f). Regarding S3 there are 5 ∆/σ values exceeding the threshold: ∆/σ = 2.98 on the 11th day, ∆/σ = 7.98 on the 17th day, ∆/σ = 3.99 on the 18th day, ∆/σ = 4.99 on the 19th day and ∆/σ = 3.99 on the 20th day, with shocks with magnitude M=5.8 on the 11th day and M=5.7 on the 19th day (plots 1h and 1i).

Figure 1 (a) Temporal trend n(t) for the sequence S1; (b) temporal trend of the shocks with M>5.5 for S1; (c) ∆/σ values versus time for S1. (d) Temporal trend n(t) for the sequence S2; (e) temporal trend of the shocks with M>5.5 for S2; (f) ∆/σ values versus time for S2. (g) Temporal trend n(t) for the sequence S3; (h) temporal trend of the shocks with M>5.5 for S3; (i) ∆/σ values versus time for S3. For every sequence the three plots have the same temporal scale.

4. Conclusion

It is observed that, several days before the occurrence of large aftershocks, it is possible to point out anomalies in the decay, considered as deviations from the mean value n(t) greater than 2.5σ, which are likely not random fluctuations but possible precursors. The data concerning the temporal series, checked according to completeness criteria, come from the NEIC-USGS data bank and from INGV catalogues for Italian data. From the scientific point of view, this kind of analysis offers new horizons to the research on seismic anomalies and precursors.

5. Acknowledgement

The authors wish to thank Aybige Akinci from the Istituto Nazionale di Geofisica e Vulcanologia of Rome (Italy). Moreover the authors would like to thank Prof. Yosi Ogata and the Organization of “the 4th International Workshop on Statistical Seismology”.

6. References
Caccamo, D., Barbieri, F.
M., Laganà, C., D’Amico, S., Parrillo, F. (2005a, in press), A study about the aftershocks sequence of 27 December 2003 in Loyalty Islands, Bollettino di Geofisica Teorica ed Applicata.
Caccamo, D., Barbieri, F. M., Laganà, C., D’Amico, S., Parrillo, F. (2005b, in press), The temporal series of the New Guinea 29 April 1996 aftershock sequence, Physics of the Earth and Planetary Interiors.
D’Amico, S., Caccamo, D., Barbieri, F. M., Laganà, C. (2004), Some methodological aspects related to the prediction of large aftershocks, ESC, Book of Abstracts, pp. 132-133.
Gutenberg, B., Richter, C. F. (1954), Seismicity of the Earth, Princeton University Press, Princeton, N.J., pp. 310.
Matsu’ura, R. S. (1986), Precursory quiescence and recovery of aftershock activities before some large aftershocks, Bulletin of the Earthquake Research Institute, University of Tokyo, 61, 1-65.
Parrillo, F., Caccamo, D., D’Amico, S., Barbieri, F. M., Laganà, C. (2005), Analisi della sequenza sismica di Colfiorito (Umbria) del 26 Settembre 1997 con l’applicazione del metodo delta/sigma, Convegno GNGTS.
Utsu, T., Ogata, Y., Matsu’ura, R. S. (1995), The centenary of the Omori formula for a decay law of aftershock activity, J. Phys. Earth, 43, 1-33.
Utsu, T. (1969), Aftershocks and earthquake statistics (I): source parameters which characterize an aftershock sequence and their interrelations, J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 129-195.

Some analysis of seismicity in the Sumatra area

S. D’Amico1, D. Caccamo2, A. Zirilli3, F. Parrillo2, S.M.I. D’Amico3, A. Alibrandi3
1 Istituto Nazionale di Geofisica e Vulcanologia, via Di Vigna Murata, 60, 00143, Roma, Italy
2 Department of Earth Sciences, University of Messina, Salita Sperone 31, 98166 Messina, Italy
3 Faculty of Statistical Sciences, University of Messina, Via Centonze, 98100 Messina, Italy
Corresponding Author: S.
D’Amico, [email protected]; [email protected] Abstract: Investigating both aftershock behaviour and a wide spectrum of parameters may find the key to better explain the mechanism of seismicity as a whole. In particular the purpose of this work is to apply some different statistical approaches related to the seismicity of “Sumatra area” (Indonesia). We focalised our attention on the seismicity from 1973 to 2004 and on different seismic sequences happened in this area. We estimated the variation of p-value, b-value, the energy involved into the decay process and the fractal dimensions D0 and D2 to highlight the clustering of events in time. 1. Introduction The earthquakes are natural phenomena that occur in particular zones of the Earth’s surface. They are related to complex processes. One of the classic problems in global seismology has been the mapping of earthquake distribution. This mapping has played a key role in the evolution of the theory of plate tectonics, which describes the large-scale relative motion of a mosaic of lithosferic plates on the Earth’s surface. Usually the energy during an earthquake is released from the mainshock. It could be preceded by foreshocks and follows by a lot of aftershocks. Statistical studies related to the seismicity and occurrence of earthquakes could give us useful interpretation to better explain the mechanism of seismicity. 2. Data and analysis In particular the purpose of this work is to apply some different statistical approach related to the seismicity of “Sumatra zone”. We analysed also three different seismic sequence happened in this area on 13 February 2001, 2 November 2002 and 26 December 2004 having the magnitude of the mainshock respectively M=7.4, M=7.6 and M=9. We evaluated in this case also the b-value, p-value, cumulative energy, and the fractal dimensions D0 and D2 to highlight the clustering in time of events; everything is related to some anomalies in the temporal decay. 
We consider an anomaly when, during the aftershock decay, the absolute value of the difference between the observed and the theoretical temporal trend is greater than 2.5σ, σ being the standard deviation (Caccamo et al., 2005a,b; D’Amico et al., 2005; Parrillo et al., 2005). Data used in this study come from the NEIC-USGS data bank (http://neic.usgs.gov/neis/epic/). A large earthquake causes severe damage, and it will likely be followed by several aftershocks of considerable size in a short time span. Earthquakes located within a characteristic distance from the main event can be considered aftershocks. Regarding the seismicity of the area and the behaviour of the aftershocks, we are also trying to use other statistical methods, such as the non-parametric “NPC Ranking” procedure and the Cox-Stuart approach. The first is a statistical method that allows drawing up classifications: a classification can be associated with every observation, and its peculiarity is that it allows passing easily from a partial classification to a general one. This peculiarity has statistical interest because it summarizes the effects of the single observations. In order to estimate temporal aspects of the seismicity we used a non-parametric test, because it permits establishing the trend of a phenomenon over a given period. The Cox-Stuart test permits verifying, for each variable considered, whether a monotonic increasing or decreasing trend exists in the central tendency or in the variability.

3. Discussion

Some analyses focused on the different seismic sequences that occurred in this area on 13 February 2001 (Latmain=−4.68N; Lonmain=102.56E), 2 November 2002 (Latmain=2.82N; Lonmain=96.08E) and 26 December 2004 (Latmain=3.3N; Lonmain=95.98E), with mainshock magnitudes M=7.4, M=7.6 and M=9 respectively.
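The Cox-Stuart test mentioned above can be sketched in a few lines. This is the textbook form of the test (pair the first half of the series with the second half, count the signs of the differences, and compute a two-sided binomial tail probability), not code from the authors.

```python
import math

def cox_stuart(series):
    """Cox-Stuart sign test for a monotonic trend: pair each value in the
    first half with the value half a series later (dropping the middle
    value if the length is odd), count positive differences, and return
    the two-sided binomial p-value under H0 (no trend)."""
    n = len(series)
    c = n // 2
    pairs = [(series[i], series[i + c + (n % 2)]) for i in range(c)]
    diffs = [b - a for a, b in pairs if b != a]   # ties are discarded
    m = len(diffs)
    s = sum(1 for d in diffs if d > 0)
    k = max(s, m - s)
    # two-sided tail probability of >= k successes out of m fair coin flips
    p = 2 * sum(math.comb(m, j) for j in range(k, m + 1)) / 2 ** m
    return min(p, 1.0)
```

A small p-value (a strongly one-sided sign count) suggests a monotonic trend in the series; a value near 1 is consistent with no trend.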
To define a sequence in space we calculated “a priori” the dimensions of the involved area using Utsu’s (1969) empirical relation, and we also calculated the “barycentre” of the aftershock sequence (D’Amico et al., 2004). The completeness threshold was evaluated using the Gutenberg-Richter relation (1954). Typically, the frequency of occurrence of aftershocks decays rapidly. For the temporal duration d of a seismic sequence we empirically considered d = n1 + n2, where n1 is the number of days equal to the number of aftershocks in the first 24 hours starting from the occurrence of the mainshock, and n2 is the number of days, after n1, needed to reach and include 10 days in which there are no earthquakes. The time clustering of seismic activity is investigated through fractal dimensions. Here we used the fractal method based on the estimation of the box-counting dimension and the correlation dimension for the seismic sequences located in the Sumatra area. The analyses are made before large aftershocks with M>5.5 that occurred in the time sequence. In the analysed sequences we can notice a clustering in time in both the box-counting dimension and the correlation dimension, which take values very close to zero. Relating to the NPC Ranking, in order to classify the earthquakes registered in the investigated area, we divided the magnitude M into four intervals.

4. Conclusion

Concerning the aftershock sequences, we notice that there are some anomalies in the temporal decay before the occurrence of large aftershocks. These anomalies are related to the variations, over a short period, of the parameters mentioned above.

5. Acknowledgement

The authors wish to thank Aybige Akinci from the Istituto Nazionale di Geofisica e Vulcanologia of Rome (Italy). Moreover the authors would like to thank Prof. Yosi Ogata and the Organization of “the 4th International Workshop on Statistical Seismology”.

6. References
Caccamo, D., Barbieri, F.
M., Laganà, C., D’Amico, S., Parrillo, F. (2005a, in press), A study about the aftershocks sequence of 27 December 2003 in Loyalty Islands, Bollettino di Geofisica Teorica ed Applicata.
Caccamo, D., Barbieri, F. M., Laganà, C., D’Amico, S., Parrillo, F. (2005b, in press), The temporal series of the New Guinea 29 April 1996 aftershock sequence, Physics of the Earth and Planetary Interiors.
D’Amico, S., Caccamo, D., Barbieri, F. M., Laganà, C. (2004), Some methodological aspects related to the prediction of large aftershocks, ESC, Book of Abstracts, pp. 132-133.
D’Amico, S., Caccamo, D., Barbieri, F. M., Laganà, C. (2005), Use of fractal geometry for a study of seismic temporal series, E.G.U., Geophysical Research Abstracts, Vol. 7.
Gutenberg, B., Richter, C. F. (1954), Seismicity of the Earth, Princeton University Press, Princeton, N.J., pp. 310.
Parrillo, F., Caccamo, D., D’Amico, S., Barbieri, F. M., Laganà, C. (2005), Analisi della sequenza sismica di Colfiorito (Umbria) del 26 Settembre 1997 con l’applicazione del metodo delta/sigma, XXIV Convegno G.N.G.T.S.
Utsu, T. (1969), Aftershocks and earthquake statistics (I): source parameters which characterize an aftershock sequence and their interrelations, J. Fac. Sci. Hokkaido Univ., Ser. VII, 3, 129-195.

Stress- and State-Dependence of Earthquake Occurrence

James Dieterich
University of California, Riverside, Department of Earth Sciences, and Institute of Geophysics and Planetary Physics, Riverside, CA 92521, USA

Abstract: A general physics-based formulation has been developed that quantitatively relates earthquake rates and stressing history (Dieterich, 1994; Dieterich and Kilgore, 1996; Dieterich and others, 2003). Because earthquake nucleation processes determine the time and location of earthquakes, this model treats seismicity as a sequence of nucleation events.
Using solutions for the nucleation of unstable fault slip on faults with rate- and state-dependent constitutive properties, a general constitutive law for the rate of earthquake production as a function of stressing history is derived. Forward modeling applications of this formulation include evaluations of the effect of stress changes on earthquake probabilities, earthquake triggering, foreshocks, and aftershocks. Used in a reverse mode, observed seismicity rate changes may be inverted to calculate the changes of stress that drive the earthquake process. Two topics are currently being investigated. The first involves seismicity and aftershocks associated with stress changes that arise from geometric incompatibilities for slip on non-planar faults. The second topic is the use of observed seismicity rate changes to solve for changes of stress in space and time. Application of this method at Kilauea Volcano, Hawaii, yields results that agree with forward calculations based on deformation observations. The solutions for stress changes at the time of moderate and large earthquakes at Kilauea consistently show stress shadows, but comparable effects are less evident for earthquakes in California.

References
Dieterich, J.H. (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99, 2601-2618.
Dieterich, J. H., and Kilgore, B. D. (1996), Implications of fault constitutive properties for earthquake prediction, Proc. Natl. Acad. Sci., 93, 3787-3794.
Dieterich, J. H., V. Cayol, and P. Okubo (2003), Stress Changes Before and During the Puu Oo–Kupaianaha Eruption, in: The Puu Oo–Kupaianaha Eruption of Kilauea Volcano, Hawaii: The First 20 Years, U.S. Geological Survey Professional Paper 1676, 187-202.
Tutorial on Stress- and State-Dependent Model of Earthquake Occurrence: Part 1 – Model and Formulation

James Dieterich
University of California, Riverside, Department of Earth Sciences, and Institute of Geophysics and Planetary Physics, Riverside, CA 92521, USA

Abstract: This tutorial will review the background and derivation of the stress- and state-dependent formulation for earthquake rates. This includes a review of rate- and state-dependent friction and earthquake nucleation, which are the basis for the model. The derivation of the model will be examined, including discussion of the approximations and limitations of this modeling approach. Finally, general characteristics of the formulation and modeling strategies with examples will be presented.

Tutorial on Stress- and State-Dependent Model of Earthquake Occurrence: Part 2 – Applications

James Dieterich
University of California, Riverside, Department of Earth Sciences, and Institute of Geophysics and Planetary Physics, Riverside, CA 92521, USA

Abstract: This tutorial will work through step-by-step examples of calculations with the stress- and state-dependent formulation for earthquake rates. These will include forward models of aftershocks, and earthquake triggering by tides or stress waves. Alternative strategies and parameter estimation for reverse calculations, which employ observed seismicity to infer changes of Coulomb stress, will be covered as well.

Seismology in the Source: The San Andreas Fault Observatory at Depth

W. L. Ellsworth1, S. H. Hickman1, and M. D. Zoback2
1 U.S. Geological Survey, 345 Middlefield Road, MS-977, Menlo Park, CA 94025, USA
2 Department of Geophysics, Stanford University, Stanford, CA 94305, USA
Corresponding Author: W. L.
Ellsworth, [email protected] Abstract: The main drill hole of the San Andreas Fault Observatory at Depth successfully penetrated the fault in the summer of 2005, setting the stage for the deployment of a multi-component earthquake observatory within the core of this major plate boundary fault. Results obtained from instruments at the edge of the near field region provide a tantalizing glimpse of earthquake nucleation and propagation processes. 1. Introduction The San Andreas Fault Observatory at Depth (SAFOD) is a comprehensive project to establish a multi-stage geophysical observatory in the hypocentral zone of repeating M~2 earthquakes within the San Andreas Fault as part of the U.S. National Science Foundation’s EarthScope Program. The goals of SAFOD are to exhume rock and fluid samples for extensive laboratory studies, to conduct a comprehensive suite of downhole measurements in order to study the physical and chemical conditions under which earthquakes occur, and to monitor changes in deformation, fluid pressure and seismicity directly within the fault zone during multiple earthquake recurrence cycles (Hickman, et al, 2004). The main drilling target is a pair of M~2 earthquakes that have recurred multiple times over the past two decades (Figure 1). These target earthquakes are located at a depth of 3 km on the San Andreas Fault immediately to the northwest of the Parkfield segment (Waldhauser et al., 2004). Prior to the September 28, 2004 M 6.0 Parkfield earthquake, the target events occurred every 2-3 years, with the “S.F.” event typically preceding the “L.A.” event by a minutes to a few days. Clearly, these two sources interact. Following the 2004 Parkfield earthquake, the recurrence rate dramatically increased, with these repeating earthquake sources contributing events to the Parkfield aftershock sequence. The interval between the events has been steadily lengthening since, following Omori-like behavior. 
The close coupling between “S.F.” and “L.A.” has also been broken. Consequently, there is much to learn about earthquake interaction at SAFOD, in addition to earthquake nucleation and propagation.

2. Building the Observatory
Extending 4 km into the Earth along a diagonal path that crosses the divide between Salinian basement accreted to the Pacific Plate and Cretaceous sediments of North America, the main hole at the San Andreas Fault Observatory at Depth is designed to provide a portal into the inner workings of a major plate boundary fault. The successful drilling and casing of the main hole in the summer of 2005 to a total vertical depth of 3.1 km make it possible to conduct spatially extensive and long-duration observations of active tectonic processes within the actively deforming core of the San Andreas Fault. The observatory consists of retrievable seismic, deformation and fluid pressure sensors deployed inside the casing in both the main hole (maximum temperature 135°C) and the collocated pilot hole (1.1 km depth), and a fiber optic strainmeter installed behind casing in the main hole. By using retrievable systems deployed on either wire line or rigid tubing, each hole can be used for a wide range of scientific purposes, with instrumentation that takes maximum advantage of advances in sensor technology. To meet the scientific and technical challenges of building the observatory, borehole instrumentation systems developed for use in the petroleum industry and by the academic community in other deep research boreholes have been deployed in the SAFOD pilot hole and main hole over the past year. These systems included 15 Hz omni-directional and 4.5 Hz gimbaled seismometers, micro-electro-mechanical accelerometers, tiltmeters, sigma-delta digitizers, and a fiber optic interferometric strainmeter.
A 1200-m-long, 3-component, 80-level clamped seismic array was also operated in the main hole for 2 weeks of recording in May of 2005 by Paulsson Geophysical Services, collecting continuous seismic data at a rate of 4000 samples/sec.

Figure 1 Cross section in the plane of the San Andreas Fault showing repeating earthquakes of up to M2 in the drilling target, 1984-2005. The main trace of the San Andreas is associated with a northwestern event in red (San Francisco) and a southeastern event in blue (Los Angeles); the green events are associated with a subsidiary trace of the San Andreas located 230 m closer to the SAFOD wellhead. Earthquakes are shown as circles centered on their hypocentroids with a diameter scaled by their moment for a 9 MPa stress drop (Imanishi et al., 2004). The S.F. and L.A. repeating earthquake sources are M 1.8 – 2.2. Hypocentroid locations courtesy of F. Waldhauser (after Waldhauser et al., 2004).

3. Discussion
Some of the observational highlights include capturing one of the M 2 SAFOD target repeating earthquakes (Figure 1) in the near field at a distance of 420 m, with accelerations of up to 200 cm/s² and a static displacement of a few microns. Numerous other local events were observed over the summer by the tilt and seismic instruments in the pilot hole, some of which produced strain offsets of several nanostrain on the fiber optic strainmeter. We were fortunate to observe several episodes of non-volcanic tremor on the 80-level seismic array in May 2005. These spatially unaliased recordings of the tremor wavefield reveal that the complex tremor time series is comprised of up- and down-going shear waves that produce a spatially stationary interference pattern over time scales of 10s of seconds.

4. Conclusion
The SAFOD observatory is now in operation. All data collected at SAFOD as part of the EarthScope project are open and freely available to all interested researchers. The Northern California Earthquake Data Center at U.C.
Berkeley is the principal data repository for these SAFOD data. The more than 3 TB of 80-level array data are also available at the IRIS DMC as an assembled data collection.

5. Acknowledgement
SAFOD is a component of the U.S. National Science Foundation’s EarthScope Project. SAFOD has also received major financial support from the International Continental Drilling Program and the U.S. Geological Survey. We thank the scores of researchers from around the world who have been instrumental to the success of the SAFOD program.

6. References
Hickman, S.H., Zoback, M.D., and Ellsworth, W.L. (2004), Introduction to special section: Preparing for the San Andreas Fault Observatory at Depth, Geophys. Res. Lett., 31, L12S01, doi:10.1029/2004GRL020688.
Imanishi, K., Ellsworth, W.L., and Prejean, S.G. (2004), Earthquake source parameters determined by the SAFOD Pilot Hole seismic array, Geophys. Res. Lett., 31, L12S09, doi:10.1029/2004GRL029420.
Waldhauser, F., Ellsworth, W.L., Schaff, D.P., and Cole, A.T. (2004), Streaks, multiplets and holes: High-resolution spatio-temporal behavior of Parkfield seismicity, Geophys. Res. Lett., 31, L18608, doi:10.1029/2004GRL020649.

Quantifying the early aftershock activity of the 2004 Mid Niigata Prefecture Earthquake (Mw6.6)
Bogdan Enescu1, Jim Mori1, Masatoshi Miyazawa1, Takuo Shibutani1, Kiyoshi Ito1, Yoshihisa Iio1, Takeshi Matsushima2 and Kenji Uehira2
1Disaster Prevention Research Institute, Kyoto University, Kyoto, Japan
2Inst. of Seismology and Volcanology, Faculty of Sciences, Kyushu University, Shimabara, Japan
Corresponding Author: Bogdan Enescu, e-mail: [email protected]
Abstract: The goal of this work is to determine an accurate image of the aftershock activity immediately following the 2004 Mid Niigata (Chuetsu) earthquake. For this purpose, we examined the JMA earthquake catalog and also the continuous seismograms recorded at six Hi-net seismic stations located close to the aftershock distribution.
By high-pass filtering the seismograms we were able to count aftershocks starting at about 35 sec. after the mainshock. The results show that the c-value, which expresses the saturation of the aftershock rate close to the mainshock, is very small (less than about 3 min.), but has a non-zero value of about 100 sec. that relates to the physics of the aftershock generation process.

1. Introduction
The occurrence rate of aftershocks is empirically well described by the Modified Omori formula (Utsu, 1961): n(t) = k/(t+c)^p, where n(t) is the frequency of aftershocks per unit time, at time t after the mainshock, and k, c and p are constants. The parameter c, which relates to the rate of aftershock activity in the earliest part of an aftershock sequence, typically ranges from 0.5 to 20 hours in empirical studies (Utsu et al., 1995). Since the behaviour of aftershock sequences during the first minutes and hours after the mainshock is a significant component of theoretical models of seismicity (e.g., Dieterich, 1994; Rubin, 2002), it is essential to determine whether the coefficient c is a physical parameter (Vidale et al., 2003; Shcherbakov et al., 2004) or whether it simply reflects instrument and technique deficiencies (Kagan, 2004). We focus in this study on the decay of the early aftershock activity following the 2004 Niigata earthquake (M6.8) and try to estimate the c parameter by using a) JMA (Japanese Meteorological Agency) earthquake catalog data and b) continuous Hi-net waveform data.

2. Analysis of the JMA catalog data
2.1 Analysis of the decay of aftershock numbers
We first analyse the decay of the aftershocks of the 2004 Niigata earthquake by using the JMA catalog data. By constructing the frequency-magnitude distribution of earthquakes at 0.01, 0.05, 0.2 and 0.5 days after the mainshock, for a 100-event window, we could easily estimate the magnitude of completeness (Mc) of the catalog at these times.
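To make the Modified Omori formula and the maximum likelihood fitting of its parameters concrete, the sketch below builds a deterministic synthetic aftershock sequence from known k, c and p, then recovers c and p by maximizing the point-process log-likelihood (in the spirit of Ogata (1983)) on a coarse grid. All parameter values are hypothetical choices for illustration, not estimates from the Niigata data.

```python
import numpy as np

# True Modified Omori parameters, n(t) = k/(t + c)**p, with t in days
# (hypothetical values; c = 0.02 days, not the values estimated in this study).
k0, c0, p0, T = 500.0, 0.02, 1.1, 30.0

def integral(T, c, p):
    """Integral of (t + c)**-p over [0, T], for p != 1."""
    return (c**(1 - p) - (T + c)**(1 - p)) / (p - 1)

# Deterministic synthetic catalog: event times where the cumulative
# expected count k * integral(t) passes 0.5, 1.5, 2.5, ...
N = int(k0 * integral(T, c0, p0))
targets = np.arange(N) + 0.5
times = (c0**(1 - p0) - targets * (p0 - 1) / k0) ** (1 / (1 - p0)) - c0

def neg_loglik(c, p):
    """Negative point-process log-likelihood of the Modified Omori model,
    log L = sum_i log n(t_i) - integral_0^T n(t) dt,
    with k profiled out analytically (k_hat = N / integral)."""
    I = integral(T, c, p)
    k_hat = len(times) / I
    logL = len(times) * np.log(k_hat) - p * np.log(times + c).sum() - k_hat * I
    return -logL

# Coarse grid search over (c, p); a real implementation would use a
# numerical optimizer on the same likelihood instead.
cs = np.linspace(0.005, 0.06, 56)
ps = np.linspace(1.01, 1.40, 40)
grid = np.array([[neg_loglik(c, p) for p in ps] for c in cs])
ic, ip = np.unravel_index(grid.argmin(), grid.shape)
c_hat, p_hat = cs[ic], ps[ip]
```

With this idealized catalog the likelihood maximum lands close to the true c and p; on real data the recovered c is only meaningful if the catalog is complete at early times, which is exactly the caveat this study raises.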
By fitting these data, we could determine a semi-logarithmic relationship between the level of catalog completeness, expressed by Mc, and the time elapsed since the mainshock, t. The decay of aftershocks, for different threshold magnitudes, is fitted using the Modified Omori formula and the parameters p, c and k are determined in each case using a maximum likelihood algorithm (Ogata, 1983). For example, the c-values obtained by using threshold magnitudes of 3.0 and 4.0 are 0.027 and 0.012 days, respectively. These values, however, are comparable with the times at which the catalog becomes complete for events above the corresponding threshold magnitude. This observation demonstrates that it is practically impossible to estimate a “real” c-value from earthquake catalog data.

2.2 Analysis of the decay of the moment release rate of aftershocks
We also determine the decay of the seismic moment release rate of aftershocks, using the approach suggested by Kagan and Houston (2004). An advantage of using the moment summation of aftershocks instead of the usual counting of aftershock numbers is that the missing small events at the beginning of the catalog are weighted much less because their seismic moment is relatively small. Assuming that the aftershock size distribution follows the Gutenberg-Richter relation, we could also calculate the moment rate due to the missing small aftershocks and compensate for an incomplete catalog record (Kagan and Houston, 2004). As one can notice in Fig. 1, there is little difference between the moment release rate calculated using the catalog data and the “corrected” rates, which account for the under-reported small aftershocks. The only notable difference is at the very beginning of the aftershock sequence. The decay of the corrected moment release rate is well fitted by a power-law with a slope of 1.2 in the log-log plot of Fig. 1.
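Why the moment-based correction stays small can be seen with a back-of-the-envelope calculation. The sketch below is only an illustration of the idea behind Kagan and Houston (2004), whose implementation differs in detail; it assumes a Gutenberg-Richter distribution with b = 1, moment proportional to 10^(1.5M), and hypothetical minimum and maximum magnitudes.

```python
def missing_moment_correction(mc, m_min, m_max, b=1.0):
    """Factor multiplying the moment sum of aftershocks with M >= mc to
    account for unreported events in [m_min, mc), assuming a Gutenberg-
    Richter magnitude distribution and moment ~ 10**(1.5*M).
    (A sketch of the idea in Kagan and Houston (2004); their exact
    implementation differs.)"""
    e = 1.5 - b  # exponent of the moment-rate density per unit magnitude
    total = 10**(e * m_max) - 10**(e * m_min)       # moment from all events
    observed = 10**(e * m_max) - 10**(e * mc)       # moment from M >= mc only
    return total / observed

# Even with completeness as poor as Mc = 3.5 (illustrative values),
# the correction is only a few percent:
f = missing_moment_correction(mc=3.5, m_min=1.0, m_max=6.6)
```

Because the moment integral is dominated by the largest events, f stays near 1 unless Mc approaches m_max, which is consistent with the small difference between corrected and uncorrected rates noted above.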
The graph suggests that there is no significant saturation of the moment release rate of aftershocks close to the mainshock. As the first aftershock recorded in the JMA catalog occurred about 3 minutes after the mainshock, it is impossible to get information on the aftershock moments at earlier times. We can only speculate that the c-value is smaller than about 3 minutes (~0.002 days). We also show in the same figure the source time function of the mainshock, determined by Miyazawa et al. (2005). The extrapolation of the fit of the moment decay of aftershocks to the end of the mainshock rupture seems to suggest a smooth transition from the mainshock rupture to the aftershock process.

Figure 1 Decay of the uncorrected and corrected moment release rates of aftershocks and a power-law fit to the data (slope of 1.2 in the log-log graph). The source time function of the mainshock is also shown.

3. Analysis of the Hi-net continuous waveform data
As a next step in our analysis, we try to identify as many events as possible on the continuous seismograms recorded at six Hi-net stations located close to the aftershock distribution. The waveforms were high-pass filtered at 7 Hz to attenuate the low-frequency coda of the mainshock. Clear early aftershocks can be identified on the filtered waveforms, starting at about 35 sec. after the mainshock (Fig. 2). Most of these early events are not recorded in the JMA catalog. We then select two representative stations (YNTH and NGOH), situated on opposite sides of and close to the aftershock distribution, and count the events with amplitudes above some threshold value that can be clearly seen on the continuously recorded waveforms. This threshold value corresponds to about a magnitude 3.3 earthquake. The identified events satisfy well the Modified Omori law, with a p-value close to 1.0. The c-value is about 100 sec (~0.001 days), i.e.
the number of events is smaller than predicted by a simple power-law distribution at very short times after the mainshock.

Figure 2 Clear early aftershocks (indicated by arrows) can be identified on these high-pass filtered waveforms, starting at about 35 sec. after the mainshock. The names of the recording Hi-net stations are also shown.

4. Conclusion
By using three different approaches we tried to estimate the c-value of the Modified Omori law for the aftershocks of the 2004 Niigata earthquake. We showed that the value of the parameter estimated by the usual counting of aftershocks mainly reflects the under-reporting of small events at the beginning of the aftershock sequence. An alternative approach, using the decay of the moment release rate of aftershocks, indicates that the c-value is smaller than about 3 minutes (~0.002 days). An additional argument for a very small c-value is suggested by the smooth transition between the source time function of the mainshock and the moment release rate of aftershocks. By analysing the high-pass filtered waveform data recorded at several Hi-net stations, we confirmed that the c-value is very small, but has a non-zero value of about 100 sec (~0.001 days). This observation relates to the physics of the aftershock generation process.

5. Acknowledgements
We would like to thank NIED and JMA for allowing us to use their earthquake data.

The Snowball Effect: Statistical Evidence that Big Earthquakes are Rapid Cascades of Small Aftershocks
K. R. Felzer1
1United States Geological Survey, 525 S. Wilson, Pasadena, CA 91106, USA
Corresponding Author: K. R. Felzer, [email protected]
Abstract: Subevents can be clearly seen in event seismograms, and subevent models are necessary to fit strong motion data. To what extent might earthquake subevents be made up of smaller subevents? And smaller subevents still?
Vere-Jones (1976) and Kagan and Knopoff (1981) demonstrated that an ultimate subevent model, in which the smallest, tiny component subevents are of equal size, could recreate the Gutenberg-Richter earthquake magnitude frequency distribution. In the Kagan and Knopoff (1981) model, mainshocks grow when tiny subevents rapidly trigger each other via a process identical to aftershock triggering. This model successfully reproduces realistic seismograms and realistic foreshock and aftershock sequences over longer time scales. Here I propose a model similar to Kagan and Knopoff (1981) except that equal faulting area, rather than the original equal seismic moment, characterizes the smallest subevents. This change is supported by physical and statistical considerations. My proposed model predicts that the number of aftershocks produced by an earthquake should scale with its faulting area and that no property of an aftershock sequence except for the total number of aftershocks should be influenced by mainshock size. These predictions are well supported by observations.

1. Introduction
There are significant differences in the rupture of homogeneous and heterogeneous materials. In a homogeneous glass or metal a small crack may often grow quickly and catastrophically. In a heterogeneous material such as rock, however, strength and elastic moduli may vary, and a crack that starts in one place may have difficulty spreading through neighboring stronger material. The amount of growth difficulty will depend on the applied load. For a small load cracks will never propagate beyond the areas of lowest strength; for a high load unbounded growth will always occur. At the critical load, growth is possible but jerky as the crack feels the small-scale heterogeneity. Dynamic elastic waves are emitted by this jerky motion, which may initiate slip in new areas. Scale-invariant (power-law) avalanche-type slip occurs, and the crack tip is self-affine (Ramanathan and Fisher, 1998).
Earthquakes clearly rupture through heterogeneous materials, and it is likely that they occur at critical stress loads. Tectonic stresses accumulate slowly, and slow stress accumulation means that stresses can stabilize fairly precisely at the level just large enough to make slip possible. The observed power-law distribution of earthquake sizes provides further evidence for critical rupture dynamics. Physically, the avalanche-like behavior of critical point rupture is very difficult to model. Statistical stochastic models may recreate the process quite easily and accurately. Kagan and Knopoff (1981) proposed a compelling stochastic age-dependent continuous-state branching process model for earthquake propagation. In the Kagan and Knopoff (1981) model small seismic subevents of equal size (hereafter referred to as “unit earthquakes”) trigger each other rapidly, via the aftershock triggering process, to create larger earthquakes. The idea that aftershock triggering might occur over such a short time scale is supported by the fact that a minimum triggering time, while it must exist, has not yet been found (Peng et al., 2005; Kilb et al., 2004). Furthermore, Kagan and Knopoff (1981) find that allowing the unit earthquakes to trigger each other with time delays that adhere to Omori’s law for aftershock decay creates realistic seismograms. When the activity level of the model is tuned to the critical point, where sustained growth just becomes possible (the point where each unit earthquake triggers, on average, one other), Gutenberg-Richter magnitude frequency statistics are also reproduced. In the Kagan and Knopoff (1981) model unit earthquakes are defined as having equal seismic moment, but the model does not reproduce a b value of 1.0 for the Gutenberg-Richter relationship. Alternatively, I propose that the unit earthquakes should be of equal area. Equal-area subevents fit easily into a physical model.
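The critical-branching idea — each unit earthquake triggering, on average, one other — can be sketched with a toy Galton-Watson cascade. This is only an illustration of criticality producing heavy-tailed event sizes, not an implementation of the Kagan and Knopoff (1981) model (no Omori-law time delays; the Poisson offspring law, cascade cap, and sample counts are arbitrary choices):

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Poisson draw via Knuth's multiplication method (stdlib only)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def cascade_size(mean_offspring, cap=50_000):
    """Total number of unit earthquakes in one cascade: each active unit
    triggers a Poisson number of new units, generation by generation,
    until the cascade dies out (or hits an arbitrary size cap)."""
    size = frontier = 1
    while frontier and size < cap:
        children = sum(poisson(mean_offspring) for _ in range(frontier))
        size += children
        frontier = children
    return size

sub = [cascade_size(0.5) for _ in range(1000)]   # subcritical: events stay small
crit = [cascade_size(1.0) for _ in range(1000)]  # critical: heavy-tailed sizes
```

Subcritical cascades die out quickly (mean total size 1/(1 − 0.5) = 2), while at the critical point a few cascades grow enormously — the qualitative behavior behind the Gutenberg-Richter statistics described above. The b value such a model actually produces depends on how unit size maps to magnitude, which is the point of the equal-area proposal.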
If we assume that points of high strength are randomly distributed in a heterogeneous medium, then this will produce a characteristic length scale over which we expect initiated slip to grow before it starts to slow down. The “equal area” model does produce a Gutenberg-Richter relationship with b=1.0. Three testable predictions can be made from the equal-area unit earthquake stochastic model of earthquake growth: 1) The propagation of single earthquakes and the production of aftershocks are one and the same process. Therefore the triggering of aftershocks, like the growth of mainshocks, must be accomplished by dynamic stress changes. 2) All aftershock triggering is accomplished by unit earthquakes, whose characteristics (other than final slip) and triggering ability are always the same, independent of final mainshock magnitude. Thus the characteristics of aftershock production by earthquakes of every magnitude should be the same except for the number of aftershocks, which should vary linearly as a function of faulting area (i.e. vary linearly with the number of unit earthquakes). 3) In the Kagan and Knopoff (1981) branching process, the activity of each branch is independent of the activity of all other branches. This means that each aftershock can be traced to a unique mainshock, and that subsequent earthquakes will not affect the timing of each aftershock. In the following sections we will provide brief descriptions of tests of these predictions.

2. Test 1: Are aftershocks triggered by dynamic stress changes?
I have performed two tests with E. E. Brodsky (Felzer and Brodsky, 2005; Felzer and Brodsky, submitted) on whether aftershocks are triggered by static or dynamic stress changes. The first test was for stress shadows, or whether earthquake-induced stress changes are capable of slowing other earthquakes down. This is predicted to occur if earthquake timing is affected by static stress changes, but may not occur if dynamic stress changes are most important.
We found no evidence for stress shadows in California, supporting the dynamic triggering hypothesis. A similar lack of stress shadowing has also been found by other authors, at least for the first several months following a mainshock (Parsons, 2002; Marsan, 2003; Mallman and Zoback, 2003). The second test performed for static vs. dynamic triggering was to inspect the rate of decay of aftershock density with distance from mainshocks in California. We found a constant, inverse power-law decay rate at distances ranging from a fraction of a fault length (of M 5-6 earthquakes) to over a hundred fault lengths (of M 2-3 mainshocks). The single, continuous decay rate indicates that a single aftershock triggering mechanism is operating over the entire distance range. Only dynamic stress changes are large enough to trigger at distances over several fault lengths. This suggests that dynamic stress changes must be the triggering agent at all distances (Figure 1).

Figure 1 Distance from the mainshock vs. aftershock linear density (aftershocks/km) for combined aftershock data of M 2-3 (A) and M 3-4 (B) California mainshocks. All aftershocks are M>2 and occur in the first 5 minutes after their mainshock. The decay is well fit by r^(-1.30 ± 0.10) for the M 2-3 data and r^(-1.37 ± 0.07) for the M 3-4 data, where r is the distance from the mainshock hypocenter.

3. Test 2: Aftershock scaling and mainshock magnitude
There are really two predictions that the model makes with respect to aftershock scaling with mainshock magnitude. The first is that the total number of aftershocks will scale linearly with mainshock faulting area, or as 10^(αM) where α = 1, since 10^M scales as faulting area. This scaling has been found by Yamanaka and Shimazaki (1990), Michael and Jones (1998), Felzer et al. (2004), and Helmstetter et al. (2005), among others. The second prediction is that no other property of aftershocks, besides number, will be affected by mainshock magnitude.
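The decay-rate measurement can be illustrated on synthetic data: the sketch below draws distances (deterministically, by quantiles) from a power law with an exponent near those reported above, bins them into logarithmically spaced annuli, and recovers the exponent by log-log regression on the linear density. The exponent, distance range, sample size, and bin count are all illustrative choices.

```python
import numpy as np

# Deterministic synthetic aftershock distances drawn (by quantiles) from a
# power-law linear density rho(r) ~ r**-q between r0 and r1 km.
q, r0, r1, N = 1.35, 0.1, 100.0, 100_000
u = (np.arange(N) + 0.5) / N  # uniform quantile levels
r = (r0**(1 - q) + u * (r1**(1 - q) - r0**(1 - q))) ** (1 / (1 - q))

# Bin into logarithmically spaced annuli and form the linear density
# (events per km of distance), as in the measurement described above.
edges = np.logspace(np.log10(r0), np.log10(r1), 21)
counts, _ = np.histogram(r, bins=edges)
widths = np.diff(edges)
centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
density = counts / widths

# The slope of log density vs. log distance recovers the decay exponent -q.
slope, intercept = np.polyfit(np.log10(centers), np.log10(density), 1)
```

With constant-ratio bins a pure power law gives density exactly proportional to center^(-q), so the fitted slope matches -q up to counting granularity; on real catalogs the same regression is complicated by completeness and by the choice of mainshock-aftershock association.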
Many authors have found that aftershock magnitude is independent of mainshock magnitude (Kagan, 1991; Michael and Jones, 1998; Felzer et al., 2004) (Figure 2). Felzer and Brodsky (submitted) demonstrated that the spatial distribution of aftershocks is independent of mainshock magnitude. Helmstetter et al. (2005), Gerstenberger et al. (2005) and others have found the temporal distribution of aftershocks to be independent of mainshock magnitude.

4. Test 3: Does each aftershock have a single parent?
The last prediction of the model is that aftershock triggering is a binary process, such that each aftershock is triggered by one and only one mainshock. This implies that aftershock triggering is effectively an “on” or “off” process; a fault can be either triggered or left alone by another earthquake, and once triggered cannot be turned off again before its aftershock occurs. While this idea is counterintuitive with respect to many physical models (such as models of accelerating failure), it nonetheless finds support in the data. The density of aftershocks swiftly drops with distance from the mainshock fault. Yet Felzer (2005) found that the temporal distribution of aftershocks does not vary with mainshock distance. This indicates that stress change amplitude affects the total number of aftershocks triggered, but not the timing of the aftershocks that do occur – so the stress change must work as a simple “on” or “off” switch (Figure 3). While this finding does not prove that an aftershock cannot be multiply affected by different mainshocks, the “on or off” quality of aftershock triggering suggests that single-parent triggering is plausible.

Figure 2 The average number of aftershocks/mainshock for different relative mainshock – aftershock magnitude differences. The data set is composed of 101,680 M≥2.2 California earthquakes (1975-2001). Mainshocks are used if they do not occur within the first 30 days of the aftershock sequence of a larger earthquake.
Aftershocks occur within the first 2 days and 2 fault lengths of their mainshocks. The black line gives the relationship predicted if the aftershock magnitude distribution is simply the Gutenberg-Richter distribution (with b=1) regardless of mainshock magnitude, and if the total number of aftershocks produced varies linearly with mainshock faulting area.

Figure 3 We inspect the first 30 days of M≥3 aftershocks of the 1992 M 7.3 Landers, California earthquake. Groups of 200 earthquakes are taken from three different distance ranges from the mainshock fault. Stress change amplitudes and resulting seismicity rate changes affecting each group are different. The ratio of the average daily seismicity rate in the first 30 days of aftershocks vs. the average since 1975 is 4,546 for 0-1 km, 649 for 1-14.4 km, and 613 for 14.3-43.7 km, respectively (there is little difference between the latter two zones because the Joshua Tree aftershock sequence created high prior seismicity rates in the 1-14.4 km zone). Despite the differences in stress amplitude and seismicity rate changes, the decay rates of the aftershocks in the three regions lie on top of each other – their temporal distribution is the same (Kolmogorov-Smirnov test, 95% confidence). This indicates that stress change amplitude does not affect aftershock timing, suggesting that aftershock triggering is a binary “on” or “off” process.

5. Conclusion
I propose that earthquake propagation can be physically modeled as the propagation of a crack through a heterogeneous medium, with stress loading set to the critical point. I propose that the statistical characteristics of this model are well represented by the stochastic model of Kagan and Knopoff (1981), in which earthquake propagation is modeled as a branching process of rapid aftershock triggering by tiny, equal-sized subevents. I model the subevents, or unit earthquakes, as having equal area, and make several predictions from the model.
These include that aftershocks should be triggered by dynamic stress changes (because aftershock triggering and mainshock propagation are the same process), that aftershock production should scale with mainshock faulting area, that all properties of aftershock production, besides their total number, should be independent of mainshock magnitude, and that each aftershock should have a unique mainshock. Support for all of these predictions can be found statistically in aftershock catalog data.

6. Acknowledgement
I thank Emily Brodsky, Rachel Abercrombie, Göran Ekström, Jim Rice, Thorsten Becker, Lucy Jones, Alan Felzer, Mike Harrington, and many, many others for assistance with different components of this work.

7. References
Felzer, K.R., Abercrombie, R.E., and Ekström, G. (2004), A common origin for aftershocks, foreshocks, and multiplets, Bull. Seis. Soc. Am., 94, 88-99.
Felzer, K.R., and Brodsky, E.E. (2005), Testing the stress shadow hypothesis, J. Geophys. Res., 110, B05S09, doi:10.1029/2004JB003277.
Felzer, K.R., and Brodsky, E.E. (2005, submitted), Decay rate of aftershocks with distance indicates triggering by dynamic stress change.
Gerstenberger, M.C., Wiemer, S., Jones, L.M., and Reasenberg, P.A. (2005), Real-time forecasts of tomorrow's earthquakes in California, Nature, 435, 328-331, doi:10.1038/nature03622.
Kagan, Y.Y. and Knopoff, L. (1981), Stochastic synthesis of earthquake catalogs, J. Geophys. Res., 86, 2853-2862.
Kagan, Y.Y. (1991), Likelihood analysis of earthquake catalogs, Geophys. J. Int., 106, 135-148.
Kilb, D., Martynov, V., and Vernon, F. (2004), Examination of the temporal lag between the 31 October 2001 Anza, California M 5.1 mainshock and the first aftershocks recorded by the Anza Seismic Network, Seis. Res. Lett., 74, 254.
Mallman, E.P. and Zoback, M.D. (2003), Using seismicity rates to assess Coulomb stress transfer models, EOS Trans. AGU, 84(46), Fall Meet. Suppl., Abstract S32F-03.
Marsan, D.
(2003), Triggering of seismicity at short timescales following Californian earthquakes, J. Geophys. Res., 108, 2266, doi:10.1029/2002JB001946.
Michael, A.J. and Jones, L.M. (1998), Seismicity alert probabilities at Parkfield, California, revisited, Bull. Seis. Soc. Am., 88, 117-130.
Parsons, T. (2002), Global Omori decay of triggered earthquakes: Large aftershocks outside the classical aftershock zone, J. Geophys. Res., 107, doi:10.1029/2001JB000646.
Peng, Z., Vidale, J.E., Ishii, M., and Helmstetter, A. (2005), Early aftershock decay rates, Abstract, Southern California Earthquake Center, Annual Meeting.
Vere-Jones, D. (1976), A branching model for crack propagation, Pure App. Geophys., 114, 711-725.
Yamanaka, Y. and Shimazaki, K. (1990), Scaling relationship between the number of aftershocks and the size of the mainshock, J. Phys. Earth, 38, 305-324.

Earthquake Dynamic Triggering and Ground Motion Scaling
J. Gomberg1, K. Felzer2, E. Brodsky3
1US Geological Survey, 3876 Central Ave., Memphis, Tennessee, 38152, USA
2US Geological Survey, 525 Wilson Ave., Pasadena, California, 91106, USA
3Univ. of Calif. Los Angeles, 1708 Geology Bldg., Los Angeles, California, 90095, USA
Corresponding Author: J. Gomberg, [email protected]
Abstract: While a growing body of evidence indicates that dynamic deformations (seismic waves) trigger earthquakes at all distances, it is not clear what characteristics of the dynamic deformations are most relevant to the mechanisms that cause triggering. In addition, if we are ultimately to account for dynamic triggering in time-dependent hazard assessments, we need to know how the relevant deformations vary, or scale, with distance from and size of the earthquake that generates them. In this study we address these issues by assuming that the probability of triggering aftershocks is proportional to a power of the peak amplitude of dynamic deformations affecting a fault, or a combination of these and rupture duration.
We then ask whether these probabilities are consistent with measurements of triggered seismicity, in particular, linear aftershock densities. Preliminary results suggest that the hypothesis that peak acceleration alone controls aftershock triggering can be rejected. The other mechanisms remain viable, but require different values of d, the effective dimension of the fault population.

1. Introduction
Testing the consistency of dynamic deformations with aftershock densities requires developing an analytic model relating the two. The model is constrained by measured scaling parameters and tested using numerical simulations. In this study ‘aftershocks’ are those events that occur within minutes to days after a larger mainshock (triggering earthquake), without a priori limits on their distance. We consider not just the transfer of stresses from triggering to potentially triggered faults but dynamic deformations that we are able to measure using recorded ground motions, which include displacements and their temporal rates of change at the surface. These serve as proxies for displacements, strains, and strain rates at seismogenic depths (Gomberg and Agnew, 1996).

2. Testing Our Hypothesis
We assume that the probability, P(r), of dynamic deformations triggering a single aftershock at distance r is proportional to some power, χ, of the peak deformation amplitude, G(r), or alternatively of the product of G(r)^χ and rupture duration, which scales as 10^(M/2). We measure peak ground motions to constrain a model of G(r),

G(r) = K D^m / (αD + r)^n    (1)

where D is the rupture dimension. The constants K, m, n, and α are constrained observationally for velocities only in the near-field (r < D). The linear aftershock density, ρ(r), is then the number of available faults per unit distance, N_f(r)/Δr, multiplied by the triggering probability,

ρ(r) = [N_f(r)/Δr] P(r)    (2)

Felzer and Brodsky (2005) show empirically that

ρ(r) = C 10^(M−Mmin) r^(−γ)    (3)

with constant distance decay rate, γ, to within ~10s of meters of the mainshock.
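The behavior of the ground motion model in equation (1) can be checked numerically. In the sketch below K, α, and D are arbitrary placeholders, and m and n are set near the PGV estimates reported later in the text:

```python
import numpy as np

K, m, n, alpha = 1.0, 1.5, 1.7, 1.0   # placeholder constants
D = 2.0                               # rupture dimension (km), illustrative

def G(r):
    """Peak ground motion model of equation (1): K*D**m/(alpha*D + r)**n."""
    return K * D**m / (alpha * D + r) ** n

# Far field (r >> D): G decays as r**-n, so a log-log fit gives slope ~ -n.
r_far = np.logspace(2.5, 3.5, 50)
slope = np.polyfit(np.log10(r_far), np.log10(G(r_far)), 1)[0]

# Near field (r << D): G flattens toward the finite value K*D**(m-n)/alpha**n,
# the saturation that fault finiteness produces close to the rupture.
near_limit = K * D**(m - n) / alpha**n
```

The two limits are the qualitative features used below: a simple power-law decay far from the fault, and amplitude saturation close to it.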
To measure N_f(r) we sum all the faults within a volume defined by a surface, S, surrounding the mainshock at distance r everywhere, and width Δr, or

N_f(r) = [∫_S F(r′) ds] Δr    (4)

F(r′) describes the number of faults per unit volume and its variation throughout the crust. Numerical models show that F(r′) can be considered locally as A r^(d−3), in which d is the effective dimensionality of the fault system. If we consider only planar, rectangular faults, S is simply defined and the analytic model is derived as

ρ(r) = 4πA K̃^(2/m̃) D^2 [1 + (D/r) + (1/2π)(D/r)^2] r^(d−1) (αD + r)^(−2n/m̃) = C D^2 10^(−Mmin) r^(−γ)    (5)

The tilde indicates m or m+1 if triggering depends on peak ground motions only or on the product of these and duration, respectively. We can now ask if measured ground motion scaling parameters are consistent with this model and measured aftershock densities. In the far-field, r >> D, this model applies to acceleration, velocity, and displacement (because we can constrain it observationally) and may be simplified to

ρ(r) = 4πA K̃^(2/m̃) r^(d−1−2n/m̃) = C 10^(−Mmin) r^(−γ)    (6)

This constrains d, noting that

d = 1 + 2n/m̃ − γ    (7)

Viable triggering deformations must satisfy the limit that d < 3, and according to previous studies, d < ~2. In the near-field our model applies only to velocities or their product with duration, and to be consistent they must have parameters that satisfy the following:

ρ(r) = [2A α^(−2n/m̃) K̃^(2/m̃)] D^(2−2n/m̃) r^(d−3) = C 10^(−Mmin) r^(−γ)    (8)

This is only satisfied if n = m̃. We examine three datasets. The first includes aftershock locations and magnitudes, and PGAs, PGVs, and PGDs for 22 Japanese earthquakes with magnitudes of M3.0-6.1. Observations come from the National Research Institute for Earth Science and Disaster Prevention (NIED)’s Hi-net network in Japan, comprised of >700 seismic stations.
The second dataset includes the catalog of the California Integrated Seismic Network (CISN) and PGAs, PGVs, and PGDs for the 2005 Mw5.2 Anza and 2005 Mw4.9 Yucaipa earthquakes in southern California, measured from waveforms recorded by the California Geological Survey CSMIP, the US Geological Survey NSMP, the University of California San Diego (UCSD) Anza networks, and the CISN. We chose these two earthquakes because of the abundance of data and their well-characterized aftershock sequences. Because of the depths and source dimensions of these earthquakes, almost all the measurements are in the far-field. We measure the closest distance between the fault and the observation point for the California and global data (below) and hypocentral distance for the Japanese data. Our final dataset, of PGVs only, was compiled to examine ground motion scaling at both near- and far-field distances, which required data from large earthquakes that rupture near the surface. Measuring aftershock densities uniformly would be extremely difficult given the heterogeneity of available catalogs, and was not done. The PGVs were measured for distances of 0.1 km to 5300 km from earthquakes with magnitudes M4.4 to M7.9. They include already-measured values or waveforms from which we measured PGVs; data sources included the Pacific Earthquake Engineering Research Center, the Incorporated Research Institutions for Seismology (IRIS), Japan's NIED K-Net, and the Seismology Lab at the University of Nevada Reno. We first constrain the ground motion model parameters. The Japanese data provide the primary constraint on the amplitude scaling of D^m, with values of m~1.0, 1.5, and 2.0 for PGA, PGV, and PGD, respectively. The global and California datasets are consistent with these, but do not provide as strong a constraint. We find best-fitting far-field distance decay rates for PGA, PGV, and PGD of ~r^{−n} with n~2.0, 1.7, 1.5 and n~2.1, 1.7, 1.4 for the Japanese and California data, respectively.
The far-field global PGVs are best fit with n~1.5; the higher estimates for the Japanese and California data fall within the scatter. The global data provide the only observational constraint on near-field ground motions, and show that as the triggering faults are approached, the decay rate decreases and PGVs converge to approximately the same value. This is a consequence of fault finiteness; as the triggering fault is approached, smaller areas contribute to the radiation (Brune, 1970; Anderson, 2000). In this particular case, in which m~n, a convenient scaling of r by D also removes all magnitude dependence (Fig. 1); this is evident from our ground motion model, which becomes G(r)~K/(α + r/D)^n when m~n.

Figure 1. PGVs for our global dataset versus the distance scaled by rupture dimension D, and our ground motion model (red curve); deformations are ~the same for all earthquakes at the same scaled distance, and maximum at r/D < ~1, where triggering potential is greatest.

Observations of aftershock densities for the Japanese and California earthquakes suggest distance decay rates of γ~1.1 and 1.3, respectively. These, together with the PGA, PGV, and PGD parameter estimates and equation (5), imply faulting dimensionalities of d~3.9-4.2, 2.0-2.5, and 1.1-1.7, respectively, if we consider peak amplitudes alone. If we also consider the effect of duration, the dimensionalities become d~1.8-2.2, 0.8-1.3, and 0.4-1.0, respectively. The large value of d for PGA (strain rate) alone suggests it does not control dynamic triggering. We use ground motion observations, G(r), and estimated scaling parameters to predict aftershock densities, to test our model and the consistency of the dynamic deformations with the densities (Fig. 2).

Figure 2. Anza and Yucaipa earthquake aftershock densities and those predicted from the ground motion observations, G(r), directly via our amplitude-only model (colored symbols); predictions agree within the scatter of the density measurements (triangles).
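The dimensionality ranges just quoted follow from equation (7). A quick numerical check, using the scaling estimates reported above for the Japanese data (with γ = 1.1, and m̃ = m for peak amplitude alone or m̃ = m + 1 when duration is included):

```python
def fault_dimension(n, m_tilde, gamma):
    # equation (7): d = 1 + 2n/m_tilde - gamma
    return 1.0 + 2.0 * n / m_tilde - gamma

# (m, n) estimates for PGA, PGV, PGD quoted in the text
scaling = {"PGA": (1.0, 2.0), "PGV": (1.5, 1.7), "PGD": (2.0, 1.5)}
for motion, (m, n) in scaling.items():
    d_amp = fault_dimension(n, m, 1.1)      # peak amplitude alone
    d_dur = fault_dimension(n, m + 1, 1.1)  # amplitude times duration
    print(f"{motion}: d = {d_amp:.1f} (amplitude), {d_dur:.1f} (with duration)")
```

With these inputs the PGA value comes out near 3.9, reproducing the implausibly large dimensionality (d > 3) that argues against peak acceleration alone as the controlling deformation.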
Our analytic model applies to velocities at all distances, so we use it with scaling parameters measured in the far-field to predict aftershock densities in the near-field (red dashed curve); these are consistent with those observed. The relationships between deformations and densities in the near-field potentially provide strong constraints on viable deformation characteristics, although additional numerical modeling and data acquisition and analyses are required to verify this.

3. Discussion & Conclusion

We test a suite of hypotheses that the probability of dynamic triggering of a fault at distance r from a triggering rupture is related directly to maximum dynamic deformations (displacements, strains, or strain rates), or to the product of these with the rupture duration. To do this we have developed an analytic model that may be constrained observationally and with numerical experiments. Preliminary results suggest we can reject the hypothesis that aftershocks are triggered by maximum strain rates alone. Additional analyses of near-field data may permit us to rule out other deformation characteristics. Our model allows us to quantify the probability that a given deformation will trigger an earthquake on a particular fault at distance r, which is important for evaluating the viability of physical models of dynamic weakening and nucleation (Beeler and Lockner, 2003; Brodsky and Prejean, 2005; Johnson and Jia, 2005; Marone and Savage, in preparation). We show an example in Figure 3. Since the deformations approach a constant as the fault is approached, the probabilities very close to the fault are the same.
At larger distances, they depend on the rupture dimension such that by ~1 rupture dimension they have decreased to 10% of the maximum probability (corresponding to PGVs of ~15 cm/s or ~50 µstrain), except in cases with exceptionally large (or perhaps long-duration) ground motions at remote distances, when the probabilities will be correspondingly elevated (Gomberg and Johnson, 2005). Aftershock densities also describe probabilities: the probability of an aftershock occurring at distance r anywhere surrounding the triggering fault. However, they fall off at a constant rate with distance, regardless of the rupture dimension. Numerical models show that all these behaviors may be explained if the deformations and triggering from a large rupture are simply the sums of those from smaller ones on the same fault. Our results do not preclude the possibility that other characteristics of the deformation field may be important, some of which we will measure and test similarly. These include shaking duration, total radiated energy, and perhaps frequency content.

Figure 3. Left) Probabilities calculated as [G(r)/G(0)]^χ with m = 2, n = 1.75, α = 0.4 for hypothetical faults with the dimensions labeled. Right) The probability of an aftershock occurring at any azimuth within a distance Δr varies as ρ(r); γ = 1.5 in this example. The integral of this provides the probability of an aftershock occurring at any distance less than r.

4. Acknowledgements

This work is supported by the USGS National Earthquake Hazards Reduction Program. We also acknowledge Miaki Ishii and Frank Vernon (both at UCSD) for providing data.

5. References

Anderson, J.G. (2000), Expected shape of regressions for ground-motion parameters on rock, Bull. Seismo. Soc. Am., 90, S43-S52.
Beeler, N. M., and Lockner, D. A. (2003), Why earthquakes correlate weakly with the solid Earth tides: Effects of periodic stress on the rate and probability of earthquake occurrence, J. Geophys. Res., 108(B8), 2391, doi:10.1029/2001JB001518.
Brodsky, E. E., and S.
G. Prejean (2005), New constraints on mechanisms of remotely triggered seismicity at Long Valley Caldera, J. Geophys. Res., 110, B04302, doi:10.1029/2004JB003211.
Brune, J.N. (1970), Tectonic stress and the spectra of seismic shear waves from earthquakes, J. Geophys. Res., 75, 4997-5009.
Felzer, K.R. and Brodsky, E.E. (2005, in press), Evidence for dynamic aftershock triggering from earthquake densities, Nature, 14 p.
Gomberg, J. and Agnew, D.C. (1996), The accuracy of seismic estimates of dynamic strains from Pinyon Flat Observatory, California, strainmeter and seismograph data, Bull. Seismo. Soc. Am., 86, 212-220.
Gomberg, J. and Johnson, P. (2005), Dynamic triggering of earthquakes, Nature, 437, p. 830, doi:10.1038/nature04167.
Helmstetter, A., Kagan, Y.Y. and Jackson, D.D. (2005), Importance of small earthquakes for stress transfers and earthquake triggering, J. Geophys. Res., 110, B05S08, doi:10.1029/2004JB003286.
Johnson, P. and Jia, X. (2005), Nonlinear dynamics, granular media and dynamic earthquake triggering, Nature, 437, 871-874, doi:10.1038/nature04015.
Kanamori, H. and Anderson, D.L. (1975), Theoretical basis of some empirical relations in seismology, Bull. Seismo. Soc. Am., 65, 1073-1095.

RETAS: a restricted ETAS model inspired by Bath's law

D. Gospodinov1 and R. Rotondi2
1 Geophysical Institute of the Bulgarian Academy of Sciences, Centralna Posta PK 258, 4000 Plovdiv, Bulgaria
2 C.N.R. - Istituto di Matematica Applicata e Tecnologie Informatiche, Via Bassini 15, 20133 Milano, Italy
Corresponding Author: R. Rotondi, [email protected]

Abstract: A restricted trigger model is proposed to analyse the temporal behaviour of some aftershock sequences. The conditional intensity function of the model is similar to that of the Epidemic Type Aftershock-Sequence (ETAS) model, with the restriction that only the aftershocks of magnitude greater than or equal to some threshold Mtr can trigger secondary events.
For this reason we have named the model the restricted epidemic type aftershock-sequence (RETAS) model. Varying the triggering threshold, we examine the variants of the RETAS model, which range from the Modified Omori Formula (MOF) to the ETAS model, including such models as limit cases.

1 Introduction

Bath's law states that the difference between the main shock magnitude and the magnitude of the strongest aftershock is constant, on average 1.2 for Richter and 1.4 for Utsu (Ogata, 2001). By extending this principle to the subsequences generated by primary events in a trigger model, we get that the difference between the weakest primary event and the weakest event in the aftershock sequence must be at least 1.2. This suggests choosing the primary events among those of magnitude larger than a suitable threshold Mtr. The aim of our work is to examine the class of trigger models obtained by varying Mtr between the cut-off M0 and the maximum magnitude Mmax. At first we simulated two data sets, according to the ETAS and RETAS models respectively, and checked that the true model was identified by the Akaike criterion. Then we analysed two aftershock sequences showing different behaviour: the first, generated by the strong earthquake of M = 7.8 on April 4, 1904 in the Kresna region, SW Bulgaria, is a typical mainshock-aftershock sequence, while the second is the seismic swarm started on September 26, 1997 in the Umbria-Marche region, central Italy. Our idea is to look for the model that fits the data best according to the Akaike criterion and hence to identify which physical interpretation can be given to each different behaviour, among the following: the aftershock sequence was triggered by the main shock only, by every event, or by just some selected events. Then we compare the results obtained through the stochastic modelling with the description of the seismic crisis provided by the seismological studies.
2 Trigger models

Let us consider the conditional intensity function of a generic trigger model:

λ(t|H_t) = μ + Σ_{m ∈ P; t_m < t} K_m / (t − t_m + c)^p

where P denotes the set of primary events. Its limit cases are the Modified Omori Formula (MOF), λ_O(t) = K / (t + c)^p, and the ETAS model,

λ_E(t|H_t) = μ + Σ_{t_i < t} K_i / (t − t_i + c)^p.

By borrowing from Bath's law the idea of a gap between the magnitude Mtr of the triggering event and that of the largest event generated, we have examined the model in which, as in Ogata (2001), the primary events are those having magnitude larger than or equal to a threshold Mtr, but, contrary to Ogata (2001), the parameters K_m obey the restriction K_i = K_0 e^{α(M_i − M_0)} (Gospodinov and Rotondi (2005)). The conditional intensity function of our model turns out to be

λ(t|H_t) = μ + Σ_{t_i < t, M_i ≥ Mtr} K_0 e^{α(M_i − M_0)} / (t − t_i + c)^p.

Moreover we note that in this model, contrary to the original trigger model (Vere-Jones and Davies, 1966; Vere-Jones, 1970), each primary event belongs to the offspring of the preceding primary events. We have not fixed the size of the gap between Mtr and M0; rather, by varying Mtr we have examined all the models between the MOF and the ETAS model on the basis of the Akaike criterion. We recall that in the MOF μ is generally set equal to 0 and K_0 e^{α(M_1 − M_0)} = K for the identifiability of the model. After having chosen the best model among those proposed, we have evaluated its goodness of fit through residual analysis.

3 Data analysis

We sketch just the analysis of the 1997 aftershock sequence in the Umbria-Marche region, because it best highlights the properties of the RETAS model. A seismic sequence struck the Umbria-Marche region during September-October 1997. It was characterized by the occurrence of six earthquakes of magnitude larger than 5 in a period of 20 days. More than 2000 events were recorded from September 26, 1997 up to November 3, 1997 (Waveforms, arrival times and locations of the 1997 Umbria-Marche aftershock sequence). This data set can be considered complete above magnitude 2.9, and above this cut-off it contains 508 events.
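A minimal numerical sketch of this setup may help. The first function evaluates the RETAS conditional intensity; the second computes the temporal point-process log-likelihood; the third mirrors the AIC-based scan over triggering thresholds that the paper's algorithm performs. This is schematic only: the log-likelihood is evaluated at fixed, arbitrary parameter values where a real implementation would maximize it over (μ, K0, α, c, p) for each threshold, and the toy catalog is invented for illustration.

```python
import math

def retas_intensity(t, times, mags, mu, K0, alpha, c, p, M0, Mtr):
    """lambda(t|H_t) = mu + sum over past events with M_i >= Mtr of
    K0 * exp(alpha*(M_i - M0)) / (t - t_i + c)**p."""
    return mu + sum(K0 * math.exp(alpha * (Mi - M0)) / (t - ti + c)**p
                    for ti, Mi in zip(times, mags) if ti < t and Mi >= Mtr)

def log_likelihood(theta, times, mags, T, M0, Mtr):
    """Point-process log-likelihood: sum_i log lambda(t_i) - int_0^T lambda dt.
    The integral of each Omori kernel over (t_j, T] has a closed form."""
    mu, K0, alpha, c, p = theta
    ll = -mu * T
    for ti in times:
        ll += math.log(retas_intensity(ti, times, mags, *theta, M0, Mtr))
    for tj, Mj in zip(times, mags):
        if Mj >= Mtr:
            k = K0 * math.exp(alpha * (Mj - M0))
            ll -= k * ((T - tj + c)**(1 - p) - c**(1 - p)) / (1 - p)
    return ll

def scan_thresholds(times, mags, T, M0, theta):
    """Evaluate AIC = 2*5 - 2*logL (5 free parameters) for every distinct
    magnitude level used as Mtr; the smallest AIC wins."""
    return min((10 - 2 * log_likelihood(theta, times, mags, T, M0, Mtr), Mtr)
               for Mtr in sorted(set(mags)))

# toy catalog with cut-off M0 = 3.0; parameter values are arbitrary
times = [0.5, 0.8, 1.0, 1.6, 2.4, 3.0]
mags = [4.8, 3.0, 3.3, 3.1, 3.6, 3.0]
aic, mtr = scan_thresholds(times, mags, T=5.0, M0=3.0,
                           theta=(0.2, 0.3, 1.0, 0.05, 1.2))
print(round(aic, 2), mtr)
```

With Mtr equal to the lowest level every event triggers (the ETAS limit); with Mtr at the maximum magnitude only the largest event triggers (the MOF-like limit), exactly the spectrum of models the RETAS scan explores.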
Studies of different features of the sequence reveal that it is an atypical compound aftershock sequence for a magnitude 6 main shock. Deschamps et al. (2000) suggest that the strongest events delineate three subzones and are spatially close to the rupture zones of the main shocks. By fitting the class of RETAS models to the aftershock sequence we have found that the best model is the ETAS model with Mtr = M0 = 2.9. The α parameter generally varies in the range 0.4-3.0 and, according to Ogata (1999, p. 487), its value is smaller in case of swarm-type activity than in ordinary mainshock-aftershock activity. In our case α = 0.89 classifies the sequence as a swarm. The residual analysis reveals that the number of observed events exceeds that of the expected ones over nearly the whole time interval under study.

Figure 1: Umbria-Marche region: AIC value of the RETAS model for different Mtr triggering magnitudes with cut-off magnitude (a) M0 = 2.9 and (b) M0 = 3.6. The best model is (a) the ETAS and (b) the RETAS model with Mtr = 5.0.

Figure 1a shows the trend of the AIC values with respect to different triggering magnitudes. As in the analysis of the data set simulated from a RETAS model, we observe the presence of a local minimum at Mtr = 4.6. This observation led us to the idea of applying RETAS models with Mtr ≥ 4.6 to the subsequence with cut-off magnitude M0 = 3.6, so that the difference between Mtr and the new M0 is at least 1. By raising the threshold magnitude, secondary clusters appear, generated by the six strongest events. The AIC values versus the triggering magnitudes are plotted in Fig. 1b.
The residual analysis shows that, apart from a slight decrease of the activity between the 13th and the 16th day, the observed cumulative number of events is always within the error bounds. Summarizing, the algorithm we have applied is composed of two stages. The first consists of the following steps:

i. order the set D = {M_i}_{i=1}^N in increasing order and build the set of distinct magnitude levels {Mag_k}_{k=1}^K, obtained by removing from D the repeated values, so that Mag_k < Mag_{k+1}, k = 1, ..., K−1, ∀k Mag_k ∈ D, and ∀i ∃k: M_i = Mag_k; set k = 0;

ii. k ← k + 1, Mtr = Mag_k;

iii. given the conditional intensity function

λ(t|H_t) = μ + Σ_{t_i < t, M_i ≥ Mtr} K_0 e^{α(M_i − M_0)} / (t − t_i + c)^p,

maximize the log-likelihood and evaluate AIC_k;

iv. if k = 1, then set k_best = 1 and AIC_best = AIC_1; otherwise, if AIC_k < AIC_best, then set k_best = k and AIC_best = AIC_k;

v. if k < K then go to (ii), otherwise stop.

The final output, expressed by the value k_best, identifies the best model: ETAS if k_best = 1, MOF if k_best = K, and RETAS with triggering magnitude Mag_{k_best} if 1 < k_best < K. If the ETAS model turns out to be the best model, it is interesting to examine whether particular correlations between the strongest aftershocks and the remaining seismicity emerge by raising the threshold M0. In this second stage the analysis can be carried out through the following steps:

i. plot the set of pairs {(Mag_k, AIC_k)}_{k=1}^K;

ii. if this set shows a monotonic trend, then stop; otherwise find the index of the local minimum, say j, set Mtr = Mag_j and M0 ≈ Mtr − 1, and go back to the first stage considering the subset formed by the events of size exceeding M0.

4 Conclusions

We have found agreement between the physical interpretations of the results obtained through the stochastic modelling and the description of the seismic crisis provided by detailed seismological studies.
It appears that the RETAS model can be applied to model compound seismic sequences in regions of complex tectonic structure with a highly fractured earth crust.

Acknowledgements

During this work D. Gospodinov was supported by Grant No. 219.34 of the N.A.T.O.-C.N.R. Outreach Fellowships Programme 2001.

References

Deschamps, A., Courboulex, F., Gaffet, S., Lomax, A., Virieux, J., Amato, A., Azzara, R., Castello, B., Chiarabba, C., Cimini, G.B., Cocco, M., Di Bona, M., Margheriti, L., Mele, F., Selvaggi, G., Chiaraluce, L., Piccinini, D. and Ripepe, M. (2000), Spatio-temporal distribution of seismic activity during the Umbria-Marche crisis, 1997, Journal of Seismology, 4, 377-386.
Gospodinov, D. and Rotondi, R. (2005), Statistical analysis of triggered seismicity in the Kresna region (SW Bulgaria, 1904) and Umbria-Marche region (central Italy, 1997), Pure and Applied Geophysics, in press.
Ogata, Y. (1988), Statistical models for earthquake occurrences and residual analysis for point processes, J. Amer. Statist. Assoc., 83, 401, 9-27.
Ogata, Y. (1999), Seismicity analysis through point-process modeling: a review, in Seismicity Patterns, Their Statistical Significance and Physical Meaning (eds. M. Wyss, K. Shimazaki and A. Ito), Birkhäuser, Basel, Pure and Applied Geophysics, 155, 471-507.
Ogata, Y. (2001), Exploratory analysis of earthquake clusters by likelihood-based trigger models, J. Appl. Probab., 38A, 202-212.
Vere-Jones, D. (1970), Stochastic models for earthquake occurrence (with discussion), J. Roy. Statist. Soc., Ser. B, 32, 1-62.
Vere-Jones, D. and Davies, R.B. (1966), A statistical survey of earthquakes in the main seismic region of New Zealand, Part 2, Time series analyses, N. Z. J. Geol. Geophys., 9, 251-284.
Waveforms, arrival times and locations of the 1997 Umbria-Marche aftershock sequence (Istituto Nazionale di Geofisica e Vulcanologia, Rome, Italy; UMR Géoscience Azur, CNRS/UNSA, Valbonne, France; Laboratorio di Geofisica, Università di Camerino, Camerino, Italy).

Statistical Measures of Dynamic Triggering

Susanna Gross
CIRES, University of Colorado, 216 UCB, Boulder, CO 80309, [email protected]
November 4, 2005

Abstract: A new theory of dynamic triggering based upon fault friction that responds to both slip rate and contact state will be presented, along with observational tests of the theory using aftershocks surrounding 3 southern California events: the 1992 Landers, 1994 Northridge and 1999 Hector Mine mainshocks. This theory of transient fault movement naturally explains the temporal distribution of dynamically triggered aftershock sequences lasting many months. A fast approximate dynamic stress transfer model is used to compute the dynamic failure function predicted by the theory, and 4 different statistical measures are used to evaluate it. Static, dynamic, and hybrid failure functions are quantitatively evaluated by fitting exactly the same data using functional forms that have exactly the same number of degrees of freedom. The temporal and the spatial structure of a 40,000 event southern California catalog is used to constrain these models. The results suggest that both static and dynamic triggering occur, with the dynamic triggering occurring through transient reversal of the resolved shear stress and consequent fault movement.

1 Introduction

Several careful studies (e.g. Gomberg et al. 1998) have found that when positive shear stresses are applied to a fault and then removed, almost all triggering ceases at their removal. The shear stresses are positive in the sense that they increase the pre-existing tectonic stresses on the fault, driving them towards static failure.
That positive stress transients leave few long-lasting effects is a natural consequence of the approximate linearity of rate- and state-dependent friction models. The unloading process nearly mirrors the loading process, and so it returns the fault to almost the same state and produces few longer-term effects like delayed aftershocks. In contrast, the dynamic triggering mechanism presented below produces delayed aftershocks as a result of transient shear stresses which act in a sense opposite to the tectonic stresses. Negative shear stresses can easily have effects that last long after they are gone, because the loading process does not mirror the unloading process. Dynamically triggered aftershocks result from the movement of faults that failed recently, and therefore have a relatively small amount of shear stress accumulated on them. These events consequently should have unusually low stress drops. This paper extends Dieterich seismicity theory to cases in which shear stresses on the fault are transiently reversed by dynamic stresses carried in the seismic wave train from a mainshock. Comparisons with observed earthquakes are made for failure stresses computed from dynamic and static stress transfer models that involve exactly the same number of free parameters. Finally, the models are evaluated with various statistical measures.

2 Dynamic and Static Triggering

Dynamic triggering of aftershock sequences could result from a new population of contacts being formed as the shear stress on the fault is transiently reversed and the surfaces are repositioned slightly in response to dynamic stresses from a mainshock. After the wave train has propagated away, the fault may not experience any residual static stress as a result of the mainshock, so the resolved shear stress returns to its original value, but the new contacts that form in response are not the same as the old ones.
They are much younger, and therefore smaller, but there may also be a residual displacement, because the trajectory the fault follows in reloading need not be identical to the trajectory it followed in unloading. These new contacts will naturally give rise to an aftershock sequence as they mature, much like the situation following fault rupture in a mainshock. In the case of relatively young faults having small contact areas, which are particularly easily triggered, it is possible to derive an expression (please see http://strike.colorado.edu/~sjg/dynamic) for the number of aftershocks N as a function of the minimum dynamic shear stress, involving 3 unknown constants (1).

Dynamic Failure Stress

If we define the dynamic failure stress to be that stress which generates aftershocks, so that it is proportional to N, we can define new unknown constants such that the dynamic failure stress (2) is linearly proportional to N. It is fortunate that only two constants are needed, because then an 8-parameter model including 6 parameters to define the background stress state can be used. The static failure stress model discussed below also involves 8 parameters. This dynamic failure stress is labeled "minfail" in the statistical tables below.

Static Coulomb Failure Stress

The static stress model computes a change in static failure stress at the hypocenter of each aftershock using the three-dimensional stress tensors from before and after the mainshock. The static failure stress is computed from the principal stresses (3), in which the effective coefficient of friction includes the effects of fluid pressures. This failure stress is called "Coulomb" in the tables below. Static stress models involve 8 parameters, including 7 to define the background stress state, which includes variations of vertical normal stress with depth.
Like the other 7 parameters, the effective coefficient of friction is a free parameter, adjusted to match the observed distribution of seismicity.

Dynamic failure stress (max Coulomb)

Most previous studies of dynamic triggering have used a failure stress closely related to the static failure stress discussed above, and assumed that dynamic triggering would be linearly proportional to the Coulomb failure stress maximized over time (4), which makes some intuitive sense, because it is a measure of how far the triggered faults have been driven toward static failure. In the tables below it is labeled "max Coulomb".

Hybrid Triggering - both Static and Dynamic

I have constructed a hybrid aftershock model that includes terms representing both static and dynamic stress transfer triggering, but lacks other effects. This hybrid model provides an additional measure of how strongly the data support dynamic triggering as contrasted with static triggering in the generation of aftershocks. One of the free parameters in the hybrid model is a weight determining the relative importance of static and dynamic triggering (5). In order to keep the number of free parameters down, a constant shear stress scaling was assumed. With the elimination of the effective density, a term that permits the vertical normal stress to increase with depth, only 8 free parameters are used, the same number as for the other failure stresses being evaluated. This model is labeled "min shear + Coulomb" in the tables below.

Modified Omori Aftershock decay

The modified Omori relation (Utsu 1961) with an additional background term (Gross and Kisslinger 1994) is

n(t) = μ + K / (t + c)^p    (6)

where n(t) is the number of events occurring per unit time at time t.

Dieterich Aftershock decay

A paper by Dieterich (1994) explores the implications of rate- and state-dependent friction for the generation of aftershocks.
If a population of faults in equilibrium with a steady tectonic stressing rate τ̇ and normal stress σ, so that it produces a rate of background seismicity r, is subjected to a step in shear stress Δτ, the new seismicity rate R(t) will follow

R(t) = r / [1 + (e^{−Δτ/(Aσ)} − 1) e^{−t/t_a}]    (7)

in which τ̇ is the stress rate after the mainshock, A is a constitutive parameter measurable for laboratory samples, and t_a = Aσ/τ̇ is the time at which the sequence decays to the steady background rate of activity.

3 Data

Southern California catalogs

The catalog of seismicity used in this work was compiled by the Caltech/USGS southern California seismic network from January 1981 to December 2000. A minimum magnitude cut was used, but no spatial cut was applied. The catalog was divided into a spatial grid of boxes whose sizes reflect the level of seismic activity.

Dynamic source models

We have used inversions for the moment distribution of Wald and Heaton (1994) to define the Landers source, Wald et al. (1996) to define the Northridge source, and Ji et al. (2002) to define the Hector Mine source. All elements in the source models are retained in the stress models, each one represented with a point double-couple, with a slip history taken from the original inversion of waveform and geodetic data as published in the references cited above.

4 Models

Fast approximate numerical model for waveforms

Because of difficulties computing accurate static stresses using a reflectivity model, I developed a much simpler model for dynamic and static stresses based upon the equations for a point double couple presented in Aki and Richards (1980). This model neglects surface waves, but it is numerically stable and very fast.

Single event stress state inversions

The orientations and relative magnitudes of the principal stresses for a general background stress state were fit to the seismicity.
The magnitude of background stress and parameters relating to the failure function, such as the effective coefficient of friction, were determined by finding the spatial distribution of the failure stress which best separates the aftershocks from the background seismicity,

t = (⟨Δσ_f⟩_aftershocks − ⟨Δσ_f⟩_background) / s_pooled    (8)

That is, the average change in failure stress for the background seismicity is compared with the average change in failure stress for the aftershocks, normalized by the pooled standard deviation. To balance the effects of distant and near aftershocks upon the solution, t-statistics are computed for seismicity in 8 concentric shells, and summed. Summed t-statistics for 20,000 models were evaluated in the search for the best-fitting 8-parameter model for each of the inversions listed in Table 1. Models are evaluated in 20 groups of 1000 each, following a customized genetic algorithm that has proven successful in previous work (Gross and Kisslinger 1997). The statistics in Table 2 are quite similar to the single-event summary t-statistic approach discussed above, but the robust KS statistic is used instead of the t-statistic. A major advantage of the KS statistic compared to the t-statistic is that it does not assume that the failure stresses are normally distributed.

Stress Transfer And Nucleation (STAN) model

The Stress Transfer and Nucleation (STAN) model is another way to evaluate the aftershock decay and stress transfer models. It compares seismicity models to the temporal distribution of seismicity in a large number of non-overlapping spatial subsets, and the best-fitting background stress rate and decay parameters are found. Candidate STAN models are evaluated by computing Kolmogorov-Smirnov (KS) statistics for the temporal distribution of seismicity in each subset.
The probabilities of randomness computed from each KS statistic are combined by multiplying, which assumes the activity in each subset is independent. The background stress accumulation rate and a few aftershock decay parameters are fit to observations using a nested grid search technique, finding the largest combined KS probability. In addition, KS statistics from the subsets are summed to compute a measure which is distributed between 0 and 1, providing a measure of absolute misfit reported in Table 4.

5 Results

Single mainshock fits

Table 1: Summed t-statistics
case                  Landers   Northridge   Hector Mine
Coulomb               -109.9    -20.0        -96.1
max Coulomb           -130.0    -16.4        -84.6
min shear             -157      -42          -129
min shear + Coulomb   -149.2    -18.2        -96.4

Table 2: Summed KS statistics
case                  Landers   Northridge   Hector Mine
Coulomb               4.53      3.62         4.79
max Coulomb           4.56      3.49         4.73
min shear             4.05      3.43         4.44
min shear + Coulomb   4.71      3.74         5.56

The largest t-statistics in Table 1 are associated with min shear, the purely dynamic stress state. But when this model is evaluated with other measures, like the rank correlations, it is revealed to be not nearly as successful a model as the t-statistics would suggest. The KS statistics in Table 2 are most positive for the best models, and they clearly show that the purely dynamic min shear model is not in good agreement with the observations. The problem is that the t-statistic is parametric: it assumes Gaussian distributions, and that assumption is violated for purely dynamic failure stresses like min shear.
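The point about violated Gaussian assumptions is easy to illustrate numerically. The sketch below uses synthetic zero-inflated samples (not the paper's catalogs) of the "min shear" kind, where the failure stress is identically zero whenever no triggering occurs, and computes a two-sample KS statistic from scratch:

```python
import random
from bisect import bisect_right

def two_sample_ks(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the two empirical CDFs, evaluated at every sample."""
    xs, ys = sorted(x), sorted(y)
    return max(abs(bisect_right(xs, v) / len(xs) - bisect_right(ys, v) / len(ys))
               for v in xs + ys)

random.seed(1)
# zero-inflated failure stresses: identically zero when no triggering occurs
background = [0.0] * 80 + [random.expovariate(1.0) for _ in range(20)]
aftershocks = [0.0] * 40 + [random.expovariate(0.5) for _ in range(60)]
ks = two_sample_ks(background, aftershocks)
print(ks)  # dominated by the jump at zero; at least 0.4 here
```

A t-statistic on the same samples implicitly treats the large spike at exactly zero as part of a Gaussian, which is the kind of violation that makes the summed t-statistics rank the min shear model misleadingly well.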
STAN fits

Table 3: Pooled KS probabilities
case                  Dieterich   MOM     Omori
Coulomb               -5147       -2590   -2622
max Coulomb           -5134       -3427   -3509
min shear             -5096       -4950   -5135
min shear + Coulomb   -4796       -2553   -2571

Table 4: Summed KS statistics
case                  Dieterich   MOM     Omori
Coulomb               .371        .246    .248
max Coulomb           .368        .302    .308
min shear             .373        .365    .366
min shear + Coulomb   .357        .245    .246

Tables 3 and 4 use the same catalogs, the same triggering models, and different statistical measures, but they tell a consistent story in terms of which models are most successful. The best model is min shear + Coulomb, which combines both static and dynamic triggering elements. The best-fitting temporal decay is the Modified Omori model (MOM), with the simpler Omori model fitting almost as well. The second-best spatial model is purely static stress transfer triggering. These results are consistent with the single-event statistics presented in Tables 1-2, which also found that the most successful triggering model was min shear + Coulomb, with the second best being the purely static Coulomb model.

6 Conclusions

Four different statistical measures have been constructed to test the dynamic triggering theory. Two statistical measures were used to evaluate the spatial distributions of aftershocks and background events surrounding three southern California sequences: 1992 Landers, 1994 Northridge, and 1999 Hector Mine. Two more statistical measures were used to evaluate the temporal distribution of seismicity in a catalog containing the same three mainshocks. Purely static, purely dynamic, and hybrid triggering models were evaluated each time, with consistent results found from 3 of the 4 statistical measures. The best-fitting triggering model is a hybrid, incorporating the transient stress reversal process presented here in combination with the more familiar static Coulomb failure stresses which have been studied by many authors.
This result suggests that both static and dynamic triggering occur, with static triggering being the strongest triggering mechanism for most aftershocks. The one failed statistical measure can be explained in terms of violated assumptions, because the t-statistics were derived under the assumption that the data can be approximated with a Gaussian distribution. The purely dynamic stress reversal triggering mechanism is identically zero when it does not trigger; it has no "stress shadows", and therefore its distribution is not even approximately Gaussian.

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant No. 0003505. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Research supported by the U.S. Geological Survey (USGS), Department of the Interior, under USGS award number 01HQGR0008. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Government.

7 References

Aki, K. and P. G. Richards, Quantitative seismology: Theory and methods, 913 pp., W. H. Freeman and Company, New York, 1980.
Dieterich, J. H. (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99, 2601-2618.
Dieterich, J. H. (1999), State dependence of frictional contacts: Micromechanical observations and role in earthquake processes, Materials Research Society Tribology Workshop, San Jose, California.
Gomberg, J., N. Beeler, M. L. Blanpied and P. Bodin (1998), Earthquake triggering by transient and static deformations, J. Geophys. Res., 103, 24411-24426.
Gross, S. J., and C. Kisslinger (1994), Tests of models of aftershock rate decay, Bull. Seismol. Soc. Am., 84, 1571-1579.
Gross, S. J. and C.
Kisslinger (1997), Estimating tectonic stress rate and state with Landers aftershocks, J. Geophys. Res., 102, 7603-7612.
Ji, C., D. J. Wald and D. V. Helmberger (2002), Source description of the 1999 Hector Mine, California, earthquake, Part II: Complexity of slip history, Bull. Seismol. Soc. Am., 92, 1208-1226.
Utsu, T. (1961), A statistical study on the occurrence of aftershocks, Geophys. Mag., 30, 521-605.
Wald, D. J., T. H. Heaton, and K. W. Hudnut (1996), The slip history of the 1994 Northridge, California earthquake determined from strong-motion, teleseismic, GPS, and leveling data, Bull. Seism. Soc. Am., 86.
Wald, D. J. and T. H. Heaton (1994), Spatial and temporal distribution of slip for the 1992 Landers, California earthquake, Bull. Seismol. Soc. Am., 84, 668-691.

Analysis of complex seismicity pattern generated by fluid diffusion and aftershock triggering

S. Hainzl1 and T. Kraft2
1Institute of Geosciences, University of Potsdam, 14415 Potsdam, Germany
2Department of Earth and Environmental Sciences, Ludwig-Maximilians-University, Germany
Corresponding Author: S. Hainzl, [email protected]

Abstract: According to the well-known Coulomb failure criterion, variations of both stress and pore pressure can result in earthquake rupture. Aftershock sequences characterized by the Omori law are often assumed to be the consequence of mainshock-induced stress changes, whereas earthquake swarms are supposed to be triggered by fluid intrusions. In practice, both types of seismicity are often not clearly distinguishable, which might result from an interplay of stress triggering and fluid flows in the earthquake generation process. We show that the statistical modeling of earthquake clustering by means of the Epidemic Type Aftershock (ETAS) model and simple pore pressure diffusion models can be an appropriate tool to extract the primary fluid signal from such complex seismicity patterns.
This is demonstrated for two examples of natural swarm activity observed in Central Europe.

1. Introduction

Stress triggering has been identified as an important mechanism for earthquake generation. This mechanism is based on a well-known criterion for earthquake occurrence, the Coulomb failure criterion, CFS ≡ τ − µ(σ − P) ≥ 0, stating that a positive Coulomb failure stress (CFS) could promote failures. Here, τ defines the shear and σ the normal stress on the failure plane (positive for compression), µ is the coefficient of friction, and P is the pore pressure, which reduces the effective normal stress. Thus earthquakes can be triggered by stress changes as well as by pore pressure changes. Aftershock activity, which is usually associated with the stress changes of mainshocks, generally decays according to the modified Omori law, λ(t) = K(c + t)^(−p), where K, c and p are constants and t is the elapsed time since the main event (Utsu et al. 1995). The Epidemic Type Aftershock Sequences (ETAS) model is a stochastic point process incorporating the empirically observed characteristics of stress-triggered activity, where each earthquake has some magnitude-dependent ability to trigger its own Omori-law-type aftershocks (Ogata 1988). In particular, the rate of aftershocks induced by an earthquake that occurred at time ti with magnitude Mi is given by

λi(t) = K e^(α(Mi − Mmin)) / (c + t − ti)^p    (1)

for time t > ti, where α is a constant and Mmin is the lowest magnitude cutoff of the catalog. The total occurrence rate is

λ(t) = λ0(t) + Σ_{i: ti < t} λi(t) ,    (2)

where λ0(t) is the background forcing rate.

2. Vogtland Earthquake Swarms

The Vogtland region is well known for the episodic occurrence of earthquake swarms. The most recent and best documented strong earthquake swarm occurred between August and December 2000 and consisted of approximately 5000 earthquakes with local magnitudes exceeding the magnitude of completeness (mc = 0.2).
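Equations (1) and (2) lend themselves to a direct numerical sketch. The following is an illustrative reconstruction, not the authors' code: the toy catalog and parameter values are invented, and the log-likelihood uses the standard point-process form log L = Σ log λ(ti) − ∫ λ(t) dt with a constant background λ0.

```python
import numpy as np

def etas_rate(t, times, mags, lam0, K, alpha, c, p, m_min):
    """Total ETAS rate lambda(t) = lam0 + sum of eq. (1) over past events."""
    past = times < t
    dt = t - times[past]
    return lam0 + np.sum(K * np.exp(alpha * (mags[past] - m_min)) / (c + dt) ** p)

def neg_log_lik(params, times, mags, T, m_min):
    """Negative log-likelihood on [0, T]: integral of lambda minus
    sum of log lambda at event times (closed-form integral, p != 1)."""
    lam0, K, alpha, c, p = params
    logl = sum(np.log(etas_rate(t, times, mags, lam0, K, alpha, c, p, m_min))
               for t in times)
    integral = lam0 * T + np.sum(
        K * np.exp(alpha * (mags - m_min))
        * (c ** (1 - p) - (c + T - times) ** (1 - p)) / (p - 1))
    return integral - logl

# Toy catalog (days, local magnitudes); not the Vogtland data.
times = np.array([0.0, 0.5, 1.2, 3.0])
mags = np.array([2.0, 1.1, 0.8, 1.5])
print(neg_log_lik((0.1, 0.05, 0.7, 0.01, 1.1), times, mags, T=10.0, m_min=0.2))
```

Minimizing `neg_log_lik` (e.g. with `scipy.optimize.minimize`) over (λ0, K, α, c, p) mirrors the five-parameter maximum likelihood estimation described in the next section; a moving-window fit simply repeats this on successive sub-catalogs.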
The results presented here are an overview of previous investigations (Hainzl & Fischer 2002, Hainzl & Ogata 2005).

Method and Results

At first, we fit the ETAS model to the whole activity, assuming a constant forcing rate λ0. The five parameters (λ0, K0, α, c, p) of the ETAS model are estimated by the maximum likelihood method. We find that the ETAS model fits the earthquake sequence very well. Then, to explore non-stationary behavior, the ETAS model is fitted in a moving time window. The great majority of earthquakes in this swarm is found to belong to self-triggered activity, in particular to embedded aftershock sequences. All estimated parameters are in agreement with values for tectonic earthquake activity except the α-value (α = 0.7), which is similar to previous findings for other earthquake swarm activity but lower than typical values found for non-swarm activity (α ≈ 2). Although the background forcing is found to trigger only a small fraction of earthquakes, our analysis in moving time windows reveals a large systematic variation of the external forcing strength λ0(t). The earthquake swarm seems to be initiated by a strong fluid impulse which decreases within the first 10 days. Our results are in good agreement with the identified phases of diffusion-like hypocenter migration (Parotidis et al. 2003). The efficiency of the deconvolution procedure has also been demonstrated by an analogous analysis of simulated earthquake activity, which is forced by pore pressure diffusion and incorporates stress field changes in a 3-dimensional elastic half-space (Hainzl 2004, Hainzl & Ogata 2005).

3. Bad Reichenhall Earthquake Swarms

Reports of felt earthquakes in the area of the Staufen Massif near Bad Reichenhall, SE Germany, range back several hundred years, reaching maximum intensities of I0 = V on the macroseismic scale (Kraft et al. 2005). The underlying mechanism is not fully resolved so far.
We analyze data from a network recently installed by the University of Munich and the Geological Survey of the Bavarian State. The data set consists of 1171 earthquakes recorded in the year 2002. The seismicity is characterized by two swarm-type phases in March and August; the latter occurred after intense rainfall which led to severe flooding in Central Europe.

Method and Results

We test the hypothesis proposed by Kraft et al. (2005) that rainfall triggered the earthquakes via the mechanism of pore pressure diffusion. According to the Coulomb failure criterion, earthquakes can be triggered in the absence of stress changes only if the pore pressure increases. Our statistical earthquake model therefore has the intensity function

λ0(z, t) = λ00 + c ṗ(z, t)  if ṗ(z, t) > 0;  else λ0(z, t) = λ00 ,    (4)

where c and λ00 are constants. The latter describes the part of the forcing rate which cannot be related to pore pressure changes.

Figure 1: Left: The rain rate (top) and the seismic rate (bottom) observed in the region of Reichenhall, SE Germany. The bold line indicates the mean pressure change at depth resulting from linear diffusion of the rain water. Right: The linear correlation coefficient between the daily rain amount (dashed line), respectively the mean daily pressure increase at depth (solid line), and the daily number of earthquakes, as a function of the time shift.

The pressure changes at depth z are calculated from the pressure changes P0(t) at the surface due to the rainfall by means of the Green's function G of the diffusion equation:

ṗ(z, t) = ∫_{−∞}^{t} G(z, t − τ) (d/dτ) P0(τ) dτ ,  with  G(z, t − τ) = H(t − τ) / √(4πD(t − τ)) · exp(−z² / (4D(t − τ))) ,    (5)

where H is the Heaviside function.
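A discrete version of the convolution in eq. (5) can be sketched as follows. This is not the authors' code: the single rain impulse, its amplitude, and the depth are invented for illustration, while the diffusivity matches the D = 0.3 m²/s that the authors estimate.

```python
import numpy as np

def pressure_rate_at_depth(p0_dot, z, D, dt):
    """Discrete convolution of the surface pressure rate dP0/dt with the
    diffusion Green's function of eq. (5); only positive lags contribute
    (Heaviside function), sampling interval dt in seconds."""
    n = len(p0_dot)
    lag = np.arange(1, n + 1) * dt                       # t - tau > 0
    G = np.exp(-z**2 / (4 * D * lag)) / np.sqrt(4 * np.pi * D * lag)
    return np.convolve(p0_dot, G * dt)[:n]               # causal part only

day = 86400.0
rain_load_rate = np.zeros(365)
rain_load_rate[200] = 100.0                              # hypothetical impulse, Pa/day
pdot = pressure_rate_at_depth(rain_load_rate / day, z=2000.0, D=0.3, dt=day)
print(np.argmax(pdot))   # pressure-rate peak at depth lags the rain impulse
```

The lag of the peak behind the surface impulse illustrates why the pressure at 1-4 km depth, rather than the rain itself, correlates with the seismicity without time shift.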
In our case, P0(t) is the average of the four meteorological stations in this region. The estimation of the three parameters (λ00, c, D) is carried out by the maximum likelihood method. To avoid complications near the surface, we focus on earthquakes in the depth interval 1-4 km. The maximization of the likelihood function yields the hydraulic diffusivity D = 0.3 m²/s. This value corresponds well to results obtained from fluid pressure experiments at boreholes. With the estimated diffusivity, we are now able to calculate the effect of the time-dependent surface rain on the pore pressure at depth. The figure shows the calculated mean pressure increase in the depth interval 1-4 km as a function of time. This curve is compared with the observed daily number of earthquakes; both curves are closely related. We calculate the linear correlation coefficient between both curves, as well as between the time series of the daily earthquake number and the daily rain amount. While the seismicity is not correlated with the rain data at zero time delay, it shows significant correlation if the seismicity is shifted 8 days back (Rmax = 0.5). On the other hand, the pore pressure change at depth is strongly correlated without delay time, with a maximum correlation coefficient which almost doubles that of the rain data (Rmax = 0.8).

4. Discussion

The study of natural swarm activity that occurred in the Vogtland region indicates that aftershock sequences are embedded and can even dominate swarm activity (Hainzl & Ogata 2005). In this case, the ETAS model can explain most of the activity and the fluid signal is almost buried. However, the fluid signal can be reconstructed by means of the time-dependence of the background forcing rate. In the Mt. Hochstaufen region, aftershock activity makes up only a minor fraction of the activity and the shape of the extracted background rate is essentially the shape of the smoothed activity.
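The lag analysis reported above for the Bad Reichenhall data can be sketched with synthetic series; this is an illustrative reconstruction (the toy "forcing" and the built-in 8-day response are assumptions, not the observed data).

```python
import numpy as np

def lagged_corr(x, y, max_lag):
    """Pearson correlation between x and y for a range of integer lags;
    positive lag means y responds `lag` samples after x."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out

# Toy daily series: "seismicity" responding 8 days after the forcing.
rng = np.random.default_rng(1)
forcing = rng.random(365)
quakes = np.roll(forcing, 8) + 0.1 * rng.random(365)

corr = lagged_corr(forcing, quakes, 15)
best = max(corr, key=corr.get)
print(best)   # lag of maximum correlation
```

Applied to the real series, the same scan recovers the 8-day lag for rain versus seismicity and the near-zero lag for the diffused pressure versus seismicity.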
In this case, however, we have additional information about the potential fluid source and can directly test the pore pressure diffusion model.

5. Conclusions

In natural seismicity, aftershocks are often embedded even in swarm-like activity. ETAS modeling is an effective method to deal with aftershocks. To deal with swarm-like seismicity, the hypothesis of a constant background activity has to be weakened, because fluid flows, silent earthquakes and other mechanisms enhance or reduce the earthquake probability. In cases where we have no knowledge about the source of earthquake generation, statistical modeling can reveal the time-dependence of the mechanism. This is the case for the swarm activity in Vogtland, where the revealed signature is found to be in good agreement with the signal of fluid diffusion from a high-pressure source. In cases where we have some information about the potential source function, we can go one step further and replace the background forcing by a functional form. This is done in the case of the Bad Reichenhall swarm activity, where we test the triggering mechanism of pore pressure diffusion due to rainfall. The resulting high correlation between model and observation suggests that rain water diffusion is indeed the responsible mechanism in this case.

6. References

Hainzl, S. (2004), Seismicity patterns of earthquake swarms due to fluid intrusion and stress triggering, Geophys. J. Int., 159, 1090-1096.
Hainzl, S. and Fischer, T. (2002), Indications for a successively triggered rupture growth underlying the 2000 earthquake swarm in Vogtland/NW Bohemia, J. Geophys. Res., 107, 2338, doi:10.1029/2002JB001865.
Hainzl, S. and Ogata, Y. (2005), Detecting fluid signals in seismicity data through statistical earthquake modeling, J. Geophys. Res., 110, B05S07, doi:10.1029/2004JB003247.
Kraft, T., Wassermann, J., Schmedes, E. and Igel, H. (2005, submitted), Meteorological triggering of earthquake swarms at Mt. Hochstaufen, SE Germany, Tectonophysics.
Ogata, Y. (1988), Statistical models of point occurrences and residual analysis for point processes, J. Am. Stat. Assoc., 83, 9-27.
Parotidis, M., Rothert, E., and Shapiro, S. A. (2003), Pore-pressure diffusion: A possible triggering mechanism for the earthquake swarms 2000 in Vogtland/NW-Bohemia, central Europe, Geophys. Res. Lett., 30, 2075, doi:10.1029/2003GL018110.
Utsu, T., Ogata, Y., and Matsu'ura, R. S. (1995), The centenary of the Omori formula for a decay law of aftershock activity, J. Phys. Earth, 43, 1-33.
Wang, H. F., Theory of linear poroelasticity with applications to geomechanics and hydrogeology, Princeton University Press, 2000.

Spatial-Temporal Statistics Between Successive Earthquakes on 2D Burridge-Knopoff Model

T. Hasumi
Department of Physics, Faculty of Science and Engineering, Waseda University, Shinjyuku-ku, Tokyo 169-8555, Japan
Corresponding Author: T. Hasumi, [email protected]

Abstract: We numerically investigate the two-dimensional spring-block model for earthquakes, called the Burridge-Knopoff (BK) model, and study the statistical properties of the spatio-temporal intervals between successive earthquakes. The cumulative distributions of the spatio-temporal intervals obey the q-exponential distribution, with qt > 1 for the time interval (waiting time) and qr < 1 for the spatial interval (distance between hypocenters). Only in the waiting-time case does the distribution depend on the frictional property. These results are consistent with observations; hence we conclude that the 2D BK model is realistic from the viewpoint of complex networks and non-additive statistical mechanics.

1. Introduction

Earthquakes (EQs) are complex phenomena. Their source mechanism is not yet clear; however, some empirical laws are known, such as the Gutenberg-Richter (GR) law and the Omori law.
Very recently, Abe and Suzuki (2003, 2005) found new statistical properties of EQs, namely the spatio-temporal intervals (hereafter waiting time and distance) between successive EQs, using the non-additive statistical mechanics proposed by Tsallis (1988). They showed that the cumulative distributions of waiting time and distance obey the q-exponential distribution for Southern California and Japan; more precisely, the q-values of waiting time (qt) and distance (qr) satisfy qt > 1 and qr < 1, respectively. In the 1980s, Bak et al. (1987, 1988) proposed self-organized criticality (SOC) in non-equilibrium open systems. Following their idea, EQs seemed to show SOC; hence many models have been proposed to discuss the GR law. In particular, the 2D spring-block model (originally proposed by Burridge and Knopoff (1967)) was studied by Otsuka (1972) and Carlson (1991) to discuss the GR law. In this model, however, the waiting-time and distance statistics have not been reported. We report a numerical investigation of the 2D BK model to study the waiting-time and distance statistics.

2. Model

The model is the 2D spring-block model for earthquakes, called the BK model, shown in Fig. 1(a). We assume that each block's slip is restricted to the y-direction. The non-dimensional equation of motion of site (i, j) is then

Ü_{i,j} = lx² (U_{i+1,j} − 2U_{i,j} + U_{i−1,j}) + ly² (U_{i,j+1} − 2U_{i,j} + U_{i,j−1}) − U_{i,j} − φ(2α(ν + U̇_{i,j})) ,    (1)

where U_{i,j} is the scaled displacement. The other parameters are given by

lx² = kcx / kp ,  ly² = kcy / kp .    (2)

The last term on the right-hand side is the nonlinear friction term, of velocity-weakening type (see Fig. 1(b)). The system is characterized by four parameters: lx, ly, α and ν; lx and ly are stiffnesses, ν is the scaled plate velocity, and α governs the friction function. We numerically integrate eq. (1) on a 100 × 25 lattice in the (x, y) plane under free boundary conditions by the 4th-order Runge-Kutta method. In the simulations, lx² = 1, ly² = 3 and ν = 0.01 are fixed and α is varied.
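The time stepping described above can be sketched as follows. This is an illustrative reconstruction, not the author's code: the friction function φ is only shown graphically in the paper (Fig. 1(b)), so the velocity-weakening form used here is an assumption, periodic boundaries replace the free boundaries for brevity, and the stick phase (blocks pinned by static friction) is omitted.

```python
import numpy as np

def accel(U, V, lx2, ly2, alpha, nu):
    """Right-hand side of eq. (1): coupling springs, loader spring, and an
    assumed velocity-weakening friction (illustrative functional form)."""
    lap_x = np.roll(U, 1, axis=0) - 2 * U + np.roll(U, -1, axis=0)
    lap_y = np.roll(U, 1, axis=1) - 2 * U + np.roll(U, -1, axis=1)
    slip_rate = nu + V
    phi = np.sign(slip_rate) / (1.0 + 2.0 * alpha * np.abs(slip_rate))
    return lx2 * lap_x + ly2 * lap_y - U - phi

def rk4_step(U, V, dt, *args):
    """One classical 4th-order Runge-Kutta step for (U, V), V = dU/dt."""
    k1v, k1u = accel(U, V, *args), V
    k2v = accel(U + 0.5 * dt * k1u, V + 0.5 * dt * k1v, *args)
    k2u = V + 0.5 * dt * k1v
    k3v = accel(U + 0.5 * dt * k2u, V + 0.5 * dt * k2v, *args)
    k3u = V + 0.5 * dt * k2v
    k4v = accel(U + dt * k3u, V + dt * k3v, *args)
    k4u = V + dt * k3v
    return (U + dt / 6 * (k1u + 2 * k2u + 2 * k3u + k4u),
            V + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

# 100 x 25 lattice with the paper's fixed parameters lx^2 = 1, ly^2 = 3, nu = 0.01.
U = np.zeros((100, 25))
V = np.zeros((100, 25))
for _ in range(10):
    U, V = rk4_step(U, V, 0.01, 1.0, 3.0, 3.5, 0.01)
print(U.shape, float(np.abs(U).max()))
```

A production run would add stick-slip event detection on top of this integrator to extract the EQ-like events analyzed below.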
Figure 1: (a) The 2D spring-block model for EQs, called the BK model. (b) The nonlinear friction function, of velocity-weakening type.

3. Results and Discussion

In this model, we consider a slip of the blocks as an EQ, whose seismic moment M0 and magnitude M are defined by

M0 = Σ_{i,j} δU_{i,j} ,  M = log M0 ,    (3)

where δU_{i,j} is the total displacement during an event and n is the number of slipping blocks. The block which slips first is regarded as the hypocenter. Hence, we define the distance and waiting time as the spatio-temporal intervals between successive EQ-like events. In this study, on the order of 10^5 events are used to discuss the statistical properties; however, the initial ~10^3 events are discarded because of the effect of the random initial configuration. Kumagai et al. (1999) reported that the magnitude-frequency distribution depends on the friction parameter α: the case α = 3.5 corresponds to the critical state, α = 1.5 and 2.5 to the subcritical state, and α = 4.5 and 5.5 to the supercritical state. Hence, we vary the parameter α from 2.5 to 4.5 as representatives of the three types of critical state and discuss the waiting-time and distance statistics. First, following Abe and Suzuki (2005), we fit the cumulative distribution of waiting times to the q-exponential distribution, defined by

e_q(x) = [1 + (1 − q)x]_+^(1/(1−q)) ,  where [a]_+ ≡ max[0, a] ;    (4)

if q > 1 this is equivalent to the Zipf-Mandelbrot power law. Its inverse, the q-logarithmic function, is written

ln_q(x) = (x^(1−q) − 1) / (1 − q) .    (5)

The cumulative distribution of waiting times for α = 3.5 is shown in Fig. 2(a), (b) together with the fitting function. The simulations show that qt is always greater than 1 for any α, so we conclude that the waiting-time distribution obeys the Zipf-Mandelbrot power law. This is consistent with the observational result (Abe and Suzuki, 2005).
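The q-exponential fit of eq. (4) can be sketched numerically. This is an illustrative reconstruction, not the author's code: the synthetic waiting times are drawn by inverse-transform sampling so that their survival function is exactly q-exponential, with the paper's fitted values (q, τ0) = (1.06, 4.5) used as the true parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def q_exp_survival(tau, q, tau0):
    """Cumulative (survival) distribution P(>tau) = e_q(-tau/tau0), eq. (4)."""
    return np.maximum(1.0 - (1.0 - q) * tau / tau0, 0.0) ** (1.0 / (1.0 - q))

# Synthetic waiting times: invert P(>tau) = u for uniform u (valid for q > 1).
rng = np.random.default_rng(2)
q_true, tau0_true = 1.06, 4.5
u = rng.random(20000)
tau = tau0_true * (u ** (-(q_true - 1.0)) - 1.0) / (q_true - 1.0)

tau_sorted = np.sort(tau)
surv = 1.0 - np.arange(1, len(tau) + 1) / len(tau)   # empirical P(>tau)

# Least-squares fit of the survival function, constraining q > 1.
popt, _ = curve_fit(q_exp_survival, tau_sorted[:-1], surv[:-1],
                    p0=(1.1, 4.0), bounds=([1.0001, 0.1], [2.0, 100.0]))
print(popt)  # should be close to (1.06, 4.5)
```

The same fit with q < 1 (and the clipping [·]_+ active) handles the distance statistics.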
We also note that this distribution depends on the frictional parameter α. Second, the cumulative distribution of distances is discussed in the same way as the waiting-time statistics; Fig. 2(c), (d) shows an example for α = 3.5. In the distance statistics, 0 < qr < 1 for any α, so the distribution is equivalent to the modified Zipf-Mandelbrot law. This result also agrees with the observational result (Abe and Suzuki, 2003).

Figure 2: The spatio-temporal interval statistics between successive EQ-like events with lx² = 1, ly² = 3, α = 3.5: (a) and (b) are waiting-time statistics, (c) and (d) are distance statistics. In each figure, dots are the simulation results and the solid line is the fitting function, the q-exponential function defined by eq. (4). For the waiting time, (q, τ0) = (1.06, 4.50) with correlation coefficient ρ = 0.989; for the distance, (q, r0) = (0.497, 50.1) with ρ = 0.9992.

4. Conclusion

We report a numerical investigation of the 2D BK model for EQs to discuss the waiting-time and distance statistics. The cumulative distributions of waiting time and distance obey the q-exponential distribution, with qt > 1 in the waiting-time case and qr < 1 in the distance case. These results are consistent with observations; hence we conclude that the 2D BK model is realistic from the viewpoint of complex networks and non-additive statistical mechanics.

5. Acknowledgement

The author thanks Dr. M. Kamogawa, Dr. M. Ohtaki and Dr. Y. Yamazaki for useful discussions and S. Komura for numerical suggestions. This work is partly supported by a grant for the 21st Century COE Program, "Holistic Research and Education Center for Physics of Self-organization Systems" at Waseda University, from the Ministry of Education, Culture, Sports, Science and Technology, Japan.

6. References

Abe, S. and Suzuki, N. (2003), Law for the distance between successive earthquakes, J. Geophys. Res., 108, 2113-2116.
Abe, S. and Suzuki, N.
(2005), Scale-free statistics between successive earthquakes, Physica A, 350, 588-596.
Bak, P., Tang, C., and Wiesenfeld, K. (1987), Self-organized criticality: an explanation of 1/f noise, Phys. Rev. Lett., 59, 381-384.
Bak, P., Tang, C., and Wiesenfeld, K. (1988), Self-organized criticality, Phys. Rev. A, 38, 364-374.
Burridge, R. and Knopoff, L. (1967), Model and theoretical seismicity, Bull. Seism. Soc. Am., 57, 341-371.
Carlson, J. M. (1991), Two-dimensional model of a fault, Phys. Rev. A, 15, 6266-6232.
Kumagai, H., Fukao, Y., Watanabe, S., and Baba, Y. (1999), A self-organized model of earthquakes with constant stress drops and the b-value of 1, Geophys. Res. Lett., 26, 2817-2820.
Otsuka, M. (1972), A simulation of earthquake occurrence, Phys. Earth Planet. Int., 6, 311-315.
Tsallis, C. (1988), Possible generalization of Boltzmann-Gibbs statistics, J. Stat. Phys., 52, 479-487.

Estimating stress heterogeneity from aftershock rate

Agnès Helmstetter and Bruce E. Shaw
Lamont-Doherty Earth Observatory, Columbia University, New York
Corresponding author: [email protected]

Abstract: We estimate the rate of aftershocks triggered by a heterogeneous stress change, using the rate-and-state model of Dieterich [1994]. We show that an exponential stress distribution P(τ) ∼ exp(−τ/τ0) gives an Omori-law decay of aftershocks with time, ∼ 1/t^p, with an exponent p = 1 − Aσn/τ0, where A is a parameter of the rate-and-state friction law and σn the normal stress. The Omori exponent p thus decreases if the stress "heterogeneity" τ0 decreases. We also invert the stress distribution P(τ) from the seismicity rate R(t), assuming that the stress does not change with time. We apply this method to a synthetic stress map, using the (modified) scale-invariant "k²" slip model [Herrero and Bernard, 1994]. We generate synthetic aftershock catalogs from this stress change. The seismicity rate on the rupture area shows a huge increase at short times, even if the stress decreases on average.
Aftershocks are clustered in the regions of low slip, but the spatial distribution is more diffuse than for a simple slip dislocation. This stochastic slip model gives a Gaussian stress distribution, but nevertheless produces an aftershock rate which is very close to Omori's law, with an effective p ≤ 1 which increases slowly with time. The inversion of the full stress distribution P(τ) is badly constrained for negative stress values, and for very large positive values, if the time interval of the catalog is limited. However, constraining P(τ) to be a Gaussian distribution allows a good estimation of P(τ) even with a limited number of events and a limited catalog duration. We show that stress shadows are very difficult to observe in a heterogeneous stress context.

1 Introduction

Much progress has been made in describing earthquake behavior, in particular the Omori-law decay of aftershocks with time, based on the predictions of rate-and-state friction [Dieterich, 1994]. However, this model does not explain the abundance of aftershocks on the rupture surface without introducing a strong heterogeneity of the slip and stress on the fault. A simple model of an earthquake source, with a uniform stress change on the rupture area, predicts a decrease of the seismicity rate on the rupture area. In order to explain the increase of the seismicity rate on the fault, we need to introduce stress heterogeneity in the rate-and-state model, which can be due to the variability of slip on the fault [Helmstetter and Shaw, 2005; Marsan, 2005] or to fault roughness [Dieterich, 2005]. Here we estimate the seismicity rate triggered by a heterogeneous stress change. We also develop a method to invert for the stress distribution on the fault from the seismicity rate, assuming that stress changes instantaneously after the mainshock.
2 Relation between stress distribution and seismicity rate

Dieterich [1994] estimated the seismicity rate R(t, τ) triggered by a single stress step τ, assuming a population of faults governed by rate-and-state friction:

R(t, τ) = Rr / [(e^(−τ/Aσn) − 1) e^(−t/ta) + 1] ,    (1)

where ta = Aσn/τ̇r is the duration of the aftershock sequence and τ̇r the tectonic loading stressing rate. For each positive stress value, the seismicity rate is constant for t ≪ ta e^(−τ/Aσn), and then decreases with time as R(t) ∼ 1/t for t ≪ ta. For a negative stress change, the seismicity rate decreases after the mainshock. In both cases, the seismicity rate recovers its reference value R = Rr for t ≫ ta. For a heterogeneous stress field τ(r) with a distribution (probability density function) P(τ), the seismicity rate integrated over space is

R(t) = ∫ R(t, τ(r)) dr = ∫_{−∞}^{∞} R(t, τ) P(τ) dτ .    (2)

Figure 1: Slip generated with the modified k² model (a), shear stress change (b), and (c) map of aftershocks generated with the rate-and-state model, using the stress change shown in (b) and Aσn = 1 MPa.

The rate-and-state model gives an Omori-law decay of aftershocks with time, R(t) ∼ 1/t^p for t ≪ ta, if the stress distribution follows an exponential distribution P(τ) ∼ e^(−τ/τ0). The Omori exponent p = 1 − Aσn/τ0 increases toward 1 as the stress heterogeneity τ0 increases. The rate-and-state model with a uniform stress step (1) cannot explain an Omori-law decay with p > 1. The only way to obtain p > 1 in the rate-and-state model, as observed for many aftershock sequences, is to have a variation of stress with time, which may be due to postseismic slip or viscous relaxation.
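Equations (1) and (2) are easy to evaluate numerically; the following sketch (illustrative parameter values, not from the paper's inversion) shows the key qualitative result that a Gaussian stress distribution with a negative mean but large standard deviation still produces a strong rate increase.

```python
import numpy as np

def dieterich_rate(t, tau, A_sigma, t_a, R_r=1.0):
    """Eq. (1): seismicity rate after a uniform stress step tau (MPa)."""
    return R_r / ((np.exp(-tau / A_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)

def rate_heterogeneous(t, A_sigma, t_a, mean, std, n=4001):
    """Eq. (2): rate integrated over a Gaussian stress distribution P(tau),
    approximated by a Riemann sum over +/- 6 standard deviations."""
    tau = np.linspace(mean - 6 * std, mean + 6 * std, n)
    P = np.exp(-(tau - mean) ** 2 / (2 * std ** 2)) / (std * np.sqrt(2 * np.pi))
    dtau = tau[1] - tau[0]
    return float(np.sum(dieterich_rate(t, tau, A_sigma, t_a) * P) * dtau)

# Uniform -3 MPa step: the rate drops below the reference rate R_r = 1 ...
print(dieterich_rate(1e-4, -3.0, 1.0, 1.0))
# ... but the same mean with std = 30 MPa heterogeneity gives a rate increase.
print(rate_heterogeneous(1e-4, 1.0, 1.0, -3.0, 30.0))
```

This reproduces, in miniature, the paper's point that heterogeneity can turn an average stress decrease into an aftershock burst on the rupture area.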
We can invert for the complete stress distribution P(τ) from the temporal evolution of the seismicity rate using (2), provided we observe R(t) over a wide enough time interval. In practice, the estimation of P(τ) for large τ is limited by the minimum time tmin at which we can reliably estimate the seismicity rate. The largest stress we can resolve is of the order of τmax = −Aσn log(tmin/ta). For negative stress, we are limited by the maximum time tmax after the mainshock, and by our assumptions that secondary aftershocks are negligible and that the stress does not change with time (e.g., neglecting post-seismic relaxation).

3 Application of the method to a stochastic slip model

We have tested the rate-and-state model on a realistic synthetic slip pattern. Herrero and Bernard [1994] proposed a kinematic, self-similar model of earthquakes. They assumed that the slip distribution at small scales, compared to the rupture length L, does not depend on L. This leads to a slip power spectrum at high wave-numbers equal to u(k) = Cσ0 L k^(−2)/µ, where σ0 is the stress drop (typically 3 MPa), µ is the rigidity, and C is a shape factor close to 1. We have modified the k² slip model in order to have a finite standard deviation of the stress distribution, using a slip power spectrum given by u(k) = Cσ0 L (kL + 1)^(−η)/µ, with η = 2.3. We have then computed the stress change on the fault from this synthetic slip model, for a fault of 50 × 50 km, with a resolution dx = 0.1 km and a stress drop σ0 = 3 MPa. The stress field has large variations, from about −90 to 90 MPa, due to slip variability (see Fig. 1). We have then estimated the seismicity rate predicted by the rate-and-state model (2) on the fault from the stress change, assuming Aσn = 1 MPa (see Fig. 2). While the stress on average decreases on the fault, the seismicity rate shows a huge increase after the mainshock. It then decays with time approximately according to Omori's law, with an apparent exponent p = 0.93.
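A slip field with the modified k² spectrum can be synthesized spectrally; the sketch below is an illustration, not the authors' code (random phases, an arbitrary amplitude scale in place of the prefactor Cσ0L/µ, a non-negativity shift, and a coarser grid than the paper's dx = 0.1 km are all choices of this sketch).

```python
import numpy as np

rng = np.random.default_rng(3)

# Fault of 50 x 50 km; dx = 0.5 km here for speed (the paper uses 0.1 km).
L, dx = 50.0, 0.5
n = int(L / dx)
k1d = 2 * np.pi * np.fft.fftfreq(n, d=dx)
k = np.sqrt(k1d[:, None] ** 2 + k1d[None, :] ** 2)

# Modified k^2 spectrum: |u(k)| ~ (kL + 1)^(-eta), eta = 2.3, random phases;
# the prefactor C*sigma0*L/mu is absorbed into an arbitrary amplitude scale.
eta = 2.3
amp = (k * L + 1.0) ** (-eta)
phase = np.exp(2j * np.pi * rng.random((n, n)))
slip = np.real(np.fft.ifft2(amp * phase))
slip -= slip.min()          # illustrative shift so slip is non-negative
print(slip.shape, float(slip.max()))
```

The stress change would then follow by applying the static stress-transfer kernel to this slip field, and the aftershock rate by feeding that stress map through eq. (2).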
At large times t ≈ ta, the seismicity rate decreases below its reference rate due to the negative stress values. While the pure Omori law with p < 1 occurs for the exponential distribution of shear stress changes, we find that a Gaussian stress distribution (which the k² model and many other models give) also gives realistic-looking p-values over wide ranges of time scales.

Figure 2: (a) Seismicity rate predicted by the rate-and-state model (2), using the stress map shown in Fig. 1b (thin black line), and computed from the synthetic catalog (thick red line). The dashed black line is a fit by Omori's law. (b) Probability density function of the stress change.

We have generated synthetic earthquake catalogs according to the rate-and-state model (1), using the stress change shown in Fig. 1b, without including earthquake interaction. We found that, for a realistic catalog (less than 6 orders of magnitude in time), the limited time interval available does not allow the inversion of the complete stress distribution P(τ) from the seismicity rate R(t) (see Fig. 2b). However, we can estimate P(τ) with good accuracy if we constrain P(τ) to be a Gaussian distribution, as predicted by the k² slip model, and if we know the stress drop and Aσn. We can measure the standard deviation τ* of the stress distribution and the aftershock duration ta from the observed aftershock rate.

4 Off-fault aftershocks

We can estimate the stress change and seismicity rate off of the fault plane for the synthetic k² slip model shown in Fig. 1. The power spectrum of the shear stress change off of the fault, at a distance y smaller than the rupture length L, is given by τ(k, y) ∼ e^(−ky) τ(k, 0).
The standard deviation of the stress distribution decreases very fast with the distance to the fault, which produces a strong drop of the seismicity rate off of the fault (see Fig. 3). The average stress decreases much more slowly with y. For y/L > 0.1, the stress field is very homogeneous and mostly negative (the standard deviation is smaller than the absolute mean stress). Therefore, the seismicity rate for y/L > 0.1 is smaller than the reference rate at all times t < ta.

5 Conclusion

We have shown how the rate-and-state seismicity model, with a heterogeneous stress field, can explain the spatial distribution of aftershocks on or close to the rupture area, and an Omori-law decay with exponent p ≤ 1.

Figure 3: (a) Seismicity rate for different values of the distance to the fault y/L, decreasing from y/L = 0 (top) to y/L = 0.2 (bottom), using the slip model shown in Figure 1a and assuming Aσn = 1 MPa. (b) Standard deviation (solid line) and absolute value of the mean (dashed line; the average stress change is always negative) of the stress distribution as a function of the distance to the fault.

Deviations from p = 1 are mapped onto measures of stress-change heterogeneity on the fault. We have shown that modest catalogue lengths allow an accurate inversion for the standard deviation of the stress distribution and for the aftershock duration ta. The stress drop is however not constrained if the catalog is too short (tmax < ta). This model cannot explain why some aftershock sequences have p > 1. A possible explanation involves stress relaxation with time due to post-seismic deformation. Inverting the stress distribution from the aftershock rate requires knowing Aσn.
We used Aσn = 1 MPa, based on laboratory experiments, which give A = 0.01 [Dieterich, 1994], and assuming σn = 100 MPa (lithostatic pressure at 5 km). Dieterich [1994] suggests Aσn = 0.1 MPa from the relation between aftershock duration, tectonic loading rate, and recurrence time. We have neglected in our study the effect of secondary aftershocks. Ziv and Rubin [2003] found, using a numerical and analytical study of the rate-and-state model, that secondary aftershocks increase the reference rate Rr but do not change p or τ∗. We have also assumed that Aσn is uniform. But fluctuations of Aσn do not change the seismicity rate if Aσn is less heterogeneous than the coseismic shear stress change τ.

References

[1] Dieterich, J. (1994), A constitutive law for rate of earthquake production and its application to earthquake clustering, J. Geophys. Res., 99, 2601-2618.
[2] Dieterich, J. (2005), Manuscript in preparation, talk at http://online.itp.ucsb.edu/online/earthq c05.
[3] Helmstetter, A. and B. E. Shaw (2005), Estimating stress heterogeneity from aftershock rate, submitted to J. Geophys. Res. (http://www.arxiv.org/pdf/physics/0509252).
[4] Herrero, A. and P. Bernard (1994), A kinematic self-similar rupture process for earthquakes, Bull. Seism. Soc. Am., 84, 1216-1228.
[5] Marsan, D. (2005), Can co-seismic stress variability suppress seismicity shadows? Insights from a rate-and-state friction model, submitted to J. Geophys. Res.
[6] Ziv, A. and A. M. Rubin (2003), Implications of rate-and-state friction for properties of aftershock sequences: quasi-static inherently discrete simulations, J. Geophys. Res., 108, 2051, doi:10.1029/2001JB001219.

Earthquake Source Parameters of Microearthquakes at Parkfield, CA, Determined Using the SAFOD Pilot Hole Seismic Array

K. Imanishi1 and W. L. Ellsworth2
1 Geological Survey of Japan, AIST, Tsukuba, Ibaraki 305-8567, Japan
2 U.S. Geological Survey, Menlo Park, CA 94025, USA
Corresponding Author: K.
Imanishi, [email protected]

Abstract: We estimate the source parameters of 34 microearthquakes at Parkfield, CA, ranging in size from M -0.3 to M 2.1, by analyzing seismograms recorded by the 32-level, 3-component seismic array installed in the SAFOD Pilot Hole. We succeeded in obtaining stable spectral ratios by stacking the ratios calculated from moving windows taken along the record following the direct waves. These spectral ratios were inverted for seismic moments and corner frequencies using the multiple empirical Green's function approach. Neither the static stress drops nor the apparent stresses of microearthquakes at Parkfield differ greatly from those reported for moderate and large earthquakes. It is likely that the dynamics of microearthquakes at Parkfield is similar to that of natural tectonic earthquakes from a macroscopic viewpoint. Half of the static stress drops are above 50 MPa, which is near the strength of the rock. This suggests that locally there can be high levels of stress even in active mature fault zones such as the San Andreas Fault.

1. Introduction

Static stress drop and radiated seismic energy are key parameters needed to understand the physics of earthquakes, investigate seismic source scaling relationships, and infer the local stress level in the crust. Recent work by Ide et al. (2003) suggests that propagation effects can contaminate measurements of source parameters even in borehole recordings. One method for overcoming this problem is the spectral ratio method, which removes the propagation effects by deconvolving the smaller earthquake's seismograms from those of the larger one. However, the deconvolution procedure is an unstable process, especially at higher frequencies, because small location differences have profound effects on the spectral ratio. This leads to large uncertainties in the estimates of corner frequencies.
In this study, we propose a method to determine a stable spectral ratio by stacking the ratios calculated from moving windows taken along the record following the direct waves. We apply the method to microearthquakes occurring at Parkfield, CA, using the SAFOD Pilot Hole seismic array.

2. Data

In the summer of 2002, the Pilot Hole for the San Andreas Fault Observatory at Depth (SAFOD) was drilled to a depth of 2.2 km. The site lies just north of the rupture zone of the M 6 1966 Parkfield earthquake (Fig. 1). The seismic array consists of 32 levels of 3-component geophones (GS-20DM) at 40 m spacing (856 to 2096 m depth). The sensors have a natural frequency of 15 Hz and a damping constant of 0.57. The data were recorded at a sampling rate of 1 kHz beginning in July 2002; the rate was increased to 2 kHz on December 12, 2002. On the basis of waveform similarity, we detected 34 events, classified into 14 groups, ranging in size from M -0.3 to M 2.1 (Fig. 1). These events are included in cluster 9. Seismograms of cluster 10 recorded at 428 m depth below mean sea level are shown in Fig. 1.

Figure 1. Location of the SAFOD Pilot Hole (square) and earthquake clusters analyzed (circles). The gray line shows the surface trace of the San Andreas Fault. Seismograms of three clusters recorded at 428 m depth below mean sea level are shown.

3. Method

We apply the spectral ratio method (Ide et al., 2003) to determine corner frequencies and seismic moments for the events within each cluster. As mentioned above, the deconvolution procedure is an unstable process, especially at higher frequencies, because small location differences have profound effects on the spectral ratio. According to Chavarria et al. (2003), the wavetrain recorded in the Pilot Hole is dominated by reflections and conversions rather than random coda waves. We therefore expect that the spectral ratios of the waves between the P and S arrivals will also reflect the source, as will the waves following the S wave.
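The moving-window stacking idea can be sketched as follows. This synthetic example, with hypothetical traces and window lengths, stacks the log spectral ratios of successive windows (a geometric-mean stack), which is one natural way to stabilize the ratio; the paper does not specify the stacking statistic, so this is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example (illustration only): two co-located events whose records
# differ by a factor of 3 at the source, plus weak recording noise.
n = 4096
wavetrain = rng.standard_normal(n)                 # common path/source proxy
small = wavetrain + 1e-4 * rng.standard_normal(n)
large = 3.0 * wavetrain + 1e-4 * rng.standard_normal(n)

def stacked_spectral_ratio(rec_large, rec_small, win=256, step=128, nwin=10):
    """Stack log spectral ratios over successive windows after the direct wave."""
    logsum = 0.0
    for i in range(nwin):
        s = slice(i * step, i * step + win)
        taper = np.hanning(win)
        spec_l = np.abs(np.fft.rfft(rec_large[s] * taper))
        spec_s = np.abs(np.fft.rfft(rec_small[s] * taper))
        logsum = logsum + np.log(spec_l / spec_s)
    return np.exp(logsum / nwin)                   # geometric-mean stack

ratio = stacked_spectral_ratio(large, small)
print(np.median(ratio))   # close to the true moment ratio of 3
```

A single-window ratio would scatter wherever one spectrum is small; averaging the log ratios over many windows (and, as in the paper, over array levels and components) suppresses that instability.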
In fact, we succeeded in obtaining stable spectral ratios by stacking the ratios calculated from successive windows taken along the record following the direct waves. In this study, we further stacked the ratios obtained from each level of the array and each component. This makes the ratio more stable (Fig. 2a) and permits us to estimate reliable source parameters. We determine the source radius from the corner frequency using the circular crack model of Sato and Hirasawa (1973). The static stress drop is then calculated by the formula of Eshelby (1957). Following Ide et al. (2003), we derived an attenuation function at each level of the array and each component by taking the ratio between the observed and calculated spectra. By averaging these functions over all events of the cluster, we obtain the average attenuation function, which is assumed to represent the propagation effects between the source and station. We calculate the radiated seismic energy from spectra corrected by the attenuation function.

Figure 2. (a) Seismograms of cluster 10 recorded at 428 m depth below mean sea level. (b) Spectral ratios (solid curves) and fitting curves for the theoretical spectral ratios assuming an omega-square model (broken curves).

4. Results

The observed spectral ratios were well modeled by the omega-square model (Fig. 2b). The estimated static stress drops range from approximately 0.5 to 120 MPa (Fig. 3a) and do not vary with seismic moment. Our result is consistent with previous studies using deep borehole data, which found no breakdown in the constant stress drop scaling over a wide range of magnitudes. The apparent stresses, defined as the rigidity multiplied by the ratio of radiated energy to seismic moment, range from approximately 0.1 to 20 MPa (Fig. 3b). Each data set in the previous studies shows apparent stress scaling with seismic moment. However, Ide and Beroza (2001) and Ide et al.
(2003) have shown that most of the observed size dependence can be attributed to artifacts arising from underestimation of energy due to a finite recording bandwidth or inadequate corrections for path effects, and concluded that the apparent stress is almost constant over a wide range of seismic moment. Our result is consistent with their conclusion. The estimated static stress drops lie at the high end of the compilation: half of them are in excess of 10 MPa and some are above 50 MPa. Assuming hydrostatic pore pressure, laboratory-derived coefficients of friction (0.6 < μ < 0.85), and a vertical stress SV equal to the overburden pressure, the Coulomb failure criterion predicts a maximum shear stress of 43-54 MPa in a strike-slip faulting environment at a depth of 5 km. If we consider the maximum shear stress to be a good approximation of the average shear stress acting on the fault plane to cause slip, the static stress drops of some events are near the strength of the rock. The San Andreas fault has long been thought to be weak, as evidenced by the lack of a heat flow anomaly adjacent to the fault, and the shear strength of the fault has been predicted to be about 20 MPa or less (e.g., Brune et al., 1969). High stress drop microearthquakes might suggest that locally there can be high levels of stress even in active mature fault zones.

Figure 3. (a) Static stress drop versus seismic moment. (b) Apparent stress versus seismic moment. The results of this study are plotted as solid circles. Gray circles represent lower limits of the apparent stress.

4. Discussion and Conclusions

We have shown that both the static stress drops and apparent stresses of microearthquakes at Parkfield do not depend on earthquake size and are similar to those reported for moderate and large earthquakes.
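The 43-54 MPa bound quoted above can be reproduced with a back-of-the-envelope Coulomb calculation. The sketch below assumes a crustal density of 2700 kg/m³, hydrostatic pore pressure, and a pure strike-slip regime with SV = (S1 + S3)/2; only the depth and friction range come from the text, the other values are illustrative assumptions.

```python
import math

def max_shear_stress(mu, depth_m=5000.0, rho_rock=2700.0, rho_w=1000.0, g=9.8):
    """Frictional limit on shear stress in a strike-slip regime (Pa).

    Uses the Coulomb limit S1 - Pp = q (S3 - Pp), q = (sqrt(mu^2 + 1) + mu)^2,
    with Sv = (S1 + S3)/2 (assumption) and hydrostatic pore pressure Pp.
    """
    sv = rho_rock * g * depth_m      # overburden pressure
    pp = rho_w * g * depth_m         # hydrostatic pore pressure
    q = (math.sqrt(mu**2 + 1.0) + mu) ** 2
    # With e1 = S1 - Pp, e3 = S3 - Pp and e1 + e3 = 2 * (Sv - Pp):
    e3 = 2.0 * (sv - pp) / (1.0 + q)
    e1 = q * e3
    return 0.5 * (e1 - e3)           # maximum shear stress

for mu in (0.6, 0.85):
    print(mu, max_shear_stress(mu) / 1e6, "MPa")
```

With these assumptions the calculation gives roughly 43 MPa for μ = 0.6 and 54 MPa for μ = 0.85, matching the range quoted in the text.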
It is likely that natural tectonic earthquakes are self-similar over a wide range of earthquake sizes and that the dynamics of small and large natural tectonic earthquakes are similar from a macroscopic viewpoint. Bridging the gap between laboratory friction experiments and seismological measurements of earthquake sources makes it possible to use laboratory results as a basis for models of the earthquake generation process. It should be noted that the earthquakes studied here include events with dimensions of less than 10 m, approaching those in laboratory studies (about 1 m). Further analysis of microearthquakes at Parkfield is expected to narrow the gap between laboratory experiments and natural tectonic earthquakes. Based on the recurrence interval of repeating microearthquakes at Parkfield, Nadeau and Johnson (1998) derived scaling relations for displacements, source dimensions, and static stress drops with seismic moment. The implied stress drops decrease from ~2000 MPa to 100 MPa for M from ~0 to 3. Their scaling relation is shown by a broken line in Fig. 3a and is much higher than our estimates. Failure of repeating earthquakes is often thought to occur on a strong fault patch (asperity) embedded in an aseismically creeping fault plane. According to an asperity model, the small strong asperity patch is surrounded by a much weaker fault that creeps under the influence of tectonic stress. When the asperity patch ruptures, the surrounding area slips as it is dynamically loaded by the stress release of the asperity patch. Under this asperity model, our estimated source sizes correspond to the entire slip surface, while the source dimensions of Nadeau and Johnson (1998) can be considered to represent the asperity patch. If so, the stress drop averaged over the asperity patch should be higher than that averaged over the entire slip surface. This might explain the discrepancy seen in Fig. 3a, but it is unlikely that the stress drops exceed the strength of the rock.
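The sensitivity of the inferred stress drop to the assumed source dimension can be illustrated with Eshelby's (1957) circular-crack formula, Δσ = 7 M0 / (16 r³). The moment and radii below are hypothetical values chosen only to show the r⁻³ dependence, which is what drives the discrepancy between the whole-rupture and asperity-patch interpretations.

```python
def eshelby_stress_drop(m0, radius):
    """Static stress drop (Pa) of a circular crack: 7 * M0 / (16 * r^3).

    m0: seismic moment in N*m, radius: source radius in m.
    """
    return 7.0 * m0 / (16.0 * radius**3)

m0 = 1.0e13  # N*m, hypothetical microearthquake moment
# Attributing the same moment to a 5x smaller source raises the inferred
# stress drop by a factor of 5**3 = 125.
for r in (100.0, 20.0):
    print(r, eshelby_stress_drop(m0, r) / 1e6, "MPa")
```

For this hypothetical moment, a 100 m radius gives about 4.4 MPa while a 20 m radius gives about 547 MPa, so interpreting the source dimension as the small asperity patch rather than the entire slip surface easily moves the estimate from the "weak fault" range to near the strength of the rock.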
Although several models have been proposed to resolve the stress drop question at Parkfield, it remains unresolved. We should re-examine the asperity model based on the results obtained in this study.

5. Acknowledgement

We thank Eylon Shalev and Peter Malin of Duke University for their dedication to the operation of the SAFOD Pilot Hole seismic array and for distributing the data within the research community.

6. References

Brune, J. N., T. L. Henyey, and R. F. Roy (1969), Heat flow, stress, and rate of slip along the San Andreas Fault, California, J. Geophys. Res., 74, 3821-3827.
Chavarria, J. A., P. Malin, R. D. Catchings, and E. Shalev (2003), A look inside the San Andreas fault at Parkfield through vertical seismic profiling, Science, 302, 1746-1748.
Eshelby, J. D. (1957), The determination of the elastic field of an ellipsoidal inclusion and related problems, Proc. R. Soc. London, 241, 376-396.
Ide, S. and G. C. Beroza (2001), Does apparent stress vary with earthquake size?, Geophys. Res. Lett., 28, 3349-3352.
Ide, S., G. C. Beroza, S. G. Prejean, and W. L. Ellsworth (2003), Apparent break in earthquake scaling due to path and site effects on deep borehole recordings, J. Geophys. Res., 108 (B5), 2271, doi:10.1029/2001JB001617.
Nadeau, R. M., and L. R. Johnson (1998), Seismological studies at Parkfield VI: Moment release rates and estimates of source parameters for small repeating earthquakes, Bull. Seismol. Soc. Am., 88, 790-814.
Sato, T., and T. Hirasawa (1973), Body wave spectra from propagating shear cracks, J. Phys. Earth, 21, 415-431.

Statistical Models Based on the Gutenberg-Richter a and b values for Estimating Probabilities of Moderate Earthquakes in Kanto, Japan

M. Imoto1
1 National Research Institute for Earth Science and Disaster Prevention, Tennodai 3-1, Tsukuba-shi, Ibaraki-ken 305-0006, Japan
Corresponding Author: M.
Imoto, [email protected]

Abstract: We attempt to construct models estimating probabilities of moderate-size earthquakes in Kanto, central Japan, with the recurrence times of target earthquakes considered in inverse relation to the hazard. In estimating the recurrence times, the a and b values of the Gutenberg-Richter relation for an area of interest are estimated, and the relation is extrapolated to the magnitude range of the targets. We consider two alternative cases: the b value varying from point to point, and the b value remaining constant over the whole space under study. Retrospective analysis based on the catalogue of the NIED Kanto-Tokai network for the period from 1990 to 1999 indicates that the model with constant b performs better than that with varying b. This implies that the spatial variation in b will not work effectively for estimating earthquake probability. However, a comparison between the b value distributions during the background period and over a set of time-space points conditioned by target earthquakes suggests that some information could be obtained from the b value, since the Kullback-Leibler information statistic between the two distributions is significant. In order to incorporate variations in b into the model with constant b, a hazard function with a single parameter, b, is considered, in which a normal distribution is adopted and optimized to the data. The b value is distributed independently of the distribution of the a value. Taking these two values as parameters for earthquake precursors, a formula for earthquake probabilities based on multi-disciplinary parameters can be applied. Thus, we obtain a model of earthquake probability using both a and b values which performs better than either model of the Gutenberg-Richter relation with constant or variable b.

1.
Introduction

Utsu (1977, 1982), Aki (1981) and others have formulated an earthquake probability calculation based on multiple precursory phenomena. These calculations assume that different precursory phenomena occur independently of each other. In this case, the probability expected from detecting multiple precursory phenomena is given by the product of the probability gains for the respective observations and the probability calculated from secular seismicity. Therefore, the probability becomes large when multiple precursory phenomena are detected simultaneously. Only a small number of reports have been published based on this formula, perhaps because of a common understanding that no reliable precursors have been observed apart from foreshocks and a few other phenomena. This formulation, in which phenomena are treated as two categorical values in its original form, can be reformulated in light of recent point-process modeling to apply to cases of quantitative observed values. As an example, the a and b values of the Gutenberg-Richter relation can be treated as such parameters, and they are indeed used here for building probability models for moderate earthquakes in Kanto, central Japan.

2. Data and Models

Hazard functions for a moderate-size earthquake (M ≥ 5.0) occurring within the study volume (200 × 200 × 90 km³) for the period from January 1990 to December 1999 were obtained using the Gutenberg-Richter relation. The a and b values at a given space-time point, which measure the recurrence intervals of targets, are calculated from earthquakes within 20 km of the point occurring over a period of 3650 days prior to the assessment. The assessment was made at 2-km-spaced grid points and at 10-day intervals. We classified the points of assessment into two categories: points of target events and points of background. A point of target event is the grid point of assessment nearest to a target in space and just before it in time. The other grid points are classified as background.
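The per-point Gutenberg-Richter calculation described above can be sketched as follows. One standard choice for the b estimate (an assumption here, not stated in the text) is the Aki-Utsu maximum-likelihood formula b = log10(e) / (mean(M) - Mmin); the count of M ≥ 2.0 events plays the role of 10^a, and the relation is extrapolated to the target magnitude. The catalog below is synthetic and all numbers are illustrative.

```python
import math

def gr_rate(mags, m_min=2.0, m_target=5.0, period_yr=10.0):
    """Extrapolate a local Gutenberg-Richter fit to the target magnitude.

    b: Aki-Utsu maximum-likelihood estimate log10(e) / (mean(M) - m_min).
    Returns (b, expected rate of M >= m_target events per year).
    """
    sample = [m for m in mags if m >= m_min]
    mean_m = sum(sample) / len(sample)
    b = math.log10(math.e) / (mean_m - m_min)
    rate = len(sample) * 10.0 ** (-b * (m_target - m_min)) / period_yr
    return b, rate

# Synthetic local sample: 100 magnitudes whose mean is 2.0 + log10(e),
# constructed so that the maximum-likelihood b value is exactly 1.
mags = [2.0 + math.log10(math.e) - 0.2, 2.0 + math.log10(math.e) + 0.2] * 50
b, rate = gr_rate(mags)
print(b, rate)   # b = 1.0; rate = 0.01 M >= 5.0 events per year
```

The reciprocal of the extrapolated rate is the recurrence time of the target class at that point, which is the quantity the models below place in inverse relation to the hazard.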
We thus obtained two distributions, the background distribution and the conditional distribution, for each of the a and b parameters. Figures 1a and 1b illustrate the background distribution with a blue dashed line and the conditional distribution with a red solid line for a and b, respectively. In Fig. 1a, the difference between the two distributions is quite obvious. On the other hand, in Fig. 1b, the difference between the two distributions is not extremely large but is still significant.

Figure 1a. Cumulative distributions of the a value for background points (blue dashed line) and for points conditioned by target earthquakes (red solid line).
Figure 1b. Same as Fig. 1a but for the b value.

A-model: Assuming a single constant b value over the study volume, only the spatial variation in a is taken into account when estimating the recurrence interval of a target event. The hazard function for this model is given as:

λa(s; M ≥ 5.0) = N(s; M ≥ 2.0) · 10^(−b·Δm) · rv,  (1)

where the first factor on the right-hand side is the number of earthquakes in a sample space-time volume, the common logarithm of which defines the a value; the second factor is a constant in this case; and the last factor, rv, is a normalizing factor between the sample volume and the unit volume of the hazard function.

B-model: The same as the A-model but with b varying in space. The hazard for this case is given as:

λb(s; M ≥ 5.0) = N(s; M ≥ 2.0) · 10^(−b(s)·Δm) · rv.  (2)

AxB-model: Taking the two distributions of b in Fig. 1b into consideration, a hazard function of the following form is assumed:

λaxb(s; M ≥ 5.0) = N(s; M ≥ 2.0) · 10^(c1·b + c2·b²) · rv,  (3)

where c1 and c2 are optimized to the data for the learning period (January 1990 to December 1999).
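Equations (1)-(3) differ only in the exponent applied to the local event count N(s; M ≥ 2.0). A sketch with hypothetical values for N, b, Δm, rv, c1, and c2 (none of these numbers come from the paper):

```python
def hazard_a(n, b, dm, rv):
    """A-model, eq. (1): a single constant b over the study volume."""
    return n * 10.0 ** (-b * dm) * rv

def hazard_b(n, b_s, dm, rv):
    """B-model, eq. (2): b varies with the space-time point s."""
    return n * 10.0 ** (-b_s * dm) * rv

def hazard_axb(n, b_s, c1, c2, rv):
    """AxB-model, eq. (3): exponent c1*b + c2*b^2 fitted to the data."""
    return n * 10.0 ** (c1 * b_s + c2 * b_s**2) * rv

# Hypothetical local values: 200 events with M >= 2.0, dm = 3.0, rv = 1e-4.
n, dm, rv = 200, 3.0, 1e-4
print(hazard_a(n, 0.9, dm, rv))            # constant b = 0.9
print(hazard_b(n, 1.1, dm, rv))            # local b(s) = 1.1
print(hazard_axb(n, 1.1, -1.5, -0.6, rv))  # c1, c2 from a hypothetical fit
```

The A- and B-models use the Gutenberg-Richter exponent directly, while the AxB-model lets the data decide, through c1 and c2, how strongly the local b value should modulate the hazard.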
3. Performance

To estimate the performance of the above-mentioned models, the difference in log likelihood between a proposed model and the stationary Poisson model, called the information gain (IG value; Daley and Vere-Jones, 2003), is calculated using data from both the learning period and a test period (January 2000 to September 2005). Table 1 lists the IG values for the A-, B-, and AxB-models for both periods. The IG values for the A-model are larger than those for the B-model for both periods. This suggests that the spatial variation in the b value does not work effectively for estimating probabilities. The IG values for the AxB-model, however, are larger than those for either the A- or B-model for both periods. Thus, the b parameter of the Gutenberg-Richter relation is used more effectively in this model than in the models that apply the Gutenberg-Richter relation directly, with the b value constant or varying in space.

Period                    N    A-model  B-model  AxB-model
Learning 1990.1-1999.12   38   38.07    36.49    44.07
Test     2000.1-2005.9    19   33.05    31.61    37.23

Table 1. IG values for the A-, B-, and AxB-models for the learning and test periods.

4. References

Aki, K., A probabilistic synthesis of precursory phenomena, in Earthquake Prediction, edited by D. W. Simpson and P. G. Richards, pp. 566-574, AGU, 1981.
Daley, D. J. and Vere-Jones, D., An Introduction to the Theory of Point Processes, vol. 1, Elementary Theory and Methods, second edition, 469 pp., Springer, New York, 2003.
Utsu, T., Probabilities in earthquake prediction, Zisin II, 30, 179-185, 1977 (in Japanese).
Utsu, T., Probabilities in earthquake prediction (the second paper), Bull. Earthq. Res. Inst., 57, 499-524, 1982 (in Japanese).

Seismicity Cycle and Large Earthquake

S. Itaba1 and K.
Watanabe2
1 Geological Survey of Japan, AIST, 1-1-1, Higashi, Tsukuba, Ibaraki, 305-8567, Japan
2 Disaster Prevention Research Institute, Kyoto University, Gokasho, Uji, Kyoto, 611-0011, Japan
Corresponding Author: S. Itaba, [email protected]

Abstract: There are seismicity cycles marked by the repetition of large earthquakes, with durations of hundreds to tens of thousands of years. However, the period of observation for modern seismology is about a hundred years, only a small portion of one cycle. We clarify the outline of the seismicity cycle through the relation between the lapsed rate (LR) since the latest large earthquake and the present seismic activity of large earthquake occurrence zones (active faults and plate boundaries).

1. Introduction

There are periodicities in seismicity, including both large earthquakes occurring on plate boundaries or active faults and microearthquakes. For inland active faults, after the occurrence of a large earthquake, the aftershocks usually continue for several to tens of years, decreasing with time [Watanabe, 1989]. In the region of the 1891 Nobi Earthquake, it is known that aftershock activity has been continuing for more than a hundred years. Following the high level of aftershock activity, the seismicity cycle moves into a phase of much lower seismic activity [Toda, 2002], and then the next major earthquake occurs. For inland active faults, the seismicity cycle is thousands to tens of thousands of years; for plate boundaries, it is usually tens to hundreds of years. The duration of a cycle varies widely between faults and regions. The period of observation for modern seismology is about a hundred years, only a small portion of one cycle. Therefore, it is difficult to recognize what portion of the seismicity cycle the current seismic activity corresponds to. However, the current stage of the cycle can be estimated from geological data.
By combining geological data and current seismicity for many different faults or plate boundaries, we can view a complete seismicity cycle. We evaluated quantitatively the seismic activity of 98 major active faults in Japan (98 faults), 17 segments of the San Andreas Fault System (SAFS), and 34 focal regions on plate boundaries or interplate zones in the seas adjacent to Japan (34 regions). We then investigated the relation between the lapsed time (LT) since the last large earthquake and the present seismic activity. Although there is large variation in the scales and recurrence times (RT) of faults, we developed a method to correct for these factors. We find a clear correlation between the LR since a large earthquake and the present seismic activity.

2. Method

The LT and RT are extracted from the following data sources. For the 98 faults, the newest information available from the Earthquake Research Committee, The Headquarters for Earthquake Research Promotion [2002, 2003 and 2004], is used. For the SAFS, the report "Probabilities of Large Earthquakes Occurring in California on the San Andreas Fault" by the Working Group on California Earthquake Probabilities [1988] is used. For the 34 regions, the Japan Seismic Hazard Information Station (J-SHIS) of the National Research Institute for Earth Science and Disaster Prevention is used. The seismic activity of inland active faults is evaluated quantitatively using the method of Itaba et al. [2003]. Since there are large variations in RT between faults, the concept of LR is introduced. LR is defined from the RT and LT of a fault as

LR = (LT/RT) · 100 [%].  (1)

Moreover, the pace of seismicity is thought to change with the length of the cycle period. Therefore, the following formula defines the earthquake number density per 1% of RT (NPR), which is compared with LR. Here N is the total number of earthquakes occurring within the target zone, A is the area of the target zone, and P is the observation period (in years).
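These two normalizations, the LR of equation (1) and the NPR of equation (2) below, amount to a small helper; the fault values used here are hypothetical:

```python
def lapsed_rate(lt_yr, rt_yr):
    """Lapsed rate, eq. (1): percentage of the recurrence time elapsed."""
    return lt_yr / rt_yr * 100.0

def npr(n_events, rt_yr, area_km2, period_yr):
    """Earthquake number density per 1% of recurrence time, eq. (2)."""
    return n_events * rt_yr / (area_km2 * period_yr * 100.0)

# Hypothetical fault: recurrence time 4000 yr, 1000 yr since the last event,
# 50 earthquakes observed in a 100 km^2 zone over a 30 yr period.
print(lapsed_rate(1000.0, 4000.0))   # 25.0 (% of the cycle elapsed)
print(npr(50, 4000.0, 100.0, 30.0))  # ~0.667 counts / (km^2 * 1% RT)
```

Scaling the observed count by RT removes the dependence on cycle length, so faults with very different recurrence times can be plotted on a common LR-NPR diagram.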
NPR = N·RT/(A·P·100) [count / (km²·1% RT)].  (2)

3. Results and Discussion

The relations between LR and NPR are shown in Figures 1 to 3. In all examples, there is a clear correlation between LR and NPR. For inland faults, after the occurrence of the latest large earthquake, the seismicity decays smoothly, inversely proportional to the lapsed time, for over a thousand years. For inland faults, after a large earthquake, it is usually assumed that the aftershock activity decays to the background level in a few tens to about a hundred years. If this were the case, the correlation between LR and seismic activity should be recognized only for small values of LR. However, according to the results of this investigation, there is a dependence on LR over most of the seismicity cycle. Moreover, the pattern is consistent with the modified Omori formula [Utsu, 1961]. The linear decay also continues for about one cycle in the 34 focal regions. From these results, we conclude that the low level of seismicity that is usually thought to be steady background activity may actually consist of decaying aftershocks that last over most of the seismicity cycle, which can be hundreds to thousands of years for inland faults, or tens to hundreds of years around plate boundaries.

Figure 1. (Left) Relation between LR and NPR for Japan.
There is a relation in which NPR falls linearly as LR increases. The p value of the modified Omori formula is 0.954. Brown round marks show some examples before and after large earthquakes (for three years).

Figure 2. (Right) Relation between LR and NPR for the SAFS. There is a relation in which NPR falls linearly as LR increases. For the northern and central part of the SAFS, the p value of the modified Omori formula is 1.03; for the southern part, it is 1.18. Brown round marks show some examples before and after large earthquakes (for three years).

Figure 3. Relation between LR and NPR for the focal regions in three periods (A: Jan 1997-Dec 1999, p = 0.780; B: Jan 2000-Dec 2002, p = 0.743; C: Jan 2003-Jul 2005, p = 0.527) and for all periods combined (p = 0.639, lower right). There is a relation in which NPR falls linearly as LR increases in all cases.

4. Conclusion

The hypocenters of the present seismic activity and the geological information from active fault and trench investigations can be used together to study the changes of seismicity related to the seismicity cycle. Although there is large variation in the scale and RT of faults, we developed corrections for these factors, and a clear correlation is recognized between the LR and the present seismicity.
The rates of seismicity show decay that lasts for almost the entire seismicity cycle. Using our methodology, we can identify how the present seismicity fits into the seismicity cycle. It is thought that some faults shift to a preparatory stage before a large earthquake. When, at a large LR, a fault shifts to a preparatory stage distinct from the aftershock activity, the seismicity may increase. Based on this idea, it may be useful to try to detect the shift to a preparatory or calm stage before a large earthquake in the seismicity pattern. Thus, this study contributes to the overall understanding of the seismicity cycle, which will support the ongoing efforts in earthquake prediction and disaster mitigation.

5. Acknowledgement

We used the hypocentral data of the Japan Meteorological Agency and the catalog of the Council of the National Seismic System; the fault trace data of the Research Group for Active Faults of Japan and the California Department of Conservation, Division of Mines and Geology; the position data and evaluation results for focal regions from the Japan Seismic Hazard Information Station (J-SHIS) of the National Research Institute for Earth Science and Disaster Prevention; and the evaluation results for active faults by the Earthquake Research Committee, Headquarters for Earthquake Research Promotion, and the Working Group on California Earthquake Probabilities. We are grateful for these indispensable data.

6. References

Earthquake Research Committee, Headquarters for Earthquake Research Promotion, Publications of Earthquake Research Committee (Evaluations on long-term and strong motion) -1995-2001- (in Japanese), 476 pp., 2002.
Earthquake Research Committee, Headquarters for Earthquake Research Promotion, Publications of Earthquake Research Committee -January-December 2002- (in Japanese), 916 pp., 2003.
Earthquake Research Committee, Headquarters for Earthquake Research Promotion, Publications of Earthquake Research Committee -January-December 2003- (in Japanese), 1, 793 pp., 2004.
Itaba, S., K. Watanabe, R. Nishida, T. Noguchi, and J. Mori, Quantitative evaluation of inland seismic activity in Japan using GIS -Part 2-, AGU 2003 Fall Meeting, San Francisco, F1138, 12th Dec., 2003.
Toda, S. and Y. Sugiyama, Implications of seismicity on active faults in Japan (in Japanese with English abstract), 2002 Japan Earth and Planetary Science Joint Meeting, Tokyo, 29th Mar., 2002.
Utsu, T., Aftershocks and earthquake statistics (1), J. Fac. Sci., Hokkaido Univ., Ser. 7, 2, 129-195, 1969.
Watanabe, K., On the duration time of aftershocks of the earthquake of the central part of Gifu Prefecture, September 9, 1969 (in Japanese), Bull. Disas. Prev. Inst., Kyoto Univ., 39, Parts 1, 2, 1-22, 1989.
Working Group on California Earthquake Probabilities, Probabilities of Large Earthquakes Occurring in California on the San Andreas Fault, USGS Open-File Report, 88-398, 1988.

The Correlation Between the Phase of the Moon and the Occurrences of Microearthquakes in the Tamba Region

T. Iwata1 and H. Katao2
1 The Institute of Statistical Mathematics, 4-6-7, Minato-ku, Tokyo 106-8569, Japan
2 Disaster Prevention Research Institute, Kyoto University, Gokasho, Uji, Kyoto 611-0011, Japan
Corresponding author: T. Iwata, [email protected]

Abstract: We study the correlation between the phase of the moon and the occurrence of microearthquakes in the Tamba region, which is close to the fault of the 1995 Kobe earthquake. Since the existence of this correlation after the Kobe earthquake was suggested in a previous study, we investigate its statistical significance here.
Using point-process modeling and the AIC (Akaike Information Criterion), we confirm that the correlation is statistically significant, that it is strong just after the Kobe earthquake, and that it then becomes weaker year by year. 1. Introduction Several previous studies have attempted to find a correlation between the occurrence of earthquakes and the phase of the moon; Katao [2002] (referred to as K2002 hereafter) is one of these studies. K2002 reports that, after the 1995 Kobe earthquake, the occurrences of microearthquakes (MEs) correlated with the phase of the moon in the Tamba region, which is close to the focal region of the Kobe earthquake. K2002 examines the frequency distribution of the MEs during approximately two years after the occurrence of the Kobe earthquake, and shows that the frequency of the MEs varies depending on the phase of the moon. In K2002, however, the statistical significance of this variation was not investigated. Thus, to confirm the significance, we apply a statistical analysis called point-process modeling [e.g., Ogata, 1983a] to the data examined in K2002. In the analysis, we investigate not only the significance of the correlation but also its temporal variation. 2. Data The earthquakes used in this study occurred in the Tamba region and are listed in the catalogue compiled by the Disaster Prevention Research Institute (DPRI), Kyoto University, Japan. The Tamba region neighbors the focal region of the 1995 Kobe earthquake, and the seismicity in this region increased after the occurrence of the Kobe earthquake [e.g., Toda et al., 1998]. We make a dataset of earthquakes inside a rectangular area identical to that defined in K2002. Considering the detection capability, only earthquakes of M ≥ 1.4 are used. 3.
Statistical Analysis Using Point-Process Modeling To examine the statistical significance of the correlation shown in K2002, we performed a statistical analysis called point-process modeling [e.g., Ogata, 1983a]. To investigate the periodicity of seismicity with point-process modeling, Ogata [1983a] suggested the following type of function as λ(t):

λ(t) = μ + (trend) + (cluster) + (periodicity), (1)

where λ(t) is the intensity function, which gives the occurrence rate of earthquakes per unit time (one day in this study). We use a polynomial function for the trend, the ETAS model [Ogata, 1988] for the cluster term, and trigonometric functions for the periodicity. Thus, the intensity function used in this study is as follows:

\lambda(t) = \mu + \sum_{k=1}^{N} a_k t^k + \sum_{i:\, t_i < t} \frac{K \exp(\alpha(M_i - M_z))}{(t - t_i + c)^p} + \sum_{k=0}^{L_1} A_{1k} t^k \sin\theta(t) + \sum_{k=0}^{L_1} B_{1k} t^k \cos\theta(t) + \sum_{k=0}^{L_2} A_{2k} t^k \sin 2\theta(t) + \sum_{k=0}^{L_2} B_{2k} t^k \cos 2\theta(t), (2)

(a_k, K, c, p, α, A_{1k}, B_{1k}, A_{2k}, B_{2k}: parameters) where t_i and M_i are the occurrence time and magnitude of the i-th ME, respectively, M_z indicates the lower magnitude threshold of our dataset, and θ(t) is a function which converts the actual time t into the phase angle. As the phase angle, we assigned 0° or 360° to the times of new moon and 180° to the times of full moon. The phase angle is then assigned to the time linearly, from 0° to 180° or from 180° to 360°. Note that θ(t) and 2θ(t) indicate the phase angle related to a synodic and a half-synodic month, respectively. In equation (2), we consider four kinds of constraint on the orders L_1 and L_2: (i) L_1 = L_2 = 0; (ii) L_2 = 0; (iii) L_1 = 0; and (iv) no constraint. Each constraint corresponds to the case where we consider (i) no triggering effect related to the phase of the moon, (ii) only the effect related to a synodic month, (iii) only the effect related to a half-synodic month, and (iv) the effects related to both a synodic and a half-synodic month.
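The conversion θ(t) from calendar time to lunar phase angle described above can be sketched as follows; a minimal sketch in Python, where the tabulated syzygy times and their (time, kind) format are hypothetical illustrations rather than the authors' actual input.

```python
def phase_angle(t, syzygies):
    """Convert a time t (in days) to the lunar phase angle in degrees,
    interpolating linearly between tabulated new/full moon times:
    0/360 deg at new moon, 180 deg at full moon.
    `syzygies` is a hypothetical sorted list of (time, kind) pairs."""
    for (t0, k0), (t1, _) in zip(syzygies, syzygies[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            if k0 == "new":              # waxing: 0 -> 180 degrees
                return 180.0 * frac
            return 180.0 + 180.0 * frac  # waning: 180 -> 360 degrees
    raise ValueError("t lies outside the tabulated syzygy interval")

# Example with approximate syzygy times for one synodic month (29.53 days)
syz = [(0.0, "new"), (14.77, "full"), (29.53, "new")]
```

A quarter of the way from new moon to full moon this returns 90 degrees, matching the linear assignment in the text.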
For each of the four cases, we estimate the parameters which provide the best fit to the observed time series of the MEs using the maximum likelihood method [e.g., Ogata, 1983b]. We also estimate the AIC (Akaike Information Criterion) [Akaike, 1974] for each of them. Among the cases, the one whose AIC is the smallest is considered to offer the best fit to the observed occurrences of the MEs. Note that the orders N, L_1 and L_2 are also determined using the AIC. We analyze the MEs during approximately four years after the 1995 Kobe earthquake, from 3:00 on 17 January 1995 to 3:00 on 17 December 1999 (JST). The number of MEs used is 5438. The AICs and the orders N, L_1 and L_2 of the four cases are shown in Table 1. The AIC of case (iv) is the smallest among the four cases. Since cases (i), (ii), and (iii) are restricted versions of case (iv), the difference between the AIC of case (iv) and those of the other cases can be converted into the log-likelihood ratio statistic [e.g., Ogata, 1983b]. Using this feature, we can estimate the probability that the fit of case (iii), whose AIC is the second smallest, is better than that of case (iv). From the values shown in Table 1, the estimated probability is 2.56%, which is too small to accept the null hypothesis. This implies that the correlation between the activity of the MEs and the phase of the moon is statistically significant, and that we should consider the triggering effects related to both a synodic and a half-synodic month. Figure 1(a) shows the intensity of the trigonometric part, which represents the periodicity. The value of the intensity after the moments of new/full moon is generally higher than in the other periods. Since

g_i(t) = \sqrt{\left(\sum_{k=0}^{L_i} A_{ik} t^k\right)^2 + \left(\sum_{k=0}^{L_i} B_{ik} t^k\right)^2} \quad (i = 1, 2) (3)

gives the amplitudes of the intensity functions related to a synodic or a half-synodic month, we plot these amplitudes in Figure 1(b).
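The amplitude g_i(t) of a polynomially modulated sinusoid, as in equation (3), can be computed directly from the fitted coefficients; a minimal sketch in Python, with illustrative (not fitted) coefficient values.

```python
import math

def trig_amplitude(a_coeffs, b_coeffs, t):
    """Amplitude of P_A(t)*sin(theta) + P_B(t)*cos(theta), namely
    sqrt(P_A(t)**2 + P_B(t)**2), where P_A and P_B are polynomials
    given by their coefficient lists (constant term first)."""
    p_a = sum(a * t**k for k, a in enumerate(a_coeffs))
    p_b = sum(b * t**k for k, b in enumerate(b_coeffs))
    return math.sqrt(p_a * p_a + p_b * p_b)

# Illustrative constant coefficients: 3*sin + 4*cos has amplitude 5
amp = trig_amplitude([3.0], [4.0], 0.0)
```

Evaluating this on a grid of t produces curves like those in Figure 1(b).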
These amplitudes reach their maximum just after the occurrence of the 1995 Kobe earthquake and generally decrease as time elapses. We also applied the same statistical test to the dataset covering the two years before the occurrence of the Kobe earthquake, from 0:00 on 17 January 1993 to 0:00 on 17 January 1995 (JST). Case (i) is chosen as the best one; no significant correlation is found. 4. Discussion After the Kobe earthquake, the seismicity rate in the Tamba region increased compared with before, and this increase is assumed to be caused by the increase of static stress due to the main rupture of the Kobe earthquake [e.g., Toda et al., 1998]. As mentioned in the previous sections, the significant correlation is found only after the Kobe earthquake and not before. Additionally, the correlation is most remarkable just after the Kobe earthquake and becomes weaker thereafter. This feature supports our interpretation that the rupture of the Kobe earthquake made the stress/strain state in the Tamba region critical and that the seismicity therefore correlates with the phase of the moon. Recent laboratory experiments imply that the occurrence of acoustic emissions/earthquakes is affected by the amplitudes of stress/strain changes [Lockner and Beeler, 1999]. Yamamura et al. [2003] examined the seismic velocity observed in a vault at the coast of Miura Bay, Japan, and found a temporal variation of the velocity with a 14-day (approximately half a synodic month) periodicity. These studies suggest that the changes in the amplitudes of stress/strain caused by the phase of the moon could affect the seismicity or the physical properties of the earth, and they support our results. One may note the feature that not only 2θ(t) but also θ(t) is correlated with the occurrence of the MEs. A suggested cause of this feature is that the maximum amplitude of the stress/strain changes differs between new and full moon.
Using the program of Agnew [1997], we calculated the strain changes at the center of the focused area. In some periods, the maximum amplitude of the strain changes differs notably between the times around new moon and those around full moon. Our results may suggest that this variation influences the occurrence of the earthquakes and that we should include it directly for more precise modeling of earthquake occurrence. 5. Acknowledgement This study was carried out under the ISM Cooperative Research Program (03-ISM-CRP-2026). Figures were generated using the GMT software [Wessel and Smith, 1998]. 6. References Akaike, H., A new look at the statistical model identification, IEEE Trans. Autom. Control, 19, 716-723, 1974. Agnew, D. C., NLOADF: A program for computing ocean-tide loading, J. Geophys. Res., 102, 5109-5110, 1997. Katao, H., Relation between moon phase and occurrences of micro-earthquakes in the Tamba Plateau (in Japanese with English abstract and figure captions), J. Geography, 111, 248-255, 2002. Lockner, D. A., and N. M. Beeler, Premonitory and tidal triggering of earthquakes, J. Geophys. Res., 104, 20,133-20,151, 1999. Ogata, Y., Likelihood analysis of point processes and its applications to seismological data, Bull. Int. Statist. Inst., 50, Book 2, 943-961, 1983a. Ogata, Y., Estimation of the parameters in the modified Omori formula for aftershock frequencies by the maximum likelihood procedure, J. Phys. Earth, 31, 115-124, 1983b. Ogata, Y., Statistical models for earthquake occurrences and residual analysis for point processes, J. Amer. Statist. Assoc., 83, 9-27, 1988. Toda, S., R. S. Stein, P. A. Reasenberg, J. H. Dieterich, and A. Yoshida, Stress transferred by the 1995 Mw = 6.9 Kobe, Japan, shock: Effect on aftershocks and future earthquake probabilities, J. Geophys. Res., 103, 24,543-24,565, 1998. Wessel, P., and W. H. F. Smith, New, improved version of the Generic Mapping Tools released, EOS, 79, 579, 1998. Yamamura, K., O. Sano, H.
Utada, Y. Takei, S. Nakao, and Y. Fukao, Long-term observation of in situ seismic velocity and attenuation, J. Geophys. Res., 108, 2317, doi:10.1029/2002JB002005, 2003.

Figure 1 (a) Temporal variation of the intensity functions of the trigonometric part in equation (2) obtained by the analyses of the MEs from 12:00 on 17 January 1995 to 12:00 on 17 January 1999. The vertical axis indicates the elapsed days since 12:00 on 17 January 1995. (b) Temporal variations of g_i(t) as defined in equation (3), which denotes the amplitudes of the intensity functions of the trigonometric part in equation (2). The solid and dotted lines correspond to the amplitudes related to a synodic month (i = 1 in equation (3)) and a half-synodic month (i = 2), respectively.

constraint      (i)         (ii)        (iii)       (iv)
(N, L1, L2)     (3, 0, 0)   (3, 3, 0)   (3, 0, 3)   (3, 3, 3)
AIC             −4933.03    −4938.84    −4936.18    −4942.14*
* The minimum value of AIC is obtained.

Table 1 The AICs and the orders N, L1 and L2 of the polynomial functions representing the trend and the temporal variations of the periodicities related to a synodic and a half-synodic month, using the intensity function in equation (2), for the analyses of the MEs from 12:00 on 17 January 1995 to 12:00 on 17 January 1999.

A New Multiple Dimension Stress Release Statistic Model based on co-seismic stress triggering
Mingming Jiang and Shiyong Zhou
Geophysics Department, Peking University, Beijing 100871, China
Abstract Based on the stress release model proposed by Vere-Jones (1978), we have developed a multi-dimensional stress release model which incorporates a quantitative model of static stress triggering. The model is applied to the historic catalog of North China.
According to the AIC, our model is better than both the original model and the static Poisson process. We also give a reasonable physical interpretation of the risk function in the stress release model. Introduction Since the elastic rebound theory was presented by Reid (1910), various models, such as the spring-block model and the cellular automaton model, have been proposed to simulate earthquake occurrence. However, many features of earthquake generation are not captured by these physical models. These unknown features result in a degree of randomness, which can be factored into forecasts through the medium of stochastic models. Knopoff (1971) treated the earthquake sequence as a Markov process and built the first stochastic model for earthquake occurrence. Vere-Jones (1978) proposed the stress release model, a stochastic version of the elastic rebound theory, incorporating a deterministic build-up of stress within a region and its stochastic release through earthquakes. Earthquake interaction by stress triggering and stress shadows is now well accepted. Liu et al. (1999) considered the spatial interaction of earthquake occurrence between seismic belts and developed the coupled stress release model. However, the coupled stress release model cannot assess the interaction between two individual earthquakes quantitatively. The motivation of this paper is to incorporate a static stress triggering model into the stress release model and to formulate a new space-time-magnitude version of the stress release model. A physical interpretation of the risk function In the stress release model, the risk function, which relates the stress level in a region to the probability of an earthquake occurring, is as follows:

λ = exp(μ + νX), (1)

where λ is the conditional intensity function, X is the stress level, and μ and ν are constants. This exponential relation has proved the best choice among candidate relations through numerous trials and analyses (Zheng & Vere-Jones, 1991, 1994).
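The essence of the stress release model, linear stress build-up, stochastic release at events, and the exponential risk function of equation (1), can be sketched as follows; a minimal sketch in Python, where the function names and the (time, stress drop) event format are hypothetical.

```python
import math

def stress_level(x0, rho, t, events):
    """Stress X(t) = X(0) + rho*t minus the sum of the stress drops of
    all events before t; `events` is a list of (time, drop) pairs."""
    return x0 + rho * t - sum(drop for (ti, drop) in events if ti < t)

def risk(x, mu, nu):
    """Exponential risk function lambda = exp(mu + nu*X) of equation (1)."""
    return math.exp(mu + nu * x)

# One event of stress drop 3.0 at t = 5.0: the risk climbs until then
# and falls by a factor exp(nu * 3.0) immediately afterwards.
events = [(5.0, 3.0)]
```

Between events the risk grows exponentially in time; each event lowers it by a factor determined by ν and the stress drop, which is the mechanism the coupled and multi-dimensional variants build on.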
But what is the nature of this risk function? It is well known that the strength of most brittle materials, including rocks, is time dependent. If such a material is subjected to a constant load it will, in general, fracture after some time interval. This weakening in time is known as static fatigue. In laboratory studies of static fatigue, a good description for most of the materials studied is given by (Scholz, 1968)

t^{*} = a \exp(-b\, \sigma / S_T), (2)

where

\sigma(x, y, t) = \sigma(0) + \rho(x, y)\, t + \sigma_s(x, y, t), (3)

where σ(x,y,t) is the shear stress in the slip direction of a fault, ρ(x,y) is the shear loading rate from the external tectonic force, which can be calculated in our loading model, and σ_s(x,y,t) is the accumulated shear stress change due to earthquakes, which includes both the effect of stress triggering and the (negative) effect of stress release. To simplify the problem, we consider our region to be a homogeneous three-dimensional elastic half-space. Assuming the external tectonic force is horizontal and time-independent, the loading rate tensor can be expressed as two orthogonal constant loading rates ρ1 and ρ2 (ρ3 = 0) in the principal coordinates, together with an azimuth θ between the direction of ρ1 and east; furthermore, we take ρ2 = −ρ1 for further simplification. Given the values of ρ1 and θ, we can project the loading rate tensor onto the slip direction of each earthquake fault and obtain the loading rate ρ(x,y) in equation (3). The accumulated stress change due to earthquakes includes not only the direct stress release of the local earthquake, but also the influence of stress triggering from far away. It is convenient to introduce the coordinate systems first (Figure 1).

Figure 1. The three coordinate systems used in the new physical model. E (east), N (north) and Up are the master coordinates in which each fault is specified; x, y and z are the fault-referenced coordinates used by Okada (1992); 1, 2 and 3 are our fault-referenced coordinates used for the calculation of shear stress on the fault plane. Here δ is the fault dip, φ is the fault strike, and L and W are the fault length and width.

Okada's (1992) computer routines are used to derive the stress changes due to earthquakes. His formulation gives the displacements and their spatial derivatives anywhere in an elastic half-space due to slip of any direction and magnitude on a rectangular fault (here, the fault on which the earthquake occurs). The induced strain and stress tensors at an observation point can then easily be obtained, and the induced shear stress on the receiving fault is obtained by projecting the stress tensor onto that fault. To calculate this, we need to know the source mechanisms of the earthquakes (strike, dip, length, width and slip vector of the faults) besides the common data in earthquake catalogs. Construction of the 4-D SRM Based on the new physical model, the conditional intensity function can be obtained:

\lambda(x, y, t, m) = \lambda_1(x, y, t) \cdot f(m), \quad \lambda_1(x, y, t) = \exp\{\alpha + \nu[\rho(x, y)\, t + \sigma_s(x, y, t)]\}, (4)

where f(m) is the probability distribution of magnitude. Obviously, σ(0) has been absorbed into the other parameters. The model parameters can be obtained by the maximum likelihood method,

\log L = \sum_{i=1}^{N(T)} \log \lambda_1(x_i, y_i, t_i) - \int_0^T \!\! \int_S \lambda_1(x, y, t)\, ds\, dt + \sum_{i=1}^{N(T)} \log f(m_i), (5)

where N(T) is the number of earthquakes within the interval (0, T). Because the space points where no earthquakes have occurred in the time interval (0, T) may never have an earthquake occur, only the space points where earthquakes have occurred in (0, T) are used in the inversion. To obtain the conditional intensity function at the space points where no earthquakes have occurred in our time interval, we consider a weight function g(x,y), that is,

\lambda_2(x, y, t, m) = g(x, y) \cdot \lambda(x, y, t, m). (6)

The weight function g(x,y) can be obtained by using the procedure of Stock & Smith (2002).
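The projection step described above, resolving an induced stress tensor into shear stress along a fault's slip direction, is the standard Cauchy traction calculation; a minimal sketch in Python (not Okada's code), with an illustrative uniaxial stress state.

```python
import math

def shear_on_fault(stress, normal, slip):
    """Resolve a 3x3 stress tensor onto a fault plane: the traction is
    t_i = sum_j stress[i][j] * normal[j], and its component along the
    unit slip vector is the induced shear stress in the slip direction."""
    traction = [sum(stress[i][j] * normal[j] for j in range(3)) for i in range(3)]
    return sum(traction[i] * slip[i] for i in range(3))

# Uniaxial stress of 1 (arbitrary units) along x, resolved on a
# 45-degree plane whose slip direction lies in the x-y plane.
s = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
n = [1 / math.sqrt(2), 1 / math.sqrt(2), 0.0]   # fault normal
d = [1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]  # slip direction
```

For this geometry the resolved shear stress is half the applied uniaxial stress, the familiar maximum-shear result at 45 degrees.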
Application to the historic earthquake catalog of North China In applying the statistical model to the historic earthquake catalog of North China, we select only MS ≥ 6.5 earthquakes from 1300 to 2000. The focus region extends from 108°E to 121°E in longitude and from 34°N to 42°N in latitude (Figure 2). Since there are no source mechanism solutions for the earthquakes before 1970, we need to obtain them indirectly. The strike and dip of an earthquake fault can be obtained from geologic surveys; the rupture length, rupture width and slip distance can be obtained from scaling relations; and the slip angle can be obtained by projecting the mean tectonic stress onto the earthquake faults. Since the slip angle is calculated from the mean tectonic stress field, we take the azimuth θ between the direction of ρ1 and east as a constant. We compared the results with those of the original stress release model and a static Poisson process by using the AIC (Akaike, 1977) (Table 1); the new model is much better than both. We show the risk function versus time in Figure 2.

Table 1. The result of the maximum likelihood method
α        ν         σ1 (Pa)   b-value   AIC_NSRM   AIC_OSRM   AIC_Poisson
-6.874   2.71e-6   1671.9    0.8176    535.1459   583.8810   586.0022

Figure 2. The risk function (events/year) versus time (year) calculated by the original SRM (dotted line) and the new SRM (solid line). The M-t plot of the earthquake catalog is also shown.

Conclusions We interpreted the nature of the risk function used in the stress release models based on laboratory results, and we confirm that the stress level in the SRM is the true stress. Based on this result, we incorporated a static stress triggering model into the SRM and extended the SRM to multiple dimensions. The new SRM was applied to North China using the historical catalog.
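Model comparison by AIC, as in Table 1, amounts to penalizing each model's maximized log-likelihood by its parameter count and picking the minimum; a minimal sketch in Python with made-up log-likelihood values, not the fitted ones behind Table 1.

```python
def aic(max_loglik, n_params):
    """Akaike Information Criterion: AIC = -2 log L + 2k; smaller is better."""
    return -2.0 * max_loglik + 2.0 * n_params

def best_model(candidates):
    """Return the name of the candidate with the smallest AIC.
    `candidates` maps model name -> (max log-likelihood, n_params)."""
    return min(candidates, key=lambda name: aic(*candidates[name]))

# Hypothetical values: a richer model wins only when its fit gain
# outweighs the penalty of 2 per extra parameter.
models = {"poisson": (-290.0, 1), "srm": (-285.0, 3), "new_srm": (-258.0, 4)}
```

With these illustrative numbers the richest model has the smallest AIC, mirroring the ordering AIC_NSRM < AIC_OSRM < AIC_Poisson reported in Table 1.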
We have shown that the new model is more effective than the original one. References Akaike, H., 1977, On entropy maximization principle, in Applications of Statistics, edited by P. R. Krishnaiah, pp. 27-41, North-Holland, New York; Liu, J., Chen, Y., Shi, Y. & Vere-Jones, D., 1999, Coupled stress release model for time dependent seismicity, Pure Appl. Geophys., 155, 649-667; Knopoff, L., 1971, A stochastic model for the occurrence of main sequence events, Rev. Geophys. Space Phys., 9, 175-188; Okada, Y., 1992, Internal deformation due to shear and tensile faults in a half-space, Bull. Seismol. Soc. Am., 82, 1,018-1,040; Reid, H.F., 1910, The mechanism of the earthquake, in The California Earthquake of April 18, 1906, Report of the State Earthquake Investigation Commission, vol. 2, pp. 16-28, Carnegie Institute of Washington, Washington, DC; Scholz, C.H., 1968, Mechanism of creep in brittle rock, J. Geophys. Res., 73, 3,295-3,302; Stock, C. & E. G. C. Smith, 2002, Adaptive kernel estimation and continuous probability representation of historical earthquake catalogs, Bull. Seismol. Soc. Am., 92, 904-912; Vere-Jones, D., 1978, Earthquake prediction: a statistician's view, J. Phys. Earth, 26, 129-146; Zheng, X. & Vere-Jones, D., 1991, Applications of stress release models to earthquakes from North China, Pure Appl. Geophys., 135, 559-576; Zheng, X. & Vere-Jones, D., 1994, Further applications of the stochastic stress release model to historical earthquake data, Tectonophysics, 229, 101-121.

An Accelerating Moment Release Version of the Stress Release Model
Steven Johnston 1
1 School of Mathematics, Statistics and Computer Science, Victoria University of Wellington, PO Box 600, Wellington, New Zealand
Corresponding Author: Steven Johnston, [email protected]
Abstract: We develop a new stochastic point process model for Accelerating Moment Release (AMR) by combining features of the Stress Release Model and a previous point process model for AMR.
Simulated earthquake sequences are produced from the new model and tested for the AMR pattern in the lead-up to the largest events. 1. Accelerating moment release The concept of Accelerating Moment Release (AMR) is based on the observation that in the time leading up to some large earthquakes, the seismic activity in a region surrounding the epicentre of the large earthquake is not constant, but increases as the large earthquake approaches. In particular, it is often observed that the accumulation of (scalar) seismic moment released by earthquakes in the region can be well approximated by a power law of the time remaining until the large earthquake. That is,

\sum_{t_i \le t} S_i^{\alpha} = A - B(t_f - t)^m, (1)

where:
• the t_i's and S_i's are the times and seismic moments of the earthquakes,
• t_f is the time of the large event,
• A, B and m are parameters to be fitted from the data (A > 0, B > 0 and 0 < m < 1),
• and α is usually taken to be 0.5.
See Jaume and Sykes (1999) for a review of the observational evidence and further discussion. Vere-Jones et al. (2001) developed a marked point process model for AMR, in which the conditional intensity function (the instantaneous rate of events) has the form

\lambda(t) = \lambda_0 \left(1 - \frac{t}{t_f}\right)^{-k_1}. (2)

The mark associated with each event is assumed to be a seismic moment from a tapered Pareto distribution, whose upper "corner" parameter can change over time so that the mean moment has the form

\mu(t) = \mu_0 \left(1 - \frac{t}{t_f}\right)^{-k_2}. (3)

The parameters k_1 and k_2 are such that k_1 + k_2 = 1 - m. This allows the acceleration in seismic moment release to result from an increase in the number of earthquakes, an increase in the sizes of the earthquakes, or a balance between the two. A limitation of this model is that it only describes the activity in one AMR "episode". How should the model be modified to include the seismic activity after the large earthquake at t_f? This is the main focus of the present study. 2.
The stress release model The Stress Release Model (SRM) was proposed by Vere-Jones (1978) as a stochastic version of the elastic rebound theory of earthquakes. It assumes that the probability of occurrence of an earthquake at time t is related to the underlying "stress" X(t) of the region, which is given by

X(t) = X(0) + \rho t - \sum_{t_i < t} S_i, (4)

\lambda(t) = f(X(t)). (5)

f is usually taken to be an exponential function. 3. A new model The key idea of this poster is to formulate a new model that combines elements of Vere-Jones et al.'s (2001) AMR model and the SRM. Let

\lambda(t) = \lambda_0 \left(X_c - X(t)\right)^{-k}, (6)

where X_c is a critical stress level needed to invoke a large event, analogous to the failure time t_f in equations (1) to (3). The right-hand side of (6) is a function of the underlying stress X(t), and therefore the model is a form of SRM. The conditional intensity has the same form as in Vere-Jones et al.'s (2001) AMR model (equation (2)), although it is a function of the stress rather than of time. As in the SRM, the conditional intensity increases between events and drops back by an amount related to the size of the earthquake when an event occurs. In the new scenario, the increase in the conditional intensity between earthquakes has the form of the AMR-type power law, rather than the exponential shape of the usual SRM. The new approach allows for modeling multiple episodes of AMR, i.e. modeling a longer sequence of earthquakes, containing multiple large events, in which the AMR pattern can be seen in the lead-up to each of the large events. It is not restricted, as the Vere-Jones et al. (2001) model is, to modeling the seismic activity before only one large event. This is demonstrated in the poster by simulating from the model and testing for the AMR pattern using the methods developed by Robinson et al. (2005) and Zhou et al. (2005). 4. References Jaume, S. C., and Sykes, L. R.
(1999), Evolving towards a critical point: a review of accelerating seismic moment/energy release prior to large and great earthquakes, Pure Appl. Geophys., 155, 279-306. Robinson, R., Zhou, S., Johnston, S., and Vere-Jones, D. (2005), Precursory accelerating seismic moment release (AMR) in a synthetic seismicity catalog: a preliminary study, Geophys. Res. Lett., 32, L07309, doi:10.1029/2005GL022576. Vere-Jones, D. (1978), Earthquake prediction: a statistician's view, J. Phys. Earth, 26, 129-146. Vere-Jones, D., Robinson, R., and Yang, W. (2001), Remarks on the accelerated moment release model: problems of model formulation, simulation and estimation, Geophys. J. Int., 144, 517-531. Zhou, S., Johnston, S., Robinson, R., and Vere-Jones, D. (2005), Tests of the precursory accelerating moment release (AMR) model using a synthetic seismicity model for Wellington, New Zealand, submitted to J. Geophys. Res.

Spatial distribution and time variation in seismicity around the Antarctic Plate before and after the M9.0 Sumatra Earthquake, 26 December 2004
Masaki Kanao 1, Yoshifumi Nogi 1 and Seiji Tsuboi 2
1 Department of Earth Science, National Institute of Polar Research, 1-9-10 Kaga, Itabashi-ku, Tokyo 173-8515, Japan [[email protected], [email protected]]
2 Institute for Research on Earth Evolution, Japan Agency for Marine-Earth Science and Technology, 2-15 Natsushima-cho, Yokosuka 237-0061, Japan [[email protected]]
A large earthquake (magnitude 9.0, 3.30N, 95.96E, depth = 30 km) occurred off the west coast of northern Sumatra on December 26, 2004. It is the fourth largest earthquake in the world since 1900 and the largest since the 1964 Alaska earthquake. In total, more than 280,000 people were killed, 14,000 are still listed as missing, and 1,100,000 were displaced by the earthquake and subsequent tsunami in 10 countries in South Asia and East Africa.
The tsunami caused more casualties than any other in recorded history and was recorded nearly worldwide on tide gauges in the Indian, Pacific and Atlantic Oceans, including the coastal area of Antarctica. In this study, the spatial distribution and time variation of seismicity around the Antarctic Plate – Indian Ocean region (0-160E, 20-80S) are evaluated in relation to the occurrence of large earthquakes, such as the Sumatra event, based on the data compiled since 1964 by the International Seismological Centre. The Antarctic continent and surrounding ocean were believed for many decades to be one of the aseismic regions of the Earth. However, the following four earthquakes with magnitude 8.0 and greater have been observed since 1990 around the Antarctic Plate – Indian Ocean region: 1) Balleny Islands Region (Mw = 8.1, March 25, 1998); 2) North of Macquarie Island (Mw = 8.1, December 23, 2004); 3) Off West Coast of Northern Sumatra (Mw = 9.0, December 26, 2004); and 4) Northern Sumatra, Indonesia (Mw = 8.7, March 28, 2005). Although a total of 12 large events with magnitude 8.0 and greater have been recorded worldwide since 1990, the occurrence of the above four large earthquakes implies recently anomalous dynamic behavior of the oceanic lithosphere in the surrounding region. Hypocentral data since 1994, however, indicate no significant events with magnitude 7.0 and greater inside the Antarctic Plate or along the oceanic ridges / transform faults of the Indian Ocean. Moreover, fewer than 30 earthquakes with magnitude 6.0 and greater were observed in the region during these ten years. The spatial distribution and time variation of seismicity around the oceanic ridges between the Antarctic Plate and the Indian-Australian Plate show characteristic features before and after the large earthquakes.
Seismicity of the aseismic ridges in the area east of the Australia-Antarctic Discordance (AAD) increased in the year before the occurrence of the Balleny Islands earthquake of March 1998, as well as before the Off West Coast of Northern Sumatra earthquake of December 2004. The oceanic ridge around 90E, moreover, shows similar activity associated with the occurrence of large events. The most notable area in the Indian Ocean is the Kerguelen Plateau, which has relatively high seismicity compared with the surrounding Indian Ocean. Hot spot activity, together with high heat flow around the Plateau, is involved in the excitation of this high seismicity. Despite the development of global seismic networks and local seismic arrays, it is possible that several small earthquakes occurred around the aseismic ridges, involving non-volcanic extension along the oceanic plate boundaries. Seismic activity in these regions reflects the far-field tectonic stress around the Antarctic Plate – Indian Ocean region.

An experiment in intermediate-term earthquake prediction: possibility of an M 8 earthquake occurring off the coast of eastern Hokkaido in 2005
K. Katsumata and M. Kasahara
Institute of Seismology and Volcanology, Hokkaido University, Sapporo 060-0810, Japan
Corresponding Author: K. Katsumata, [email protected]
Abstract: We observed precursory seismicity rate changes lasting five years prior to the two recent great earthquakes off the coast of eastern Hokkaido: the Hokkaido-Toho-Oki earthquake in 1994 (M = 8.3) and the Tokachi-Oki earthquake in 2003 (M = 8.3). A seismographic network maintained by Hokkaido University has detected another significant seismicity rate change, and has identified the signals as intermediate-term precursors to a great interplate subduction earthquake in the near future.
The seismicity rate decreased by 48% in June 2000 around the northeastern edge of the asperity ruptured by the Nemuro-Oki earthquake in 1973 (M = 7.4). Based on the above empirically supported information, we can state that if an M ≈ 8 earthquake follows the Nemuro-Oki quiescence with the same time interval as the Tokachi-Oki earthquake, the time of occurrence could be in 2005. 1. Introduction Several significant cases support the hypothesis that seismic quiescence precedes large earthquakes [e.g. Mogi, 1969; Ohtake et al., 1977; Wyss, 1985]. Wyss and Habermann [1988] summarized seventeen cases of precursory seismic quiescence before main shocks with magnitudes ranging from M = 4.7 to 8.0, and found that (1) the rate decrease ranges from 45% to 90%; (2) the duration of the precursors ranges from 15 to 75 months; (3) if a false alarm is defined as a period of quiescence with a significance level larger than that of a precursory quiescence in the same tectonic area, the false alarm rate may be on the order of 50%; and (4) failure to predict may be expected for 50% of the main shocks, even in carefully monitored areas. Among the seventeen cases, there are three that can be considered successful predictions in the sense that a quiescence anomaly was recognized and interpreted as a precursor before the occurrence of the main shock [Ohtake et al., 1977; Kisslinger, 1988; Wyss and Burford, 1985; Wyss and Burford, 1987]. Recently, more reliable examples of seismic quiescence have been reported: the Spitak (M = 7.0) earthquake in 1988 [Wyss and Martirosyan, 1998], the Landers (M = 7.5) earthquake in 1992 [Wiemer and Wyss, 1994], and the Hokkaido-Toho-Oki (M = 8.3) earthquake in 1994 [Katsumata and Kasahara, 1999]. Based on these previous studies, it is now possible to predict some, if not all, major earthquakes several years prior to the main shock by monitoring the seismicity rate.
Unfortunately, almost all precursory seismicity rate changes have been identified not before but after the main shock. The purpose of the present study is to locate significant seismicity rate changes and identify them as precursors to a main shock before the main shock occurs. This is an experiment in intermediate-term earthquake prediction in the Hokkaido subduction zone.
2. Data and Analysis
A temporally homogeneous seismic catalog is the most important factor for the success of this experiment. Seismicity data sets are very simple lists, including the origin time, latitude, longitude, depth of hypocenter, and magnitude. However, they reflect the output of an extremely complex and highly variable detection and reporting system. These complexities cause man-made changes in all seismicity data sets, whether local, regional, or global [Habermann, 1987]. In this paper, we have produced a new seismic catalog that is free from man-made changes, which allows for a more reliable analysis than was possible in previous studies. We manually reexamined all waveform data, one by one, from eight thousand earthquakes with M ≥ 3.0 located by the seismographic network of Hokkaido University from January 1994 to September 2004. We considered the following three points in the reexamination process. First, seventeen seismographic stations, which had already been installed in 1994, were selected; no station installed after 1994 was used, because the installation of new seismic stations can cause man-made changes such as a magnitude shift and magnitude-scale compression or stretching. Second, the station combination was fixed while calculating magnitudes and reading P and S wave arrival times, so that the conditions of hypocenter determination remained constant. Finally, a single person carried out the reexamination of the waveform data to avoid differences among individuals.
From this new seismic catalog, we selected 4162 earthquakes within the subducting Pacific plate, i.e., located deeper than the upper boundary of the deep seismic zone modeled by Katsumata et al. [2003]. Clustered events such as aftershocks and earthquake swarms were removed by Reasenberg's algorithm [Reasenberg, 1985], and the aftershocks of the 2003 Tokachi-Oki earthquake were manually identified and deleted. From the declustered catalog, we selected 2887 earthquakes with M ≥ 3.3, which formed the basis of our analysis. Using the algorithm GENAS [Habermann, 1983], we found that the reporting in this catalog was homogeneous except for the period after the Tokachi-Oki earthquake. The reasons for the heterogeneous reporting are: (1) some aftershocks remained in the catalog, and (2) seismicity was activated in the entire study area following the Tokachi-Oki earthquake. We used the gridding technique ZMAP [Wiemer and Wyss, 1994] to produce an image of the significance of rate changes in space and time. The study area was divided into grids spaced at 0.1° in latitude and longitude. A circle was drawn around each node and its radius r was increased until it included N = 100 earthquakes; this circle defines the resolution circle for a given N. The cumulative number of events vs. time was plotted for each node. The cumulative number plot starts at time t0 (January 1, 1994) and ends at time te (September 30, 2004). We place a time window starting at t1 and ending at t2, where t0 ≤ t1 ≤ t2 ≤ te and Tw = t2 - t1 (here Tw = 4.0 years), and move t1 through time, stepping forward by one month. For each window position, we calculate the Z-value, generating the function LTA defined by Wiemer and Wyss [1994], which measures the significance of the difference between the mean seismicity rate within the window Tw, R2, and the background rate R1, defined here as the mean rate in the period between t0 and te, excluding Tw.
The Z-value is defined as

Z = (R1 - R2) / (S1/n1 + S2/n2)^(1/2),

where S1, S2 and n1, n2 are the variances and numbers of samples of the two rates, respectively.
3. Seismicity Rate Change
As a result of scanning time and space using ZMAP [Wiemer and Wyss, 1994], we identified nine areas with significant rate changes. Two temporal synchronizations were observed. The first started between August and October 1998 in Areas 1, 2, and 3. The seismicity rate decreased in Area 1 near the asperity ruptured by the 2003 Tokachi-Oki earthquake [Yamanaka and Kikuchi, 2003], and increased in Areas 2 and 3. Hypocenter depths in Areas 2 and 3 are in the ranges 120-270 km and 240-340 km, respectively. Seismic quiescence in the shallow part appears to correlate with seismic activation in the deep part of the subducting Pacific plate. The second synchronization started between March and July 2000 in Areas 4, 5, 6, 7, and 9. The seismicity rate decreased in Areas 4 and 9, and increased in Areas 5, 6, and 7. Hypocenter depths in Areas 5 and 6 are in the ranges 110-250 km and 220-360 km, respectively. Except for Areas 7 and 9, seismic quiescence in the shallow part appears to correlate with seismic activation in the deep part, as in the case of the 1998 synchronization. Moreover, a triangle consisting of Areas 1, 2, and 3 moved eastward to a triangle consisting of Areas 4, 5, and 6. The synchronized change that started in 1998 lasted five years; it was followed by the M = 8.3 Tokachi-Oki earthquake in September 2003. The change does not appear to be man-made, because we reexamined all waveform data and the seismic quiescence was observed homogeneously in the magnitude band from 3.3 to 4.5. The quiescence occurred on the edge of the asperity ruptured by the main shock. This is consistent with a numerical simulation of an earthquake cycle based on a friction law [Kato and Hirasawa, 1999].
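The sliding-window LTA computation described above is straightforward to sketch. A minimal illustration in Python (function and variable names are ours, not from the paper; monthly binned event rates are assumed as input):

```python
import numpy as np

def z_value(inside, outside):
    """Z-statistic comparing the background rate R1 (outside the
    window) with the windowed rate R2, as in the equation above."""
    r1, r2 = outside.mean(), inside.mean()
    s1, s2 = outside.var(), inside.var()
    return (r1 - r2) / np.sqrt(s1 / len(outside) + s2 / len(inside))

def lta_scan(monthly_rates, window=48):
    """Slide a Tw-month window through the series one month at a
    time and return the Z-value at each position. Positive Z means
    quiescence (rate decrease inside the window)."""
    z = []
    for t1 in range(len(monthly_rates) - window + 1):
        inside = monthly_rates[t1:t1 + window]
        outside = np.concatenate([monthly_rates[:t1],
                                  monthly_rates[t1 + window:]])
        z.append(z_value(inside, outside))
    return np.asarray(z)
```

With Tw = 4.0 years (48 months), a pronounced rate drop inside the window produces a large positive Z, matching the sign convention used for the quiescences described in the text.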
The simulation suggested that (1) stress changes due to aseismic sliding cause a decrease in the seismicity rate lasting several years before the main shock, and (2) the areas of decrease are located around the edge rather than the center of the asperity. Mogi [2004] pointed out that deep seismicity increased prior to a great shallow subduction earthquake; his studies focused on earthquakes with magnitudes larger than 5. Our results show that deep earthquakes smaller than M = 5 also became more frequent prior to a great shallow earthquake. In order to estimate the probability of these seismicity rate changes occurring by chance, we obtained the distribution of Zmax and Zmin for Tw = 4 years by calculating LTA functions for 2000 samples of 100 earthquakes randomly selected from the declustered catalog used in this study. The calculation was repeated 10000 times. The Z-values below which 50, 95, and 99 per cent of the results fall are 3.01, 3.48, and 3.70 for Zmax, respectively, and -3.08, -3.46, and -3.65 for Zmin, respectively. These percentages are shown in Table 1 as an indicator of reliability. Even though the statistical confidence level was not very high in Area 1 and the physical mechanism of the synchronization is unknown, we suggest that the first synchronized seismicity change is a candidate intermediate-term precursor to the 2003 Tokachi-Oki earthquake.
4. Discussion
The most important point to consider is whether the second synchronized change, which started in the first half of 2000, is a precursor to another great earthquake. Since large earthquakes occurred in 1894 and 1973 in the Nemuro-Oki area [Shimazaki, 1974], this area should not be considered aseismic; crustal stress has been released repeatedly by large earthquakes. There are two reasons why we believe that stress large enough to produce a great earthquake has accumulated in this area.
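The chance-probability estimate described above can be reproduced in outline: scatter random origin times over the study period, scan each surrogate catalog, keep the extreme Z, and read off percentiles of the resulting distribution. A condensed sketch under our own assumptions (a uniform-random surrogate rather than resampling from the authors' catalog, far fewer trials, and our own function names):

```python
import numpy as np

def zmax_for_random_catalog(rng, n_events=100, n_months=129, window=48):
    """Scatter n_events origin times at random over the study period,
    bin them monthly, and return the largest Z-value over all window
    positions -- one sample of the null (chance) distribution."""
    counts = np.bincount(rng.integers(0, n_months, n_events),
                         minlength=n_months).astype(float)
    zs = []
    for t1 in range(n_months - window + 1):
        inside = counts[t1:t1 + window]
        outside = np.concatenate([counts[:t1], counts[t1 + window:]])
        num = outside.mean() - inside.mean()
        den = np.sqrt(outside.var() / len(outside) +
                      inside.var() / len(inside))
        zs.append(num / den)
    return max(zs)

rng = np.random.default_rng(42)
zmax = np.array([zmax_for_random_catalog(rng) for _ in range(300)])
q50, q95, q99 = np.percentile(zmax, [50, 95, 99])
```

With enough trials (the authors used 10000) the 50/95/99 per cent points can be compared with an observed Z-value to judge its reliability, as in Table 1.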
First, the M = 7.4 earthquake in 1973 released energy equal to only half that of the M = 7.9 earthquake in 1894 [Shimazaki, 1974]. Second, since the seismic coupling is almost 100% in the region, as shown by data from the GPS network, a coseismic slip of 260 cm is expected 32 years after the 1973 earthquake. If the area of the seismic fault is 100 × 100 km2, the moment magnitude will be 7.9. In addition to the stress accumulation, the seismic quiescence in Area 4 shows a Z-value of +3.6, which is more significant than the seismic quiescence prior to the 2003 Tokachi-Oki earthquake, and the area is located on the deeper edge of the asperity ruptured by the 1973 event. It is possible that the seismic quiescence in Area 4 is caused by precursory aseismic sliding that begins several years before the main shock, as was the case with the 2003 Tokachi-Oki earthquake. The coseismic slip and subsequent after-slip of the 2003 Tokachi-Oki earthquake [Ozawa et al., 2004] could possibly induce low-angle thrust-type reverse fault slip on the plate boundary in the Nemuro-Oki area. Another notable fact is that rapid subsidence has been recorded at HNS on the Nemuro peninsula. The rate of subsidence increased from 0.0 to 0.6 cm/year in 1999, as estimated from tide gauge data. Rapid subsidence had occurred twice before at HNS [Katsumata et al., 2002]. First, the subsidence rate increased from 0.3 to 1.5 cm/year four years before the 1973 Nemuro-Oki earthquake. After the earthquake, the rapid subsidence stopped and the rate was 0.3 cm/year up to 1990. Another episode of rapid subsidence started in 1990 at a rate of 1.1 cm/year, five years before the 1994 M = 8.3 Hokkaido-Toho-Oki earthquake. After that earthquake, the rapid subsidence stopped and the rate was 0.0 cm/year up to 1999. A third episode of rapid subsidence started in 1999 and has continued at HNS.
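The magnitude estimate quoted above can be checked with the standard seismic-moment relations. A short sketch (the rigidity value of 3 × 10^10 Pa is our assumption; it is not stated in the text):

```python
import math

def moment_magnitude(slip_m, area_m2, rigidity_pa=3.0e10):
    """Seismic moment M0 = mu * A * D (in N m) and the corresponding
    Hanks-Kanamori moment magnitude Mw = (2/3) * (log10(M0) - 9.1)."""
    m0 = rigidity_pa * area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# 260 cm of expected slip on a 100 km x 100 km fault plane
mw = moment_magnitude(2.6, 100e3 * 100e3)   # ≈ 7.9, as stated in the text
```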
The rapid subsidence was not recorded at KSR, 100 km southwest of HNS, during this period, indicating a signal source closer to the HNS tide gauge. Recently, Murakami at the Geographical Survey Institute found that slow slip with a velocity of 10 cm/year started in mid 2004 on the plate boundary in Area 4, based on data from GEONET, a GPS network in Japan [Murakami, 2004, personal communication]. He interpreted the slip as after-slip associated with the 2003 Tokachi-Oki earthquake. However, we suggest that the slip is a probable precursory slip to the Nemuro-Oki earthquake predicted in this paper.
5. Conclusion
Based on the seismic quiescence hypothesis, the ongoing quiescence in the Nemuro-Oki area is interpreted as an intermediate-term precursor to a great earthquake. The seismic quiescence precursor to the Tokachi-Oki earthquake of September 2003 started in October 1998. The Nemuro-Oki quiescence started in June 2000, twenty months after the Tokachi-Oki quiescence started. Although there is approximately a fifty percent probability of a false alarm, we venture to state that if a great earthquake follows the Nemuro-Oki quiescence with the same time interval as the Tokachi-Oki earthquake, the time of occurrence could be in 2005.
6. Acknowledgement
We thank Stefan Wiemer for providing the ZMAP software.
Temporal Power-laws on Preseismic Activation and Aftershock Decay Affected by Transient Behavior of Rocks
Y. Kawada1, H. Nagahama1
1Department of Geoenvironmental Sciences, Graduate School of Science, Tohoku University, 6-3 Aramaki-Aza-Aoba, Aoba-ku, Sendai 980-8578, Japan
Corresponding Author: Y.
Kawada, [email protected] Abstract: A constitutive law for rock behaviors including the brittle, transient and steady-state behaviors is derived to recognize the temporal seismicity patterns on surface displacement or prior and subsequent to mainshocks, and formulated as the relaxation modulus (i.e., the ratio of stress to strain) following a temporal power-law relaxation. This constitutive law is transformable into the empirical power-law creep equation, and the exponent of the constitutive law is the valuables reflecting the fractal structures of rocks in response to the different deformation mechanisms. By the time-scale invariance in our law, the temporal seismicity pattern is recognized on the surface displacement owing to major large earthquake with many small events, on the power-law increase in the cumulative Benioff strain-release as the preseismic activation, and on the power-law aftershock decay represented by the modified Omori's law. The exponents of the temporal power-laws on the cumulative Benioff strain-release and the aftershock decay are linked to that of the constitutive law of rocks, and change coupled with the fractal structure of the crustal rocks. 1. Introduction We study the constitutive law for the rock behavior in order to recognize temporal seismicity patterns on surface displacement and prior or subsequent to mainshocks. To this end, the law requires to include the brittle behavior as well as viscoelastic behavior of rocks, and to express the transient response of rocks, i.e., the response to the sudden change in the stress and strain-rate such as earthquakes. The brittle behavior has been formulated by the damage mechanics and fibre-bundle model (e.g., Turcotte et al., 2003; Nanjo et al, 2005), and the transient behavior is derived as the temporal power-law relaxation called the long time tail or temporal fractal relaxation (e.g., Nagahama, 1994; Kawada and Nagahama, 2004). 
However, a constitutive law satisfying both behaviors has not been derived. The preseismic activation given by the increase in cumulative Benioff strain release and the aftershock decay represented by the modified Omori law follow temporal power-laws. Although these processes prior or subsequent to major earthquakes are kinds of crustal rock behavior, they have not been related to the temporal fractal property of the transient behavior of rocks. On the other hand, in the percolation model of rock fracture processes (e.g., Chelidze, 1993) and the theory of power-law activation of spinodal instability (e.g., Rundle et al., 2000), the temporal power-law increase in cumulative Benioff strain release is formulated with a fixed exponent, independent of the size or location of the events. However, the analytically derived exponents do not take fixed values (e.g., Bowman et al., 1998), and what controls the change in the exponent has never been fully discussed. In this presentation, from the constitutive law for the transient behavior of rocks, we recognize the temporal seismicity pattern of surface displacement, aftershock decay, and cumulative Benioff strain release. In particular, we state what property of rocks affects the exponents of the temporal power-laws for cumulative Benioff strain release and aftershock decay, based on the constitutive law.
2. Constitutive law for transient behavior of rocks
The transient behavior of viscoelastic materials is generally formulated in terms of the relaxation modulus E(t) (t is the deformation time), viz., dσ/dε. For analytical reasons, the secant modulus ES(t) = σ/ε is used instead of E(t).
Analysis of the experimental data on the transient behavior of halite (Shimamoto, 1987; Nagahama, 1994), and of marble and lherzolite (Kawada and Nagahama, 2004), yields the formulation:

σ/ε ≡ ES(ξ) = (E′/g(ε)) ξ^(−β),  ξ = (t/C) exp(−Q/RT),  (1)

where σ is the applied stress, ε is the strain, g(ε) is a strain-dependent function (g(ε) = 1 + aε), β is a positive exponent, Q is the activation energy, R is the universal gas constant, T is the absolute temperature, E′ and a are material constants, and C is a constant. ξ is the temperature-reduced time obtained by normalizing the behaviors at various temperatures. The power-law in ξ (or t) in Eq. (1) expresses the time-scale invariance of rock behavior, called the temporal fractal property or long time tail (e.g., Nagahama, 1994). Equation (1) leads to E(ξ) ∝ ξ^(−β) (e.g., Kawada and Nagahama, 2004). A temporal power-law decay function can be regarded as a superposition of exponential decay functions with different lifetimes λ. E(t) is then also obtained as the Laplace transform of the distribution function of λ:

E(t) = ∫0∞ D(λ) exp(−t/λ) dλ,  (2)

where D(λ) is the distribution function of λ. When D(λ) ∝ λ^(−β−1), E(t) shows temporal power-law relaxation. This means that the structural fractal property of rocks given by D(λ) ∝ λ^(−β−1) yields the long-time-tail behavior of rocks, and β is related to the fractal distribution of the elements composing the rocks (e.g., Nagahama, 1994). With the strain rate dε/dt = ε/t, Eq. (1) generates the relation (e.g., Kawada and Nagahama, 2004):

dε/dt = (ε/C) [g(ε) σ / (ε E′)]^(1/β) exp(−Q/RT).  (3)

Equation (3) is a stress power-law relation with a thermally activated process, and reduces to the empirical equation for steady-state flow. Here 1/β is the stress exponent reflecting the deformation mechanism, and the empirical steady-state creep equation is a special case of Eq. (3) from a mathematical perspective. In this sense, the power-law relation between E(ξ) and ξ (or E(t) and t) yields the relation dε/dt ∝ σ^(1/β), and represents the transient as well as the steady-state behavior (e.g., Kawada and Nagahama, 2004). The brittle behavior of rocks has been formulated by continuum damage mechanics and the fibre-bundle model (e.g., Turcotte et al., 2003). In particular, Nanjo and Turcotte (2005) pointed out that the brittle behavior of rocks is expressed by the stress power-law dε/dt ∝ σ^(1/β) based on a 1D fibre-bundle model. Hence, the power-law dε/dt ∝ σ^(1/β), or E(t) ∝ t^(−β), represents the brittle as well as the viscoelastic (transient and steady-state) behaviors of rocks. Based on this constitutive law, we investigate the temporal seismicity pattern in the next section.
3. Various power-laws on the temporal seismicity patterns
3.1 Surface displacement time series
The surface displacement time series represents a temporal seismicity pattern as a summation of transient responses of the surface, on various time-scales, to earthquakes. The transient responses on a scale of a few years to a single major large earthquake have often been analyzed with the experimental steady-state creep equation of rocks, dε/dt ∝ σ^(1/β) with exponent 1/β = 3.5 (β ≈ 0.29), applied to time series observed by GPS (e.g., Freed and Bürgmann, 2004). On the other hand, the time series due to major earthquakes accompanied by small events can be recognized through the temporal fractal property of our constitutive law in Eq. (1) with β = 0.95, applied to time series observed by creep meters (e.g., Wesson, 1987).
3.2 Modified Omori's law
The temporal fractal decay of aftershocks is represented by the modified Omori law:

dN/dτ = B (τ + c)^(−p),  (4)

where τ is the time elapsed after the main shock, dN/dτ is the rate of occurrence of aftershocks greater than a given magnitude, p is a positive exponent, and B and c are constants.
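Eq. (2) states that a power-law relaxation modulus arises from superposing exponential decays whose lifetimes follow D(λ) ∝ λ^(−β−1). This is easy to verify numerically; a sketch with our own grid and parameter choices (β = 0.5):

```python
import numpy as np

beta = 0.5
lam = np.logspace(-4, 6, 5000)        # lifetimes λ on a logarithmic grid
D = lam ** (-beta - 1.0)              # D(λ) ∝ λ^(−β−1)

def relaxation_modulus(t):
    """E(t) = ∫ D(λ) exp(−t/λ) dλ, Eq. (2), via the trapezoid rule."""
    f = D * np.exp(-t / lam)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lam))

# the log-log slope of E(t) should approach −β (here −0.5)
ts = np.logspace(-1.0, 1.0, 20)
E = np.array([relaxation_modulus(t) for t in ts])
slope = np.polyfit(np.log(ts), np.log(E), 1)[0]
```

The recovered slope matches −β away from the grid edges, which is the long-time-tail behavior invoked in the text (analytically, E(t) = Γ(β) t^(−β) for this choice of D).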
Based on continuum damage mechanics (e.g., Nanjo et al., 2005), the modified Omori law is related to the stress power-law in Eq. (3) through β = 1 − 1/p.
3.3 Cumulative Benioff strain-release
Seismic activation has been observed prior to many major earthquakes, and has been quantified by the time series of cumulative Benioff strain release Ω, i.e., the summation of the square roots of the energy released over an earthquake sequence:

Ω = Ωc − K (tc − t)^s,  (5)

where tc is the occurrence time of the earthquake, Ωc is the cumulative Benioff strain release at t = tc, s is a positive exponent, and K is a constant (e.g., Bowman et al., 1998). s is approximately equal to 0.25 (e.g., Rundle et al., 2000). Based on the fibre-bundle model and the continuum damage model (e.g., Turcotte et al., 2003), the power-law increase in Benioff strain release is linked to the stress power-law in Eq. (3) by β = 2(s − 1)/(3s − 2). In this section, we have shown that the temporal seismicity pattern in surface displacement, and prior or subsequent to main shocks, can be recognized through the long time tail of the transient behavior of rocks.
4. Discussion
In the percolation model of rock fracture processes (e.g., Chelidze, 1993) and the theory of power-law activation of spinodal instability (e.g., Rundle et al., 2000), the exponent of the temporal power-law for cumulative Benioff strain release is fixed regardless of the location or size of the events in question, although the s-value is not fixed in analytical results (e.g., Bowman et al., 1998). In Section 3.3, s in Eq. (5) is related to β in Eqs. (1) and (3). β is linked to the structural fractal property of rocks, as noted in Section 2, and the stress-sensitive exponent 1/β depends on the deformation mechanism, such as brittle failure or dislocation creep, and ranges from 1 to 60 (Table 1). Therefore, a change in β implies a change in s (and in p in Eq. (4)).
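The exponent links stated above are simple algebraic relations. A small sketch (function names are ours) evaluating the modified Omori law of Eq. (4), the Benioff strain release of Eq. (5), and the conversions β = 1 − 1/p and β = 2(s − 1)/(3s − 2):

```python
def omori_rate(tau, B=100.0, c=0.1, p=1.2):
    """Modified Omori law, Eq. (4): aftershock rate dN/dtau."""
    return B * (tau + c) ** (-p)

def benioff(t, t_c, omega_c, K, s=0.25):
    """Cumulative Benioff strain release, Eq. (5)."""
    return omega_c - K * (t_c - t) ** s

def beta_from_p(p):
    """Constitutive-law exponent implied by the Omori p-value."""
    return 1.0 - 1.0 / p

def beta_from_s(s):
    """Constitutive-law exponent implied by the activation exponent s."""
    return 2.0 * (s - 1.0) / (3.0 * s - 2.0)

# with the commonly quoted s = 0.25, the implied beta is 1.2
```

Because both s and p map onto the single exponent β, a change in the fractal structure of the rocks (which sets β) shifts the preseismic and postseismic power-laws together, which is the point made in the discussion.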
Namely, we point out that β is regulated by the fractal structures of rocks corresponding to the different deformation mechanisms, and that a change in these structures produces a change in the exponents s and p.
Table 1. Experimental data on stress exponents 1/β and the corresponding deformation mechanisms (modified from Nagahama (1994) and Nakamura and Nagahama (1999)).
  1/β   Mechanism        Material    |  1/β    Mechanism        Material
  1.3   Diffusion creep  Anorthite   |  15.0   Primary creep    Halite
  2.6   Dislocation      Quartzite   |  27.0   Brittle failure  Plagioclase
  3.0   Dislocation      Dunite      |  32.0   Brittle failure  Granite
  6.7   Dislocation      Halite      |  60.0   Brittle failure  Sandstone
5. Conclusion
The temporal seismicity pattern in surface displacement, and prior or subsequent to main shocks, can be recognized through the time-scale invariance of the transient behavior of rocks. The exponents of the temporal power-laws for cumulative Benioff strain release and the modified Omori law change in step with the fractal structure of crustal rocks under the different deformation mechanisms.
6. Acknowledgement
We would like to thank I. Main and K.Z. Nanjo for their valuable comments. The first author (YK) was financially supported by the 21st Century Center-Of-Excellence program, "Advanced Science and Technology Center for the Dynamic Earth", of Tohoku University.
7. References
Bowman, D.D., Ouillon, G., Sammis, C.G., Sornette, A. and Sornette, D. (1998), An observational test of the critical earthquake concept, J. Geophys. Res. 103B, 24,359-24,372.
Chelidze, T.L. (1993), Fractal damage mechanics of geomaterials, Terra Nova 5, 421-437.
Freed, A.M. and Bürgmann, R. (2004), Evidence of power-law flow in the Mojave desert mantle, Nature 430, 548-551.
Kawada, Y. and Nagahama, H. (2004), Viscoelastic behaviour and temporal fractal properties of lherzolite and marble, Terra Nova 16, 128-132.
Nagahama, H., High-temperature viscoelastic behaviour and long time tail of rocks, in Fractal and Dynamical Systems in Geosciences, edited by J.H. Kruhl (Springer-Verlag, Berlin, 1994), pp. 121-129.
Nakamura, N. and Nagahama, H., Geomagnetic field perturbation and fault creep motion: a new tectonomagnetic model, in Atmospheric and Ionospheric Electromagnetic Phenomena Associated with Earthquakes, edited by M. Hayakawa (TERRAPUB, Tokyo, 1999), pp. 307-323.
Nanjo, K.Z. and Turcotte, D.L. (2005), Damage and rheology in a fibre-bundle model, Geophys. J. Int. 162, 859-866.
Nanjo, K.Z., Turcotte, D.L. and Shcherbakov, R. (2005), A model of damage mechanics for the deformation of the continental crust, J. Geophys. Res. 110, B07403, doi:10.1029/2004JB003438.
Rundle, J.B., Klein, W., Turcotte, D.L. and Malamud, B.D. (2000), Precursory seismic activation and critical-point phenomena, Pure Appl. Geophys. 157, 2165-2182.
Shimamoto, T., High-temperature viscoelastic behavior of rocks, in Proceedings of the 7th Japan Symposium on Rock Mechanics, Tokyo, Japan (Japanese Committee of Rock Mechanics, Tokyo, 1987), pp. 467-472 (in Japanese, with English abstract).
Turcotte, D.L., Newman, W.I. and Shcherbakov, R. (2003), Micro and macroscopic models of rock failure, Geophys. J. Int. 152, 718-728.
Wesson, R.L. (1987), Modelling aftershock migration and afterslip of the San Juan Bautista, California, earthquake of October 3, 1972, Tectonophysics 144, 215-229.
The effects of fault zone coupling on seismic cycling and aftershock generation
N. Kuehn1 and S. Hainzl1
1Institute of Geosciences, University of Potsdam, 14415 Potsdam, Germany
Corresponding Author: N. Kuehn, [email protected]
Abstract: A stochastic version of elastic rebound theory, the linked stress release model, is analyzed with respect to quasiperiodic behaviour and the generation of aftershocks. In particular, the effects of different coupling strengths are investigated.
Simulation results indicate that large earthquakes do occur quasiperiodically in the stress release model. Coupling between seismogenic zones leads to a correlated occurrence of large earthquakes in adjacent regions. Moreover, large earthquakes generate aftershocks that follow the Omori law.
1. Introduction
In seismic hazard analysis, the temporal evolution of seismicity is usually modeled by a Poisson process, based on the assumption that earthquakes occur independently of each other. The Poisson model, however, does not account for the correlated earthquake occurrence observed in nature, such as foreshock and aftershock activity accompanying large earthquakes. Furthermore, an earthquake decreases the amount of strain present at the fault zone where rupture occurs (Reid, 1910). To account for these features, models more elaborate than the Poissonian have been proposed, such as the Epidemic Type Aftershock Sequence (ETAS) model (Ogata, 1988) for short-term clustering or the stress release model (Vere-Jones, 1978) for long-term clustering. The latter model is a stochastic version of elastic rebound theory (Reid, 1910), which states that in a seismically active zone, elastic strain energy or stress accumulates due to plate movement and is released by an earthquake that occurs when the stored energy exceeds a threshold. Here we analyze the stress release model with regard to its implications for seismic hazard. We study in particular the recurrence time distributions of large earthquakes and the impact of fault interactions.
2. The Model
The occurrence probability of an earthquake in the stress release model is governed by a conditional intensity function λ(t) that depends on the actual stress level S(t) in the studied region. The stress level increases deterministically between two earthquakes due to tectonic loading and drops when an earthquake occurs. Its temporal evolution can be written as

S(t) = S(0) + ρt − Σ(ti < t) Si,  (1)