PEAT8002 - SEISMOLOGY
Lecture 13: Earthquake magnitudes and moment
Nick Rawlinson
Research School of Earth Sciences
Australian National University

Introduction

In the last two lectures, the effects of the source rupture process on the pattern of radiated seismic energy were discussed. However, even before earthquake mechanisms were studied, the priority of seismologists, after locating an earthquake, was to quantify its size, both for scientific purposes and for hazard assessment.

The first measure introduced was the magnitude, which is based on the amplitude of the emanating waves recorded on a seismogram. The idea is that the wave amplitude reflects the earthquake size once the amplitudes are corrected for the decrease with distance due to geometric spreading and attenuation.

Magnitude scales thus have the general form:

    M = log10(A/T) + F(h, Δ) + C

where A is the amplitude of the signal, T is its dominant period, F is a correction for the variation of amplitude with the earthquake's depth h and angular distance Δ from the seismometer, and C is a regional scaling factor.

Magnitude scales are logarithmic, so an increase of one unit, e.g. from 5 to 6, indicates a ten-fold increase in seismic wave amplitude. Note that since a log10 scale is used, magnitudes can be negative for very small displacements. For example, a magnitude -1 earthquake might correspond to a hammer blow. (A numerical sketch of this general form is given at the end of this section.)

Richter magnitude

The concept of earthquake magnitude was introduced by Charles Richter in 1935 for southern California earthquakes. He originally defined earthquake magnitude as the logarithm (to the base 10) of the maximum amplitude, measured in microns, on the record of a standard torsion seismograph with a pendulum period of 0.8 s, magnification of 2800, and damping factor 0.8, located at a distance of 100 km from the epicenter.
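To make the general magnitude formula concrete, the following Python sketch evaluates M = log10(A/T) + F(h, Δ) + C. The correction F used here borrows the 1.66 log10(Δ) distance term familiar from the surface-wave magnitude scale, and the depth term and constant C are purely illustrative assumptions; real scales calibrate F and C empirically. The sketch also verifies the ten-fold amplitude property stated above: multiplying A by 10 raises M by exactly one unit.

    import math

    def magnitude(A, T, h, delta, C=3.3):
        """General magnitude formula M = log10(A/T) + F(h, delta) + C.

        A     : signal amplitude (e.g. microns)
        T     : dominant period (s)
        h     : source depth (km)
        delta : angular distance from the seismometer (degrees)
        C     : regional scaling constant (value here is illustrative)
        """
        # Hypothetical correction: surface-wave-style distance term plus
        # an invented depth term, for illustration of the structure only.
        F = 1.66 * math.log10(delta) + 0.01 * h
        return math.log10(A / T) + F + C

    m1 = magnitude(A=100.0, T=20.0, h=10.0, delta=30.0)
    m2 = magnitude(A=1000.0, T=20.0, h=10.0, delta=30.0)
    print(m2 - m1)  # 1.0: a ten-fold amplitude increase adds one magnitude unit

Since all scale-dependent terms sit outside the log10(A/T) term, the one-unit-per-decade property holds regardless of what F and C are, which is why it is stated as a general feature of magnitude scales.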
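Richter's definition can be read directly as a formula: at the reference distance of 100 km, the local magnitude is simply the base-10 logarithm of the maximum trace amplitude in microns on the standard torsion instrument. A minimal sketch under that assumption (station exactly 100 km from the epicenter, so no distance correction is needed):

    import math

    def richter_magnitude_at_100km(max_amplitude_microns):
        """Richter (1935) magnitude for a standard torsion seismograph
        (0.8 s pendulum period, magnification 2800, damping 0.8) located
        exactly 100 km from the epicenter:

            M = log10(maximum trace amplitude in microns)
        """
        return math.log10(max_amplitude_microns)

    # A 1 mm (1000 micron) peak trace amplitude at 100 km gives M = 3:
    print(richter_magnitude_at_100km(1000.0))  # 3.0

For stations at other distances, Richter used an empirical attenuation curve to reduce the observed amplitude to its 100 km equivalent; that correction is omitted from this sketch.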