The Earthquake Prediction Experiment at Parkfield, California


Evelyn Roeloffs, U.S. Geological Survey, Vancouver, Washington
John Langbein, U.S. Geological Survey, Menlo Park, California

Abstract. Since 1985, a focused earthquake prediction experiment has been in progress along the San Andreas fault near the town of Parkfield in central California. Parkfield has experienced six moderate earthquakes since 1857 at average intervals of 22 years, the most recent a magnitude 6 event in 1966. The probability of another moderate earthquake soon appears high, but studies assigning it a 95% chance of occurring before 1993 now appear to have been oversimplified. The identification of a Parkfield fault "segment" was initially based on geometric features in the surface trace of the San Andreas fault, but more recent microearthquake studies have demonstrated that those features do not extend to seismogenic depths. On the other hand, geodetic measurements are consistent with the existence of a "locked" patch on the fault beneath Parkfield that has presently accumulated a slip deficit equal to the slip in the 1966 earthquake. A magnitude 4.7 earthquake in October 1992 brought the Parkfield experiment to its highest level of alert, with a 72-hour public warning that there was a 37% chance of a magnitude 6 event. However, this warning proved to be a false alarm. Most data collected at Parkfield indicate that strain is accumulating at a constant rate on this part of the San Andreas fault, but some interesting departures from this behavior have been recorded. Here we outline the scientific arguments bearing on when the next Parkfield earthquake is likely to occur and summarize geophysical observations to date.

INTRODUCTION

Earthquake prediction research seemed on the verge of a breakthrough in 1975, when Chinese seismologists successfully alerted Haicheng city of an impending magnitude 7.3 earthquake [Deng et al., 1981]. Public warnings were also achieved before the 1976 Songpan-Pingwu, China, earthquakes [Wallace and Teng, 1980] and the 1978 Izu-Oshima, Japan, earthquake (N. Nishide, oral communication, 1992). However, these early successes have not been repeated. Many seismologists even doubt the existence of measurable phenomena preceding earthquakes that might justify short-term predictions. Nonetheless, the search for earthquake precursors continues worldwide, much of it in focused earthquake prediction study areas in Japan, Turkey, the former Soviet Union, and China. In the United States a focused earthquake prediction study is in progress at Parkfield, California.

The town of Parkfield, built practically on the San Andreas fault about halfway between San Francisco and Los Angeles (Figure 1), experienced moderate earthquakes in 1857, 1881, 1901, 1922, 1934, and, most recently, a magnitude 6 event in 1966. On the basis of this sequence of events, Bakun and Lindh [1985] estimated the recurrence interval for magnitude 6 earthquakes near Parkfield as 22 ± 3 years and conjectured with 95% confidence that another such event would occur before 1993.
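The interval arithmetic behind this conjecture can be made explicit. The sketch below is only an illustration, not the Bakun and Lindh [1985] calculation: it derives the roughly 22-year mean interval from the historical event years quoted above and shows how, if the quoted ±3 years is read as a one-standard-deviation uncertainty measured from the 1966 event, a one-sided 95% bound lands near 1993. Treating ±3 years as one standard deviation and applying a Gaussian bound are assumptions made for this sketch.

```python
# Illustration only: simple interval arithmetic behind the "22 +/- 3 years,
# 95% chance before 1993" statement. This is NOT the Bakun and Lindh [1985]
# method; the statistical reading of the quoted uncertainty is an assumption.
from statistics import mean, stdev

event_years = [1857, 1881, 1901, 1922, 1934, 1966]
intervals = [b - a for a, b in zip(event_years, event_years[1:])]  # 24, 20, 21, 12, 32

print(f"mean interval: {mean(intervals):.1f} yr")   # ~21.8 yr, i.e. ~22 yr
print(f"raw scatter  : {stdev(intervals):.1f} yr")  # ~7 yr, larger than the quoted +/- 3,
                                                    # one reason the forecast now looks oversimplified

# Using the published estimate of 22 +/- 3 years, measured from the 1966 event:
expected = 1966 + 22                   # ~1988
one_sided_95 = expected + 1.645 * 3    # ~1993 under the Gaussian assumption
print(f"expected year: {expected}")
print(f"~95% bound   : {one_sided_95:.0f}")
```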
The Parkfield Earthquake Prediction Experiment began in 1985, after Bakun and Lindh's conjecture was accepted by the National Earthquake Prediction Evaluation Council (NEPEC) and the California Earthquake Prediction Evaluation Council (CEPEC) [Shearer, 1985a, b]. One million dollars of initial funding for instrumentation was provided by the U.S. Geological Survey (USGS) and was matched by the state of California. The experimental goals are to record geophysical signals accompanying the earthquake process at Parkfield, to capture effects of strong motion in the near field, and, most ambitiously, to attempt to issue a warning up to 3 days before the magnitude 6 event if seismicity and/or fault creep rate reach predefined threshold levels.

Eight years into the experiment, 21 instrument networks designed to record preearthquake phenomena operate near Parkfield (Tables 1a and 1b) [Bakun, 1988b], five of which are monitored in real time. Ten more networks wait to record strong motion, coseismic slip, and liquefaction (Table 1c) [Sherburne, 1988] when a moderate earthquake occurs. Investigators from 13 institutions in addition to the USGS participate. Including a seismic network that records and locates all events above magnitude 1.0, the instrumentation at Parkfield is unmatched in diversity and density by that of any other earthquake prediction study worldwide.

(Reviews of Geophysics, 32, 3 / August 1994, pages 315-336. Paper number 94RG01114. Published in 1994 by the American Geophysical Union. This paper is not subject to U.S. copyright.)

Since the initial two-million-dollar startup funding, virtually all work at Parkfield has been carried out within the previously existing budget of the National Earthquake Hazards Reduction Program (NEHRP). In this sense, the Parkfield experiment constitutes a commitment to concentrate monitoring instruments in a single area with a high probability of experiencing an earthquake soon. Such a concentration is desirable because reports of earthquake precursors will be more credible if they are corroborated by signals on other instruments and because long baselines of data may be necessary in order to determine whether small preearthquake signals are anomalous.

Figure 1. Parkfield experiment landmarks along the San Andreas fault in central California. In the inset California map the solid rectangle is the location of the map area; SF denotes San Francisco; LA denotes Los Angeles. Arrows indicate the direction of motion of the Pacific plate relative to the North American plate. The San Andreas fault is darkened over the approximate extent of surface rupture associated with the 1966 Parkfield earthquake as mapped by Brown [1970].

A unique and important feature of the Parkfield experiment, which enabled it to receive start-up funds from California, is its prototypical plan for communicating earthquake hazard information to the California Office of Emergency Services (OES). Bakun et al. [1986, 1987] and Bakun [1988c] describe the plan in detail. "Alert/status" thresholds have been defined for six instrumentation networks (Table 1a). The seismic and creep thresholds reflect phenomena preceding the 1966 Parkfield earthquake, but such data are not available for the other instruments. Instead, their thresholds are set so that, statistically, signals unusual enough to meet them occur with a specified frequency. When a signal exceeds a threshold (usually detected by a computer program that can page a scientist), scientists monitoring all six of these networks are notified. There are five alert/status levels, with notification to OES at all but the two lowest levels. At the highest level (level A) the notification to OES states that there is a 37% chance that the magnitude 6 Parkfield earthquake will happen within the next 72 hours. Before the Parkfield plan was fully exercised, response plans for volcanic unrest at Long Valley Caldera [Hill et al., 1991] and for earthquake hazard in southern California [Jones et al., 1991] had already been modeled after it.
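To make the notification scheme concrete, here is a minimal sketch of the alert/status logic described above. Only level A, the rule that OES is notified at all but the two lowest of the five levels, and the 37%-within-72-hours statement come from the text; the lower level names, the `Alert` structure, and the paging mechanics are assumptions for illustration.

```python
# Hedged sketch of the Parkfield alert/status notification logic.
# Level "A" and its public statement follow the text; the names of the
# lower levels and the notify/page plumbing are assumptions.
from dataclasses import dataclass

LEVELS = ["A", "B", "C", "D", "E"]        # highest to lowest (lower names assumed)
NOTIFY_OES = {"A", "B", "C"}              # all but the two lowest levels

@dataclass
class Alert:
    network: str      # e.g. "creep meters", "seismicity"
    level: str        # one of LEVELS

def handle_alert(alert: Alert) -> str:
    """Return the message issued when a network threshold is exceeded."""
    # Scientists monitoring all six threshold networks are notified at every level.
    msg = f"[{alert.level}] {alert.network}: scientists on all six networks paged."
    if alert.level in NOTIFY_OES:
        msg += " OES notified."
    if alert.level == "A":
        # Paraphrases the level-A statement quoted in the text.
        msg += (" Statement: 37% chance of the magnitude 6 Parkfield"
                " earthquake within the next 72 hours.")
    return msg

print(handle_alert(Alert(network="creep meters", level="A")))
```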
The NEPEC- and CEPEC-sanctioned prediction window for Parkfield officially closed on December 31, 1992, but the predicted magnitude 6 earthquake did not occur. Focused study of Parkfield has generated controversy over the simple picture of cyclically recurring seismicity that motivated the experiment. During the interval of the experiment, California suffered two magnitude 7 earthquakes near major population centers that are less well instrumented than Parkfield. Many have expressed concern that concentration of earthquake prediction resources at Parkfield may be a ...

TABLE 1a. Prediction Monitoring Networks at Parkfield With Defined Alert/Status Levels as of November 1992

Network | Number of Sites | Sensitivity/Threshold | Measurement Interval | References
Seismicity (USGS Calnet) | 40 | magnitude 1.0 | continuous | Nishioka and Michael [1990]
Continuous strain* | 6 | 10^-9 | 10 min | Myren and Johnston [1989]
Magnetic field | 7 | 0.25 nT | 10 min | Mueller et al. [1994]
Creep meters (Invar wire-LVDT†) | 13 | 0.01 mm | 10 min | Schulz [1989]
Groundwater level* | 10 | 1 mm | 15 min | Roeloffs et al. [1989]
Two-color laser geodimeter | 18 lines | 1 mm | 3 times/week | Langbein et al. [1990]

All of these networks are operated by the USGS, Menlo Park, California. The reference describes the network and/or measurement technique.
*The sensitivity given is appropriate for signals with durations of 24 hours or less. Because of increasing Earth noise at longer periods, signals must be larger to be detected at longer periods (see, for example, Langbein et al. [1993]).
†LVDT, linear variable differential transformer.
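Table 1a can be read as the configuration that the threshold-checking program described above operates on. The sketch below encodes the six monitored networks with their nominal thresholds and sampling intervals and flags a reading that meets or exceeds a network's threshold; the dictionary layout and the `exceeds_threshold` helper are assumptions, while the numerical values come directly from Table 1a.

```python
# Illustrative encoding of Table 1a (thresholds and sampling intervals).
# Field names and the helper below are assumptions; values follow the table.
THRESHOLD_NETWORKS = {
    # network name           (threshold, unit,       measurement interval)
    "seismicity (Calnet)":   (1.0,   "magnitude",    "continuous"),
    "continuous strain":     (1e-9,  "strain",       "10 min"),
    "magnetic field":        (0.25,  "nT",           "10 min"),
    "creep meters":          (0.01,  "mm",           "10 min"),
    "groundwater level":     (1.0,   "mm",           "15 min"),
    "two-color geodimeter":  (1.0,   "mm",           "3 times/week"),
}

def exceeds_threshold(network: str, reading: float) -> bool:
    """True if a reading meets or exceeds the network's nominal threshold.

    Per the table footnote, these sensitivities apply to signals lasting
    24 hours or less; longer-period signals must be larger to be detected
    because Earth noise increases at longer periods.
    """
    threshold, _unit, _interval = THRESHOLD_NETWORKS[network]
    return abs(reading) >= threshold

# Example: a 0.03 mm creep step would trip the creep-meter threshold.
print(exceeds_threshold("creep meters", 0.03))   # True
```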
TABLE 1b. Prediction-Oriented Networks at Parkfield Without Alert/Status Levels as of November 1992

Network and Operator | Number of Sites | Sensitivity/Threshold | Measurement Interval | References
Borehole tensor strain*†; University of Queensland, Australia | 2 | 10^-9 | 10 min | Gladwin et al. [1987]
Tilt*†; USGS | 5 | 1 µrad | 10 min | Mortensen et al. [1978]
Groundwater radon²; USGS | 2 | 1 pCi/L | 10 min | Noguchi and Wakita [1977]
Soil hydrogen²; USGS, Reston | 7 | 10 ppm | 10 min | Sato et al. [1986]
Borehole microtemperature; USGS | 1 | 10^-4 °C | 12 min | C. Williams (oral communication, 1992)
Resistivity; UC, Riverside | 6 dipoles | 1% | daily average | Park [1991]
80-kHz magnetic field; University of Alaska | 1 | 0.5 µV | 6 s | E. Wescott (oral communication, 1992)
0.01- to 10-Hz magnetic field; Stanford University | 2 | 1 pT | 30 min, average | McGill et al. [1993], Bernardi et al. [1991]
Trilateration (4 to 33 km lines); USGS | 113 lines | 3 mm + 0.2 mm/km | yearly | King et al. [1987]
Small-aperture nets (3 to 4 km lines); USGS | 4 | 3-4 mm | yearly | King et al. [1987]
GPS; USGS, SIO | 4 | <1 cm north, east; 1-2 cm vertical | 4 times per year | Prescott et al. [1989]
Rapid static GPS; JPL, USGS | 107 | 2 mm | yearly | Hurst et al. [1992]
Continuous GPS; SIO, JPL, USGS | 1 | 1 cm | continuous | K. Hudnut (oral communication, 1992)
Highway 46 GPS array; USGS, ... | 16 | 1 cm | 2 times per year | K. ...