The Earthquake Prediction Experiment at Parkfield, California
Recommended publications
- Generalized Linear Models (GLMs)
San José State University, Math 261A: Regression Theory & Methods. Generalized Linear Models (GLMs). Dr. Guangliang Chen. This lecture is based on textbook Chapter 13, Sections 13.1–13.3. Outline of this presentation: What is a GLM? Logistic regression. Poisson regression.

What is a GLM? In ordinary linear regression, we assume that the response is a linear function of the regressors plus Gaussian noise:

y = β0 + β1x1 + ··· + βkxk + ε, where β0 + β1x1 + ··· + βkxk = x′β is the linear form and ε ~ N(0, σ²) is the noise, so that y ~ N(x′β, σ²).

The model can be reformulated in terms of
• the distribution of the response: y | x ~ N(µ, σ²), and
• the dependence of the mean on the predictors: µ = E(y | x) = x′β.

[Figure: scatter of (x, y) data with the fitted line β0 + β1x, shown for β = (1, 2).]

Generalized linear models (GLMs) extend linear regression by allowing the response variable to have
• a general distribution (with mean µ = E(y | x)), and
• a mean that depends on the predictors through a link function g: that is, g(µ) = x′β, or equivalently, µ = g⁻¹(x′β).

In a GLM, the response is typically assumed to have a distribution in the exponential family, a large class of probability distributions with pdfs of the form f(x | θ) = a(x)b(θ) exp(c(θ) · T(x)), including:
• Normal: ordinary linear regression
• Bernoulli: logistic regression, modeling binary data
• Binomial: multinomial logistic regression, modeling general categorical data
• Poisson: Poisson regression, modeling count data
• Exponential, Gamma: survival analysis
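As a concrete illustration of the two GLMs named in the outline, the following minimal sketch fits a logistic and a Poisson regression with statsmodels. The synthetic data and coefficient values are invented for illustration; they are not from the lecture.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = sm.add_constant(rng.normal(size=(n, 2)))  # intercept column + 2 regressors
beta = np.array([-0.5, 1.0, 2.0])             # hypothetical true coefficients

# Logistic regression: y | x ~ Bernoulli(mu), with logit link g(mu) = x'beta
p = 1 / (1 + np.exp(-X @ beta))
y_bin = rng.binomial(1, p)
logit_fit = sm.GLM(y_bin, X, family=sm.families.Binomial()).fit()
print(logit_fit.params)  # should be close to beta

# Poisson regression: y | x ~ Poisson(mu), with log link g(mu) = x'beta
mu = np.exp(X @ (0.3 * beta))
y_cnt = rng.poisson(mu)
pois_fit = sm.GLM(y_cnt, X, family=sm.families.Poisson()).fit()
print(pois_fit.params)
```

In both cases the model is specified by the response distribution plus the link function, exactly the two ingredients the lecture uses to define a GLM.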
- Development of Faults and Prediction of Earthquakes in the Himalayas
Journal of Graphic Era University, Vol. 6, Issue 2, 197–206, 2018. ISSN: 0975-1416 (Print), 2456-4281 (Online).

A. K. Dubey, Department of Petroleum Engineering, Graphic Era Deemed to be University, Dehradun, India. E-mail: [email protected] (Received October 9, 2017; Accepted July 3, 2018)

Abstract: The recurrence period of high-magnitude earthquakes in the Himalayas may be of the order of hundreds of years, but a large number of smaller earthquakes occur every day. These low-intensity earthquakes are not felt by human beings but are recorded by sensitive instruments called seismometers. It is not possible to get rid of these earthquakes because mountain building in the region is still going on. Continuous compression and the formation of faults are caused by the northward movement of the Indian plate. Some of the larger faults extend from Kashmir to Arunachal Pradesh. Strain buildup in the region results in displacement along these faults, which causes earthquakes. The types of these faults, the mechanism of their formation and the problems of predicting earthquakes in the region are discussed.

Keywords: Faults and faulting, Himalayan thrusts, Oil trap, Seismicity, Superimposed deformation.

1. Introduction
If we go back in the history of the Earth (~250 million years ago), India was part of a huge landmass called 'Pangaea'. The landmass was positioned in the southern hemisphere, very close to the South Pole (Antarctica). For reasons still unknown, the landmass broke apart and India started its onward journey in a northerly direction.
How Prediction Statistics Can Help Us Cope When We Are Shaken, Scared and Irrational
EGU21-15219, https://doi.org/10.5194/egusphere-egu21-15219, EGU General Assembly 2021. © Author(s) 2021. This work is distributed under the Creative Commons Attribution 4.0 License.

Yavor Kamer (RichterX.com, Zürich, Switzerland), Shyam Nandan (Dietikon, Zürich, Switzerland), Stefan Hiemer (RichterX.com, Zürich, Switzerland), Guy Ouillon (Lithophyse, Nice, France), and Didier Sornette (Department of Management, Technology and Economics, ETH Zürich; Institute of Risk Analysis, Prediction and Management (Risks-X), SUSTech, Shenzhen, China)

Nature is scary. You can be sitting at your home and the next thing you know you are trapped under the rubble of your own house or sucked into a sinkhole. For millions of years we have been the figurines of this precarious scene, and we have found our own ways of dealing with the anxiety. It is natural that we create and consume prophecies, conspiracies and false predictions. Information technologies amplify not only our rational but also our irrational deeds. Social media algorithms, tuned to maximize attention, make sure that misinformation spreads much faster than its counterpart. What can we do to minimize the adverse effects of misinformation, especially in the case of earthquakes? One option could be to designate one authoritative institute, set up a big surveillance network, and cancel or ban every source of misinformation before it spreads. This might have worked a few centuries ago, but not in this day and age. Instead we propose a more inclusive option: embrace all voices and channel them into an actual, prospective earthquake prediction platform (Kamer et al. …
- Foreshock Sequences and Short-Term Earthquake Predictability on East Pacific Rise Transform Faults
Jeffrey J. McGuire (Department of Geology and Geophysics, Woods Hole Oceanographic Institution), Margaret S. Boettcher (MIT-Woods Hole Oceanographic Institution Joint Program, Woods Hole, Massachusetts) & Thomas H. Jordan (Department of Earth Sciences, University of Southern California, Los Angeles)

East Pacific Rise transform faults are characterized by high slip rates (more than ten centimetres a year), predominantly aseismic slip and maximum earthquake magnitudes of about 6.5. Using recordings from a hydroacoustic array deployed by the National Oceanic and Atmospheric Administration, we show here that East Pacific Rise transform faults also have a low number of aftershocks and high foreshock rates compared to continental strike-slip faults. The high ratio of foreshocks to aftershocks implies that such transform-fault seismicity cannot be explained by seismic triggering models in which there is no fundamental distinction between foreshocks, mainshocks and aftershocks. The foreshock sequences on East Pacific Rise transform faults can be used to predict (retrospectively) earthquakes of magnitude 5.4 or greater, in narrow spatial and temporal windows and with a high probability gain. The predictability of such transform earthquakes is consistent with a model in which slow slip transients trigger earthquakes, enrich their low-frequency radiation and accommodate much of the aseismic plate motion. On average, before large earthquakes occur, local seismicity rates show a significant increase [1]. In continental regions, where dense … support the inference of slow slip transients, but the subject remains controversial [2, 3].
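To make the notion of "high probability gain" concrete, here is a small illustrative calculation, not taken from the paper: the gain G of an alarm-based forecast is the hit rate divided by the fraction of space-time covered by alarms. All numbers below are invented.

```python
# Toy probability-gain calculation for an alarm-based earthquake forecast.
# All quantities are hypothetical, for illustration only.

def probability_gain(hits: int, targets: int,
                     alarm_fraction: float) -> float:
    """G = (fraction of target events inside alarms) / (alarm space-time fraction).

    G > 1 means the alarms concentrate events better than a random
    alarm strategy occupying the same space-time volume.
    """
    hit_rate = hits / targets
    return hit_rate / alarm_fraction

# Hypothetical example: 6 of 10 target earthquakes fall inside alarms
# that together cover 1% of the monitored space-time volume.
print(probability_gain(hits=6, targets=10, alarm_fraction=0.01))  # 60.0
```

A gain of 60 in this toy case would mean earthquakes are 60 times more likely inside the declared alarm windows than outside, which is the sense in which narrow windows with many hits yield "high probability gain".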
- Reliability Engineering: Trends, Strategies and Best Practices
WHITE PAPER, September 2007

Predictive Engineering: Think. Design. Perfect! HCL's Predictive Engineering encompasses the complete product life-cycle process, from concept to design to prototyping/testing, all the way to manufacturing. This helps in making decisions early in the design process, perfecting the product, thereby cutting down cycle time and costs, and meeting set reliability and quality standards.

Contents: Abstract; Importance of reliability engineering in product industry; Current trends in reliability engineering; Reliability planning – an important task; Strength of reliability analysis; Is your reliability test plan optimized?; Challenges to overcome; Outsourcing of a reliability task; About HCL

Abstract: In reliability engineering, product industries now follow a conscious and planned approach to effectively address multiple issues related to product certification, failure returns, customer satisfaction, market competition and product life-cycle cost. Today, reliability professionals face new challenges posed by advanced and complex technology. Expertise and experience are necessary to provide an optimized solution that meets the time and cost constraints associated with analysis and testing. In the changed scenario, the reliability approach has also become more objective and result-oriented, and is well supported by analysis software. This paper discusses all the associated issues, including outsourcing of reliability tasks to a professional service provider as an alternate, cost-effective option.

Importance of reliability engineering in product industry: The degree of importance given to the reliability of products varies depending on their criticality.
- Temporal and Spatial Evolution Analysis of Earthquake Events in California and Nevada Based on Spatial Statistics
International Journal of Geo-Information, Article.

Weifeng Shan (Institute of Geophysics, China Earthquake Administration, Beijing; School of Emergency Management, Institute of Disaster Prevention, Langfang), Zhihao Wang (School of Emergency Management, Institute of Disaster Prevention, Langfang), Yuntian Teng* (Institute of Geophysics, China Earthquake Administration, Beijing) and Maofa Wang (School of Computer and Information Security, Guilin University of Electronic Science and Technology, Guilin). *Correspondence: [email protected]

Abstract: Studying the temporal and spatial evolution trends of earthquakes in an area is beneficial for determining the earthquake risk of the area, so that local governments can make the correct decisions for disaster prevention and reduction. In this paper, we propose a new method for analyzing the temporal and spatial evolution trends of earthquakes, based on earthquakes of magnitude 3.0 or above from 1980 to 2019 in California and Nevada. The experimental results show that (1) the frequency of earthquake events of magnitude 4.5 or above presents a fairly regular alternating decreasing–rising trend in this area; (2) by using the weighted average center method to analyze the spatial concentration of earthquake events of magnitude 3.0 or above in this region, we find that the weighted average center of the earthquake events shows a conch-like (spiral) movement, closing in on the center from all sides; (3) the spatial distribution of earthquake events in this area shows a NW–SE orientation when the standard deviational ellipse (SDE) method is used, which is basically consistent with the direction of the San Andreas Fault Zone across …
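The two spatial statistics named in the abstract, the weighted average center and the standard deviational ellipse, are standard point-pattern measures and easy to sketch. Below is a minimal Python illustration, assuming magnitude is used as the weight; the epicenter coordinates are invented, and the implementation is mine, not the authors' code.

```python
import numpy as np

def weighted_mean_center(x, y, w):
    """Weighted average center of points (x, y) with weights w (e.g., magnitudes)."""
    w = np.asarray(w, dtype=float)
    return np.average(x, weights=w), np.average(y, weights=w)

def sde_orientation(x, y, w):
    """Orientation of the (weighted) standard deviational ellipse, in degrees."""
    w = np.asarray(w, dtype=float)
    xc, yc = weighted_mean_center(x, y, w)
    dx, dy = np.asarray(x) - xc, np.asarray(y) - yc
    a = np.sum(w * (dx**2 - dy**2))
    b = np.sum(w * dx * dy)
    # Standard SDE rotation formula: tan(theta) = (a + sqrt(a^2 + 4 b^2)) / (2 b)
    theta = np.arctan2(a + np.sqrt(a**2 + 4 * b**2), 2 * b)
    return np.degrees(theta) % 180

# Hypothetical epicenters (lon, lat) and magnitude weights
lon = [-120.1, -119.8, -118.9, -117.5, -116.8]
lat = [36.2, 35.9, 35.1, 34.4, 33.9]
mag = [3.1, 4.5, 3.8, 5.0, 3.3]
print(weighted_mean_center(lon, lat, mag))
print(sde_orientation(lon, lat, mag))  # roughly NW-SE for this toy alignment
```

Tracking the mean center year by year is what reveals the migration pattern the abstract describes, while the SDE axis gives the dominant orientation of the epicenter cloud.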
- Analyzing the Performance of GPS Data for Earthquake Prediction
Remote Sensing, Article.

Valeri Gitis, Alexander Derendyaev* and Konstantin Petrov (The Institute for Information Transmission Problems, 127051 Moscow, Russia; [email protected] (V.G.); [email protected] (K.P.)). *Correspondence: [email protected]; Tel.: +7-495-6995096

Abstract: The results of earthquake prediction largely depend on the quality of the data and the methods of their joint processing. At present, for a number of regions it is possible, in addition to data from earthquake catalogs, to use space-geodesy data obtained with the help of GPS. The purpose of our study is to evaluate the efficiency of using time series of displacements of the Earth's surface, according to GPS data, for the systematic prediction of earthquakes. The criterion of efficiency is the probability of successful prediction of an earthquake with a limited size of the alarm zone. We use a machine learning method, namely the method of the minimum area of alarm, to predict earthquakes with a magnitude greater than 6.0 and a hypocenter depth of up to 60 km that occurred from 2016 to 2020 in Japan, and earthquakes with a magnitude greater than 5.5 and a hypocenter depth of up to 60 km that occurred from 2013 to 2020 in California. For each region, we compare the following results: a random forecast of earthquakes; a forecast obtained with the field of spatial density of earthquake epicenters; and forecasts obtained with spatio-temporal fields based on GPS data, on seismological data, and on combined GPS and seismological data.
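The efficiency criterion used here, the probability of successful prediction under a constrained alarm-zone size, can be scored with a Molchan-style sweep over alarm thresholds. The sketch below is my own illustration of that generic evaluation on a toy gridded alarm field, not the authors' minimum-area-of-alarm implementation.

```python
import numpy as np

def molchan_curve(alarm_field, event_cells):
    """Hit rate vs. alarm-area fraction for a gridded alarm field.

    alarm_field: 2-D array of alarm scores (higher = more alarming).
    event_cells: list of (i, j) grid cells containing target earthquakes.
    Returns arrays (area_fraction, hit_rate), one point per threshold.
    """
    thresholds = np.sort(np.unique(alarm_field))[::-1]
    n_cells = alarm_field.size
    areas, hits = [], []
    for t in thresholds:
        alarm = alarm_field >= t
        areas.append(alarm.sum() / n_cells)
        hits.append(np.mean([alarm[i, j] for i, j in event_cells]))
    return np.array(areas), np.array(hits)

# Hypothetical example: a random score field and three target events
rng = np.random.default_rng(1)
field = rng.random((50, 50))
events = [(10, 12), (40, 7), (25, 30)]
area, hit = molchan_curve(field, events)
# Probability of successful prediction when alarms may cover at most 10% of the area:
print(hit[area <= 0.10].max())
```

A skillful forecast pushes the curve toward high hit rates at small alarm fractions; a random field, as here, yields hit rates roughly equal to the alarm fraction.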
- Rupture Process of the 2019 Ridgecrest, California Mw 6.4 Foreshock and Mw 7.1 Earthquake Constrained by Seismic and Geodetic Data, Bull
Kang Wang*(1,2), Douglas S. Dreger(1,2), Elisa Tinti(3,4), Roland Bürgmann(1,2), and Taka'aki Taira(2)

ABSTRACT: The 2019 Ridgecrest earthquake sequence culminated in the largest seismic event in California since the 1999 Mw 7.1 Hector Mine earthquake. Here, we combine geodetic and seismic data to study the rupture process of both the 4 July Mw 6.4 foreshock and the 6 July Mw 7.1 mainshock. The results show that the Mw 6.4 foreshock rupture started on a northwest-striking right-lateral fault, and then continued on a southwest-striking fault with mainly left-lateral slip. Although most moment release during the Mw 6.4 foreshock was along the southwest-striking fault, slip on the northwest-striking fault seems to have played a more important role in triggering the Mw 7.1 mainshock that happened ~34 hr later. Rupture of the Mw 7.1 mainshock was characterized by dominantly right-lateral slip on a series of overall northwest-striking fault strands, including the one that had already been activated during the nucleation of the Mw 6.4 foreshock. The maximum slip of the 2019 Ridgecrest earthquake was ~5 m, located at a depth range of 3–8 km near the Mw 7.1 epicenter, corresponding to a shallow slip deficit of ~20%–30%. Both the foreshock and mainshock had a relatively low rupture velocity of ~2 km/s, which is possibly related to the geometric complexity and immaturity of the eastern California shear zone faults.
- Prediction in Multilevel Generalized Linear Models
J. R. Statist. Soc. A (2009) 172, Part 3, pp. 659–687.

Anders Skrondal (Norwegian Institute of Public Health, Oslo, Norway) and Sophia Rabe-Hesketh (University of California, Berkeley, USA, and Institute of Education, London, UK). [Received February 2008. Final revision October 2008]

Summary: We discuss prediction of random effects and of expected responses in multilevel generalized linear models. Prediction of random effects is useful, for instance, in small area estimation and disease mapping, effectiveness studies and model diagnostics. Prediction of expected responses is useful for planning, model interpretation and diagnostics. For prediction of random effects, we concentrate on empirical Bayes prediction and discuss three different kinds of standard errors: the posterior standard deviation and the marginal prediction error standard deviation (comparative standard errors), and the marginal sampling standard deviation (diagnostic standard error). Analytical expressions are available only for linear models and are provided in an appendix. For other multilevel generalized linear models we present approximations and suggest using parametric bootstrapping to obtain standard errors. We also discuss prediction of expectations of responses or probabilities for a new unit in a hypothetical cluster, in a new (randomly sampled) cluster, or in an existing cluster. The methods are implemented in gllamm and illustrated by applying them to survey data on reading proficiency of children nested in schools. Simulations are used to assess the performance of various predictions and associated standard errors for logistic random-intercept models under a range of conditions.

Keywords: Adaptive quadrature; Best linear unbiased predictor (BLUP); Comparative standard error; Diagnostic standard error; Empirical Bayes; Generalized linear mixed model; gllamm; Mean-squared error of prediction; Multilevel model; Posterior; Prediction; Random effects; Scoring
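The paper's methods are implemented in gllamm (Stata). As a rough Python analogue for the linear random-intercept case, the sketch below fits a mixed model with statsmodels and extracts the empirical Bayes (BLUP) predictions of the cluster effects; the simulated school/pupil data and all names are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate pupils nested in schools: y = 2 + 0.5*x + u_school + e
rng = np.random.default_rng(42)
n_schools, n_pupils = 30, 20
school = np.repeat(np.arange(n_schools), n_pupils)
u = rng.normal(0, 1.0, n_schools)            # true school-level random intercepts
x = rng.normal(size=n_schools * n_pupils)
y = 2 + 0.5 * x + u[school] + rng.normal(0, 2.0, x.size)
df = pd.DataFrame({"y": y, "x": x, "school": school})

# Linear random-intercept model; groups define the clusters
result = smf.mixedlm("y ~ x", df, groups=df["school"]).fit()

# Empirical Bayes (BLUP) predictions of the random intercepts,
# shrunk toward zero relative to the raw per-school means
blups = pd.Series({g: re.iloc[0] for g, re in result.random_effects.items()})
print(result.params[["Intercept", "x"]])
print(blups.head())
```

The shrinkage visible in the BLUPs, strongest for schools with few pupils, is exactly why the paper distinguishes comparative from diagnostic standard errors when judging these predictions.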
- William L. Ellsworth, Professor (Research) of Geophysics (July 14, 2020)
William L. Ellsworth, Professor (Research) of Geophysics July 14, 2020 William L. Ellsworth is a professor in the Department of Geophysics at Stanford University His research focuses on the seismological study of active faults, the earthquakes they generate and the physics of the earthquake source. A major objective of his work is to improve our knowledge of earthquake hazards through the application of physics-based understanding of the underlying processes, and the transfer of scientific understanding of the hazard to people, businesses, policymakers and government agencies. As Co-Director of the Stanford Center for Induced and Triggered Seismicity (SCITS) he leads multi-disciplinary studies into the causes and consequences of anthropogenic earthquakes in a wide variety of settings. Before coming to Stanford in 2015 he spent over 40 years as a research geophysicist at the U.S. Geological Survey where he served as Chief of the Branch of Seismology and Chief Scientist of the Earthquake Hazards Team. He received B.S. in Physics and M.S. in Geophysics from Stanford University and his Ph.D. in Geophysics from MIT. He is a past President of the Seismological Society of America, a Fellow of the American Geophysical Union, and recipient of the Distinguished Service Award of the Department of the Interior. Education 1978 | Ph.D. in Geophysics, Massachusetts Institute of Technology. 1971 | M.S. in Geophysics, Stanford University. 1971 | B.S. in Physics, Stanford University. Professional Experience 2015 – Present | Professor (Research) of Geophysics, Stanford University. 2015 – Present | Co-Director, Stanford Center for Induced and Triggered Seismicity 1971-2015 | Geophysicist, U. S. -
- A Philosophical Treatise of Universal Induction
Entropy 2011, 13, 1076–1136; doi:10.3390/e13061076. Open Access, ISSN 1099-4300, www.mdpi.com/journal/entropy. Article.

Samuel Rathmanner and Marcus Hutter* (Research School of Computer Science, Australian National University, Corner of North and Daley Road, Canberra ACT 0200, Australia). *Author to whom correspondence should be addressed; E-Mail: [email protected]. Received: 20 April 2011; in revised form: 24 May 2011 / Accepted: 27 May 2011 / Published: 3 June 2011

Abstract: Understanding inductive reasoning is a problem that has engaged mankind for thousands of years. This problem is relevant to a wide range of fields and is integral to the philosophy of science. It has been tackled by many great minds, ranging from philosophers to scientists to mathematicians, and more recently computer scientists. In this article we argue the case for Solomonoff induction, a formal inductive framework which combines algorithmic information theory with the Bayesian framework. Although it achieves excellent theoretical results and is based on solid philosophical foundations, the requisite technical knowledge necessary for understanding this framework has caused it to remain largely unknown and unappreciated in the wider scientific community. The main contribution of this article is to convey Solomonoff induction and its related concepts in a generally accessible form, with the aim of bridging this current technical gap. In the process we examine the major historical contributions that have led to the formulation of Solomonoff induction, as well as criticisms of Solomonoff and induction in general. In particular we examine how Solomonoff induction addresses many issues that have plagued other inductive systems, such as the black ravens paradox and the confirmation problem, and compare this approach with other recent approaches.
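Solomonoff induction itself is incomputable, but its core idea, Bayesian sequence prediction with a prior that weights each hypothesis by 2 raised to the negative of its description length, can be shown on a toy hypothesis class. The sketch below is my own illustration restricted to repeating binary patterns, not anything from the article.

```python
# Toy complexity-weighted Bayesian sequence prediction.
# Hypotheses: deterministic repeating binary patterns; prior ~ 2^(-pattern length).
from itertools import product

def hypotheses(max_len=4):
    """All repeating patterns up to max_len bits; longer patterns get lower prior."""
    for L in range(1, max_len + 1):
        for pattern in product("01", repeat=L):
            yield "".join(pattern), 2.0 ** (-L)

def predict_next(observed: str, max_len: int = 4) -> float:
    """Posterior probability that the next bit is '1', mixing over all hypotheses."""
    total, mass_one = 0.0, 0.0
    for pattern, prior in hypotheses(max_len):
        # Likelihood is 1 if the pattern reproduces the data exactly, else 0
        extended = pattern * (len(observed) // len(pattern) + 2)
        if extended.startswith(observed):
            total += prior
            if extended[len(observed)] == "1":
                mass_one += prior
    return mass_one / total if total else 0.5

print(predict_next("010101"))  # 0.0: all surviving hypotheses continue with '0'
print(predict_next("111111"))  # 1.0: all surviving hypotheses continue with '1'
```

The shortest consistent pattern dominates the mixture, which is the Occam's-razor behavior that Solomonoff's universal prior formalizes over all computable hypotheses.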
- Laboratory Earthquake Forecasting: A Machine Learning Competition
PERSPECTIVE. Paul A. Johnson, Bertrand Rouet-Leduc, Laura J. Pyrak-Nolte, Gregory C. Beroza, Chris J. Marone, Claudia Hulbert, Addison Howard, Philipp Singer, Dmitry Gordeev, Dimosthenis Karaflos, Corey J. Levinson, Pascal Pfeiffer, Kin Ming Puk, and Walter Reade. Edited by David A. Weitz, Harvard University, Cambridge, MA, and approved November 28, 2020 (received for review August 3, 2020)

Earthquake prediction, the long-sought holy grail of earthquake science, continues to confound Earth scientists. Could we make advances by crowdsourcing, drawing from the vast knowledge and creativity of the machine learning (ML) community? We used Google's ML competition platform, Kaggle, to engage the worldwide ML community with a competition to develop and improve data-analysis approaches on a forecasting problem that uses laboratory earthquake data. The competitors were tasked with predicting the time remaining before the next of a series of successive laboratory quake events, based on only a small portion of the laboratory seismic data. The more than 4,500 participating teams created and shared more than 400 computer programs in openly accessible notebooks. Complementing the now well-known features of seismic data that map to fault criticality in the laboratory, the winning teams employed unexpected strategies based on rescaling failure times as a fraction of the seismic cycle and on comparing the input distributions of the training and testing data. In addition to yielding scientific insights into fault processes in the laboratory and their relation to the evolution of the statistical properties of the associated seismic data, the competition serves as a pedagogical tool for teaching ML in geophysics.
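The competition's task, regressing time-to-failure from windows of acoustic data, is commonly approached with summary-statistic features fed to a gradient-boosted regressor. The sketch below shows that generic pattern on synthetic data; the feature set, data generator, and parameters are my own invention, not any team's winning solution.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

def make_segment(time_to_failure, n=1500):
    """Synthetic acoustic segment whose variance grows as failure approaches."""
    return rng.normal(0, 1 + 5 * np.exp(-time_to_failure), n)

def features(segment):
    """Simple summary statistics of one acoustic window."""
    return [segment.mean(), segment.std(), np.abs(segment).max(),
            np.percentile(segment, 95), np.percentile(segment, 5)]

# Build a toy dataset of (features, time-to-failure) pairs
ttf = rng.uniform(0, 10, 2000)
X = np.array([features(make_segment(t)) for t in ttf])
X_tr, X_te, y_tr, y_te = train_test_split(X, ttf, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```

Mean absolute error is the natural score here, matching the error metric used to rank entries in the Kaggle competition the perspective describes.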