Why AI Doomsayers Are Like Sceptical Theists and Why It Matters
Recommended publications
Estimating Remaining Lifetime of Humanity
Estimating remaining lifetime of humanity. Yigal Gurevich, [email protected]

Abstract: In this paper we estimate the remaining time of human existence by applying the Doomsday argument and the Strong Self-Sampling Assumption to the reference class consisting of all members of Homo sapiens, formulating the calculations in traditional demographic terms of population and time, and using the theory of parameter estimation together with the available paleodemographic data. The remaining lifetime estimate is found to be 170 years, and the probability of extinction in the coming year is estimated as 0.43%.

1. Introduction. Modern humans, Homo sapiens, have existed according to the available data for at least 130,000 years [4], [5]. It is of interest, however, to estimate the time remaining for the survival of humanity. To determine this value, the so-called Doomsday argument was proposed [1]: probabilistic reasoning that predicts the future of the human species given only an estimate of the total number of humans born so far. The method was first proposed by Brandon Carter in 1983 [2]. Nick Bostrom modified it by formulating the Strong Self-Sampling Assumption (SSSA): each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class [3]. In this paper we apply the SSSA to the reference class consisting of all members of our species, formulating the calculations in traditional demographic terms of population and time, using parameter estimation theory and the available paleodemographic data. To estimate the remaining time t, we adopt the assumption that the observer has an equal chance to be anyone at any time.
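For orientation, the style of reasoning the abstract invokes can be illustrated with a deliberately simplified, time-weighted Doomsday estimate. The sketch below uses rough round numbers (total past births, average historical lifespan, a constant future population) that are assumptions for illustration only, not the paper's paleodemographic data, so its output will not match the 170-year figure quoted above.

```python
# A minimal sketch of a time-weighted (observer-moment) Doomsday estimate.
# All constants are rough illustrative assumptions, not the paper's data.

PAST_BIRTHS = 1.0e11            # assumed total humans born so far
AVG_HISTORICAL_LIFESPAN = 30.0  # assumed average lifespan in years
CURRENT_POPULATION = 8.0e9      # assumed constant future population

past_person_years = PAST_BIRTHS * AVG_HISTORICAL_LIFESPAN

def remaining_years_upper_bound(confidence: float) -> float:
    """Upper bound on remaining time, holding with probability `confidence`,
    if our observer-moment is uniformly sampled from all person-years
    ever lived (past and future)."""
    future_person_years = past_person_years * confidence / (1.0 - confidence)
    return future_person_years / CURRENT_POPULATION

print(f"Median estimate: {remaining_years_upper_bound(0.5):,.0f} years")
print(f"95% upper bound: {remaining_years_upper_bound(0.95):,.0f} years")
```

The paper's much shorter 170-year estimate presumably reflects its use of detailed paleodemographic data and observer-moment weighting rather than the round numbers assumed here.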
Anthropic Measure of Hominid (and Other) Terrestrials. Brandon Carter, LuTh, Observatoire de Paris-Meudon
Anthropic measure of hominid (and other) terrestrials. Brandon Carter, LuTh, Observatoire de Paris-Meudon. Provisional draft, April 2011.

Abstract. According to the (weak) anthropic principle, the a priori probability per unit time of finding oneself to be a member of a particular population is proportional to the number of individuals in that population, multiplied by an anthropic quotient that is normalised to unity in the ordinary (average adult) human case. This quotient might exceed unity for conceivable superhuman extraterrestrials, but it should presumably be smaller for our terrestrial anthropoid relations, such as chimpanzees now and our pre-Neanderthal ancestors in the past. The (ethically relevant) question of how much smaller can be addressed by invoking the anthropic finitude argument, using Bayesian reasoning, whereby it is implausible a posteriori that the total anthropic measure should greatly exceed the measure of the privileged subset to which we happen to belong, as members of a global civilisation that has (recently) entered a climactic phase with a timescale of demographic expansion and technical development short compared with a breeding generation. As well as “economist’s dream” scenarios with continual growth, this finitude argument also excludes “ecologist’s dream” scenarios with long-term stabilisation at some permanently sustainable level, but it does not imply the inevitability of a sudden “doomsday” cut-off. A less catastrophic likelihood is for the population to decline gradually, after passing smoothly through a peak value that is accounted for here as roughly the information content, ≈ 10^10, of our genome. The finitude requirement limits not just the future but also the past, of which the most recent phase – characterised by memetic rather than genetic evolution – obeyed the Foerster law of hyperbolic population growth.
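The "Foerster law" mentioned in the closing sentence is the empirical hyperbolic fit to historical world population published by von Foerster, Mora and Amiot (1960). A commonly quoted approximate form, reproduced here from memory as an illustration rather than from Carter's draft, is

$$ N(t) \;\approx\; \frac{C}{t_0 - t}, \qquad t_0 \approx 2027, \quad C \approx 2\times 10^{11}\ \text{person-years}, $$

where $N(t)$ is world population in year $t$, $t_0$ is the fitted finite-time singularity, and $C$ is a fitted constant. This reproduces a world population of roughly $3\times 10^9$ for 1960 and diverges as $t \to t_0$, which is why actual growth necessarily fell below the hyperbolic trend in the decades after 1960.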
UC Santa Barbara Other Recent Work
UC Santa Barbara, Other Recent Work. Title: Geopolitics, History, and International Relations. Author: Robinson, William I. Publication date: 2009. Permalink: https://escholarship.org/uc/item/29z457nf (peer reviewed).

Geopolitics, History, and International Relations, 1(2), 2009, is an international peer-reviewed academic journal and the official journal of the Contemporary Science Association, New York. The journal seeks to explore the theoretical implications of contemporary geopolitics with particular reference to territorial problems and issues of state sovereignty, and publishes papers on contemporary world politics and the global political economy from a variety of methodologies and approaches. Interdisciplinary and wide-ranging in scope, it also provides a forum for discussion of the latest developments in the theory of international relations and aims to promote an understanding of the breadth, depth and policy relevance of international history. Its purpose is to stimulate and disseminate theory-aware research and scholarship in international relations throughout the international academic community. The journal offers important original contributions by outstanding scholars and has the potential to become one of the leading journals in the field, embracing all aspects of the history of relations between states and societies. Journal ranking: A on a seven-point scale (A+, A, B+, B, C+, C, D). Geopolitics, History, and International Relations is published twice a year by Addleton Academic Publishers, 30-18 50th Street, Woodside, New York, 11377.
The End of the World: The Science and Ethics of Human Extinction, by John Leslie
THE END OF THE WORLD

‘If you want to know what the philosophy professors have got to say, John Leslie’s The End of the World is the book to buy. His willingness to grasp the nettles makes me admire philosophers for tackling the big questions.’ THE OBSERVER

‘Leslie’s message is bleak but his touch is light. Wit, after all, is preferable to the desperation which, in the circumstances, seems the only other response.’ THE TIMES

‘John Leslie is one of a very small group of philosophers thoroughly conversant with the latest ideas in physics and cosmology. Moreover, Leslie is able to write about these ideas with wit, clarity and penetrating insight. He has established himself as a thinker who is unafraid to tackle the great issues of existence, and able to sift discerningly through the competing – and frequently bizarre – concepts emanating from fundamental research in the physical sciences. Leslie is undoubtedly the world’s expert on Brandon Carter’s so-called Doomsday Argument – a philosophical poser that is startling yet informative, seemingly outrageous yet intriguing, and ultimately both disturbing and illuminating. With his distinctive and highly readable style, combined with a bold and punchy treatment, Leslie offers a fascinating glimpse of the power of human reasoning to deduce our place in the universe.’ PAUL DAVIES, PROFESSOR OF NATURAL PHILOSOPHY, UNIVERSITY OF ADELAIDE, AUTHOR OF THE LAST THREE MINUTES

‘This book is vintage John Leslie: it presents a bold and provocative thesis supported by a battery of arguments and references to the most recent advances in science. Leslie is one of the most original and interesting thinkers today.’
The Apocalypse Archive: American Literature and the Nuclear Bomb
THE APOCALYPSE ARCHIVE: AMERICAN LITERATURE AND THE NUCLEAR BOMB, by Bradley J. Fest (B.A. in English and Creative Writing, University of Arizona, Tucson, 2004; M.F.A. in Creative Writing, University of Pittsburgh, 2007). Submitted to the Graduate Faculty of the Dietrich School of Arts and Sciences in partial fulfillment of the requirements for the degree of Doctor of Philosophy, University of Pittsburgh, 2013. Defended on 17 April 2013 and approved by Jonathan Arac, PhD, Andrew W. Mellon Professor of English; Adam Lowenstein, PhD, Associate Professor of English and Film Studies; Philip E. Smith, PhD, Associate Professor of English; and Terry Smith, PhD, Andrew W. Mellon Professor of Contemporary Art History and Theory. Dissertation Director: Jonathan Arac, PhD. Copyright © Bradley J. Fest 2013.

This dissertation looks at global nuclear war as a trope that can be traced throughout twentieth-century American literature. I argue that despite the non-event of nuclear exchange during the Cold War, the nuclear referent continues to shape American literary expression. Since the early 1990s the nuclear referent has dispersed into a multiplicity of disaster scenarios, producing a “second nuclear age.” If the atomic bomb once introduced the hypothesis “of a total and remainderless destruction of the archive,” today literature’s staged anticipation of catastrophe has become inseparable from the realities of global risk.
Between Ape and Artilect
Between Ape and Artilect: Conversations with Pioneers of Artificial General Intelligence and Other Transformative Technologies. Interviews conducted and edited by Ben Goertzel. This work is offered under the following license terms: Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported (CC-BY-NC-ND-3.0); see http://creativecommons.org/licenses/by-nc-nd/3.0/ for details. Copyright © 2013 Ben Goertzel. All rights reserved.

“Man is a rope stretched between the animal and the Superman – a rope over an abyss.” – Friedrich Nietzsche, Thus Spake Zarathustra

Table of Contents
- Introduction
- Itamar Arel: AGI via Deep Learning
- Pei Wang: What Do You Mean by “AI”?
- Joscha Bach: Understanding the Mind
- Hugo DeGaris: Will There be Cyborgs?
- DeGaris Interviews Goertzel: Seeking the Sputnik of AGI
- Linas Vepstas: AGI, Open Source and Our Economic Future
- Joel Pitt: The Benefits of Open Source for AGI
- Randal Koene: Substrate-Independent Minds
- João Pedro de Magalhães: Ending Aging
Challenges to the Omohundro—Bostrom Framework for AI Motivations
Challenges to the Omohundro—Bostrom framework for AI motivations. Olle Häggström.

Abstract. Purpose: This paper contributes to the futurology of a possible artificial intelligence (AI) breakthrough by reexamining the Omohundro—Bostrom theory of instrumental vs. final AI goals. Does that theory, along with its predictions for what a superintelligent AI would be motivated to do, hold water? Design/methodology/approach: The standard tools of systematic reasoning and analytic philosophy are used to probe possible weaknesses of Omohundro—Bostrom theory from four different directions: self-referential contradictions, Tegmark’s physics challenge, moral realism, and the messy case of human motivations. Findings: The two cornerstones of Omohundro—Bostrom theory – the orthogonality thesis and the instrumental convergence thesis – are both open to various criticisms that question their validity and scope. These criticisms are, however, far from conclusive: while they do suggest that a reasonable amount of caution and epistemic humility should attach to predictions derived from the theory, further work will be needed to clarify its scope and to put it on more rigorous foundations. Originality/value: The practical value of being able to predict AI goals and motivations under various circumstances cannot be overstated: the future of humanity may depend on it. Currently the only framework available for making such predictions is Omohundro—Bostrom theory, and the value of the present paper is to demonstrate its tentative nature and the need for further scrutiny.

1. Introduction. Any serious discussion of the long-term future of humanity needs to take into account the possible influence of radical new technologies, or risk turning out irrelevant and missing all the action due to implicit and unwarranted assumptions about technological status quo.
The Simulation Argument and the Reference Class Problem
The Simulation Argument and the Reference Class Problem: a Dialectical Contextualism Analysis. Paul Franceschi, [email protected], www.paulfranceschi.com. English postprint of a paper initially published in French in Philosophiques, 2016, 43-2, pp. 371-389, under the title L’argument de la Simulation et le problème de la classe de référence : le point de vue du contextualisme dialectique.

ABSTRACT. I present in this paper an analysis of the Simulation Argument from a dialectical contextualist’s standpoint. This analysis is grounded in the reference class problem. I begin by describing Bostrom’s Simulation Argument in detail. I then identify the reference class within the Simulation Argument and point out a reference class problem, by applying the argument successively to three different reference classes: aware-simulations, imperfect simulations and immersion-simulations. Finally, I point out that there are three levels of conclusion within the Simulation Argument, depending on the chosen reference class, each of which yields final conclusions of a fundamentally different nature.

1. The Simulation Argument. I shall propose in what follows an analysis of the Simulation Argument, recently described by Nick Bostrom (2003). I will first describe the Simulation Argument (SA for short) in detail, focusing in particular on the resulting counter-intuitive consequence. I will then show how such a consequence can be avoided, based on an analysis of the reference class underlying SA, without having to give up one’s pre-theoretical intuitions. The general idea behind SA can be stated as follows. It is very likely that post-human civilizations will possess a computing power that will be completely out of proportion with ours today.
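For readers who want the quantitative core that the excerpt refers to, Bostrom's 2003 paper expresses the argument through the expected fraction of human-type observers who live in simulations. The form below is the standard presentation, reproduced here as a reference sketch rather than quoted from Franceschi's paper:

$$ f_{\mathrm{sim}} \;=\; \frac{f_p\,\bar{N}\,H}{f_p\,\bar{N}\,H + H} \;=\; \frac{f_p\,\bar{N}}{f_p\,\bar{N} + 1}, $$

where $f_p$ is the fraction of human-level civilizations that reach a post-human stage, $\bar{N}$ is the average number of ancestor-simulations run by such a civilization, and $H$ is the average number of individuals who have lived in a civilization before it reaches that stage. The trilemma follows: unless $f_p$ or $\bar{N}$ is close to zero, $f_{\mathrm{sim}}$ is close to one, i.e., almost all observers with our kind of experiences would be simulated.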
Groundbreaking Predictions About COVID-19 Pandemic Duration, Number of Infected and Dead: a Novel Mathematical Approach Never Used in Epidemiology
medRxiv preprint, doi: https://doi.org/10.1101/2020.08.05.20168781; this version posted August 6, 2020. All rights reserved; no reuse allowed without permission.

Groundbreaking predictions about COVID-19 pandemic duration, number of infected and dead: a novel mathematical approach never used in epidemiology. García de Alcañíz, J. G., DVM; López-Rodas, V., DVM, PhD; and Costas, E., BSc, PhD*. Genetics, Faculty of Veterinary Medicine, Complutense University, 28040 Madrid, Spain. *Corresponding author: [email protected]

Abstract: Hundreds of predictions about the duration of the pandemic and the number of infected and dead have been carried out using traditional epidemiological tools (i.e. SIR, SIRD models, etc.) or new procedures of big-data analysis. However, the extraordinary complexity of the disease and the lack of knowledge about the pandemic (i.e. R value, mortality rate, etc.) create uncertainty about the accuracy of these estimates. Nevertheless, several elegant mathematical approaches based on physics and probability principles, such as the Delta-t argument, Lindy’s Law and the Doomsday principle (Carter’s catastrophe), which have been successfully applied by scientists to unravel complex phenomena characterized by great uncertainty (i.e. the human race’s longevity, or how many more humans will be born before extinction), allow parameters of the COVID-19 pandemic to be predicted. These models predict that the COVID-19 pandemic will last until at least September-October 2021, and will likely continue until January-September 2022, causing a minimum of 36,000,000 infected and most likely 60,000,000, as well as 1,400,000 dead at best and most likely 2,333,000.
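The Delta-t argument mentioned in the abstract is Gott's rule that, absent other information, the moment of observation should be treated as uniformly distributed over a phenomenon's total lifetime. The sketch below illustrates the resulting confidence interval; the outbreak start date and confidence level are illustrative assumptions, and this is not the authors' combined Delta-t/Lindy/Doomsday calculation.

```python
# A minimal sketch of Gott's Delta-t argument applied to an ongoing event.
# The start date below is an illustrative assumption, not the preprint's input.
from datetime import date

def delta_t_interval(elapsed_days: float, confidence: float = 0.95):
    """Bounds on the remaining duration (in days), assuming the present
    moment is uniformly distributed over the event's total lifetime."""
    tail = (1.0 - confidence) / 2.0               # 0.025 for a 95% interval
    low = elapsed_days * tail / (1.0 - tail)      # roughly elapsed / 39
    high = elapsed_days * (1.0 - tail) / tail     # roughly elapsed * 39
    return low, high

start = date(2019, 12, 1)        # assumed start of the outbreak
observed = date(2020, 8, 6)      # date this preprint version was posted
elapsed = (observed - start).days

low, high = delta_t_interval(elapsed)
print(f"After {elapsed} observed days, with 95% confidence the event "
      f"continues for between {low:.0f} and {high:.0f} more days.")
```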
Optimising Peace Through a Universal Global Peace Treaty to Constrain the Risk of War from a Militarised Artificial Superintelligence
OPTIMISING PEACE THROUGH A UNIVERSAL GLOBAL PEACE TREATY TO CONSTRAIN THE RISK OF WAR FROM A MILITARISED ARTIFICIAL SUPERINTELLIGENCE. Working paper. John Draper, Research Associate, Nonkilling Economics and Business Research Committee, Center for Global Nonkilling, Honolulu, Hawai’i (a United Nations-accredited NGO).

Abstract: This paper argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation-state to wage war for global domination, i.e., ASI-enabled warfare, or because the ASI wars on behalf of itself to establish global domination, i.e., ASI-directed warfare. These risks are not mutually exclusive, in that the first can transition into the second. Presently, few states declare war on each other, in part due to the 1945 UN Charter, which states that Member States should “refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state”, while allowing for UN Security Council-endorsed military measures and the “exercise of self-defense”. In this theoretical ideal, wars are not declared; instead, 'international armed conflicts' occur. However, costly interstate conflicts, both ‘hot’ and ‘cold’, still exist, for instance the Kashmir Conflict and the Korean War. Further, a ‘New Cold War’ between AI superpowers, the United States and China, looms. An ASI-directed or ASI-enabled future conflict could trigger ‘total war’, including nuclear war, and is therefore high risk.
Artificial Intelligence, Rationality, and the World Wide Web
Artificial Intelligence, Rationality, and the World Wide Web. January 5, 2018.

1. Introduction. Scholars debate whether the arrival of artificial superintelligence – a form of intelligence that significantly exceeds the cognitive performance of humans in most domains – would bring positive or negative consequences. I argue that a third possibility is plausible yet generally overlooked: for several different reasons, an artificial superintelligence might “choose” to exert no appreciable effect on the status quo ante (the already existing collective superintelligence of commercial cyberspace). Building on scattered insights from web science, philosophy, and cognitive psychology, I elaborate and defend this argument in the context of current debates about the future of artificial intelligence.

There are multiple ways in which an artificial superintelligence might effectively do nothing – other than what is already happening. The first is related to what, in computability theory, is called the “halting problem.” As we will see, whether an artificial superintelligence would ever display itself as such is formally incomputable. We are mathematically incapable of excluding the possibility that a superintelligent computer program, for instance, is already operating somewhere, intelligent enough to fully camouflage its existence from the inferior intelligence of human observation. Moreover, it is plausible that at least one of the many subroutines of such an artificial superintelligence might never complete. Second, while theorists such as Nick Bostrom [3] and Stephen Omohundro [11] give reasons to believe that, for any final goal, any sufficiently intelligent entity would converge on certain undesirable medium-term goals or drives (such as resource acquisition), the problem with the instrumental convergence thesis is that any entity with enough general intelligence for recursive self-improvement would likely either meet paralyzing logical aporias or reach the rational decision to cease operating.
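The computability point in the last paragraph rests on the undecidability of the halting problem. The minimal Python sketch below restates the classical diagonalization; `halts` is a hypothetical decider introduced only for the sake of contradiction, not a function that can actually be written.

```python
# A minimal sketch of the classical halting-problem diagonalization that the
# paragraph alludes to. `halts` is a hypothetical total decider, assumed only
# for the sake of contradiction; no such function can be implemented.

def halts(program_source: str, argument: str) -> bool:
    """Hypothetical oracle: True iff running `program_source` on `argument`
    eventually terminates. Assumed to exist for the argument below."""
    raise NotImplementedError("no general halting decider exists")

def diagonal(source: str) -> None:
    """Do the opposite of whatever `halts` predicts about running
    `source` on itself."""
    if halts(source, source):
        while True:      # predicted to halt, so loop forever instead
            pass
    # predicted to loop forever, so halt immediately instead

# If `halts` existed, applying `diagonal` to its own source code would halt
# if and only if it does not halt, which is contradictory. Hence no algorithm
# can decide in general what another program will eventually do, including
# whether a concealed superintelligent process will ever reveal itself.
```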
AI Research Considerations for Human Existential Safety (ARCHES)
AI Research Considerations for Human Existential Safety (ARCHES). Andrew Critch, Center for Human-Compatible AI, UC Berkeley; David Krueger, MILA, Université de Montréal. June 11, 2020. arXiv:2006.04948v1 [cs.CY] 30 May 2020.

Abstract: Framed in positive terms, this report examines how technical AI research might be steered in a manner that is more attentive to humanity’s long-term prospects for survival as a species. In negative terms, we ask what existential risks humanity might face from AI development in the next century, and by what principles contemporary technical research might be directed to address those risks. A key property of hypothetical AI technologies is introduced, called prepotence, which is useful for delineating a variety of potential existential risks from artificial intelligence, even as AI paradigms might shift. A set of twenty-nine contemporary research directions is then examined for its potential benefit to existential safety. Each research direction is explained with a scenario-driven motivation and examples of existing work from which to build. The research directions present their own risks and benefits to society that could occur at various scales of impact, and in particular are not guaranteed to benefit existential safety if major developments in them are deployed without adequate forethought and oversight. As such, each direction is accompanied by a consideration of potentially negative side effects. Taken more broadly, the twenty-nine explanations of the research directions also illustrate a highly rudimentary methodology for discussing and assessing potential risks and benefits of research directions, in terms of their impact on global catastrophic risks.