How Hard Is Artificial Intelligence? Evolutionary Arguments and Selection Effects
16 pages · PDF · 1,020 KB
Recommended publications
- UC Santa Barbara Other Recent Work
Title: Geopolitics, History, and International Relations. Permalink: https://escholarship.org/uc/item/29z457nf. Author: Robinson, William I. Publication date: 2009. Peer reviewed. eScholarship.org, powered by the California Digital Library, University of California.
Geopolitics, History, and International Relations 1(2), 2009, is an international peer-reviewed academic journal and the official journal of the Contemporary Science Association, New York. Copyright © 2009 by the Contemporary Science Association, New York. The journal seeks to explore the theoretical implications of contemporary geopolitics, with particular reference to territorial problems and issues of state sovereignty, and publishes papers on contemporary world politics and the global political economy from a variety of methodologies and approaches. Interdisciplinary and wide-ranging in scope, it also provides a forum for discussion of the latest developments in the theory of international relations and aims to promote an understanding of the breadth, depth, and policy relevance of international history. Its purpose is to stimulate and disseminate theory-aware research and scholarship in international relations throughout the international academic community. The journal offers important original contributions by outstanding scholars and has the potential to become one of the leading journals in the field, embracing all aspects of the history of relations between states and societies. Journal ranking: A on a seven-point scale (A+, A, B+, B, C+, C, D). Published twice a year by Addleton Academic Publishers, 30-18 50th Street, Woodside, New York 11377.
- Existential Risks
Analyzing Human Extinction Scenarios and Related Hazards. Dr. Nick Bostrom, Department of Philosophy, Yale University, New Haven, Connecticut 06520, U.S.A. Fax: (203) 432-7950. Phone: (203) 500-0021. Email: [email protected].
Abstract: Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from a human to a "posthuman" society is needed. Of particular importance is to know where the pitfalls are: the ways in which things could go terminally wrong. While we have had long exposure to various personal, local, and endurable global hazards, this paper analyzes a recently emerging category: that of existential risks. These are threats that could cause our extinction or destroy the potential of Earth-originating intelligent life. Some of these threats are relatively well known while others, including some of the gravest, have gone almost unrecognized. Existential risks have a cluster of features that make ordinary risk management ineffective. A final section of this paper discusses several ethical and policy implications. A clearer understanding of the threat picture will enable us to formulate better strategies.
1 Introduction. It's dangerous to be alive and risks are everywhere. Luckily, not all risks are equally serious.
- Global Challenges Foundation
Twelve risks that threaten human civilisation: Artificial Intelligence; Extreme Climate Change; Future Bad Global Governance; Global Pandemic; Global System Collapse; Major Asteroid Impact; Ecological Catastrophe; Nanotechnology; Nuclear War; Super-volcano; Synthetic Biology; Unknown Consequences.
- Human Extinction Risks in the Cosmological and Astrobiological Contexts
Milan M. Ćirković, Astronomical Observatory Belgrade, Volgina 7, 11160 Belgrade, Serbia and Montenegro. E-mail: [email protected].
Abstract: We review the subject of human extinction (in its modern form), with particular emphasis on the natural sub-category of existential risks. Enormous breakthroughs made in recent decades in our understanding of the terrestrial and cosmic environments shed new light on this old issue. In addition, our improved understanding of the extinction of other species, and the successes of the nascent discipline of astrobiology, create a mandate to elucidate the necessary conditions for the survival of complex living and/or intelligent systems. The range of topics impacted by this "astrobiological revolution" encompasses such diverse fields as anthropic reasoning, complexity theory, philosophy of mind, and the search for extraterrestrial intelligence (SETI). Therefore, we attempt to put the issue of human extinction into the wider context of a general astrobiological picture of patterns of life/complex biospheres/intelligence in the Galaxy. For instance, it seems possible to define a secularly evolving risk function facing any complex metazoan lifeforms throughout the Galaxy. This multidisciplinary approach offers a credible hope that in the very near future of humanity all natural hazards will be well understood, and effective policies for reducing or eliminating them conceived and successfully implemented. This will, in turn, open the way for new issues dealing with the interaction of sentient beings with their astrophysical environment on truly cosmological scales, issues presciently speculated upon by great thinkers such as H. G. Wells, J. B. […]
- Existential Risk Prevention as Global Priority
Global Policy, Volume 4, Issue 1, February 2013. Research Article. Nick Bostrom, University of Oxford.
Abstract: Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this article, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability.
Policy implications:
• Existential risk is a concept that can focus long-term global efforts and sustainability concerns.
• The biggest existential risks are anthropogenic and related to potential future technologies.
• A moral case can be made that existential risk reduction is strictly more important than any other global public good.
• Sustainability should be reconceptualised in dynamic terms, as aiming for a sustainable trajectory rather than a sustainable state.
• Some small existential risks can be mitigated today directly (e.g. asteroids) or indirectly (by building resilience and reserves to increase survivability in a range of extreme scenarios), but it is more important to build capacity to improve humanity's ability to deal with the larger existential risks that will arise later in this century. This will require collective wisdom, technology foresight, and the ability when necessary to mobilise a strong global coordinated response to anticipated existential risks.
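The expected-value claim in the abstract is easy to make concrete with a toy calculation. The sketch below is illustrative only: the number of potential future lives and the size of the risk reduction are assumed placeholders, not figures from Bostrom's article.

```python
# Toy expected-value arithmetic for existential risk reduction.
# All numbers are assumed placeholders, not estimates from the article.

future_lives = 1e16      # assumed count of potential future lives
risk_reduction = 1e-6    # assumed absolute reduction in extinction probability

# Expected lives saved = (reduction in extinction probability) x (lives at stake)
expected_lives_saved = risk_reduction * future_lives
print(f"Expected lives saved: {expected_lives_saved:.1e}")  # 1.0e+10
```

Even with a one-in-a-million risk reduction, the assumed stakes make the expected value enormous, which is the shape of the argument the abstract gestures at.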
- Intelligence Explosion Microeconomics
MIRI, Machine Intelligence Research Institute. Eliezer Yudkowsky, Machine Intelligence Research Institute.
Abstract: I. J. Good's thesis of the "intelligence explosion" states that a sufficiently advanced machine intelligence could build a smarter version of itself, which could in turn build an even smarter version, and that this process could continue to the point of vastly exceeding human intelligence. As Sandberg (2010) correctly notes, there have been several attempts to lay down return-on-investment formulas intended to represent sharp speedups in economic or technological growth, but very little attempt has been made to deal formally with Good's intelligence explosion thesis as such. I identify the key issue as returns on cognitive reinvestment: the ability to invest more computing power, faster computers, or improved cognitive algorithms to yield cognitive labor which produces larger brains, faster brains, or better mind designs. There are many phenomena in the world which have been argued to be evidentially relevant to this question, from the observed course of hominid evolution, to Moore's Law, to the competence over time of machine chess-playing systems, and many more. I go into some depth on some debates which then arise on how to interpret such evidence. I propose that the next step in analyzing positions on the intelligence explosion would be to formalize return-on-investment curves, so that each stance can formally state which possible microfoundations they hold to be falsified by historical observations. More generally, […]
Citation: Yudkowsky, Eliezer. 2013. Intelligence Explosion Microeconomics. Technical report 2013-1. Berkeley, CA: Machine Intelligence Research Institute.
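A minimal numeric sketch of what a return-on-cognitive-reinvestment curve could look like; the power-law growth rule and all parameter values here are assumptions for illustration, not the report's actual model.

```python
# Toy "returns on cognitive reinvestment" curve: capability c is reinvested
# each step as c -> c**k. The exponent k stands in for the return on
# reinvestment: k > 1 compounds explosively, k < 1 levels off.
# The growth rule and parameters are assumptions, not the report's model.

def trajectory(c0: float, k: float, steps: int) -> list:
    caps = [c0]
    for _ in range(steps):
        caps.append(caps[-1] ** k)  # reinvest current capability
    return caps

for k in (0.9, 1.0, 1.1):  # diminishing, flat, and accelerating returns
    print(k, ["%.3g" % c for c in trajectory(2.0, k, 6)])
```

The point of such a sketch is the one the abstract makes: which side of the critical return threshold the real curve sits on is exactly the empirical question that historical evidence would need to settle.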
- On the Likelihood of Observing Extragalactic Civilizations: Predictions from the Self-Indication Assumption
S. Jay Olson, Department of Physics, Boise State University, Boise, Idaho 83725, USA. (Dated: February 20, 2020.)
Ambitious civilizations that expand for resources at an intergalactic scale could be observable from a cosmological distance, but how likely is one to be visible to us? The question comes down to estimating the appearance rate of such things in the cosmos, a radically uncertain quantity. Despite this prior uncertainty, anthropic considerations give rise to Bayesian updates, and thus predictions. The Self-Sampling Assumption (SSA), a school of anthropic probability, has previously been used for this purpose. Here, we derive predictions from the alternative school, the Self-Indication Assumption (SIA), and point out its features. SIA favors a higher appearance rate of expansionistic life, but our existence at the present cosmic time means that such life cannot be too common (else our galaxy would long ago have been overrun). This combination squeezes our vast prior uncertainty into a few orders of magnitude. Details of the background cosmology fall out, and we are left with some stark conclusions. E.g., if the limits to technology allow a civilization to expand at speed v, the probability of at least one expanding cosmological civilization being visible on our past light cone is 1 − v³/c³. We also show how the SIA estimate can be updated from the results of a hypothetical full-sky survey that detects "n" expanding civilizations (for n ≥ 0), and calculate the implied final extent of life in the universe.
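The closing formula can be sanity-checked numerically. A minimal sketch, assuming a few example expansion speeds; the function simply evaluates the abstract's expression 1 − v³/c³.

```python
# Evaluate the abstract's visibility probability P = 1 - (v/c)^3 for a
# handful of assumed example expansion speeds (as fractions of c).

def p_visible(v_over_c: float) -> float:
    return 1.0 - v_over_c ** 3

for v in (0.1, 0.5, 0.9, 0.99):
    print(f"v = {v:.2f}c -> P(at least one visible) = {p_visible(v):.4f}")
# The faster civilizations can expand, the less likely we are to see one:
# near-lightspeed expanders arrive almost as soon as their light does.
```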
- Implications of a Search for Intergalactic Civilizations on Prior Estimates of Human Survival and Travel Speed (arXiv:2106.13348v1 [astro-ph.CO], 24 Jun 2021)
S. Jay Olson, Department of Physics, Boise State University, Boise, Idaho 83725, USA; Toby Ord, Future of Humanity Institute, University of Oxford, Oxford, UK. (Dated: June 28, 2021.)
Abstract: We present a model where some proportion of extraterrestrial civilizations expand uniformly over time to reach a cosmological scale. We then ask what humanity could infer if a sky survey were to find zero, one, or more such civilizations. We show how the results of this survey, in combination with an approach to anthropics called the Self-Indication Assumption (SIA), would shift any prior estimates of two quantities: 1) the chance that a technological civilization like ours survives to embark on such expansion, and 2) the maximum feasible speed at which it could expand. The SIA gives pessimistic estimates for both, but survey results (even null results) can reverse some of the effect.
I. Introduction: The standard motive for studying the possible behavior and technology of intelligent extraterrestrial life is to generate search targets for SETI. This has been true for the development of "expanding cosmological civilizations" [1, 2] (aka "aggressively expanding civilizations" [3], "loud aliens" [4], etc.), a behavioral […] P(q) and P(v), we show how they are updated in two steps. First, merely adopting this cosmology in the context of SIA re-weights P(q) and P(v) to lower-than-expected values of q and much lower-than-expected values of v. Specifically, P(q) → P(q)/q and P(v) → P(v)/v³, properly normalized. In the second step, we show how P(v) is changed again by the results of a search.
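The first update step described above can be illustrated on a discretized prior. A minimal sketch, assuming an arbitrary grid of speeds and a flat prior over v; only the reweighting rule P(v) → P(v)/v³ comes from the excerpt.

```python
# Sketch of the first SIA update step on a discretized prior over expansion
# speed v: P(v) -> P(v) / v^3, renormalized. The grid of speeds and the
# flat prior are assumptions for illustration.
import numpy as np

v = np.array([0.2, 0.4, 0.6, 0.8, 0.99])  # assumed speeds (fractions of c)
prior = np.full(v.shape, 1.0 / v.size)    # assumed flat prior

posterior = prior / v**3                  # re-weight toward low v
posterior /= posterior.sum()              # "properly normalized"

for vi, pi in zip(v, posterior):
    print(f"v = {vi:.2f}c : prior {1.0 / v.size:.3f} -> posterior {pi:.3f}")
```

Running this shows essentially all the posterior mass piling onto the slowest speeds, which is the "much lower-than-expected values of v" the authors describe.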
Fermi Paradox from Wikipedia, the Free Encyclopedia Jump To: Navigation, Search
[Figure: a graphical representation of the Arecibo message, humanity's first attempt to use radio waves to actively communicate its existence to alien civilizations.]
The Fermi paradox (or Fermi's paradox) is the apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilizations and humanity's lack of contact with, or evidence for, such civilizations.[1] The basic points of the argument, made by physicists Enrico Fermi and Michael H. Hart, are:
• The Sun is a young star. There are billions of stars in the galaxy that are billions of years older.
• Some of these stars likely have Earth-like planets[2] which, if the Earth is typical, may develop intelligent life.
• Presumably some of these civilizations will develop interstellar travel, as Earth seems likely to do.
• At any practical pace of interstellar travel, the galaxy can be completely colonized in just a few tens of millions of years (a back-of-the-envelope check follows this entry).
According to this line of thinking, the Earth should have already been colonized, or at least visited. But no convincing evidence of this exists. Furthermore, no confirmed signs of intelligence elsewhere have been spotted, either in our galaxy or the more than 80 billion other galaxies of the observable universe.
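The "few tens of millions of years" figure in the last bullet follows from simple division. A sketch with assumed round numbers for the galaxy's diameter and the effective speed of a colonization front; neither value comes from the article.

```python
# Back-of-the-envelope check on "a few tens of millions of years": time for
# a colonization front to cross the galaxy. Both inputs are assumed round
# figures, not values from the article.

galaxy_diameter_ly = 100_000  # assumed diameter of the Milky Way (light-years)
front_speed = 0.005           # assumed effective front speed (fraction of c,
                              # including stops to build ships and refuel)

crossing_time_years = galaxy_diameter_ly / front_speed
print(f"{crossing_time_years:.1e} years")  # 2.0e+07: tens of millions
```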
- Anthropic Shadow: Observation Selection Effects and Human Extinction Risks
Risk Analysis, Vol. 30, No. 10, 2010. DOI: 10.1111/j.1539-6924.2010.01460.x. Milan M. Ćirković, Anders Sandberg, and Nick Bostrom.
Abstract: We describe a significant practical consequence of taking anthropic biases into account in deriving predictions for rare stochastic catastrophic events. The risks associated with catastrophes such as asteroidal/cometary impacts, supervolcanic episodes, and explosions of supernovae/gamma-ray bursts are based on their observed frequencies. As a result, the frequencies of catastrophes that destroy or are otherwise incompatible with the existence of observers are systematically underestimated. We describe the consequences of this anthropic bias for estimation of catastrophic risks, and suggest some directions for future work.
Key words: anthropic principle; astrobiology; existential risks; global catastrophes; impact hazard; natural hazards; risk management; selection effects; vacuum phase transition.
1. Introduction: Existential Risks and Observation Selection Effects. Humanity faces a series of major global threats, both in the near- and in the long-term future. These are of theoretical interest to anyone who is concerned about the future of our species, but they are also of direct relevance to many practical and policy decisions we make today. […] include global nuclear war, collision of Earth with a 10-km sized (or larger) asteroidal or cometary body, intentional or accidental misuse of bio- or nanotechnologies, or runaway global warming. There are various possible taxonomies of ERs.(7) For our purposes, the most relevant division is one based on the causative agent. Thus we distinguish: (1) natural ERs (e.g., cosmic impacts, supervolcanism, nonanthropogenic climate change, supernovae, […]
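The selection effect the abstract describes can be demonstrated with a small Monte Carlo experiment. A sketch, with all rates assumed for illustration: observers average catastrophe counts only over the histories in which they survived, and so systematically underestimate the true rate.

```python
# Monte Carlo sketch of the anthropic-shadow bias: estimates of catastrophe
# frequency made only by surviving observers come out below the true rate.
# All parameters are assumed for illustration.
import random

TRUE_RATE = 0.5      # assumed per-epoch probability of a catastrophe
P_STERILIZE = 0.3    # assumed probability a catastrophe removes all observers
EPOCHS = 10
TRIALS = 100_000

survivor_estimates = []
for _ in range(TRIALS):
    count, alive = 0, True
    for _ in range(EPOCHS):
        if random.random() < TRUE_RATE:        # a catastrophe occurs
            count += 1
            if random.random() < P_STERILIZE:  # observers are eliminated
                alive = False
                break
    if alive:  # only surviving observers get to tally the historical record
        survivor_estimates.append(count / EPOCHS)

mean_est = sum(survivor_estimates) / len(survivor_estimates)
print(f"true rate {TRUE_RATE}, survivor-estimated rate {mean_est:.3f}")
```

With these assumed numbers the survivor-estimated rate comes out around 0.41 rather than 0.50: the histories containing the worst events are exactly the ones with no one left to record them.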
- Anthropic Principle (Wikipedia, the free encyclopedia)
http://en.wikipedia.org/wiki/Anthropic_principle
In astrophysics and cosmology, the anthropic principle (from the Greek anthropos, "human") is the philosophical consideration that observations of the physical Universe must be compatible with the conscious life that observes it. Some proponents of the anthropic principle reason that it explains why the Universe has the age and the fundamental physical constants necessary to accommodate conscious life. As a result, they believe it is unremarkable that the universe's fundamental constants happen to fall within the narrow range thought to be compatible with life.[1] The strong anthropic principle (SAP), as explained by Barrow and Tipler (see variants), states that this is all the case because the Universe is compelled, in some sense, for conscious life to eventually emerge. Critics of the SAP argue in favor of a weak anthropic principle (WAP) similar to the one defined by Brandon Carter, which states that the universe's ostensible fine tuning is the result of selection bias: i.e., only in a universe capable of eventually supporting life will there be living beings capable of observing any such fine tuning, while a universe less compatible with life will go unbeheld.
Contents: 1. Definition and basis; 2. Anthropic coincidences; 3. Origin; 4. Variants; 5. Character of anthropic reasoning; 6. Observational evidence; 7. Applications of the principle (7.1 the nucleosynthesis of carbon-12; 7.2 cosmic inflation; 7.3 string theory; 7.4 ice density; 7.5 spacetime); 8. The Anthropic Cosmological Principle; 9. Criticisms.
Definition and basis: The principle was formulated as a response to a series of observations that the laws of nature […]
- Resources on Existential Risk
For: Catastrophic Risk: Technologies and Policies. Berkman Center for Internet and Society, Harvard University. Bruce Schneier, Instructor. Fall 2015.
Contents: General Scholarly Discussion of Existential Risk:
Nick Bostrom (Mar 2002), "Existential risks: Analyzing human extinction scenarios and related hazards."
Jason Matheny (Oct 2007), "Reducing the risk of human extinction."
Nick Bostrom and Milan Ćirković (2008), "Introduction."
Anders Sandberg and Nick Bostrom (5 Dec 2008), "Global catastrophic risks survey."
Nick Bostrom (2009), "The future of humanity."
Nick Bostrom (Feb 2013), "Existential risk prevention as global priority."
Kira Matus (25 Nov 2014), "Existential risk: challenges for risk regulation."
Seth Baum (Dec 2014), "The great downside dilemma for risky emerging technologies."
Dennis Pamlin and Stuart Armstrong (19 Feb 2015), "Twelve risks that threaten human civilisation: The case for a new risk category."