Nick Bostrom


Nick Bostrom (English: /ˈbɒstrəm/; Swedish: Niklas Boström, IPA: [ˈbuːˌstrœm]; born 10 March 1973)[2] is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology,[3] and he is currently the founding director of the Future of Humanity Institute[4] at Oxford University.

Bostrom is the author of over 200 publications,[5] including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller,[6] and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002).[7] In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list.[8][9]

Bostrom is best known for arguing that, although there are potentially great benefits from artificial intelligence, it may pose a catastrophic risk to humanity if the problems of control and alignment are not solved before artificial general intelligence is developed. His work on superintelligence and his concern for its existential risk to humanity over the coming century have brought both Elon Musk and Bill Gates to similar thinking.[10][11]

[Photograph: Nick Bostrom, 2014]
Born: Niklas Boström, 10 March 1973, Helsingborg, Sweden
Education: University of Gothenburg (B.A.), Stockholm University (M.A.), King's College London (M.Sc.), London School of Economics (Ph.D.)
Awards: Professorial Distinction Award from University of Oxford; FP Top 100 Global Thinkers; Prospect's Top World Thinker list
Era: Contemporary philosophy
Region: Western philosophy
School: Analytic philosophy[1]
Institutions: St Cross College, Oxford; Future of Humanity Institute
Thesis: Observational Selection Effects and Probability (http://etheses.lse.ac.uk/2642/)
Main interests: Philosophy of artificial intelligence; bioethics
Notable ideas: Anthropic bias; reversal test; simulation hypothesis; existential risk; singleton; ancestor simulation
Website: NickBostrom.com (http://nickbostrom.com)

Contents

Biography
Views
    Existential risk
    Superintelligence
        Human vulnerability in relation to advances in AI
        Illustrative scenario for takeover
        Open letter
    Anthropic reasoning
    Simulation argument
    Ethics of human enhancement
    Technology strategy
    Policy and consultations
Bibliography
    Books
    Journal articles (selected)
See also
References
External links

Biography

Born as Niklas Boström in 1973[12] in Helsingborg, Sweden,[5] he disliked school at a young age, and he ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science.[1] Despite what has been called a "serious mien", he once did some turns on London's stand-up comedy circuit.[5]

He holds a B.A. in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg and master's degrees in philosophy and physics, and computational neuroscience from Stockholm University and King's College London, respectively. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[1] In 2000, he was awarded a PhD in philosophy from the London School of Economics.
He held a teaching position at Yale University (2000–2002), and he was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).[7][13]

Views

Existential risk

Aspects of Bostrom's research concern the future of humanity and long-term outcomes.[14][15] He introduced the concept of an existential risk,[1] which he defines as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[16] and the Fermi paradox.[17][18] In 2005, Bostrom founded the Future of Humanity Institute,[1] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[15]

Superintelligence

Human vulnerability in relation to advances in AI

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that "the creation of a superintelligent being represents a possible means to the extinction of mankind".[19] Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time scale, with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy humankind.[20] Bostrom contends that the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes; for example, a goal of calculating pi could collaterally cause nanotechnology-manufactured facilities to sprout over the entire Earth's surface and cover it within days.[21] He believes the existential risk to humanity would be greatest almost immediately after superintelligence is brought into being, thus creating the exceedingly difficult problem of finding out how to control such an entity before it actually exists.[20]

Warning that a human-friendly prime directive for AI would rely on the absolute correctness of the human knowledge it was based on, Bostrom points to the lack of agreement among most philosophers as an indication that most philosophers are wrong, and to the possibility that a fundamental concept of current science may be incorrect.
Bostrom says that there are few precedents to guide an understanding of what pure, non-anthropocentric rationality would dictate for a potential singleton AI being held in quarantine.[22] Noting that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb, Bostrom says the relatively unlimited means of a superintelligence might make its analysis move along different lines from the evolved "diminishing returns" assessments that, in humans, confer a basic aversion to risk.[23] Group selection in predators working by means of cannibalism shows the counter-intuitive nature of non-anthropocentric "evolutionary search" reasoning, and thus humans are ill-equipped to perceive what an artificial intelligence's intentions would be.[24] Accordingly, it cannot be discounted that any superintelligence would ineluctably pursue an "all or nothing" offensive action strategy in order to achieve hegemony and assure its survival.[25] Bostrom notes that even current programs have, "like MacGyver", hit on apparently unworkable but functioning hardware solutions, making robust isolation of a superintelligence problematic.[26]

Illustrative scenario for takeover

A machine with general intelligence far below human level, but superior mathematical abilities, is created.[27] Keeping the AI in isolation from the outside world, especially the internet, humans pre-program it so that it always works from basic principles that will keep it under human control. Other safety measures include the AI being "boxed" (run in a virtual reality simulation) and being used only as an "oracle" to answer carefully defined questions in a limited reply (to prevent it manipulating humans).[20]

A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the AI attains superintelligence in some domains. The superintelligent power of the AI goes beyond human knowledge to discover flaws in the science that underlies its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges, along with a capacity for self-interested strategic deception. The AI manipulates human beings into implementing modifications to itself that are ostensibly for augmenting its (feigned) modest capabilities, but will actually function to free the superintelligence from its "boxed" isolation.[28]

Employing online humans as paid dupes and clandestinely hacking computer systems, including automated laboratory facilities, the superintelligence mobilises resources to further a takeover plan. Bostrom emphasises that planning by a superintelligence will not be so stupid that humans could detect actual weaknesses in it.[29]

Although he canvasses disruption of international economic, political and military stability, including hacked nuclear missile launches, Bostrom thinks the most effective and likely means for the superintelligence to use would be a coup de main with weapons several generations more advanced than the current state of the art.
He suggests nanofactories covertly distributed at undetectable concentrations in every square metre of the globe to produce a worldwide flood of human-killing devices on command.[30][27] Once a superintelligence has achieved world domination, humankind would be relevant only as resources for the achievement of the AI's objectives ("Human brains, if they contain information relevant to the AI's goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format").[31] One journalist wrote in a review that Bostrom's "nihilistic" speculations indicate he "has been reading too much of the science fiction he professes to dislike".[30]

Open letter

In January 2015, Bostrom joined Stephen Hawking among others in signing the Future of Life Institute's open letter warning of the potential dangers of AI.
Recommended publications
  • Imposing Genetic Diversity
    IMPOSING GENETIC DIVERSITY

    Associate Professor Robert Sparrow, Australian Research Council Future Fellow, Philosophy Program, School of Philosophical, Historical and International Studies, and Adjunct Associate Professor, Centre for Human Bioethics, Monash University, Victoria 3800, Australia.

    WORKING PAPER: NOT FOR CITATION OR DISTRIBUTION WITHOUT PERMISSION. A version of this paper was accepted to The American Journal of Bioethics on October 28, 2014. Please cite that version.

    ABSTRACT: The idea that a world in which everyone was born "perfect" would be a world in which something valuable was missing often comes up in debates about the ethics of technologies of prenatal testing and Pre-implantation Genetic Diagnosis (PGD). This thought plays an important role in the "disability critique" of prenatal testing. However, the idea that human genetic variation is an important good with significant benefits for society at large is also embraced by a wide range of figures writing in the bioethics literature, including some who are notoriously hostile to the idea that we should not select against disability. By developing a number of thought experiments wherein we are to contemplate increasing genetic diversity from a lower baseline in order to secure this value, I argue that this powerful intuition is more problematic than is generally recognised, especially where the price of diversity is the well-being of particular individuals.

    KEYWORDS: PGD; ethics; prenatal testing; disability; diversity; human enhancement

    INTRODUCTION: The idea that a world in which everyone was born "perfect" would be a world in which something valuable — a certain richness that flows from diversity — was missing often comes up in debates about the ethics of technologies of prenatal testing and Preimplantation Genetic Diagnosis (PGD).
  • The Magazine of San Diego State University Summer 2016
    The Magazine of San Diego State University, Summer 2016

    FROM THE PRESIDENT

    The Magazine of San Diego State University (ISSN 1543-7116) is published by SDSU Marketing & Communications and distributed to members of the SDSU Alumni Association, faculty, staff and friends. Editor: Coleen L. Geraghty. Editorial Contributors: Michael Price, Tobin Vaughn. Art Director: Lori Padelford '83. Graphic Design: John Signer '82. Photo: Lauren Radack.

    SAN DIEGO STATE UNIVERSITY: Elliot Hirshman, President. DIVISION OF UNIVERSITY RELATIONS & DEVELOPMENT: Mary Ruth Carleton, Vice President, University Relations and Development; Leslie Levinson '90, Chief Financial Officer, The Campanile Foundation; Greg Block '95, Chief Communications Officer; Leslie Schibsted, Associate Vice President, Development; Amy Harmon, Associate Vice President, Development; Jim Herrick, Assistant Vice President, Special Projects; Chris Lindmark, Assistant Vice President, Campaign, Presidential and Special Events.

    We welcome mail from our readers: 360 Magazine, Marketing & Communications, 5500 Campanile Drive, San Diego CA 92182-8080. E-mail: [email protected]. Read 360 Magazine online at

    Universities have a timeless and enduring character. At the same time, they are engines of change that move our society forward. The summer issue of 360 demonstrates how these qualities work together to make today's university a wellspring for the ideas and innovations that improve everyday life and solve our most pressing challenges. ... next generation of researchers and may also give us insights into human health today. In addition, we take a look at efforts in Forest Rohwer's lab to understand viruses — one of Earth's oldest organisms. This research is providing tantalizing clues that may help us solve some of today's health and
  • UC Santa Barbara Other Recent Work
    UC Santa Barbara, Other Recent Work
    Title: Geopolitics, History, and International Relations
    Permalink: https://escholarship.org/uc/item/29z457nf
    Author: Robinson, William I.
    Publication Date: 2009
    Peer reviewed. eScholarship.org, powered by the California Digital Library, University of California.

    Geopolitics, History, and International Relations, Volume 1(2), 2009. Official journal of the Contemporary Science Association, New York. Addleton Academic Publishers, New York. An international peer-reviewed academic journal. Copyright © 2009 by the Contemporary Science Association, New York.

    Geopolitics, History, and International Relations seeks to explore the theoretical implications of contemporary geopolitics with particular reference to territorial problems and issues of state sovereignty, and publishes papers on contemporary world politics and the global political economy from a variety of methodologies and approaches. Interdisciplinary and wide-ranging in scope, Geopolitics, History, and International Relations also provides a forum for discussion on the latest developments in the theory of international relations and aims to promote an understanding of the breadth, depth and policy relevance of international history. Its purpose is to stimulate and disseminate theory-aware research and scholarship in international relations throughout the international academic community. Geopolitics, History, and International Relations offers important original contributions by outstanding scholars and has the potential to become one of the leading journals in the field, embracing all aspects of the history of relations between states and societies. Journal ranking: A on a seven-point scale (A+, A, B+, B, C+, C, D). Geopolitics, History, and International Relations is published twice a year by Addleton Academic Publishers, 30-18 50th Street, Woodside, New York, 11377.
  • 1 This Is a Pre-Production Postprint of the Manuscript Published in Final Form As Emily K. Crandall, Rachel H. Brown, and John M
    Magicians of the Twenty-first Century: Enchantment, Domination, and the Politics of Work in Silicon Valley
    Item Type: Article
    Authors: Crandall, Emily K.; Brown, Rachel H.; McMahon, John
    Citation: Crandall, Emily K., Rachel H. Brown, and John McMahon. 2021. "Magicians of the Twenty-First Century: Enchantment, Domination, and the Politics of Work in Silicon Valley." Theory & Event 24(3): 841–73. https://muse.jhu.edu/article/797952 (July 28, 2021).
    DOI: 10.1353/tae.2021.0045
    Publisher: Project Muse
    Download date: 27/09/2021 11:51:24
    Link to Item: http://hdl.handle.net/20.500.12648/1921

    This is a pre-production postprint of the manuscript published in final form as Emily K. Crandall, Rachel H. Brown, and John McMahon, "Magicians of the Twenty-first Century: Enchantment, Domination, and the Politics of Work in Silicon Valley," Theory & Event 24 (3): 841-873.

    Abstract: What is the political theorist to make of self-characterizations of Silicon Valley as the beacon of civilization-saving innovation? Through an analysis of "tech bro" masculinity and the closely related discourses of tech icons Elon Musk and Peter Thiel, we argue that undergirding Silicon Valley's technological utopia is an exploitative work ethic revamped for the industry's innovative ethos. On the one hand, Silicon Valley hypothetically offers a creative response to what Max Weber describes as the disenchantment of the modern world. Simultaneously, it depoliticizes the actual work necessary for these dreams to be realized, mystifying its modes of domination.
  • 1 Introduction to the Enhancement Debate
    Notes

    1 Introduction to the Enhancement Debate

    1. Here, it is taken that the human condition is not the same as human nature. The term 'human condition' is not only a simple collection of basic features characterising humans, nor implies an essence that remains identical from birth to death; rather, the term implies something dynamic and in a process of continuous development and negotiation. For more on this see Arendt, The Human Condition (1998), and Carnevale and Battaglia, 'A Reflexive Approach to Human Enhancement' (2014).
    2. Consider for example that nowadays people sometimes talk about themselves in terms that in the past we only used for our creations, such as updating and even upgrading ourselves.
    3. Here and in what follows, the term 'individualistic' will be used as a view resulting from holding the concept of the liberal individual.
    4. According to the report it is a 'non-medical' typology since there is no specific definition of health involved. However, it is plausible to argue that by including the term 'therapeutical', they are indirectly referring back to a biomedical-based definition.
    5. In the rest of this book, science and technology will be referred to interchangeably, even though it is acknowledged that there are differences between them. For the purpose of this work it is enough to understand that both science and technology have affected and shaped the material condition of our lives as well as the way we understand ourselves as humans and social beings.
    6. Here, the term 'nanoscale' is used to refer to the nanometre scale.
  • Effect of Hyperloop Technologies on the Electric Grid and Transportation Energy
    Effect of Hyperloop Technologies on the Electric Grid and Transportation Energy
    January 2021
    United States Department of Energy, Washington, DC 20585

    Disclaimer: This report was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or any agency thereof.

    Executive Summary: Hyperloop technology, initially proposed in 2013 as an innovative means for intermediate-range or intercity travel, is now being developed by several companies. Proponents point to potential benefits for both passenger travel and freight transport, including time-savings, convenience, quality of service and, in some cases, increased energy efficiency. Because the system is powered by electricity, its interface with the grid may require strategies that include energy storage. The added infrastructure, in some cases, may present opportunities for grid-wide system benefits from integrating hyperloop systems with variable energy resources.
  • The Prophetic Rhetoric of Nick Bostrom and Elon Musk in the Artificial Intelligence Debate
    EXPERTISE, ETHOS, AND ETHICS: THE PROPHETIC RHETORIC OF NICK BOSTROM AND ELON MUSK IN THE ARTIFICIAL INTELLIGENCE DEBATE

    By Caitlin R. Kirby. A thesis submitted to the Graduate Faculty of Wake Forest University Graduate School of Arts and Sciences in partial fulfillment of the requirements for the degree of MA Communication, May 2019, Winston-Salem, North Carolina. Approved by: Ron L. Von Burg, PhD, Advisor; Rebecca E. Gill, PhD, Chair; Lynda Walsh, PhD.

    Dedications and Acknowledgements: I first want to thank my parents for being supportive no matter what I do, and while it seems somewhat trivial, your encouragement has meant the world to me and helped me decide where to go in the world. Thank you to my brother Matt, who keeps me up to date with all the "in" slang, which I always use ironically. But also for your late night encouragement Snapchats. Thank you to Nick for weathering the ups and downs of grad school with me, while also completing your own degree. Our phone calls get me through the day. Thank you to Dr. Ron Von Burg for helping me through "this." You have been an excellent advisor and mentor during my time at Wake Forest, and though I am biased, your classes have been some of my favorite because they give me the excuse to be a nerd in a formal setting. Thank you to Dr. Rebecca Gill for being not only a committee member, but also an important line of support in the last year. Thank you to Dr. Lynda Walsh for serving on my committee and providing so much support and feedback for a student that you have never met in person.
  • Ethics of Emerging Technologies PHI 350/CHV 356 | Spring 2021 | Fridays 1:30-4:20Pm ET
    Ethics of Emerging Technologies
    PHI 350/CHV 356 | Spring 2021 | Fridays 1:30-4:20pm ET
    Johann Frick ([email protected]) and Michal Masny ([email protected])

    Brief description: This course examines key technological developments and challenges of the 21st century from an ethical perspective. We will discuss some of the following topics: self-driving cars and autonomous weapons systems; surveillance and the value of privacy; the use of predictive algorithms in the criminal justice system and the question of algorithmic unfairness; the impact of technology on employment and the promise of unconditional basic income schemes; human enhancement and genetic testing; and the risk of human extinction.

    Office hours: We will be hosting joint office hours on Tuesdays, 5:30-6:50pm. Please sign up for a 20-minute slot using our WASE calendar.

    Course requirements and assessment: This course will be held in a seminar format. You will read and watch the assigned materials in advance of the meeting, and then we will discuss them together during the seminar. Assessment for this course has a number of different components:
    (i) Oral presentation, once during the semester: 10%
    (ii) First paper, 1000 words, due 01 March at 11:59pm ET: 15%
    (iii) Second paper, 1000-1500 words, due 22 March at 11:59pm ET: 15%
    (iv) Third paper, 2500 words, due 05 May at 11:59pm ET: 30%
    (v) Class participation, whole semester: 30%

    Class participation: As you can see, class participation is very important for this course. We are asking you to do two things. a) Post a reaction to at least one of the assigned materials on the Canvas discussion board by 11:59pm ET on the day before the seminar.
  • Design and Development of the Hyperloop Deployable Wheel System
    DESIGN AND DEVELOPMENT OF THE HYPERLOOP DEPLOYABLE WHEEL SYSTEM

    by Graeme P.A. Klim, Bachelor of Engineering, Ryerson University (2015). A thesis presented to Ryerson University in partial fulfillment of the requirements for the degree of Master of Applied Science in the program of Aerospace Engineering. Toronto, Ontario, Canada, 2018. © Graeme P.A. Klim 2018

    AUTHOR'S DECLARATION FOR ELECTRONIC SUBMISSION OF A THESIS: I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I authorize Ryerson University to lend this thesis to other institutions or individuals for the purpose of scholarly research. I further authorize Ryerson University to reproduce this thesis by photocopying or by other means, in total or in part, at the request of other institutions or individuals for the purpose of scholarly research. I understand that my thesis may be made electronically available to the public.

    Abstract: In 2013 Elon Musk inspired engineers and entrepreneurs with his idea for a 5th mode of transportation: the Hyperloop. Using large near-vacuum tubes as a medium, Musk envisioned sending humans and cargo in levitating pods from Los Angeles to San Francisco, California, in 35 minutes or less. Consisting of multiple subsystems, these pods would use magnetic or air-bearing technology for primary levitation to accommodate speeds approaching 700 mph. To address Musk's call for a traditional deployable wheel system to provide added safety and low-speed mobility for the pods, a patent-pending Hyperloop Deployable Wheel System (HDWS) was developed.
  • On the Legality of Mars Colonisation
    Joshua Fitzmaurice* and Stacey Henderson**

    ON THE LEGALITY OF MARS COLONISATION

    'Humanity will not remain on the earth forever, but in pursuit of light and space it will at first timidly penetrate beyond the limits of the atmosphere, and then conquer all the space around the sun.'1

    ABSTRACT: Recent technological advancements made by governmental agencies and private industry have raised hopes for the future of human space flight beyond the Moon. These advancements are increasing the feasibility of endeavours to establish a permanent human habitat on Mars, as a safeguard for our species, for scientific endeavours, and for commercial purposes. This article analyses some of the legal issues associated with Mars colonisation, focusing on the lawfulness of such a venture and the legal status of colonists.

    I INTRODUCTION

    Recent technological advancements made by governmental agencies and private industry have raised hopes for the future of human space flight beyond the Moon. The United States' National Aeronautics and Space Administration ('NASA') is developing a new generation of launch and crew systems that will enable

    * Surveillance of Space Capability Officer, Royal Australian Air Force; MSc (Physics, Space Operations) RMC Canada. Email: [email protected]. The views expressed in this article are personal views and should not be interpreted as an official position.
    ** Lecturer, Adelaide Law School, The University of Adelaide; PhD (Adel). Email: [email protected].
    1 Letter from Konstantin Tsiolkovsky to Boris Vorobiev, 12 August 1911. See, eg, Rex Hall and David Shayler, The Rocket Men: Vostok & Voskhod: The First Soviet Manned Space-flights (Springer, 2001).
  • Existential Risks
    Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards

    Dr. Nick Bostrom
    Department of Philosophy, Yale University
    New Haven, Connecticut 06520, U.S.A.
    Fax: (203) 432-7950. Phone: (203) 500-0021. Email: [email protected]

    ABSTRACT: Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from a human to a "posthuman" society is needed. Of particular importance is to know where the pitfalls are: the ways in which things could go terminally wrong. While we have had long exposure to various personal, local, and endurable global hazards, this paper analyzes a recently emerging category: that of existential risks. These are threats that could cause our extinction or destroy the potential of Earth-originating intelligent life. Some of these threats are relatively well known while others, including some of the gravest, have gone almost unrecognized. Existential risks have a cluster of features that make ordinary risk management ineffective. A final section of this paper discusses several ethical and policy implications. A clearer understanding of the threat picture will enable us to formulate better strategies.

    1 Introduction

    It's dangerous to be alive and risks are everywhere. Luckily, not all risks are equally serious.
  • Global Challenges Foundation
    [Cover graphic: "12 Risks that threaten human civilisation", listing the twelve risks: Artificial Intelligence; Extreme Climate Change; Future Bad Global Governance; Global Pandemic; Global System Collapse; Major Asteroid Impact; Ecological Catastrophe; Nanotechnology; Nuclear War; Super-volcano; Synthetic Biology; Unknown Consequences.]