Exploring Posthuman Conundrums and Policy Recommendations for Fostering Peaceful Co-Existence with Sapio-Sentient Intelligences

Total Pages: 16

File Type: PDF, Size: 1020 KB

The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters.

Citation: Anderson Ravindran, Nichole. 2018. Exploring Posthuman Conundrums and Policy Recommendations for Fostering Peaceful Co-Existence With Sapio-Sentient Intelligences. Master's thesis, Harvard Extension School.

Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:42004060

Terms of Use: This article was downloaded from Harvard University's DASH repository, and is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA

Exploring Posthuman Conundrums and Policy Recommendations for Fostering Peaceful Co-Existence with Sapio-Sentient Intelligences

N. Anderson Ravindran

A Thesis in the Field of International Relations for the Degree of Master of Liberal Arts in Extension Studies

Harvard University
November 2018

© 2018 Nichole Anderson Ravindran

Abstract

The potential emergence of artificial general intelligence and artificial superintelligence at some unforeseen point in the future will put humanity in the position of no longer being the most intelligent species on Earth. These general intelligences and superintelligences would have a high probability of being conscious, sapient, and sentient, or, as they are referred to in this thesis, sapio-sentient intelligences or sapio-sentients. This creates a conundrum for humanity in terms of how they can best prepare for this kind of singularity event.
Although humanity has no way of knowing how sapio-sentients would engage with them in the initial period of their existence, the only thing humanity would be able to control in this future scenario is how they treat them. To counter the potential existential risks sapio-sentients would pose for humanity, present-day AI creators and researchers, decision theorists, and philosophers are working in concert to generate and implement control measures intended to increase humanity's likelihood of overcoming such obstacles. And yet, the elephant in the room is that the control approach is more than likely to catalyze an adverse relationship of indifference between humanity and sapio-sentients.

The challenge of this study is to determine whether a strong correlation exists between the control approach and the ill treatment sapio-sentients receive due to their lack of posthuman rights and protections. Since sapio-sentient intelligences do not exist in the present world, the alternative worlds of HBO's Westworld and AMC's Humans were employed to study the interactions between the two groups. After reviewing the data, the findings do show a strong correlation between the control approach and the ill treatment sapio-sentients receive. A potential solution is provided to counter the adverse effects of the control approach for present-day and future human–sapio-sentient interactions, and some posthuman policy recommendations are proposed for how to protect sapio-sentients and humanity during the initial-to-transition period of their emergence until a better long-term solution can be found.

Dedication

This thesis is dedicated to the alien superintelligences, future sapio-sentients, present-day artificial intelligences, the Cosmos, and, last but not least, my family. You inspire me to dream beyond the stars and to never forget to strive to be the best version of me.
Acknowledgments

I would like to thank Professor Rebecca Lemov, my thesis director, from the Department of the History of Science, for all your patience, understanding, encouragement, and support as I completed this project. I truly thank you for taking a chance on my thesis. You have made this challenging and exciting subject matter a rewarding experience for me.

To Doug Bond, my research advisor, and Joe Bond: you have both been some of my greatest champions while I worked towards completing my program. I cannot thank you enough for your support as advisors and as my professors. You gave me creative latitude to explore seemingly unrelated aspects of varying subjects. Thank you both for challenging me to new academic heights.

In addition, I would like to thank Professor David C. Parkes, George F. Colony Professor of Computer Science at the School of Engineering and Applied Sciences, for taking the time to meet with me, listen to my ideas, and provide insight on my project. I am truly appreciative of your time.

To my academic advisors, Chuck Houston and Sarah Powell: I would like to thank you both for helping me complete my thesis proposal. Your support has meant so much to me.

Lastly, to my family, my DCE family, friends, the library, research, and DCE administrative staffs, as well as all of those I have met along the way: thank you all for going on this journey with me. I could not have done this without you.

Table of Contents

Dedication ................ v
Acknowledgments ................ vi
List of Figures ................ ix
List of Tables ................ x
Operational Definitions ................ xi
I. Introduction: Realizing Future Policy in Philosophical Thought Experiments ................ 1
II. In An Effort to Avoid Skynet: The AI (Superintelligence) Control Problem ................ 10
III. In Preparation of Exploring Posthuman Realities: Research Methodology and Limitations ................ 57
    Phase One – Data Collection ................ 59
    Phase Two – Analysis ................ 64
    Phase Three – Solution and Policy Recommendations ................ 79
    Limitations ................ 80
IV. Findings: An Account of Experiences of Indifference in Westworld and Humans ................ 84
V. Learning through Posthuman Realities: Unconscious Lessons from Terrestrial Human–Sapio-Sentient Transactional Interactions ................ 90
    Prerequisite Grounding: The Interconnectivity of Intelligence, the External Environment and Experiential Learning ................ 91
    Maeve's Pattern Recognition in Westworld ................ 147
    Hester's Pattern Recognition within the Humans Ecosystem ................ 162
    Reaching an Inference about Humans ................ 179
    To Err is Human: The Good, the Indifferent, and the Utterly Confusing ................ 199
    Potential Solution: Rational Compassion and the Mutual-Commensal Approach ................ 212
VI. Back from the Future: Posthuman Implications in the Present-Day Human World ................ 214
VII. Posthuman Policy Recommendations ................ 236
Bibliography ................ 243

List of Figures

Figure 1. Control Measures of the Control Approach ................ 33
Figure 2. Asimov's Three Laws of Robotics Webcomic ................ 69
Figure 3. Data Analysis Plan ................ 71
Figure 4. Multilayered Unit of Analysis ................ 72
Figure 5. Space Map for Science Fiction/Philosophical Thought Experiment ................ 76
Figure 6. Hypothesized Relationship and Internal Relationships ................ 77
Figure 7. Outcomes − Real and Potential − of the Hypothesized Relationship ................ 78
Figure 8. Emphasized Relationship for Testing Hypothesized Relationship ................ 79
Figure 9. Map of Space of Possible Minds ................ 81
Figure 10. Agent-Environment Framework of Intelligence ................
Recommended publications
  • TEDx – What's Happening with Artificial Intelligence?
    What's Happening With Artificial Intelligence? Steve Omohundro, Ph.D. (PossibilityResearch.com, SteveOmohundro.com, SelfAwareSystems.com)

    Multi-Billion Dollar Investments:
    • 2013 Facebook – AI lab
    • 2013 eBay – AI lab
    • 2013 Allen Institute for AI
    • 2014 IBM – $1 billion in Watson
    • 2014 Google – $500 million, DeepMind
    • 2014 Vicarious – $70 million
    • 2014 Microsoft – Project Adam, Cortana
    • 2014 Baidu – Silicon Valley
    • 2015 Fanuc – Machine Learning for Robotics
    • 2015 Toyota – $1 billion, Silicon Valley
    • 2016 OpenAI – $1 billion, Silicon Valley

    McKinsey: AI and Robotics to 2025 – $50 trillion (US GDP is $18 trillion). Source: http://www.mckinsey.com/insights/business_technology/disruptive_technologies

    The human brain: 86 billion neurons; the Connectome.

    1957: Rosenblatt's "Perceptron" – "The embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." (https://en.wikipedia.org/wiki/Perceptron)

    1986: Backpropagation
  • Artificial Intelligence: Distinguishing Between Types & Definitions
    19 NEV. L.J. 1015, MARTINEZ 5/28/2019 10:48 AM

    ARTIFICIAL INTELLIGENCE: DISTINGUISHING BETWEEN TYPES & DEFINITIONS
    Rex Martinez*

    "We should make every effort to understand the new technology. We should take into account the possibility that developing technology may have important societal implications that will become apparent only with time. We should not jump to the conclusion that new technology is fundamentally the same as some older thing with which we are familiar. And we should not hastily dismiss the judgment of legislators, who may be in a better position than we are to assess the implications of new technology." –Supreme Court Justice Samuel Alito1

    TABLE OF CONTENTS
    INTRODUCTION ................ 1016
    I. WHY THIS MATTERS ................ 1018
    II. WHAT IS ARTIFICIAL INTELLIGENCE? ................ 1023
        A. The Development of Artificial Intelligence ................ 1023
        B. Computer Science Approaches to Artificial Intelligence ................ 1025
        C. Autonomy ................ 1026
        D. Strong AI & Weak AI ................ 1027
    III. CURRENT STATE OF AI DEFINITIONS ................ 1029
        A. Black's Law Dictionary ................ 1029
        B. Nevada ................
  • Future of Research on Catastrophic and Existential Risk
    ORE Open Research Exeter

    TITLE: Working together to face humanity's greatest threats: Introduction to The Future of Research in Catastrophic and Existential Risk
    AUTHORS: Currie, AM; Ó hÉigeartaigh, S
    JOURNAL: Futures
    DEPOSITED IN ORE: 06 February 2019
    This version available at http://hdl.handle.net/10871/35764

    COPYRIGHT AND REUSE: Open Research Exeter makes this work available in accordance with publisher policies.
    A NOTE ON VERSIONS: The version presented here may differ from the published version. If citing, you are advised to consult the published version for pagination, volume/issue and date of publication.

    Working together to face humanity's greatest threats: Introduction to The Future of Research on Catastrophic and Existential Risk.
    Adrian Currie & Seán Ó hÉigeartaigh
    Penultimate Version, forthcoming in Futures

    Acknowledgements: We would like to thank the authors of the papers in the special issue, as well as the referees who provided such constructive and useful feedback. We are grateful to the team at the Centre for the Study of Existential Risk who organized the first Cambridge Conference on Catastrophic Risk, where many of the papers collected here were originally presented, and whose multi-disciplinary expertise was invaluable for making this special issue a reality. We'd like to thank Emma Bates, Simon Beard and Haydn Belfield for feedback on drafts. Ted Fuller, Futures' Editor-in-Chief, also provided invaluable guidance throughout. The Conference, and a number of the publications in this issue, were made possible through the support of a grant from the Templeton World Charity Foundation (TWCF); the conference was also supported by a supplementary grant from the Future of Life Institute.
  • 2018 – Volume 6, Numbers 2 & 3
    THE POPULAR CULTURE STUDIES JOURNAL, VOLUME 6, NUMBERS 2 & 3, 2018

    Editor: Norma Jones (Liquid Flicks Media, Inc./IXMachine)
    Managing Editor: Julia Largent (McPherson College)
    Assistant Editor: Garret L. Castleberry (Mid-America Christian University)
    Copy Editor: Kevin Calcamp (Queens University of Charlotte)
    Reviews Editor: Malynnda Johnson (Indiana State University)
    Assistant Reviews Editor: Jessica Benham (University of Pittsburgh)

    Please visit the PCSJ at: http://mpcaaca.org/the-popular-culture-studies-journal/

    The Popular Culture Studies Journal is the official journal of the Midwest Popular and American Culture Association. Copyright © 2018 Midwest Popular and American Culture Association. All rights reserved. MPCA/ACA, 421 W. Huron St Unit 1304, Chicago, IL 60654

    Cover credit: Cover Artwork: "Bump in the Night" by Brent Jones © 2018, Courtesy of Pixabay/Kellepics

    EDITORIAL ADVISORY BOARD
    Anthony Adah, Minnesota State University, Moorhead
    Paul Booth, DePaul University
    Gary Burns, Northern Illinois University
    Anne M. Canavan, Salt Lake Community College
    Brian Cogan, Molloy College
    Ashley M. Donnelly, Ball State University
    Leigh H. Edwards, Florida State University
    Katie Fredicks, Rutgers University
    Art Herbig, Indiana University – Purdue University, Fort Wayne
    Andrew F. Herrmann, East Tennessee State University
    Jesse Kavadlo, Maryville University of St. Louis
    Kathleen A. Kennedy, Missouri State University
    Sarah McFarland Taylor, Northwestern University
    Kit Medjesky, University of Findlay
    Carlos D. Morrison, Alabama State University
    Salvador Murguia, Akita International
  • An Open Letter to the United Nations Convention on Certain Conventional Weapons
    An Open Letter to the United Nations Convention on Certain Conventional Weapons

    As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm.

    We warmly welcome the decision of the UN's Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations. We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.

    We regret that the GGE's first meeting, which was due to start today, has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

    Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora's box is opened, it will be hard to close.
  • Between Ape and Artilect
    Between Ape and Artilect: Conversations with Pioneers of Artificial General Intelligence and Other Transformative Technologies
    Interviews Conducted and Edited by Ben Goertzel

    This work is offered under the following license terms: Creative Commons: Attribution-NonCommercial-NoDerivs 3.0 Unported (CC-BY-NC-ND-3.0). See http://creativecommons.org/licenses/by-nc-nd/3.0/ for details.
    Copyright © 2013 Ben Goertzel. All rights reserved.

    "Man is a rope stretched between the animal and the Superman – a rope over an abyss." – Friedrich Nietzsche, Thus Spake Zarathustra

    Table of Contents
    Introduction ................ 7
    Itamar Arel: AGI via Deep Learning ................ 11
    Pei Wang: What Do You Mean by "AI"? ................ 23
    Joscha Bach: Understanding the Mind ................ 39
    Hugo DeGaris: Will There be Cyborgs? ................ 51
    DeGaris Interviews Goertzel: Seeking the Sputnik of AGI ................ 61
    Linas Vepstas: AGI, Open Source and Our Economic Future ................ 89
    Joel Pitt: The Benefits of Open Source for AGI ................ 101
    Randal Koene: Substrate-Independent Minds ................ 107
    João Pedro de Magalhães: Ending Aging ................
  • Challenges to the Omohundro—Bostrom Framework for AI Motivations
    Challenges to the Omohundro–Bostrom framework for AI motivations
    Olle Häggström1

    Abstract

    Purpose. This paper contributes to the futurology of a possible artificial intelligence (AI) breakthrough by reexamining the Omohundro–Bostrom theory of instrumental vs. final AI goals. Does that theory, along with its predictions for what a superintelligent AI would be motivated to do, hold water?

    Design/methodology/approach. The standard tools of systematic reasoning and analytic philosophy are used to probe possible weaknesses of Omohundro–Bostrom theory from four different directions: self-referential contradictions, Tegmark's physics challenge, moral realism, and the messy case of human motivations.

    Findings. The two cornerstones of Omohundro–Bostrom theory – the orthogonality thesis and the instrumental convergence thesis – are both open to various criticisms that question their validity and scope. These criticisms are however far from conclusive: while they do suggest that a reasonable amount of caution and epistemic humility should be attached to predictions derived from the theory, further work will be needed to clarify its scope and to put it on more rigorous foundations.

    Originality/value. The practical value of being able to predict AI goals and motivations under various circumstances cannot be overstated: the future of humanity may depend on it. Currently the only framework available for making such predictions is Omohundro–Bostrom theory, and the value of the present paper is to demonstrate its tentative nature and the need for further scrutiny.

    1. Introduction

    Any serious discussion of the long-term future of humanity needs to take into account the possible influence of radical new technologies, or risk turning out irrelevant and missing all the action due to implicit and unwarranted assumptions about technological status quo.
  • Global Catastrophic Risks 2016
    Global Challenges Foundation
    Global Catastrophic Risks 2016
    © Global Challenges Foundation/Global Priorities Project 2016

    THE GLOBAL CHALLENGES FOUNDATION works to raise awareness of global catastrophic risks. It is primarily focused on climate change, other environmental degradation and politically motivated violence, as well as how these threats are linked to poverty and rapid population growth. Against this background, the Foundation also works to both identify and stimulate the development of good proposals for a management model – a global governance – able to decrease, and at best eliminate, these risks.

    THE GLOBAL PRIORITIES PROJECT helps decision-makers effectively prioritise ways to do good. We achieve this both by advising decision-makers on programme evaluation methodology and by encouraging specific policies. We are a collaboration between the Centre for Effective Altruism and the Future of Humanity Institute, part of the University of Oxford.

    Authors: Owen Cotton-Barratt*†, Sebastian Farquhar*, John Halstead*, Stefan Schubert*, Andrew Snyder-Beattie†
    (* = The Global Priorities Project; † = The Future of Humanity Institute, University of Oxford)
    The views expressed in this report are those of the authors. Their statements are not necessarily endorsed by the affiliated organisations.
    Graphic design: Accomplice/Elinor Hägg

    Definition: Global Catastrophic Risk – risk of events or processes that would lead to the deaths of approximately a tenth of the world's population.

    Contents
    Foreword ................ 8
    Introduction ................ 10
    Executive summary ................ 12
    1. An introduction to global catastrophic risks ................ 20
    2. What are the most important global catastrophic risks? ................ 28
        Catastrophic climate change ................ 30
        Nuclear war ................ 36
        Natural pandemics ................ 42
        Exogenous risks ................ 46
        Emerging risks ................ 52
        Other risks and unknown risks ................ 64
        Our assessment of the risks ................ 66
    3. ................
  • Westworld Opens on Foxtel
    Media Release: Friday, August 12, 2016

    Westworld opens on Foxtel
    Visit the ultimate resort from 12pm AEDST Monday, October 3 - same time as the US - only on showcase

    Westworld, HBO's highly anticipated reality-bending drama, starring Anthony Hopkins, Evan Rachel Wood and James Marsden, will arrive on Foxtel on Monday, October 3 at 12pm AEDST, same time as the US, with a prime-time encore at 8.30pm, only on showcase.

    Set at the intersection of the near future and the reimagined past, Westworld, based on the 1973 film, is a dark odyssey that looks at the dawn of artificial consciousness and the evolution of sin, and explores a world in which every human appetite, no matter how noble or depraved, can be indulged. The 10-episode season of Westworld is an ambitious and highly imaginative drama that elevates the concept of adventure and thrill-seeking to a new, ultimately dangerous level. In the futuristic fantasy park known as Westworld, a group of android "hosts" deviate from their programmers' carefully planned scripts in a disturbing pattern of aberrant behaviour.

    Westworld's cast is led by Anthony Hopkins (Noah; Oscar® winner for The Silence of the Lambs) as Dr. Robert Ford, the founder of Westworld, who has an uncompromising vision for the park and its evolution; Ed Harris (Snowpiercer; Golden Globe winner for HBO's Game Change; Oscar® nominee for Pollock and Apollo 13) as The Man in Black, the distillation of pure villainy into one man; Evan Rachel Wood (HBO's Mildred Pierce and True Blood) as Dolores Abernathy, a provincial rancher's daughter who starts to discover her idyllic existence is an elaborately constructed lie; James Marsden (The D Train, the X-Men films) as Teddy Flood, a new arrival with a close and recurring bond with Dolores; Thandie Newton (Half of a Yellow Sun, Mission: Impossible II) as Maeve Millay, a madam with a knack for survival; and Jeffrey Wright (The Hunger Games films; HBO's Confirmation and Boardwalk Empire) as Bernard Lowe, the brilliant and quixotic head of Westworld's programming division.
  • Artificial Intelligence: a Roadmap for California
    Artificial Intelligence: A Roadmap for California
    Report #245, November 2018
    Milton Marks Commission on California State Government Organization and Economy (Little Hoover Commission)
    Dedicated to Promoting Economy and Efficiency in California State Government

    The Little Hoover Commission, formally known as the Milton Marks "Little Hoover" Commission on California State Government Organization and Economy, is an independent state oversight agency created in 1962. By statute, the Commission is bipartisan and composed of nine public members, two senators and two assemblymembers.

    In creating the Commission, the Legislature declared its purpose:

    [T]o secure assistance for the Governor and itself in promoting economy, efficiency and improved services in the transaction of the public business in the various departments, agencies and instrumentalities of the executive branch of the state government, and in making the operation of all state departments, agencies and instrumentalities, and all expenditures of public funds, more directly responsive to the wishes of the people as expressed by their elected representatives.

    The Commission fulfills this charge by holding public hearings, consulting with experts and listening to stakeholders. During the course of its studies, the Commission may create subcommittees and conduct site visits. The findings and recommendations of the Commission are submitted to the Governor and the Legislature for their consideration. Recommendations often take the form of legislation, which the Commission supports through the legislative process.

    Commissioners: Pedro Nava (Chairman); Sean Varner (Vice Chairman/Subcommittee Member); David Beier (Subcommittee Chair); Iveta Brigis (Subcommittee Member); Cynthia Buiza; Senator Anthony Cannella; Assemblymember Chad Mayes; Don Perata; Assemblymember Bill Quirk; Senator Richard Roth; Cathy Schwamberger; Janna Sidley. Former commissioners who served during the study: Joshua LaFarga, Helen Iris Torres.
  • Optimising Peace Through a Universal Global Peace Treaty to Constrain the Risk of War from a Militarised Artificial Superintelligence
    OPTIMISING PEACE THROUGH A UNIVERSAL GLOBAL PEACE TREATY TO CONSTRAIN THE RISK OF WAR FROM A MILITARISED ARTIFICIAL SUPERINTELLIGENCE

    Working Paper
    John Draper
    Research Associate, Nonkilling Economics and Business Research Committee
    Center for Global Nonkilling, Honolulu, Hawai'i
    A United Nations-accredited NGO

    Abstract

    This paper argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation-state to wage war for global domination, i.e., ASI-enabled warfare, or because the ASI wars on behalf of itself to establish global domination, i.e., ASI-directed warfare. These risks are not mutually exclusive, in that the first can transition to the second. Presently, few states declare war on each other, in part due to the 1945 UN Charter, which states that Member States should "refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state", while allowing for UN Security Council-endorsed military measures and the "exercise of self-defense". In this theoretical ideal, wars are not declared; instead, "international armed conflicts" occur. However, costly interstate conflicts, both "hot" and "cold", still exist, for instance the Kashmir Conflict and the Korean War. Further, a "New Cold War" between the AI superpowers, the United States and China, looms. An ASI-directed/enabled future conflict could trigger "total war", including nuclear war, and is therefore high risk.
  • How Science Fiction Grapples with the Growing Power of Artificial Intelligence
    DePaul University, Via Sapientiae
    College of Liberal Arts & Social Sciences Theses and Dissertations
    3-2016

    Artificial perspectives: how science fiction grapples with the growing power of artificial intelligence
    Marcus Emanuel, DePaul University, [email protected]

    Follow this and additional works at: https://via.library.depaul.edu/etd

    Recommended Citation: Emanuel, Marcus, "Artificial perspectives: how science fiction grapples with the growing power of artificial intelligence" (2016). College of Liberal Arts & Social Sciences Theses and Dissertations. 207. https://via.library.depaul.edu/etd/207

    This Thesis is brought to you for free and open access by the College of Liberal Arts and Social Sciences at Via Sapientiae. It has been accepted for inclusion in College of Liberal Arts & Social Sciences Theses and Dissertations by an authorized administrator of Via Sapientiae. For more information, please contact [email protected].

    Artificial Perspectives: How Science Fiction Grapples with the Growing Power of Artificial Intelligence
    A Thesis Presented in Partial Fulfillment of the Requirements for the Degree of Master of Arts
    March, 2016
    By Marcus Emanuel
    Department of English, College of Liberal Arts and Social Sciences, DePaul University, Chicago, Illinois

    Introduction

    In Stanley Kubrick's 1968 film 2001: A Space Odyssey, based on the stories of Arthur C. Clarke, astronaut David Bowman, aboard the spacecraft Discovery One, struggles to shut down HAL, an artificial intelligence responsible for operating the ship. The HAL computer system has been killing astronauts one by one in an attempt to preserve its functioning and programmed mission. Bowman, in an orange spacesuit, floats into what we assume is HAL's mainframe, armed with a variation on a ratchet key, in an attempt to power down the computer and its deadly intelligence.