Runaround Rundown

Logan Brooks
UNCC-ENG-2116-021
Predicament/contingency plans/robot labor in Chapter 2
July 27, 2017

U.S. Robots and Mechanical Men, Inc. is a company ahead of its time, a booming industrial powerhouse. That has not always been the case, however. The first mining expedition to the planet Mercury was a spectacular failure, and the second, although ultimately successful, also came close to failing. It all began in 2015, 33 years after the founding of the company, when two young engineers were tasked with establishing a stable base of operations on Mercury. They were kept alive in Mercury's harsh environment by "photo-cell banks," which had started to deteriorate. Repairing the photo-cell banks required a kilogram of selenium and a few hours' time. The engineers casually sent SPD-13, "Speedy," a highly advanced robot for his time, after the selenium. After a while, they noticed that Speedy had not returned and seemed to be acting erratically. They figured it had to do with the Three Laws of Robotics:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Speedy's Third Law potential had been strengthened because of how expensive he was, while the engineers' casual order gave his Second Law potential only ordinary weight. He initially headed toward the selenium pool because of that order (Second Law), but once he got close enough, the danger he detected raised his Third Law potential and turned him back; once clear of the danger, the order took over again and sent him back toward the pool. Consequently, he ran in circles around the pool, seemingly "drunk." The cycle was continuous because the Second and Third Laws were at potential equilibrium. Hence, a runaround. To clarify, Speedy would follow the engineers' orders right up until mortal danger was detected; the two Laws were in balance with each other, and he acted crazy and ran in circles because he could not figure out which law to follow. If the engineers had stressed how important the selenium was to their survival (invoking the First Law), Speedy might have had no problem with his mission. Eventually it all was sorted out, and the mission was a success.

The predicament might have been avoided if proper planning and safety procedures had been in place. Future planetary expeditions need contingency plans, and backup plans for their contingency plans. The second Mercury expedition was apparently ill-equipped to deal with unexpected problems. Although the mission was only to report on the advisability of reopening the "Sunside Mining Station," there were just two engineers and one advanced robot on the entire expedition, and their very existence relied on a finite number of photo-cell banks. Moreover, they arrived ten years after the previous failed expedition and used much of its antiquated equipment, such as the radio room. U.S. Robots and Mechanical Men, Inc. (U.S.R.) knew that the old radio equipment, with its two-mile range, was hardly sufficient for use on Mercury. U.S.R. should have sent its engineers to Mercury with adequate means of communication; they had "ultrawave" radio equipment, but it was not set up.
Future expeditions should have an expeditionary party come first and set up new infrastructure such as communications, housing, safety, and survey equipment. If the engineers had had the ultrawave set up, Speedy likely could have been coerced into submission much sooner. U.S.R. should also have sent the Mercury team with more, and better, equipment. One advanced robot for an entire mission almost fifty million miles from Earth is not a recipe for success, and staking your life on apparently antiquated photo-cell banks is a recipe for disaster. On future planetary expeditions, U.S.R. needs to provide back-up equipment such as spare photo-cell banks and life-support systems, along with the materials to repair them. Expeditions also need the capability to depart the planet easily if an emergency arises, such as an autopilot space shuttle. Additionally, U.S.R. needs the capacity to send functioning robots to teams in times of crisis (or, ideally, in general), a practice currently outlawed because of fears of operational robots wandering loose on Earth. Carrying back-up equipment is a win-win: if it is not needed at the moment, it eventually will be, and if something goes awry, as planetary expeditions often do, the engineers will have what they need for repairs. From a psychological standpoint, the engineers will have peace of mind that they will not be burnt to a crisp by a system malfunction, a mindset that is advantageous on stressful and important missions. Furthermore, it looks favorable from a publicity standpoint. If a highly publicized mission to Mercury (as I assume it would be) fails, possibly with human fatalities, and U.S.R. did not do everything it possibly could to ensure success, that is a public relations nightmare. If U.S.R. takes every possible step to ensure mission success and the mission still fails, that is a public relations problem, but not a nightmare.

To conclude the back-up plans for future planetary expeditions, U.S.R. must imagine the unimaginable. What if a meteor hits the main housing unit? What if one or both of the engineers die? What if there are extraterrestrials on the planet? These are extravagant examples, but it is important to account for what could happen that U.S.R. does not foresee. For example, the United States knew the Japanese Empire had a massive fleet of A6M Zero aircraft, and it had placed restrictions on Japanese business dealings and frozen Japanese assets. If someone had just imagined that the Japanese Empire would retaliate, Pearl Harbor might have been prevented or its effects lessened. U.S.R. needs to seriously consider these and other contingency plans for future exploration.

As mentioned above, U.S. Robots and Mechanical Men, Inc. cannot assemble functioning robots on Earth because of strict laws against it, laws implemented under pressure from labor unions and some skeptical religious organizations. Labor concerns aside, these laws hinder progress on space exploration and the like, because the machines have to be assembled off Earth. There are serious implications in the morality and ethics of robot labor on Earth. Robots should be allowed to be used on Earth as their owners see fit (within reason). Companies should be allowed to replace their entire workforce with robots if they want to; companies exist for the sole purpose of profit.
If they decide that an entirely robotic workforce will generate higher profits or lower losses, then they should have every right to make that change. Neither the unions nor the human workers will be happy, but this is a transition, painful at times yet common throughout history. During the Great War, railroads were on the decline but still very profitable. Their inevitable decline was due in part to modern automobile manufacturing and air travel services. The United States government could have stymied or even outlawed one or both of those industries. It did not. I cannot imagine anyone today who would look back and say, "we should have banned those damned cars and planes!" The transition from rail to air travel cost many people their jobs, but it opened up vast opportunities, one of which eventually led to a mining station on Mercury.

From another perspective, natural economic selection will decide whether robot labor is a good fit for Earth. If a national company refuses to serve people with a certain hair color, I believe that is its right, just as it should be its right to use robots as labor. Inevitably, people will boycott that particular business and it will go under in no time. The same principle applies to robot labor: if a company uses robots, that is all well and good for the company, but the consumers of the businesses that employ them are the ones who truly decide whether the robots stay.

In conclusion, robophobia is simply an irrational fear of the unknown and the misunderstood. Today we cannot even fathom President Wilson outlawing automobiles and aircraft, but what if he had? The world would be a much different place. Inevitably, those technologies would have surfaced elsewhere and might even have weakened United States global power. The only people who could reasonably argue that a ban on automobile and aircraft technology would have been positive are the Vanderbilts.