The Perceived Ethics of Artificial Intelligence


Ross Murray, University of Texas Rio Grande Valley

Markets, Globalization & Development Review: Vol. 5, No. 2, Article 3 (2020). DOI: 10.23860/MGDR-2020-05-02-03. Available at: https://digitalcommons.uri.edu/mgdr/vol5/iss2/3

Introduction

Artificial Intelligence (AI) is embedded in thousands of consumer products and services, such as Apple's iPhone, Amazon's Alexa, Tesla's autonomous vehicles, Facebook's algorithms that attempt to optimize click-through rates, and smart vacuum cleaners. As these products and services attempt to imitate human intelligence – although they are not making decisions based upon moral values of their own – the moral values of the employees and the business ethics of the corporations that create them are being coded into the technology itself. Social researchers are becoming aware of and commenting on the biases and moral positions that are embedded – often unconsciously – in new technologies (see, e.g., Benjamin 2019; Dholakia et al. 2020; Noble 2018). The use of such products, although seemingly harmless today, may pose concern as consumers come to realize that – by using and interacting with AI – they are exposed to the embedded ethical and moral values placed there by the AI's creators.

As AI products and services come to be used increasingly, there is a need for a perceived morality and ethical framework. AI has had moral values and ethical positions designed (embedded) into it either on purpose or simply because – in order to imitate human intelligence – the AI has to demonstrate an ethical position. Some of the AI in the market today has advanced toward human-likeness: semi-autonomous vehicles and in-home chatbots like Siri, Alexa, and Google Assistant are approaching human-sounding speech. Samsung's Neon, a fully simulated digital avatar unveiled at the 2020 CES (Consumer Electronics Show), appears very close to a human. CES is a conference and trade show held annually by the Consumer Technology Association for "technology leaders to connect, collaborate, and propel consumer technology forward" (CES 2020). Recent AI-related products presented at CES include prosthetics, SelfieType, Lululab's Lumine, Whisk, Wiser, and vital-signs monitoring (Taulli 2020). All of these innovations use some type of AI, such as deep learning.
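To make concrete how an objective function can quietly encode a value judgment, consider a toy ranking example. The following is a minimal sketch in Python, assuming entirely hypothetical items, scores, and penalty weights; it does not depict any real platform's system.

```python
# Illustrative sketch only: a toy feed ranker showing how the choice of
# objective function embeds a value choice. All names, scores, and the
# penalty weight are hypothetical, not taken from any real system.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_ctr: float  # model's estimated click-through rate, 0..1
    harm_score: float     # estimated potential for harm, 0..1

def rank_by_engagement(items: list[Item]) -> list[Item]:
    """Pure engagement objective: the coder has (implicitly) decided
    that clicks are the only value that matters."""
    return sorted(items, key=lambda i: i.predicted_ctr, reverse=True)

def rank_with_harm_penalty(items: list[Item], penalty: float = 0.5) -> list[Item]:
    """Alternative objective: engagement is traded off against an explicit
    judgment about harm. The penalty weight is itself an ethical position,
    chosen by an employee and invisible to the end user."""
    return sorted(items, key=lambda i: i.predicted_ctr - penalty * i.harm_score,
                  reverse=True)

feed = [
    Item("Outrage-bait headline", predicted_ctr=0.12, harm_score=0.8),
    Item("Local news update", predicted_ctr=0.06, harm_score=0.1),
]

print([i.title for i in rank_by_engagement(feed)])      # outrage-bait first
print([i.title for i in rank_with_harm_penalty(feed)])  # local news first
```

The two rankers disagree on the same inputs; which one ships is a moral choice made by an employee, and nothing in the product's output discloses that choice to the consumer.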
Extrapolations of beliefs about AI support the idea that AI will outperform humans in many activities in the future. Researchers predict AI will outperform humans at tasks such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053) (Grace et al. 2018). Relating specifically to the voice of AI, Grace et al. (2018) found that respondents to their survey believe AI will be able to "take a written passage and output a recording that can't be distinguished from a voice actor, by an expert listener" in 9 years (median). As that study's findings were published two years prior to this writing, the date is quickly approaching when we may hear AI voices that are indistinguishable from human voices.

As the technology of AI progresses, it is the goal of some for AI to advance to artificial general intelligence (AGI). Companies interested in AGI include OpenAI, DeepMind, Google Brain, and Facebook AI Research (Neo 2020), each investing millions of dollars in its development. Simply put, accomplishing AGI would mean that the technology can do anything a human can do, possibly surpassing human capabilities. With millions of dollars being spent on both AI and AGI development, the purpose of this paper is to explore and discuss the potential problems of creating a technology that makes decisions in its interactions with humans based on business ethics coded into it by a programmer. These decisions stem from code written, possibly, by an employee of a corporation or a soldier (or agent) of a government. The coder has incentives to write code that allows the AI to make decisions quickly, often without reference to a moral framework and/or business ethics.

Immanuel Kant advanced the concept of 'deontology', which bases decisions on an underlying ethical code rather than on the consequences of those decisions. Teleology, by contrast, takes the consequences of actions into consideration to guide decisions (for a review in the business ethics context, see Whetstone 2001). The literature has also found that time, motivations, and resources affect the ethical decision-making process (Conway and Gawronski 2013). As the extant literature continues to explore the influences of morality on ethical judgments, we have entered a new age in which we have begun to create machines (AI), embedded with human morality and ethics, that perform tasks and make decisions instilled with the moral values we give them. Two real-life scenarios discussed later – in the sections "The Specific Case of Alexa" and "One Human Versus One Human" – demonstrate that AI is already performing tasks and making decisions based upon programming by employees. Previous studies have brought to light multiple issues with AI.
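To illustrate how this distinction between frameworks might appear once it is coded, the following minimal Python sketch contrasts a deontological decision rule with a teleological one. The actions, the rule, and the utility numbers are all invented for illustration and are not drawn from any system discussed in this paper.

```python
# Illustrative sketch only: two hypothetical decision procedures showing
# how a programmer's choice of moral framework becomes executable code.
# All actions, rules, and utilities are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    violates_rule: bool      # e.g., breaks the rule "never deceive the user"
    expected_utility: float  # estimated net benefit of the consequences

def choose_deontological(actions: list[Action]) -> Action:
    """Kantian-style rule: forbidden actions are excluded outright,
    no matter how good their consequences might be."""
    permitted = [a for a in actions if not a.violates_rule]
    if not permitted:
        raise ValueError("No permissible action available")
    return max(permitted, key=lambda a: a.expected_utility)

def choose_teleological(actions: list[Action]) -> Action:
    """Consequentialist rule: pick whatever maximizes expected utility,
    even if it breaks a rule."""
    return max(actions, key=lambda a: a.expected_utility)

options = [
    Action("deceive user to close sale", violates_rule=True, expected_utility=0.9),
    Action("disclose limitation honestly", violates_rule=False, expected_utility=0.4),
]

print(choose_deontological(options).name)  # disclose limitation honestly
print(choose_teleological(options).name)   # deceive user to close sale
```

Given identical options, the two procedures select different actions; the programmer's choice between them is precisely the kind of embedded, undisclosed ethical position this paper examines.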
These issues include the directive of not harming humans (Bostrom and Yudkowsky 2014), emergency-stop capabilities (Arnold and Scheutz 2018), creating a code of ethics (Boddington 2017), racial-profiling concerns (Bostrom and Yudkowsky 2014; see also Benjamin 2019), imbuing ethics into an autonomous machine (Bonnemains, Saurel and Tessier 2018), security (Belk 2020; Lin et al. 2011), and moral status (Bostrom and Yudkowsky 2014). Surprisingly, however, there is limited research that crystallizes an executable action for forward progress in terms of the perceived morality of AI. In addition, previous work on digital personal assistants (DPAs) has focused on gender issues (Søndergaard and Hansen 2018), politics motivated by economic reasons (Ojeda 2019), regulating bots' speech rights (Lamo and Calo 2019), and chatbot dialogue systems (Bendel 2019). These studies have not explored the morality and ethics underlying the information that these chatbots provide to consumers.

Those coding the AI may use a deontological moral framework, a teleological one, or a mixture depending upon the situation; they may not even be aware that their code embodies any framework at all (Noble 2018) and may be guided only by the corporation's ethics handbook. This mixing of moral frameworks and business ethics adds complexity to the issue at hand, and corporations and governments making such decisions will have to make judgments prior to launching the new technology. Many AI projects are unregulated and propelled by the best and fastest innovators. In the extant literature, concerns about the advancement of AI have mainly been about military applications (Fast and Horvitz 2017; Lin et al. 2009; Petit 2017; Roff and Moyes 2016; Russell et al. 2015; Sparrow 2009; Wasilow and Thorpe 2019), loss of control (Fast and Horvitz 2017; Horvitz 2014), and impact on work (Acemoglu and Restrepo 2018; Fast and Horvitz 2017; Morikawa 2017; Scherer 2015). Limited research has been done (with the exceptions cited) on which moral framework or ethical position was used to create a technology with AI and on how the consumer perceives the AI's morality or ethical position. Although currently unbeknownst to the AI (as it is not conscious of its decisions), there is an implicit moral framework at work when a human evaluates the technology from the outside. One researcher even argues that it is too early for regulation (Reed 2018). Technology progresses incrementally; it goes unnoticed at times and may seduce consumers into its use without their questioning its moral framework.

This paper contributes to the nascent literature by unveiling and discussing the issue of corporate employees designing artificially intelligent products that make decisions based upon the business ethics of the corporation and that provide information to consumers without disclosing those ethical positions to the end user. A further focus of this paper is to reveal and discuss the problems of the entities – particularly corporations and governments – overseeing the production of technological products and services that influence human society, and how these entities may or may not be transparent about the moral framework or ethics used in the global deployment of their technologies.