Ethical Autonomy Bibliography As of January 6, 2016


Aaronson, Michael. “Killer Robots: Double Standards? Blind Faith?” E-International Relations, June 7, 2013. http://www.e-ir.info/2013/06/07/killer-robots-double-standards-blind-faith/.

Achenbach, Joel. “Here’s the Argument for Banning Killer Robots Before We’re Swarmed by Them.” The Washington Post, August 21, 2015. https://www.washingtonpost.com/news/speaking-of-science/wp/2015/08/21/heres-the-argument-for-banning-killer-robots-before-were-swarmed-by-them/.

Acheson, Ray. “Editorial: A Chance to Put Humanity First.” CCW Report 2, no. 1 (April 13, 2015). http://reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2015/ccwreport/CCWR2.1.pdf.

Acheson, Ray and Richard Moyes. “Editorial: The unbearable meaninglessness of autonomous violence.” Reaching Critical Will 2, no. 4 (April 16, 2015). http://reachingcriticalwill.org/disarmament-fora/others/ccw/2015/meeting-experts-laws/ccwreport/9672-16-april-2015-vol-2-no-4.

Ackerman, Evan. “We Should Not Ban ‘Killer Robots,’ and Here’s Why.” IEEE Spectrum, July 29, 2015. http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/we-should-not-ban-killer-robots.

Adami, Christoph. “Should We Ban Autonomous Killing Robots?” The Huffington Post, September 10, 2015. http://www.huffingtonpost.com/christoph-adami/should-we-ban-autonomous-killing-robots_b_8107898.html.

Adhikari, Richard. “Beware the Killer Robots.” Tech News World, June 12, 2015. http://www.technewsworld.com/story/82113.html.

Agence France-Presse. “Artificial intelligence future wows Davos elite.” Bangkok Post, January 23, 2015. http://www.bangkokpost.com/tech/world-updates/459255/artificial-intelligence-future-wows-davos-elite.
Aguilar, Mario. “Today’s Incredible Robots Are The Killing Machines Of The Future.” Gizmodo Australia, April 17, 2015. http://www.gizmodo.com.au/2015/04/todays-incredible-robots-are-the-killing-machines-of-the-future/.

“Africa: Religious Leaders Urge a Ban On Fully Autonomous Weapons.” All Africa, April 2, 2015. http://allafrica.com/stories/201504021100.html.

Albanesius, Chloe. “Killer Robots Are Probably a Bad Idea.” PC Magazine, April 13, 2015. http://www.pcmag.com/article2/0,2817,2481295,00.asp.

Allen, Colin and Wendell Wallach. “Framing Robot Arms Control.” Ethics and Information Technology (2013). http://link.springer.com/article/10.1007%2Fs10676-012-9303-0.

Alston, Phil. “Lethal Robotic Technologies: The Implications for Human Rights and International Humanitarian Law.” Journal of Law, Information, and Science (2012). DOI: 10.5778/JLIS.2011.21.Alston.1. http://www.jlisjournal.org/abstracts/Alston.21.2.html.

Altmann, Jurgen. “Preventive Arms Control for Uninhabited Military Vehicles.” In Ethics and Robotics, eds. R. Capurro and M. Nagenborg, 69-82. Heidelberg: AKA Verlag, 2009. http://e3.physik.tu-dortmund.de/P&D/Pubs/0909_Ethics_and_Robotics_Altmann.pdf.

Altmann, Jurgen, Peter Asaro, Noel Sharkey, and Robert Sparrow. “Armed Military Robots: Editorial.” Ethics and Information Technology 15, no. 2 (June 2013): 73-76. http://peterasaro.org/writing/Altmann,%20Asaro,%20Sharkey,%20Sparrow%20EIT%20AMR.pdf.

“Ambassador Aryasinha inaugurated the 2015 Meeting of Experts on LAWS.” News.lk, April 17, 2015. http://www.news.lk/news/sri-lanka/item/7183-ambassador-aryasinha-inagurated-the-2015-meeting-of-experts-on-laws.

Amnesty International. “UN: Ban killer robots before their use in policing puts lives at risk.” April 16, 2015. https://www.amnesty.org/en/articles/news/2015/04/ban-killer-robots-before-their-use-in-policing-puts-lives-at-risk/.
“Amnesty urges UN member states to ban further development of autonomous weapons systems.” army-technology.com, April 20, 2015. http://www.army-technology.com/news/newsamnesty-urges-un-member-states-to-ban-further-development-of-killer-robots-4556683.

Anderson, Kenneth. “Readings: Autonomous Weapons Systems and Their Regulation.” Lawfare, December 11, 2012. http://www.lawfareblog.com/2012/12/readings-autonomous-weapon-systems-and-their-regulation/.

Anderson, Kenneth. “Readings: Geoffrey Corn on Autonomous Weapons.” Lawfare, August 3, 2013. http://www.lawfareblog.com/2014/08/readings-geoffrey-corn-on-autonomous-weapons/.

Anderson, Kenneth, Daniel Reisner, and Matthew C. Waxman. “Adapting the Law of Armed Conflict to Autonomous Weapon Systems.” International Law Studies, 2014, forthcoming. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477095.

Anderson, Kenneth and Matthew C. Waxman. “Don’t Ban Armed Robots in the U.S.” New Republic, October 17, 2013. http://www.newrepublic.com/article/115229/armed-robots-banning-autonomous-weapon-systems-isnt-answer.

Anderson, Kenneth and Matthew C. Waxman. “Human Rights Watch Report on Killer Robots, and Our Critique.” Lawfare, November 26, 2012. http://www.lawfareblog.com/2012/11/human-rights-watch-report-on-killer-robots-and-our-critique/.

Anderson, Kenneth and Matthew C. Waxman. “Killer Robots and the Laws of War.” The Wall Street Journal, November 3, 2013. http://online.wsj.com/news/articles/SB10001424052702304655104579163361884479576?mod=hp_opinion.

Anderson, Kenneth and Matthew C. Waxman. “Killer Robots and the Laws of War in Monday’s Wall Street Journal.” Lawfare, November 4, 2013. http://www.lawfareblog.com/2013/11/killer-robots-and-the-laws-of-war-in-mondays-wall-street-journal/.

Anderson, Kenneth and Matthew C. Waxman. “Law and Ethics for Autonomous Weapons Systems: Why a Ban Won’t Work and How the Laws of War Can.” Columbia Public Law Research Paper 13-351. http://ssrn.com/abstract=2250126.
Anderson, Kenneth and Matthew C. Waxman. “Law and Ethics for Robot Soldiers.” Policy Review (2012). http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2046375.

Anderson, Kenneth and Matthew C. Waxman. “Similar Ethical Dilemmas for Autonomous Weapon Systems and Autonomous Self-Driving Cars.” Lawfare, November 6, 2015. https://www.lawfareblog.com/similar-ethical-dilemmas-autonomous-weapon-systems-and-autonomous-self-driving-cars.

Anderson, Kenneth and Matthew C. Waxman. “Tom Malinowski Responds on Autonomous Lethal Systems.” Lawfare, December 5, 2012. http://www.lawfareblog.com/2012/12/tom-malinowski-responds-on-autonomous-lethal-systems/.

Anderson, Kenneth and Matthew C. Waxman. “Tom Malinowski Responds on Autonomous Lethal Systems: Part II.” Lawfare, December 6, 2012. http://www.lawfareblog.com/2012/12/tom-malinowski-responds-on-autonomous-lethal-systems/.

“Anonymous Killing by New Technologies? The Soldier between Conscience and Machine.” Ethics and Armed Forces, Issue 2014/1. http://www.ethikundmilitaer.de/fileadmin/Journale/2014-06_English/Full_issue_2014_1_Anonymous_Killing_by_new_Technologies_The_Soldier_between_Conscience_and_Machine.pdf.

Anthony, Ian and Chris Holland. “The Governance of Autonomous Weapons.” SIPRI Yearbook 2014: Armaments, Disarmament and International Security, Stockholm International Peace Research Institute, pp. 423-431. http://www.sipri.org/yearbook/2014/files/sipri-yearbook-2014-chapter-9-section-ii.

Antebi, Liran. “The UN and Autonomous Weapons Systems: A Missed Opportunity.” Canada Free Press, June 9, 2015. http://canadafreepress.com/article/72703.

Arkin, Ronald. “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture.” Georgia Tech Robotics Lab, 2007. http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf.

Arkin, Ronald. Governing Lethal Behavior in Autonomous Robots. Boca Raton: Taylor and Francis Group, 2009.

Arkin, Ronald. “Lethal Autonomous Systems and the Plight of the Non-combatant.” AISB Quarterly, July 2013. http://www.unog.ch/80256EDD006B8954/%28httpAssets%29/54B1B7A616EA1D10C1257CCC00478A59/$file/Article_Arkin_LAWS.pdf.

Arkin, Ronald. “The Case for Ethical Autonomy in Unmanned Systems.” Journal of Military Ethics 9, no. 4 (2010): 332-341. http://www.cc.gatech.edu/ai/robot-lab/online-publications/Arkin_ethical_autonomous_systems_final.pdf.

Arkin, Ronald. “Warfighting Robots Could Reduce Civilian Casualties, So Calling for a Ban Now Is Premature.” IEEE Spectrum, August 5, 2015. http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/autonomous-robotic-weapons-could-reduce-civilian-casualties.

Article 36. “AI experts call for autonomous weapons to be banned.” Article36.org, July 28, 2015. http://www.article36.org/autonomous-weapons/ai-experts-ban/.

Article 36. “Autonomous Weapons.” Article36.org. http://www.article36.org/issue/weapons/autonomous-weapons/.

Article 36. “Key Areas for Debate on Autonomous Weapons Systems: Memorandum for Delegates at the CCW Meeting of Experts on LAWS.” Article36.org, May 2014. http://www.article36.org/wp-content/uploads/2014/05/A36-CCW-May-2014.pdf.

Article 36. “Killer Robots: UK Government Policy on Fully Autonomous Weapons.” Article36.org, April 2013. http://www.article36.org/wp-content/uploads/2013/04/Policy_Paper1.pdf.

Asaro, Peter. “Ban Killer Robots before They Become Weapons of Mass Destruction.” Scientific American, August 7, 2015. http://www.scientificamerican.com/article/ban-killer-robots-before-they-become-weapons-of-mass-destruction/.

Asaro, Peter. “Campaigning to stop killer robots.” Interview with Beverly O’Connor on ABC News, April 14, 2015. http://www.abc.net.au/news/2015-04-15/campaigning-to-stop-killer-robots/6393132.

Asaro, Peter. “How Just Could a Robot War Be?” In Proceedings of the 2008 Conference on Current Issues in Computing and Philosophy, 50-64. Amsterdam, The Netherlands: 2008. http://peterasaro.org/writing/Asaro%20Just%20Robot%20War.pdf.