AI, Politics and Security in the Asia Pacific Program


AI, POLITICS AND SECURITY IN THE ASIA PACIFIC PROGRAM
Thursday 14 March - Friday 15 March 2019
Mills Room, Chancelry Building, 10 East Road, ANU
Co-convened by the Coral Bell School of Asia Pacific Affairs (ANU), in partnership with the Leverhulme Centre for the Future of Intelligence (Cambridge)

CONTENTS
Welcome
Conference Program
Session 1: What is AI?
Session 2: AI, International Norms and Challenges to the Ethics and Laws of War
Session 3: AI and the Changing Nature of Security, Strategy and War
Session 4: The Future of AI and Associated Risks
Session 5: Understanding the Science Behind Targeted Messaging
Session 6: AI, Big Data Analytics, the Citizen and the State
Session 7: The Case of Cambridge Analytica: Revelations and Lessons Learned
Session 8: AI Impacts: Building Effective Networks in the Asia Pacific
Participant List

The ANU Coral Bell School of Asia Pacific Affairs (Bell School) is a world-leading centre for research, education, and policy analysis on international and Asia Pacific politics, security, diplomacy, and strategic affairs. CFI is an interdisciplinary research centre committed to exploring the impacts of AI, both short and long term; it is based at the University of Cambridge, with 'spokes' at Oxford, Imperial College London, and UC Berkeley.

WELCOME FROM WORKSHOP CO-CONVENORS
The existing and potential impact of AI is impossible to ignore. AI already influences every aspect of our lives, sometimes in seemingly unremarkable ways. Yet we are also seeing signs of how it may radically alter international politics and security.
From the use of AI by Cambridge Analytica to send targeted messages to voters through social media in the context of Brexit and the 2016 US presidential election, to ongoing debates in the European Parliament and the UN about lethal autonomous weapons, the far-reaching political and security implications of AI demand our attention.

The Coral Bell School of Asia Pacific Affairs (The Australian National University) and the Leverhulme Centre for the Future of Intelligence (University of Cambridge) signed a Memorandum of Understanding (MoU) in September 2018. The aim of this MoU is to foster research collaboration on the impact of AI on the politics and security of the Asia Pacific. This workshop is our first joint endeavour. We hope that it will do the following:

> Bring together outstanding scholars and practitioners working on the political and security implications of AI to establish a network of people making important contributions in this area;
> Introduce those with expertise in politics and security (who may not have previously focused on AI) to those with specific expertise in AI in order to open up new possibilities for collaboration; and, importantly,
> Provide a forum for discussion and debate about some of the most significant and challenging problems that we currently face in international politics.

We are very grateful to you for joining us and look forward to two days of learning and lively discussion!

Professor Huw Price, Academic Director, Leverhulme Centre for the Future of Intelligence, University of Cambridge
Professor Toni Erskine, Director, Coral Bell School of Asia Pacific Affairs, The Australian National University

Photo: Huw Price and Toni Erskine signing the Memorandum of Understanding, Singapore, 2018.
THURSDAY 14 MARCH 2019

8.30-9.00am  Registration, tea and coffee

9.00-9.30am  Welcome and Introduction
Professor Toni Erskine, Director, Coral Bell School of Asia Pacific Affairs, ANU
Professor Huw Price, Academic Director, Leverhulme Centre for the Future of Intelligence, Cambridge
Professor Mike Calford, Provost, ANU

9.30-10.30am  SESSION 1: WHAT IS AI?
Dr Karina Vold (CFI, Cambridge) > What is AI? Reflections from Philosophy
Professor Lexing Xie (Research School of Computer Science, ANU) > Explaining AI to Political Scientists and Policy Makers: Insights from Computer Science
Chair: Professor Huw Price (CFI, Cambridge)

10.30-11.00am  Morning tea

11.00am-1.00pm  SESSION 2: AI, INTERNATIONAL NORMS AND CHALLENGES TO THE ETHICS AND LAWS OF WAR
Professor Toby Walsh, via Skype (Dept. of Computer Science and Engineering, UNSW) > Technical Concerns around "Killer Robots"
Professor Toni Erskine (Bell School, ANU) > AI and the Problem of Misplaced Responsibility in War
Jake Lucchi (Head of Content and AI - Public Policy, Google Asia Pacific) > Google's 'AI Principles': How Can Google Contribute to Developing International Norms on Issues such as Lethal Autonomous Weapons?
Chair: Associate Professor Bina D'Costa (Dept. of International Relations, Bell School, ANU)
Discussant: Associate Professor Seth Lazar (School of Philosophy, ANU)

1.00-2.00pm  Lunch

2.00-3.30pm  SESSION 3: AI AND THE CHANGING NATURE OF SECURITY, STRATEGY AND WAR
Major General Mick Ryan (Australian War College) > Strategic Competition, War and AI
Dr Shashi Jayakumar (S. Rajaratnam School of International Studies, NTU) > AI and Security: A Perspective from Singapore
Chair: Dr Meighen McCrae (Strategic & Defence Studies Centre, Bell School, ANU)
Discussant: Professor Evelyn Goh (Strategic & Defence Studies Centre, Bell School, ANU)

3.30-4.00pm  Afternoon tea

4.00-6.30pm  SESSION 4: THE FUTURE OF AI AND ASSOCIATED RISKS
Dr Shahar Avin (Centre for the Study of Existential Risk, Cambridge)
In this interactive exercise, participants will assume the roles of key stakeholders in the future of AI, from tech companies, through governments and militaries, to other industries (including defence) and NGOs. Together we will create a single, rich narrative of one possible way the relevant technologies could develop and be deployed. We will use this narrative to explore open questions about the future of AI, its potential impacts and risks, and key decision points in the present and the near future, touching on themes from the entire workshop agenda.

FRIDAY 15 MARCH 2019

8.30-9.00am  Registration, tea and coffee

9.00-10.30am  SESSION 5: UNDERSTANDING THE SCIENCE BEHIND TARGETED MESSAGING
Vesselin Popov (Psychometrics Centre, Cambridge)
> Predicting Psychological Traits from Digital Footprints
> Microtargeting and Tailoring
Chair: Dr George Carter (Dept. of Pacific Affairs, Bell School, ANU)
Discussant: Dr Jenny Davis (School of Sociology, ANU)

10.30-10.45am  Morning tea

10.45am-12.45pm  SESSION 6: AI, BIG DATA ANALYTICS, THE CITIZEN AND THE STATE
Bing Song (Director, Berggruen Institute China Center) > China's Social Credit System
Dr Karina Vold (CFI, Cambridge) > Privacy, Autonomy and Personalised Targeting: Rethinking How Personal Data Is Used
Katherine Mansted (Belfer Center, Harvard; National Security College, ANU) > The Coming AI Wave: Can Democracy Survive?
Chair: Hon Professor Brendan Sargeant (Strategic & Defence Studies Centre, Bell School, ANU)
Discussant: Dr Sarah Logan (Dept. of International Relations, Bell School, ANU)

12.45-1.45pm  Lunch

1.45-3.15pm  SESSION 7: THE CASE OF CAMBRIDGE ANALYTICA: REVELATIONS AND LESSONS LEARNED
Vesselin Popov (Psychometrics Centre, Cambridge)
Chair: Professor Toni Erskine (Bell School, ANU)
Discussant: Dr Paul Kenny (Dept. of Political and Social Change, Bell School, ANU)

3.15-3.30pm  Afternoon tea

3.30-4.30pm  SESSION 8: AI IMPACTS: BUILDING EFFECTIVE NETWORKS IN THE ASIA PACIFIC
Roundtable with:
- Bing Song (Vice President, Berggruen Institute China Center), co-chair
- Professor Huw Price (CFI, Cambridge), co-chair
- Sakuntala Akmeemana (Principal Sector Specialist - Development Policy Division, DFAT)
- Jake Lucchi (Head of Content and AI - Public Policy, Google Asia Pacific)
- Delia Pembrey (Augmented Intelligence Centre of Excellence, Department of Human Services)
- Christina Parolin (Executive Director, Australian Academy of the Humanities)
- Professor Duncan Ivison (Deputy Vice Chancellor, Research, University of Sydney)
- Dr Yang Liu (CFI, Cambridge)

4.30-5.30pm  Cocktail Reception

SESSION 1: WHAT IS AI?

Dr Karina Vold
Leverhulme Centre for the Future of Intelligence, University of Cambridge
Karina Vold specialises in philosophy of mind and cognition. She received her Bachelor's degree in philosophy and political science from the University of Toronto and her PhD in Philosophy from McGill University. She has been a visiting scholar at Ruhr University, a fellow at Duke University, and a lecturer at Carleton and McGill Universities. Vold is interested in the extent to which our minds are inseparable from our bodies and our wider social, cultural, and physical environments. Her recent work focuses on theories of cognitive extension, intelligence augmentation, neuroethics, and decision-making. At CFI she is exploring the topics of personalised targeting, the use of non-autonomous AI systems to aid human cognition, and ethical questions about the use of AI.
Professor Lexing Xie
Research School of Computer Science, The Australian National University
Lexing Xie is Professor in the Research School of Computer Science at The Australian National University, where she leads the ANU Computational Media Lab (http://cm.cecs.anu.edu.au). Her research interests are in machine learning, optimisation and social media. Of particular recent interest are stochastic time series models, neural networks for sequences and languages, and the intersection of prediction and planning, applied to diverse problems such as multimedia knowledge graphs and popularity in social media. Her research is supported by the US Air Force Office
Recommended publications
  • Letters to the Editor
    Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter

    Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents — systems that perceive and act in some environment. In this context, "intelligence" is related to statistical and economic notions of rationality — colloquially, the ability to make good decisions, plans, or inferences. Everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts.

    Signatories include computer scientists, innovators, entrepreneurs, statisticians, journalists, engineers, authors, professors, teachers, students, CEOs, economists, developers, philosophers, artists, futurists, physicists, filmmakers, health-care professionals, research analysts, and members of many other fields. The earliest signatories follow, reproduced in the order they signed. For the complete list, see tinyurl.com/ailetter. - ed.

    Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and coauthor of the standard textbook Artificial Intelligence: A Modern Approach
    Tom Dietterich, Oregon State, President of
  • The Future of AI: Opportunities and Challenges
    The Future of AI: Opportunities and Challenges
    Puerto Rico, January 2-5, 2015

    Ajay Agrawal is the Peter Munk Professor of Entrepreneurship at the University of Toronto's Rotman School of Management, Research Associate at the National Bureau of Economic Research in Cambridge, MA, Founder of the Creative Destruction Lab, and Co-founder of The Next 36. His research is focused on the economics of science and innovation. He serves on the editorial boards of Management Science, the Journal of Urban Economics, and The Strategic Management Journal.

    Anthony Aguirre has worked on a wide variety of topics in theoretical cosmology, ranging from intergalactic dust to galaxy formation to gravity physics to the large-scale structure of inflationary universes and the arrow of time. He also has a strong interest in science outreach, and has appeared in numerous science documentaries. He is a co-founder of the Foundational Questions Institute and the Future of Life Institute.

    Geoff Anders is the founder of Leverage Research, a research institute that studies psychology, cognitive enhancement, scientific methodology, and the impact of technology on society. He is also a member of the Effective Altruism movement, a movement dedicated to improving the world in the most effective ways. Like many of the members of the Effective Altruism movement, Geoff is deeply interested in the potential impact of new technologies, especially artificial intelligence.

    Blaise Agüera y Arcas works on machine learning at Google. Previously a Distinguished Engineer at Microsoft, he has worked on augmented reality, mapping, wearable computing and natural user interfaces. He was the co-creator of Photosynth, software that assembles photos into 3D environments.
  • Beneficial AI 2017
    Beneficial AI 2017
    Participants & Attendees

    Anthony Aguirre is a Professor of Physics at the University of California, Santa Cruz. He has worked on a wide variety of topics in theoretical cosmology and fundamental physics, including inflation, black holes, quantum theory, and information theory. He also has a strong interest in science outreach, and has appeared in numerous science documentaries. He is a co-founder of the Future of Life Institute, the Foundational Questions Institute, and Metaculus (http://www.metaculus.com/).

    Sam Altman is president of Y Combinator and was the cofounder of Loopt, a location-based social networking app. He also co-founded OpenAI with Elon Musk. Sam has invested in over 1,000 companies.

    Dario Amodei is the co-author of the recent paper Concrete Problems in AI Safety, which outlines a pragmatic and empirical approach to making AI systems safe. Dario is currently a research scientist at OpenAI, and prior to that worked at Google and Baidu. Dario also helped to lead the project that developed Deep Speech 2, which was named one of 10 "Breakthrough Technologies of 2016" by MIT Technology Review. Dario holds a PhD in physics from Princeton University, where he was awarded the Hertz Foundation doctoral thesis prize.

    Amara Angelica is Research Director for Ray Kurzweil, responsible for books, charts, and special projects. Amara's background is in aerospace engineering, in electronic warfare, electronic intelligence, human factors, and computer systems analysis areas. A co-founder and initial Academic Model/Curriculum Lead for Singularity University, she was formerly on the board of directors of the National Space Society, is a member of the Space Development Steering Committee, and is a professional member of the Institute of Electrical and Electronics Engineers (IEEE).
  • F.3. the NEW POLITICS of ARTIFICIAL INTELLIGENCE [Preliminary Notes]
    F.3. THE NEW POLITICS OF ARTIFICIAL INTELLIGENCE [preliminary notes]
    Richard Hayes, July 2018. DRAFT: NOT FOR CIRCULATION OR CITATION

    MAIN MEMO (pp 3-14)
    I. Introduction
    II. The Infrastructure: 13 key AI organizations
    III. Timeline: 2005-present
    IV. Key Leadership
    V. Open Letters
    VI. Media Coverage
    VII. Interests and Strategies
    VIII. Books and Other Media
    IX. Public Opinion
    X. Funders and Funding of AI Advocacy
    XI. The AI Advocacy Movement and the Techno-Eugenics Movement
    XII. The Socio-Cultural-Psychological Dimension
    XIII. Push-Back on the Feasibility of AI+ Superintelligence
    XIV. Provisional Concluding Comments

    ATTACHMENTS (pp 15-78); ADDENDA (pp 79-85); APPENDICES [not included in this pdf]; ENDNOTES (pp 86-88); REFERENCES (pp 89-92)

    ATTACHMENTS
    A. Definitions, usage, brief history and comments.
    B. Capsule information on the 13 key AI organizations.
    C. Concerns raised by key sets of the 13 AI organizations.
    D. Current development of AI by the mainstream tech industry.
    E. Op-Ed: Transcending Complacency on Superintelligent Machines - 19 Apr 2014.
    F. Agenda for the invitational "Beneficial AI" conference - San Juan, Puerto Rico, Jan 2-5, 2015.
    G. An Open Letter on Maximizing the Societal Benefits of AI - 11 Jan 2015.
    H. Partnership on Artificial Intelligence to Benefit People and Society (PAI) - roster of partners.
    I. Influential mainstream policy-oriented initiatives on AI: Stanford (2016); White House (2016); AI NOW (2017).
    J. Agenda for the "Beneficial AI 2017" conference, Asilomar, CA, Jan 2-8, 2017.
    K. Participants at the 2015 and 2017 AI strategy conferences in Puerto Rico and Asilomar.
    L. Notes on participants at the Asilomar "Beneficial AI 2017" meeting.
  • Public Response to RFI on AI
    White House Office of Science and Technology Policy
    Request for Information on the Future of Artificial Intelligence: Public Responses
    September 1, 2016

    Respondent 1: Chris Nicholson, Skymind Inc.

    This submission will address topics 1, 2, 4 and 10 in the OSTP's RFI:
    • the legal and governance implications of AI
    • the use of AI for public good
    • the social and economic implications of AI
    • the role of "market-shaping" approaches

    Governance, anomaly detection and urban systems

    The fundamental task in the governance of urban systems is to keep them running; that is, to maintain the fluid movement of people, goods, vehicles and information throughout the system, without which it ceases to function. Breakdowns in the functioning of these systems and their constituent parts are therefore of great interest, whether it be their energy, transport, security or information infrastructures. Those breakdowns may result from deteriorations in the physical plant, sudden and unanticipated overloads, natural disasters or adversarial behavior. In many cases, municipal governments possess historical data about those breakdowns and the events that precede them, in the form of activity and sensor logs, video, and internal or public communications. Where they don't possess such data already, it can be gathered. Such datasets are a tremendous help when applying learning algorithms to predict breakdowns and system failures. With enough lead time, those predictions make pre-emptive action possible, action that would cost cities much less than recovery efforts in the wake of a disaster. Our choice is between an ounce of prevention or a pound of cure. Even in cases where we don't have data covering past breakdowns, algorithms exist to identify anomalies in the data we begin gathering now.
  • Program Guide
    The First AAAI/ACM Conference on AI, Ethics, and Society (AIES 2018)
    February 1-3, 2018, Hilton New Orleans Riverside, New Orleans, Louisiana, 70130 USA
    Program Guide
    Organized by AAAI, ACM, and ACM SIGAI. Sponsored by Berkeley Existential Risk Initiative, DeepMind Ethics & Society, Future of Life Institute, IBM Research AI, PricewaterhouseCoopers, Tulane University.

    AIES 2018 Conference Overview

    Thursday, February 1 (Tulane University): 6:00 Invited talk: The Venerable Tenzin Priyadarshi, MIT; Panel 1: What will Artificial Intelligence bring?; 7:00 Reception

    Friday, February 2 (Hilton Riverside): 8:30-9:00 Opening; 9:00-10:00 Invited talk, AI: Iyad Rahwan and Edmond Awad, MIT; 10:00-10:15 Coffee break; 10:15-11:15 AI session 1; 11:15-12:15 AI session 2; 12:15-2:00 Lunch break; 2:00-3:00 AI and law session 1; 3:00-4:00 AI and law session 2; 4:00-4:30 Coffee break; 4:30-5:30 Invited talk, AI and law: Carol Rose, ACLU; 5:30-6:30 Panel 2: Prioritizing Ethical Considerations in AI: Who Sets the Standards?; 7:00 Conference reception

    Saturday, February 3 (Hilton Riverside): 9:00-10:00 Invited talk, AI and jobs: Richard Freeman, Harvard; 10:00-10:15 Coffee break; 10:15-11:15 AI session 3; 11:15-12:15 AI session 4; 12:15-2:00 Lunch break; 2:00-3:00 AI and philosophy session 1; 3:00-4:00 AI and philosophy session 2; 4:00-4:30 Coffee break; 4:30-5:30 Invited talk, AI and philosophy: Patrick Lin, Cal Poly; Closing reception

    Acknowledgments
    AAAI and ACM acknowledge and thank the following individuals for their generous contributions of time and energy in the successful creation and planning of AIES 2018:
    Program Chairs: AI and jobs: Jason Furman (Harvard University); AI and law: Gary Marchant
  • Portrayals and Perceptions of AI and Why They Matter
    Portrayals and perceptions of AI and why they matter

    Cover image: Turk playing chess, design by Wolfgang von Kempelen (1734-1804), built by Christoph Mechel, 1769, colour engraving, circa 1780.

    Contents: Executive summary; Introduction (Narratives and artificial intelligence; The AI narratives project); AI narratives (A very brief history of imagining intelligent machines; Embodiment; Extremes and control); Narrative and emerging technologies: lessons from previous science and technology debates; Implications (Disconnect from the reality of the technology; Underrepresented narratives and narrators; Constraints); The role of practitioners (Communicating AI; Reshaping AI narratives; The importance of dialogue); Annexes.

    Executive summary
    The AI narratives project - a joint endeavour by the Leverhulme Centre for the Future of Intelligence and the Royal Society - has been examining how researchers, communicators, policymakers, and publics talk about artificial intelligence, and why this matters. This write-up presents an account of how AI is portrayed and perceived in the English-speaking West, with a particular focus on the UK. It explores the limitations of prevalent fictional and non-fictional narratives and suggests how practitioners might move beyond them. Its primary audience is professionals with an interest in public discourse about AI, including those in the media, government, academia, and industry.

    Narratives are essential to the development of science and people's engagement with new knowledge and new applications. Both fictional and non-fictional narratives have real world effects. Recent historical examples of the evolution of disruptive technologies and public debates with a strong science component (such as genetic modification, nuclear energy and climate change) offer
  • IEEE EAD First Edition Committees List Table of Contents (1 January 2019)
    IEEE EAD First Edition Committees List Table of Contents (1 January 2019) Please click on the links below to navigate to specific sections of this document. Executive Committee Descriptions and Members Content Committee Descriptions and Members: • General Principles • Classical Ethics in A/IS • Well-being • Affective Computing • Personal Data and Individual Agency • Methods to Guide Ethical Research and Design • A/IS for Sustainable Development • Embedding Values into Autonomous Intelligent Systems • Policy • Law • Extended Reality • Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) • Reframing Autonomous Weapons Systems Supporting Committee Descriptions and Members: • The Editing Committee • The Communications Committee • The Glossary Committee • The Outreach Committee A Special Thanks To: • Arabic Translation Team • China Translation Team • Japan Translation Team • South Korea Translation Team • Brazil Translation Team • Thailand Translation Team • Russian Federation Translation Team • Hong Kong Team • Listing of Contributors to Request for Feedback for EADv1 • Listing of Attendees for SEAS Events iterating EADv1 and EADv2 • All our P7000™ Working Group Chairs 1 1 January 2019 | The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems Executive Committee Descriptions & Members Raja Chatila – Executive Committee Chair, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems Raja Chatila, IEEE Fellow, is Professor at Sorbonne Université, Paris, France , and Director of the Institute of Intelligent Systems and Robotics (ISIR). He is also Director of the Laboratory of Excellence “SMART” on human-machine interactions. 
His work covers several aspects of autonomous and interactive robotics, including robot navigation and SLAM, motion planning and control, cognitive and control architectures, human-robot interaction, and robot learning, with applications in the areas of service, field and space robotics.
  • Buku Pengantar Teknologi Informasi.Pdf
    PENGANTAR TEKNOLOGI INFORMASI (Introduction to Information Technology)
    Juhriyansyah Dalle, A. Akrim, Baharuddin. Rajawali Pers, Divisi Buku Perguruan Tinggi, PT RajaGrafindo Persada, Depok.

    Information technology is a general term for any technology that helps humans create, modify, store, communicate and/or disseminate information, uniting high-speed computing and communication for data, voice, and video, in the form of personal computers, telephones, TVs, electronic household appliances, and modern handheld devices.

    This book presents basic knowledge of information technology in a detailed, clear, and practical manner suited to everyday life. It consists of thirteen chapters: Introduction to Information Technology; Computer Hardware and Software; Data, Information, and Knowledge; Telecommunication Systems and Networks; Internet, Intranet, and Extranet; Functional, Enterprise, and Interorganizational Systems; E-Commerce; Supply Chain Management; Data, Knowledge, and Decision Support; Intelligent Systems; Strategic Systems and Reorganization; Information System Development; and Intellectual Property Rights. Matched to practical needs, it is well suited as an initial reference for those who want to learn the basics of information technology.

    Publisher: PT RajaGrafindo Persada, Jl. Raya Leuwinanggung No. 112, Kel. Leuwinanggung, Kec. Tapos, Kota Depok 16956. Telp 021-84311162, Fax 021-84311163. Email: rajapers@rajagrafindo.co.id, www.rajagrafindo.co.id

    National Library: Catalogue in Publication (KDT): Juhriyansyah Dalle, A.A. Karim, Baharuddin. Pengantar Teknologi Informasi.
  • Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: a Roadmap for Research
    Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research

    Authors: Jess Whittlestone, Rune Nyrup, Anna Alexandrova, Kanta Dihal, Stephen Cave, Leverhulme Centre for the Future of Intelligence, University of Cambridge. With research assistance from: Ezinne Nwankwo, José Hernandez-Orallo, Karina Vold, Charlotte Stix.

    Citation: Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., Cave, S. (2019) Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. London: Nuffield Foundation. ISBN: 978-1-9160211-0-5. The Nuffield Foundation has funded this project, but the views expressed are those of the authors and not necessarily those of the Foundation.

    About the Nuffield Foundation: The Nuffield Foundation funds research, analysis, and student programmes that advance educational opportunity and social well-being across the United Kingdom. The research we fund aims to improve the design and operation of social policy, particularly in Education, Welfare, and Justice. Our student programmes provide opportunities for young people, particularly those from disadvantaged backgrounds, to develop their skills and confidence in quantitative and scientific methods. We have recently established the Ada Lovelace Institute, an independent research and deliberative body with a mission to ensure data and AI work for people and society. We are also the founder and co-funder of the Nuffield Council on Bioethics, which examines and reports on ethical issues in biology and medicine. We are a financially and politically independent charitable trust established in 1943 by William Morris, Lord Nuffield, the founder of Morris Motors.

    Acknowledgements: The authors are grateful to the following people for their input at workshops and valuable feedback on drafts of this report: Haydn Belfield, Jude Browne, Sarah Castell,
  • My Route to Existential Risk - Nytimes.Com
    Cambridge, Cabs and Copenhagen: My Route to Existential Risk
    By Huw Price, NYTimes.com Opinionator, January 27, 2013

    In Copenhagen the summer before last, I shared a taxi with a man who thought his chance of dying in an artificial intelligence-related accident was as high as that of heart disease or cancer. No surprise if he'd been the driver, perhaps (never tell a taxi driver that you're a philosopher!), but this was a man who has spent his career with computers. Indeed, he's so talented in that field that he is one of the team who made this century so, well, 21st - who got us talking to one another on video screens, the way we knew we'd be doing in the 21st century, back when I was a boy, half a century ago. For this was Jaan Tallinn, one of the team who gave us Skype. (Since then, taking him to dinner in Trinity College here in Cambridge, I've had colleagues queuing up to shake his hand, thanking him for keeping them in touch with distant grandchildren.)

    I knew of the suggestion that A.I. might be dangerous, of course. I had heard of the "singularity," or "intelligence explosion" - roughly, the idea, originally due to the statistician I. J. Good (a Cambridge-trained former colleague of Alan Turing's), that once machine intelligence reaches a certain point, it could take over its own process of improvement, perhaps exponentially, so that we humans would soon be left far behind.
  • On Managing Vulnerabilities in AI/ML Systems
    On managing vulnerabilities in AI/ML systems
    Jonathan M. Spring (jspringATseidotcmudotedu), April Galyardt, Allen D. Householder, and Nathan VanHoudnos - CERT® Coordination Center and Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA

    ABSTRACT
    This paper explores how the current paradigm of vulnerability management might adapt to include machine learning systems through a thought experiment: what if flaws in machine learning (ML) were assigned Common Vulnerabilities and Exposures (CVE) identifiers (CVE-IDs)? We consider both ML algorithms and model objects. The hypothetical scenario is structured around exploring the changes to the six areas of vulnerability management: discovery, report intake, analysis, coordination, disclosure, and response. While algorithm flaws are well-known in the academic research community, there is no apparent clear line of communication between this research community and the operational communities that deploy and manage systems that use ML.

    1 INTRODUCTION
    The topic of this paper is more "security for automated reasoning" and less "automated reasoning for security." We will introduce the questions that need to be answered in order to adapt existing vulnerability management practices to support automated reasoning systems. We suggest answers to some of the questions, but some are quite thorny questions that may require a new paradigm of either vulnerability management, development of automated reasoning systems, or both. First, some definitions. We follow the CERT® Coordination Center (CERT/CC) definition of vulnerability: "a set of conditions or behaviors that allows the violation of an explicit or implicit security policy"