Cambridge University Launches New Centre to Study AI and the Future of Intelligence 3 December 2015


Human-level intelligence is familiar in biological 'hardware'—it happens inside our skulls. Technology and science are now converging on a possible future where similar intelligence can be created in computers.

While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us? Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and a collaborator on the project, suggests that this would be "the biggest event in human history". Professor Stephen Hawking agrees, saying that "when it eventually does occur, it's likely to be either the best or worst thing ever to happen to humanity, so there's huge value in getting it right."

Now, thanks to an unprecedented £10 million grant from the Leverhulme Trust, the University of Cambridge is to establish a new interdisciplinary research centre, the Leverhulme Centre for the Future of Intelligence, to explore the opportunities and challenges of this potentially epoch-making technological development, both short and long term.

The Centre brings together computer scientists, philosophers, social scientists and others to examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century.

Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre, said: "Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad."

The Centre is a response to the Leverhulme Trust's call for "bold, disruptive thinking, capable of creating a step-change in our understanding". The Trust awarded the grant to Cambridge for a proposal developed with the Executive Director of the University's Centre for the Study of Existential Risk (CSER), Dr Seán Ó hÉigeartaigh. CSER investigates emerging risks to humanity's future, including climate change, disease, warfare and technological revolutions.

Dr Ó hÉigeartaigh said: "The Centre is intended to build on CSER's pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones."

The Leverhulme Centre for the Future of Intelligence spans institutions as well as disciplines. It is a collaboration led by the University of Cambridge with links to the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley. It is supported by Cambridge's Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). As Professor Price put it, "a proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH's vision and expertise."

Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John's College, Cambridge, said: "The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks—from recognising images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications."

The Centre aims to lead the global conversation about the opportunities and challenges to humanity that lie ahead in the future of AI. Professor Price said: "With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre."

Provided by University of Cambridge

APA citation: Cambridge University launches new centre to study AI and the future of intelligence (2015, December 3) retrieved 29 September 2021 from https://phys.org/news/2015-12-cambridge-university-centre-ai-future.html
Recommended publications
  • Letters to the Editor
    Articles | Letters to the Editor

    Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter

    Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents — systems that perceive and act in some environment. In this context, "intelligence" is related to statistical and economic notions of rationality — colloquially, the ability to make good decisions, plans, or inferences.

    […] is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008–09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a sig[…]

    […] computer scientists, innovators, entrepreneurs, statisticians, journalists, engineers, authors, professors, teachers, students, CEOs, economists, developers, philosophers, artists, futurists, physicists, filmmakers, health-care professionals, research analysts, and members of many other fields. The earliest signatories follow, reproduced in order as they signed. For the complete list, see tinyurl.com/ailetter. – ed.

    Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and coauthor of the standard textbook Artificial Intelligence: a Modern Approach

    Tom Dietterich, Oregon State, President of […]
  • The Future of AI: Opportunities and Challenges
    The Future of AI: Opportunities and Challenges. Puerto Rico, January 2-5, 2015

    Ajay Agrawal is the Peter Munk Professor of Entrepreneurship at the University of Toronto's Rotman School of Management, Research Associate at the National Bureau of Economic Research in Cambridge, MA, Founder of the Creative Destruction Lab, and Co-founder of The Next 36. His research is focused on the economics of science and innovation. He serves on the editorial boards of Management Science, the Journal of Urban Economics, and The Strategic Management Journal.

    Anthony Aguirre has worked on a wide variety of topics in theoretical cosmology, ranging from intergalactic dust to galaxy formation to gravity physics to the large-scale structure of inflationary universes and the arrow of time. He also has a strong interest in science outreach, and has appeared in numerous science documentaries. He is a co-founder of the Foundational Questions Institute and the Future of Life Institute.

    Geoff Anders is the founder of Leverage Research, a research institute that studies psychology, cognitive enhancement, scientific methodology, and the impact of technology on society. He is also a member of the Effective Altruism movement, a movement dedicated to improving the world in the most effective ways. Like many members of the Effective Altruism movement, Geoff is deeply interested in the potential impact of new technologies, especially artificial intelligence.

    Blaise Agüera y Arcas works on machine learning at Google. Previously a Distinguished Engineer at Microsoft, he worked on augmented reality, mapping, wearable computing and natural user interfaces. He was the co-creator of Photosynth, software that assembles photos into 3D environments.
  • Beneficial AI 2017
    Beneficial AI 2017: Participants & Attendees

    Anthony Aguirre is a Professor of Physics at the University of California, Santa Cruz. He has worked on a wide variety of topics in theoretical cosmology and fundamental physics, including inflation, black holes, quantum theory, and information theory. He also has a strong interest in science outreach, and has appeared in numerous science documentaries. He is a co-founder of the Future of Life Institute, the Foundational Questions Institute, and Metaculus (http://www.metaculus.com/).

    Sam Altman is president of Y Combinator and was the cofounder of Loopt, a location-based social networking app. He also co-founded OpenAI with Elon Musk. Sam has invested in over 1,000 companies.

    Dario Amodei is the co-author of the recent paper Concrete Problems in AI Safety, which outlines a pragmatic and empirical approach to making AI systems safe. Dario is currently a research scientist at OpenAI, and prior to that worked at Google and Baidu. Dario also helped to lead the project that developed Deep Speech 2, which was named one of 10 "Breakthrough Technologies of 2016" by MIT Technology Review. Dario holds a PhD in physics from Princeton University, where he was awarded the Hertz Foundation doctoral thesis prize.

    Amara Angelica is Research Director for Ray Kurzweil, responsible for books, charts, and special projects. Amara's background is in aerospace engineering, in electronic warfare, electronic intelligence, human factors, and computer systems analysis areas. A co-founder and initial Academic Model/Curriculum Lead for Singularity University, she was formerly on the board of directors of the National Space Society, is a member of the Space Development Steering Committee, and is a professional member of the Institute of Electrical and Electronics Engineers (IEEE).
  • F.3. The New Politics of Artificial Intelligence [Preliminary Notes]
    F.3. THE NEW POLITICS OF ARTIFICIAL INTELLIGENCE [preliminary notes]

    MAIN MEMO (pp 3-14): I. Introduction; II. The Infrastructure: 13 key AI organizations; III. Timeline: 2005-present; IV. Key Leadership; V. Open Letters; VI. Media Coverage; VII. Interests and Strategies; VIII. Books and Other Media; IX. Public Opinion; X. Funders and Funding of AI Advocacy; XI. The AI Advocacy Movement and the Techno-Eugenics Movement; XII. The Socio-Cultural-Psychological Dimension; XIII. Push-Back on the Feasibility of AI+ Superintelligence; XIV. Provisional Concluding Comments

    ATTACHMENTS (pp 15-78); ADDENDA (pp 79-85); APPENDICES [not included in this pdf]; ENDNOTES (pp 86-88); REFERENCES (pp 89-92)

    Richard Hayes, July 2018. DRAFT: NOT FOR CIRCULATION OR CITATION

    ATTACHMENTS: A. Definitions, usage, brief history and comments. B. Capsule information on the 13 key AI organizations. C. Concerns raised by key sets of the 13 AI organizations. D. Current development of AI by the mainstream tech industry. E. Op-Ed: Transcending Complacency on Superintelligent Machines - 19 Apr 2014. F. Agenda for the invitational "Beneficial AI" conference - San Juan, Puerto Rico, Jan 2-5, 2015. G. An Open Letter on Maximizing the Societal Benefits of AI - 11 Jan 2015. H. Partnership on Artificial Intelligence to Benefit People and Society (PAI) - roster of partners. I. Influential mainstream policy-oriented initiatives on AI: Stanford (2016); White House (2016); AI NOW (2017). J. Agenda for the "Beneficial AI 2017" conference, Asilomar, CA, Jan 2-8, 2017. K. Participants at the 2015 and 2017 AI strategy conferences in Puerto Rico and Asilomar. L. Notes on participants at the Asilomar "Beneficial AI 2017" meeting.
  • Public Response to RFI on AI
    White House Office of Science and Technology Policy, Request for Information on the Future of Artificial Intelligence: Public Responses. September 1, 2016

    Respondent 1: Chris Nicholson, Skymind Inc.

    This submission will address topics 1, 2, 4 and 10 in the OSTP's RFI: the legal and governance implications of AI; the use of AI for public good; the social and economic implications of AI; and the role of "market-shaping" approaches.

    Governance, anomaly detection and urban systems. The fundamental task in the governance of urban systems is to keep them running; that is, to maintain the fluid movement of people, goods, vehicles and information throughout the system, without which it ceases to function. Breakdowns in the functioning of these systems and their constituent parts are therefore of great interest, whether in their energy, transport, security or information infrastructures. Those breakdowns may result from deterioration of the physical plant, sudden and unanticipated overloads, natural disasters or adversarial behavior. In many cases, municipal governments possess historical data about those breakdowns and the events that precede them, in the form of activity and sensor logs, video, and internal or public communications. Where they don't possess such data already, it can be gathered. Such datasets are a tremendous help when applying learning algorithms to predict breakdowns and system failures. With enough lead time, those predictions make pre-emptive action possible, action that would cost cities much less than recovery efforts in the wake of a disaster. Our choice is between an ounce of prevention and a pound of cure. Even in cases where we don't have data covering past breakdowns, algorithms exist to identify anomalies in the data we begin gathering now.
  • Program Guide
    The First AAAI/ACM Conference on AI, Ethics, and Society
    February 1-3, 2018, Hilton New Orleans Riverside, New Orleans, Louisiana, 70130 USA

    Program Guide. Organized by AAAI, ACM, and ACM SIGAI. Sponsored by Berkeley Existential Risk Initiative, DeepMind Ethics & Society, Future of Life Institute, IBM Research AI, Pricewaterhouse Coopers, Tulane University.

    AIES 2018 Conference Overview

    Thursday, February 1st (Tulane University):
    6:00 Invited talk: The Venerable Tenzin Priyadarshi, MIT
    7:00 Reception

    Friday, February 2nd (Hilton Riverside):
    8:30-9:00 Opening
    9:00-10:00 Invited talk, AI: Iyad Rahwan and Edmond Awad, MIT
    10:00-10:15 Coffee Break
    10:15-11:15 AI session 1
    11:15-12:15 AI session 2
    12:15-2:00 Lunch Break
    2:00-3:00 AI and law session 1
    3:00-4:00 AI and law session 2
    4:00-4:30 Coffee Break
    4:30-5:30 Invited talk, AI and law: Carol Rose, ACLU
    5:30-6:30 Panel 1: What will Artificial Intelligence bring?
    7:00 Conference reception

    Saturday, February 3rd (Hilton Riverside):
    9:00-10:00 Invited talk, AI and jobs: Richard Freeman, Harvard
    10:00-10:15 Coffee Break
    10:15-11:15 AI session 3
    11:15-12:15 AI session 4
    12:15-2:00 Lunch Break
    2:00-3:00 AI and philosophy session 1
    3:00-4:00 AI and philosophy session 2
    4:00-4:30 Coffee Break
    4:30-5:30 Invited talk, AI and philosophy: Patrick Lin, Cal Poly
    5:30-6:30 Panel 2: Prioritizing Ethical Considerations in AI: Who Sets the Standards?
    7:00 Closing reception

    Acknowledgments: AAAI and ACM acknowledge and thank the following individuals for their generous contributions of time and energy in the successful creation and planning of AIES 2018. Program Chairs: AI and jobs: Jason Furman (Harvard University); AI and law: Gary Marchant
  • Portrayals and Perceptions of AI and Why They Matter
    Portrayals and perceptions of AI and why they matter

    Cover image: Turk playing chess, design by Wolfgang von Kempelen (1734-1804), built by Christoph Mechel, 1769, colour engraving, circa 1780.

    Contents: Executive summary; Introduction; Narratives and artificial intelligence; The AI narratives project; AI narratives; A very brief history of imagining intelligent machines; Embodiment; Extremes and control; Narrative and emerging technologies: lessons from previous science and technology debates; Implications; Disconnect from the reality of the technology; Underrepresented narratives and narrators; Constraints; The role of practitioners; Communicating AI; Reshaping AI narratives; The importance of dialogue; Annexes.

    Executive summary

    The AI narratives project - a joint endeavour by the Leverhulme Centre for the Future of Intelligence and the Royal Society - has been examining how researchers, communicators, policymakers, and publics talk about artificial intelligence, and why this matters.

    This write-up presents an account of how AI is portrayed and perceived in the English-speaking West, with a particular focus on the UK. It explores the limitations of prevalent fictional and non-fictional narratives and suggests how practitioners might move beyond them. Its primary audience is professionals with an interest in public discourse about AI, including those in the media, government, academia, and industry.

    Narratives are essential to the development of science and people's engagement with new knowledge and new applications. Both fictional and non-fictional narratives have real world effects. Recent historical examples of the evolution of disruptive technologies and public debates with a strong science component (such as genetic modification, nuclear energy and climate change) offer […]
  • AI, Politics and Security in the Asia Pacific Program
    AI, POLITICS AND SECURITY IN THE ASIA PACIFIC: PROGRAM
    Thursday 14 March - Friday 15 March 2019, Mills Room, Chancelry Building, 10 East Road, ANU
    Co-convened by the Coral Bell School of Asia Pacific Affairs (ANU), in partnership with the Leverhulme Centre for the Future of Intelligence (Cambridge)

    Contents: Welcome; Conference Program; Session 1: What is AI?; Session 2: AI, International Norms and Challenges to the Ethics and Laws of War; Session 3: AI and the Changing Nature of Security, Strategy and War; Session 4: The Future of AI and Associated Risks; Session 5: Understanding the Science Behind Targeted Messaging; Session 6: AI, Big Data Analytics, the Citizen and the State; Session 7: The Case of Cambridge Analytica: Revelations and Lessons Learned; Session 8: AI Impacts: Building Effective Networks in the Asia Pacific; Participant List.

    The ANU Coral Bell School of Asia Pacific Affairs (Bell School) is a world-leading centre for research, education, and policy analysis on international and Asia Pacific politics, security, diplomacy, and strategic affairs. CFI is an interdisciplinary research centre committed to exploring the impacts of AI, both short and long term; it is based at the University of Cambridge, with 'spokes' at Oxford, Imperial College London, and UC Berkeley.

    Welcome from Workshop Co-convenors

    The existing and potential impact of AI is impossible to ignore. AI already influences every aspect of our lives, sometimes in seemingly unremarkable ways. Yet we are also seeing signs of how it may radically alter international politics and security.
  • IEEE EAD First Edition Committees List Table of Contents (1 January 2019)
    IEEE EAD First Edition Committees List Table of Contents (1 January 2019)

    Executive Committee Descriptions and Members

    Content Committee Descriptions and Members: General Principles; Classical Ethics in A/IS; Well-being; Affective Computing; Personal Data and Individual Agency; Methods to Guide Ethical Research and Design; A/IS for Sustainable Development; Embedding Values into Autonomous Intelligent Systems; Policy; Law; Extended Reality; Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI); Reframing Autonomous Weapons Systems

    Supporting Committee Descriptions and Members: The Editing Committee; The Communications Committee; The Glossary Committee; The Outreach Committee

    A Special Thanks To: Arabic Translation Team; China Translation Team; Japan Translation Team; South Korea Translation Team; Brazil Translation Team; Thailand Translation Team; Russian Federation Translation Team; Hong Kong Team; Listing of Contributors to Request for Feedback for EADv1; Listing of Attendees for SEAS Events iterating EADv1 and EADv2; All our P7000™ Working Group Chairs

    1 January 2019 | The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

    Executive Committee Descriptions & Members

    Raja Chatila - Executive Committee Chair, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Raja Chatila, IEEE Fellow, is Professor at Sorbonne Université, Paris, France, and Director of the Institute of Intelligent Systems and Robotics (ISIR). He is also Director of the Laboratory of Excellence "SMART" on human-machine interactions. His work covers several aspects of autonomous and interactive robotics: robot navigation and SLAM, motion planning and control, cognitive and control architectures, human-robot interaction, and robot learning, with applications in the areas of service, field and space robotics.
  • Buku Pengantar Teknologi Informasi.pdf [Introduction to Information Technology]
    Pengantar Teknologi Informasi (Introduction to Information Technology)

    Information technology is a general term for any technology that helps humans create, modify, store, communicate and/or disseminate information, uniting high-speed computing and communication for data, voice and video in the form of personal computers, telephones, TVs, electronic household appliances, and modern handheld devices.

    This book presents basic knowledge of information technology in a detailed, clear and practical way suited to everyday life. It consists of thirteen chapters: Introduction to Information Technology; Computer Hardware and Software; Data, Information, and Knowledge; Telecommunication Systems and Networks; Internet, Intranet, and Extranet; Functional, Enterprise and Interorganisational Systems; E-Commerce; Supply Chain Management; Data, Knowledge and Decision Support; Intelligent Systems; Strategic Systems and Reorganization; Information System Development; and Intellectual Property Rights. Matched to practical needs, this book is well suited as an initial reference for those who want to learn the basics of information technology.

    By Juhriyansyah Dalle, A. Akrim, and Baharuddin. Divisi Buku Perguruan Tinggi, PT RajaGrafindo Persada (Rajawali Pers), Jl. Raya Leuwinanggung No. 112, Kel. Leuwinanggung, Kec. Tapos, Kota Depok 16956. Email: rajapers@rajagrafindo.co.id, www.rajagrafindo.co.id

    Perpustakaan Nasional: Katalog Dalam Terbitan (KDT). Juhriyansyah Dalle, A.A Karim, Baharuddin. Pengantar Teknologi Informasi/Juhriyansyah Dalle, A.A Karim, Baharuddin.
  • Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: a Roadmap for Research
    Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research
    Jess Whittlestone, Rune Nyrup, Anna Alexandrova, Kanta Dihal and Stephen Cave

    Authors: Jess Whittlestone, Rune Nyrup, Anna Alexandrova, Kanta Dihal, Stephen Cave, Leverhulme Centre for the Future of Intelligence, University of Cambridge. With research assistance from: Ezinne Nwankwo, José Hernandez-Orallo, Karina Vold, Charlotte Stix.

    Citation: Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., Cave, S. (2019) Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. London: Nuffield Foundation. ISBN: 978-1-9160211-0-5. The Nuffield Foundation has funded this project, but the views expressed are those of the authors and not necessarily those of the Foundation.

    About the Nuffield Foundation: The Nuffield Foundation funds research, analysis, and student programmes that advance educational opportunity and social well-being across the United Kingdom. The research we fund aims to improve the design and operation of social policy, particularly in Education, Welfare, and Justice. Our student programmes provide opportunities for young people, particularly those from disadvantaged backgrounds, to develop their skills and confidence in quantitative and scientific methods. We have recently established the Ada Lovelace Institute, an independent research and deliberative body with a mission to ensure data and AI work for people and society. We are also the founder and co-funder of the Nuffield Council on Bioethics, which examines and reports on ethical issues in biology and medicine. We are a financially and politically independent charitable trust established in 1943 by William Morris, Lord Nuffield, the founder of Morris Motors.

    Acknowledgements: The authors are grateful to the following people for their input at workshops and valuable feedback on drafts of this report: Haydn Belfield, Jude Browne, Sarah Castell, […]
  • My Route to Existential Risk - NYTimes.com
    My Route to Existential Risk - NYTimes.com
    http://opinionator.blogs.nytimes.com/2013/01/27/cambridge-cab...

    JANUARY 27, 2013, 5:00 PM
    Cambridge, Cabs and Copenhagen: My Route to Existential Risk
    By HUW PRICE

    In Copenhagen the summer before last, I shared a taxi with a man who thought his chance of dying in an artificial intelligence-related accident was as high as that of heart disease or cancer. No surprise if he'd been the driver, perhaps (never tell a taxi driver that you're a philosopher!), but this was a man who has spent his career with computers. Indeed, he's so talented in that field that he is one of the team who made this century so, well, 21st - who got us talking to one another on video screens, the way we knew we'd be doing in the 21st century, back when I was a boy, half a century ago. For this was Jaan Tallinn, one of the team who gave us Skype. (Since then, taking him to dinner in Trinity College here in Cambridge, I've had colleagues queuing up to shake his hand, thanking him for keeping them in touch with distant grandchildren.)

    I knew of the suggestion that A.I. might be dangerous, of course. I had heard of the "singularity," or "intelligence explosion" - roughly, the idea, originally due to the statistician I. J. Good (a Cambridge-trained former colleague of Alan Turing's), that once machine intelligence reaches a certain point, it could take over its own process of improvement, perhaps exponentially, so that we humans would soon be left far behind.