The Partnership on AI
Jeffrey Heer (University of Washington; [email protected])
DOI: 10.1145/3284751.3284760
AI MATTERS, VOLUME 4, ISSUE 3, 2018

The Partnership on AI to Benefit People and Society (or PAI) is a consortium of industrial, non-profit, and academic partners for promoting public understanding and beneficial use of Artificial Intelligence (AI) technologies. Primary goals of the PAI are "to study and formulate best practices on AI technologies, to advance the public's understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society."

PAI membership includes representatives from over 70 partner organizations (https://www.partnershiponai.org/partners/), spanning nine countries (primarily in North America, Europe, and Asia) and including technology companies of various sizes, academic institutions, and a number of non-profits (including ACM). The Board of Directors represents a number of large tech companies (Amazon, Apple, Facebook, IBM, Google/DeepMind, Microsoft) as well as non-profits (ACLU, MacArthur Foundation, OpenAI) and academics (Arizona State University, Harvard, UC Berkeley). The founding co-chairs are Eric Horvitz (Microsoft) and Mustafa Suleyman (DeepMind), with day-to-day operations of the PAI run by a dedicated staff in San Francisco, CA.

The PAI's research and outreach efforts center around six thematic pillars:

• Safety-Critical AI
• Fair, Transparent, and Accountable AI
• AI, Labor, and the Economy
• Collaborations Between People and AI Systems
• Social and Societal Influences of AI
• AI and Social Good

For more details regarding each thematic pillar, see https://www.partnershiponai.org/about/#our-work.

The PAI kick-off meeting was held in Berlin in October 2017, with wide-ranging discussions organized around the above pillars. A primary outcome of the meeting and subsequent membership surveys was the formation of three initial working groups focused on "Safety-Critical AI", "Fair, Transparent, and Accountable AI", and "AI, Labor, and the Economy". Each of these groups is now conducting a series of meetings to develop best practices and resources for its topic.

I am a member of the Fair, Transparent, and Accountable AI (or FTA) working group, co-chaired by Ed Felten (Princeton) and Verity Harding (DeepMind). The charter of the FTA working group is to study issues and opportunities for promoting fair, accountable, transparent, and explainable AI, including the development of standards and best practices as a way to avoid and minimize the risk that AI systems will undermine fairness, equality, due process, civil liberties, or human rights. The FTA group has held two in-person meetings so far: the first in London in May 2018 and the second at Princeton in August 2018. The initial meeting resulted in the formation of four specific projects.

The "Papers" project aims to produce a set of three educational primers, one each for fairness, transparency, and accountability. The goal is to provide a starting point for people to learn about fairness in AI, how it is conceived, and what people are currently doing about it. The purpose is not just to come up with definitions, but to provide structure for how to think about problems of fairness, transparency, and accountability in the context of the work that Partner organizations are conducting.

The "Case Studies" project focuses on identifying multiple case studies that illustrate how FTA concerns can be addressed, as well as developing a framework for the analysis of such case studies. Meanwhile, the "Grand Challenges" project is working to create challenges, or endorse existing contests, developed around FTA issues in practice.

The "Diverse Voices" project seeks to understand different communities, how their voices can be included in tech policy making, and the impact of AI deployment on them. The project intends to engage with these groups in a meaningful manner and ensure that under-represented groups' voices are heard. One goal is to provide guidance for outreach, as many companies are interested in deploying ethical AI but are unsure how to address the problem.

Each of these projects is a work in progress, with additional milestones and meetings planned over the coming year.

Jeffrey Heer is a Professor of Computer Science & Engineering at the University of Washington and Co-Founder of Trifacta Inc., a provider of interactive tools for scalable data transformation. His research spans human-computer interaction, visualization, data science, and interactive machine learning. He also serves as an ACM representative to the Partnership on AI.

Copyright © 2018 by the author(s).