The Partnership on AI
Jeffrey Heer (University of Washington; jheer@uw.edu)
DOI: 10.1145/3284751.3284760

The Partnership on AI to Benefit People and Society (or PAI) is a consortium of industrial, non-profit, and academic partners for promoting public understanding and beneficial use of Artificial Intelligence (AI) technologies. Primary goals of the PAI are "to study and formulate best practices on AI technologies, to advance the public's understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society."

PAI membership includes representatives from over 70 partner organizations (https://www.partnershiponai.org/partners/), spanning nine countries (primarily in North America, Europe, and Asia) and including technology companies of various sizes, academic institutions, and a number of non-profits (including ACM). The Board of Directors represents a number of large tech companies (Amazon, Apple, Facebook, IBM, Google/DeepMind, Microsoft) as well as non-profits (ACLU, MacArthur Foundation, OpenAI) and academics (Arizona State University, Harvard, UC Berkeley). The founding co-chairs are Eric Horvitz (Microsoft) and Mustafa Suleyman (DeepMind), with day-to-day operations of the PAI run by a dedicated staff in San Francisco, CA.

The PAI's research and outreach efforts center around six thematic pillars:

• Safety-Critical AI
• Fair, Transparent, and Accountable AI
• AI, Labor, and the Economy
• Collaborations Between People and AI Systems
• Social and Societal Influences of AI
• AI and Social Good

For more details regarding each thematic pillar, see https://www.partnershiponai.org/about/#our-work.

The PAI kick-off meeting was held in Berlin in October 2017, with wide-ranging discussions organized around the above pillars. A primary outcome of the meeting and subsequent membership surveys was the formation of three initial working groups focused on "Safety-Critical AI", "Fair, Transparent, and Accountable AI", and "AI, Labor, and the Economy". Each of these groups is now conducting a series of meetings to develop best practices and resources for their respective topics.

I am a member of the Fair, Transparent, and Accountable AI (or FTA) working group, co-chaired by Ed Felten (Princeton) and Verity Harding (DeepMind). The charter of the FTA working group is to study issues and opportunities for promoting fair, accountable, transparent, and explainable AI, including the development of standards and best practices as a way to avoid and minimize the risk that AI systems will undermine fairness, equality, due process, civil liberties, or human rights. The FTA group has held two in-person meetings so far: the first in London in May 2018 and the second at Princeton in August 2018. The initial meeting resulted in the formation of four specific projects.

The "Papers" project aims to produce a set of three educational primers, one each for fairness, transparency, and accountability. The goal is to provide a starting point for people to learn about fairness in AI, how it is conceived, and what people are currently doing about it. The purpose is not just to come up with definitions, but to provide structure for how to think about problems of fairness, transparency, and accountability in the context of the work that Partner organizations are conducting.

The "Case Studies" project focuses on identifying multiple case studies that illustrate how FTA concerns can be addressed, as well as developing a framework for the analysis of such case studies. Meanwhile, the "Grand Challenges" project is working to create challenges, or endorse existing contests, developed around FTA issues in practice.

The "Diverse Voices" project seeks to understand different communities, how their voices can be involved in tech policy making, and the impact of AI deployment. The project intends to engage with these groups in a meaningful manner and ensure under-represented groups' voices are heard. One goal is to provide guidance for outreach, as many companies are interested in deploying ethical AI but are unsure how to address the problem.

Each of these projects is a work-in-progress, with additional milestones and meetings planned over the coming year.

Jeffrey Heer is a Professor of Computer Science & Engineering at the University of Washington and Co-Founder of Trifacta Inc., a provider of interactive tools for scalable data transformation. His research spans human-computer interaction, visualization, data science, and interactive machine learning. He also serves as an ACM representative to the Partnership on AI.

Copyright © 2018 by the author(s).
