Expanding Explainability: Towards Social Transparency in AI Systems


UPOL EHSAN, Georgia Institute of Technology, USA
Q. VERA LIAO, IBM Research AI, USA
MICHAEL MULLER, IBM Research AI, USA
MARK O. RIEDL, Georgia Institute of Technology, USA
JUSTIN D. WEISZ, IBM Research AI, USA

As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially situated. AI systems are often socio-organizationally embedded. However, Explainable AI (XAI) approaches have been predominantly algorithm-centered. We take a developmental step towards socially-situated XAI by introducing and exploring Social Transparency (ST), a sociotechnically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making. To explore ST conceptually, we conducted interviews with 29 AI users and practitioners grounded in a speculative design scenario. We suggested constitutive design elements of ST and developed a conceptual framework to unpack ST's effects and implications at the technical, decision-making, and organizational levels. The framework showcases how ST can potentially calibrate trust in AI, improve decision-making, facilitate organizational collective actions, and cultivate holistic explainability. Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.

CCS Concepts: • Human-centered computing → Scenario-based design; Empirical studies in HCI; HCI theory, concepts and models; Collaborative and social computing theory, concepts and paradigms; • Computing methodologies → Artificial intelligence.

Additional Key Words and Phrases: Explainable AI, social transparency, human-AI interaction, explanations, artificial intelligence, sociotechnical, socio-organizational context

ACM Reference Format: Upol Ehsan, Q. Vera Liao, Michael Muller, Mark O. Riedl, and Justin D. Weisz. 2021. Expanding Explainability: Towards Social Transparency in AI Systems. In CHI Conference on Human Factors in Computing Systems (CHI '21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 29 pages. https://doi.org/10.1145/3411764.3445188

1 INTRODUCTION

Explanations matter. In human-human interactions, they provide necessary delineations of reasoning and justification for one's thoughts and actions, and serve as a primary vehicle to transfer knowledge from one person to another [65]. Explanations play a central role in sense-making, decision-making, coordination, and many other aspects of our personal and social lives [41]. They are becoming increasingly important in human-AI interactions as well. As AI systems are rapidly being employed in high-stakes decision-making scenarios in industries such as healthcare [63], finance [76], college admissions [79], hiring [19], and criminal justice [37], the need for explainability becomes paramount. Explainability is sought by users and other stakeholders not only to understand and develop appropriate trust in AI systems, but also to support the discovery of new knowledge and to make informed decisions [58]. In response to this emerging need for explainability, there has been commendable progress in the field of Explainable AI (XAI), especially around algorithmic approaches to generate representations of how a machine learning (ML) model operates or makes decisions.
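To make concrete what such algorithm-centered explanations typically look like, consider a minimal sketch of a feature-attribution explanation. This is only an illustration, not a method from this paper: the synthetic data and feature names are hypothetical, and permutation importance stands in for the broader family of algorithmic XAI techniques.

```python
# Minimal sketch of an algorithm-centered explanation: global feature
# attribution via permutation importance. The dataset and feature names
# are hypothetical stand-ins for a pricing model.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["deal_size", "region_index", "contract_length",
                 "competitor_price", "discount_history"]

# Synthetic data standing in for historical pricing records.
X, y = make_regression(n_samples=500, n_features=len(feature_names),
                       noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# The "explanation": how much does shuffling each feature degrade performance?
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>18}: {score:.3f}")
```

An output like this speaks only to how the model weighs its inputs; it says nothing about the organizational setting in which the recommendation will be acted upon, which is the gap examined in the remainder of this paper.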
Despite the recent growth spurt in the field of XAI, studies examining how people actually interact with AI explanations have found popular XAI techniques to be ineffective [6, 80, 111], potentially risky [50, 95], and underused in real-world contexts [58]. The field has been critiqued for its techno-centric view, where "inmates [are running] the asylum" [70], based on the impression that XAI researchers often develop explanations based on their own intuition rather than the situated needs of their intended audience. Currently, the dominant algorithm-centered XAI approaches make up only a small fragment of the landscape of explanations as studied in the Social Sciences [65, 70, 71, 101] and exhibit significant gaps from how explanations are sought and produced by people. Certain techno-centric pitfalls that are deeply embedded in AI and Computer Science, such as Solutionism (always seeking technical solutions) and Formalism (seeking abstract, mathematical solutions) [32, 87], are likely to further widen these gaps.

One way to address the gaps would be to critically reflect on the status quo. Here, the lens of Agre's Critical Technical Practice (CTP) [4, 5] can help. CTP encourages us to question the core epistemic and methodological assumptions in XAI, critically reflect on them to overcome impasses, and generate new questions and hypotheses. By bringing the unconscious aspects of experience to our conscious awareness, critical reflection makes them actionable [24, 25, 88]. Put differently, a CTP-inspired reflective perspective on XAI [26] encourages us to ask: by continuing the dominant algorithm-centered paradigm in XAI, what perspectives are we missing? How might we incorporate the marginalized perspectives to embody alternative technology? In this case, the dominant XAI approach can be construed as algorithm-centered, one that privileges technical transparency and circumscribes the epistemic space of explainable AI around model transparency.

An algorithm-centered approach could suffice if explanations and AI systems existed in a vacuum. However, neither explanations nor AI systems are devoid of situated context. On one hand, explanations (as a construct) are socially situated [64, 65, 70, 105]. Explanation is first and foremost a shared meaning-making process that occurs between an explainer and an explainee, a process that is responsive to the goals and changing beliefs of both parties [20, 38, 39, 45]. For our purposes in this paper, we adopt the broad definition that an explanation is an answer to a why-question [20, 57, 70]. On the other hand, implicit in AI systems are human-AI assemblages: most consequential AI systems are deeply embedded in socio-organizational tapestries in which groups of humans interact with them, going beyond a one-to-one human-AI interaction paradigm. Given this understanding, we might ask: if both AI systems and explanations are socially situated, then why do we not require incorporating the social aspects when we conceptualize explainability in AI systems? How can one form a holistic understanding of an AI system and make informed decisions if one only focuses on the technical half of a sociotechnical system? We illustrate the shortcomings of a solely technical view of explainability in the following scenario, which is inspired by incidents described by informants in our study.
You work for a leading cloud software company, responsible for determining product pricing in various markets. Your institution built a new AI-powered tool that provides pricing recommendations based on a wide variety of factors, and the tool has been extensively evaluated to assist you in pricing decisions. One day, you are tasked with creating a bid to be the cloud provider for a major financial institution. The AI-powered tool gives you a recommended price. You might think: why should I trust the AI's recommendation? You examine a variety of technical explanations the system provides: visualizations of the model's decision-making process and descriptions of how the algorithm reached this specific recommendation. Confident in the soundness of the model's recommendation, you create the bid and submit it to the client. You are disheartened to learn that the client rejected your bid and instead accepted the bid from a competitor.

Given a highly-accurate machine learning model, along with a full complement of technical explanations, why was the seller's pricing decision not successful? It is because the answer to the why-question is not limited to the machine explaining itself. It also lies in the situational and socio-organizational context, which one can learn from how price recommendations were handled by other sellers. What other factors went into those decisions? Were there regulatory or client-specific issues (e.g., internal budgetary constraints) that were beyond the scope of the model? Did something drastic happen in the operating environment (e.g., a global pandemic) that necessitated a different strategy? In other words, situational context matters, and it is with this context that the "why" questions can be answered effectively and completely.

At first glance, it may seem that socio-organizational context has nothing to do with explaining an AI system. Therein lies the issue: where we draw the boundary of our epistemic canvas for XAI matters. If the boundary is traced along the bounds of an algorithm, we risk excluding the human and social factors that significantly impact the way people make sense of a system. Sense-making is not just about opening the closed box of AI, but also about who is around the box, and the sociotechnical factors that govern the use of the AI system and the decision. Thus the "ability" in explainability does not lie exclusively in the guts of the AI system [26]. For the XAI field as a whole, if we restrict our epistemic lenses to solely focus on algorithms, we run the risk of excluding the very human and social factors that shape how AI systems are understood and used in practice.
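As a purely illustrative sketch, and not a design proposed in this paper, one way to picture the missing context in the scenario above is to imagine the price recommendation carrying, alongside its technical explanation, a record of how similar recommendations were handled by other sellers. All class names, fields, and values below are hypothetical.

```python
# Hypothetical sketch: pairing an AI price recommendation with
# socio-organizational context (how other sellers handled similar
# recommendations), in addition to the usual technical explanation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PastDecision:
    seller: str       # who acted on a similar recommendation
    action: str       # what they did with it (accepted, adjusted, overrode)
    outcome: str      # what happened as a result
    rationale: str    # why they acted that way, in their own words

@dataclass
class PriceRecommendation:
    recommended_price: float
    technical_explanation: str   # e.g., a feature-attribution summary
    social_context: List[PastDecision] = field(default_factory=list)

recommendation = PriceRecommendation(
    recommended_price=1_200_000.0,
    technical_explanation="Top factors: deal_size, competitor_price",
    social_context=[
        PastDecision(
            seller="Seller A",
            action="overrode the recommendation with a 10% discount",
            outcome="bid won",
            rationale="the client faced an internal budget cap that quarter",
        ),
    ],
)

for past in recommendation.social_context:
    print(f"{past.seller}: {past.action} ({past.outcome}) because {past.rationale}")
```

The point of the sketch is only that the why-question in the scenario is answered partly by records of human decisions and their rationales, not by the model alone.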
