The Pennsylvania State University
The Graduate School

EFFECTS OF AGENCY LOCUS AND TRANSPARENCY OF ARTIFICIAL INTELLIGENCE: UNCERTAINTY REDUCTION AND EMERGING MIND

A Dissertation in Mass Communications
by Bingjie Liu

© 2020 Bingjie Liu

Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

May 2020

The dissertation of Bingjie Liu was reviewed and approved* by the following:

S. Shyam Sundar, James P. Jimirro Professor of Media Effects, Dissertation Advisor, Chair of Committee
Mary Beth Oliver, Donald P. Bellisario Professor of Media Studies
Michael Schmierbach, Associate Professor of Communications
Denise Haunani Solomon, Liberal Arts Professor of Communication Arts and Sciences
Matthew McAllister, Professor of Communications, Chair of the Graduate Program

ABSTRACT

Existing research and mass media conceptualize interactive technologies, such as social robots and voice assistants, as machines without true agency despite their apparent autonomy and human-likeness. This is because such machines are typically fully programmed by humans and act by following human-made rules. However, self-learning artificial intelligence (AI), which increasingly powers many interactive technologies, is not fully programmed and does not merely follow human-made rules; instead, it learns rules from data with so little human interference that we often do not even understand the rules it has learned. This shift of agency locus from human to machine, and the lack of transparency about the learning outcomes, raise new questions for human-machine communication. How do individuals react to machines that learn autonomously yet remain opaque and mysterious? What measures should we take to cultivate appropriate levels of trust in such machines?
To answer these questions, the current study examines the effects of an AI system's agency locus, meaning whether it makes decisions by following human-made rules (human-agency AI) or rules it has learned from data by itself (machine-agency AI), and the level of transparency about such rules (no transparency vs. placebic transparency vs. real transparency), on users' cognitions, affect, and behaviors toward an AI system. Two online experiments following a 2 (agency locus: human vs. machine) × 3 (transparency: no vs. placebic vs. real) factorial design were conducted in two contexts (fake news detection and personality evaluation). Across contexts, the human-agency AI elicited greater person presence and homophily, and was trusted more, than the machine-agency AI. The machine-agency AI was perceived as more autonomous and elicited more "mind perception," which also enhanced trust. Real transparency about the AI's internal states (i.e., its rules) reduced uncertainty and increased mind perception, both of which enhanced trust. By reducing uncertainty, real transparency also reduced anxiety and induced more excitement. Underlying the influence of agency locus and transparency on trust are both a route of anthropomorphism (person presence → uncertainty reduction → trust) and a non-anthropomorphism route of mind perception (perceived autonomy or direct access → mind perception → trust). These processes were found to be governed by laws of intergroup communication, interpersonal communication, and information processing. Specifically, participants were less influenced by peripheral cues carrying categorical information (i.e., agency locus) when they had sufficient cognitive resources (i.e., more past experience with AI applications, or real transparency). Participants were more motivated to scrutinize messages about an AI system's internal states when the need for uncertainty reduction was high (i.e., when interacting with the machine-agency AI).
Contexts, as proxies for individuals' goal structures and social densities, were found to shape outcomes of human-machine interaction, potentially by influencing users' levels of involvement and their expectancies. Findings suggest that users recognize machine-learning AI as a mind that is not necessarily humanlike, and that knowledge of its internal states can, to some extent, help individuals cross the human-machine ontological boundary, move beyond the anthropomorphism route, and develop trust in AI. Findings shed light on fundamental interpersonal processes and on the larger problem of other minds. They also carry methodological implications for research on human-machine interaction and practical implications for the design of intelligent machines in general and of AI transparency in particular, while informing policy-making on AI regulation in terms of transparency and accountability.

TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
ACKNOWLEDGEMENTS
Introduction
Chapter 1 Scope and Conceptualization
  Uncertainty reduction and trust development in human-machine communication
    Conceptualizing trust in machines
    Trust development: the necessity and effect of uncertainty reduction in HMC
  Conceptualizing agency of AI: apparent agency, agency locus, and transparency
    Apparent agency of AI
    Agency locus shift: from proxy of human agency to machine agency
    Communication and transparency: manifestation of AI's mind
    Scope and focus of the current study
Chapter 2 Literature Review
  Effects of agency locus of AI
    Uncertainty reduction: the intergroup path and the interpersonal path
    Agency locus as identity cue: uncertainty reduction at intergroup level
    Above and beyond person presence: autonomy and mind perception
    Effects of agency locus on affect
    Individual differences in AI experience
  Effects of transparency
    Defining knowledge and transparency
    Effects of transparency on uncertainty reduction and trust
    Effects of transparency on uncertainty reduction at intergroup level
    Accessing machine's mind: mind perception and trust
    Effects of transparency on affect
    Symbolic effect of AI transparency: effect of placebic transparency
Chapter 3 Method
  Participants
  Procedure
  Stimuli
    Overview
    Agency locus
    Objects being judged and judgments
    Explanations
  Measures
    Manipulation check