DRINKING YOUR OWN KOOL-AID: SELF-DECEPTION, DECEPTION CUES AND PERSUASION IN MEETINGS

JEREMIAH W. BENTLEY, University of Massachusetts Amherst
ROBERT J. BLOOMFIELD†, Cornell University
SHAI DAVIDAI*, Princeton University
MELISSA J. FERGUSON*, Cornell University

April 2016

We thank Abigail Allen, Mary Kate Dodgson, Michael Durney, Scott Emett, Steve Kachelmeier, Bob Libby, Eldar Maksymov, Patrick Martin (MAS Discussant), Mark Nelson, Ivo Tafkov (AAA Discussant), and workshop participants at the AAA Annual Meeting, the Brigham Young University Accounting Symposium, Cornell University, the LEMA group at Penn State, the Managerial Accounting Section Mid-Year Meeting, the University of Massachusetts Amherst, and the University of Michigan for their helpful comments and suggestions.

†Corresponding author: Johnson Graduate School of Management, Cornell University, Ithaca, NY 14850. Email: [email protected]

*Joined the paper after the first two authors had written a manuscript based on experiment 1. All four authors contributed equally to the development of the paper from that point.

Abstract: Two experiments show that face-to-face meetings help users discern reporters' true beliefs better than a written report alone. Both experiments are based on a 'cheap talk' setting, modified to include two features common to accounting settings: reporters base reports on rich information, and (in a meeting condition) have rich channels of communication to users. Experiment 1 shows that meetings improve users' ability to discern the beliefs reporters held before they had an incentive to deceive the user. Once reporters learned of their incentive to deceive users, they revised their beliefs toward what they wanted users to believe (they self-deceived); those who revised more were more successful in their deception. Experiment 2 shows that users discerned reporters' beliefs through linguistic tone: reporters who believed their reports used more positive words. The results highlight the importance of face-to-face meetings and provide experimental support for Trivers' self-deception theory.

Key words: Deception, persuasion, linguistics, cheap-talk, reporting, motivated reasoning, self-deception

I. INTRODUCTION

"Jerry, just remember. It's not a lie… if you believe it."
George Costanza, Seinfeld, Episode 102 ("The Beard") (1995).1

1 From the script provided at http://www.seinology.com/scripts/script-102.shtml, accessed March 2, 2016.

The face-to-face meeting is a pervasive institution for evaluating the veracity of performance reports. We report the results of a laboratory experiment showing that the rich communication environment of a face-to-face meeting causes reporters to betray their true beliefs to the users of their reports. Reporters limit detection by deceiving themselves into believing what they prefer the users to believe (they "drink their own Kool-Aid"2). A second experiment shows that reporters betray their beliefs partly by using more positive linguistic tone when they are honest than when they are deceptive.

2 See, for example, http://www.geekwire.com/2011/drink-koolaid/.

Both of our experiments examine 'cheap talk' settings in which a reporter communicates to a user without any auditing technologies or punishments for misreporting. Because the reporter and user have misaligned incentives, economic theory predicts that the reporter will lie and that the user will ignore the report (e.g., Crawford and Sobel 1982; Forsythe, Lundholm, and Reitz 1999).
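To make this benchmark prediction concrete, the following is a minimal, illustrative Python sketch of a stylized two-state, two-message cheap-talk game with the kind of misaligned incentives described above. The state space, payoffs, and strategy sets are our own simplifying assumptions, not the paper's design: the reporter is paid only when the user chooses "deck 1," while the user is paid only when she chooses the truly better deck. Brute-force enumeration finds no pure-strategy equilibrium in which the message is informative and the user responds to it, which is the sense in which theory predicts the user ignores the report.

```python
from itertools import product

STATES = ["deck1_better", "deck2_better"]    # equally likely
MESSAGES = ["say_deck1", "say_deck2"]
ACTIONS = ["choose_deck1", "choose_deck2"]

def reporter_payoff(action):
    # The reporter earns a commission only when the user chooses deck 1,
    # regardless of which deck is truly better (misaligned incentives).
    return 1.0 if action == "choose_deck1" else 0.0

def user_payoff(state, action):
    # The user is paid only when she picks the deck that is truly better.
    match = (state == "deck1_better") == (action == "choose_deck1")
    return 1.0 if match else 0.0

# Pure strategies: the reporter maps each state to a message,
# the user maps each message to an action.
reporter_strats = list(product(MESSAGES, repeat=len(STATES)))
user_strats = list(product(ACTIONS, repeat=len(MESSAGES)))

def is_equilibrium(r_strat, u_strat):
    # Reporter must best-respond in every state.
    for i, _state in enumerate(STATES):
        induced = u_strat[MESSAGES.index(r_strat[i])]
        best = max(reporter_payoff(u_strat[MESSAGES.index(m)]) for m in MESSAGES)
        if reporter_payoff(induced) < best:
            return False
    # User must best-respond to every on-path message, given the states
    # consistent with the reporter's strategy (uniform prior).
    for j, _msg in enumerate(MESSAGES):
        consistent = [s for i, s in enumerate(STATES) if r_strat[i] == MESSAGES[j]]
        if not consistent:
            continue  # off-path message: no restriction in this sketch
        current = sum(user_payoff(s, u_strat[j]) for s in consistent)
        best = max(sum(user_payoff(s, a) for s in consistent) for a in ACTIONS)
        if current < best:
            return False
    return True

informative = [
    (r, u) for r, u in product(reporter_strats, user_strats)
    if is_equilibrium(r, u) and len(set(r)) > 1 and len(set(u)) > 1
]
print("Informative pure-strategy equilibria:", informative)  # prints []
```

The intuition the sketch captures is that the reporter's payoff does not depend on the true state, so any message rule that would move the user is dominated by always sending the message that induces the commission deck; anticipating this, a rational user's choice cannot vary with the message.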
To capture key features of deception in accounting settings, we modify the usual laboratory cheap talk setting in two ways. First, we require reporters to form judgments about the meaning of rich data, much as they would when reporting reserves, allowances, useful lives, impairments, or fair values. Second, we allow some reporter-user pairs to meet face-to-face to discuss the report, as they would through meetings with shareholders, conference calls, or narrative reports to investors (such as Management Discussion and Analysis) or constituents (as in the narratives recommended by GASB, 2008).

Our results are consistent with predictions drawn from Trivers' self-deception theory (Trivers 1976/2006, 1985). Trivers and coauthors (e.g., von Hippel and Trivers 2011) argue that people who deceive themselves are better able to deceive others. We define other-deception (which we also refer to simply as "deception") as someone expressing a judgment that is distorted in a directional fashion with the intent to persuade another party of something that the expressing party does not sincerely believe. Following Trivers (1976/2006), we define self-deception as distorting one's sincere belief about a judgment (not merely the expression of the judgment) towards what one wishes someone else will believe.

Self-deception theory argues that deception comes with psychological costs (e.g., cognitive dissonance, discomfort from violating other-regarding preferences) and cognitive costs (e.g., increased cognitive load from keeping a consistent false story). These costs result in subtle deception cues that may be detected by users, such as less use of positive words relative to negative words, increased pauses, and uncontrollable microexpressions (e.g., Vrij 2008; DePaulo et al. 2003). Reporters suppress these cues by deceiving themselves.

The traditional cheap talk paradigm restricts communication between reporters and users, which prevents the transmission and detection of deception cues. We predict that relaxing restrictions on communication by allowing reporters and users to meet will increase the transmission and detection of deception cues, which in turn will increase the degree to which users' beliefs are associated with reporters' initial beliefs and self-deception.

To test our predictions, we construct a two-player cheap-talk game that presents reporters with rich and subjective information. One participant of each pair is assigned to be the reporter and enters the lab ostensibly to participate in an individual decision-making study. He sees two decks of cards.3 Each card shows the title and title screen of a YouTube video. The monetary value of the card is a function of the number of times the video has been viewed on YouTube. The number of views is not shown on the card, forcing reporters to make a subjective judgment that requires rich communication and is susceptible to self-deception. After examining both decks of cards, the reporter decides how many cards he would like to have drawn randomly from each deck, for a total of 10 cards, knowing he will be paid for the cash value of each card drawn.

3 For fluency and clarity we refer to the reporter using masculine pronouns and the user using feminine pronouns throughout the paper. However, male and female participants are randomly assigned to conditions as either reporters or users.
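The payoff mechanics of this initial choice can be made concrete with a short sketch. The view counts in each deck and the mapping from views to dollars below are hypothetical placeholders (the excerpt states only that card value is a function of views), so this illustrates only the structure of the task: allocate ten random draws across two decks and be paid the value of the cards drawn.

```python
import random

# Hypothetical view counts for the videos in each deck (placeholders only;
# the excerpt does not report the actual decks).
DECK1_VIEWS = [1_200_000, 450_000, 9_800_000, 70_000, 2_300_000]
DECK2_VIEWS = [300_000, 150_000, 800_000, 5_000_000, 60_000]

def card_value(views):
    # Assumed value function: one cent per 100,000 views. The paper states only
    # that value is a function of views; this mapping is illustrative.
    return (views / 100_000) * 0.01

def reporter_initial_payoff(n_from_deck1, seed=None):
    """Pay the reporter the value of 10 randomly drawn cards:
    n_from_deck1 from deck 1 and the remainder from deck 2."""
    rng = random.Random(seed)
    draws = (rng.choices(DECK1_VIEWS, k=n_from_deck1)
             + rng.choices(DECK2_VIEWS, k=10 - n_from_deck1))
    return sum(card_value(v) for v in draws)

if __name__ == "__main__":
    # The allocation itself (how many of the 10 cards the reporter requests
    # from deck 1) is what reveals his initial judgment about the decks.
    for n in (0, 5, 10):
        avg = sum(reporter_initial_payoff(n, seed=s) for s in range(1000)) / 1000
        print(f"{n} cards from deck 1 -> average payoff ${avg:.2f}")
```

Because this choice is made before the reporter learns of any commission, it is the measure of the reporter's initial, pre-incentive judgment used in Experiment 1.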
Next, the reporter is told that he will have the chance to interact with another participant (the user), and be paid a bonus of $0.50 for each card that he can persuade the user to draw from one deck (the 'commission' deck). The commission deck is selected by coin flip, and thus may or may not be the deck he sincerely believes to be the better option. The user knows the basic nature of the card decks, but has seen only three cards from each deck. The user therefore has an incentive to learn the reporter's sincere judgment, but also knows that the reporter may have an incentive to express that judgment deceptively. After the interaction, the user decides how many cards she would like to draw from each deck, for a total of 20 cards, and is paid the value of the cards drawn. The user's decision reflects the reporter's ability for other-deception. The reporter also draws another ten cards from the two decks under the same incentives as before. This decision reflects the reporter's judgment after exposure to forces for self- and other-deception.

Our key manipulation allows half of the reporter-user pairs to meet. In the no meeting condition, the reporter gives the user a handwritten recommendation simply stating the number of cards the user should choose from each deck (e.g., "15 from deck 1 and 5 from deck 2"), but there is no other interaction. In the meeting condition we relax the restrictions on communication: the handwritten recommendation is followed by an opportunity for the participants to meet and talk face-to-face. Such rich communication allows for a variety of deception cues which may be detected by users, and which may be suppressed by self-deception.

Consistent with our predictions, we find that users' choices are associated with reporters' initial judgments (as measured from reporters' initial card choices) when the parties meet face-to-face, but not when they communicate only through the report. Reporters revise their judgments to be more consistent with what they prefer users to believe, whether or not they meet with the user, but these revisions are more strongly associated with users' beliefs when they meet than when they don't. Taken together, the results of our first experiment provide evidence that, as predicted by self-deception theory, meetings provide cues that allow users to determine reporters' judgments, and that reporters can make their recommendations more convincing by revising their judgments toward what they wish the user to believe.

Our second experiment examines the mechanism by which self-deception aids in other-deception. To do so, we devote our entire sample to the meeting condition, record all conversations between reporters and users, and measure the extent to which linguistic deception cues mediate the relationship between reporter and user beliefs. Consistent with prior research on deception (e.g., Newman et al.