People Show Envy, Not Guilt, When Making Decisions with Machines
Celso M. de Melo
USC Institute for Creative Technologies
12015 Waterfront Drive, Building #4, Playa Vista, CA 90094-2536
[email protected]

Jonathan Gratch
USC Institute for Creative Technologies
12015 Waterfront Drive, Building #4, Playa Vista, CA 90094-2536
[email protected]

Abstract— Research shows that people consistently reach more efficient solutions than those predicted by standard economic models, which assume people are selfish. Artificial intelligence, in turn, seeks to create machines that can achieve these levels of efficiency in human-machine interaction. However, as reinforced in this paper, people's decisions are systematically less efficient – i.e., less fair and favorable – with machines than with humans. To understand the cause of this bias, we resort to a well-known experimental economics model: Fehr and Schmidt's inequity aversion model. This model accounts for people's aversion to disadvantageous outcome inequality (envy) and aversion to advantageous outcome inequality (guilt). We present an experiment where participants engaged in the ultimatum and dictator games with human or machine counterparts. By fitting this data to Fehr and Schmidt's model, we show that people acted as if they were just as envious of humans as of machines; in contrast, people showed less guilt when making decisions that were unfavorable to machines. This result thus provides critical insight into the bias people show, in economic settings, in favor of humans. We discuss implications for the design of machines that engage in social decision making with humans.

Keywords— Guilt; Decision Making; Bias; Human vs. Machines

This research was supported in part by grants NSF IIS-1211064, SES-0836004, and AFOSR FA9550-09-1-0507. The content does not necessarily reflect the position or the policy of any Government, and no official endorsement should be inferred.

I. INTRODUCTION

The last few decades have seen growing interest in the development of artificially intelligent (software and robotic) agents that can engage efficiently in decision making with humans [1]-[3]. Here, "efficiency" stands for economic outcomes that are more favorable to all parties, when compared to the rational predictions of standard economic models. However, despite evidence that machines can be treated similarly to humans [4], [5], recent research suggests that people show systematic differences in patterns of brain activation and behavior when engaging in decision making with machines, when compared to humans [6]-[10]. Building on these findings, we present further evidence that, in economic settings, people favor humans over machines. This is an important challenge for artificial intelligence because it shows a fundamental bias that needs to be overcome if we hope to achieve in human-machine interaction the kind of efficiency we see in human-human interaction.

To gain insight into this bias we resort to a well-known experimental economics theory: Fehr and Schmidt's inequity aversion theory [11]. The advantage of this theory is that, in contrast to standard economic theory, it takes into consideration people's other-regarding preferences, such as altruism, fairness, and reciprocity. These are the kinds of social considerations that often lead to higher efficiency than what is predicted by standard models [12], [13]. Specifically, the model accomplishes this by accounting for people's aversion to disadvantageous outcome inequality, or envy, and aversion to advantageous outcome inequality, or guilt. We present an experiment where participants engaged in standard economic games with human or machine counterparts. By fitting this experimental data to Fehr and Schmidt's model we show that, when the outcome was unequal in favor of others, people were not more envious of humans than of machines; in contrast, when the outcome was unequal in their favor, people showed less guilt with machines than with humans. To the best of our knowledge, this is the first time clear evidence has been shown that increased guilt is important in explaining people's bias in favor of humans, when compared to machines.

A. Fairness, Altruism, and Reciprocity

Evidence for these other-regarding preferences has been shown in simple games where the selfish option is obvious, yet people still choose differently. Two games that are particularly relevant for this paper are the ultimatum and dictator games. In the ultimatum game [14], there are two players: a proposer and a responder. The proposer is given an initial endowment of money and has to decide how much to offer to the responder. Then, the responder has to make a decision: if the offer is accepted, both players get the proposed allocation; if the offer is rejected, however, no one gets anything. The standard prediction is that the proposer should offer the minimum non-zero amount, as the responder will always prefer something to nothing. In practice, people usually offer 40 to 50 percent of the initial endowment, and low offers (about 20 percent of the endowment) are usually rejected [15]. A concern for fairness is usually argued to explain this behavior.

The dictator game [16] is similar to the ultimatum game, except that the responder doesn't get to make a decision and must accept whatever is offered by the proposer. This game, therefore, removes strategic considerations from the equation, i.e., the proposer doesn't have to fear that the offer will be rejected. Accordingly, the standard prediction is for the proposer to keep all the money and offer nothing. Nevertheless, even in this case, people offer an average of 10 to 25 percent of the initial endowment and the modal offer is 50 percent. Decisions in this game, thus, have been argued to reflect altruism.
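To make the two protocols concrete, here is a minimal sketch of their payoff rules; the endowment value and function names are ours, for illustration only, and are not from the paper.

```python
def ultimatum_payoffs(endowment, offer, accepted):
    """Proposer keeps endowment - offer; a rejection leaves both with nothing."""
    if not accepted:
        return 0.0, 0.0                    # rejection destroys the endowment
    return endowment - offer, offer        # (proposer, responder)

def dictator_payoffs(endowment, offer):
    """Same split, but the responder has no veto: the offer always stands."""
    return endowment - offer, offer        # (proposer, responder)

# Selfish prediction: near-zero offers. Observed: ~40-50% in the ultimatum
# game (low offers often rejected), ~10-25% on average in the dictator game.
print(ultimatum_payoffs(10.0, 4.0, accepted=True))    # (6.0, 4.0)
print(ultimatum_payoffs(10.0, 2.0, accepted=False))   # (0.0, 0.0)
print(dictator_payoffs(10.0, 2.0))                    # (8.0, 2.0)
```

The only structural difference is the responder's veto in the ultimatum game, which introduces the strategic risk that fair offers are often argued to mitigate.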
Experimental economists have advanced theories that attempt to account both for selfish behavior (e.g., in competitive markets [17]) and the kind of deviation from selfish behavior discussed above. Three kinds of models have been proposed: (a) models of social preferences, which assume that a player's utility function depends not only on one's own payoff, but on the others' payoffs as well [11], [18]; (b) models that account for the other's "type", i.e., people adjust their behavior according to whether the other is a selfish type or a (conditionally) altruistic type [19]; and, finally, (c) models of intention-based reciprocity, in which people consider the intention behind others' actions. Thus, for instance, one is more likely to accept an unfair offer if the other did not have a choice (or did not intend to make the offer) than if the other intended to act unkindly [20], [21]. In this paper, we focus on one model of social preferences that has received considerable attention and can capture behavior in many different economic settings: Fehr and Schmidt's inequity aversion model [11]. We discuss alternative models in the last section of the paper.

Fehr and Schmidt assume that people are inequity averse, i.e., people dislike outcomes where others are better off than them (envy) or outcomes where they are better off than others (guilt). To model this, they propose the following utility function for dyadic interactions:

U_k(x_k, x_l) = x_k − α_k max{x_l − x_k, 0} − β_k max{x_k − x_l, 0}    (1)

with 0 ≤ β_i ≤ α_i and β_i ≤ 1, and where x_i, i ∈ {k, l}, are the payoffs for the players. Notice that the parameter α captures aversion to disadvantageous inequality (envy), whereas the parameter β captures aversion to advantageous inequality (guilt).
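To illustrate equation (1), the following short sketch (ours, not from the paper) computes the Fehr and Schmidt utility; the parameter values are made up for illustration only.

```python
def fehr_schmidt_utility(x_k, x_l, alpha, beta):
    """Eq. (1): own payoff minus an envy penalty and a guilt penalty.

    alpha weights disadvantageous inequality (envy, active when x_l > x_k);
    beta weights advantageous inequality (guilt, active when x_k > x_l);
    the model requires 0 <= beta <= alpha and beta <= 1.
    """
    envy = alpha * max(x_l - x_k, 0.0)
    guilt = beta * max(x_k - x_l, 0.0)
    return x_k - envy - guilt

# alpha = 0.8 and beta = 0.4 are made-up values, not estimates from the paper.
print(fehr_schmidt_utility(2.0, 8.0, 0.8, 0.4))   # 2 - 0.8*6 = -2.8 (envy bites)
print(fehr_schmidt_utility(8.0, 2.0, 0.8, 0.4))   # 8 - 0.4*6 =  5.6 (guilt bites)
print(fehr_schmidt_utility(5.0, 5.0, 0.8, 0.4))   # 5.0 (equal split, no penalty)
```

Under these illustrative parameters, a responder facing a (2, 8) split gets higher utility from rejecting (both earn 0, so utility 0) than from accepting (utility −2.8), which is how the model accounts for the rejection of low ultimatum offers.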
Nass, Fogg, and Moon [22] showed that machines that were perceived to be teammates were rated more positively than machines that were not. However, despite people's tendency to treat machines like social actors, recent research suggests that people still make important distinctions between machines and humans. Specifically, this evidence reveals that people can reach different decisions and show different patterns of brain activation with machines in the exact same decision making tasks, for the exact same financial incentives, when compared to humans [6]-[10]. For instance, Gallagher, Anthony, Roepstorff, and Frith [6] showed that when people played the rock-paper-scissors game with a human there was activation of the medial prefrontal cortex, a region of the brain that had previously been implicated in mentalizing (i.e., inferring others' beliefs, desires, and intentions); however, no such activation occurred when people engaged with a machine that followed a known predefined algorithm. In another study, Sanfey, Rilling, Aronson, Nystrom, and Cohen [10] showed that, when receiving unfair offers in the ultimatum game, people showed stronger activation of the bilateral anterior insula – a region associated with the experience of negative emotions – when engaging with humans, when compared to machines. This evidence thus suggests that people experienced less emotion and spent less effort inferring mental states with machines than with humans.

These findings are compatible with research showing that people perceive less mind in machines than in humans [23]. Denying mind to others, or perceiving inferior mental ability in them, is in turn known to lead to discrimination [24]. In the context of human-machine interaction, Blascovich et al. [4] also propose that the lower the perceived mental ability of a machine, the less likely it is to influence humans.

C. Overview of Approach

We ran an experiment where participants engaged in the ultimatum game and a modified version of the dictator game with human vs. computer counterparts. This experiment is described in Section II. The results allowed us to understand whether people were showing a bias in favor of humans, which we expected given the findings presented above. In Section III, we fit this experimental data to Fehr and Schmidt's inequity aversion model. We calculate separate parameters for envy and guilt according to whether the counterpart is a human or a computer. We then compare these parameters to understand the nature of this bias.
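The paper's actual fitting procedure is presented in Section III. Purely as a sketch of the idea of estimating separate envy and guilt parameters per counterpart type, one possible approach is a maximum-likelihood grid search under a logistic choice rule; the choice rule, sensitivity parameter lam, data format, and grid below are all our assumptions, not the authors' method.

```python
import math
from itertools import product

def fehr_schmidt_utility(x_k, x_l, alpha, beta):
    # Eq. (1): own payoff minus envy and guilt penalties.
    return x_k - alpha * max(x_l - x_k, 0.0) - beta * max(x_k - x_l, 0.0)

def fit_envy_guilt(trials, lam=1.0):
    """Grid-search maximum-likelihood estimate of (alpha, beta).

    trials: (own_payoff, other_payoff, accepted) responder decisions.
    Assumed choice rule (not from the paper): P(accept) =
    sigmoid(lam * U_accept), since rejecting yields zero for both players.
    """
    best, best_ll = None, -math.inf
    grid = [i / 20.0 for i in range(41)]            # 0.00, 0.05, ..., 2.00
    for alpha, beta in product(grid, grid):
        if not (beta <= alpha and beta <= 1.0):     # Fehr-Schmidt constraints
            continue
        ll = 0.0
        for x_k, x_l, accepted in trials:
            u = fehr_schmidt_utility(x_k, x_l, alpha, beta)
            p = 1.0 / (1.0 + math.exp(-lam * u))    # P(accept)
            p = min(max(p, 1e-9), 1.0 - 1e-9)       # guard against log(0)
            ll += math.log(p if accepted else 1.0 - p)
        if ll > best_ll:
            best, best_ll = (alpha, beta), ll
    return best

# Made-up decisions, one list per counterpart type; comparing the two fits
# mirrors the human-vs-computer comparison of envy and guilt parameters.
human_trials   = [(2, 8, False), (4, 6, True), (8, 2, True), (5, 5, True)]
machine_trials = [(2, 8, True),  (4, 6, True), (8, 2, True), (5, 5, True)]
print(fit_envy_guilt(human_trials))    # e.g., higher alpha if low offers rejected
print(fit_envy_guilt(machine_trials))
```

In the paper's setup, guilt would be constrained mostly by decisions made when the outcome favors the participant (as in the dictator game); the grid search above is only the simplest instance of the comparison logic.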