
Punishment As Revenge, Not Only For Inequity Aversion

Donald T. Wargo Department of Economics Temple University

Department of Economics DETU Working Paper 18-03 February 2018

1301 Cecil B. Moore Avenue, Philadelphia, PA 19122 http://www.cla.temple.edu/economics/faculty/detu-working-paper-series/

Punishment As Revenge, Not Only For Inequity Aversion

Donald T. Wargo

Department of Economics, Temple University, Philadelphia, PA

Correspondence at: [email protected]

Abstract

This paper shows that punishment arises not only from inequity aversion but also from revenge itself (a desire for reciprocity). We perform face-to-face revenge/punishment experiments with students randomly recruited at a major U.S. university. Our results show that punishment is motivated by both inequity aversion and revenge.

JEL Classification: D91 (Role and Effects of Psychological, Emotional, Social, and Cognitive Factors on Decision Making)

1. Introduction and Literature Review

Why people punish is an interesting and important question. Following the seminal paper by Clutton-Brock and Parker (1995), punishment refers to an action that imposes a cost on the punished person and occurs when an individual reduces its own current payoff in order to decrease the payoff of the punished person. Herrmann et al. (2008) likewise point out that costly (altruistic) punishment means paying a cost so that another individual incurs a cost, and that it has been advanced as a key mechanism for explaining cooperative behavior in human societies.

Why do people punish even at a cost?

Firstly, inequity aversion leads to punishment. Based on experimental results from 15 diverse populations, Henrich et al. (2006) show that increasingly unequal behavior leads to costly punishment and that costly punishment positively covaries with altruistic behavior.

By isolating egalitarian motives in an experiment without cooperation norms, Dawes et al. (2007) present evidence that individuals incur costs to reduce or augment others' incomes in order to create equal divisions of wealth. Punishment may thus be initiated by egalitarian motives. Johnson et al. (2007) show that the more individuals care about equality, the more willing they are to punish free-riders in public goods games. They argue that egalitarian motives drive altruistic punishment, which significantly affects cooperation.

Raihani and McAuliffe (2012) argue that human punishment is motivated by inequity aversion, not by a desire for reciprocity. Using an experimental approach to ask which motive drives punishment, they show that humans punish cheats only when cheating produces disadvantageous inequity, and they find no evidence for reciprocity. This finding challenges the notion that punishment is motivated by a simple desire to reciprocally harm cheats, and it shows that victims compare their own payoffs with those of partners when making punishment decisions.

Secondly, punishment helps to ensure cooperation. A large body of experimental evidence shows that many humans are willing to punish cheats even though they must pay a cost to do so. In public goods interactions, punishment offers an effective mechanism to ensure cooperation, and altruistic punishment is highly correlated with altruistic contribution. Fehr and Gächter (2000) present empirical evidence of a widespread willingness among cooperators to punish free-riders. They show that punishment occurs even when it is costly and the punisher receives no material benefit, and that the more free-riders deviate from the cooperation levels of the cooperators, the more heavily they are punished. Potential free-riders can therefore avoid, or at least reduce, punishment by increasing their cooperation level. Fehr and Gächter (2002) also show experimentally that the altruistic punishment of defectors is an important mechanism for cooperation: altruistic punishment fosters cooperation, and cooperation breaks down if altruistic punishment is ruled out. Altruistic punishment may be driven by negative emotions toward defectors.

Thirdly, people can obtain pleasure and satisfaction from punishment. In her article 'The Pleasure of Punishment', Beckman (2004) reports that a reward center in the brain activates when people punish cheaters, even at a cost. Brain scans reveal that the punisher's caudate nucleus revs up while he or she is deciding on a punishment; the caudate nucleus is a brain region that becomes active when people feel satisfaction or pleasure, and it fires more intensely when one pays more to punish. De Quervain et al. (2004) use H₂¹⁵O positron emission tomography to examine the neural basis of altruistic punishment of defectors in an economic exchange. Effective punishment activates the dorsal striatum, a region implicated in processing rewards that accrue from goal-directed actions, and stronger dorsal striatum activation correlates with greater willingness to incur costs in order to punish. They argue that punishing defectors can produce satisfaction.

Fourthly, Fliessbach et al. (2007) point out that a desire to maximize relative payoffs instigates punishment.

People might also punish to enforce submission, normative conformity, and morality. In animal societies, Clutton-Brock and Parker (1995) contend, punishing strategies are used to establish and maintain dominance relationships, to discourage parasites and cheats, to discipline offspring or prospective sexual partners, and to maintain cooperative behavior.

Henrich (2004) shows that normative conformity, a desire and expectation to behave as all others do, may lead to the punishment of all deviators, cooperators and free-riders alike. Falk et al. (2005) also show that norm enforcement can instigate punishment: in a design where punishment costs equal the amount by which punishment reduces the punished player's income, so that punishing cannot reduce inequality between punisher and punished, individuals still punish. In his article 'The Moral Instinct', Pinker (2008) wrote: "people feel that those who commit immoral acts deserve to be punished. Not only is it allowable to inflict pain on a person who has broken a moral rule; it is wrong not to, to 'let them get away with it.' People are thus untroubled in inviting divine retribution or the power of the state to harm other people they deem immoral."

Is punishment effective?

Considerable evidence shows that punishment is effective and leads to cooperation in some experiments, whereas in other experiments punishment does not increase the level of cooperation but instead reduces it and lowers subjects' average and overall payoffs.

Fehr and Schmidt (1999) show that a large majority of selfish players can be forced by a minority of fair-minded players to cooperate fully in a public goods game with punishment. Fehr et al. (2002) also present empirical evidence that punishing non-cooperators can lead to almost universal cooperation in circumstances in which purely self-interested behavior would cause a complete breakdown of cooperation without punishment. Fehr and Rockenbach (2003) show experimentally that altruistic cooperation can be almost completely destroyed by selfish or greedy sanctions, whereas fair sanctions leave altruistic cooperation intact. Their findings challenge proximate and ultimate theories of human cooperation that neglect the distinction between fair and unfair sanctions, and they note that another series of trust-game experiments has shown that a threat of punishment can decrease the level of cooperation. Hauert et al. (2007) show that punishment of defectors can pave the way for the emergence and establishment of cooperative behavior if subjects have the option to stand aside and abstain from the joint endeavor; compulsory rather than voluntary joint enterprises are less likely to lead to cooperative behavior. Gächter et al. (2008) show that when the increased gain from cooperation outweighs the negligible cost of punishment, punishment can increase cooperation and make groups and individuals better off in the long run.

Other papers show that punishment is less effective, or not effective at all. De Quervain et al. (2004) examine the neural basis of altruistic punishment of defectors in an economic exchange using positron emission tomography. Subjects could choose symbolic or effective punishment, and only effective punishment reduced the defector's economic payoff; symbolic punishment did not. Wu et al. (2009) conduct two repeated two-player Prisoner's Dilemma experiments with university students from Beijing as participants. In their control experiments there is no option of costly punishment; they show that the cooperation level in the treatment experiments with costly punishment as an option either actually decreased or stayed the same. This contrasts with results from similar experiments with university students from Boston, which find that the level of cooperation does increase with costly punishment; they suggest that differences in cultural attitudes might explain the contrast. Dreber et al. (2008) show that costly punishment does not increase the average payoff, even though it increases the amount of cooperation. What is more, the use of costly punishment correlates strongly and negatively with total payoff: the players with the highest payoffs tend not to use costly punishment. They assert that winners do not punish because costly punishment is maladaptive in cooperation games. Herrmann et al. (2008) conduct public goods experiments in 16 comparable participant pools around the world and find that antisocial punishment was strong enough to remove the cooperation-enhancing effect of punishment in some participant pools. They contend that antisocial punishment is strongly associated with weak norms of civic cooperation and a weak rule of law, whereas strong social norms of cooperation help lead to socially beneficial punishment.

Ohtsuki et al. (2009) find that costly punishment typically reduces the average payoff and leads to an efficient equilibrium only in a small parameter region; in most cases the population does better without costly punishment. They argue that withholding help from defectors, rather than punishing them, is a more efficient strategy of indirect reciprocity.

Rand et al. (2009), comparing public goods games followed by punishment, reward, or both in the setting of truly repeated games in which player identities persist from round to round, conclude that reward is as effective as punishment for maintaining public cooperation and leads to higher total earnings. When reward and punishment are both available, punishment has no effect on contributions and leads to lower payoffs; they contend that reward outperforms punishment in repeated public goods games. Rand, Ohtsuki and Nowak (2009) conclude, from computer simulations of evolutionary dynamics in populations of finite size, that costly punishment does not promote the evolution of cooperation. In the article 'Inside the Mind of a Psychopath', Kiehl and Buckholtz (2010) write: "Michael Caldwell, a psychologist at the Mendota Juvenile Treatment Center in Madison, uses intensive one-on-one therapy known as decompression aimed at ending the vicious cycle in which punishment for bad behavior inspires more bad behavior, which is in turn punished." On this view, punishing bad behavior inspires more bad behavior, and punishment is not effective.

2. Research Methods

Raihani and McAuliffe (2012) conducted their experiment on Amazon Mechanical Turk (MTurk), a crowdsourcing system in which tasks are distributed to a population of thousands of anonymous workers for completion. The system is increasingly popular with researchers and developers. Ross et al. (2010) survey MTurk workers about their demographic make-up and usage behavior and find that this population is diverse across several notable demographic dimensions, such as age, gender, and income, but is not precisely representative of the U.S. as a whole. Indeed, certain homogeneous aspects of the population, such as education level and nationality, may limit the appropriateness of Turkers as a target community for some interventions or research areas. An awareness of the demographics and behaviors of MTurk workers is important for understanding the capabilities and potential side effects of using this system.

Most of the players in Raihani and McAuliffe's (2012) Mechanical Turk experiment were low-income workers from India, who cannot represent the general population; the behavioral results are therefore not sufficiently representative.

This paper conducts a face-to-face revenge game with players randomly recruited from the student population at a highly diverse university. The more representative, face-to-face experiment can capture behavior that an online experiment cannot.

Economic Experiment: Revenge Game

The revenge game is a game played in economic experiments in which two players interact to decide how to divide a sum of money that is given to them. There are three different treatments of the game (A-C):

• In treatment A, the first player (P1) is given $0.70 and the second (P2) is given $0.10.

• In treatment B, P1 is given $0.70 and P2 is given $0.30.

• In treatment C, P1 is given $0.70 and P2 is given $0.70.

The game consists of two moves.

In move one, P2 announces whether s/he wishes to "cheat" or not to cheat. "Cheating" consists of taking $0.20 of P1's money; not cheating consists of doing nothing. If P2 takes $0.20 of P1's money, P2 keeps it.

In move two, after P2 makes his/her move, P1 announces whether or not s/he wishes to punish P2. P1 can punish P2 by paying $0.10 to reduce P2's income by $0.30, or can do nothing. In the punishment phase, the experimenter takes away both the $0.30 and the $0.10 "cost of punishment." The game is played only once by each dyad.

The players' endowments initially, after P2 "cheats", and after P1 punishes P2's cheating are as follows:

Treatment    Initial Endowments    After P2 Cheats    After P1 Punishes
               P1       P2           P1      P2          P1      P2
    A        $0.70    $0.10        $0.50   $0.30       $0.40   $0.00
    B        $0.70    $0.30        $0.50   $0.50       $0.40   $0.20
    C        $0.70    $0.70        $0.50   $0.90       $0.40   $0.60
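The payoff table above follows mechanically from the game's rules. As a sketch (the constant and function names below are our own illustration, not part of the experimental protocol), the endowment transitions can be computed directly:

```python
# Sketch of the revenge-game payoff transitions described above.
# Constant and function names are our own, not the experiment's.

CHEAT_TRANSFER = 0.20  # P2 takes $0.20 from P1 and keeps it
PUNISH_COST = 0.10     # P1 pays $0.10 ...
PUNISH_LOSS = 0.30     # ... to reduce P2's income by $0.30

TREATMENTS = {"A": (0.70, 0.10), "B": (0.70, 0.30), "C": (0.70, 0.70)}

def play(treatment, cheat, punish):
    """Return the (P1, P2) payoffs after both moves of one dyad's game."""
    p1, p2 = TREATMENTS[treatment]
    if cheat:                  # move one: P2 takes $0.20 from P1
        p1 -= CHEAT_TRANSFER
        p2 += CHEAT_TRANSFER
    if punish:                 # move two: the experimenter removes
        p1 -= PUNISH_COST      # $0.10 from P1 and $0.30 from P2
        p2 -= PUNISH_LOSS
    return round(p1, 2), round(p2, 2)

# Reproduce the "After P1 Punishes" column of the table:
for t in sorted(TREATMENTS):
    print(t, play(t, cheat=True, punish=True))
```

Running the loop reproduces the final column of the table for each treatment.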

The players in the revenge/punishment game were randomly recruited from the student population at Temple University, which has 43,000 students from 55 countries and is one of the most diverse universities in the United States. Over 130 pairs were randomly recruited (260+ subjects), and the experimenters made sure the members of each dyad did not know each other. The experiments were conducted with the subjects facing each other in person. The treatment (A, B, or C) was randomly chosen, with the constraint that the final sample included approximately equal numbers of the three treatments.

3. Results and Analysis

The primary question is whether conditioning on P2 having cheated significantly alters the probability of punishment and, more importantly, whether conditioning on cheating has a stronger effect in treatment group C than in treatment groups A or B. Formally, the test is whether the statistic:

Prob(Punish=1|Cheat=1) - Prob(Punish=1|Cheat=0)

is (a) significantly different from 0 and (b) larger for treatment C than for treatments A or B.
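Given each dyad's cheat and punish outcomes coded as 0/1, this statistic and a standard two-proportion z-test for it can be computed as follows (a sketch with illustrative data, not the experiment's raw counts):

```python
from math import sqrt

def punishment_statistic(cheat, punish):
    """Prob(Punish=1 | Cheat=1) - Prob(Punish=1 | Cheat=0),
    given parallel 0/1 lists with one entry per dyad."""
    cheated = [p for c, p in zip(cheat, punish) if c == 1]
    honest  = [p for c, p in zip(cheat, punish) if c == 0]
    return sum(cheated) / len(cheated) - sum(honest) / len(honest)

def two_prop_z(x1, n1, x0, n0):
    """z-statistic for testing whether the two punishment rates differ
    (x = number of dyads punished, n = number of dyads per condition)."""
    pooled = (x1 + x0) / (n1 + n0)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n0))
    return (x1 / n1 - x0 / n0) / se

# Illustrative data (NOT the paper's raw counts): 8 hypothetical dyads.
cheat  = [1, 1, 1, 1, 0, 0, 0, 0]
punish = [1, 1, 1, 0, 1, 0, 0, 0]
print(punishment_statistic(cheat, punish))  # 0.75 - 0.25 = 0.5
```

With the per-condition counts in hand, `two_prop_z` gives the significance test for part (a); comparing the statistic across treatments addresses part (b).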

Our results from the 130+ face-to-face dyads are as follows:

Probability of Punishment: Dr. Wargo Results

                 Treatment A                 Treatment B                 Treatment C
          Cheat=0  Cheat=1  Diff      Cheat=0  Cheat=1  Diff      Cheat=0  Cheat=1  Diff
            0%      51.1%   51.1%*     27.7%    41.9%   14.15%     0.079%   75.6%   67.4%*

(* statistically significant)

The most striking result is that in treatment C, fewer than 1% of P1s punished when P2 did not cheat, but 75.6% punished when P2 cheated; the difference is statistically significant.

However, it is also very interesting that in treatment B, where after cheating each party ended up with exactly the same amount ($0.50), P1 still punished 42% of the time when P2 cheated.

We contrast these results with the same experiment conducted on Amazon's Mechanical Turk, where 560 participants (self-selected) were randomly assigned to 280 dyads. The subjects then played the revenge/punishment game anonymously, each at his or her own computer. The results from this anonymous, computerized version are as follows:

Probability of Punishment: Mechanical Turk Experiment

                 Treatment A                 Treatment B                 Treatment C
          Cheat=0  Cheat=1  Diff      Cheat=0  Cheat=1  Diff      Cheat=0  Cheat=1  Diff
            10%      15%     5%*        12%      15%     3%         12%      45%    33%*

(* statistically significant)

Note that in treatment C, a significantly larger proportion of P1s punished given cheating in the face-to-face experiment than in the Mechanical Turk experiment (75.6% vs. 45%). This is consistent with the extant literature, which shows higher levels of punishment in face-to-face games than in computerized games.

Also, note that in treatment A, face-to-face, not a single P1 punished if P2 did not cheat, likely because P1 was initially endowed with significantly more money. This is also virtually true of treatment C when P2 did not cheat in the face-to-face version.

Finally, note the high proportion of punishment given P2’s cheating in the face-to-face version of the experiment, no matter what the relative endowments of the two players become after P2 cheated.

Conclusions:

We feel there are several main conclusions to be drawn from the data. The first is that the motivation to punish is a very important factor in the economic decisions of player P1. In treatment B it is even more important than inequity aversion: players P1 and P2 each end up with $0.50 after cheating, yet a higher proportion of P1s punish when P2 cheats than when he/she does not. This is explained by a motive other than inequity aversion, namely a desire for reciprocity (i.e., revenge itself). This conclusion is not consistent with Raihani and McAuliffe (2012). It is also confirmed by the result from treatment A, in which P1 punishes with probability 51.1% when P2 cheats even though the cheat moves the endowments to a fairer split (from $0.70:$0.10 to $0.50:$0.30). This cannot be explained by inequity aversion.
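This reasoning can be illustrated with the inequity-aversion utility function of Fehr and Schmidt (1999), cited above. The sketch below is our own, with illustrative parameter values (alpha = 1, beta = 0.5) rather than estimates:

```python
def fs_utility(own, other, alpha=1.0, beta=0.5):
    """Fehr-Schmidt (1999) utility: material payoff minus penalties for
    disadvantageous (alpha) and advantageous (beta) inequity."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

# Treatment B after P2 cheats: payoffs are ($0.50, $0.50).
# A purely inequity-averse P1 prefers NOT to punish, since punishing
# turns the equal split into an unequal $0.40/$0.20.
print(fs_utility(0.50, 0.50) > fs_utility(0.40, 0.20))  # True

# Treatment C after P2 cheats: payoffs are ($0.50, $0.90).
# Here punishing ($0.40/$0.60) reduces disadvantageous inequity,
# so inequity aversion alone does predict punishment.
print(fs_utility(0.40, 0.60) > fs_utility(0.50, 0.90))  # True
```

Under these illustrative parameters, pure inequity aversion cannot rationalize punishment in treatment B; this is the sense in which the punishment observed there indicates a reciprocity motive.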

The second conclusion is that in treatment C punishment is motivated by inequity aversion. When P2 does not cheat, P1 does not punish because the endowments are equal. Compared with the no-cheat case, however, P1 punishes the cheater with a high probability (a statistically significant difference of 67.4 percentage points) because cheating makes the endowments deviate from equity. This conclusion is consistent with Raihani and McAuliffe (2012).

The third conclusion is that, consistent with the literature, the proportion of P1s punishing, given P2's cheating, is dramatically higher in the face-to-face experiment than in the Mechanical Turk experiment.

The fourth conclusion is that in the face-to-face experiment, P1 appears strongly inhibited from punishing in treatments A and C if P2 does not cheat. This is not true of the Mechanical Turk experiment.


Finally, we believe there are other interesting insights and ideas for further research to be gleaned from our experiment.

References

Anna Dreber, David G. Rand, Drew Fudenberg and Martin A. Nowak (2008), ‘Winners don’t punish’, Nature 452, 348-351

Armin Falk, Ernst Fehr and Urs Fischbacher (2005), ‘Driving Forces Behind Informal Sanctions’, Econometrica 73, 2017–2030

Benedikt Herrmann, Christian Thöni and Simon Gächter (2008), ‘Antisocial Punishment Across Societies’, Science 319, 1363-1366

Christoph Hauert, Arne Traulsen, Hannelore Brandt, Martin A. Nowak and Karl Sigmund (2007), ‘Via Freedom to Coercion: The Emergence of Costly Punishment’, Science 316, 1905

Christopher T. Dawes, James H. Fowler, Tim Johnson, Richard McElreath and Oleg Smirnov (2007), ‘Egalitarian motives in humans’, Nature 446, 794–796

David G. Rand, Anna Dreber, Tore Ellingsen, Drew Fudenberg and Martin A. Nowak (2009), ‘Positive Interactions Promote Public Cooperation’, Science 325, 1272-1275

David G. Rand, Hisashi Ohtsuki and Martin A. Nowak (2009), ‘Direct reciprocity with costly punishment: Generous tit-for-tat prevails’, Journal of Theoretical Biology 256, 45-57

Dominique J.-F. de Quervain (2004), ‘The Neural Basis of Altruistic Punishment’, Science 305, 1254-1258

Ernst Fehr and Klaus M. Schmidt (1999), ‘A Theory of Fairness, Competition, and Cooperation’, The Quarterly Journal of Economics 114(3), 817-868

Ernst Fehr and Simon Gächter (2000), ‘Cooperation and Punishment in Public Goods Experiments’, The American Economic Review, Vol. 90, No. 4 ,980-994

Ernst Fehr and Simon Gächter (2002), ‘Altruistic punishment in humans’, Nature 415, 137-140

Ernst Fehr, Urs Fischbacher and Simon Gächter (2002), ‘Strong Reciprocity, Human Cooperation, and the Enforcement of Social Norms’, Human Nature 13, no. 1

Ernst Fehr and B. Rockenbach (2003), ‘Detrimental effects of sanctions on human altruism’, Nature 422, 137-140

Hisashi Ohtsuki, Yoh Iwasa and Martin A. Nowak (2009), ‘Indirect Reciprocity Provides Only A Narrow Margin of Efficiency For Costly Punishment’, Nature 457, 79-82

Jia-Jia Wu, Bo-Yu Zhang, Zhen-Xing Zhou, Qiao-Qiao He, Xiu-Deng Zheng, Ross Cressman and Yi Tao (2009), ‘Costly punishment does not always increase cooperation’, PNAS 106, no. 41

Joseph Henrich (2004), ‘Cultural group selection, coevolutionary processes and large-scale cooperation’, J. Econ. Behav. Organ. 53, 3-35

Joseph Henrich, Richard McElreath, Abigail Barr, Jean Ensminger, Clark Barrett, Alexander Bolyanatz, Juan Camilo Cardenas, Michael Gurven, Edwins Gwako, Natalie Henrich, Carolyn Lesorogol, Frank Marlowe, David Tracer and John Ziker (2006), ‘Costly Punishment Across Human Societies’, Science 312, 1767

Kent A. Kiehl and Joshua W. Buckholtz (2010), ‘Inside the Mind of a Psychopath’, Scientific American Mind 21, 22 - 29

K. Fliessbach, B. Weber, P. Trautner, T. Dohmen, U. Sunde, C. E. Elger and A. Falk (2007), ‘Social Comparison Affects Reward-Related Brain Activity in the Human Ventral Striatum’, Science 318, 1305-1308

Mary Beckman (2004), ‘The Pleasure of Punishment’, ScienceNow, published online 27 Aug 2004

N. J. Raihani and K. McAuliffe (2012), ‘Human punishment is motivated by inequity aversion, not a desire for reciprocity’, Biology Letters, doi:10.1098/rsbl.2012.0470

Ross, J., Irani, L., Silberman, M. Six, Zaldivar, A., and Tomlinson, B. (2010), ‘Who are the Crowdworkers? Shifting Demographics in Amazon Mechanical Turk’, CHI EA 2010, 2863-2872

Simon Gächter, Elke Renner and Martin Sefton (2008), ‘The Long-Run Benefits of Punishment’, Science 322, 1510

Steven Pinker (2008), ‘The Moral Instinct’, The New York Times, published 13 Jan 2008.

T.H. Clutton-Brock and G.A. Parker (1995), ‘Punishment in Animal Societies’, Nature 373, 209-215


Appendix 1

Inequity Aversion Experiment [Final Version] Script for Experimenter

I am assisting my economics professor in conducting a serious experiment about economic decision-making. Are you willing to participate in this short experiment? If you participate, you will receive some compensation for your participation.

OK? Good. Here are the rules for the experiment. Each of you will be given a sum of money to participate in the experiment. You may keep the money you still possess at the end of the experiment as compensation for participating in the experiment.

[Say to one player] You will be designated Player 1. [Say to the other player] You will be designated Player 2. I would like you to introduce yourselves to each other and shake hands. I will now give some money to each of you.

Player 2, if he/she decides to, can take exactly $0.20 from Player 1 (no more and no less). The $0.20 then belongs to Player 2. OK? Do you have any questions?

Now, just to make sure I have explained the experiment clearly, Player 2 would you please repeat the decision choice you have. Good, that is correct.

To make sure I have explained the experiment clearly, Player 1 would you please repeat the decision choice you have. Good, that is correct.

OK. Player 2, whenever you are ready, you can decide to take $0.20 from Player 1 and simply take the money from his/her pile, or tell me you will not take the $0.20. [Wait for Player 2 to decide and act or not act.]

In Round 2, Player 1, if he/she decides, can “punish” Player 2 by stating that Player 2 must lose $0.30. (I, the experimenter, will take back the $0.30 from Player 2.) However, if you decide to cause Player 2 to lose $0.30, it will “cost” you $0.10 and I will take $0.10 from you.

So, Player 1, do you want to cause Player 2 to lose $0.30 at a cost of $0.10 to you? [Wait for Player 1 to decide and then take the money from each player’s pile if that is what Player 1 decides.]

OK. That completes the experiment. I want to thank you for your participation and, as I said before, you may keep the money you still possess.
