FAIRNESS & ALTRUISM: A BRIEF SURVEY OF SOME UNFORTUNATE APPARENT CONSEQUENCES OF EVOLUTIONARY PROCESSES, AND A DISCUSSION OF ONE SOLUTION

Prepared by Drew Schroeder for Phil-152, December 2006.

I. FAIRNESS¹

Divide the Dollar. Two players are competing for a reward of $90. Each is told to secretly write down a fraction on a sheet of paper. If the fractions sum to more than 1, neither player gets anything. If they sum to 1 or less, each player gets the share of the $90 corresponding to the fraction she wrote down.

Define an evolutionarily stable strategy (ESS) as a strategy such that, if everyone followed it, an individual or small group of individuals who adopted any different strategy would do worse. The intuitive idea, in biological terms, is this: an ESS is a strategy such that, if it's dominant in a population, any mutant that arises will do worse. The strategy will therefore be very resistant to change by evolutionary processes.

Now, what is an ESS in Divide the Dollar? Not surprisingly, writing down 1/2 is an ESS. If everyone is writing down 1/2, a mutant who writes more than 1/2 will never get anything, while those writing 1/2 will still usually get $45 (since they'll usually be playing against others writing 1/2). A mutant who writes less than 1/2 will get something, but she'll always get less than those writing 1/2. So, either way, a mutant does worse than those writing 1/2, and writing 1/2 is an ESS.

There are other ESSs, however. Consider a population in which half write 2/3 and half write 1/3. (If you want to think of this as a single strategy, you can either think of it as writing each fraction half the time or – to make things simpler – as flipping a coin at birth and subsequently always writing either 2/3 or 1/3, depending on what came up.) The expected payoff of writing 1/3 in this population is of course $30: you'll always get 1/3 of $90. The expected payoff of writing 2/3 is also $30, since half the time you'll get nothing (when you play a 2/3-er) and half the time you'll get $60 (when you play a 1/3-er).

Now consider some mutant. Suppose she writes down a fraction less than 1/3. Then she'll always get what she writes down times $90, but her expected payoff will be less than $30. So she'll do worse. Suppose she writes down a fraction higher than 2/3. Then she'll never get anything: expected payoff = $0. Finally, suppose she writes down some fraction, N, between 1/3 and 2/3. She'll get nothing when her opponent writes 2/3, and she'll get N × $90 when her opponent writes 1/3. So her expected payoff will be 0.5 × N × $90 = N × $45, and since N is less than 2/3, that is less than 2/3 × $45 = $30. Thus, no matter what she does, a mutant will have an expected payoff of less than $30. She'll do worse than a 1/3-er or a 2/3-er.

It turns out that there are infinitely many ESSs: for any two fractions that add up to 1, there's an ESS that has each played some fraction of the time. (For 4/5 and 1/5, for example, it's writing 4/5 75% of the time and 1/5 25% of the time.)

The problem, of course, is that all of the non-1/2 ESSs are inefficient. That is, the average payoff is less than it could be; part of the pie is being wasted. If everyone wrote 1/2, everyone would do better. This is crucial – we're not talking about a case where some people could do better at the expense of others, or even a case where some people could do better without cost to others. This is a case where everyone would be better off with a different strategy.
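The payoff arithmetic here is easy to check numerically. Below is a minimal Python sketch (my own illustration: the $90 game and the half-1/3, half-2/3 population are from the text, but the function names are mine) which computes a strategy's expected payoff in that population and confirms that the incumbents earn $30 while every mutant fraction earns strictly less.

    from fractions import Fraction as F

    REWARD = 90

    def payoff(mine, theirs):
        # Divide-the-Dollar payoff: my share of the $90 if the claims are
        # feasible (sum to 1 or less), otherwise nothing.
        return mine * REWARD if mine + theirs <= 1 else 0

    def expected_payoff(n, population=((F(1, 3), F(1, 2)), (F(2, 3), F(1, 2)))):
        # Average payoff of writing n, weighting each opponent type by its
        # frequency. Exact fractions avoid floating-point trouble at sum = 1.
        return sum(freq * payoff(n, claim) for claim, freq in population)

    print(expected_payoff(F(1, 3)), expected_payoff(F(2, 3)))  # 30 30
    for n in (F(1, 5), F(2, 5), F(1, 2), F(3, 5), F(4, 5)):
        print(n, expected_payoff(n))  # every mutant earns less than 30

Swapping in a different population tuple, e.g. 75% write-4/5 and 25% write-1/5, checks the other polymorphic equilibria mentioned above in the same way.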
Since these are ESSs, though, unilateral change (e.g. by mutation) can't get us out of the inefficient state. Any new strategy will do worse than the old ones, so evolution has led the population into a trap from which it cannot escape.

How big a problem is this, actually? We can run computer simulations. Start with a random assignment of strategies (of the sort "write N"). Have members of the population play against randomly chosen other members. Interpret the payoffs in terms of evolutionary fitness: if A has a higher payoff than B, then A's strategy is represented in proportionally higher numbers in the next generation. Repeat the procedure through many generations until an equilibrium is reached.

What happens? It turns out that it depends on the initial distribution of strategies. If we allow members of the population three strategies – write-1/2, write-2/3, and write-1/3 – then from about 74% of initial distributions the population evolves to the write-1/2 ESS. In about 26% of cases, however, we end up with half 2/3-ers and half 1/3-ers. (See the figure below.)

[Figure: a triangle (simplex) of population states. Each point in the triangle represents a particular distribution of strategies within the population; points closer to a given vertex represent higher proportions of that strategy. All initial distributions evolve to one of the three labeled equilibria. A, though, is unstable, since a small increase or decrease in 1/2-ers will quickly lead the population to either B or C. B and C are stable, since after any small change the population will return to B or C.]

This is not an insignificant fraction. If evolution worked this way, we should expect to find lots of inefficient, stable strategies out there – populations where everyone would be better off if they could all just change their strategies, but where any change by a single individual will leave her worse off.

What can we do to avoid these inefficient "traps"? Skyrms' simulations suggest two solutions. First, if we admit more possible strategies, we don't get rid of inefficiency, but we do minimize it. If, for example, we allow twenty strategies (write-1/20, write-2/20, ..., write-20/20), the efficient solution comes about only 57% of the time – but an additional 36% of the time the population gets stuck in the 9/20-11/20 or 8/20-12/20 "traps". These are indeed inefficient, but they're much less inefficient than the 1/3-2/3 "trap". 93% of the time, then, the population gets an average payoff of at least 80% of the pie.

The more interesting solution, however, is through correlation. Notice that if we have a mixed population of 1/3-ers, 1/2-ers, and 2/3-ers, the 1/2-ers do well when playing against 1/2-ers or 1/3-ers; they do badly against 2/3-ers. The 2/3-ers do badly against 2/3-ers and 1/2-ers, but well against 1/3-ers. (1/3-ers don't care who they play.) Suppose, then, that players are somewhat more likely to play against players who share their strategy than against players with different strategies. This won't affect 1/3-ers at all. 2/3-ers will suffer, since they'll be involved in 2/3 vs. 2/3 games more often. The 1/2-ers will benefit, since they'll play 1/2-ers more often. We'd expect, then, that correlated interaction would help us get to the write-1/2 equilibrium. What's surprising is how significant the effects are. If we introduce a correlation coefficient of 10%,² the efficient outcome arises about 91% of the time (instead of 74%). If the coefficient is 20%, the inefficient trap is virtually gone.
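To make the simulation procedure and the pairing rule concrete, here is a minimal Python sketch of the kind of experiment described above. It is my own reconstruction under stated assumptions (discrete replicator dynamics, a fixed number of generations, and the correlation rule of note 2, under which a player meets her own type with probability e + (1 − e) × her type's population share), not Skyrms's actual code; the percentages in the comments are the handout's figures, and exact basin sizes will vary with the dynamics and stopping rule.

    import random

    REWARD = 90
    STRATS = (1/3, 1/2, 2/3)   # the three claims allowed in the simulation

    def payoff(mine, theirs):
        return mine * REWARD if mine + theirs <= 1 + 1e-9 else 0.0

    def step(p, e):
        # One generation of discrete replicator dynamics: each strategy's
        # share grows in proportion to its expected payoff (fitness).
        fitness = []
        for i, s in enumerate(STRATS):
            # Correlation rule (note 2): an i-player meets her own type with
            # probability e + (1 - e) * p[i], other types with (1 - e) * p[j].
            meet = [e * (i == j) + (1 - e) * p[j] for j in range(len(STRATS))]
            fitness.append(sum(m * payoff(s, t) for m, t in zip(meet, STRATS)))
        mean = sum(pi * fi for pi, fi in zip(p, fitness))
        return [pi * fi / mean for pi, fi in zip(p, fitness)]

    def share_reaching_fair_division(trials=1000, e=0.0, generations=1000):
        # Fraction of random initial populations that end up at all-write-1/2.
        wins = 0
        for _ in range(trials):
            a, b = sorted(random.random() for _ in range(2))
            p = [a, b - a, 1 - b]        # uniform random point in the simplex
            for _ in range(generations):
                p = step(p, e)
            wins += p[1] > 0.99          # p[1] is the write-1/2 share
        return wins / trials

    print(share_reaching_fair_division(e=0.0))  # handout reports ~74%
    print(share_reaching_fair_division(e=0.1))  # handout reports ~91%

Note that the pairing rule reproduces the arithmetic of note 2: with e = 0.1 and a type at 20% of the population, the chance of meeting one's own type is 0.1 + 0.9 × 0.2 = 0.28, so non-like encounters fall from 80% to 72%.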
What's the take-home lesson? Evolution can sometimes lead us into, and get us stuck in, inefficient situations. One way to break out of such a trap is to ensure that like interacts with like.

¹ The material in this section very closely tracks the discussion in Skyrms 1996 (ch. 1).
² Formally, this means that, for a given player A, 10% of the encounters she would have had with different strategies come instead with like strategies. That is, if As make up 20% of the population, then given random pairing they'd meet non-As 80% of the time; with a correlation coefficient of 10%, they meet non-As only 72% of the time.

II. ALTRUISM

Define biologically altruistic behavior as behavior which increases the fitness of others, at cost to one's own fitness. This is not the way we ordinarily use the term 'altruistic'. When we call some act altruistic, we're usually making a claim about the agent's psychology: that she acted with the intention of helping someone else, at perceived cost to herself. Notice that a biologically altruistic act need not be psychologically altruistic: a worm presumably has no intentions at all, and so couldn't possibly be a psychological altruist, though it could be a biological altruist. Also, a psychologically altruistic act need not be biologically altruistic: you might intend to spend your afternoons selflessly helping the homeless, but if society rewards you in some way that increases your fitness (e.g. by featuring you on a "most eligible bachelors" T.V.