Rock Paper Scissors

▶ Consider Rock Paper Scissors:

         R        P        S
   R    0, 0    -1, 1     1, -1
   P    1, -1    0, 0    -1, 1
   S   -1, 1     1, -1    0, 0

▶ What is the expected payoff of (1/3, 1/3, 1/3) against (1/2, 0, 1/2)?
▶ It is 0.
▶ Note that the expected payout is a weighted average of payouts.
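A quick check of the weighted-average claim in code (a minimal sketch in Python; numpy and the name expected_payoff are my own choices, not from the slides): the expected payoff of mixed strategies p and q is the probability-weighted average p^T A q of the pure-strategy payoffs.

import numpy as np

# Row player's payoff matrix for Rock Paper Scissors (rows/columns: R, P, S).
A = np.array([
    [ 0, -1,  1],
    [ 1,  0, -1],
    [-1,  1,  0],
])

def expected_payoff(p, q, payoffs=A):
    """Expected payoff to the row player when the row player mixes with p and
    the column player mixes with q: the weighted average p^T A q of the
    pure-strategy payoffs."""
    p, q = np.asarray(p), np.asarray(q)
    return float(p @ payoffs @ q)

print(expected_payoff([1/3, 1/3, 1/3], [1/2, 0, 1/2]))  # 0.0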


Rock Paper Scissors

▶ Consider Rock Paper Scissors:

         R        P        S
   R    0, 0    -1, 1     1, -1
   P    1, -1    0, 0    -1, 1
   S   -1, 1     1, -1    0, 0

▶ How can we raise the expected payout against (1/2, 0, 1/2)?
▶ By removing pure strategies that lower the average:
▶ u((1, 0, 0), (1/2, 0, 1/2)) = 1/2
▶ u((0, 1, 0), (1/2, 0, 1/2)) = 0
▶ u((0, 0, 1), (1/2, 0, 1/2)) = −1/2
▶ The best strategy against (1/2, 0, 1/2) is to play rock.
▶ A best response to a strategy will consist of pure strategies that all have the same (high) expected payout.
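The same idea in code (a sketch, again assuming numpy; the helper names are mine): compute each pure strategy's expected payoff against (1/2, 0, 1/2) and keep only the maximizers, which is exactly the set of pure strategies a best response may use.

import numpy as np

# Row player's payoff matrix for Rock Paper Scissors (rows/columns: R, P, S).
A = np.array([
    [ 0, -1,  1],
    [ 1,  0, -1],
    [-1,  1,  0],
])

def pure_payoffs_against(q, payoffs=A):
    """Expected payoff of each pure row strategy against the column mix q."""
    return payoffs @ np.asarray(q)

def best_response_support(q, payoffs=A, tol=1e-9):
    """Indices of the pure strategies achieving the maximum expected payoff;
    any mix over these (and only these) is a best response to q."""
    u = pure_payoffs_against(q, payoffs)
    return [i for i, ui in enumerate(u) if ui >= u.max() - tol]

q = [1/2, 0, 1/2]
print(pure_payoffs_against(q))   # [ 0.5  0.  -0.5] for rock, paper, scissors
print(best_response_support(q))  # [0] -> play rock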


Nash Equilibrium

▶ Mixed strategies (p1, ..., pn) are a Nash equilibrium if each pi is a best response to p−i.
▶ Each player asks "if the other players stuck with their strategies, am I better off mixing the ratio of my strategies?"
▶ If pi is a best response to p−i, the payouts of the pure strategies in pi are equal.
▶ Note that pure Nash equilibria are still Nash equilibria.
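A sketch of how this definition can be checked mechanically for a two-player game (Python with numpy; the function is hypothetical, not from the slides): (p, q) is a Nash equilibrium exactly when every pure strategy each player actually uses earns the maximum available payoff against the other player's mix.

import numpy as np

def is_nash_equilibrium(A, B, p, q, tol=1e-9):
    """Best-response check for a two-player game: A[i, j] is the row player's
    payoff and B[i, j] the column player's payoff when row plays i and column
    plays j.  Every pure strategy used in p (resp. q) must earn the maximum
    payoff against q (resp. p)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    row_payoffs = A @ q        # payoff of each pure row strategy against q
    col_payoffs = B.T @ p      # payoff of each pure column strategy against p
    row_ok = np.all(row_payoffs[p > tol] >= row_payoffs.max() - tol)
    col_ok = np.all(col_payoffs[q > tol] >= col_payoffs.max() - tol)
    return bool(row_ok and col_ok)

# Rock Paper Scissors is zero sum, so B = -A.
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
print(is_nash_equilibrium(A, -A, [1/3] * 3, [1/3] * 3))          # True
print(is_nash_equilibrium(A, -A, [1/2, 0, 1/2], [1/2, 0, 1/2]))  # False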

Rock Paper Scissors

▶ Consider Rock Paper Scissors:

         R        P        S
   R    0, 0    -1, 1     1, -1
   P    1, -1    0, 0    -1, 1
   S   -1, 1     1, -1    0, 0

▶ What should the (unique) Nash equilibrium be?
▶ When both players use (1/3, 1/3, 1/3).
▶ Test this: what is the payoff of a pure strategy against (1/3, 1/3, 1/3)?
▶ It is 0 for every pure strategy.
▶ Note that the expected payoff for each player is 0 (the game is fair).
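For completeness, here is one standard way, not covered in these slides, to compute this equilibrium numerically: solve the row player's maximin linear program. This is only a sketch; it assumes scipy is available and the variable names are mine. For Rock Paper Scissors it returns (approximately) (1/3, 1/3, 1/3) with value 0, and by symmetry the column player's strategy is the same.

import numpy as np
from scipy.optimize import linprog

# Row player's payoff matrix for Rock Paper Scissors (rows/columns: R, P, S).
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])
n = A.shape[0]

# Variables: (p_R, p_P, p_S, v).  Maximize v subject to (A^T p)_j >= v for
# every column j (the mix p guarantees at least v no matter what the opponent
# plays), with p a probability vector.
c = np.zeros(n + 1)
c[-1] = -1.0                                # minimize -v  <=>  maximize v
A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (A^T p)_j <= 0 for each j
b_ub = np.zeros(n)
A_eq = np.array([[1.0] * n + [0.0]])        # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, v = res.x[:n], res.x[-1]
print(p, v)   # approximately [1/3, 1/3, 1/3] and 0: the game is fair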


Nash Equilibrium

▶ We saw that there is not always a pure Nash equilibrium.
▶ Can we guarantee a mixed Nash equilibrium?

Theorem (Nash). Suppose that:
▶ a game has finitely many players,
▶ each player has finitely many pure strategies, and
▶ we allow for mixed strategies.
Then the game admits a Nash equilibrium.


Tennis

▶ You're playing tennis, and returning the ball.
▶ Options:
    ▶ You can hit the ball to either the opponent's left or right.
    ▶ The opponent can anticipate where you will hit the ball (to their left or right).
▶ Payoffs are:

                Opponent
                 L         R
   You   L    50, 50    80, 20
         R    90, 10    20, 80

▶ What are the Nash equilibria?
▶ There is no pure Nash equilibrium.
▶ Assume that the strategies are (p, 1 − p) and (q, 1 − q).
▶ Idea: your opponent's pure strategies must return the same expected payoff (for them) against (p, 1 − p).
▶ The strategies in Nash equilibrium are (0.7, 0.3) (p = 0.7) and (0.6, 0.4) (q = 0.6).
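The indifference calculation as a small Python sketch (the helper name indifference_mix is my own): choose p so that the opponent's two guesses pay the same for them, and q so that your two shots pay the same for you.

# Tennis payoffs from the slide: rows are your shot (L, R), columns are the
# opponent's guess (L, R); "you" holds your payoffs, "opp" the opponent's.
you = [[50, 80],
       [90, 20]]
opp = [[50, 20],
       [10, 80]]

def indifference_mix(a, b, c, d):
    """Probability x solving a*x + b*(1-x) = c*x + d*(1-x): the mix that makes
    the other player indifferent between their two pure strategies."""
    return (d - b) / ((a - b) - (c - d))

# Your mix (p, 1-p) over (L, R) must make the opponent indifferent between
# guessing L (payoff 50p + 10(1-p)) and guessing R (payoff 20p + 80(1-p)).
p = indifference_mix(opp[0][0], opp[1][0], opp[0][1], opp[1][1])

# The opponent's mix (q, 1-q) must make you indifferent between hitting L
# (payoff 50q + 80(1-q)) and hitting R (payoff 90q + 20(1-q)).
q = indifference_mix(you[0][0], you[0][1], you[1][0], you[1][1])

print(p, q)   # 0.7 and 0.6, matching the slide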


Going to the Movies

                 Date
                C       D
   You   C    3, 2    1, 1
         D    0, 0    2, 3

▶ What are the Nash equilibria?
▶ Pure Nash equilibria occur when you both go to the same movie.
▶ For mixed Nash equilibria, write out the strategies as (p, 1 − p) and (q, 1 − q).
▶ (Same) trick: to find p, consider your date's pure strategies: the payouts for both must be the same.
▶ So 2p = p + 3(1 − p), which gives p = 3/4.
▶ Similarly, q = 1/4.
▶ Note that both your and your date's expected payout is 3/2 (less than what either of you gets at either pure Nash equilibrium).
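A sketch verifying the arithmetic (Python's fractions module keeps the numbers exact; the variable names are mine): the indifference conditions pin down p = 3/4 and q = 1/4, and both expected payouts come out to 3/2.

from fractions import Fraction as F

# "Going to the Movies" payoffs: rows are your choice (C, D), columns are your
# date's choice; "you" holds your payoffs, "date" your date's.
you  = [[F(3), F(1)], [F(0), F(2)]]
date = [[F(2), F(1)], [F(0), F(3)]]

# Your mix (p, 1-p) must make your date indifferent: 2p + 0(1-p) = 1p + 3(1-p).
p = F(3, 4)
assert date[0][0] * p + date[1][0] * (1 - p) == date[0][1] * p + date[1][1] * (1 - p)

# The date's mix (q, 1-q) must make you indifferent: 3q + 1(1-q) = 0q + 2(1-q).
q = F(1, 4)
assert you[0][0] * q + you[0][1] * (1 - q) == you[1][0] * q + you[1][1] * (1 - q)

# Expected payouts at the mixed equilibrium: 3/2 for each player.
your_payout = sum(pi * qj * you[i][j]
                  for i, pi in enumerate((p, 1 - p))
                  for j, qj in enumerate((q, 1 - q)))
date_payout = sum(pi * qj * date[i][j]
                  for i, pi in enumerate((p, 1 - p))
                  for j, qj in enumerate((q, 1 - q)))
print(your_payout, date_payout)   # 3/2 3/2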


Evolutionarily Stable Strategies

▶ A simplification of a theory due to John Maynard Smith.
▶ Idea: some small percentage of a population develops a mutation.
▶ This creates a competing 'strategy', compared to animals without the mutation.
▶ Will those with the mutation thrive or die?


Evolutionarily Stable Strategies

▶ An example: ants may or may not help defend the nest.
▶ This creates a game such as:

              Defend    Not
   Defend      2, 2     1, 3
   Not         3, 1     0, 0

▶ Suppose that ε% of the ants (some really small percentage) have the mutation, and (100 − ε)% don't.
▶ The payoff for a mutant is bigger when ε is small.
▶ The percentage of the population with the mutation will grow until a Nash equilibrium is reached.
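A small numerical sketch of this (Python; it assumes the mutants are the ants that do not defend, which is what makes the mutant payoff largest when the mutant share is small, and it writes the share as a fraction eps rather than a percent): each ant's expected payoff is a weighted average over random opponents, and the mutant's advantage vanishes at a 50% share, which is the mixed Nash equilibrium of this game.

# Payoff to the row ant; rows/columns are (Defend, Not defend).
payoff = [[2, 1],
          [3, 0]]

def expected_payoffs(eps):
    """Expected payoff of a defender and of a mutant (non-defender) when a
    fraction eps of the population are mutants and opponents are drawn at
    random from the population."""
    defender = (1 - eps) * payoff[0][0] + eps * payoff[0][1]
    mutant   = (1 - eps) * payoff[1][0] + eps * payoff[1][1]
    return defender, mutant

for eps in (0.01, 0.10, 0.25, 0.50):
    d, m = expected_payoffs(eps)
    print(f"mutant share {eps:.2f}: defender {d:.2f}, mutant {m:.2f}")

# At small shares the mutant does strictly better (2.97 vs 1.99 at 1%), so the
# mutation spreads; the advantage shrinks as the share grows and disappears at
# 50%, where the population sits at the game's mixed Nash equilibrium.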