Beyond Nash Equilibrium: Solution Concepts for the 21st Century

Joseph Y. Halpern

Computer Science Department, Cornell University, Ithaca, NY 14853, USA
[email protected]

An often useful way to think of security is as a game between an adversary and the “good” participants in the protocol. Game theorists try to understand games in terms of solution concepts; essentially, a solution concept is a rule for predicting how the game will be played. The most commonly used solution concept is Nash equilibrium. Intuitively, a Nash equilibrium is a strategy profile (a collection of strategies, one for each player in the game) such that no player can do better by deviating. The intuition behind Nash equilibrium is that it represents a possible steady state of play. It is a fixed point where each player holds correct beliefs about what other players are doing, and plays a best response to those beliefs. Part of what makes Nash equilibrium so attractive is that in games where each player has only finitely many possible deterministic strategies, and we allow mixed (i.e., randomized) strategies, there is guaranteed to be a Nash equilibrium [11] (this was, in fact, the key result of Nash’s thesis). For quite a few games, thinking in terms of Nash equilibrium gives insight into what people do (there is a reason that game theory is taught in business schools!). However, as is well known, Nash equilibrium suffers from numerous problems. For example, the only Nash equilibrium in finitely repeated prisoner’s dilemma is to always defect. It is hard to make a case that rational players “should” play the Nash equilibrium in this game when “irrational” players who cooperate for a while do much better! Moreover, in a game that is played only once, why should a Nash equilibrium arise when there are multiple Nash equilibria? Players have no way of knowing which one will be played. And even in games where there is a unique Nash equilibrium (like finitely repeated prisoner’s dilemma), how do players obtain correct beliefs about what other players are doing if the game is played only once? (See [10] for a discussion of some of these problems.)
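The “no profitable unilateral deviation” condition can be checked mechanically for small games. The following sketch (my illustration, not from the paper; the payoff numbers are the standard textbook ones) enumerates the pure-strategy profiles of the one-shot prisoner’s dilemma and confirms that mutual defection is the only Nash equilibrium:

```python
from itertools import product

# One-shot prisoner's dilemma. C = cooperate, D = defect.
# payoffs[(row_move, col_move)] = (row_payoff, col_payoff).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
moves = ["C", "D"]

def is_nash(profile):
    """True iff neither player gains by deviating unilaterally from profile."""
    r, c = profile
    row_payoff, col_payoff = payoffs[(r, c)]
    # Fix the opponent's move and check every unilateral deviation.
    row_ok = all(payoffs[(r2, c)][0] <= row_payoff for r2 in moves)
    col_ok = all(payoffs[(r, c2)][1] <= col_payoff for c2 in moves)
    return row_ok and col_ok

equilibria = [p for p in product(moves, moves) if is_nash(p)]
print(equilibria)  # → [('D', 'D')]
```

Note that ("C", "C") fails the check even though both players would prefer it to ("D", "D"): each player gains by deviating to D, which is exactly the tension discussed above.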
I will argue that Nash equilibrium is particularly problematic when it comes to security. Not surprisingly, there has been a great deal of work in the economics community on developing alternative solution concepts. Various alternatives to and refinements of Nash equilibrium have been introduced, including, among many others, rationalizability, sequential equilibrium, (trembling hand) perfect equilibrium, proper equilibrium, and iterated deletion of weakly dominated strategies. (These notions are discussed in standard game theory texts, such as [4, 12].) Despite some successes, none of these alternative solution concepts address the following four problems with Nash equilibrium, all quite relevant to security.

– Although both game theory and distributed computing are concerned with multiple agents interacting, the focus in the game theory literature has been on the strategic concerns of agents—rational players choosing strategies that are best responses to the strategies chosen by other players—while the focus in distributed computing has been on problems such as fault tolerance and asynchrony, leading to, for example, work on Byzantine agreement [3, 13]. Nash equilibrium does not deal with “faulty” or “unexpected” behavior, “irrational” players, or colluding agents. When dealing with security concerns, all these issues are critical.
– Nash equilibrium does not take computational concerns into account. We need solution concepts that can deal with resource-bounded players, concerns that are at the heart of cryptography.
– Nash equilibrium presumes that players have common knowledge of the structure of the game, including all the possible moves that can be made in every situation and all the players in the game. But in security applications, sometimes the largest problems come from actions that were totally unanticipated, and not on anyone’s radar screen beforehand (other than the attacker’s!).
– Nash equilibrium presumes that players know what other players are doing (and are making a best response to it).
But how do they gain this knowledge in a one-shot game, particularly if there are multiple equilibria?
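One of the refinements mentioned above, iterated deletion of weakly dominated strategies, is simple enough to sketch in code. The sketch below is my own illustration, not from the paper, and the 2×2 game is hypothetical: the row player’s strategy T is weakly dominated by B, after which the column player’s R becomes dominated by L.

```python
def remove_weakly_dominated(game, row_strats, col_strats):
    """Iterated deletion of weakly dominated pure strategies in a two-player
    normal-form game. game maps (row, col) pairs to (row_payoff, col_payoff)."""

    def find_dominated(strats, others, payoff):
        # Return some strategy s weakly dominated by another strategy s2:
        # s2 does at least as well against every opponent strategy, and
        # strictly better against at least one.
        for s in strats:
            for s2 in strats:
                if s2 != s and \
                   all(payoff(s2, o) >= payoff(s, o) for o in others) and \
                   any(payoff(s2, o) > payoff(s, o) for o in others):
                    return s
        return None

    changed = True
    while changed:
        changed = False
        d = find_dominated(row_strats, col_strats, lambda r, c: game[(r, c)][0])
        if d is not None:
            row_strats = [s for s in row_strats if s != d]
            changed = True
        d = find_dominated(col_strats, row_strats, lambda c, r: game[(r, c)][1])
        if d is not None:
            col_strats = [s for s in col_strats if s != d]
            changed = True
    return row_strats, col_strats

# Hypothetical game: B weakly dominates T for the row player; once T is
# deleted, L strictly dominates R for the column player.
game = {("T", "L"): (1, 1), ("T", "R"): (0, 0),
        ("B", "L"): (1, 1), ("B", "R"): (1, 0)}
print(remove_weakly_dominated(game, ["T", "B"], ["L", "R"]))  # → (['B'], ['L'])
```

A well-known caveat, and part of why this refinement is itself problematic: with weak (as opposed to strict) domination, the surviving strategies can depend on the order in which deletions are performed.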

The full version of this paper [5] has an overview of all the relevant issues and further references; more details can be found in [1, 2, 6, 7, 8, 9].

References

1. Abraham, I., Dolev, D., Gonen, R., Halpern, J.Y.: Distributed computing meets game theory: robust mechanisms for rational secret sharing and multiparty computation. In: Proc. 25th ACM Symposium on Principles of Distributed Computing. pp. 53–62 (2006)
2. Abraham, I., Dolev, D., Halpern, J.Y.: Lower bounds on implementing robust and resilient mediators. In: Fifth Theory of Cryptography Conference. pp. 302–319 (2008)
3. Fischer, M.J., Lynch, N.A., Paterson, M.S.: Impossibility of distributed consensus with one faulty process. Journal of the ACM 32(2), 374–382 (1985)
4. Fudenberg, D., Tirole, J.: Game Theory. MIT Press, Cambridge, Mass. (1991)
5. Halpern, J.Y.: Beyond Nash equilibrium: solution concepts for the 21st century. In: Lectures in Game Theory for Computer Scientists, pp. 264–289. Cambridge University Press, Cambridge, U.K. (2011); an earlier version of the paper can be found in the Proceedings of the Twenty-Seventh Annual ACM Symposium on Principles of Distributed Computing, 2008, pp. 1–10, and the Proceedings of the Eleventh International Conference on Principles of Knowledge Representation and Reasoning (KR 2008), 2008, pp. 6–14
6. Halpern, J.Y., Pass, R.: Iterated regret minimization: a more realistic solution concept. In: Proc. Twentieth International Joint Conference on Artificial Intelligence (IJCAI ’07). pp. 153–158 (2007); a longer version of the paper, with the title “Iterated regret minimization: a new solution concept”, will appear in Games and Economic Behavior
7. Halpern, J.Y., Pass, R.: Game theory with costly computation. In: Proc. First Symposium on Innovations in Computer Science (2010)
8. Halpern, J.Y., Pass, R.: I don’t want to think about it now: decision theory with costly computation. In: Principles of Knowledge Representation and Reasoning: Proc. Twelfth International Conference (KR ’10) (2010)
9. Halpern, J.Y., Rêgo, L.C.: Extensive games with possibly unaware players. In: Proc. Fifth International Joint Conference on Autonomous Agents and Multiagent Systems. pp. 744–751 (2006); full version available at arxiv.org/abs/0704.2014
10. Kreps, D.M.: Game Theory and Economic Modeling. Oxford University Press, Oxford, UK (1990)
11. Nash, J.: Equilibrium points in n-person games. Proc. National Academy of Sciences 36, 48–49 (1950)
12. Osborne, M.J., Rubinstein, A.: A Course in Game Theory. MIT Press, Cambridge, Mass. (1994)
13. Pease, M., Shostak, R., Lamport, L.: Reaching agreement in the presence of faults. Journal of the ACM 27(2), 228–234 (1980)