LOTE Study Guide
Project "Development of a National Concept for the European Credit Transfer and Accumulation System (ECTS): Harmonisation of Credits and Development and Implementation of a Methodology for Learning-Outcome-Based Study Programmes", No. VP1-2.2-ŠMM-08-V-01-001
STUDENT’S STUDY GUIDE FOR THE COURSE GAME THEORY
This guide summarizes the content and the key concepts of the game theory course (LOTE). It supplements the course (unit) description and details the in-class activities and homework assignments that are to be completed during the semester. The guide starts with a brief description of the course content, required readings, homework assignments, and seminar topics. The second part provides an extended summary of each topic together with a list of the key concepts.

Description of the course

The course consists of two parts. The first part recalls and deepens the prerequisite introductory game-theoretic material studied at the bachelor level in the course "Microeconomics" (taught either in Lithuanian or in English). This part is therefore dedicated first of all to the basic concepts of game theory and details games of complete information represented in static, dynamic and coalition forms. It provides a more formal and rigorous theoretical framework for game-theoretic problems and aims at a deeper understanding of Nash equilibrium and its refinements. The material of the first part is checked during the mid-term exam. The second part introduces fairly new topics in game theory. It covers different aspects of games of incomplete information, repeated interaction and evolutionarily stable outcomes. It also covers key features of cooperative games such as the characteristic function, the nucleolus, the core, the Shapley value, etc. Most of cooperative game theory, however, is beyond the scope of the course and is left for self-study or PhD-level courses. The material of the second part is checked during the final exam.
Course content and readings/assignments

1. Theoretical framework: game examples; representation of a game in strategic, extensive and coalition forms; mixed and behavioural strategies and their equivalence.
   Readings: [VR] Chapter 1. Homework: [VR] 1.1–1.8, 1.11; [MWG] 7.C.1, 7.E.1.

2. Strategic-form analysis: dominance and iterative dominance; best-response function; Nash equilibrium in pure and mixed strategies; zero-sum bilateral games; strong and coalition-proof equilibria; correlated equilibrium; rationalizability.
   Readings: [VR] Chapter 2. Homework: [VR] 2.3–2.9, 2.11–2.15; [MWG] 8.B.1, 8.B.5, 8.C.4, 8.D.4–8.D.6, 8.D.9. Seminars: [VR] Chapter 3, 3.2–3.4.

3. Refinements of Nash equilibrium: "incredible threats"; extensive-form refinements: proper subgames, subgame-perfect equilibrium, weak perfect Bayesian equilibrium, sequential, perfect (trembling-hand) and proper refinements; strategic-form refinements: perfect and proper equilibrium.
   Readings: [VR] Chapter 4. Homework: [VR] 4.1–4.8, 4.15–4.20; [MWG] 8.E.1, 9.B.9–9.B.11, 9.B.14, 9.C.7. Seminars: [VR] Chapter 5, 5.2, 5.4.

4. Incomplete information: Bayesian games; Bayes-Nash equilibrium; direct mechanisms; incentive-based behaviour and the revelation principle; signalling games.
   Readings: [VR] Chapter 6. Homework: [VR] 6.1, 6.2, 6.8, 6.9, 6.11–6.14, 6.17, 6.19, 6.20. Seminars: [VR] Chapter 7, 7.1–7.4.

5. Repeated interaction: repeated games; reputation and "irrationality"; folk theorems; evolutionarily stable equilibrium.
   Readings: [VR] Chapters 8, 10 (11). Homework: [VR] 8.1–8.6, 8.9, 8.15, 8.16. Seminars: [VR] Chapter 9, 9.3.

6. Cooperative games: bargaining process; bargaining power; the coalition (characteristic) function; core; nucleolus; Shapley value and Banzhaf index.
   Readings: [PS] Chapters 2, 3, 5.1, 8. Homework: [V] 3.1–3.5; [PS] 2.2.2–2.2.6, 3.2.1–3.2.2, 3.3.3, 3.4.1–3.4.2.
The largest part of the course material is based on the Vega-Redondo [VR] book and is dedicated to non-cooperative game theory. The material of the book is supplemented by the lecture notes (slides presented during the lectures) and by short summaries of the topics, based on the corresponding summary sections of [VR], which are included in this study guide. Selected sections from Peleg B. and Sudhölter P. [PS] are used to formalise and exemplify the coalition form of a game. Mas-Colell et al. [MWG], Chapters 7–9, are used for additional reading. Some sections of this book (various applications of game-theoretic concepts to problems in microeconomics) may be presented during the seminars, although the basic topics for the seminars are covered in [VR].
References:

[VR] Vega-Redondo F. (1992). Economics and the Theory of Games. Cambridge University Press, New York.
[PS] Peleg B., Sudhölter P. (2007). Introduction to the Theory of Cooperative Games. Springer, Berlin Heidelberg, New York.
[MWG] Mas-Colell A. et al. (2004). Microeconomic Theory. Oxford University Press, New York.
[V] Vilkas E. (2003). Sprendimų priėmimo teorija, paskaitų konspektas (in Lithuanian). VDU, Kaunas.

Alongside traditional lectures and discussion of the applied aspects of game theory, the studied material is used to investigate various applications, mostly related to economic problems. This practical part is left for self-study and involves seminar presentations and homework assignments (also presented during recitation hours). The material from the practical studies is used for the mid-term and final exams. In addition, it is suggested to write a brief essay (a verbal description, up to one A4 page) on the application of the studied concepts to real-life situations. Any other interactive activities regarding game theory applications to real cases organized by the students, as well as interesting group discussions, are graded as extra points on top of the main course activity grades and are encouraged for those who seek the highest grade in this course. All required activities other than the mid-term and final exams are detailed below:
Activity: Seminar presentation. Description: presentation of applications and organization of behavioural experiments (public goods, auctions, behavioural economics). Examples of seminar topics: efficient allocation of public goods, macroeconomic coordination failures, decentralized price formation, signalling in the labour market, insurance markets and adverse selection, auctions, trade, efficiency wages and unemployment, evolution and reinforcement learning, predictable irrationality.

Activity: Homework assignments. Description: solutions to homework assignments are presented to the class during recitation hours. Problems are divided into groups according to their complexity (0.5 and 1 points). A student is supposed to collect at least 2 points to receive the highest grade for this part. Corrections and crucial assistance from other students are graded in proportion to the solved part.

This study guide is in turn supplemented by the lecture notes organized in the form of slides, which can be found on the companion page for the course http://uosis.mif.vu.lt/~celov or in the Virtual Study Environment (Lith. virtualioji mokymosi aplinka). Each lecture is summarized below, and a list of key concepts and definitions is also provided.
How to score the highest grade in Game Theory?
- Do many problems – more than is suggested for homework.
- Pose correct questions and dig deeper into the concepts – the devil is in the details.
- Attend lectures – they contain discussions that go beyond the book and the lecture notes.
- Read/browse the recommended readings before the lectures and again several days after.
- Make your own lecture notes – the most elegant ones get extra points.
- Go to bed by 11 pm (this includes the lecturer) and get up at 7 am – sleep at least 8 hours per day.
- Do morning exercises, eat well, and relax before going to sleep.
- Do not memorize the material – try to understand it.
- Do not copy solutions to the problem sets … and do not give your solutions to your group-mates.
- A better option is to explain a solution aloud, step by step … to your friends, little brother, dog, or teddy bear.
- Avoid solving "typical" problems mechanically – play the games, do not just do the mathematics.
To sum up, before moving to the next section it is suggested to recall the material on the representation of the example games in normal and extensive forms. Try to explain to your group-mates the key concepts that were studied during the lectures before moving to your homework assignments. Discuss the homework solutions with your group-mates before and during the recitation hours. Besides solving a lot of problems, a crucial way to a better understanding of game theory is to play the games, define their rules, and be an economic naturalist. The reasons are obvious: it is fun, and you need to relax sometimes, not just study; and seeing the practical applications of game theory is the key to becoming a professional. Therefore, at the completion of the course you will be expected to be an experienced game theory practitioner, not just a layman consumer.
Summary of the lectures
1. Theoretical framework.
Game theory (GT) is a theory of rational behavior of economic agents whose interests and/or actions are in conflict. More rigorously, it can be defined as the theory of mathematical models of conflict and cooperation between intelligent and rational decision makers. Game theory is applicable whenever at least two individuals – people, companies, political parties, or nations – confront situations where the outcome for each depends on the behavior of all. The models of game theory are highly abstract representations of real-life situations. This introductory topic (VR, Chapter 1) investigates the basic theoretical framework required to model and analyze general strategic situations. It presents two alternative ways of representing any given game: the extensive form and the strategic (or normal) form.
The first approach (the extensive form, G_E) is the most general, in that it describes explicitly the players' order of moves and their available information at each point in the game. It embodies a formal specification of the tree of events, where each of its intermediate nodes has a particular player associated with it who decides how play (i.e., the "branching process") unfolds thereafter. Eventually, a final node is reached that has a corresponding vector of payoffs reflecting how the different players evaluate such a path of play. Formally,

G_E = \{ N, \{K_i\}_{i=0}^{n}, R, \{H_i\}_{i=0}^{n}, \{A(x)\}_{x \in K \setminus Z}, \{[\pi_i(z)]_{i=1}^{n}\}_{z \in Z} \},

where N denotes the set of players (with 0 representing "Nature"), the precedence rule R defined on the set of nodes K forms the tree of events (game tree), K_i represents the nodes at which player i moves (the order of moves), H_i partitions player i's nodes into information sets, A(x) denotes the actions available at node x, and π_i(·) is the i-th player's payoff function defined on the set of final nodes Z.
In contrast, the strategic (or normal) form of a game, G_N, is based on the fundamental notion of strategy. A player's strategy is a contingent plan of action that anticipates every possible decision point at which the player may be in the course of the game. Given a profile of players' strategies, the path of play is uniquely defined, and so are the corresponding outcome and payoffs. In essence, the strategic form of a game is simply a compact description of the situation through the corresponding mapping from strategy profiles to payoffs:

G_N = \{ N, \{S_i\}_{i=0}^{n}, \{\pi_i\}_{i=1}^{n} \},

where S_i denotes the i-th player's pure-strategy space, and the final node z of G_E is induced by the strategy profile (s_0, s_1, …, s_n). Often we shall be interested in allowing players to randomize when selecting their strategy. This possibility gives rise to the so-called mixed extension of a game (Σ_i, an (r_i − 1)-dimensional simplex over the set of pure strategies S_i, with r_i being the number of pure strategies), where the payoffs associated with mixed strategies are defined in expected terms. If a player's randomization can occur independently at each of her information sets, the induced plan of (randomized) action is called a behavioral strategy (denoted by γ_i). One can establish a formal relationship between mixed and behavioral strategies. At the strategic level they turn out to be fully equivalent, as long as players display perfect recall (i.e., never forget prior information or their own previous choices).

Finally, the sharp change of focus adopted by the branch of game theory that has been labeled "cooperative," as opposed to the non-cooperative game theory studied in the first part of the course, is briefly outlined. The implicit assumption of cooperative game theory is that players can jointly enforce (or commit to) the implementation of any outcome and will use this possibility to reach some efficient configuration. To illustrate the nature of this approach, this introductory chapter outlines the implications of two of the most widely used solution concepts (the core and the Shapley value) within a very simple setup. The gaming process consists of two stages: the first is the negotiation process that results in some coalition structure, while the second (the actual play) mimics one of the forms of non-cooperative games. The first stage is the focus of cooperative game theory and is described by the characteristic function v(K):
G_C = \{ N, v(\cdot) \}.

In this part of the course several classical examples of games are introduced that will be revisited in later sections. A simple entry game illustrates the problem of credible threats; the prisoner's dilemma leads to an optimal (equilibrium) solution that is not Pareto efficient; coordination problems, expressed in the form of the battle of the sexes, the matching-pennies game, and the highest-card (a simplified poker) game, lead to Pareto-efficient but not unique (multiple) optimal solutions.
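To make the strategic-form notation and the mixed extension concrete, the following Python sketch (an illustration added alongside this guide, not taken from [VR]) stores a two-player finite game as a pair of payoff matrices and computes the expected payoffs of a mixed-strategy profile; the matching-pennies payoffs are those used in the seminar examples below.

```python
import numpy as np

# Two-player strategic-form game: payoff matrices A (row player) and B (column player);
# rows index player 1's pure strategies, columns index player 2's pure strategies.
# Matching pennies, as in the seminar examples below.
A = np.array([[ 1, -1],
              [-1,  1]])    # player 1's payoffs
B = -A                      # zero-sum game: player 2's payoffs

def expected_payoffs(A, B, sigma1, sigma2):
    """Expected payoffs of the mixed-strategy profile (sigma1, sigma2)."""
    sigma1, sigma2 = np.asarray(sigma1), np.asarray(sigma2)
    return float(sigma1 @ A @ sigma2), float(sigma1 @ B @ sigma2)

# The uniform mixed strategies (1/2, 1/2) give both players an expected payoff of 0.
print(expected_payoffs(A, B, [0.5, 0.5], [0.5, 0.5]))
```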
Key concepts: game theory, players, actions, information (perfect, proper, complete, incomplete, (a)symmetric), payoffs, forms of the game: normal, extensive, coalition; game tree, strategy (pure, mixed, behavioural), strategic equivalence.
Seminar discussion:
1. Game theory problems in business and everyday life (1–2 students). Find information on the history of the classical game theory problems. Define them in the general normal form and in the extensive form. Find and describe similar situations that fit the classical examples. In the second part of the discussion, provide examples of real-life situations that may be described and solved with the tools of game theory. Each student may earn extra points by preparing a description of a real-life situation.
Examples of two-player games for the discussion:
Battle of the sexes. A boy and a girl agree to meet this evening, but cannot recall whether they agreed to go shopping or to attend a basketball match. He prefers shopping and she prefers basketball, though both prefer being together to being apart. Thus, while both parties prefer to find themselves at the same place, the boy and the girl cannot agree on which event to attend. This is an example of a coordination game. When expressed in normal form, it becomes evident that there are two pure-strategy (and one mixed-strategy) equilibria. Both pure-strategy equilibria are Pareto efficient:

Girl \ Boy     Basketball   Shopping
Basketball     3, 2         1, 1
Shopping       0, 0         2, 3
Prisoner's dilemma. Two individuals, labeled 1 and 2, have been arrested on suspicion of having jointly committed a certain crime. They are placed in separate cells, and each of them is given the option by the prosecutor of providing enough evidence to incriminate the other. If one defects while the other stays silent, the defector may go free while the other receives a life sentence. Yet, if both confess, a bad fate befalls them. If both cooperate and stay silent, insufficient evidence will lead to them being charged with and convicted of a lesser crime. The game has a unique Nash equilibrium, which is not Pareto efficient.
1 \ 2        Defect      Cooperate
Defect       -10, -10    0, -12
Cooperate    -12, 0      -1, -1
Matching pennies. Two players simultaneously choose "heads" or "tails." If their choices coincide (i.e., both select heads, or both select tails), player 2 pays a dollar to player 1; otherwise, player 1 pays this amount to player 2. This is an example of an antagonistic zero-sum game with no pure-strategy equilibrium.
1 \ 2     Heads     Tails
Heads     1, -1     -1, 1
Tails     -1, 1     1, -1
Simple entry. Consider two firms, 1 and 2, involved in the following game. Firm 1 is considering whether to enter the market originally occupied by a single incumbent, firm 2. In deciding what to do (enter (E) or not (N)), firm 1 must anticipate what will be the reaction of the incumbent (fight (F) or concede (C)), a decision the latter will implement only after it learns that firm 1 has entered the market. Assume that the monopoly (or collusive) profits to be derived from the market are given by two million dollars, which firm 2 either can enjoy alone if it remains the sole firm or must share with firm 1 if it concedes entry. On the other hand, if firm 2 fights entry, both firms are assumed to incur a net loss of one million dollars because of the reciprocal predatory policies then pursued. The game has two Nash equilibria
in pure strategies. One of them, however, is not acceptable, since it is supported by an "incredible" threat.

1 \ 2        Fight     Concede
Not enter    0, 2      0, 2
Enter        -1, -1    1, 1
Stag hunt. The French philosopher Jean-Jacques Rousseau presented the following situation. Two hunters can either jointly hunt a stag (an adult deer and rather large meal) or individually hunt a rabbit (tasty, but substantially less filling). Hunting stags is quite challenging and requires mutual cooperation. If either hunts a stag alone, the chance of success is minimal. Hunting stags is most beneficial for society but requires a lot of trust among its members. There are two pure-strategy equilibria. Both players prefer one equilibrium to the other: it is Pareto efficient. However, the inefficient equilibrium is less risky, as the variance of payoffs over the other player's strategies is lower. Specifically, one equilibrium is payoff-dominant while the other is risk-dominant. Games of this type are known as assurance games.
1 \ 2     Stag      Rabbit
Stag      10, 10    0, 8
Rabbit    8, 0      7, 7
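For 2×2 (or any finite bimatrix) games such as those above, pure-strategy Nash equilibria can be found simply by checking mutual best responses. The sketch below (an added illustration, not part of the original guide) does this for the stag-hunt payoffs from the table above.

```python
import numpy as np

def pure_nash(A, B):
    """All pure-strategy Nash equilibria (i, j) of the bimatrix game (A, B):
    i must be a best response to j, and j a best response to i."""
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                equilibria.append((i, j))
    return equilibria

# Stag hunt from the table above (index 0 = Stag, 1 = Rabbit)
A = np.array([[10, 0], [8, 7]])   # player 1's payoffs
B = np.array([[10, 8], [0, 7]])   # player 2's payoffs
print(pure_nash(A, B))            # [(0, 0), (1, 1)]: (Stag, Stag) and (Rabbit, Rabbit)
```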
2. Strategic form analysis.
The second topic of the course focuses on the main theoretical tools and concepts available for the analysis of games in strategic form (VR, Chapter 2). It starts with the most basic notion of payoff dominance, for which a variety of specific versions are contemplated. For the standard one, an iterative process of elimination of dominated strategies is formulated, which responds to the idea that rationality (in a weak sense) is common knowledge, and therefore no rational player would ever play a strongly dominated strategy. In some games (those called dominance solvable) this process leads to a unique prediction of play. In most cases, however, the resulting limit set of undominated strategies is fairly large (although it is never empty).

Since dominance requires a strategy to be better against every configuration of the opponents' play, this concept is indeed too restrictive. This leads to the concept of Nash equilibrium, a central notion in game theory that embodies a joint requirement of individual rationality (in the stronger sense of payoff maximization) and correct ("rational") expectations about the behavior of the other rational individuals. Even though it is typically not unique, its existence can be guaranteed in every finite game, provided players may use mixed strategies.

Nash equilibrium becomes a particularly well-behaved concept for the restricted class of strategic situations known as zero-sum games, the original context studied by the early researchers in game theory. In these games, players' interests are strictly opposed (antagonistic), which turns out to afford a very elegant and clear-cut analysis. In particular, the analysis is based on the famous von Neumann (1928) theorem: for every bilateral finite zero-sum game there exists a real number ν* that is equal to both the maxmin and the minmax values of the game, and every Nash equilibrium of the game yields this value ν*. From the theorem it follows that all Nash equilibria provide the same payoff and that equilibrium play displays interchangeability, i.e., it does not require any implicit or explicit coordination among players.
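The iterative elimination of strongly dominated strategies described above can be sketched in a few lines of Python (an added illustration; only domination by pure strategies is checked here, whereas the general definition also allows domination by mixed strategies). Applied to the prisoner's dilemma from the seminar examples, it shows that the game is dominance solvable.

```python
import numpy as np

def iesds(A, B):
    """Iterated elimination of pure strategies strictly dominated by other pure
    strategies; returns the surviving strategy indices of each player."""
    rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:   # player 1: drop rows dominated by another surviving row
            if any(all(A[r2, c] > A[r, c] for c in cols) for r2 in rows if r2 != r):
                rows.remove(r); changed = True
        for c in cols[:]:   # player 2: drop columns dominated by another surviving column
            if any(all(B[r, c2] > B[r, c] for r in rows) for c2 in cols if c2 != c):
                cols.remove(c); changed = True
    return rows, cols

# Prisoner's dilemma from seminar topic 1 (index 0 = Defect, 1 = Cooperate)
A = np.array([[-10, 0], [-12, -1]])
B = np.array([[-10, -12], [0, -1]])
print(iesds(A, B))   # ([0], [0]): only (Defect, Defect) survives
```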
The last part of this chapter discusses a number of variations (strengthenings or generalizations) of the notion of Nash equilibrium. First, we briefly focus on the concepts of strong and coalition-proof equilibria, which require that the equilibrium configuration be robust to deviations jointly devised by any coalition of players. Unfortunately, both of these notions (even the weaker, latter one) are afflicted by acute nonexistence problems; these issues are taken up again in the next section. Next, we turn to the concept of correlated equilibrium, which allows players to rely on incentive-compatible stochastic coordination mechanisms in choosing their actions. The wider possibilities this affords substantially enlarge the range of payoffs that may be achieved in some games. In particular, payoffs larger than those attainable at any Nash equilibrium can be achieved by a carefully designed (in particular, asymmetric) pattern of individual signals. An interesting conclusion is that players may prefer to remain less informed (keeping the original asymmetry of information) in order to guarantee better payoffs. We also note that every Nash equilibrium is in fact a particular case of correlated equilibrium in which players' strategies are not correlated. Finally, we discuss the notion of rationalizability. Its motivation derives from the idea that, unless players explicitly communicate with each other (a possibility that, in any case, would have to be modeled as part of the game), there is no reason to believe they must succeed in coordinating on a particular Nash equilibrium. This suggests that players' analysis of the game should often be based on the knowledge of payoffs alone and the presumption that the opponents are rational maximizers. What arises then is an iterative process of independent reasoning for each player that closely parallels the iterative elimination of dominated strategies. In fact, the limit sets of the two processes coincide in bilateral games, but differ when the number of players is greater than two.
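The von Neumann value recalled in this topic can be computed directly by linear programming. The sketch below (an added illustration, assuming NumPy and SciPy are available) finds the value and a maxmin mixed strategy for the row player of a finite zero-sum game and recovers the (1/2, 1/2) strategy and zero value of matching pennies.

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_value(A):
    """Value and a maxmin mixed strategy of the row player in the zero-sum game
    with payoff matrix A (the row player maximizes)."""
    m, n = A.shape
    # Variables (x_1, ..., x_m, v); maximizing v is the same as minimizing -v.
    c = np.zeros(m + 1); c[-1] = -1.0
    # For every column j of A:  v - (x^T A)_j <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities sum to one; v is unrestricted in sign.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # matching pennies
print(zero_sum_value(A))                   # value 0, strategy (0.5, 0.5)
```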
Key concepts: dominance, iterative dominance, best response function, Nash equilibrium, pure and mixed strategy, zero-sum bilateral games, strong and coalition-proof equilibria, correlated equilibrium, rationalizability.
Topics for the seminar presentations. Topics on static and dynamic oligopoly models are covered within industrial organisation and microeconomic analysis. However, for a large group of students the Cournot and Bertrand models could be suggested as additional seminar presentations after this chapter, with the Stackelberg model and differentiated products later on. Below are summaries of the selected topics that are not covered in other courses and are fairly new problems for the students.
2. Efficient allocation of public goods (1–2 students). The field of implementation theory is a vast area of research that aims at exploring the possibilities and limitations of reconciling individual incentives with some standard of social desirability. To break ground, it is suggested to start by focusing on public-good allocation problems, where strong free-rider inefficiencies arise when simple-minded approaches, e.g., the natural subscription mechanism, are used. It is therefore possible to conclude that a more "creative" approach to the problem is called for, leading the enquiry into the field of mechanism (or institutional) design. In this vein, it can be shown that a rather simple and satisfactory solution to the problem is provided by a mechanism proposed by Walker (1981), whose performance displays the following regularity: every Nash equilibrium of the induced game results in a Lindahl (and therefore efficient) allocation of the underlying economic environment. The following concern is to cast and study the implementation problem in a more abstract but substantially more general fashion. This concluding part addresses the suitable construction of Nash-implementation mechanisms.
3. Macroeconomic coordination failures (1 student). A simple strategic model for the study of macroeconomic coordination failures. The main theoretical issue is posed as follows: is it possible to provide a coherent strategic rationale for the Keynesian claim that, despite flexible prices, a market system may become trapped in a low-activity equilibrium? Indeed, a stylized framework can be provided that allows for (in fact, forces) this possibility at the extreme level of zero activity. Its main features are (i) acute production complementarities among intermediate commodities in the production of a final consumption good, and (ii) sequential timing in the production and marketing decisions that entails important allocation irreversibilities. To obtain less drastic conclusions, the problem might be reconsidered within a variation of the original framework that curtails agents' price-manipulation possibilities quite significantly. In this revised setup, the model displays a wide equilibrium multiplicity, which allows for a nondegenerate range of different equilibrium activity levels at which agents may (mis)coordinate.
3. Refinements of Nash equilibrium.

This topic is concerned with the so-called refinements of Nash equilibrium (VR, Chapter 4). The material covers a wide variety of refinements, differing both in the stringency of their requirements (i.e., how much they "refine" Nash equilibrium) and in their framework of application (extensive- or strategic-form games). One of the weakest notions is subgame-perfect equilibrium (SPE), which requires that a Nash equilibrium materialize in every proper subgame, i.e., in each subgame starting at a singleton information set. This concept is most useful in games of perfect information, where every information set consists of a single node. In other kinds of games, where not all players are fully informed of past history at their decision nodes, this concept may have little or no cutting power over Nash equilibrium.

Many games of interest are not of perfect information. This motivates introducing other equilibrium notions such as weak perfect Bayesian equilibrium (WPBE), in which players' beliefs are made explicit at every information set and are required to be statistically consistent with players' equilibrium strategies. The fact that WPBE imposes no restrictions whatsoever on off-equilibrium beliefs (i.e., full discretion is allowed at unvisited information sets) is a significant drawback of this concept. In particular, it renders it too weak in some games, where it may even fall short of meeting the basic requirement of subgame perfection.

In a sense, we may conceive SPE and WPBE as Nash refinements geared toward excluding only incredible threats. Some of their conceptual problems follow from the fact that they abstract from the need to "refine" beliefs and may therefore admit some beliefs that should, in fact, be judged untenable. Excluding untenable beliefs is the motivation underlying the concepts of sequential equilibrium, perfect equilibrium, and proper equilibrium. The first attains this objective by demanding a certain continuity requirement on the formation of out-of-equilibrium beliefs. The latter two do it by introducing an explicit theory of deviations (i.e., out-of-equilibrium behavior) that allows for the possibility that players may make choice mistakes with small probability.
Most of the discussion has been concerned with Nash refinements that build on a dichotomy between choices (or beliefs) arising on- and off-equilibrium, thus requiring them to be defined on the extensive-form representation of a game. However, we have seen that refinements defined on the strategic form (e.g., the exclusion of weakly dominated strategies, or the counterparts of perfect and proper) are of special interest as well and may go well beyond a narrow strategic-form interpretation. They may reflect, for example, considerations that would have seemed to belong exclusively to the realm of extensive-form Nash refinements, such as those based on backward induction, forward induction, or even sequential rationality. A useful relationship between the different sets of equilibria is:

NE(G_E) \supseteq WPBE(G_E) \supseteq SE(G_E) \supseteq PFE(G_E) \supseteq PRE(G_E) \neq \emptyset,
NE(G_E) \supseteq SPE(G_E) \supseteq SE(G_E),

where for any extensive-form game GE we have:
NE(GE) – set of Nash equilibria in GE;
WPBE(GE) – set of weak perfect Bayesian equilibria in GE;
SPE(GE) – set of subgame-perfect equilibria in GE;
SE(GE) – set of sequential equilibria in GE;
PFE(GE) – set of (trembling-hand) perfect equilibria in GE;
PRE(GE) – set of proper equilibria in GE.
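As an added illustration of the weakest refinement above, the following sketch performs backward induction on a small game tree (a hypothetical dictionary-based representation, not taken from [VR]) for the simple entry game of seminar topic 1: the unique subgame-perfect outcome is (Enter, Concede), while the Nash equilibrium (Not enter, Fight) rests on the incredible threat to fight.

```python
def backward_induction(node):
    """Return (payoff_vector, action path) selected by backward induction
    in a finite game tree of perfect information."""
    if "payoffs" in node:                      # terminal node
        return node["payoffs"], []
    mover = node["player"]
    best = None
    for action, child in node["children"].items():
        payoffs, path = backward_induction(child)
        if best is None or payoffs[mover] > best[0][mover]:
            best = (payoffs, [action] + path)
    return best

# Simple entry game (payoffs in millions): firm 1 (index 0) moves first,
# firm 2 (index 1) reacts only if firm 1 enters.
entry_game = {
    "player": 0,
    "children": {
        "Not enter": {"payoffs": (0, 2)},
        "Enter": {
            "player": 1,
            "children": {
                "Fight":   {"payoffs": (-1, -1)},
                "Concede": {"payoffs": (1, 1)},
            },
        },
    },
}

print(backward_induction(entry_game))   # ((1, 1), ['Enter', 'Concede'])
```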
Key concepts: "incredible" threats, subgame, proper subgame, backward induction, forward induction, subgame-perfect equilibrium, perfect information, weak perfect Bayesian equilibrium, prior beliefs, off-the-equilibrium path, completely mixed profile, consistent assessment, sequential equilibrium, trembling-hand perfect equilibrium, proper equilibrium, weak dominance.
Topics for the seminar presentations:
4. Decentralized price formation (1 student). This presentation is about a stylized model of bargaining between two individuals (a buyer and a seller) who must agree on how to share some prespecified surplus. In the theoretical framework considered, individuals propose their offers in alternation, the partner then responding with either acceptance or rejection. Even though the game could proceed indefinitely and the wealth of possible intertemporal strategies is staggering, the game has a unique subgame-perfect equilibrium in which players agree immediately on a certain division of the surplus. Building on this bilateral model of bargaining, a population-based process is then studied in which pairs of individuals bargain in "parallel" and may switch partners in case of disagreement. The resulting game again displays a unique subgame-perfect equilibrium, which in turn determines a uniform price. This price, however, turns out to be different from the Walrasian price, at least under a certain interpretation of what is the right "benchmark economy".
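For the alternating-offers model above, the standard closed-form division (stated here as background, since the guide itself does not reproduce the formula) gives the first proposer a share (1 − δ₂)/(1 − δ₁δ₂) of the surplus, accepted immediately. A tiny sketch:

```python
def alternating_offers_split(delta1, delta2):
    """Shares of a unit surplus in the unique subgame-perfect equilibrium of
    alternating-offers bargaining; player 1 proposes first, and the players
    discount future periods with factors delta1 and delta2."""
    share1 = (1 - delta2) / (1 - delta1 * delta2)   # accepted in the first period
    return share1, 1 - share1

print(alternating_offers_split(0.9, 0.9))   # equally patient players: (~0.526, ~0.474)
```

As both players become very patient (discount factors close to one), the split approaches one half each; the proposer's advantage vanishes.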
5. Efficient allocation of an indivisible object (1 student). This problem, the so-called King Solomon's dilemma, recalls the topic of Nash implementation of efficient allocations presented earlier. The key condition of monotonicity required for a social choice rule (SCR) to be Nash implementable is violated in this case, thus ruling out that the desired SCR (i.e., the assignment of the child in dispute to the true mother) might be implemented in Nash equilibrium. However, the problem can be solved if a multistage mechanism is used and the individuals play according to a subgame-perfect equilibrium of the induced game. This suggests a rich interplay between mechanism design and equilibrium theory (in particular, Nash refinements), which has in fact been explored quite exhaustively in recent literature.
Mid-term exam.
Quiz questions are formulated in a way that checks the general understanding of the key concepts and examples listed in this guide. They are not a memory test on particular definitions. Mid-term exam problems are similar to the homework assignments – an additional motivation to do the homework assignments properly, not just as exercises for the minimal required recitation points.
4. Incomplete information.
In this topic we elaborate on the framework and concepts developed in the first half of the course to model the important class of situations in which players do not share complete information about the details of the interaction (VR, Chapter 6). Such incomplete-information games arise, for example, when there is an asymmetric distribution of information among the players concerning the underlying payoffs. Key parts of modern economic theory, such as the study of market imperfections or mechanism design, focus on scenarios displaying these features.

The starting point of the discussion is Harsanyi's model of a Bayesian game. In this context, Nature is attributed the role of specifying all the relevant details of the environment that are a priori undetermined and then distributing the information to the different agents (possibly in an asymmetric fashion) before they enter into actual play. In essence, a so-called Bayes-Nash equilibrium (BNE) in this setup can be identified with an ordinary Nash equilibrium of the extended game in which Nature participates as an additional player. This identification settles a number of important issues. For example, it guarantees that a BNE always exists in a finite Bayesian game, as an immediate corollary of previous results.

By construction, a Bayesian game defines a strategic-form framework, and a BNE is a strategic-form notion. They do not lend themselves naturally, therefore, to the study of considerations of credibility, perfectness, or (most importantly) signaling, all of which are at the core of some of the most interesting applications in this context. To model such matters more effectively, one defines the framework of a signaling game, which introduces the aforementioned considerations in the simplest possible setting. It involves just two parties moving in sequence, the first fully informed about the decision of Nature, the second ignorant of it but perfectly aware of the action chosen by the first agent. The main merit of this stylized framework is that it allows for the explicit introduction of the notion of beliefs and, consequently, provides a natural criterion of perfectness as well as the possibility of useful signaling. A natural adaptation (in fact, a refinement) of the BNE concept to a signaling context gives rise to the concept of signaling equilibrium (SE). Elaborating on the parallelisms on which we built for BNE, an SE can simply be viewed as a perfect Bayesian equilibrium of an enlarged multistage game with Nature. Again, this settles a number of issues, such as the existence of an SE.
The incomplete-information framework presented in this topic is rich and versatile, as well as often quite subtle. All this receives ample confirmation in the seminar topics, where a wide variety of applications will be discussed. It is also exemplified here in two different ways. First, the framework may be used to provide a rather solid basis for the often controversial notion of mixed strategies. Specifically, it can be shown that, generically, any Nash equilibrium in mixed strategies can be "purified," i.e., approximated by a pure-strategy BNE, if the game in question is slightly perturbed by some asymmetric incomplete information. Second, we also briefly illustrate the intricacies displayed by incomplete-information (signaling) games when their analysis is tested against forward-induction arguments. In some cases, this yields a series of contradictory implications reminiscent of considerations already encountered in topic 3 on refinements.
Key concepts: Bayesian game, space of types, Nature, behavioural strategy, Bayes-Nash equilibrium, direct mechanisms, incentive-based behaviour, revelation principle, signalling game, forward induction.
Topics for the seminar presentations:
6. Signalling in the labour market (1 student). The classical model of Spence focuses on the potential role played by education as a signal of unobserved worker productivity in labour markets. Specifically, the setup involves two firms that may discern the education level of the worker they face but not her underlying productivity. Under these circumstances, the fact that education is assumed to be a more costly activity for low-productivity types introduces the possibility that, at an SE, different education levels "separate" types and thus allow firms to infer the worker's productivity. Conditions are discussed under which other kinds of equilibria also exist, such as pooling (or even hybrid) equilibria, where intertype separation does not (at least not completely) take place. Finally, a forward-induction refinement of SE (the intuitive criterion) is used to select, among the typical multiplicity of equilibria, the one that permits intertype separation at the lowest possible "cost" for the high-productivity type.
7. Insurance markets and adverse selection (1 student). The Rothschild and Stiglitz model focuses on the implications of incomplete information, and the resulting adverse selection, for the functioning of insurance markets. As in Spence's model, the theoretical context also involves two uninformed firms and one informed agent, although the order of moves is now reversed. First, the firms decide simultaneously on the set of insurance contracts offered to the individual, in ignorance of the latter's particular risk conditions (i.e., her type, high or low). Then, the individual (aware of her own type) chooses the contract that, among those offered, yields her the highest expected payoff. It is shown that, in general, there can exist only (weak perfect Bayesian) equilibria that are separating, i.e., strategy configurations in which each type chooses a different contract. However, there are reasonable parameter configurations (specifically, a relatively low ex ante probability of the high-risk type) where no equilibrium whatsoever exists in pure strategies. This can be understood as the effect of a negative (information-based) externality imposed by the high-risk type on the low-risk one, whose effect naturally becomes important when the latter is relatively frequent.
8. One-sided auctions (1 student). This is a problem of mechanism design concerning the allocation of a given indivisible object. It involves a one-sided context, since only the buyers' side of the market is genuinely involved in the strategic interaction. Specifically, we study a simple first-price auction, conducted in the presence of two potential buyers who are privately informed of their own valuations of the good. Modeled as a Bayesian game, it is solved for its unique BNE in affine strategies. In this equilibrium, individuals try to exploit their private information strategically by bidding only half of their true valuation of the good. Of course, this implies that, in equilibrium, the individual with the highest valuation ends up obtaining the good but at a price that is below his reservation value. This leads to the following mechanism-design questions. Is there any procedure for allocating the good that the seller might prefer to a first-price auction? If so, can one identify the mechanism that maximizes expected revenues? To tackle such "ambitious" questions, one needs to rely on the powerful revelation principle. In a nutshell, this principle establishes that the performance achievable through any arbitrary mechanism can be reproduced through truthful equilibria of a direct mechanism in which players' messages concern their respective characteristics.
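A quick Monte Carlo check of the affine equilibrium described above (an added sketch; it assumes two bidders with valuations drawn independently from the uniform distribution on [0, 1], the standard textbook case, since the distribution is not stated here): against an opponent who bids half of her valuation, bidding half of one's own valuation is approximately the best reply.

```python
import numpy as np

rng = np.random.default_rng(0)
opponent_value = rng.uniform(0, 1, 200_000)
opponent_bid = opponent_value / 2          # opponent follows the candidate BNE

def expected_payoff(v, b):
    """Monte Carlo expected payoff of bidding b with valuation v (ties ignored)."""
    return float(np.mean(np.where(b > opponent_bid, v - b, 0.0)))

v = 0.8
bids = np.linspace(0.0, v, 41)
best_bid = bids[int(np.argmax([expected_payoff(v, b) for b in bids]))]
print(best_bid)   # close to v / 2 = 0.4, consistent with the affine BNE
```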
9. Buyer-seller trade (1 student). A two-sided auction context: both buyers and sellers actively participate in the mechanism for allocating an indivisible object. Focusing on the simplest case, with a single buyer and a single seller, we study a double auction in which agents are asked to submit simultaneously their worst acceptable terms of trade. Under the assumption that agents' valuations are private information, the situation is modeled as a Bayesian game and solved for the (unique) BNE in affine strategies. In this equilibrium, players' underbidding incentives induce, with positive probability, ex post inefficiency, i.e., trade is not carried out in some cases where there would nevertheless be aggregate gains from it. Motivated by this negative conclusion, we are again led to a mechanism-design question. Is there any mechanism that, at equilibrium, guarantees ex post efficiency? By resorting once more to the revelation principle, this question receives an essentially negative answer. That is, no such mechanism exists if players must always be given incentives to participate in it, i.e., if the requirement of individual rationality has to be satisfied.
A double-auction experiment may be organized to illustrate topic 9 (1 student).
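The ex post inefficiency of the double auction described in topic 9 can be illustrated by simulation. The sketch below assumes uniform [0, 1] valuations and the standard linear-equilibrium strategies for that case; these specific coefficients are an assumption of the sketch, not stated in the guide.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
v = rng.uniform(0, 1, n)        # buyer's private valuation
c = rng.uniform(0, 1, n)        # seller's private cost

# Linear equilibrium strategies for the uniform case (coefficients assumed here):
buyer_bid  = (2 / 3) * v + 1 / 12
seller_ask = (2 / 3) * c + 1 / 4

trade = buyer_bid >= seller_ask         # trade occurs at equilibrium
gains_from_trade = v > c                # ex post efficient trade would occur
missed_trades = gains_from_trade & ~trade
print(missed_trades.mean())             # a sizable share of mutually beneficial trades is missed
```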
5. Repeated interaction.

In this final non-cooperative topic, a general framework is investigated for analyzing strategic situations in which the same set of players repeatedly interacts under stable circumstances, i.e., with a fixed stage game (VR, Chapter 8). The discussion is organized around two alternative, and qualitatively quite distinct, scenarios. In the first, players do not envisage any prespecified end to their interaction – that is, their understanding (and, therefore, our model) of the situation is an infinitely repeated game. In the second scenario, the interaction is known by the players to last a certain finite (predetermined) number of rounds, and therefore the appropriate model is that of a finitely repeated game. Much of the concern in this chapter revolves around the so-called folk theorems. These results – which are cast in a variety of different forms, reflect different time horizons, and rely on
different equilibrium concepts – all share a similar objective. Namely, they aim at identifying conditions under which repeated interaction is capable of sustaining, at equilibrium, a large variety of different outcomes and intertemporal payoffs. More specifically, their main focus is on whether payoffs distinct from those attainable at Nash equilibria of the stage game (e.g., those that are Pareto superior to them, but even those that are Pareto inferior) can be supported at an equilibrium of the repeated game. As it turns out, the answers one obtains are surprisingly wide in scope, at least if the game is infinitely repeated and players are sufficiently patient (e.g., if they are concerned with limit average payoffs or their discount factor is close enough to one). Under those conditions, essentially all payoff vectors that are individually rational (i.e., dominate the minimax payoff for each player) can be supported by some Nash (or even subgame-perfect) equilibrium of the repeated game.

However, matters are somewhat less sharp if the horizon of interaction is finite. In this case, to obtain similar "folk results," the stage game must display sufficient punishment leeway through alternative Nash equilibria. This rules out, for example, cases such as the finitely repeated prisoner's dilemma where, because the stage game has a unique Nash equilibrium, the unique subgame-perfect equilibrium involves repeated defection throughout. This and other examples – such as the chain-store game – that display a sharp contrast between the conclusions prevailing under finite and infinite time horizons lead us to wonder about the possible lack of robustness of these conclusions. Indeed, the analysis undertaken in the finite-horizon framework turns out to be rather fragile to small perturbations in at least two respects. First, the conclusions do not survive a slight relaxation of the notion of rationality that allows players to ignore deviations that are only marginally (ε-)profitable. Second, they are not robust to the introduction of a small degree of incomplete information that perturbs the players' originally degenerate beliefs about the types of others. In either case, one recovers the folk-type conclusions for long (but finite) repeated games.

Having allowed for the possibility that players may entertain some doubt about their opponents' types, it is natural to ask whether some players might try to exploit this uncertainty to build a profitable reputation as the game unfolds. To analyze this issue, we focus on a simple and stylized context where just one long-run player faces a long sequence of short-run players in turn. In this setup, the asymmetric position enjoyed by the long-run player (she is the only one who can enjoy the future returns of any "investment in reputation") yields a stark conclusion: along any sequential equilibrium, the long-run player can ensure for herself almost the Stackelberg payoff.
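As a small numeric illustration of sustaining cooperation under an infinite horizon (an added sketch, not from [VR]): with grim-trigger strategies in the infinitely repeated prisoner's dilemma, mutual cooperation is an equilibrium whenever the discount factor is at least (T − R)/(T − P), where R, P and T are the stage payoffs for mutual cooperation, mutual defection and unilateral defection, respectively.

```python
def grim_trigger_threshold(R, P, T):
    """Smallest discount factor for which mutual cooperation is sustained by
    grim-trigger strategies in the infinitely repeated prisoner's dilemma
    (stage payoffs satisfy T > R > P). Derived from R/(1-d) >= T + d*P/(1-d)."""
    return (T - R) / (T - P)

# Stage payoffs of the prisoner's dilemma from seminar topic 1:
# R = -1 (both cooperate), P = -10 (both defect), T = 0 (unilateral defection)
print(grim_trigger_threshold(R=-1, P=-10, T=0))   # 0.1: very little patience suffices here
```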
Key concepts: repeated game, equilibrium path, stage, discounted payoff, limit average payoff, folk theorems, (in)finite horizon, lowest payoff, minimax payoff, reputation and irrationality, subgame perfection.
Topic for the seminar presentation:
10. Efficiency wages and unemployment (1 student). Discuss a very stylized model of a repeated labor market where a single firm and two workers meet every period. The constituent stage game involves sequential decisions, with the firm moving first (by issuing wage proposals) and the two workers moving subsequently (deciding whether to work for the firm at the offered wage and, in that case, whether to exert costly effort or not). In this stage game alone, no worker has any incentive to exert effort after accepting a wage offer, thus typically leading to an inefficient outcome in its unique subgame-perfect equilibrium. Such an inefficiency can be remedied in the (infinitely) repeated game if effort is observable and workers are sufficiently patient. That is, there is a subgame-perfect equilibrium of the repeated game where the firm offers a wage premium (i.e., proposes an efficiency wage) that offsets workers' opportunistic
incentives. However, if the workers' effort is unobservable, the situation becomes substantially more problematic.
6. Cooperative games.

The last topic is devoted to an introductory study of the basic properties and solutions of cooperative games represented in coalitional form. A coalitional or strategic game is cooperative if the players can make binding agreements about the distribution of payoffs or the choice of strategies, even if these agreements are not specified or implied by the rules of the game. Binding agreements are prevalent in economics. Indeed, almost every one-stage seller-buyer transaction is binding, and most multi-stage seller-buyer transactions are supported by binding contracts. Usually, an agreement or a contract is binding if its violation entails high monetary penalties that deter the players from breaking it. Cooperative coalitional games are divided into two categories: games with transferable utility (TU) and games with nontransferable utility. In this introductory part we consider the first class of models, which is fully described by the value (characteristic) function. Let S be a non-empty subset of players (a coalition); the worth of a TU game, following von Neumann and Morgenstern, is defined as the value of the two-player zero-sum game played between the coalitions S and N\S. A cooperative game with transferable utility is therefore defined as the pair

G_C = \{ N, v(\cdot) \}.

This concept can be considered from two points of view: the surplus approach, v(·), looks at the net benefits that a coalition earns from cooperation, while the cost approach, c(·), describes the cost of serving all customers in S. The two are linked through

v(S) = \sum_{i \in S} c(\{i\}) - c(S).

Most TU games derived from practical situations satisfy superadditivity: roughly, the value of a coalition should not be less than the sum of the values of its disjoint sub-coalitions. Under the cost interpretation the property is replaced by subadditivity. Several useful classifications of games follow from their properties. First, if a TU game satisfies the additivity property for every coalition and its complementary coalition, the game is constant-sum. If the value of the all-player coalition is equal to the sum of the values of the players taken separately, the game is inessential; otherwise it is essential. Other useful properties are non-negativity and monotonicity; a simple game takes just two values, 0 and 1; a game is symmetric if the value depends only on the cardinality of the coalition. Finally, TU games are strategically equivalent with respect to any increasing linear transformation, so every essential superadditive TU game can be transformed, in a strategically equivalent way, into a (0, 1)-normalized game.

The result of any game is a payoff vector, which in principle could be any vector of real numbers. However, a payoff vector linked to a TU game has to be at least efficient and individually rational. Efficiency and coalitional rationality define the core of the game as

C(v) = \{ x \in R^N \mid x(N) = v(N),\ x(S) \ge v(S) \text{ for all } S \subseteq N,\ S \ne \emptyset \},

where x(S) = \sum_{i \in S} x_i. The core, however, is empty in every essential constant-sum game. The problem is solved either by imposing additional assumptions that make the structure of the game balanced, or by moving to the concept of the nucleolus, which uniquely defines a non-empty singleton solution for the payoff vector. The nucleolus uses the lexicographic order and has a clear socio-economic interpretation: any reform that redistributes wealth from the richest to the poor is least painful if done in accordance with the nucleolus solution.
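As an added illustration of the core definition above, the sketch below checks efficiency and coalitional rationality for a small hypothetical three-player TU game (the characteristic function values are invented for the example, not taken from [PS]).

```python
from itertools import combinations

# Hypothetical 3-player TU game: singletons earn 0, any pair earns 60, the grand coalition 90.
VALUES = {frozenset(): 0,
          frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
          frozenset({1, 2}): 60, frozenset({1, 3}): 60, frozenset({2, 3}): 60,
          frozenset({1, 2, 3}): 90}

def v(S):
    return VALUES[frozenset(S)]

def in_core(x, players):
    """True if x is efficient (x(N) = v(N)) and coalitionally rational (x(S) >= v(S))."""
    if abs(sum(x.values()) - v(players)) > 1e-9:
        return False
    return all(sum(x[i] for i in S) >= v(S) - 1e-9
               for r in range(1, len(players) + 1)
               for S in combinations(players, r))

players = (1, 2, 3)
print(in_core({1: 30, 2: 30, 3: 30}, players))   # True: the equal split is the unique core point
print(in_core({1: 45, 2: 35, 3: 10}, players))   # False: coalition {2, 3} gets 45 < v({2, 3}) = 60
```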
Lastly, the question of how to value each player is related to the concept of value functions that can be associated with TU games. The most reasonable approach to the choice of a solution concept is the axiomatic approach, which allows choosing a solution satisfying a number of a priori chosen properties stated as axioms that reflect criteria reasonable under the circumstances, such as social efficiency, fairness (a null player receives nothing), marginality, and simplification of computational aspects. The Shapley value, defined for any TU game and characterized by the efficiency, symmetry, null-player and additivity axioms, is given by

Sh_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!} \bigl( v(S \cup \{i\}) - v(S) \bigr).

The main alternative to the Shapley value, especially in simple voting games, where it is applied even more often, is the Banzhaf index. The index is faster to compute and roughly measures the power of each player by the number of coalitions in which that player casts a critical (swing) vote.
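A brute-force computation of both indices for a small, hypothetical weighted-majority voting game (the weights and quota are invented for the illustration) can clarify the formula above: for three players with weights 50, 30, 20 and quota 51, the Shapley value is (2/3, 1/6, 1/6) and the normalized Banzhaf index is (3/5, 1/5, 1/5).

```python
from itertools import combinations
from math import factorial

def shapley_value(players, v):
    """Shapley value of a TU game given by a characteristic function v(coalition)."""
    n = len(players)
    phi = {}
    for i in players:
        others = [j for j in players if j != i]
        total = 0.0
        for s in range(len(others) + 1):
            weight = factorial(s) * factorial(n - s - 1) / factorial(n)
            for S in combinations(others, s):
                total += weight * (v(set(S) | {i}) - v(set(S)))
        phi[i] = total
    return phi

def banzhaf_index(players, v):
    """Normalized Banzhaf index: each player's share of swings (critical votes)."""
    swings = {i: sum(v(set(S) | {i}) - v(set(S))
                     for s in range(len(players))
                     for S in combinations([j for j in players if j != i], s))
              for i in players}
    total = sum(swings.values())
    return {i: swings[i] / total for i in players}

# Hypothetical weighted-majority game: weights 50, 30, 20, quota 51.
weights = {1: 50, 2: 30, 3: 20}
def v(S):
    return 1 if sum(weights[i] for i in S) >= 51 else 0

players = tuple(weights)
print(shapley_value(players, v))   # ≈ {1: 0.667, 2: 0.167, 3: 0.167}
print(banzhaf_index(players, v))   # {1: 0.6, 2: 0.2, 3: 0.2}
```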
Key concepts: coalition, transferable utility, characteristic function, cost function, super(sub)additivity, payoff vector; TU games: constant-sum, essential, simple, symmetric, monotonic, normalised; strategic equivalence, core, nucleolus, lexicographic order, Shapley value, Banzhaf index.
Seminar discussion:
11. The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel and game theory (1–4 students). In recent years there have been four important awards of the prize in economic sciences that are directly linked to game theory, from non-cooperative games in 1994 to cooperative theory in 2012. Make a short non-technical presentation of the key results for which the scientists were recognized and awarded.
1994. For their pioneering analysis of equilibria in the theory of non-cooperative games.
2005. For having enhanced our understanding of conflict and cooperation through game-theory analysis.
2007. For having laid the foundations of mechanism design theory.
2012. For the theory of stable allocations and the practice of market design.