Common Knowledge
Milena Scaccia

December 15, 2008

Abstract

In a game, an item of information is considered common knowledge if all players know it, all players know that they know it, all players know that they know that they know it, and so on ad infinitum. We investigate how the information structure of a game can affect equilibrium, and we explore how common knowledge can be approximated by Monderer and Samet's notion of common belief in the case where it cannot be attained.

1 Introduction

A proposition A is said to be common knowledge among a group of players if all players know A, all players know that all players know A, all players know that all players know that all players know A, and so on ad infinitum. The definition itself raises the question of whether common knowledge is actually attainable in real-world situations: is it possible to apply this infinite recursion?

There are situations in which common knowledge seems to be readily attainable. This happens when all players share the same state space, under the assumption that all players behave rationally. We will see an example of this in section 3, where we present the Centipede game.

A public announcement to a group of players also makes an event A common knowledge among them [12]. In this case, the players perceive the announcement of A simultaneously and in each other's presence, so each player knows that every other player has heard A as well. Thus A becomes common knowledge among the group. We observe this in the celebrated Muddy Foreheads game, a variant of an example originally presented by Littlewood (1953). In this game of n players, each player is to determine whether he has mud on his own forehead, while being able to observe the state of the n−1 other players but not his own. If a sage announces to the n players that at least one of them has a muddy forehead, then this information becomes common knowledge: everyone knows that at least one player has a muddy forehead, everyone knows that everyone knows it, and so on. If, at each period, the sage asks the players whether they have mud on their foreheads, then under the assumption that the players are rational and truthful it can be proven by induction that if k players have muddy foreheads, all k will confess to having mud on their foreheads by the kth period (a short simulation sketch below illustrates this induction). In section 3, we revisit this game in greater detail and observe how the presence or absence of common knowledge can drastically affect its equilibrium. We investigate how crucial the assumption of common knowledge is, and how certain theories collapse when common knowledge is absent [7].

The above view of common knowledge may be applicable only in limited contexts. When knowledge is derived from sources that are not completely reliable, for instance when information is sent and received through an imperfect communication medium, common knowledge cannot be achieved [8].

Consider the case of processors in a distributed system that must jointly execute a complex computer protocol [3]. The processors are to coordinate themselves by communicating to each other what they know. It has been established that common knowledge is a necessary condition for the processors to synchronize perfectly. But given that the messages they exchange may fail to arrive, common knowledge seems to be an excessively strong requirement that may rarely be achieved in practice.
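Before turning to the coordinated attack problem, it may help to make the Muddy Foreheads induction mentioned above concrete. The following is a minimal simulation sketch, not taken from the paper; the function name and data representation are my own. A player who sees m muddy foreheads knows the true count is either m or m+1; if nobody confesses during the first m periods, she concludes that she herself must be muddy and confesses at period m+1.

    def muddy_foreheads(n, k):
        """Simulate the Muddy Foreheads game with n players, k >= 1 of them muddy.
        Returns the period of the first confessions and who confesses."""
        assert 1 <= k <= n, "the sage's announcement presupposes at least one muddy forehead"
        muddy = [True] * k + [False] * (n - k)   # player i is muddy iff muddy[i]
        confessed = [False] * n
        period = 0
        while not any(confessed):
            period += 1
            for i in range(n):
                sees = sum(muddy[j] for j in range(n) if j != i)
                # A rational, truthful player who sees `sees` muddy foreheads and has
                # heard no confession during the first `sees` periods concludes that
                # she must be muddy herself, and confesses at period `sees` + 1.
                if period == sees + 1:
                    confessed[i] = True
        return period, [i for i in range(n) if confessed[i]]

    # For example, muddy_foreheads(7, 3) returns (3, [0, 1, 2]): with k = 3 muddy
    # players, all three confess at period k = 3, as the induction predicts.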
Figure 1: The Coordinated Attack Problem

Consider a version of Gray's original coordinated attack problem, which serves as an analogy to the communicating processors. The game is as follows. Two allied armies are situated on opposite hilltops and wish to attack their foe, who resides in the valley (Figure 1). The state of the enemy, Prepared or Unprepared, is observable only to the first commander. When he sees that the enemy is unprepared, he wants to alert the second commander. The first commander will not attack unless he is absolutely sure the second commander will do the same. If they attack at the same time while the enemy is unprepared, they will win the battle; otherwise they will be defeated. Their only means of communication is a messenger pigeon, which with positive probability may not make it to the other side at some point during its rather dangerous trek.

If the message sent from the first commander to the second does arrive successfully, both commanders now know the message, but the first commander cannot be sure that the second knows it. Hence, the second will need to send a confirmation to the first. Assuming it arrives, we now have that commander 1 knows that commander 2 knows, but we do not have that commander 2 knows that commander 1 knows that he knows. This will require commander 1 to send another confirmation. Wouldn't this be enough for the two to know they will both attack? No, because again commander 1 cannot be absolutely certain that his last message was received, and so on. Under this protocol, in which communication is not 100% reliable, no matter how many trips the pigeon makes there is no iron-clad guarantee that the two sides will attack at the same time.

Given that the attainment of common knowledge is not possible in such contexts, we are left with the open problem of determining how common knowledge may be approximated. In section 4, we apply Monderer and Samet's proposed way of achieving a reasonable level of coordination by approximating common knowledge with the weaker concept of common belief. In the last section, we comment on the protocol used in the coordinated attack game and briefly discuss the possible limitations of the proposed solution for approximating common knowledge. In the next section, we formalize the definition of common knowledge and give a brief background on the concepts needed for the rest of this paper.

2 Background

2.1 Preliminaries

We present the basic definitions of Mutual Knowledge and Common Knowledge.

Definition 2.1 An event is Mutual Knowledge if all players know it. Note that this is weaker than common knowledge.

Let P be a finite set of players, and let (Ω, Σ, µ) be a probability space, where Ω is the state space, Σ is the σ-field of events, and µ is the probability measure on Σ. For each i ∈ P, let H_i be a measurable partition of Ω, and let H_i(ω) be the element of H_i containing ω, for ω ∈ Ω. The set H_i(ω) is the set of states that player i cannot distinguish from ω when ω occurs. Finally, let F_i be the σ-field generated by H_i; we will need it when proving a theorem on common knowledge.

Let K_i(E) denote the event that player i knows event E. This set consists of all states ω in which player i knows E:

K_i(E) = {ω : H_i(ω) ⊆ E}.

We refer to K_i as the "knowledge operator". The event that everyone knows E (Mutual Knowledge) is then defined by

K(E) = ∩_{i∈P} K_i(E) = {ω : ∪_{i∈P} H_i(ω) ⊆ E}.
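On a finite state space these operators can be computed directly from the players' partitions. The sketch below is only an illustration of the two definitions just given; the partitions H1, H2 and the event E in the example are hypothetical choices of mine, not taken from the paper.

    def K_i(partition, E):
        """K_i(E) = {w : H_i(w) ⊆ E}: the union of player i's partition cells
        that are entirely contained in the event E."""
        return set().union(*(cell for cell in partition if cell <= E))

    def K(partitions, E):
        """Mutual knowledge: K(E) = ∩_{i∈P} K_i(E)."""
        return set.intersection(*(K_i(H_i, E) for H_i in partitions))

    # Hypothetical example: Ω = {1, 2, 3, 4} and two players.
    H1 = [frozenset({1, 2}), frozenset({3}), frozenset({4})]   # player 1's partition
    H2 = [frozenset({1}), frozenset({2, 3}), frozenset({4})]   # player 2's partition
    E = {1, 2}

    print(K_i(H1, E))      # {1, 2}: player 1 knows E at states 1 and 2
    print(K_i(H2, E))      # {1}:    player 2 knows E only at state 1
    print(K([H1, H2], E))  # {1}:    E is mutual knowledge only at state 1

Applying K repeatedly to its own output yields the higher-order operators constructed next.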
We refer to K as the "everyone knows" operator, and we can construct a recursive chain as follows. The event that everyone knows that everyone knows E (second-order mutual knowledge) is defined by

K^2(E) = ∩_{i∈P} K_i(K(E)) = {ω : ∪_{i∈P} H_i(ω) ⊆ K(E)}.

The event that everyone knows that everyone knows that everyone knows E (third-order mutual knowledge) is defined by

K^3(E) = ∩_{i∈P} K_i(K^2(E)) = {ω : ∪_{i∈P} H_i(ω) ⊆ K^2(E)}.

Note that this is a decreasing sequence of events, since K^{n+1}(E) ⊆ K^n(E), and that K^0(E) = E. The event that everyone knows that everyone knows that everyone knows (ad infinitum) E can be defined as the intersection of all sets of the form K^n(E):

K^∞(E) = ∩_{n≥1} K^n(E),   where K^n(E) = ∩_{i∈P} K_i(K^{n−1}(E)).

Because K^∞(E) is a countable intersection of events, it is itself an event.

Proposition 2.2 An event E is common knowledge at state ω ⇐⇒ ω ∈ K^∞(E).

Some properties of the knowledge operator K_i:

Property 1 K_i(E) ∈ F_i.
Property 2 If E ∈ F_i, then K_i(E) = E.
Property 3 K_i(K_i(E)) = K_i(E).
Property 4 If F ⊆ E, then K_i(F) ⊆ K_i(E).
Property 5 If (A_n) is a decreasing sequence of events, then K_i(∩_n A_n) = ∩_n K_i(A_n).

Proof (Proposition 2.2). (⇒) Suppose that event E is common knowledge at ω. Then there exists an event F with ω ∈ F ⊆ E such that F ∈ F_i for all i ∈ P. By properties 2 and 4, F = K_i(F) ⊆ K_i(E) for every i, hence F ⊆ ∩_{i∈P} K_i(E). Recall that we have defined this intersection to be one iteration of the "everyone knows" operator K, so F ⊆ K(E). By induction on n ≥ 1, F ⊆ K^n(E): if F ⊆ K^n(E), then again by properties 2 and 4, F = K_i(F) ⊆ K_i(K^n(E)) for every i, so F ⊆ K^{n+1}(E). This implies that F ⊆ ∩_{n≥1} K^n(E) = K^∞(E). Therefore, ω ∈ F ⊆ K^∞(E).

(⇐) Conversely, suppose that ω ∈ K^∞(E). It will suffice to show that K^∞(E) ⊆ E and that K^∞(E) ∈ F_i for all i ∈ P. Since K^∞(E) is the intersection of the decreasing sequence K^n(E), we have for every n ≥ 1 and every i ∈ P that K^∞(E) ⊆ K^n(E) ⊆ K(E) ⊆ K_i(E) ⊆ E.
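Continuing the hypothetical two-player example sketched earlier (this reuses K_i, K, H1 and H2 defined there), common knowledge can be tested mechanically on a finite state space by iterating K until it stabilizes: since K^{n+1}(E) ⊆ K^n(E) and Ω is finite, the decreasing chain reaches a fixed point after finitely many steps, and that fixed point is K^∞(E). This is only an illustration of Proposition 2.2, not an implementation from the paper.

    def K_infinity(partitions, E):
        """Iterate the 'everyone knows' operator until it stabilizes.
        On a finite state space the decreasing chain K^1(E) ⊇ K^2(E) ⊇ ...
        reaches a fixed point, which equals K^∞(E) = ∩_{n≥1} K^n(E)."""
        current = K(partitions, E)            # K^1(E)
        while True:
            nxt = K(partitions, current)      # K^{n+1}(E) = K(K^n(E))
            if nxt == current:
                return current
            current = nxt

    def is_common_knowledge(partitions, E, w):
        """Proposition 2.2: E is common knowledge at state w iff w ∈ K^∞(E)."""
        return w in K_infinity(partitions, E)

    print(K_infinity([H1, H2], {1, 2}))                    # set(): {1, 2} is common knowledge nowhere
    print(is_common_knowledge([H1, H2], {1, 2, 3, 4}, 1))  # True: Ω is common knowledge at every state

In this toy example the event {1, 2} is mutual knowledge at state 1 yet common knowledge nowhere, which is exactly the gap between Definition 2.1 and Proposition 2.2.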