
Communication with Language Barriers

Francesco Giovannoni† Siyang Xiong‡

February 23, 2017

Abstract

We consider a general communication model with language barriers and study whether language barriers harm welfare in communication. Contrary to the negative result in Blume and Board (2013), we provide two positive results. First, the negative effect of any language barriers can be completely eliminated if we introduce a new communication protocol called N-dimensional communication. Second, even if we stick to the classical 1-dimensional communication (as in Crawford and Sobel (1982)), for any payoff primitive there exist some language barriers whose maximal equilibrium welfare dominates that of any cheap-talk equilibrium under no language barriers.

We thank.... †Department of Economics, University of Bristol, [email protected] ‡Department of Economics, University of Bristol, [email protected]

Government officer: "Why don't they just speak English?"
Dr. Eleanor Arroway: "Maybe because 70% of the planet speaks other languages? Mathematics is the only truly universal language. It's no coincidence that they are using primes."
— Contact (the movie), 1997

1 Introduction

Communication is about the transmission of information, so a natural question to ask is "what" information is actually transmitted, and this has been the focus of the literature on strategic communication, or "cheap talk." This literature, however, has typically ignored the issue of "how" information is transmitted. Yet, everyday experience suggests that how information is transmitted may hinder or help communication. For instance, it is notoriously hard to convey humor or any other emotion in modern electronic communication, but once electronic communication became sufficiently important, emoticons were developed to deal with this very issue (Curran and Casey (2006)). Similarly, there is a debate on the appropriateness of releasing medical records to patients, where one of the concerns is that patients may not be able to understand medical jargon, and so the common suggestion is to avoid such jargon when it is likely to cause misunderstandings (see Ross and Lin (2003) for a survey). In general, the importance of being able to communicate effectively is amply recognized in many fields. For example, good rhetorical skills are considered crucial for modern politicians, to the point that there is now concern that comprehensive access to the media has made the rhetorical component of political communication much more important than its substantive content (see Spence (1973) and McNair (2011)). But the ability to communicate effectively is obviously also very important in many other fields, such as marketing and sales, law and, of course, academia.1 So, in order to fully understand the process of communication and to determine to what extent it can be successful, it is very important to determine not just what the parties involved want to say, but also how they say it. In this paper, we take the "how" issue

1A large part of Thomson (2001) is dedicated to discussing issues that ultimately boil down to how to communicate research findings in Economics.

seriously and study "language barriers", as introduced in Blume and Board (2013), in one-shot communication games. These language barriers allow us to model the possibility that, in situations of strategic communication, individuals may not be able to send or understand certain messages.

To get a more precise intuition for our results, consider a standard sender-receiver model, where a sender (S) privately observes the payoff-relevant state $t \in T$, and then sends a message $m \in M$ to the receiver (R), where $M$ denotes the set of all possible messages. The receiver cannot observe $t$, but she has to take a payoff-relevant action $a \in A$ upon receiving $m$. However, each player $i$ may understand only a subset of the messages, denoted by $\lambda_i \subseteq M$. Following Crawford and Sobel (1982), almost all previous papers implicitly assume $\lambda_i = M$ for every $i$: we say "language barriers" do not exist if this holds, and exist otherwise. When language barriers exist, a subset $\lambda_i \subseteq M$ denotes a language type of player $i$, while $\Lambda_S$ and $\Lambda_R$ represent the sets of all language types of the sender and the receiver, respectively. Then, a common prior on $T \times \Lambda_S \times \Lambda_R$ defines a standard Bayesian game, which is the "language barriers" model proposed in Blume and Board (2013). This provides a parsimonious way to study a fundamental question: do language barriers improve or harm the welfare of communication, or equivalently, is equilibrium welfare under language barriers bigger or smaller than that under no language barriers? A first answer is provided in Blume and Board (2013), who show that in the presence of language barriers with language types being private information, there will necessarily be indeterminacies of meaning in common-interest games.2 A direct consequence of their result is that though an efficient equilibrium always exists in common-interest games without language barriers, efficiency is impossible in the presence of private information over language types. Facing this negative result, we pursue this fundamental question in two directions, a normative one and a positive one. We first ask: is there a natural communication protocol that can eliminate the negative welfare effect of any language barriers?
Our second question is: for any payoff primitive, can we find some language barriers that (weakly) improve equilibrium welfare? We provide positive answers to both questions.

2Indeterminacies of meaning arise when, in the presence of language barriers, players' equilibrium strategies are such that they would want to deviate if they knew their opponent's language type.

Our first main result is inspired by a phenomenon we observe in real-life communication, which is that messages are formed by combining basic units to make complex structures that convey meaning; such structures can always achieve ever higher levels of complexity, depending on how complex the meaning is. So, the English language can form a relatively simple sentence structure to convey a simple message such as "close the door", but can build much more complex structures if communication requires it.3 Thus, it seems restrictive in modeling communication to assume a fixed number of messages, each with a predetermined level of complexity, rather than assuming that such messages can always be used as building blocks capable of forming more complex structures. The communication protocol in Crawford and Sobel (1982) or Blume and Board (2013), which is 1-dimensional, implicitly forbids forming such more complex structures, so we relax this assumption in the simplest way possible by assuming that the set of available messages extends to a multi-dimensional set $M^N$ (for some finite integer $N$). This simple change in assumptions allows us to describe to some extent this self-generating property of real-life communication (aside from languages, think of binary codes in computer science and Morse code) and yet, to the best of our knowledge, ours is the first paper to formalize this in the literature on strategic communication.

To be more precise, in 1-dimensional communication, as in Crawford and Sobel (1982) or Blume and Board (2013), for a given $M$, the sender is allowed to send a message $m \in M$. Under our N-dimensional communication protocol, the sender is allowed to send a message $m \in M^N$.4 In addition, N-dimensional communication must respect language barriers, if they exist. In particular, type $\lambda_S$ ($\subseteq M$) of the sender can send only messages in $(\lambda_S)^N$, and type $\lambda_R$ ($\subseteq M$) of the receiver can only understand messages in $(\lambda_R)^N$. Our first main result is that any (finite) equilibrium which would obtain in a game with 1-dimensional communication and no language barriers can be mimicked by an equilibrium of the same game if we added any language barriers but allowed for N-dimensional communication (for sufficiently large $N$). In this sense, the negative effect of language barriers can be completely eliminated, if we allow for multi-dimensional communication.

3Chrystal (2006) discusses how one of the fundamental characteristics of any language is the hierarchical structure of its syntax.

4We consider only one-shot messages, and as a result, N-dimensional communication is not a conversation (i.e., N-round communication). Furthermore, it is different from multi-dimensional cheap talk as in Battaglini (2002) and Levy and Razin (2007), where the multiple dimensions refer to the dimensions of the payoff types.

Technically, there are three obstacles for effective communication in the presence of language barriers: (1) the sender may not know the receiver’s language type, and hence may not know what messages to send; (2) the receiver may not know the sender’s lan- guage type, and hence may not know how to interpret a received message; (3) there may not be enough commonly known messages to transmit all the information. In Section 4.2.1, we show all of the three obstacles can be overcome with multi-dimensional com- munication but there is one important point that is worth emphasizing. It is obvious that multi-dimensional communication enlarges the set of possible messages available to the sender, which may lead one to wonder whether this is all that matters. In fact, this point resolves only the third technical obstacle mentioned above, but does not eliminate asym- metric information regarding the sender’s and the receiver’s language types. In section 4.2.1, we show through a couple of examples that it is the (multiple) dimensionality of the communication that overcomes asymmetric information about language types.

In the second part of the paper, we tackle the second question in the context of 1-dimensional communication. In particular, we follow Goltsman, Hörner, Pavlov, and Squintani (2009) and Blume and Board (2010) in comparing welfare across several protocols for cheap-talk communication: arbitration, mediation, language barriers and noisy talk. Our main result is a linear ranking of the maximal welfare achieved in these different protocols:
$$\Phi^{LB} \geq \Phi^{M} \geq \Phi^{ILB} \geq \Phi^{N},$$
where $\Phi^{LB}$, $\Phi^{M}$, $\Phi^{ILB}$, $\Phi^{N}$ are the maximal equilibrium welfare achieved in a generic sender-receiver game under language barriers, mediation, language barriers with the restriction that language types are distributed independently of payoff states (we refer to these as independent language barriers from now on), and noisy talk, respectively. One immediate implication is that, for any payoff primitive, there exist some language barriers whose maximal equilibrium welfare (weakly) dominates any equilibrium in the corresponding game without language barriers.

While both Goltsman, Hörner, Pavlov, and Squintani (2009) and Blume and Board

(2010) ask a very similar question, methodologically our approach is quite different. Goltsman, Hörner, Pavlov, and Squintani (2009) and Blume and Board (2010) consider the case of quadratic preferences and a uniform payoff distribution; they first argue that mediation provides an upper bound to the welfare achievable under language barriers with independence and noisy talk, and then construct a specific equilibrium under such language barriers and under noisy talk which achieves the welfare upper bound, i.e., they establish an equivalence result on (maximal) welfare for the three protocols. Instead, we go to the roots of the incentives underneath each protocol, and show that equilibria with language barriers, mediation, independent language barriers and noisy talk correspond to a series of increasingly restrictive conditions in that order, which generates the welfare order described above. Because of this approach, our results go beyond the environment with quadratic preferences and uniform payoff distribution and indeed hold under general preference and distributional assumptions. We provide two further results. Firstly, we consider two possible notions of arbitration, which are simply forms of mediation where one of the two incentive constraints - the sender's incentive to reveal the truth to the mediator and the receiver's incentive to follow the mediator's suggested actions - is relaxed. The first notion of arbitration corresponds to the one defined in Goltsman, Hörner, Pavlov, and Squintani (2009), where it is assumed that the receiver must play the strategies recommended by the arbitrator whereas the sender must still be incentivised to reveal the payoff state. Compatibly with Myerson (1991)'s terminology, we call this arbitration with adverse selection.
The second notion of arbitration is absent in the previous literature but is a modification of mediation where it is assumed that the sender must truthfully report the payoff state to the arbitrator whereas the receiver must still be incentivised to follow the arbitrator's recommended action. We call this arbitration with moral hazard. Given these definitions, we show that the maximal equilibrium welfare achieved with language barriers dominates that under arbitration with moral hazard, whereas no such ranking can be established with regard to arbitration with adverse selection. This immediately establishes that both arbitration and language barriers welfare-dominate mediation, but a general ranking between arbitration and language barriers is not possible.

Our final result shows, through an example, that the welfare equivalence between mediation, independent language barriers and noisy talk established by Goltsman, Hörner,

Pavlov, and Squintani (2009) and Blume and Board (2010) is not robust if we relax the uniform-distribution assumption on payoff states.

The remainder of the paper proceeds as follows: we discuss the literature in Section 2; we describe the model in Section 3; Section 4 shows how N-dimensional communication can always replicate equilibria obtained without language barriers, no matter what these are; Section 5 shows how some language barriers can improve welfare even under 1-dimensional communication and compares such language barriers to other noisy communication protocols; Section 6 concludes and the Appendix contains all the proofs not in the main part of the paper.

2 Literature Review

The literature on communication in games of asymmetric information is very large. Crawford and Sobel (1982) introduced the canonical "cheap talk" setting, with an informed "sender" who sends (costless) messages to an uninformed "receiver" who, in turn, takes an action which affects them both. Since then, a vast literature has developed, which extended the analysis in many different directions. For example, beginning with Milgrom (1981), there is a significant amount of work that considers communication when messages are (costless) evidence so that lying is not allowed, including Kartik (2009), where lying is arbitrarily costly. Other important areas of research are those where the analysis has been extended to multiple senders or multi-dimensional payoff state spaces (e.g. Battaglini (2002), Chakraborty and Harbaugh (2007) and Levy and Razin (2007)) or to issues of commitment amongst the parties: Dessein (2002) and Krishna and Morgan (2008) focus on various types of commitment on the part of the receiver while Kamenica and Gentzkow (2011) assume the sender commits ex-ante to an informational mechanism. Finally, there are important extensions which consider the dynamics of interactions between senders and receivers when their preferences differ (e.g. Sobel (1985) and Morris (2001)) or when there is uncertainty about the quality of the sender's information (e.g. Scharfstein and Stein (1990) and Ottaviani and Sorensen (2006)).

In all of this literature, one assumption is that language is never an issue. A significant exception is Farrell (1993), where the issue of how exactly information is transmitted is taken seriously. There is a "rich language assumption", which excludes language barriers, and the crucial restriction is that messages come with some intrinsic meaning. Thus, for Farrell (1993), the restriction is not that players cannot use or understand some messages but rather that, whenever credible, messages should be taken literally.

Still, a few authors have argued that language is necessarily too coarse for communication in certain environments. For example, Arrow (1975) discusses the reasons for organizational codes, and both Crémer, Garicano, and Prat (2007) and Sobel (2015) model such codes by using a setting where messages are too few to avoid ambiguity. While our results suggest that N-dimensional communication can overcome all such issues, in those environments there may be reasons, such as the complexity or the time needed to develop and understand such messages, that pose substantial limits on how much can be done with them. In other contexts, on the other hand, it is likely that successful communication is so important that such complex messaging strategies are worth pursuing. For example, in the Arecibo Message Project a message was broadcast from Earth to potential intelligent alien civilizations. This message contains information about our DNA and our solar system and is encoded using a binary system not dissimilar to our equilibrium construction. The science fiction novel Contact (by Sagan (1985)) also addresses the issue of one-shot communication in the presence of language barriers, and it too provides a solution where a common language is established before the content of the message is delivered.5

As already discussed, the closest work to ours is Blume and Board (2013), who introduce the notion of language types and use it to describe language barriers. The focus in Blume and Board (2013) is on describing how, even in common interest games, several inefficiencies do arise as a result of language barriers. We adopt the same framework but consider any communication game (not just common-interest) and introduce the notion of N-dimensional communication. We show that such a communication protocol can replicate any equilibrium of the corresponding game without language barriers. Blume (2015) looks again at the issues raised by language barriers in a sender-receiver context where

5Sagan also participated in the design of the various messages attached to the two Pioneer and two Voyager probes. For a scientific discussion of communication with extra-terrestrials, see D.A. Vakoch (2011).

the sender still has private information about her language type but there is no common prior on it. We do not focus on higher-order uncertainty, but due to the ex-post nature of our results, these would be robust in such settings.

In our paper, we also look at whether particular language barriers can improve upon communication in non-common-interest settings. A few papers have particular relevance to our work here. Krishna and Morgan (2004) show that more (Pareto) efficient equilibria may be obtained by allowing the informed sender and uninformed receiver to exchange messages at a first stage and then allowing the sender to send a second message. The N-dimensional messages in our setting should not be interpreted as a conversation, as all communication takes place in a single stage. Blume, Board, and Kawamura (2007) show that the exogenously given possibility of an error in communication actually improves communication in equilibrium, while in our setting it is exogenous language barriers that provide such results. In fact, Goltsman, Hörner, Pavlov, and Squintani (2009) provide an upper bound on ex-ante efficiency if mediation is introduced in the model and show that both Krishna and Morgan (2004) and Blume, Board, and Kawamura (2007) at best can reach, but not surpass, that bound.6 Blume and Board (2010) study language barriers under the assumption of independence between the priors on language types and payoff types and argue that the efficiency bound can be reached by language barriers. We extend those results to a class of much more general communication games and provide a linear ranking amongst all these communication protocols. In particular, we show that under the independence assumption but in this general setting, optimal language barriers will always do no worse than optimal noisy talk, and we provide an example where they do strictly better. This implies that, in general and in contrast with the conclusions drawn in Goltsman, Hörner, Pavlov, and Squintani (2009) and Blume and Board (2010), noisy communication cannot always achieve the efficiency bound obtained through mediated communication.
Finally, we go beyond the independence assumption between payoff and language types and show that the optimal such language barriers can do better than mediation, whereas we show with an example that a comparison with arbitration cannot

6Ganguly and Ray (2011) argue that any noisy communication protocol requires a larger set of messages than those used in the standard Crawford and Sobel (1982) setting. They show that simple mediation, where no more messages can be used than in the corresponding Crawford and Sobel (1982) setting, does not improve on such setting.

be made without specifying the welfare function.

3 Model

Let $I$ denote a finite set of agents, and for every agent $i \in I$, we use $A_i$ and $T_i$ to denote the sets of actions and payoff states of agent $i$, respectively. Throughout the paper, we utilize the notational convention that a subscript $i$ refers to agent $i$, whereas no subscript refers to all agents. Thus, $A \equiv \prod_{i \in I} A_i$ and $T \equiv \prod_{i \in I} T_i$. Agent $i$ has the utility function $u_i : T \times A \rightarrow \mathbb{R}$.

Let $M$ denote the set of all possible messages. For every agent $i \in I$, we use a non-empty $\Lambda_i \subseteq 2^M \setminus \{\varnothing\}$ to denote the set of language types of agent $i$. Each language type $\lambda_i \in \Lambda_i$ is defined as the set of messages that agent $i$ understands. There is a common prior $\pi$ on $T \times \Lambda$, and let $\pi_T$ and $\pi_\Lambda$ denote the marginal distributions on $T$ and $\Lambda$, respectively. We will sometimes impose the following assumption, and we will state it explicitly if we do.

Assumption 1 T and Λ are independently distributed.

We use $|X|$ to denote the cardinality of a set $X$. Throughout the paper, we assume $|M| > 1$ and $|\Lambda| < \infty$. As usual, $-i$ represents $I \setminus \{i\}$, and $x_{-i}$ represents $(x_j)_{j \in I \setminus \{i\}}$.

For any positive integer $N$, we define an $N$-dimensional communication game. Before the game starts, nature chooses a state-type profile $(t, \lambda)$ according to $\pi$. Then, upon privately observing $(t_i, \lambda_i)$, every agent $i \in I$ sends an $N$-dimensional message $m_i \equiv \left(m_i^1, ..., m_i^N\right) \in (\lambda_i)^N$. Finally, upon observing $\left(t_i, \lambda_i, (m_j)_{j \in I}\right)$, every agent $i \in I$ takes an action $a_i \in A_i$.7

7This setting allows for everyone to be both a "sender" and a "receiver" but can easily be accommodated to allow for the cases where only some players are senders and/or only some players are receivers. For the former, it suffices to impose that some players (the non-senders) have a singleton payoff state space and for the latter, it suffices to impose that some players (the non-receivers) have a singleton action space.
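To fix ideas, the timing just described can be sketched in a few lines of code. This is a hedged illustration with our own names (`play_round`, the toy `prior`, `sigma`, `rho`), not part of the formal model, specialized to one sender and one receiver:

```python
import random

# Illustrative sketch (our own naming) of one round of the communication game:
# nature draws a state-type profile, the sender transmits a message respecting
# her language type, and the receiver chooses an action.

def play_round(prior, sigma, rho, rng):
    t, lam_S, lam_R = rng.choice(prior)   # nature draws (t, lambda) from the prior support
    m = sigma(t, lam_S)                   # sender's message, a tuple in (lam_S)^N
    assert all(x in lam_S for x in m), "sigma must respect the sender's language type"
    a = rho(lam_R, m)                     # receiver's action upon observing m
    return t, a

# A trivial two-state example with a common language and truthful strategies.
prior = [("low", {0, 1}, {0, 1}), ("high", {0, 1}, {0, 1})]
sigma = lambda t, lam_S: (0,) if t == "low" else (1,)
rho = lambda lam_R, m: "low" if m == (0,) else "high"

t, a = play_round(prior, sigma, rho, random.Random(0))
assert a == t   # with a common language and truthful play, the action matches the state
```

With a common language this is just the standard cheap-talk timing; the constructions below concern what happens when `lam_S` and `lam_R` differ across types.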

Thus, a game is defined by a tuple $\left\langle I, M, T, \Lambda, \pi, A, \left(u_i : T \times A \rightarrow \mathbb{R}\right)_{i \in I}, N \right\rangle$, and players' strategies in the game are
$$\sigma_i : T_i \times \Lambda_i \rightarrow M^N, \quad \rho_i : T_i \times \Lambda_i \times \left(M^N\right)^I \rightarrow A_i, \quad \forall i \in I,$$
such that $\sigma_i$ and $\rho_i$ are measurable with respect to $\lambda_i$.8 More precisely, the measurability of $\sigma_i$ means $\sigma_i(t_i, \lambda_i) \in (\lambda_i)^N$ for every $(i, t, \lambda) \in I \times T \times \Lambda$. The interpretation is that a language type $\lambda_i$ understands only the messages with which he is endowed, and hence this type can send a string of messages only in $\lambda_i$, i.e., the restriction $\sigma_i(t_i, \lambda_i) \in (\lambda_i)^N$ defines "language barriers" for agents when sending messages.

We define the measurability of $\rho_i$ as follows. Define
$$x \sim_{\lambda_i} y \text{ if and only if } x = y \in \lambda_i \text{ or } \{x, y\} \cap \lambda_i = \varnothing,$$
i.e., type $\lambda_i$ can distinguish two messages with which he is endowed, but treats all the other messages as a single and distinct "nonsense" message. Then, for any positive integer $K$, define
$$\left(x^1, ..., x^K\right) \sim_{\lambda_i} \left(y^1, ..., y^K\right) \text{ if and only if } x^k \sim_{\lambda_i} y^k, \ \forall k \in \{1, ..., K\}.$$
The measurability of $\rho_i$ means
$$\left(m_j\right)_{j \in I} \sim_{\lambda_i} \left(m_j'\right)_{j \in I} \Longrightarrow \rho_i\left(t_i, \lambda_i, \left(m_j\right)_{j \in I}\right) = \rho_i\left(t_i, \lambda_i, \left(m_j'\right)_{j \in I}\right),$$
$$\forall \left(i, t, \lambda, m, m'\right) \in I \times T \times \Lambda \times \left(M^N\right)^I \times \left(M^N\right)^I.$$
We use "$m \nsim_{\lambda_i} m'$" to denote that "$m \sim_{\lambda_i} m'$" is false. This measurability requirement captures "language barriers" for agents when receiving messages.
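The equivalence relation $\sim_{\lambda_i}$ and the measurability requirement on $\rho_i$ lend themselves to a direct implementation. The following is a minimal sketch with our own helper names (`equivalent`, `tuples_equivalent`, `is_measurable`); it is an illustration, not part of the formal model:

```python
# Sketch (our own naming) of the message-equivalence relation: a language type
# lam distinguishes the messages it is endowed with and pools every other
# message into a single "nonsense" class.

def equivalent(x, y, lam):
    """x ~_lam y  iff  x = y and both lie in lam, or neither lies in lam."""
    if x in lam and y in lam:
        return x == y
    return x not in lam and y not in lam

def tuples_equivalent(xs, ys, lam):
    """Componentwise extension of ~_lam to K-dimensional messages."""
    return len(xs) == len(ys) and all(equivalent(x, y, lam) for x, y in zip(xs, ys))

lam = {1, 2, 3}
assert equivalent(2, 2, lam)           # an endowed message equals itself
assert not equivalent(1, 2, lam)       # distinct endowed messages are distinguished
assert equivalent(7, 9, lam)           # messages outside lam pool as nonsense
assert tuples_equivalent((1, 7), (1, 9), lam)
assert not tuples_equivalent((1, 7), (2, 9), lam)

# Measurability of a receiver strategy rho: equivalent message tuples must map
# to the same action.
def is_measurable(rho, messages, lam):
    return all(rho(xs) == rho(ys)
               for xs in messages for ys in messages
               if tuples_equivalent(xs, ys, lam))

rho = lambda m: "a" if m[0] == 1 else "b"       # ignores the second coordinate
assert is_measurable(rho, [(1, 7), (1, 9), (2, 7)], lam)
```

Pooling all non-endowed messages into one class is exactly what makes $\sim_{\lambda_i}$ an equivalence relation, and `is_measurable` mirrors the displayed implication above.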

Given a strategy profile $(\sigma, \rho) \equiv (\sigma_i, \rho_i)_{i \in I}$, the final utility of agent $i$ given a state-type profile $(t, \lambda)$, denoted by $U_i(\sigma, \rho \mid t, \lambda)$, is defined as follows.
$$U_i(\sigma, \rho \mid t, \lambda) = u_i\left(t, \left[\rho_j\left(t_j, \lambda_j, \left[\sigma_l(t_l, \lambda_l)\right]_{l \in I}\right)\right]_{j \in I}\right). \quad (1)$$

8For notational ease, we focus on pure strategies. The analysis can be extended to mixed strategies in a straightforward way, but requires much messier notation.

Define
$$U_i(\sigma, \rho) = \int_{T \times \Lambda} U_i(\sigma, \rho \mid t, \lambda)\, \pi(dt, d\lambda), \ \forall i \in I,$$
i.e., $U_i(\sigma, \rho)$ is player $i$'s expected payoff given the strategy profile $(\sigma, \rho)$.

Instead of considering (language-)interim equilibria as in Blume and Board (2013), we adopt the stronger notion of (language-)ex-post equilibrium.9 Given any $(t_i, \lambda) \in T_i \times \Lambda$, let $\pi(\cdot \mid t_i, \lambda)$ denote the distribution of $t_{-i}$ conditional on $(t_i, \lambda)$.

Definition 1 $(\sigma, \rho)$ is an equilibrium if
$$\int_{T_{-i}} \left[ U_i(\sigma, \rho \mid t_i, t_{-i}, \lambda) - U_i\left(\left(\sigma_i', \sigma_{-i}\right), \left(\rho_i', \rho_{-i}\right) \mid t_i, t_{-i}, \lambda\right) \right] \pi(dt_{-i} \mid t_i, \lambda) \geq 0, \quad (2)$$
$$\forall i \in I, \ \forall (t_i, \lambda) \in T_i \times \Lambda, \ \forall \left(\sigma_i', \rho_i'\right).$$
Note that equation (2) describes the (language-)ex-post incentive compatibility of

agent $i$ in the equilibrium: knowing $(t_i, \lambda)$, agent $i$ chooses the best strategy. In Blume and

Board (2013), (language-)interim equilibria are instead defined as: knowing $(t_i, \lambda_i)$, agent $i$ chooses the best strategy. For them, "indeterminacies of meaning" arise when there is a (language-)interim equilibrium that is not a (language-)ex-post equilibrium. Therefore, the absence of "indeterminacies of meaning" is embedded in our equilibrium notion.

Throughout the paper, we impose the following necessary assumptions for informative communication:
$$|\lambda_i| \geq 2, \ \forall \lambda_i \in \Lambda_i, \ \forall i \in I, \quad (3)$$
$$\pi\left(\left\{(t, \lambda) \in T \times \Lambda : \lambda_i \cap \lambda_j \neq \varnothing, \ \forall i, j \in I \text{ with } i \neq j\right\}\right) = 1. \quad (4)$$
(3) says that every player is able to transmit non-trivial information (i.e., $|\lambda_i| \geq 2$). (4) says that any two language types of two distinct agents must (almost surely) have non-empty intersection, because communication is not informative otherwise.10

9The "ex-post" is defined with respect to the realization of language types (rather than payoff states). That is, our equilibrium provides best replies for all agents, even if all the agents' language types were truthfully revealed. Clearly, this is much stronger than the corresponding "interim" and "ex-ante" equilibria.

10Blume and Board (2013) assume the existence of a common message for all language types of all players, which, clearly, is stronger than (4).

4 Main Results: N-dimensional Communication

In this section, we show that any equilibrium in a communication game with no language barriers can be replicated by an equilibrium of the corresponding game when we introduce language barriers but allow N-dimensional communication. In Section 4.1, we first define what this means formally; in Section 4.2, we prove our main result for this section and illustrate some of its implications.

4.1 Similar games and outcome-equivalent equilibria

We will compare equilibria between communication games which differ only in whether language barriers exist or not. To make the comparison between two such games meaningful, they must be "similar", i.e., they must share the same primitives in terms of agents, actions, payoffs, etc. Rigorously, we apply the following definition.

Definition 2 Two games $\hat{G}$ and $\tilde{G}$,
$$\hat{G} = \left\langle \hat{I}, \hat{M}, \hat{T}, \hat{\Lambda}, \hat{\pi}, \hat{A}, \left(\hat{u}_i : \hat{T} \times \hat{A} \rightarrow \mathbb{R}\right)_{i \in \hat{I}}, \hat{N} \right\rangle,$$
$$\tilde{G} = \left\langle \tilde{I}, \tilde{M}, \tilde{T}, \tilde{\Lambda}, \tilde{\pi}, \tilde{A}, \left(\tilde{u}_i : \tilde{T} \times \tilde{A} \rightarrow \mathbb{R}\right)_{i \in \tilde{I}}, \tilde{N} \right\rangle,$$
are similar if
$$\left(\hat{I}, \hat{M}, \hat{T}, \hat{A}, (\hat{u}_i)_{i \in \hat{I}}\right) = \left(\tilde{I}, \tilde{M}, \tilde{T}, \tilde{A}, (\tilde{u}_i)_{i \in \tilde{I}}\right) \text{ and } \hat{\pi}_{\hat{T}} = \tilde{\pi}_{\tilde{T}}.$$
That is, two similar games may differ only in language types and the dimension of the messages they send. We now define outcome-equivalent equilibria in two similar games.

Definition 3 Given two similar games, $\hat{G}$ and $\tilde{G}$, an equilibrium $(\hat{\sigma}, \hat{\rho})$ in $\hat{G}$ is outcome-equivalent to an equilibrium $(\tilde{\sigma}, \tilde{\rho})$ in $\tilde{G}$ if
$$\hat{\rho}_i\left(t_i, \hat{\lambda}_i, \left[\hat{\sigma}_j\left(t_j, \hat{\lambda}_j\right)\right]_{j \in I}\right) = \tilde{\rho}_i\left(t_i, \tilde{\lambda}_i, \left[\tilde{\sigma}_j\left(t_j, \tilde{\lambda}_j\right)\right]_{j \in I}\right), \ \forall \left(i, t, \hat{\lambda}, \tilde{\lambda}\right) \in I \times T \times \hat{\Lambda} \times \tilde{\Lambda}.$$
Outcome-equivalent equilibria in similar games induce the same action profile for any given profile of payoff types, regardless of language types. As a result,
$$U_i(\hat{\sigma}, \hat{\rho}) = U_i(\tilde{\sigma}, \tilde{\rho}), \ \forall i \in I,$$
i.e., they induce the same expected utility for every player.

4.2 Outcome-equivalence for similar games

For any game $G = \left\langle I, M, T, \Lambda, \pi, A, (u_i)_{i \in I}, N \right\rangle$, define
$$\Lambda^* = \prod_{i \in I} \left\{\lambda_i^*\right\} \text{ with } \lambda_i^* \equiv M;$$
$$\pi^*\left(E \times \Lambda^*\right) = \pi_T(E), \ \forall E \subseteq T;$$
i.e., $G^* = \left\langle I, M, T, \Lambda^*, \pi^*, A, (u_i)_{i \in I}, N^* = 1 \right\rangle$ is the standard communication game with 1-dimensional messages and no language barriers, which is similar to $G$. Our first main result says that for any equilibrium of $G^*$, there exists an outcome-equivalent equilibrium of $G$, if $N$ is sufficiently large. In this sense, language barriers bring no harm to welfare.

Let $(\sigma^*, \rho^*)$ denote an equilibrium of $G^*$. Let $\mathcal{E}_i^{(\sigma^*, \rho^*)}$ denote the set of messages agent $i$ sends in the equilibrium, i.e.,
$$\mathcal{E}_i^{(\sigma^*, \rho^*)} \equiv \left\{\sigma_i^*\left(t_i, \lambda_i^*\right) : t_i \in T_i\right\}.$$
$(\sigma^*, \rho^*)$ is called a finite-message equilibrium if $\left|\prod_{i \in I} \mathcal{E}_i^{(\sigma^*, \rho^*)}\right| < \infty$ and an infinite-message equilibrium otherwise. For notational ease, we focus on finite-message equilibria, but the analysis can be easily extended to infinite-message equilibria.

Theorem 1 Suppose Assumption 1 holds. Then, for any finite-message equilibrium $(\sigma^*, \rho^*)$ in any game without language barriers $G^* = \left\langle I, M, T, \Lambda^*, \pi^*, A, (u_i)_{i \in I}, N^* = 1 \right\rangle$, a positive integer $\overline{N}$ exists, such that in any similar game $G = \left\langle I, M, T, \Lambda, \pi, A, (u_i)_{i \in I}, N \right\rangle$ with $N \geq \overline{N}$, there exists an equilibrium $(\sigma, \rho)$ of $G$ that is outcome-equivalent to $(\sigma^*, \rho^*)$.

Recall that $(\sigma, \rho)$ and $(\sigma^*, \rho^*)$ being outcome-equivalent means that the two games induce the same equilibrium action profile for every $t \in T$, regardless of $\lambda \in \Lambda$. In this sense, we say $(\sigma, \rho)$ replicates $(\sigma^*, \rho^*)$. In Sections 4.2.1 and 4.2.2 we prove this result, focusing first on the role of N-dimensionality in overcoming language barriers, absent the issue of incentive compatibility, and then showing how incentive compatibility is assured.

4.2.1 The role of N-dimensionality in Theorem 1

In this section, we leave incentive compatibility aside, and show that multiple dimensions of messages suffice for effective communication. We return to incentive compatibility in the next section. To prove Theorem 1, we need to overcome three technical obstacles: the first is that senders may not know receivers' language types; the second is that receivers may not know senders' language types; finally, we need to show how players transmit information using their endowed messages, given that they know each other's language types. We show that all of these can be achieved by utilizing messages with multiple dimensions.

We first tackle the problem that senders do not know the receivers' language types. With multiple dimensions, this type of asymmetric information is easily eliminated, because we can break an $N$-dimensional message into $\left|\bigcup_{i \in I} \Lambda_i\right|$ strings, with each string intended for a receiver's language type. Specifically, suppose $N = N_0 \times \left|\bigcup_{i \in I} \Lambda_i\right|$ for some integer $N_0$, and a sender's $N$-dimensional message is $m = \left(m^{\lambda_i}\right)_{i \in I, \lambda_i \in \Lambda_i} \in M^{N_0\left|\bigcup_{i \in I} \Lambda_i\right|}$, where $m^{\lambda_i} \in M^{N_0}$ is the intended message from the sender to $\lambda_i$. Upon receiving $m$, the language type $\lambda_i$ just goes to his designated string to retrieve his intended message $m^{\lambda_i}$, so that the asymmetric information regarding senders not knowing receivers' language types is eliminated. This is analogous to what happens in many tourist attractions, where information is written in different languages, and tourists from different countries just jump to the bit written in a language they understand to retrieve the information. Thus, messages with multiple dimensions do more than just increase the size of the message space. To see this, consider the following example, with one sender and one receiver who

15 share a common interest. Suppose that

T = A = {α, β, γ},

u_S(t, a) = u_R(t, a) = 1 if a = t, and 0 if a ≠ t,

M = ℤ; Λ_S = {λ_S}; Λ_R = {λ_R^−, λ_R^+},

λ_S = {−100, −99, …, 0, …, 99, 100}; λ_R^− = {−1, −2, …}; λ_R^+ = {1, 2, …}.

Assume that each t ∈ T has positive probability but that the true realization is the sender's private information, whereas both λ_R^− and λ_R^+ also have positive probability, but their realization is the receiver's private information. Since sender and receiver have identical preferences, the only issue is how the sender can communicate her information to the receiver. Clearly, without language barriers, the efficient outcome (i.e., perfect communication) is an equilibrium. However, it is not an equilibrium for 1-dimensional communication under these language barriers: to achieve efficiency, λ_R^+ must be able to distinguish between m(α), m(β) and m(γ), the three equilibrium messages from the sender in the three states. Hence, at least two of the three messages must be in λ_R^+, and as a result, λ_R^− cannot distinguish these two messages, since λ_R^− ∩ λ_R^+ = ∅, i.e., efficiency cannot be achieved in an equilibrium. It is also easy to see that even if we increased the number of messages available to the sender up to the point where λ_S = M, the same difficulty would remain. On the other hand, we could achieve full communication even if we restricted λ_S to the set {−2, −1, 0, 1, 2}, but allowed 2-dimensional messages. This would allow the sender to produce a message (m^+(t), m^−(t)), where m^+ and m^− are the strings that describe the payoff-relevant information for each possible type of receiver.^11 Note that in this example the number of messages is not the issue: when we make λ_S = {−2, −1, 0, 1, 2}, even with 2-dimensional messages we have actually reduced the number of messages available to the sender compared to the case λ_S = {−100, −99, …, 0, …, 99, 100}. Yet, efficient communication can now be guaranteed.

^11 For instance, there is an equilibrium where the strings m^+ and m^− are given by

m^+(α) = 0, m^+(β) = 1, m^+(γ) = 2,

m^−(α) = 0, m^−(β) = −1, m^−(γ) = −2,

i.e., λ_R^+ can distinguish m^+(α), m^+(β) and m^+(γ), and type λ_R^− can distinguish m^−(α), m^−(β) and m^−(γ).
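The logic of this example can be checked mechanically. The sketch below is our own encoding, not the paper's formalism: it models a receiver language type as the set of messages he understands, treats every message outside that set as a single indistinguishable percept, and verifies that the strings of footnote 11 let each receiver type recover the payoff state from his designated string.

```python
# Hypothetical sketch (our own encoding of the example above).
STATES = ["alpha", "beta", "gamma"]

# Receiver language types as sets of understood messages (truncated here).
lam_R_plus = set(range(1, 1000))       # {1, 2, ...}
lam_R_minus = set(range(-999, 0))      # {-1, -2, ...}

# 2-dimensional sender strategy: (m_plus(t), m_minus(t)) from footnote 11.
msg = {"alpha": (0, 0), "beta": (1, -1), "gamma": (2, -2)}

def percept(lam, m):
    """A message is perceived as itself if understood; otherwise all
    ununderstood messages look alike."""
    return m if m in lam else "?"

def recovers_state(lam, component):
    """True if this receiver type's percepts of his string separate states."""
    percepts = [percept(lam, msg[t][component]) for t in STATES]
    return len(set(percepts)) == len(STATES)

print(recovers_state(lam_R_plus, 0))   # lambda_R^+ reads the first string
print(recovers_state(lam_R_minus, 1))  # lambda_R^- reads the second string
```

Both checks return True: each receiver type jumps to his designated string, within which his percepts separate the three states, even though the sender's vocabulary was reduced to five messages.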

The second problem is that, without knowing the senders' language types, the receivers do not know how to interpret the senders' messages. Hence, to achieve effective communication, the senders must reveal their language types. Fix any N such that

N ≥ 3 ∑_{i∈I} |Λ_i|. (5)

Ignoring incentive compatibility, the following lemma shows that there is a procedure such that every sender is able to reveal his language type. The proof of Lemma 1 can be found in Appendix A.1.

Lemma 1 For every (i, j, λ) ∈ I × I × Λ with i ≠ j, there exists a function Υ_{(i,λ_j)}: Λ_i → M^N such that

Υ_{(i,λ_j)}[λ_i] ∈ (λ_i)^N, ∀λ_i ∈ Λ_i,

and

λ_i ≠ λ'_i ⟹ Υ_{(i,λ_j)}[λ_i] is λ_j-distinguishable from Υ_{(i,λ_j)}[λ'_i]. (6)

Given i ≠ j, suppose agent i follows Υ_{(i,λ_j)} to reveal his language type to agent j, whose type is λ_j: if agent i is of type λ_i, he sends Υ_{(i,λ_j)}[λ_i] ∈ (λ_i)^N to type λ_j. For any two distinct language types of i, λ'_i and λ''_i, because of (6), λ_j can distinguish the message sent by λ'_i (i.e., Υ_{(i,λ_j)}[λ'_i]) from the message sent by λ''_i (i.e., Υ_{(i,λ_j)}[λ''_i]). Thus, the asymmetric information due to receivers not knowing the senders' language types is eliminated.

Once again, the N-dimensional nature of messages is crucial. Consider the following sender-receiver common-interest example where now it is the sender that has language barriers:

T = A = {α, β},

u_S(t, a) = u_R(t, a) = 1 if a = t, and 0 if a ≠ t,

M = ℤ; Λ_R = {λ_R}; Λ_S = {λ'_S, λ''_S, λ'''_S},

λ_R = {1, 2}; λ'_S = {1, 2}; λ''_S = {1, 3}; λ'''_S = {2, 3}.

Suppose all language types and all payoff states have positive probability. Clearly, without language barriers, the efficient outcome (i.e., perfect communication) is an equilibrium. However, it is not an equilibrium for 1-dimensional communication under these particular language barriers. To see this, suppose otherwise. Then, to achieve efficiency for λ'_S, states α and β must be truthfully revealed by messages 1 and 2. Without loss of generality, λ_R plays α and β upon receiving messages 1 and 2, respectively. As a result, if λ_R plays α upon receiving message 3, then efficiency is not achieved for λ''_S; if λ_R plays β upon receiving message 3, then efficiency is not achieved for λ'''_S. Either way, we get a contradiction.

Nevertheless, because of Lemma 1, an equilibrium with multi-dimensional messages (m^λ, m^t(λ)) which guarantees full communication exists. In such equilibria, the first component m^λ identifies the sender's language type, while the second component identifies the payoff type. As in the previous example, giving arbitrary additional messages to each sender type would not work, because the receiver would not be able to understand such messages.
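The impossibility argument above can also be verified exhaustively. The minimal sketch below is our own encoding, under the assumption that all messages the receiver does not understand are indistinguishable to him; it enumerates every receiver rule over the percepts {1, 2, ?}.

```python
from itertools import product

# Hypothetical sketch (our own encoding): no 1-dimensional receiver rule
# achieves perfect communication for all three sender language types.
STATES = ["alpha", "beta"]
lam_R = {1, 2}                          # receiver understands 1 and 2
sender_types = [{1, 2}, {1, 3}, {2, 3}]

def percept(m):
    return m if m in lam_R else "?"     # ununderstood messages look alike

percepts = [1, 2, "?"]

def type_can_separate(rho, lam_S):
    """Does some sender strategy for this type induce the right action
    in both states under receiver rule rho?"""
    return any(
        all(rho[percept(strat[t])] == t for t in STATES)
        for strat in ({"alpha": m1, "beta": m2}
                      for m1 in lam_S for m2 in lam_S)
    )

efficient = []
for actions in product(STATES, repeat=3):
    rho = dict(zip(percepts, actions))  # receiver rule: percept -> action
    if all(type_can_separate(rho, s) for s in sender_types):
        efficient.append(rho)
print(efficient)  # -> []
```

The search returns the empty list: a working rule would have to map the three percepts 1, 2 and "?" to three pairwise distinct actions, and only two actions exist, which is exactly the contradiction derived in the text.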

The last obstacle is technical and amounts to making sure that, once asymmetries of information about language types are resolved, there are still enough dimensions to convey the payoff-relevant information. For a given ⟨Λ, π⟩, some sender's language type λ_i may have fewer messages than needed to replicate (σ, ρ), i.e., |λ_i| < |E_i^{(σ,ρ)}|, where E_i^{(σ,ρ)} denotes the set of messages player i sends under (σ, ρ). We show that this too can be overcome by multiple dimensions, via the following lemma, whose proof can be found in Appendix A.2. Fix any positive integer N̂ such that

N̂ ≥ max{ |E_i^{(σ,ρ)}| : i ∈ I }. (7)

Lemma 2 For every (i, j, λ) ∈ I × I × Λ with i ≠ j, there exists a function Γ_{(λ_i,λ_j)}: E_i^{(σ,ρ)} → (λ_i)^{N̂} such that for any m, m' ∈ E_i^{(σ,ρ)},

m ≠ m' ⟹ Γ_{(λ_i,λ_j)}(m) is λ_j-distinguishable from Γ_{(λ_i,λ_j)}(m'). (8)

By the previous two steps, both senders' and receivers' language types can be truthfully revealed. Given this and i ≠ j, suppose sender λ_i follows Γ_{(λ_i,λ_j)} to send messages to receiver λ_j, so that Γ_{(λ_i,λ_j)} translates equilibrium messages in E_i^{(σ,ρ)} into i's endowed messages in (λ_i)^{N̂}. Then, for any two distinct messages m', m'' in E_i^{(σ,ρ)}, because of (8), receiver λ_j can distinguish the two translated messages Γ_{(λ_i,λ_j)}(m') and Γ_{(λ_i,λ_j)}(m''), i.e., equilibrium messages are effectively transmitted. In this case, N-dimensional messages have the role of increasing the number of messages at the sender's disposal, and they achieve this using components that the receiver can understand.
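A Lemma-2-style translation is easy to construct once the sender's vocabulary contains two messages the receiver can tell apart. The sketch below is our own construction (the function name and the binary coding are ours, and m0, m1 are assumed distinguishable to the receiver); it injectively maps equilibrium messages to strings over the sender's own endowed messages.

```python
# Hypothetical sketch of a Lemma-2-style injective translation.
def make_gamma(eq_messages, m0, m1, n_hat):
    """Map each equilibrium message to a distinct n_hat-dimensional string
    over the sender's two messages m0 and m1 (binary coding of the index)."""
    assert len(eq_messages) <= 2 ** n_hat, "n_hat too small; cf. condition (7)"
    gamma = {}
    for k, m in enumerate(sorted(eq_messages, key=str)):
        bits = format(k, "0{}b".format(n_hat))
        gamma[m] = tuple(m1 if bit == "1" else m0 for bit in bits)
    return gamma

gamma = make_gamma({"left", "center", "right"}, m0=5, m1=7, n_hat=2)
# Distinct equilibrium messages receive distinguishable codewords:
print(len(set(gamma.values())) == len(gamma))  # -> True
```

Since the codewords differ only in components the receiver can tell apart, distinctness of codewords is exactly the distinguishability required by (8).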

We now proceed to integrate these observations together in the more general frame- work where incentive compatibility matters, thus proving Theorem 1.

4.2.2 Proof of Theorem 1

Fix any game G without language barriers and any finite-message equilibrium (σ, ρ) in G, which are listed as follows:

G = ⟨I, M, T, Λ*, π*, A, (u_i)_{i∈I}, N = 1⟩;

(σ, ρ) = ((σ_i: T_i → M)_{i∈I}, (ρ_i: T_i × M^I → A_i)_{i∈I}).

Consider N* = (N + N̂) ∑_{i∈I} |Λ_i|, where N and N̂ are defined in (5) and (7), respectively. For notational convenience, for every i ∈ I and every (λ_i, λ'_i) ∈ Λ_i × Λ_i, fix any two functions:

Υ_{(i,λ'_i)}: Λ_i → M^N;

Γ_{(λ_i,λ'_i)}: E_i^{(σ,ρ)} → (λ_i)^{N̂}.

These two functions will not play any essential role in our equilibrium (see footnote 12).

Senders' strategies: let m_{(i,λ_j)} denote the message intended from sender i to receiver j of language type λ_j. For every player i ∈ I and every (t_i, λ_i) ∈ T_i × Λ_i, define

σ*_i(t_i, λ_i) = ⟨m_{(i,λ_j)}⟩_{j∈I, λ_j∈Λ_j} = ⟨(Υ_{(i,λ_j)}(λ_i), Γ_{(λ_i,λ_j)}[σ_i(t_i)])⟩_{j∈I, λ_j∈Λ_j} ∈ M^{N*}. (9)

I.e., given j ≠ i, sender λ_i tells receiver λ_j about i's true language type via Υ_{(i,λ_j)}(λ_i), as described in Lemma 1, and the equilibrium message σ_i(t_i) under (σ, ρ) via Γ_{(λ_i,λ_j)}[σ_i(t_i)], as described in Lemma 2.^12

Fix any t̃_i ∈ T_i.

Receivers' strategies: upon receiving the intended message m_{(i,λ_j)} from sender i (≠ j), receiver λ_j translates it back to an equilibrium message under (σ, ρ) via the following function:

Σ_{(λ_j,i)}[m_{(i,λ_j)}] = σ_i(t_i), if there exists (t_i, λ_i) ∈ T_i × Λ_i such that m_{(i,λ_j)} = (Υ_{(i,λ_j)}(λ_i), Γ_{(λ_i,λ_j)}[σ_i(t_i)]); and Σ_{(λ_j,i)}[m_{(i,λ_j)}] = σ_i(t̃_i), otherwise. (10)

Note that, by Lemmas 1 and 2, if there exist multiple (t_i, λ_i) ∈ T_i × Λ_i such that m_{(i,λ_j)} = (Υ_{(i,λ_j)}(λ_i), Γ_{(λ_i,λ_j)}[σ_i(t_i)]), then λ_i must be unique, and all such (t_i, λ_i) share the same equilibrium message σ_i(t_i); hence, Σ_{(λ_j,i)}[m_{(i,λ_j)}] is well-defined. We are ready to define ρ*_j for every j ∈ I as follows.

ρ*_j(t_j, λ_j, ⟨m_{(i,λ_j)}⟩_{i∈I∖{j}}) = ρ_j(t_j, σ_j(t_j), (Σ_{(λ_j,i)}[m_{(i,λ_j)}])_{i∈I∖{j}}).

That is, under (σ*, ρ*) and any given (t, λ) ∈ T × Λ, each sender type λ_i follows σ_i(t_i) by sending two pieces of information, (Υ_{(i,λ_j)}(λ_i), Γ_{(λ_i,λ_j)}[σ_i(t_i)]), to each receiver type λ_j, where the former truthfully reveals λ_i, and the latter is σ_i(t_i) coded using the endowed messages of λ_i. Upon receiving the message, each receiver

λ_j decodes it back to σ_i(t_i), and plays the action ρ_j(t_j, ⟨σ_i(t_i)⟩_{i∈I}). As a result,

ρ_i(t_i, ⟨σ_j(t_j)⟩_{j∈I}) = ρ*_i(t_i, λ_i, ⟨σ*_j(t_j, λ_j)⟩_{j∈I}), ∀(i, t, λ) ∈ I × T × Λ,

i.e., (σ, ρ) and (σ*, ρ*) are outcome-equivalent.

^12 For i = j, the message (Υ_{(i,λ'_i)}(λ_i), Γ_{(λ_i,λ'_i)}[σ_i(t_i)]) is never used in our equilibrium: it is the message intended from λ_i to λ'_i, but player i knows herself/himself to be of language type λ_i and not λ'_i, and as a result, such messages are redundant. We include these redundant messages purely for notational convenience.

Finally, we show the incentive compatibility of both senders and receivers. First, for any receiver j under (σ, ρ), he forms a posterior belief on t_{−j} upon receiving the messages ⟨σ_i(t_i)⟩_{i∈I}, and chooses the best strategy ρ_j(t_j, ⟨σ_i(t_i)⟩_{i∈I}). Under (σ*, ρ*), receiver j receives two pieces of information, i.e., λ in addition to ⟨σ_i(t_i)⟩_{i∈I}. Since T and Λ are independent by Assumption 1, receiver j forms the same posterior belief on t_{−j} as under (σ, ρ), and hence the same strategy ρ_j(t_j, ⟨σ_i(t_i)⟩_{i∈I}) is a best reply for j. Second, for any sender i under (σ, ρ), sending σ_i(t_i) is a best strategy given the true payoff state t_i, i.e., sending σ_i(t_i) is weakly better than sending σ_i(t'_i) for any t'_i ∈ T_i. Note that under (σ*, ρ*), the equilibrium message of sender i with the true payoff state t_i will be interpreted as σ_i(t_i) by the receivers. Furthermore, any message from sender i would be interpreted as σ_i(t'_i) for some t'_i ∈ T_i (see (10)). Since sending σ_i(t_i) is weakly better than sending σ_i(t'_i) for any t'_i ∈ T_i, it is a best strategy for sender i to send the equilibrium message under (σ*, ρ*).
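The replication protocol can be illustrated end-to-end. The toy script below is our own encoding: arbitrary type tags play the role of Υ and arbitrary injective codebooks play the role of Γ, and we check that the receiver can always decode σ* back to the original equilibrium message σ_i(t_i), as in (10).

```python
# Hypothetical end-to-end sketch of the replication protocol (our own toy
# encoding, not the paper's explicit construction).

# Original 1-dimensional equilibrium messages sigma_i(t_i):
eq_msg = {"t1": "mA", "t2": "mB"}

# Upsilon-style type tags and Gamma-style codebooks, one per sender type;
# every component is assumed to come from that sender type's vocabulary.
tag = {"lamS1": (0, 0), "lamS2": (0, 1)}
code = {
    ("lamS1", "mA"): (1,), ("lamS1", "mB"): (2,),
    ("lamS2", "mA"): (3,), ("lamS2", "mB"): (4,),
}

def encode(lam_S, t):
    """sigma*: reveal the language type, then the coded equilibrium message."""
    return tag[lam_S] + code[(lam_S, eq_msg[t])]

def decode(m):
    """Sigma of (10): recover the sender's type, then sigma_i(t_i)."""
    lam_S = next(l for l, g in tag.items() if g == m[:2])
    return next(em for (l, em), c in code.items()
                if l == lam_S and c == m[2:])

for lam_S in tag:
    for t in eq_msg:
        assert decode(encode(lam_S, t)) == eq_msg[t]
print("replication round-trips for every (language type, payoff state)")
```

Because the tags are pairwise distinct and each codebook is injective, the decoding map is well-defined exactly as argued for Σ in (10).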

4.2.3 Implications of Theorem 1

One immediate implication of Theorem 1 is that any language barriers in the canonical Crawford and Sobel (1982) cheap-talk model can be overcome. In that model, there exists a maximally-revealing equilibrium in which finitely many messages are transmitted. Hence, all equilibria in the model are finite-message equilibria, and Theorem 1 immediately implies that all of them can be replicated, whatever the language barriers, if multi-dimensional communication is allowed.

A second, less immediate, implication focuses on the setting studied by Blume and Board (2013), which is that of a common-interest sender-receiver game. Specifically, we assume the following:

Assumption 2 (common-interest sender-receiver game)

I = {1, 2}, |A_1| = |T_2| = 1, |T_1| > 1, |A_2| > 1, |M| < ∞,

u_1 ≡ u_2 ≡ u (i.e., common interest), u is continuous, and T_1 and A_2 are compact metric spaces.

I.e., player 1 is the sender and player 2 is the receiver; A_1 and T_2 are degenerate; we use u to denote the common utility function of both players. In this setting Blume and Board (2013) prove that indeterminacies of meaning are inevitably induced by language barriers under 1-dimensional communication. As previously discussed, this means that there will be no efficient equilibria.^13

However, in the absence of language barriers, Proposition 1 below shows that approximate efficiency can always be achieved if there are sufficiently many, albeit finitely many, messages. Its proof can be found in Appendix A.3.

Proposition 1 For any ε > 0 and any game with 1-dimensional communication and no language barriers G = ⟨I, M, T, Λ*, π*, A, (u_i)_{i∈I}, N = 1⟩ which satisfies Assumption 2, there exists a positive integer K such that

|M| ≥ K ⟹ sup_{(σ,ρ)∈Σ^G} U(σ, ρ) ≥ ∫_{t∈T} [max_{a∈A} u(t, a)] π_T(dt) − ε,

where Σ^G denotes the set of equilibria of G.

Note that ∫_{t∈T} [max_{a∈A} u(t, a)] π_T(dt) is the maximal utility that players can possibly get. We say an equilibrium (σ, ρ) achieves ε-efficiency if and only if

U(σ, ρ) ≥ ∫_{t∈T} [max_{a∈A} u(t, a)] π_T(dt) − ε.

Then, Theorem 1 and Proposition 1 together immediately imply the following corollary:

Corollary 1 For any ε > 0 and any game satisfying Assumptions 1 and 2, ε-efficiency can be achieved in equilibria of similar games G* = ⟨I, M, T, Λ, π, A, (u_i)_{i∈I}, N⟩ for sufficiently large N.

That is, multi-dimensional communication not only eliminates indeterminacies of meaning caused by language barriers, but also achieves approximate efficiency.^14

^13 Indeterminacies of meaning imply inefficiency, or equivalently, efficiency implies determinacy of meaning.

^14 Theorem 1 assumes that language types and payoff states are independently distributed. For common-

5 Main Results: 1-dimensional Communication

In the previous section, we showed that for any language barriers, if multi-dimensional communication is allowed, we can always replicate the outcomes that would obtain in the absence of such language barriers. In this sense, in the presence of language barriers multi-dimensional communication allows us to do no worse than if such language barriers did not exist. In this section, we change quantifiers and focus on 1-dimensional communication to study whether there exist language barriers that allow us to do "better" than what we can achieve without them.

In particular, we follow the Goltsman, Hörner, Pavlov, and Squintani (2009) strategy of studying several modified versions of cheap-talk communication games, although the games studied here generalize theirs in two dimensions: we consider arbitrary distributions and utility functions, while Goltsman, Hörner, Pavlov, and Squintani (2009) focus on the uniform distribution and the quadratic utility function.^15 In Section 5.2, we define arbitration and mediation equilibria (Goltsman, Hörner, Pavlov, and Squintani (2009)), noisy-talk equilibria (Blume, Board, and Kawamura (2007)), and language-barrier equilibria, all of which may Pareto dominate cheap-talk equilibria. In Section 5.3, we provide a linear ranking of the maximal welfare induced by all these equilibria except for arbitration equilibria. In Sections 5.4 and 5.5, we further clarify the relationship between arbitration equilibria and language-barrier equilibria on the one hand, and language-barrier equilibria and noisy-talk equilibria on the other.

We begin in Section 5.1 by showing, through an example, that language barriers can (ex-ante) Pareto improve outcomes strictly.

interest games, however, in previous versions of this paper we showed that even in the absence of independence, with an N-dimensional protocol there exist ε-equilibria that achieve approximate efficiency.

^15 (cite BB WP version) also undertook an exercise similar to ours, by adding language-barrier equilibria to the Goltsman, Hörner, Pavlov, and Squintani (2009) analysis. However, they focused on the Goltsman, Hörner, Pavlov, and Squintani (2009) class of games, so our results, which consider more general settings, differ.

5.1 An example of language barriers strictly improving over cheap talk

Example 1 Consider a canonical Crawford and Sobel (1982) one-sender one-receiver game, i.e.,

I = {S, R}; |A_S| = |T_R| = 1,

T_S = M = {0, 1} and A_R = (−∞, ∞),

u_S = −(a_R − t_S − 5/8)² and u_R = −(a_R − t_S)², ∀(a_R, t_S) ∈ A_R × T_S.

Consider two scenarios:

1. no language barriers: Λ_S = Λ_R = {M}, and the prior on T × Λ_S × Λ_R has a uniform distribution;

2. language barriers for the sender:

Λ_S = {λ_S = {0}, λ̂_S = {0, 1}},

Λ_R = {M},

and the prior on T × Λ_S × Λ_R has a uniform distribution.

Without language barriers, with 1-dimensional communication, only the pooling equilibrium exists, because the bias between the sender and receiver is too large (i.e., 5/8 > 1/2). In the pooling equilibrium, the receiver takes the action 1/2, and the ex-ante expected utilities of the two agents are Eu_S = −41/64 and Eu_R = −1/4. However, with the language barriers for the sender specified above, it is easy to check that the following strategy profile is an equilibrium: type λ_S always sends message 0; type λ̂_S sends message 1 if t = 1, and message 0 if t = 0. Finally, the receiver plays action 1 if he gets message 1 and action 1/3 if he gets message 0.^16 The agents' ex-ante expected utilities in

^16 It is straightforward to see that the receiver and λ_S are choosing best replies in equilibrium. To check the incentive compatibility of λ̂_S, note that the receiver plays only two actions: 1 and 1/3. Furthermore, the

this equilibrium are:

Eu_S = (1/2) Eu_S^{λ_S} + (1/2) Eu_S^{λ̂_S} = −321/576 > −41/64,

Eu_R = −(1/2)(0 − 1/3)² − (1/4)(1 − 1/3)² − (1/4)(1 − 1)² = −1/6 > −1/4.
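These numbers are easy to verify by direct computation. The following script is our own verification, using exact rational arithmetic to recompute the ex-ante expected utilities in both scenarios.

```python
from fractions import Fraction as F

# Numerical check of Example 1 (our own verification script).
b = F(5, 8)                          # sender's bias
uS = lambda a, t: -(a - t - b) ** 2  # sender's utility
uR = lambda a, t: -(a - t) ** 2      # receiver's utility

# Pooling (no language barriers): the receiver plays 1/2 in both states.
EuS_pool = F(1, 2) * (uS(F(1, 2), 0) + uS(F(1, 2), 1))
EuR_pool = F(1, 2) * (uR(F(1, 2), 0) + uR(F(1, 2), 1))

# Language barriers: lambda_S babbles (always message 0); lambda_hat_S
# separates; the receiver plays 1 on message 1 and 1/3 on message 0.
# Each (t, language type) pair has probability 1/4 under the uniform prior.
action = {(0, "lam"): F(1, 3), (1, "lam"): F(1, 3),
          (0, "hat"): F(1, 3), (1, "hat"): F(1)}
EuS_lb = sum(F(1, 4) * uS(action[(t, l)], t)
             for t in (0, 1) for l in ("lam", "hat"))
EuR_lb = sum(F(1, 4) * uR(action[(t, l)], t)
             for t in (0, 1) for l in ("lam", "hat"))

print(EuS_pool, EuS_lb)  # -41/64 -107/192  (note -107/192 = -321/576)
print(EuR_pool, EuR_lb)  # -1/4 -1/6
```

Both players are strictly better off under the language barriers, confirming the ex-ante Pareto ranking claimed in the text.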

5.2 Cheap talk communication devices

The welfare improvement in Example 1 is not entirely surprising, since the literature has already pointed out that while faulty communication may reduce message precision, it may also weaken the sender's incentive compatibility constraints in such a way as to more than compensate for the lower precision. In particular, Blume, Board, and Kawamura (2007) show this for "noisy talk" in a sender-receiver game, where talk is "noisy" in the sense that there is an exogenous probability that the receiver will not hear the intended message, but one randomly chosen by nature. Furthermore, Goltsman, Hörner, Pavlov, and Squintani (2009) show that if an unbiased mediator is introduced in the standard Crawford and Sobel (1982) model, this mediator may improve communication by sending noisy messages to the receiver.

Example 1 shows that language barriers can also achieve welfare improvements, so the question we wish to answer is: what is the relationship regarding welfare between language barriers and cheap talk or generalized versions of cheap talk such as noisy talk and mediated communication mentioned above? To answer this question, we follow the

ideal point for the sender is 5/8 when t = 0, and

|1/3 − 5/8| = 7/24 < 9/24 = |1 − 5/8|.

As a result, it is a best reply for λ̂_S to send the message 0 when t = 0. Similarly, the ideal point for the sender is 13/8 when t = 1, and

|1/3 − 13/8| = 31/24 > 15/24 = |1 − 13/8|.

As a result, it is a best reply for λ̂_S to send the message 1 when t = 1.

Goltsman, Hörner, Pavlov, and Squintani (2009) strategy in studying optimal equilibria under language barriers, noisy talk, mediated communication, and arbitrated communication, and compare their welfare properties.^17

Recall that a communication game is defined by a tuple

⟨I, M, T, Λ, π, A, (u_i: T × A → ℝ)_{i∈I}, N⟩.

From now on, we fix any primitive (excluding language barriers),

⟨I, M, T, π_T, A, (u_i: T × A → ℝ)_{i∈I}, N = 1⟩,

so as to make comparisons meaningful. In particular, we fix π_T but allow Λ and the marginal distribution on it to change. For simplicity, we assume

I = {1, 2}, |A_1| = |T_2| = 1,

i.e., we focus on the standard one-sender one-receiver game, where 1 and 2 are the sender and the receiver, respectively.

Note that |A_1| = |T_2| = 1 means that the sender (i.e., player 1) takes a degenerate action, and the receiver (i.e., player 2) observes a degenerate payoff type; this is common knowledge. Hence, it is without loss of generality to omit A_1 and T_2 and to use A and T to denote A_2 and T_1, respectively. However, we still consider Λ = Λ_1 × Λ_2, i.e., we allow for the possibility that both the sender and the receiver have language barriers. Finally, for simplicity, we assume

M = A = ℝ.

5.2.1 Mediation and arbitration equilibria

First, we define arbitration and mediation equilibria.

17Of course, this means comparing the best available equilibrium under the different “faulty” devices. For instance, we say language barriers strictly improve welfare over noisy talk if and only if the optimal equilibrium under some language barriers has a strictly larger welfare than the optimal equilibrium under all noisy talk.

Definition 4 [p: T → Δ(A)] is an arbitration equilibrium with adverse selection if

∫_{a∈A} u_1[t, a] p(t)(da) ≥ ∫_{a∈A} u_1[t, a] p(t')(da), ∀t, t' ∈ T. (11)

[p: T → Δ(A)] is an arbitration equilibrium with moral hazard if, ∀ι: A → A,

∫_T [∫_{a∈A} u_2[t, a] p(t)(da)] π_T[dt] ≥ ∫_T [∫_{a∈A} u_2[t, ι(a)] p(t)(da)] π_T[dt]. (12)

[p: T → Δ(A)] is a mediation equilibrium if both (11) and (12) are satisfied.

We say [p: T → Δ(A)] is an arbitration equilibrium if it is either an arbitration equilibrium with adverse selection or an arbitration equilibrium with moral hazard. Clearly, a mediation equilibrium is an arbitration equilibrium. In particular, we share the same definition of mediation equilibrium with Goltsman, Hörner, Pavlov, and Squintani (2009). Our notion of arbitration equilibrium with adverse selection is the same as the original "arbitration equilibrium" defined in Goltsman, Hörner, Pavlov, and Squintani (2009), while "arbitration equilibrium with moral hazard" is a new notion.^18

Our terminology is inspired by Myerson (1991)'s notions of moral hazard and adverse selection in communication games. Suppose there is a non-strategic middleman (i.e., an arbitrator or a mediator) besides the players. The sender reports his private payoff type to the middleman; upon receiving t, the middleman commits to drawing from a lottery on A following the distribution p(t); for every realized value a of the lottery, the receiver plays a. In the case of mediation, the sender is not committed to reporting the true payoff type, and condition (11) requires that truthful reporting be optimal for the sender in equilibrium. At the same time, the receiver is also not committed to following the action suggested by the middleman, and condition (12) describes the incentive compatibility condition for the receiver, where the function ι: A → A in (12) represents the

^18 Presumably, "arbitration equilibrium with adverse selection" is the more practically relevant of the two notions. However, our sole purpose in introducing the other notion is conceptual clarification. More precisely, it helps us compare language-barrier equilibria and arbitration equilibria (see Section 5.4).

receiver's (possible) deviation from the recommended actions, i.e., when the mediator recommends a, the receiver may deviate to play ι(a). The two forms of arbitration follow when we impose just one of the two conditions. In arbitration with adverse selection, the receiver must follow the arbitrator's recommended action, but the adverse selection problem (i.e., the sender still needs to be incentivized to report her true payoff state) remains. In arbitration with moral hazard, the sender must report her true payoff state, but the moral hazard problem (i.e., the receiver still needs to be incentivized to follow the arbitrator's recommended action) remains.

5.2.2 Noisy-talk equilibria

A noisy-talk game is defined by a tuple (ε, ξ) ∈ [0, 1] × Δ(M), i.e., with probability ε, the sender's message is replaced by an exogenous and independent noise following the distribution ξ. A potential candidate for a noisy-talk equilibrium is a strategy profile

([s: T → Δ(M)], [r: M → Δ(A)]).

Given [(ε, ξ), s, r], type t of the sender follows s(t) ∈ Δ(M) to send a random message; for any realized message m from the sender, with probability (1 − ε) the receiver observes m, and with probability ε the receiver observes a random message generated by the distribution ξ; finally, upon receiving a (possibly distorted) message m', the receiver takes a random action r(m') ∈ Δ(A). We aggregate this process as follows:

p[(ε,ξ), s, r]: T → Δ(A), (13)

p[(ε,ξ), s, r](t)[E] = ∫_M [(1 − ε) r(m)[E] + ε ∫_M r(m')[E] ξ[dm']] s(t)[dm], ∀E ⊆ A,

i.e., p[(ε,ξ), s, r](t) is the ex-post action distribution induced by the equilibrium, given t. We now define noisy-talk equilibria.

Definition 5 ([s: T → Δ(M)], [r: M → Δ(A)]) is a noisy-talk equilibrium if there exists (ε, ξ) ∈ [0, 1] × Δ(M) such that

∀t ∈ T, ∀s': T → Δ(M), (14)

∫_{a∈A} u_1(t, a) p[(ε,ξ), s, r](t)(da) ≥ ∫_{a∈A} u_1(t, a) p[(ε,ξ), s', r](t)(da),

and ∀r': M → Δ(A), (15)

∫_T [∫_{a∈A} u_2(t, a) p[(ε,ξ), s, r](t)(da)] π_T[dt] ≥ ∫_T [∫_{a∈A} u_2(t, a) p[(ε,ξ), s, r'](t)(da)] π_T[dt].

Conditions (14) and (15) in Definition 5 describe the incentive compatibility conditions for the sender and the receiver, respectively.
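To fix ideas, the aggregation in (13) can be computed explicitly in a discrete toy case (the numbers are our own: two messages, two actions, ε = 1/4 and uniform noise ξ).

```python
from fractions import Fraction as F

# Discrete illustration of equation (13) with toy numbers of our own:
# with probability eps the sender's message is replaced by noise from xi.
eps = F(1, 4)
MSGS = ["m1", "m2"]
xi = {"m1": F(1, 2), "m2": F(1, 2)}            # noise distribution on M
s = {"lo": {"m1": F(1)}, "hi": {"m2": F(1)}}   # sender strategy s(t)
r = {"m1": {0: F(1)}, "m2": {1: F(1)}}         # receiver strategy r(m)

def p(t):
    """Ex-post action distribution p[(eps, xi), s, r](t) of equation (13)."""
    out = {0: F(0), 1: F(0)}
    for m, s_m in s[t].items():
        for a in out:
            noise_term = sum(xi[n] * r[n].get(a, F(0)) for n in MSGS)
            out[a] += s_m * ((1 - eps) * r[m].get(a, F(0)) + eps * noise_term)
    return out

print(p("lo"))  # -> {0: Fraction(7, 8), 1: Fraction(1, 8)}
```

With probability 1 − ε = 3/4 the intended message survives, and with probability ε the action is drawn from the ξ-average of r, giving action 0 with total probability 3/4 + 1/4 · 1/2 = 7/8 in state "lo". Note that the same noise (ε, ξ) applies in every state, a point that matters in Section 5.5.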

5.2.3 Language-barriers equilibria

A valid language-barriers game is defined by a tuple [Λ = Λ_1 × Λ_2, π ∈ Δ(T × Λ)] such that the marginal distribution of π on T matches the fixed π_T and assumptions (3) and (4) are satisfied. In game [Λ, π], a potential candidate for a language-barriers equilibrium is a strategy profile

[σ: T × Λ_1 → Δ(M), ρ: Λ_2 × M → Δ(A)].

We say [σ, ρ] is a valid strategy profile if and only if σ and ρ are measurable with respect to Λ_1 and Λ_2, respectively, where measurability is as defined in Section 3.

Given (t, λ_1, λ_2), the sender follows σ(t, λ_1) ∈ Δ(M) to send a random message; upon receiving a realized message m, the receiver follows ρ(λ_2, m) ∈ Δ(A) to play a random action. We use the function p(σ, ρ) defined below to aggregate this process:

p(σ, ρ): T × Λ_1 × Λ_2 → Δ(A), (16)

p(σ, ρ)(t, λ_1, λ_2)[E] = ∫_M [ρ(λ_2, m)[E]] σ(t, λ_1)(dm), ∀E ⊆ A.

Definition 6 For any valid language-barriers game (Λ, π), we say a valid strategy profile

[σ: T × Λ_1 → Δ(M), ρ: Λ_2 × M → Δ(A)]

is a language-barriers equilibrium if

∀(t, λ_1) ∈ T × Λ_1, ∀σ': T × Λ_1 → Δ(M), (17)

∫_{Λ_2} (∫_{a∈A} u_1(t, a) p(σ, ρ)(t, λ_1, λ_2)[da] − ∫_{a∈A} u_1(t, a) p(σ', ρ)(t, λ_1, λ_2)[da]) π[dλ_2 | t, λ_1] ≥ 0,

and ∀λ_2 ∈ Λ_2, ∀ρ': Λ_2 × M → Δ(A), (18)

∫_{T×Λ_1} (∫_{a∈A} u_2(t, a) p(σ, ρ)(t, λ_1, λ_2)[da] − ∫_{a∈A} u_2(t, a) p(σ, ρ')(t, λ_1, λ_2)[da]) π[(dt, dλ_1) | λ_2] ≥ 0.

Furthermore, we say it is an independent-language-barriers equilibrium if T and Λ are independent according to π.

Finally, to compare these different notions of equilibria, we define a notion of out- come equivalence, as in Definition 3.

Definition 7 Consider any arbitration equilibrium [p: T → Δ(A)], any noisy-talk equilibrium (s, r) under noise (ε, ξ), and any language-barriers equilibrium [σ, ρ], which, given t, induce the ex-post action distributions p(t), p[(ε,ξ), s, r](t) as defined in (13), and p(σ, ρ)(t) as defined in (16), respectively. Any two of these equilibria are outcome-equivalent if they induce the same ex-post action distribution for every t ∈ T.

5.3 Welfare comparison

In this section, we compare the welfare induced by different equilibria. Goltsman, Hörner, Pavlov, and Squintani (2009) consider the canonical Crawford and Sobel (1982) model with quadratic utility, where, in any mediation equilibrium, the sender's expected utility differs from the receiver's expected utility by a constant determined by the "bias." In that setting, it is without loss of generality to compare only the sender's (or the receiver's) expected utility across different equilibria. However, in the general communication model studied here, this nice property no longer holds. We thus use a weakly-increasing social welfare function Φ to aggregate players' utilities, i.e.,

Φ: ℝ^I → ℝ such that

x_i ≥ x'_i, ∀i ∈ I ⟹ Φ((x_i)_{i∈I}) ≥ Φ((x'_i)_{i∈I}).

That is, under an equilibrium in which every player i ∈ I gets expected utility x_i, we say the equilibrium achieves social welfare Φ((x_i)_{i∈I}). Then, given a social welfare function Φ, let Φ^{A-MH}, Φ^{A-AS}, Φ^M, Φ^N, Φ^{LB}, Φ^{ILB} denote the supremum of the social welfare achieved by equilibria in each of our possible protocols (arbitration with moral hazard, arbitration with adverse selection, mediation, noisy talk, language barriers, and independent language barriers, respectively). We now present the main result of this section.

Theorem 2 For any weakly increasing social welfare function Φ, we have

Φ^{LB} ≥ Φ^{A-MH} ≥ Φ^M ≥ Φ^{ILB} ≥ Φ^N.

It is straightforward to see that Φ^{A-MH} ≥ Φ^M, because every mediation equilibrium is an arbitration equilibrium with moral hazard. Given this, Theorem 2 is immediately implied by the following three lemmas. The idea of the proofs is to show that equilibria with language barriers, arbitration with moral hazard, mediation, independent language barriers and noisy talk correspond to a series of increasingly restrictive incentive compatibility conditions, in that order. The proofs of Lemmas 3, 4 and 5 are relegated to Appendices A.5, A.6 and A.7.

Lemma 3 For any noisy-talk equilibrium, there exists an outcome-equivalent independent-language-barriers equilibrium.

Lemma 4 For any independent-language-barriers equilibrium, there exists an outcome-equivalent mediation equilibrium.

Lemma 5 For any arbitration equilibrium with moral hazard, there exists an outcome-equivalent language-barriers equilibrium.

It is straightforward to see that Φ^{A-AS} ≥ Φ^M, because every mediation equilibrium is an arbitration equilibrium with adverse selection. One remaining question is how to compare Φ^{A-AS} with Φ^{LB} (and Φ^{A-MH}), which will be discussed in Section 5.4.

5.4 Arbitration equilibria and language-barrier equilibria

It is difficult to directly compare the maximal welfare of language-barrier equilibria and arbitration equilibria with adverse selection (i.e., the original "arbitration equilibrium" defined in Goltsman, Hörner, Pavlov, and Squintani (2009)). However, it is easy to compare the two forms of arbitration equilibrium, which is the reason we introduce the new arbitration notion. Furthermore, the comparison helps us clarify the relationship between language-barrier equilibria and arbitration equilibria with adverse selection.

The following example shows that neither Φ^{A-AS} ≥ Φ^{A-MH} nor Φ^{A-MH} ≥ Φ^{A-AS} holds in general.

Example 2 Consider the standard cheap-talk model with quadratic utility such that

u_1(a, t) = −(a − t − 3/4)²;

u_2(a, t) = −(a − t)²,

and µ_T ∈ Δ(T) is defined as

µ_T({0}) = µ_T({1}) = 1/2.

Consider [p̂: T → A] and [p̃: T → A] such that

p̂(0) = 3/4 and p̂(1) = 7/4;

p̃(0) = 0 and p̃(1) = 1.

That is, the sender and the receiver achieve their ideal actions (at both payoff states) in p̂ and p̃, respectively. As a result, p̂ and p̃ are an arbitration equilibrium with adverse selection and an arbitration equilibrium with moral hazard, respectively. Consider Φ̂: ℝ² → ℝ and Φ̃: ℝ² → ℝ defined as

Φ̂[u_1, u_2] ≡ u_1 and Φ̃[u_1, u_2] ≡ u_2.

Hence,

Φ̂^{A-AS} = Φ̃^{A-MH} = 0. (19)

It is easy to show that

Φ̂^{LB} ≤ −9/16, (20)

Φ̃^{A-AS} ≤ −1/128. (21)

Then, (19) and (20) imply Φ̂^{A-AS} > Φ̂^{LB}. Furthermore, (19), (21) and Lemma 5 imply Φ̃^{LB} > Φ̃^{A-AS}. Therefore, neither Φ^{A-AS} ≥ Φ^{LB} nor Φ^{LB} ≥ Φ^{A-AS} holds generally. The detailed analysis can be found in Appendix A.8.
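The claims about p̂ and p̃ can be checked numerically. The script below is our own verification: it confirms the sender's truth-telling condition (11) for p̂, and the receiver's obedience condition (12) for p̃ (since p̃ is deterministic and state-revealing, obedience reduces to each recommended action being receiver-optimal in the revealed state).

```python
from fractions import Fraction as F

# Verification of Example 2 (our own script).
bias = F(3, 4)
u1 = lambda a, t: -(a - t - bias) ** 2   # sender
u2 = lambda a, t: -(a - t) ** 2          # receiver

p_hat = {0: F(3, 4), 1: F(7, 4)}         # sender-ideal actions
p_tilde = {0: F(0), 1: F(1)}             # receiver-ideal actions

# (11) for p_hat: the sender never gains by misreporting.
assert all(u1(p_hat[t], t) >= u1(p_hat[r], t)
           for t in (0, 1) for r in (0, 1))

# (12) for p_tilde: the recommendation reveals the state, so obedience
# holds iff it is receiver-optimal there; check against a grid of deviations.
grid = [F(k, 8) for k in range(-8, 17)]
assert all(u2(p_tilde[t], t) >= u2(a, t) for t in (0, 1) for a in grid)

# Each protocol hands the favored player her ideal payoff of 0, as in (19).
assert u1(p_hat[0], 0) == u1(p_hat[1], 1) == 0
assert u2(p_tilde[0], 0) == u2(p_tilde[1], 1) == 0
print("Example 2: (11) holds for p_hat, (12) holds for p_tilde")
```

The bounds (20) and (21), by contrast, require optimizing over whole classes of equilibria and are established in Appendix A.8.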

5.5 Independent-language barrier equilibria and noisy-talk equilibria

Example 3 Consider the sender-receiver game with

I = {S, R}; |A_S| = |T_R| = 1,

T_S = M = [0, 1] and A_R = (−∞, ∞),

u_S = −(a_R − t_S − 1/4)² and u_R = −(a_R − t_S)², ∀(a_R, t_S) ∈ A_R × T_S,

with the common prior µ({0}) = µ({35/72}) = µ({1}) = 1/3.

Then, there exists an independent-language-barriers equilibrium that strictly Pareto dominates any noisy-talk equilibrium.

The proof is quite tedious and is relegated to Appendix A.9, but here we provide some intuition. Given the prior µ described above, a mediated-communication equilibrium can be constructed in which the mediator proposes action 0 when the type is 0, action 1/2 when the type is 35/72, and mixes when the type is 1, proposing action 1 with probability 35/36 and action 1/2 with complementary probability. The same outcome can be obtained in an independent-language-barriers setting where one language type has three messages (regardless of payoff type, this language type occurs with probability 35/36) and the other has only two of the three messages available. Then, there is an equilibrium in which one common message is used by both language types when the payoff state is 0, to communicate that the action that should be taken is 0; the second common message is used by both language types when the payoff state is 35/72, to indicate that the action that should be taken is 1/2; and finally, if the payoff state is 1, the remaining message is used by the language type that has it available, to communicate that the action that should be taken is 1, while the other language type, who has only two messages, uses the second common message.
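This construction can be verified with a short exact computation (a sketch of ours; the message labels 0, 1, 2 are arbitrary stand-ins for the three messages): the receiver's posterior mean after each message reproduces the mediator's proposed actions, including the pooling action 1/2 after the second common message.

```python
from fractions import Fraction as F

z = F(35, 72)                                  # middle payoff state
p_types = {0: F(1, 3), z: F(1, 3), 1: F(1, 3)}  # prior over payoff states
p_rich = F(35, 36)   # language type with all three messages
p_poor = F(1, 36)    # language type missing the third message

def msg(t, rich):
    """Equilibrium message as a function of state and language type."""
    if t == 0:
        return 0
    if t == z:
        return 1
    return 2 if rich else 1    # poor type falls back on the common message

# joint probability of (state, message)
joint = {}
for t, pt in p_types.items():
    for rich, pl in ((True, p_rich), (False, p_poor)):
        m = msg(t, rich)
        joint[(t, m)] = joint.get((t, m), F(0)) + pt * pl

def action(m):
    """Receiver's best reply: posterior mean of t given message m."""
    mass = sum(p for (t, mm), p in joint.items() if mm == m)
    return sum(F(t) * p for (t, mm), p in joint.items() if mm == m) / mass

assert action(0) == 0 and action(2) == 1
assert action(1) == F(1, 2)    # second common message induces action 1/2
```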

These equilibria cannot be replicated by a noisy-talk equilibrium. In mediated communication it is the mediator that injects noise, and she can do so depending on the payoff state: in this particular case, the mediator can make the receiver unsure of the sender's payoff state when she proposes action 1/2, but when she proposes actions zero and one, there is no uncertainty about the underlying payoff state. In the independent-language-barriers case, this can be replicated because upon observing the second common message the receiver is again uncertain about the sender's payoff state, whereas there is no uncertainty with the other two messages. In noisy talk, this cannot be replicated because the same noise distribution must apply for each payoff state.

6 Conclusions

At an intuitive level, "language barriers" bring obstacles to communication. However, in this paper we show that they need not do so if a communication protocol different from that in Crawford and Sobel (1982) is allowed. In particular, with N-dimensional communication, (almost) full efficiency can always be achieved in common-interest games, and any equilibrium in the canonical cheap-talk game can be mimicked by an equilibrium under any "language barriers." As a result, players cannot be worse off under "language barriers." Of course, in the real world plenty of examples of miscommunication exist, so our results imply that miscommunication must arise from something outside of this setting. A simple extension would be to incorporate in the model the cognitive cost of sending and comprehending more complex messages, thus reconciling our results with those of Blume and Board (2013).19 More generally, miscommunication might also arise from the fact that real-world messages have a semantic meaning and different agents might not have the same vocabulary. In our model the meaning of messages is emergent in equilibrium. We would argue that this makes the notion of equilibrium itself unsuitable for studying this aspect of language, whereas a more promising approach would be based on learning.

The second part of our paper shows that even if the original (1-dimensional) communication in Crawford and Sobel (1982) is imposed, some language barriers can improve upon the equilibria obtainable in their absence. In particular, we show that the optimal independent-language-barrier equilibrium always weakly (and sometimes strictly) dominates any generalized noisy-talk equilibrium, which includes the equilibria of the canonical Crawford and Sobel (1982) cheap-talk model (without noise) as special cases.20

We also believe that the Blume and Board (2013) framework with language types utilized here is rich enough to accommodate both the standard cheap-talk and the "persuasion games" literature (e.g., Milgrom (1981)) as special cases. In cheap talk, the sender can send any possible message, whereas in persuasion games, the privately informed parties cannot lie about the payoff states. This no-lie assumption is equivalent to $M = 2^T \setminus \{\varnothing\}$, with each payoff state t corresponding to a language type who is endowed with the set of messages $E \in M$ such that $t \in E$. I.e., at t, the sender can send only a message E such that $t \in E$, meaning that only states in E are possibly true. Much of the literature on persuasion games has focused on the conditions on preferences necessary to guarantee full communication of the payoff type.21 It is easy to see that guaranteeing full communication with arbitrary language barriers is easy, but future research should focus on determining the minimal conditions on language types necessary to guarantee full communication for a given preference profile.

19 See Garicano and Prat (2013) for a discussion of cognitive costs in communication in organisations.
20 One open question remains. We show that the optimal mediated communication is always weakly better than communication under the optimal independent "language barriers." Is the converse true?
21 See Seidmann and Winter (1997), Giovannoni and Seidmann (2007) and Hagenbach, Koessler, and Perez-Richet (2014) for details.

A Proofs

A.1 The proof of Lemma 1

Fix any $(i, j, \lambda) \in I \times I \times \Lambda$ with $i \neq j$. Recall $N \geq 3 \vee \left(\vee_{l \in I} |\Lambda_l|\right)$, and hence $|\Lambda_i| \leq N$. Label the elements in $\Lambda_i$ as $\lambda_i^{(1)}, \lambda_i^{(2)}, \ldots, \lambda_i^{(K)}$, where $K = |\Lambda_i| \leq N$.

For each $\lambda_i^{(k)} \in \Lambda_i$ with $k \leq K$, we have $\lambda_j \setminus \lambda_i^{(k)} \neq \varnothing$ and $|\lambda_i^{(k)}| \geq 2$, due to (3) and (4). Thus, we fix some $m^{(k)} \in \lambda_j \setminus \lambda_i^{(k)}$, and some $\widetilde m^{(k)} \in \lambda_i^{(k)} \setminus \{m^{(k)}\}$, i.e., $\widetilde m^{(k)} \neq m^{(k)}$. Note that
$$m^{(k)} \nsim_{\lambda_j} \widetilde m^{(k)}, \tag{22}$$
whether $\widetilde m^{(k)} \in \lambda_j$ or $\widetilde m^{(k)} \notin \lambda_j$ is true.

Then, define $\Upsilon_{(i,\lambda_j)} : \Lambda_i \to M^N$ as follows. For each $k \in \{1, 2, \ldots, K\}$,
$$\Upsilon_{(i,\lambda_j)}\left[\lambda_i^{(k)}\right] = [m_l]_{l=1}^N \in M^N \ \text{ such that } \ m_l = \begin{cases} m^{(k)}, & \text{if } l = k; \\ \widetilde m^{(k)}, & \text{otherwise.}\end{cases}$$
That is, type $\lambda_i^{(k)}$ uses $m^{(k)}$ to denote "yes" and $\widetilde m^{(k)}$ for "no." Furthermore, player i associates each of the first K dimensions of the message $\Upsilon_{(i,\lambda_j)}[\lambda_i^{(k)}]$ to one element in $\Lambda_i$, and player i reveals whether he is that element in the associated dimension. Precisely, $\lambda_i^{(k)}$ says "yes" (i.e., $m^{(k)}$) in the k-th dimension, and "no" (i.e., $\widetilde m^{(k)}$) in all other dimensions.
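The encoding can be sketched in code. The following toy model is ours, not the paper's formalism: we simply assume that a receiver of language type $\lambda_j$ pools all messages outside $\lambda_j$ into one indistinguishable symbol, and check that distinct sender types still produce N-dimensional codes the receiver can tell apart.

```python
def perceive(msg_vector, lam_j):
    """Receiver's view: messages outside lam_j collapse to '?'."""
    return tuple(m if m in lam_j else "?" for m in msg_vector)

def encode(k, K, N, yes, no):
    """Upsilon[lambda_i^(k)]: 'yes' in dimension k, 'no' elsewhere."""
    assert K <= N
    return tuple(yes[k] if l == k else no[k] for l in range(N))

# Toy instance: receiver's language lam_j, three sender types, N = 4.
lam_j = {"a", "b", "c"}
yes = {0: "a", 1: "b", 2: "c"}   # m^(k), drawn from lam_j \ lambda_i^(k)
no = {0: "x", 1: "y", 2: "b"}    # m_tilde^(k), in lambda_i^(k) and != m^(k)
codes = [perceive(encode(k, 3, 4, yes, no), lam_j) for k in range(3)]

# Distinct types remain distinguishable after the receiver's pooling,
# even though type 2's "no" symbol coincides with type 1's "yes" symbol.
assert len(set(codes)) == 3
```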

For $k \neq k'$, we show $\Upsilon_{(i,\lambda_j)}[\lambda_i^{(k)}] \nsim_{\lambda_j} \Upsilon_{(i,\lambda_j)}[\lambda_i^{(k')}]$, as needed in (6). By the definition of $\Upsilon_{(i,\lambda_j)}$, we have
$$\Upsilon_{(i,\lambda_j)}\left[\lambda_i^{(k)}\right] = [m_l]_{l=1}^N = \left(m_k = m^{(k)},\ m_l = \widetilde m^{(k)} \text{ for } l \neq k\right);$$
$$\Upsilon_{(i,\lambda_j)}\left[\lambda_i^{(k')}\right] = [\widehat m_l]_{l=1}^N = \left(\widehat m_{k'} = m^{(k')},\ \widehat m_l = \widetilde m^{(k')} \text{ for } l \neq k'\right).$$
Consider two cases: (1) $m^{(k)} \neq \widetilde m^{(k')}$ and (2) $m^{(k)} = \widetilde m^{(k')}$. In case (1), $m^{(k)} \neq \widetilde m^{(k')}$ and $m^{(k)} \in \lambda_j$ imply
$$m_k = m^{(k)} \nsim_{\lambda_j} \widetilde m^{(k')} = \widehat m_k,$$
i.e., in the k-th dimension, $m_k \nsim_{\lambda_j} \widehat m_k$, which further implies $\Upsilon_{(i,\lambda_j)}[\lambda_i^{(k)}] \nsim_{\lambda_j} \Upsilon_{(i,\lambda_j)}[\lambda_i^{(k')}]$.
In case (2), recall $N \geq 3$ by (5). Pick any $k'' \in \{1, \ldots, N\} \setminus \{k, k'\}$. Then, (22) implies
$$m_{k''} = \widetilde m^{(k)} \nsim_{\lambda_j} m^{(k)} = \widetilde m^{(k')} = \widehat m_{k''},$$
i.e., in the k''-th dimension, $m_{k''} \nsim_{\lambda_j} \widehat m_{k''}$, which further implies $\Upsilon_{(i,\lambda_j)}[\lambda_i^{(k)}] \nsim_{\lambda_j} \Upsilon_{(i,\lambda_j)}[\lambda_i^{(k')}]$.

A.2 The proof of Lemma 2

Fix any $(i, j, \lambda) \in I \times I \times \Lambda$ with $i \neq j$. Recall $\lambda_i \cap \lambda_j \neq \varnothing$ and $|\lambda_i| \geq 2$. Thus, we fix some $m \in \lambda_i \cap \lambda_j$, and some $\widetilde m \in \lambda_i \setminus \{m\}$, i.e., $\widetilde m \neq m$. Note that
$$m \nsim_{\lambda_j} \widetilde m, \tag{23}$$
whether $\widetilde m \in \lambda_j$ or $\widetilde m \notin \lambda_j$ is true.

Recall $N \geq \left|\mathcal{E}_i^{(\sigma^*,\rho^*)}\right|$ by (7). Label the elements in $\mathcal{E}_i^{(\sigma^*,\rho^*)}$ as $\widehat m^{(1)}, \widehat m^{(2)}, \ldots, \widehat m^{(K)}$, where $K = \left|\mathcal{E}_i^{(\sigma^*,\rho^*)}\right| \leq N$. Then, define $\Gamma_{(\lambda_i,\lambda_j)} : \mathcal{E}_i^{(\sigma^*,\rho^*)} \to (\lambda_i)^N$ as follows. For each $k \in \{1, 2, \ldots, K\}$,
$$\Gamma_{(\lambda_i,\lambda_j)}\left[\widehat m^{(k)}\right] = [m_l]_{l=1}^N \in M^N \ \text{ such that } \ m_l = \begin{cases} m, & \text{if } l = k; \\ \widetilde m, & \text{otherwise.}\end{cases}$$
That is, type $\lambda_i$ uses m to denote "yes" and $\widetilde m$ for "no." Furthermore, $\lambda_i$ associates each of the first K dimensions of the message $\Gamma_{(\lambda_i,\lambda_j)}[\widehat m^{(k)}]$ to one element in $\mathcal{E}_i^{(\sigma^*,\rho^*)}$, and $\lambda_i$ reveals whether he intends to send that element in the associated dimension. Precisely, to send the message $\widehat m^{(k)} \in \mathcal{E}_i^{(\sigma^*,\rho^*)}$, $\lambda_i$ says "yes" (i.e., m) in the k-th dimension, and "no" (i.e., $\widetilde m$) in all other dimensions.

For $k \neq k'$, we show $\Gamma_{(\lambda_i,\lambda_j)}[\widehat m^{(k)}] \nsim_{\lambda_j} \Gamma_{(\lambda_i,\lambda_j)}[\widehat m^{(k')}]$, as needed in (8). By the definition of $\Gamma_{(\lambda_i,\lambda_j)}$, we have
$$\Gamma_{(\lambda_i,\lambda_j)}\left[\widehat m^{(k)}\right] = [m_l]_{l=1}^N = \left(m_k = m,\ m_l = \widetilde m \text{ for } l \neq k\right);$$
$$\Gamma_{(\lambda_i,\lambda_j)}\left[\widehat m^{(k')}\right] = [\widehat m_l]_{l=1}^N = \left(\widehat m_{k'} = m,\ \widehat m_l = \widetilde m \text{ for } l \neq k'\right).$$
Since $k \neq k'$, (23) implies
$$m_k = m \nsim_{\lambda_j} \widetilde m = \widehat m_k,$$
i.e., in the k-th dimension, $m_k \nsim_{\lambda_j} \widehat m_k$, which further implies $\Gamma_{(\lambda_i,\lambda_j)}[\widehat m^{(k)}] \nsim_{\lambda_j} \Gamma_{(\lambda_i,\lambda_j)}[\widehat m^{(k')}]$.

A.3 Proof of Proposition 1

We use the following two lemmas to prove Proposition 1; their proofs can be found in Appendices A.3.1 and A.3.2.

Lemma 6 Suppose Assumption 2 holds. For any ε > 0, there exists δ > 0 such that
$$\forall t, t' \in T, \quad d(t,t') < \delta \Longrightarrow \max_{a \in A} u(t,a) - u(t,a^*) < \varepsilon, \quad \forall a^* \in \arg\max_{a \in A} u(t', a). \tag{24}$$

Lemma 7 For any game satisfying Assumption 2, there exists an optimal equilibrium $(\sigma^*, \rho^*)$ in $G = \left\langle I, M, T, \Lambda^*, \pi^*, A, (u_i)_{i\in I}, N\right\rangle$, i.e., such that $U(\sigma^*, \rho^*) \geq U(\sigma, \rho)$ for any strategy profile $(\sigma, \rho)$ in G.

Proof of Proposition 1: Fix any game satisfying Assumption 2 and any ε > 0. By Lemma 6, there exists δ > 0 such that
$$\forall t, t' \in T, \quad d(t,t') < \delta \Longrightarrow \max_{a \in A} u(t,a) - u(t,a^*) < \varepsilon, \quad \forall a^* \in \arg\max_{a \in A} u(t', a). \tag{25}$$

Since T is compact, it is totally bounded. Hence, there exists a positive integer K such that T can be partitioned into $\{E_1, \ldots, E_K\}$ with
$$t, t' \in E_k \Longrightarrow d(t, t') < \delta, \quad \forall k \in \{1, \ldots, K\}. \tag{26}$$
For each $k \in \{1, \ldots, K\}$, fix some $t_k \in E_k$ and some $a_k \in \arg\max_{a\in A} u(t_k, a)$. Then,
$$\sum_{k\in\{1,\ldots,K\}} \int_{t\in E_k} u(t, a_k)\, \pi_T(dt) \geq \sum_{k\in\{1,\ldots,K\}} \int_{t\in E_k} \left[\max_{a\in A} u(t,a) - \varepsilon\right] \pi_T(dt) \tag{27}$$
$$= \int_{t\in T} \max_{a\in A} u(t, a)\, \pi_T(dt) - \varepsilon,$$
where the inequality follows from (25) and (26).

Suppose $|M| \geq K$. Then, the expected utility $\sum_{k\in\{1,\ldots,K\}} \int_{t\in E_k} u(t, a_k)\, \pi_T(dt)$ can be achieved by a strategy profile: fix K messages $m_1, \ldots, m_K$; the sender sends $m_k$ if and only if $t \in E_k$; and the receiver plays $a_k$ if and only if he receives $m_k$. By Lemma 7, an optimal equilibrium exists; denote it by $(\sigma^*, \rho^*)$. Hence,
$$U(\sigma^*, \rho^*) \geq \sum_{k\in\{1,\ldots,K\}} \int_{t\in E_k} u(t, a_k)\, \pi_T(dt). \tag{28}$$
Furthermore,
$$\int_{t\in T} \max_{a\in A} u(t, a)\, \pi_T(dt) \geq U(\sigma^*, \rho^*). \tag{29}$$
Thus, (27), (28) and (29) imply
$$U(\sigma^*, \rho^*) \geq \int_{t\in T} \max_{a\in A} u(t, a)\, \pi_T(dt) - \varepsilon,$$
which completes the proof of Proposition 1.
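The partition construction above can be illustrated numerically. The following snippet is our own toy instance (T = [0, 1] with a uniform grid standing in for $\pi_T$, and common-interest quadratic utility $u(t,a) = -(t-a)^2$, which satisfies Assumption 2): with a fine enough partition, the K-message equilibrium comes within ε of the first-best payoff.

```python
# Partition T = [0, 1] into K cells; one message and one action per cell.
K = 100
cells = [(k / K, (k + 1) / K) for k in range(K)]
grid = [i / 10**4 for i in range(10**4)]   # grid approximation of pi_T

def payoff(actions):
    """Average utility when the receiver plays actions[k] on cell k."""
    total = 0.0
    for t in grid:
        k = min(int(t * K), K - 1)
        total += -(t - actions[k]) ** 2
    return total / len(grid)

# a_k = optimal action for a representative point of the cell (midpoint).
U_partition = payoff([(lo + hi) / 2 for lo, hi in cells])
first_best = 0.0        # max_a u(t, a) = 0 for every t
epsilon = 1e-3
assert first_best - U_partition < epsilon
```

The per-state loss is at most $(1/2K)^2$, so the gap shrinks quadratically as the partition is refined.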

A.3.1 Proof of Lemma 6

Since u is continuous and T, A are compact, u is uniformly continuous. Then, by Berge's Maximum Theorem, $\phi(t) \equiv \max_{a\in A} u(t, a)$ is continuous in $t \in T$. Since T is compact, $\phi(t)$ is uniformly continuous, and hence,
$$\forall \varepsilon > 0, \exists \delta > 0, \text{ such that } d(t,t') < \delta \Longrightarrow \left|\max_{a\in A} u(t,a) - \max_{a\in A} u(t',a)\right| < \frac{\varepsilon}{2}. \tag{30}$$

The uniform continuity of u implies
$$\forall \varepsilon > 0, \exists \delta > 0, \text{ such that} \tag{31}$$
$$d(t,t') < \delta \Longrightarrow \left|u(t, a^*) - \max_{a\in A} u(t',a)\right| = \left|u(t,a^*) - u(t',a^*)\right| < \frac{\varepsilon}{2}, \quad \forall a^* \in \arg\max_{a\in A} u(t',a).$$
Then, (30) and (31) imply
$$\forall \varepsilon > 0, \exists \delta > 0, \text{ such that } d(t,t') < \delta \Longrightarrow \max_{a\in A} u(t,a) - u(t, a^*) < \varepsilon, \quad \forall a^* \in \arg\max_{a\in A} u(t',a).$$

This completes the proof of Lemma 6.

A.3.2 Proof of Lemma 7

Suppose $|M| = n$. Define a function $\psi: A^n \to \mathbb{R}$ as follows:
$$\psi(a_1, \ldots, a_n) = \int_{t\in T} \max_{a\in\{a_1,\ldots,a_n\}} u(t, a)\, \pi_T(dt).$$
First, we show that ψ is uniformly continuous, i.e.,
$$\forall \varepsilon > 0, \exists \delta > 0, \text{ such that} \tag{32}$$
$$|\widehat a_k - \widetilde a_k| < \delta,\ \forall k \in \{1,\ldots,n\} \Longrightarrow |\psi(\widehat a_1, \ldots, \widehat a_n) - \psi(\widetilde a_1, \ldots, \widetilde a_n)| < \varepsilon.$$

Consider any $(\widehat a_1, \ldots, \widehat a_n)$ and $(\widetilde a_1, \ldots, \widetilde a_n)$ such that $\max_{k\in\{1,\ldots,n\}} |\widehat a_k - \widetilde a_k| < \delta$. For each $t \in T$, fix any $\widehat k(t) \in \arg\max_{k\in\{1,\ldots,n\}} u(t, \widehat a_k)$. We thus have
$$\psi(\widehat a_1, \ldots, \widehat a_n) = \int_{t\in T} u\left(t, \widehat a_{\widehat k(t)}\right) \pi_T(dt). \tag{33}$$
By uniform continuity of u,
$$\forall \varepsilon > 0, \exists \delta > 0, \text{ such that} \tag{34}$$
$$|\widehat a_k - \widetilde a_k| < \delta,\ \forall k \in \{1,\ldots,n\} \Longrightarrow \left|\int_{t\in T} u\left(t, \widehat a_{\widehat k(t)}\right)\pi_T(dt) - \int_{t\in T} u\left(t, \widetilde a_{\widehat k(t)}\right)\pi_T(dt)\right| < \varepsilon.$$
Furthermore, by the definition of $\psi(\widetilde a_1, \ldots, \widetilde a_n)$, we have
$$\psi(\widetilde a_1, \ldots, \widetilde a_n) \geq \int_{t\in T} u\left(t, \widetilde a_{\widehat k(t)}\right)\pi_T(dt). \tag{35}$$
Then, (33), (34) and (35) imply
$$\forall \varepsilon > 0, \exists \delta > 0, \text{ such that} \tag{36}$$
$$|\widehat a_k - \widetilde a_k| < \delta,\ \forall k \in \{1,\ldots,n\} \Longrightarrow \psi(\widehat a_1,\ldots,\widehat a_n) - \psi(\widetilde a_1,\ldots,\widetilde a_n) \leq \varepsilon.$$

If we exchange the roles of $(\widehat a_1, \ldots, \widehat a_n)$ and $(\widetilde a_1, \ldots, \widetilde a_n)$ and repeat the analysis, we get
$$\forall \varepsilon > 0, \exists \delta > 0, \text{ such that} \tag{37}$$
$$|\widehat a_k - \widetilde a_k| < \delta,\ \forall k \in \{1,\ldots,n\} \Longrightarrow \psi(\widetilde a_1,\ldots,\widetilde a_n) - \psi(\widehat a_1,\ldots,\widehat a_n) \leq \varepsilon.$$
Therefore, (36) and (37) imply (32), i.e., ψ is uniformly continuous.

Second, there exists
$$(a_1^*, \ldots, a_n^*) \in \arg\max_{(a_1,\ldots,a_n)\in A^n} \psi(a_1, \ldots, a_n), \tag{38}$$
due to compactness of $A^n$ and continuity of ψ, i.e.,
$$\int_{t\in T} \max_{a\in\{a_1^*,\ldots,a_n^*\}} u(t,a)\, \pi_T(dt) \geq \int_{t\in T} \max_{a\in\{a_1,\ldots,a_n\}} u(t,a)\, \pi_T(dt), \quad \forall (a_1,\ldots,a_n)\in A^n.$$
Third, recall that there are at most $|M| = n$ messages. Label the elements of M as $m_1, \ldots, m_n$, i.e., $M = \{m_1, \ldots, m_n\}$. For any fixed strategy profile $(\sigma, \rho)$, let $a_k \in A$ denote the action taken by the receiver upon getting $m_k$ under $(\sigma, \rho)$. Then, the expected utility of the players under $(\sigma, \rho)$ is at most $\int_{t\in T} \max_{a\in\{a_1,\ldots,a_n\}} u(t,a)\, \pi_T(dt)$.

Finally, $(a_1^*, \ldots, a_n^*)$ as defined in (38) corresponds to an equilibrium, denoted by $(\sigma^*, \rho^*)$, under which the players' expected utility is $\int_{t\in T}\max_{a\in\{a_1^*,\ldots,a_n^*\}} u(t,a)\,\pi_T(dt)$. To see this, define
$$\widehat E_k = \left\{t\in T : a_k^* \in \arg\max_{a\in\{a_1^*,\ldots,a_n^*\}} u(t,a)\right\}, \quad \forall k\in\{1,2,\ldots,n\}.$$
Then, define
$$E_1 = \widehat E_1 \quad \text{and} \quad E_k = \widehat E_k \setminus \left[\bigcup_{l=1}^{k-1}\widehat E_l\right], \quad \forall k\in\{2,\ldots,n\}.$$
As a result, $\{E_1, \ldots, E_n\}$ is a partition of T, and each $a_k^*$ is an optimal action for every $t \in E_k$. Thus, the following strategy profile is an equilibrium:
sender's strategy: send $m_k$ if and only if $t \in E_k$, $\forall k \in \{1, 2, \ldots, n\}$;
receiver's strategy: play $a_k^*$ if and only if he receives $m_k$, $\forall k \in \{1, 2, \ldots, n\}$.
The incentive compatibility of the sender is implied by the definition of $E_k$, and the incentive compatibility of the receiver is implied by $(a_1^*,\ldots,a_n^*) \in \arg\max_{(a_1,\ldots,a_n)\in A^n}\psi(a_1,\ldots,a_n)$.
To sum up, the last two points show the existence of an equilibrium $(\sigma^*, \rho^*)$ such that $U(\sigma^*, \rho^*) \geq U(\sigma, \rho)$ for any strategy profile $(\sigma, \rho)$.

A.4 Weak-language-barrier equilibria

We introduce the notion of weak-language-barrier equilibria (resp. weak-independent-language-barrier equilibria), which differ from language-barrier equilibria (resp. independent-language-barrier equilibria) only in one assumption:
$$|\lambda_i| \geq 2, \quad \forall (i, \lambda) \in I \times \Lambda. \tag{39}$$
That is, every language type must have at least two messages in any language-barrier equilibrium, but language types in a weak-language-barrier equilibrium may be endowed with just one single message.

Clearly, a language-barrier equilibrium is a weak-language-barrier equilibrium. Conversely, for any weak-language-barrier equilibrium, there is an outcome-equivalent language-barrier equilibrium, as summarized in the following lemma. Because of this, it is without loss of generality to focus on weak-language-barrier equilibria.

Lemma 8 For any weak-language-barrier equilibrium, there exists an outcome-equivalent language-barrier equilibrium. Furthermore, for any weak-independent-language-barrier equilibrium, there exists an outcome-equivalent independent-language-barrier equilibrium.

Proof: Fix any valid language-barrier game $(\Lambda, \pi)$, and any weak-language-barrier equilibrium
$$[\sigma: T\times\Lambda_1 \to \triangle(M),\ \rho: \Lambda_2\times M \to \triangle(A)].$$
Recall $M = \mathbb{R}$. Pick any disjoint $\overline M\ (\subseteq M)$ and $\underline M\ (\subseteq M)$ which are both homeomorphic to M, e.g., $\overline M = \left(0, \tfrac13\right)$ and $\underline M = \left(\tfrac13, 1\right)$. Let
$$\overline\gamma: M \to \overline M, \qquad \underline\gamma: M \to \underline M,$$
denote the homeomorphisms, and let $\overline\gamma^{-1}$ and $\underline\gamma^{-1}$ denote the inverse functions.

Define a new valid language-barrier game $(\widetilde\Lambda, \widetilde\pi)$:
$$\widetilde\Lambda_1 = \left\{\overline\gamma(\lambda_1)\cup\underline\gamma(\lambda_1) : \lambda_1\in\Lambda_1\right\};$$
$$\widetilde\Lambda_2 = \left\{\overline\gamma(\lambda_2)\cup\underline\gamma(\lambda_2) : \lambda_2\in\Lambda_2\right\};$$
$$\widetilde\pi(E) = \pi\left(\left\{[t,\lambda_1,\lambda_2] : [t,\ \overline\gamma(\lambda_1)\cup\underline\gamma(\lambda_1),\ \overline\gamma(\lambda_2)\cup\underline\gamma(\lambda_2)]\in E\right\}\right), \quad \forall E\subseteq T\times 2^M\times 2^M,$$
i.e., each sender language type $\lambda_1$ is transformed into a new type containing two copies of the original type, with the first copy transformed from $\lambda_1$ via $\overline\gamma$ and the second copy via $\underline\gamma$; a similar construction applies to the receiver's language types; and the new prior $\widetilde\pi$ inherits the distribution from the original prior π.

For any $\mu \in \triangle(M)$, define $\overline\gamma(\mu) \in \triangle(\overline M)$ as
$$\overline\gamma(\mu)[E] = \mu\left[\overline\gamma^{-1}[E]\right], \quad \forall E \subseteq \overline M,$$
i.e., any random message generated by µ is transformed into a message in $\overline M$ via $\overline\gamma$, and $\overline\gamma(\mu)$ is the distribution of the transformed message.

For each $\lambda_2 \in \Lambda_2$ such that $\lambda_2 \subsetneq M$, fix any $m^{\lambda_2}\in M\setminus\lambda_2$. Furthermore, if $\lambda_2 = M$, fix any $m^{\lambda_2}\in M$. The sole purpose of the construction of $m^{\lambda_2}$ is the measurability (with respect to $\lambda_2$) of $\widetilde\rho$ defined below.

We now define the outcome-equivalent language-barrier equilibrium
$$\left[\widetilde\sigma: T\times\widetilde\Lambda_1\to\triangle(M),\ \widetilde\rho: \widetilde\Lambda_2\times M\to\triangle(A)\right],$$
$$\widetilde\sigma[t,\ \overline\gamma(\lambda_1)\cup\underline\gamma(\lambda_1)] = \overline\gamma(\sigma[t,\lambda_1]),$$
$$\widetilde\rho[\overline\gamma(\lambda_2)\cup\underline\gamma(\lambda_2),\ m] = \begin{cases} \rho\left[\lambda_2,\ \overline\gamma^{-1}(m)\right], & \text{if } m\in\overline M;\\ \rho\left[\lambda_2,\ \underline\gamma^{-1}(m)\right], & \text{if } m\in\underline M;\\ \rho\left[\lambda_2,\ m^{\lambda_2}\right], & \text{otherwise.}\end{cases}$$
That is, a new sender type $\overline\gamma(\lambda_1)\cup\underline\gamma(\lambda_1)$ follows the strategy $\sigma[t,\lambda_1]$ of the old type $\lambda_1$, but transforms the (random) message into a message in $\overline M$ via $\overline\gamma$; a new receiver type $\overline\gamma(\lambda_2)\cup\underline\gamma(\lambda_2)$ first decodes the messages in $\overline M$ and $\underline M$ via $\overline\gamma^{-1}$ and $\underline\gamma^{-1}$, respectively, and then follows the strategies $\rho[\lambda_2, \overline\gamma^{-1}(m)]$ and $\rho[\lambda_2, \underline\gamma^{-1}(m)]$ of the old type $\lambda_2$.
First, with probability 1, the sender sends messages in $\overline M$. Second, the receiver treats $\overline M$ and $\underline M$ as transformed copies of the same set M (via $\overline\gamma$ and $\underline\gamma$, respectively). Hence, it is without loss of generality for the sender to send messages only in $\overline M$.22 Given this, $[\widetilde\sigma, \widetilde\rho]$ just replicates $[\sigma, \rho]$, and $[\widetilde\sigma, \widetilde\rho]$ inherits the incentive compatibility of the players from $[\sigma, \rho]$. Therefore, $[\widetilde\sigma, \widetilde\rho]$ is an outcome-equivalent language-barrier equilibrium.
A similar argument applies to weak-independent-language-barrier equilibria.

A.5 Proof of Lemma 3

In light of Lemma 8, it is without loss of generality for us to focus on weak-independent-language-barrier equilibria. Fix any noisy-talk game $(\varepsilon, \xi) \in [0,1]\times\triangle(M)$, and any noisy-talk equilibrium $([s: T\to\triangle(M)], [r: M\to\triangle(A)])$ in the game. Define a language-barrier game $(\Lambda, \pi)$ such that T and Λ are independent under π, and
$$\Lambda_1 = \{M\}\cup\{\{m\} : m\in M\};\qquad \Lambda_2 = \{M\};$$
$$\pi_\Lambda[\{M\}\times\{M\}] = 1-\varepsilon;$$
$$\pi_\Lambda[E\times\{M\}] = \varepsilon\cdot\xi[\{m : \{m\}\in E\}], \quad \forall E\subseteq 2^M\setminus\{M\}.$$
That is, the receiver understands all messages in M; with probability 1 − ε, the sender understands all messages in M, and with probability ε, the sender is endowed with a single message; conditional on the probability-ε event, the distribution follows ξ, with {m} replacing m.
Then, we define a weak-independent-language-barrier equilibrium
$$[\sigma: T\times\Lambda_1\to\triangle(M),\ \rho: \Lambda_2\times M\to\triangle(A)],$$

22 Any message in $\underline M$ has a corresponding message in $\overline M$ which plays the same role.

such that for every $(t, m) \in T\times M$,
$$\sigma(t, \lambda_1 = M) = s(t),$$
$$\sigma(t, \lambda_1 = \{m\}) = \delta_m,$$
$$\rho(\lambda_2 = M, m) = r(m),$$
where $\delta_m$ denotes the Dirac measure on m. Clearly, incentive compatibility for every $\lambda_1 = \{m\}$ is satisfied. Then, the incentive compatibility of the sender's language type $\lambda_1 = M$ and of the receiver's language type $\lambda_2 = M$ in $[\sigma, \rho]$ is inherited from the incentive compatibility of the sender and the receiver in the noisy-talk equilibrium (s, r), respectively. I.e., $[\sigma, \rho]$ is an outcome-equivalent weak-independent-language-barrier equilibrium. Finally, by Lemma 8, an outcome-equivalent independent-language-barrier equilibrium exists.

A.6 Proof of Lemma 4

Fix any valid language-barrier game $(\Lambda, \pi)$, and any independent-language-barrier equilibrium
$$[\sigma: T\times\Lambda_1\to\triangle(M),\ \rho: \Lambda_2\times M\to\triangle(A)].$$
Recall $p^{(\sigma,\rho)}: T\times\Lambda_1\times\Lambda_2\to\triangle(A)$ defined in (16):
$$p^{(\sigma,\rho)}(t,\lambda_1,\lambda_2)[E] = \int_M \rho(\lambda_2, m)[E]\, \sigma(t,\lambda_1)(dm), \quad \forall E\subseteq A,$$
i.e., $p^{(\sigma,\rho)}(t,\lambda_1,\lambda_2)$ is the ex-post action distribution induced by $[\sigma, \rho]$, given $(t, \lambda_1, \lambda_2)$. Then, define
$$\mathcal P^{(\sigma,\rho)}: T\to\triangle(A),$$
$$\mathcal P^{(\sigma,\rho)}(t)[E] = \int_\Lambda p^{(\sigma,\rho)}(t,\lambda_1,\lambda_2)[E]\, \pi_\Lambda[d\lambda_1, d\lambda_2], \quad \forall E\subseteq A, \tag{40}$$
i.e., $\mathcal P^{(\sigma,\rho)}(t)$ is the ex-post action distribution induced by $[\sigma, \rho]$, given t. We now show that $\mathcal P^{(\sigma,\rho)}: T\to\triangle(A)$ defined above is a mediation equilibrium. First, since $[\sigma, \rho]$ is a language-barrier equilibrium, (17) in Definition 6 implies: $\forall (t, \lambda_1) \in T\times\Lambda_1$, $\forall \sigma': T\times\Lambda_1\to\triangle(M)$,
$$\int_{\Lambda_2}\left(\int_{a\in A} u_1(t,a)\, p^{(\sigma,\rho)}(t,\lambda_1,\lambda_2)[da] - \int_{a\in A} u_1(t,a)\, p^{(\sigma',\rho)}(t,\lambda_1,\lambda_2)[da]\right)\pi[d\lambda_2\mid t,\lambda_1] \geq 0. \tag{41}$$
Recall that T and Λ are independent, and hence (41) reduces to
$$\int_{\Lambda_2}\left(\int_{a\in A} u_1(t,a)\, p^{(\sigma,\rho)}(t,\lambda_1,\lambda_2)[da] - \int_{a\in A} u_1(t,a)\, p^{(\sigma',\rho)}(t,\lambda_1,\lambda_2)[da]\right)\pi[d\lambda_2\mid\lambda_1] \geq 0. \tag{42}$$
Given the definition of $\mathcal P^{(\sigma,\rho)}$ in (40), if we integrate (42) over $\Lambda_1$, we get
$$\int_{a\in A} u_1[t,a]\, \mathcal P^{(\sigma,\rho)}(t)[da] \geq \int_{a\in A} u_1[t,a]\, \mathcal P^{(\sigma',\rho)}(t)[da], \quad \forall t, \forall\sigma'. \tag{43}$$
Finally, for every $t' \in T$, consider $\sigma'(t) \equiv \sigma(t')$, and (43) becomes
$$\int_{a\in A} u_1[t,a]\, \mathcal P^{(\sigma,\rho)}(t)[da] \geq \int_{a\in A} u_1[t,a]\, \mathcal P^{(\sigma,\rho)}(t')[da], \quad \forall t, t'\in T.$$

Second, since $[\sigma, \rho]$ is a language-barrier equilibrium, (18) in Definition 6 implies: $\forall \lambda_2 \in \Lambda_2$, $\forall \rho': \Lambda_2\times M\to\triangle(A)$,
$$\int_{T\times\Lambda_1}\left(\int_{a\in A} u_2(t,a)\, p^{(\sigma,\rho)}(t,\lambda_1,\lambda_2)[da] - \int_{a\in A} u_2(t,a)\, p^{(\sigma,\rho')}(t,\lambda_1,\lambda_2)[da]\right)\pi[(dt, d\lambda_1)\mid\lambda_2] \geq 0. \tag{44}$$
Given the definition of $\mathcal P^{(\sigma,\rho)}$ in (40), if we integrate (44) over $\Lambda_2$, we get
$$\int_{t\in T}\left[\int_{a\in A} u_2[t,a]\, \mathcal P^{(\sigma,\rho)}(t)(da)\right]\pi_T[dt] \geq \int_{t\in T}\left[\int_{a\in A} u_2(t,a)\, \mathcal P^{(\sigma,\rho')}(t)(da)\right]\pi_T[dt], \quad \forall\rho',$$
which further implies: $\forall \iota: A\to A$,
$$\int_{t\in T}\left[\int_{a\in A} u_2[t,a]\, \mathcal P^{(\sigma,\rho)}(t)(da)\right]\pi_T[dt] \geq \int_{t\in T}\left[\int_{a\in A} u_2[t,\iota(a)]\, \mathcal P^{(\sigma,\rho)}(t)(da)\right]\pi_T[dt].$$
Therefore, $\mathcal P^{(\sigma,\rho)}: T\to\triangle(A)$ defined above is a mediation equilibrium.

A.7 Proof of Lemma 5

Fix any arbitration equilibrium with moral hazard $[p: T\to\triangle(A)]$, i.e., $\forall \iota: A\to A$,
$$\int_{t\in T}\left[\int_{a\in A} u_2[t,a]\, p(t)(da)\right]\pi_T[dt] \geq \int_{t\in T}\left[\int_{a\in A} u_2[t,\iota(a)]\, p(t)(da)\right]\pi_T[dt]. \tag{45}$$
In light of Lemma 8, it is without loss of generality for us to focus on weak-language-barrier equilibria. Recall $M = A = \mathbb{R}$. Define a language-barrier game $(\Lambda, \pi)$ such that
$$\Lambda_1 = \{\{a\} : a\in A = M\},\qquad \Lambda_2 = \{M\},$$
$$\pi[E] = \int_{t\in T} p(t)\left[\{a : (t, \{a\}, M)\in E\}\right]\pi_T[dt], \quad \forall E\subseteq T\times 2^M\times 2^M,$$
i.e., the receiver has a unique language type M, who understands all messages; the sender's language types have the form {a} for $a\in A = M$; conditional on payoff type t, $\pi[\lambda_1 = \{a\}, \lambda_2 = M \mid t]$ inherits the distribution from p(t)[a], with $\lambda_1 = \{a\}$ replacing a.
Define $[\sigma: T\times\Lambda_1\to\triangle(M),\ \rho: \Lambda_2\times M\to\triangle(A)]$ as follows:
$$\sigma[t, \lambda_1 = \{a\}] = \delta_a, \quad \forall a\in A = M,$$
$$\rho[\lambda_2 = M, m = a] = \delta_a, \quad \forall a\in A = M,$$
where $\delta_a$ is the Dirac measure on a. Clearly, incentive compatibility of each sender language type {a} is satisfied. The incentive compatibility of the receiver follows from (45). More specifically, $p^{(\sigma,\rho)}: T\times\Lambda_1\times\Lambda_2\to\triangle(A)$ defined in (16) takes the value
$$p^{(\sigma,\rho)}[t, \lambda_1 = \{a\}, \lambda_2 = M] = \delta_a.$$
Hence, (45) implies: $\forall \lambda_2 \in \Lambda_2$, $\forall \rho': \Lambda_2\times M\to\triangle(A)$,
$$\int_{T\times\Lambda_1}\left(\int_{a\in A} u_2(t,a)\, p^{(\sigma,\rho)}(t,\lambda_1,\lambda_2)[da] - \int_{a\in A} u_2(t,a)\, p^{(\sigma,\rho')}(t,\lambda_1,\lambda_2)[da]\right)\pi[(dt, d\lambda_1)\mid\lambda_2] \geq 0,$$
i.e., incentive compatibility of the receiver is satisfied, and $[\sigma, \rho]$ is an outcome-equivalent weak-language-barrier equilibrium. Finally, by Lemma 8, an outcome-equivalent independent-language-barrier equilibrium exists.

A.8 Analysis of Example 2

Recall
$$u_1(a,t) = -\left(a - t - \frac34\right)^2;\qquad u_2(a,t) = -(a-t)^2;\qquad \mu_T(\{0\}) = \mu_T(\{1\}) = \frac12.$$

First, we consider $\hat\Phi[u_1,u_2] \equiv u_1$, and show $\hat\Phi^{LB} \leq -\frac{9}{16}$. Fix any language-barrier equilibrium
$$[\sigma: T\times\Lambda_1\to\triangle(M),\ \rho: \Lambda_2\times M\to\triangle(A)].$$
Since the receiver has the strictly concave quadratic utility $u_2(a,t) = -(a-t)^2$, his best reply is to take the pure action $a = \mathbb E\, t$, where the expectation is taken over his posterior belief on t. Hence,
$$\rho(\lambda_2, \sigma(t,\lambda_1)) = \mathbb E[t\mid\lambda_2, \sigma(t,\lambda_1)].$$
By the law of iterated expectations, we have
$$\mathbb E_{(t,\lambda)\sim\pi}[\rho(\lambda_2,\sigma(t,\lambda_1))] = \mathbb E_{t\sim\pi_T}[t],$$
or equivalently,
$$\mathbb E[a\mid(\sigma,\rho)] = \mathbb E_{t\sim\pi_T}[t], \tag{46}$$
where $\mathbb E[a\mid(\sigma,\rho)]$ denotes the expected value of the equilibrium actions. Furthermore, let $\mathbb E[u_1(a,t)\mid(\sigma,\rho)]$ and $\mathbb E[u_2(a,t)\mid(\sigma,\rho)]$ denote the expected utilities of the two players. We thus have
$$\mathbb E[u_1(a,t)\mid(\sigma,\rho)] = \mathbb E\left[-\left(a-t-\frac34\right)^2\,\middle|\,(\sigma,\rho)\right] = \mathbb E\left[-(a-t)^2\,\middle|\,(\sigma,\rho)\right] - \frac{9}{16} = \mathbb E[u_2(a,t)\mid(\sigma,\rho)] - \frac{9}{16},$$
where the second equality follows from (46). Then, $\mathbb E[u_2(a,t)\mid(\sigma,\rho)] \leq 0$ implies
$$\mathbb E[u_1(a,t)\mid(\sigma,\rho)] \leq -\frac{9}{16}.$$
Since $(\sigma, \rho)$ is arbitrary, we conclude $\hat\Phi^{LB} \leq -\frac{9}{16}$.
Second, we consider $\tilde\Phi[u_1,u_2]\equiv u_2$, and prove $\tilde\Phi^{A\text{-}AS} \leq -\frac{1}{128}$ by contradiction. Suppose otherwise, i.e., there exists an arbitration equilibrium with adverse selection $[p: T\to\triangle(A)]$ such that
$$\frac12\,\mathbb E_{a\sim p(0)}\left[-(a-0)^2\right] + \frac12\,\mathbb E_{a\sim p(1)}\left[-(a-1)^2\right] > -\frac{1}{128},$$
which implies

$$\mathbb E_{a\sim p(0)}\left[(a-0)^2\right] < \frac{1}{64} \tag{47}$$
$$\text{and}\quad \mathbb E_{a\sim p(1)}\left[(a-1)^2\right] < \frac{1}{64}. \tag{48}$$
Note that
$$\mathbb E_{a\sim p(0)}\left[(a-0)^2\right] = \mathbb E_{a\sim p(0)}\left[\left(a-\mathbb E_{a\sim p(0)}[a]\right)^2\right] + \left(\mathbb E_{a\sim p(0)}[a]-0\right)^2. \tag{49}$$
Then, (47) and (49) imply
$$\left|\mathbb E_{a\sim p(0)}[a] - 0\right| \leq \sqrt{\frac{1}{64}}. \tag{50}$$
A similar argument shows
$$\left|\mathbb E_{a\sim p(1)}[a] - 1\right| \leq \sqrt{\frac{1}{64}}. \tag{51}$$
Now, consider payoff state t = 0. If the sender sends message 0, the receiver follows p(0). As a result, the sender's expected utility is
$$\mathbb E_{a\sim p(0)}\left[-\left(a-\frac34\right)^2\right] = \mathbb E_{a\sim p(0)}\left[-(a-0)^2\right] - \frac{9}{16} + \frac32\,\mathbb E_{a\sim p(0)}[a] \leq -\frac{9}{16} + \frac32\sqrt{\frac{1}{64}} \leq -\frac38, \tag{52}$$
where the second inequality follows from (50). At payoff state t = 0, if the sender sends message 1, the receiver follows p(1). As a result, the sender's expected utility is
$$\mathbb E_{a\sim p(1)}\left[-\left(a-\frac34\right)^2\right] = \mathbb E_{a\sim p(1)}\left[-\left((a-1)+\frac14\right)^2\right] = -\mathbb E_{a\sim p(1)}\left[(a-1)^2\right] - \frac{1}{16} - \frac12\,\mathbb E_{a\sim p(1)}[a-1] \geq -\frac{1}{64} - \frac{1}{16} - \frac12\sqrt{\frac{1}{64}} \geq -\frac{3}{16}, \tag{53}$$
where the first inequality follows from (48) and the second inequality follows from (51). Hence, (52) and (53) imply that the sender strictly prefers message 1 to message 0 at t = 0, contradicting $[p: T\to\triangle(A)]$ being an arbitration equilibrium with adverse selection.
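The bounds in (50)-(53) can be sanity-checked numerically (our own check, not part of the argument): the worst-case utility from message 0 is strictly below the worst-case guarantee from message 1, so the sender's deviation is strict.

```python
import math

# Sender's utility at payoff state t = 0 in Example 2 (bias 3/4).
u1 = lambda a: -(a - 3/4) ** 2

# Bound (52): utility from message 0, given E[(a-0)^2] < 1/64.
bound_msg0 = -9/16 + (3/2) * math.sqrt(1/64)          # = -3/8
# Bound (53): utility from message 1, given E[(a-1)^2] < 1/64.
bound_msg1 = -1/64 - 1/16 - (1/2) * math.sqrt(1/64)   # = -9/64

assert bound_msg0 == -3/8
assert bound_msg1 >= -3/16
assert bound_msg1 > bound_msg0    # message 1 strictly preferred at t = 0

# Concrete witness with point masses at the centers a = 0 and a = 1:
assert u1(1) > u1(0)              # -1/16 > -9/16
```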

A.9 Analysis of Example 3

Recall

$$u_S = -\left(a_R - t_S - \frac14\right)^2 \quad\text{and}\quad u_R = -(a_R - t_S)^2,$$
$$\mu(\{0\}) = \mu\left(\left\{\frac{35}{72}\right\}\right) = \mu(\{1\}) = \frac13.$$

In what follows, we sometimes write $z = \frac{35}{72}$ to economize on notation. Consider language barriers as described below, where T and Λ are independently distributed:
$$\Lambda_1 = \left\{\lambda^1 = \left\{0, \tfrac12, 1\right\},\ \lambda^2 = \left\{0, \tfrac12\right\}\right\};$$
$$\pi_\Lambda(\lambda^1) = \frac{35}{36} \quad\text{and}\quad \pi_\Lambda(\lambda^2) = \frac{1}{36}.$$
Consider the pure-strategy independent-language-barrier equilibrium (h, g), with $[h(t,\lambda)\in\lambda]_{(t,\lambda)\in T\times\Lambda}$ and $[g(m)\in A]_{m\in M}$, defined as follows:
$$h(0,\lambda^1) = 0;\quad h(z,\lambda^1) = \tfrac12;\quad h(1,\lambda^1) = 1;$$
$$h(0,\lambda^2) = 0;\quad h(z,\lambda^2) = \tfrac12;\quad h(1,\lambda^2) = \tfrac12;$$
$$g(m) = m \ \text{ for } m\in M.$$

1 1 35 2 1 1 1 2 37 1 = 0.002379 (54) 3  2 72 36  3 2 3 72 72 '       Given quadratic utility, it is easy to see that, in any language-barrier equilibrium, the sender’s expected utility differs from the receiver’s expected utility by a constant de- termined by the “bias” (see the discussed in Section A.8). Furthermore, any noisy-talk equilibrium can be transformed to an outcome equivalent language-barrier equilibrium (see Lemma 3). Therefore, it is without of loss generality for us to compare only the the receiver’s expected utility. In particular, we show the expected utility of the receiver in any noisy-talk equilibrium is less that that of (h, g) constructed above.

Now, consider any noisy-talk equilibrium associated with $(\varepsilon, \xi)\in[0,1]\times\triangle(M)$, i.e., with probability ε, the receiver, instead of getting the message from the sender, gets an exogenously chosen noise message which is generated, independently of the sender's message, according to the distribution ξ. Because of the independence, conditional on noise, payoff types are uniformly distributed, with mean $\frac{z+1}{3} = \frac{107}{216} < \frac12$. I.e., conditional on noise, the best strategy for the receiver is to take the action $\frac{z+1}{3}$, and the welfare loss induced by the noise is at least
$$\frac13\left[\left(\frac{z+1}{3}-0\right)^2 + \left(\frac{z+1}{3}-z\right)^2 + \left(\frac{z+1}{3}-1\right)^2\right] = \frac29\left(z^2-z+1\right) > \frac16.$$
Hence, the total welfare loss induced by the noise is larger than $\frac16\cdot\varepsilon$, which, together with (54), implies that (h, g) dominates the noisy-talk equilibrium if
$$\frac{37}{3\cdot72\cdot72} \leq \frac{\varepsilon}{6} \iff \varepsilon \geq \frac{37}{72\cdot36}.$$
Hence, for a noisy-talk equilibrium not to be dominated, we must have
$$\varepsilon < \frac{37}{72\cdot36} < \frac{1}{40}. \tag{55}$$
By the revelation principle, a noisy-talk equilibrium can always be transformed into one in which the sender uses a (mixed) strategy of recommending actions and the receiver follows the recommended actions; furthermore, it is a best reply for the sender to recommend the designated actions, and, conditional on receiving a recommended action, it is a best reply for the receiver to follow it. We now prove our result by contradiction in 7 steps, i.e., we assume there is such an equilibrium in which the receiver's expected utility is larger than $-\frac{37}{3\cdot72\cdot72}$ (as calculated in (54)).
Given the quadratic utility function, the receiver's best reply in an equilibrium is the expectation of his posterior belief on t upon receiving a message. Given the sender's quadratic utility function, each payoff type t has at most two best actions to recommend in an equilibrium (i.e., one smaller than his ideal point, and the other larger than it).

23 The ideal points of $t = 0, \frac{35}{72}, 1$ are $\frac14, \frac{35}{72}+\frac14, \frac54$, and it is easy to check that h(t, λ) is consistent with their preferences. Furthermore, $\mathbb E(t\mid m=0) = 0$, $\mathbb E(t\mid m=1) = 1$, and $\mathbb E\left(t\,\middle|\, m=\frac12\right) = \frac{\frac13\cdot\frac{35}{72} + \frac13\cdot\frac{1}{36}}{\frac13 + \frac13\cdot\frac{1}{36}} = \frac12$.
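Both the per-unit-of-noise loss bound and the threshold in (55) can be checked exactly (our own verification in rational arithmetic):

```python
from fractions import Fraction as F

z = F(35, 72)
a_noise = (z + 1) / 3                 # receiver's best reply to noise
states = [F(0), z, F(1)]
loss = sum(F(1, 3) * (a_noise - t) ** 2 for t in states)

assert loss == F(2, 9) * (z * z - z + 1)
assert loss > F(1, 6)                 # strict, since z != 1/2

# (55): epsilon must satisfy epsilon/6 < 37/(3*72*72).
eps_bar = 6 * F(37, 3 * 72 * 72)
assert eps_bar == F(37, 2592) and eps_bar < F(1, 40)
```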

Step 1: We show that, with positive probability, a sender of type $t\in\{0, z, 1\}$ recommends an action in the interval $\left(t-\frac{1}{11}, t+\frac{1}{11}\right)$. Suppose otherwise. Then, given $\varepsilon < \frac{1}{40}$, the total welfare loss for the receiver (at payoff state t) is at least
$$\frac{1-\frac{1}{40}}{3}\cdot\left(\frac{1}{11}\right)^2 = \frac{13}{4840} \approx 0.0026859,$$
which is larger than $\frac{37}{3\cdot72\cdot72}\approx 0.002379$, i.e., the receiver's welfare loss in (h, g) (see (54)), a contradiction.
Furthermore, since the ideal point of the sender of type t is $t+\frac14$ and $t+\frac{1}{11} < t+\frac14$, type t must have a unique action in $\left(t-\frac{1}{11}, t+\frac{1}{11}\right)$ to recommend in the equilibrium. Let $a^t\in\left(t-\frac{1}{11}, t+\frac{1}{11}\right)$ denote the action recommended by type $t\in\{0, z, 1\}$.
Step 2: A sender of type t must recommend $a^t$ with probability larger than $\frac{19}{20}$. Suppose otherwise, i.e., type t recommends another action, denoted by $\hat a^t$, with probability at least $\frac{1}{20}$. To make both actions best replies, they must be at the same distance from the ideal point $t+\frac14$. Furthermore, since $a^t < t+\frac{1}{11}$, we conclude $\hat a^t > 2\left(t+\frac14\right) - \left(t+\frac{1}{11}\right) = t + \frac12 - \frac{1}{11}$. Then the total welfare loss for the receiver due to type t recommending $\hat a^t$ is at least
$$\frac{1}{20}\cdot\frac{1-\frac{1}{40}}{3}\cdot\left(\frac12-\frac{1}{11}\right)^2 = \frac{1053}{387200} \approx 0.0027195,$$
which is larger than $\frac{37}{3\cdot72\cdot72}\approx 0.002379$, i.e., the receiver's welfare loss in (h, g) (see (54)), a contradiction.
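The two constants used in Steps 1 and 2 can be verified exactly (our check):

```python
from fractions import Fraction as F

loss_hg = F(37, 3 * 72 * 72)    # receiver's loss in (h, g), eq. (54)

# Step 1: loss if type t never recommends within 1/11 of t (eps < 1/40).
loss_step1 = (1 - F(1, 40)) * F(1, 3) * F(1, 11) ** 2
assert loss_step1 == F(13, 4840) and loss_step1 > loss_hg

# Step 2: loss if the alternative recommendation a_hat^t, at distance at
# least 1/2 - 1/11 from t, is used with probability at least 1/20.
loss_step2 = F(1, 20) * (1 - F(1, 40)) * F(1, 3) * (F(1, 2) - F(1, 11)) ** 2
assert loss_step2 == F(1053, 387200) and loss_step2 > loss_hg
```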

5 Step 3: the sender of type t = 1 has an ideal point 4 > 1. As a result, she has a unique best recommendation in the equilibrium, which is the largest action recommended in the equilibrium — this is a1. Furthermore, let a denote the second largest action recom- mended in the equilibrium, i.e., a < a1. Since type t = 1 never recommends a, only type 1 zb+1 1 t = z < 2 , type t = 0 and the noise (with mean 3 < 2 ) may recommend a. As a result, b b 1 1 a = E (t m = a) < < z + . b j 2 4 I.e., az a < z + 1 , where z +b1 is the idealb point for t = z. Therefore, a = az.  4 4 Stepb 4: we calculate an upper bound for az = E (t m = az). Noteb 0 < z < z+1 j 3 z+1 z and that only the noise (with mean 3 ) and types t = 0, t = z may recommend a . We would increase the posterior expectation of t if type t = 0 is not allowed to recommend az. Moreover, to further increase the expectation, we should reduce the probability of type z+1 t = z and increase the probability of noise, due to z < 3 . To sum, we have

$$a^z = E(t \mid m = a^z) \leq \frac{\frac{19}{20} \cdot \frac{1-\varepsilon}{3} \cdot z + \varepsilon \cdot \frac{z+1}{3}}{\frac{19}{20} \cdot \frac{1-\varepsilon}{3} + \varepsilon} \leq \frac{\frac{19}{20} \cdot \frac{1-\frac{1}{40}}{3} \cdot \frac{35}{72} + \frac{1}{40} \cdot \frac{\frac{35}{72}+1}{3}}{\frac{19}{20} \cdot \frac{1-\frac{1}{40}}{3} + \frac{1}{40}} = \frac{28075}{57672} < 0.4869, \quad (56)$$

where the second inequality follows from $z = \frac{35}{72}$ and $\varepsilon < \frac{1}{40}$ (see (55)).
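The two inequalities in (56) can likewise be confirmed with exact fractions; a sketch under our own naming, for verification only:

```python
from fractions import Fraction as F

eps, z = F(1, 40), F(35, 72)

# Minimal ex-ante mass that type z places on a^z (Step 2: probability > 19/20).
mass_z = F(19, 20) * (1 - eps) / 3

# Upper bound on E(t | m = a^z): type 0 removed, noise mass maximized at eps.
a_z_upper = (mass_z * z + eps * (z + 1) / 3) / (mass_z + eps)

assert a_z_upper == F(28075, 57672)
assert float(a_z_upper) < 0.4869
```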

Step 5: the sender of type $t = 0$ has an ideal point of $\frac{1}{4}$, and we have shown $a^0 < \frac{1}{4} < a^z$. Hence, to make $a^0$ a best recommendation for type $t = 0$, we must have
$$\frac{1}{4} - a^0 \leq a^z - \frac{1}{4}, \quad (57)$$
i.e., $a^0$ is closer to the ideal point $\frac{1}{4}$ than $a^z$ is. Then, (56) and (57) imply

$$a^0 > 0.0131. \quad (58)$$
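The step from (56) and (57) to (58) is a one-line computation; an exact check (ours, informal):

```python
from fractions import Fraction as F

a_z_upper = F(28075, 57672)          # the bound established in (56)
a0_lower = F(1, 2) - a_z_upper       # (57) rearranged: a^0 >= 1/2 - a^z

assert a0_lower == F(761, 57672)     # approximately 0.013195
assert float(a0_lower) > 0.0131      # hence (58)
```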

Step 6: let $\gamma$ denote the ex-ante probability that the noise generates the recommendation $a^0$, and we show $\gamma < \frac{1}{130}$. Suppose otherwise. Recall $a^0 < \frac{1}{11}$. Then, the total welfare loss due to the noise recommending $a^0$ is at least

$$\gamma \left[\frac{1}{3}\left(z - \frac{1}{11}\right)^2 + \frac{1}{3}\left(1 - \frac{1}{11}\right)^2\right] \geq \frac{1}{130} \left[\frac{1}{3}\left(\frac{35}{72} - \frac{1}{11}\right)^2 + \frac{1}{3}\left(1 - \frac{1}{11}\right)^2\right] = \frac{616369}{244632960} \simeq 0.002519,$$
which is larger than $\frac{37}{3 \times 72 \times 72} \simeq 0.002379$, i.e., the receiver's welfare loss in $(h, g)$ (see (54)), a contradiction.
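A quick exact-arithmetic check of the Step 6 bound (ours, for verification only):

```python
from fractions import Fraction as F

z, gamma = F(35, 72), F(1, 130)

# Conditional on noise, the state is 0, z, or 1 with probability 1/3 each;
# at states z and 1 the receiver's action a^0 < 1/11 loses at least
# (z - 1/11)^2 and (1 - 1/11)^2 respectively.
loss_noise = gamma * (F(1, 3) * (z - F(1, 11)) ** 2
                      + F(1, 3) * (1 - F(1, 11)) ** 2)

assert loss_noise == F(616369, 244632960)
assert loss_noise > F(37, 3 * 72 * 72)   # exceeds the (h, g) welfare loss
```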

Step 7: we have $a^0 < a^z < z + \frac{1}{4} < a^1 < \frac{5}{4}$, where $z + \frac{1}{4}$ and $\frac{5}{4}$ are the ideal points of types $t = z$ and $t = 1$ respectively. As a result, types $t = z$ and $t = 1$ do not recommend $a^0$, i.e., only type $t = 0$ and the noise may recommend $a^0$. By Step 2 above, the sender of type $t = 0$ recommends $a^0$ with probability larger than $\frac{19}{20}$ (which corresponds to ex-ante probability $\frac{19}{20} \cdot \frac{1-\varepsilon}{3}$). Hence, we have
$$a^0 = E(t \mid m = a^0) \leq \frac{\frac{19}{20} \cdot \frac{1-\varepsilon}{3} \cdot 0 + \gamma \cdot \frac{z+1}{3}}{\frac{19}{20} \cdot \frac{1-\varepsilon}{3} + \gamma} \leq \frac{\frac{1}{130} \cdot \frac{\frac{35}{72}+1}{3}}{\frac{19}{20} \cdot \frac{1-\frac{1}{40}}{3} + \frac{1}{130}} = \frac{8560}{710856} \simeq 0.01204, \quad (59)$$
where the second inequality follows from $z = \frac{35}{72}$ and $\gamma < \frac{1}{130}$. In particular, (58) contradicts (59).
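Finally, the bound in (59) and its clash with (58) can be confirmed exactly (an informal check with our own variable names):

```python
from fractions import Fraction as F

eps, z, gamma = F(1, 40), F(35, 72), F(1, 130)

# Minimal ex-ante mass that type 0 places on a^0 (Step 2), versus noise mass
# at most gamma < 1/130; type 0 contributes value 0, the noise mean (z+1)/3.
mass_0 = F(19, 20) * (1 - eps) / 3
a0_upper = (gamma * (z + 1) / 3) / (mass_0 + gamma)

assert a0_upper == F(8560, 710856)
assert float(a0_upper) < 0.0131   # contradicts the lower bound in (58)
```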

References

ARROW, K. J. (1975): The Limits of Organization. New York, NY: Norton.

BATTAGLINI, M. (2002): “Multiple Referrals and Multidimensional Cheap Talk,” Econometrica, 70, 1379–1401.

BLUME, A. (2015): “Failure of Common Knowledge of Language in Common-Interest Communication Games,” Mimeo.

BLUME, A., AND O. BOARD (2010): “Language Barriers,” Mimeo.

(2013): “Language Barriers,” Econometrica, 81, 781–812.

BLUME, A., O. BOARD, AND K. KAWAMURA (2007): “Noisy Talk,” Theoretical Economics, 2, 395–440.

CHAKRABORTY, A., AND R. HARBAUGH (2007): “Comparative Cheap Talk,” Journal of Economic Theory, 132, 70–94.

CRYSTAL, D. (2006): How Language Works. Penguin.

CRAWFORD, V., AND J. SOBEL (1982): “Strategic Information Transmission,” Econometrica, 50, 1431–1451.

CRÉMER, J., L. GARICANO, AND A. PRAT (2007): “Language and the Theory of the Firm,” Quarterly Journal of Economics, 122, 373–407.

CURRAN, K., AND M. CASEY (2006): “Expressing Emotion in Electronic Mail,” Kybernetes, 35, 616–631.

VAKOCH, D. A. (ed.) (2011): Communication with Extraterrestrial Intelligence. Stony Brook, NY: SUNY Press.

DESSEIN, W. (2002): “Authority and Communication in Organizations,” Review of Economic Studies, 69, 811–838.

FARRELL, J. (1993): “Meaning and Credibility in Cheap-Talk Games,” Games and Economic Behavior, 5, 514–531.

GANGULY, C., AND I. RAY (2011): “Simple Mediation in a Cheap-Talk Game,” University of Birmingham, Department of Economics Discussion Paper 05-08RR.

GARICANO, L., AND A. PRAT (2013): Organizational Economics with Cognitive Costs. Econometric Society Monographs, Cambridge University Press, pp. 342–388.

GIOVANNONI, F., AND D. SEIDMANN (2007): “Secrecy, Two-Sided Bias and the Value of Evidence,” Games and Economic Behavior, 59, 296–315.

GOLTSMAN, M., J. HÖRNER, G. PAVLOV, AND F. SQUINTANI (2009): “Mediation, Arbitration and Negotiation,” Journal of Economic Theory, 144, 1397–1420.

HAGENBACH, J., F. KOESSLER, AND E. PEREZ-RICHET (2014): “Certifiable Pre-Play Communication: Full Disclosure,” Econometrica, 82, 1093–1131.

KAMENICA,E., AND M.GENTZKOW (2011): “Bayesian Persuasion,” American Economic Review, 101, 2590–2615.

KARTIK, N. (2009): “Strategic Communication with Lying Costs,” Review of Economic Studies, 76, 1359–1395.

KRISHNA, V., AND J. MORGAN (2004): “The Art of Conversation: Eliciting Information from Experts through Multi-Stage Communication,” Journal of Economic Theory, 117, 147–179.

(2008): “Contracting for Information under Imperfect Commitment,” The Rand Journal of Economics, 39, 905–925.

LEVY, G., AND R. RAZIN (2007): “On the Limits of Communication in Multidimensional Cheap Talk: A Comment,” Econometrica, 75, 885–893.

MCNAIR, B. (2011): An Introduction to Political Communication. Taylor and Francis.

MILGROM, P. R. (1981): “Good News and Bad News: Representation Theorems and Applications,” The Bell Journal of Economics, 12, 380–391.

MORRIS, S. (2001): “Political Correctness,” Journal of Political Economy, 109, 231–265.

MYERSON, R. (1991): Game Theory: Analysis of Conflict. Harvard University Press.

OTTAVIANI, M., AND P. N. SORENSEN (2006): “Professional Advice,” Journal of Economic Theory, 126, 120–142.

ROSS, S. E., AND C.-T. LIN (2003): “The Effects of Promoting Patient Access to Medical Records: A Review,” Journal of the American Medical Informatics Association, 10, 129–138.

SAGAN, C. (1985): Contact. Simon & Schuster.

SCHARFSTEIN, D., AND J. STEIN (1990): “Herd Behavior and Investment,” American Economic Review, 80, 465–479.

SEIDMANN, D., AND E. WINTER (1997): “Strategic Information Transmission with Verifiable Messages,” Econometrica, 65, 163–169.

SOBEL, J. (1985): “A Theory of Credibility,” Review of Economic Studies, 52, 557–573.

(2015): “Broad Terms and Organizational Codes,” Mimeo.

SPENCE, M. (1973): “Job Market Signaling,” The Quarterly Journal of Economics, 87, 355–374.

THOMSON, W. (2001): A Guide for the Young Economist. MIT Press.
