
Journal of Dynamics and Games
doi:10.3934/jdg.2014.1.45
Volume 1, Number 1, January 2014, pp. 45–56
© American Institute of Mathematical Sciences

TAIL PROBABILITIES FOR TRIANGULAR ARRAYS

Drew Fudenberg
Department of Economics, Harvard University
Littauer Center, 1805 Cambridge Street, Cambridge, MA 02138, USA

David K. Levine
Department of Economics, Washington University in St. Louis
1 Brookings Dr., St. Louis, MO 63130-4899, USA

(Communicated by William H. Sandholm)

Abstract. Different discrete-time triangular arrays representing a noisy signal of players' activities can lead to the same limiting diffusion process yet lead to different limit equilibria. Whether the limit equilibria are equilibria of the limiting continuous-time game depends on the limit properties of test statistics for whether a player has deviated. We provide an estimate of the tail probabilities along these arrays that allows us to determine the asymptotic behavior of the best test and thus of the best equilibrium.

2010 Mathematics Subject Classification. Primary: 60G50, 91A20; Secondary: 60B10, 91A25, 91B70.
Key words and phrases. Triangular array, tail probabilities, limit equilibria, continuous-time games.

1. Introduction. It is frequently difficult to determine the set of equilibrium payoffs in discrete-time repeated games with imperfect public monitoring when the discount factor is bounded away from one. In the continuous-time case Sannikov [2007] (see [5]) and Sannikov and Skrzypacz [2007] (see [6]) have obtained striking characterizations of the equilibrium set in continuous-time games where the public signals are modeled as a diffusion process, with the players' actions altering the diffusion's drift but not its volatility. These continuous-time models are motivated as modeling the limit of very high frequency interactions, which raises the question of what sorts of high-frequency limits the models capture. This in turn depends on the relationship between the signal processes in discrete and continuous time. Fudenberg and Levine [2009] (hereafter referred to as FL, see [3]) show by example that the same limiting diffusion processes can arise as the limit of different discrete-time structures that have very different limit equilibria.

In characterizing the cooperative equilibria of a repeated game it is necessary to understand which punishment schemes are incentive compatible for players. This can be thought of as testing whether a deviation has occurred, combined with a punishment if the test is failed. Intuitively, as with the normal distribution, the tails of a diffusion process permit a very accurate test for the difference in means by using a cutoff for the signal, above which the test is considered to have failed. However, since the worst possible punishment in a repeated game is bounded, what matters is not just the accuracy of the test but whether defections can be detected with sufficient probability. As we approach continuous time as the limit of shorter discrete intervals, the question becomes how rapidly the probability with which defections can be detected decreases relative to the size of the available punishment. If the only way to create a sufficiently accurate test is to send the cutoffs very quickly to infinity, then punishment will occur too rarely to provide sufficient incentives for cooperation. In this case we can expect that there will only be static equilibria in the limit.
Consequently a key question is whether it is possible to design a test that finds an appropriate balance between accuracy and frequency of punishment as the period length shrinks. For concreteness we will illustrate this idea in a simple principal-agent game instead of the repeated game studied in FL.

In many, if not most, cases of interest the public signal is not literally continuously distributed; rather, the diffusion process arises as the limit of the aggregate of many small discrete events such as price changes. In this case we are interested not in the normal distribution per se, but in a distribution that approaches normality in the limit. It might be hoped that a version of the central limit theorem could be used to examine the convergence properties of the test statistic. Unfortunately, as periods shrink the optimal cutoff increases in such a way that the probability of detection decreases (the cutoff normalized by the standard deviation increases), so the standard central limit theorem is not useful. Instead what is required is an estimate of the tail probabilities, that is, of the probabilities of very unlikely but informative signals.[1]

The most closely related result in the literature is what Feller [1971] (see [1]) calls a large deviations theorem, although that term is now used for other things. Feller's result applies only to i.i.d. random variables, and not to triangular arrays; this note provides the additional uniformity assumptions needed to adapt Feller's proof to the case of triangular arrays and adapts the proof to show how these uniformity assumptions are used. The result reported here can then be used to show that the equilibria of discrete-time games whose signals are binomial arrays do indeed converge to the equilibria of the associated continuous-time game, as it was in FL's study of games with a long-run player against a myopic opponent. In the next section we sketch a simpler one-shot agency problem where the tail probability estimates can be used in a similar way.[2]

2. A motivating example. The information issues that arise in the repeated-game setting arise in a simplified form even in a principal-agent problem, as we now show. Suppose that there is a period of length τ. At the beginning of the period the agent may choose not to be employed by the principal, in which case he receives zero. If he chooses employment he must decide between working (W) and shirking (S). If he works he is paid an amount Wτ proportional to the length of time he works. If he shirks he gets a bonus of Gτ. At the end of the period the principal observes a noisy signal y of the agent's lack of effort, and if this signal exceeds a threshold ȳ he imposes a fixed penalty P. Notice that P is not proportional to the length of the period; the idea is that the principal can impose a long-term punishment on the agent if he believes the agent has shirked even for a short period of time. For example, if the principal can fire the agent, then we would expect that P = W/r, the amount that the agent would have earned from a lifetime of employment with the principal.

[1] This issue is delicate because the likelihood ratio between two normal distributions with a common variance and different means becomes unbounded in the tail; this was originally exploited by Mirrlees [1974] (see [4]).
[2] Sadzik and Stacchetti [2012] (see [7]) study the limit of discrete-time agency problems when the discrete-time signals have a continuous density, as opposed to being the sum of discrete random variables. Their hidden-action case corresponds to the example presented here.
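To fix ideas, here is a minimal sketch, ours rather than the paper's, of the agent's one-period choice in this setup. It takes as given the probabilities p and q of being punished under working and shirking (these are defined formally below); the function names and structure are purely illustrative.

    # Minimal sketch (illustrative, not from the paper) of the agent's
    # one-period decision. Punishment probabilities p (under working) and
    # q (under shirking) are taken as given here; they are derived from
    # the signal distribution later in the text.

    def agent_payoffs(W, G, P, tau, p, q):
        """Expected payoffs from staying out, working, and shirking."""
        stay_out = 0.0
        work = W * tau - p * P          # wage minus expected penalty
        shirk = (W + G) * tau - q * P   # wage plus bonus minus expected penalty
        return stay_out, work, shirk

    def agent_best_choice(W, G, P, tau, p, q):
        payoffs = agent_payoffs(W, G, P, tau, p, q)
        labels = ["stay out", "work", "shirk"]
        return labels[max(range(3), key=lambda i: payoffs[i])]

Comparing the work and shirk payoffs and rearranging is exactly what yields the incentive constraint derived next.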
The question we wish to address is, for particular distributions of y, whether it is possible to set the threshold ȳ so that the agent can be induced to work rather than shirk. Notice that whether or not it is desirable to do this depends on payoffs to the principal, which we do not specify.

Let p denote the probability that the punishment is imposed if the agent works and q the probability of punishment if the agent shirks. Working then yields the agent Wτ − pP while shirking yields Wτ + Gτ − qP, so working is optimal exactly when (q − p)P ≥ Gτ; that is, the incentive constraint

    ρ(τ) ≡ (q − p)/τ ≥ G/P

must hold. This is similar to (1) in FL (see [3]). If it is to be optimal to choose employment, then Wτ − pP ≥ 0, so the participation constraint

    µ(τ) ≡ p/τ ≤ W/P

should be satisfied. If in the limit as τ → 0 both of these are to hold for some values of G, P, W, then it must be that lim ρ(τ) > 0 and lim µ(τ) < ∞. This is analogous to Corollary 2 in FL [2009] (see [3]).

We suppose that the signal y is generated by a stochastic process S0 if the agent works and a process S1 if the agent shirks. The state of the appropriate process is observed at the terminal time τ, and we shall be interested in the case where τ is small. The simplest and quite standard specification is to assume that the Sd are diffusions with common volatility σ² and drifts d = 0, 1 respectively, so that the signal is distributed as N(dτ, σ²τ). Consider first the incentive constraint

    ρ(τ) = (q − p)/τ = [Φ(ȳ/(σ√τ)) − Φ((ȳ − τ)/(σ√τ))]/τ.

It is easy to ensure that ρ remains bounded away from 0 as τ → 0; for example, when the normalized cutoff z ≡ ȳ/(σ√τ) is held constant independent of τ, lim_{τ→0} ρ(τ) = ∞. However, with z fixed, p = 1 − Φ(z) is a fixed positive constant, so µ(τ) = p/τ → ∞ and in the limit the participation constraint would be violated. Hence we must allow z → ∞ as τ → 0 to keep p/τ bounded above. Thus the question becomes whether it is possible to keep p/τ bounded above while at the same time allowing z to grow sufficiently slowly that ρ(τ) remains bounded away from zero. The answer depends on the behavior of the normal distribution Φ in the upper tail where z is large, and using bounds for the normal distribution FL [2007] (see [2]) show that in fact it is impossible to do so.
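The tension can be seen numerically. The sketch below, again ours and purely illustrative, evaluates ρ(τ) and µ(τ) for the normal signal N(dτ, σ²τ) with σ = 1. It contrasts a fixed normalized cutoff z with one growing schedule, z = √(2 ln(1/τ)), suggested by the Mills-ratio approximation 1 − Φ(z) ≈ φ(z)/z, under which p stays roughly proportional to τ. The fixed cutoff keeps ρ bounded away from zero but sends µ to infinity, while the growing cutoff keeps µ bounded but drives ρ to zero, consistent with the impossibility result just cited.

    # Numerical illustration (ours): rho(tau) = (q - p)/tau and
    # mu(tau) = p/tau when the signal is N(d*tau, sigma^2*tau),
    # with d = 0 (work) or d = 1 (shirk) and punishment when y > ybar.
    from math import sqrt, log
    from statistics import NormalDist

    Phi = NormalDist().cdf   # standard normal cdf
    sigma = 1.0

    def rho_mu(tau, z):
        """rho and mu for a normalized cutoff z = ybar / (sigma * sqrt(tau))."""
        p = 1 - Phi(z)                      # punished although the agent worked
        q = 1 - Phi(z - sqrt(tau) / sigma)  # punished when the agent shirked
        return (q - p) / tau, p / tau

    for tau in [1e-1, 1e-2, 1e-3, 1e-4]:
        # Fixed cutoff: rho stays bounded away from 0, but mu blows up.
        r_fix, m_fix = rho_mu(tau, z=2.0)
        # Growing cutoff keeping p of order tau: rho shrinks to 0 instead.
        r_grow, m_grow = rho_mu(tau, z=sqrt(2 * log(1 / tau)))
        print(f"tau={tau:7.0e}  fixed z: rho={r_fix:7.3f} mu={m_fix:9.2f}   "
              f"growing z: rho={r_grow:7.4f} mu={m_grow:7.4f}")

Running this shows µ(τ) = p/τ exploding under the fixed cutoff and ρ(τ) collapsing toward zero under the growing one; no intermediate schedule escapes both fates, which is what the normal tail bounds in FL [2007] (see [2]) establish.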