
Department of Agricultural Economics


WORKING PAPER SERIES


University of California, Davis Department of Agricultural Economics

Working papers are circulated by the author without formal review. They should not be quoted without author's permission. All inquiries should be addressed to the author, Department of Agricultural Economics, University of California, Davis, California 95616 USA.

GAME THEORY: A REVIEW WITH APPLICATIONS TO VERTICAL CONTROL IN AGRICULTURAL MARKETS

by Richard J. Sexton

Working Paper No. 91-13


GAME THEORY: A REVIEW WITH APPLICATIONS TO VERTICAL CONTROL IN AGRICULTURAL MARKETS

Richard J. Sexton

Department of Agricultural Economics University of California, Davis

Paper to be presented at the NC-194 symposium: Examining the Economic Theory Base for Vertical Coordination, Chicago, Illinois, October 17-18, 1991.


GAME THEORY: A REVIEW WITH APPLICATIONS TO VERTICAL CONTROL IN AGRICULTURAL MARKETS

The advent of game theory is usually dated to the publication of von Neumann and

Morgenstern's book, Theory of Games and Economic Behavior, in 1944. In the immediately

succeeding years important advances in game theoretic analysis were made by game theory's other pioneers including Nash (1950, 1951), and Shapley (1953). The state of the art during this era was summarized in Luce and Raiffa's classic book, Games and Decisions: Introduction and

Critical Survey, published in 1957. However, few results useful to economics were developed over the next twenty years, and during this time, one could continue to recommend Luce and

Raiffa's book as the definitive source on basic game theory.

An upsurge of interest in pure and applied game theory in economics began in the mid

1970s as research began to emphasize decision makers who were rational but had limited information and who interacted with others in explicitly dynamic settings. Much has been accomplished during this period, and game theory texts published today bear little resemblance to Luce and Raiffa's book. It seems certain that with the publication in 1990 of David Kreps' text, A Course in Microeconomic Theory, game theory will be integrated into the training of most new Ph.D.s in economics and agricultural economics.

In this survey I hope to chronicle recent conceptual advances in game theoretic analysis relevant to economics and offer an assessment of its successes and failures. Further, I will examine use of game theory tools to study vertical coordination in the food and fiber sector.

Although considerable game theoretic analysis of vertical coordination problems has been conducted, the methodology has been little used by agricultural economists. For this reason, I


will offer some specific suggestions for application of game theory tools to study vertical

coordination issues relevant to agriculture.

The paper is organized into three relatively distinct parts. Part I reviews basic concepts

and recent advances in noncooperative game theory. Part II discusses application of

noncooperative game theory to the study of vertical coordination. Part III focuses on

cooperative game theory with application to vertical coordination issues.

Part I: Noncooperative Game Theory

SOME BASIC CLASSIFICATIONS AND CONCEPTS

Games are partitioned into two broad classes: cooperative and noncooperative. Players

in cooperative games can make binding commitments, whereas in noncooperative games they

cannot. This distinction must be interpreted narrowly. For example, communication among

players can be modelled under either game structure. And players in a noncooperative game

setting can agree to cooperate and sign contracts if the game structure allows it. However, if

it is individually desirable for a player to defect from an agreement or breach his contract, he

will do so in a noncooperative game setting. Cooperative game theory is most useful in settings

where players can form groups or coalitions. The analysis then focuses on what these coalitions

can accomplish with little or no emphasis on the processes whereby these outcomes are achieved

within the coalition.

Most of the recent progress and interest in game theory has been in the area of

noncooperative games, and, hence, those games are the focus of most of this paper. However,

a paper on vertical coordination in the agricultural sector would be remiss to exclude cooperative

games completely because two important institutions allow coalitions of farmers to form and make binding agreements concerning the marketing of their production. One institution is the agricultural cooperative, wherein the ability of farmers to form into coalitions and make binding agreements is guaranteed in the U.S. by the Capper-Volstead Act. 1 The other is the marketing order or agreement, enabled by the Agricultural Marketing Agreements Act of 1937. I will examine these institutions in a cooperative game theory framework in Part III.

The goal of this paper is not to provide a comprehensive introduction to game theory.

Rather, I hope to describe and illustrate some of the key concepts in use today and demonstrate their relevance to vertical coordination questions in agricultural markets. A number of book-length treatments of the subject have appeared in recent years for the interested reader to pursue. 2

Noncooperative game theory basics

Noncooperative games are analyzed in either their normal or extensive form. The extensive form is manifest as the familiar game tree. It specifies the order of play, information,

1Agricultural cooperatives are horizontal coalitions of sellers. Their opportunity to explicitly coordinate behavior would be constrained by the antitrust laws if not for Capper-Volstead. Antitrust is an important reason why most interactions among sellers are modelled as noncooperative games.

2These include Kreps' microeconomic theory text (1990a) and a second book by Kreps (1990b) that is not concept oriented, but, rather, is a thoughtful discussion of noncooperative game theory's successes, failures, and future prospects by one of its leading scholars. Rasmusen (1989) is an excellent, modern introduction to noncooperative game theory. Tirole's recent text (1988) in industrial organization is a masterful presentation of noncooperative game theory applications. The Handbook of Industrial Organization (Schmalensee and Willig 1989) focuses heavily on noncooperative game theory applications and includes a chapter on noncooperative game theory methods by Fudenberg and Tirole. Books that treat both cooperative and noncooperative games include Friedman (1986--a rather mathematical orientation) and the two volume treatise by Shubik (1982, 1984). For readers primarily interested in cooperative game theory, Luce and Raiffa remains an excellent reference.

and choices available to each player and the ensuing payoffs that are contingent upon the

players' choices. The normal form is a summarized description of the extensive form. It

usually is depicted as a matrix associating payoffs with each possible combination of (pure)

choices by the players. The normal form is also sometimes referred to as the strategic form.

Every extensive form has a corresponding normal or strategic form, but different

extensive forms may be represented by the same normal form. A main reason for this

occurrence is that the normal form necessarily abstracts from the dynamic aspects of most

interesting games. Kreps (1990a) argues that the "great successes of game theory in economics"

have arisen primarily due to the opportunity to think about the dynamic character of competitive

interactions afforded by the extensive form.

Because the discussion here will focus on games in extensive form, it is useful to review

terminology relating to the extensive form. Refer to Figure 1. It is a simple model of an

important vertical coordination issue in agriculture--post-contractual opportunism. There are two

players, a grower (the principal) and a marketer (the agent). If the farm product is marketed

effectively (e.g., no spoilage), it is worth 3.0 at retail. A marketing agent can provide these

services at a cost of 0.5, or the grower, who is less efficient at marketing, can provide them at

a cost of 1.0. The farm product net of marketing costs is worth 2.5 if the agent expends a high

effort in marketing it. I assume that there are many competing agents, so that agents' services

are priced at cost. The product is worth 2.0 if the farmer vertically integrates and markets the

product himself. The product is only worth 1.5 if the agent shirks and expends low effort.

The points in Figure 1 at which either player takes an action are referred to as nodes.

A successor to a node is any node that may occur later in the game if the given node has been reached. An end node is a node with no successors. A branch is one action from among a player's set of potential actions at a particular node. A path is a sequence of nodes and branches from the starting node to an end node. Payoffs for (grower, agent) are denoted at each terminal node.

The cornerstone for noncooperative games is the Nash equilibrium. A strategy combination is a Nash equilibrium if no player would wish to deviate from his strategy, given that no other players deviate. In other words, taking his opponents' actions as given, if no player would wish to change his own action, the resulting strategy combination is a Nash equilibrium.

The concept is worth stating formally. Define a set of players N = {1, ..., n} with strategy sets Si and payoff functions πi(s1, ..., sn), i = 1, ..., n. The strategy combination s* = (s1*, ..., sn*) is a Nash equilibrium if

πi(s1*, ..., si*, ..., sn*) ≥ πi(s1*, ..., si, ..., sn*)

for all si in Si, and for all i = 1, ..., n.

Many well-known results in economics are Nash equilibria of their associated games.

The most famous is mutual defection or "finking" in the various incarnations of the prisoners' dilemma game. 3 The Cournot equilibrium is the Nash equilibrium to the static game where

3Everyone is familiar with the two prisoners whose finking on each other produces long prison terms for each. However, the term "prisoners' dilemma" is applied to any context where cooperation is in players' mutual interests, but individually each has incentive to behave noncooperatively. Examples are duopolists setting prices or output levels, nations choosing trade policies, or communities competing for industry through tax breaks. A stimulating book by Axelrod (1984) is devoted to the study of prisoners' dilemma situations.

oligopolists choose quantities, and the Bertrand equilibrium is the Nash equilibrium to the static

game where they set prices. Von Stackelberg's leader-follower equilibrium is a Nash

equilibrium to a dynamic game where the leader moves first and then the follower moves. The

Nash equilibrium to the post-contractual opportunism game is for the agent to expend low effort

(if given an opportunity to play) and for the grower to vertically integrate.

A number of existence results for Nash equilibria have been proven, many of which are

summarized by Friedman (1986). A fundamental result due to Nash (1951) is that every game

with a finite number of pure strategies has at least one Nash equilibrium, possibly in mixed

strategies. Mixed strategies involve a player randomizing among his pure strategies. 4 Similar

existence results can be proven for games with a continuum of actions (such as the choice of a

price or quantity), but complications enter when payoff functions are discontinuous or nonquasi-

concave in the strategy choices. Dasgupta and Maskin (1986) provide sufficient conditions for

the existence of pure and mixed strategy equilibria in these cases.

The process of finding pure strategy Nash equilibria is usually quite straightforward. The

analyst merely proposes candidate equilibrium strategies and then checks for each player if his

strategy is optimal given the candidate strategies for all other players. If so, the candidate

strategies are a Nash equilibrium.


4Most often economists are interested in pure strategy equilibria because mixed strategies are often difficult to interpret from an economic perspective. Many games may have both pure and mixed strategy equilibria, and the modeler will emphasize the pure strategy equilibria. However, see Fudenberg and Tirole (1989) for a justification of mixed strategy equilibria based on games with uncertainty and the concept of Bayesian equilibrium.

It is worth commenting upon the Nash equilibrium as a solution concept because its problems have inspired refinements of the equilibrium concept that have comprised much of the recent progress in pure noncooperative game theory. The mutual best reply property of a Nash equilibrium is indeed an appealing property. However, three classes of criticisms of the Nash equilibrium as a solution concept can be raised:

1. Many games have multiple Nash equilibria, raising the question of how to choose among them,

2. Nash equilibria are very "noncooperative" in that the solutions they characterize often involve players doing distinctly worse than if they were somehow able to coordinate their actions.

3. Nash equilibria define necessary but not sufficient conditions for an "obvious way to play the game" (Kreps 1990a, 1990b).

I will consider each argument in turn. The games mentioned above in introducing the

Nash equilibrium concept generally have a unique equilibrium, but it is very easy to construct games that have a multiplicity of equilibria in pure and/or mixed strategies. The most famous of these is The Battle of the Sexes illustrated in normal form in Figure 2. In this somewhat sexist game, the male prefers to go to a prize fight, while the female prefers the ballet, but they each prefer the other's company sufficiently that attending the less preferred event together is desired relative to attending the preferred event alone. The two Nash equilibria in the Battle of the Sexes are (prize fight, prize fight) and (ballet, ballet).
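The candidate-and-check procedure for finding pure strategy Nash equilibria described earlier is easy to mechanize for small matrix games. The sketch below (in Python) does so for a Battle-of-the-Sexes payoff matrix; the payoff numbers are hypothetical, since Figure 2 is not reproduced here, but they preserve the preference ordering just described, and the enumeration recovers the two equilibria noted above.

```python
# A minimal sketch: enumerate pure-strategy Nash equilibria of a 2x2 game.
# Payoff numbers are hypothetical (Figure 2 is not reproduced here); they only
# need to preserve the Battle-of-the-Sexes ordering described in the text.

# payoffs[(male_action, female_action)] = (male_payoff, female_payoff)
payoffs = {
    ("FIGHT", "FIGHT"):   (2, 1),
    ("FIGHT", "BALLET"):  (0, 0),
    ("BALLET", "FIGHT"):  (0, 0),
    ("BALLET", "BALLET"): (1, 2),
}
actions = ["FIGHT", "BALLET"]

def is_nash(a_m, a_f):
    """Check the mutual-best-reply property: no unilateral deviation pays."""
    u_m, u_f = payoffs[(a_m, a_f)]
    no_male_deviation = all(payoffs[(d, a_f)][0] <= u_m for d in actions)
    no_female_deviation = all(payoffs[(a_m, d)][1] <= u_f for d in actions)
    return no_male_deviation and no_female_deviation

equilibria = [(m, f) for m in actions for f in actions if is_nash(m, f)]
print(equilibria)   # [('FIGHT', 'FIGHT'), ('BALLET', 'BALLET')]
```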

A famous economics example of multiple Nash equilibria is the simple game of entry and entry deterrence illustrated in extensive form in Figure 3. In this game the entrant moves first

and chooses IN the market or OUT. The incumbent then responds by choosing either

PREDATE or ACCOMMODATE, where the latter might imply either Cournot or collusive

behavior. Denote the entrant by subscript E and the incumbent by subscript I. Denote

monopoly, predation, and accommodation by superscripts M, P, and A respectively. Then

π_I^M > π_I^A > π_I^P

and

π_E^A > 0 > π_E^P,

where 0 is the entrant's payoff from staying OUT. The Nash equilibria for this game are (IN, ACCOMMODATE) and (OUT, PREDATE).

Given that an important purpose for using game theory is to predict behavior, the

pervasiveness of multiple Nash equilibria is a serious problem. A multiplicity of Nash equilibria

might signal either that the formal game specification fails to capture real-world elements that

might suggest an obvious way to play the game or that the Nash equilibrium concept is ill-suited

to analyze the game at hand. This is the case in the entry-deterrence game, where the

equilibrium (OUT, PREDATE) involves a noncredible threat by the incumbent, i.e., if actually

called upon to choose between PREDATE and ACCOMMODATE by the entrant's choice of IN,

the incumbent rationally chooses ACCOMMODATE. Situations such as this have inspired

refinements of the Nash Equilibrium that we will examine shortly.

The notion of an obvious way to play a game is based on the pioneering work by

Schelling (1960). The idea is that in many games that have multiple Nash equilibria, players

may still know what to do. These equilibria are called focal points. They are Nash equilibria

that are compelling for psychological reasons that are not easily incorporated in the formal game specification. Focal points may be based on past experience or a general sense of how people will behave.

The concern about the extreme "noncooperativeness" of Nash equilibria is that they often predict an outcome that is distinctly suboptimal from the perspective of the collective welfare of the players. All of the games I mentioned at the outset are this way. The "prisoners" in the prisoners' dilemma game both get long jail sentences from finking on each other, and the Bertrand and Cournot equilibria both earn the oligopolists less than the joint profit maximum. 5

And in the post-contractual opportunism game, the Nash equilibrium outcome with vertical integration is Pareto dominated by contracting with an agent who expends high effort.

Two comments are in order. First, in these games' static contexts the noncooperative outcomes are probably realistic. Although superior outcomes to the Nash equilibrium are available in each instance, players have unilateral incentives to defect from these solutions.

People can be their own worst enemies. Second, the divergence between equilibrium and optimum (in the sense of maximizing total payoffs) behavior may signal that the model is a poor representation of real-world behavior. For example, in single play games, reputation is not an issue, nor are players able to make precommitments that might subsequently bind them to a more advantageous course of action. These considerations suggest the importance of including dynamics and information in game specifications, which, in fact, has been an important dimension of recent game theory research.

5In the case of Bertrand's equilibrium, absent binding capacity constraints, even duopolists earn zero profit.

Kreps' criticism (1990a, 1990b) based on the necessity but not sufficiency of Nash

equilibrium is intertwined with the multiplicity-of-equilibria and extreme-noncooperativeness

criticisms. Refining solution concepts to eliminate candidate equilibria is one means of moving

from necessity to sufficiency; another is to identify obvious ways to play (focal points) if they

exist. Kreps further notes that some games don't admit an obvious way to play, in which case pursuing Nash equilibria can lead to precisely wrong inferences.

Having established the Nash equilibrium as a foundation to build upon, it is time now to

consider the advancements that have led to the recent years' explosion of interest in game

theory modelling.

INFORMATION AND EXTENSIVE FORM GAMES

A player's information set at any particular point in a game consists of the different nodes

in the game tree that he knows might be the actual node but cannot distinguish among by direct

observation. Consider the simplified illustration of a coordination problem among farmers in

Figure 4. There are two market periods, early and late, and either farmer can plant a perishable

crop for harvest during one but not both periods. The early harvest period is more lucrative due

to greater demand, and Farmer A, who runs a larger scale operation, is better able to take

advantage of the early market than is Farmer B. However, if the farmers can coordinate their

plantings to smoothen supply across market periods, they will each do better than if they harvest

for the same period and create a glut. The ensuing payoffs under the alternative outcomes are

listed at the end nodes in Figure 4. 6

6A similar coordination story might involve scheduling harvests to best utilize fixed processing facilities.

Panels (a) and (b) in Figure 4 illustrate two alternative ways this game might be played.

In panel (a) the players commit to planting decisions simultaneously. Thus, although Farmer

A is depicted first on the game tree, Farmer B does not know A's choice when it is time to make his own choice, i.e., he does not know whether B1 or B2 is the actual node. His information set consists of {B1, B2}. Information sets are depicted on game trees by either encircling nodes that comprise an information set as in panel (a) or connecting the nodes with a dashed line.

Panel (b) depicts a case where Farmer A is able to move first. How he achieves this position might be an interesting strategic question. For example, he could sign a labor contract specifying an early planting cycle and containing a large penalty for breach. In this case Farmer

B knows what action farmer A has taken when it is time to make his decision. Every information set in panel (b) consists of a single node or in game theory parlance is a singleton.

Figure 4 illustrates the distinction in game theory between perfect information and imperfect information. In a game of perfect information each information set is a singleton; otherwise it is a game of imperfect information.

What are the pure strategy Nash equilibria to the coordination games in Figure 4? The game in panel (a) has two equilibria for (A, B): (EARLY, LATE) and (LATE, EARLY). The total payoff from (EARLY, LATE) exceeds that from (LATE, EARLY), but there is no way in this noncooperative game structure for Farmer A to necessarily persuade Farmer B to undertake that option.

Farmer B's strategy choices are complicated somewhat in the game depicted in panel (b).

They must specify his move in response to either of A's possible actions. Three Nash equilibrium strategy combinations emerge:

1. (EARLY, if EARLY then LATE; if LATE then EARLY) with outcome that A plays EARLY and B plays LATE.

2. (LATE, if EARLY then EARLY; if LATE then EARLY) with outcome that A plays LATE and B plays EARLY.

3. (EARLY, if EARLY then LATE; if LATE then LATE) with outcome that A plays EARLY and B plays LATE.

Although we will soon wish to consider further classifications of games based upon their information content, now is a good time to introduce an important refinement of Nash equilibrium--the concept of subgame perfect equilibrium (SPE) due to Selten (1975). The game depicted in Figure 4(b) is dynamic in that A moves first and B observes his move. Yet the construct of Nash equilibrium requires A to take B's strategy as given in choosing his own move. This fact tends to produce Nash equilibria in dynamic games that involve noncredible threats on the part of some player(s). Both the second and third equilibria to the game in panel (b) involve such threats. Equilibrium 2 involves a threat by B to play EARLY regardless of A's action. Taking this strategy as given, A's best reply is LATE. However, if A chose

EARLY so that it was a fait accompli, B's optimal response is to choose LATE, not EARLY.

Similarly, the threat to play LATE if LATE in equilibrium 3 makes no sense, yet because B is never called upon to make that move in equilibrium, the strategy combination is a Nash equilibrium.

Subgame perfection works to eliminate noncredible threats. To understand the concept it is necessary to define a subgame. A subgame is a game consisting of a node that is a singleton for all players, that node's successors and the payoffs at the associated end nodes. The game in Figure 4(b) has three subgames: the complete game itself and the games beginning at

nodes B1 and B2. Conversely, in panel (a) the only subgame is the game itself. The game of entry and entry deterrence in Figure 3 has two subgames: the game itself and the game beginning at the node following the entrant's choice of IN. The post-contractual opportunism game in Figure 1 also has two subgames.

A SPE is a set of strategies for each player such that the strategies comprise a Nash equilibrium for the entire game and also for every subgame. Subgame perfection requires strategies to be in equilibrium everywhere along the game tree, not only along the equilibrium path.

The concept is exceedingly useful for analyzing dynamic games of perfect information such as those depicted in Figures 1, 3, and 4(b) and also games that Tirole (1988) calls games of

'almost perfect' information. These are dynamic games where at a given date t players choose actions simultaneously knowing all actions taken during the preceding periods 1, ..., t-1. The within-period simultaneity is a deviation from perfect information. The most common examples of these games are repeated games where players repeatedly play a simultaneous single period game, such as a prisoners' dilemma or choices of price or quantity by oligopolists in a static market environment.

The virtues of the SPE concept are twofold: SPE are usually straightforward to derive using backwards induction, and requiring subgame perfection is often very effective at eliminating

implausible Nash equilibria in dynamic games. Solution by backwards induction involves

proceeding to the final play (a node whose successors are all end nodes) and deriving the optimal

behavior for the player who has the move at that node. The solution at this point will be simple

common sense; the player will choose whatever option maximizes his payoff among the

alternatives. That portion of the game tree can then be replaced with the optimal action to take

place there and the associated payoffs, and the analyst can proceed up the game tree to the next

node or set of nodes. Optimal play can be derived here given that it is now known what will

transpire subsequently. In this manner the game can continue to be folded back and solved.

The manner in which the solution is derived insures that the properties of a SPE are satisfied,

i.e., optimal behavior was derived at each node. 7

The backwards induction algorithm can be used to solve the dynamic games posited thus

far in this paper. In Figure 1 's post-contractual opportunism game, if the agent gets the move,

his optimal choice is to exert LOW effort. Given the Nash equilibrium to this subgame, the

grower's best response at his move is to vertically integrate. Thus (INTEGRATE, LOW) is

the unique SPE.
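This folding-back logic is easy to mechanize for a small tree. The Python sketch below applies it to a game shaped like Figure 1; since the figure is not reproduced here, the payoff numbers are one plausible parameterization consistent with the verbal description (competitive agents priced at cost, integration worth 2.0 to the grower, a shirking agent pocketing part of his fee), and the sketch recovers (INTEGRATE, LOW) as the SPE.

```python
# A minimal backwards-induction sketch for a two-stage game shaped like Figure 1.
# The payoff numbers are one plausible parameterization consistent with the text
# (the figure itself is not reproduced); what matters is that LOW dominates HIGH
# for the agent and that integration beats contracting with a shirking agent.

# (grower_payoff, agent_payoff) at each terminal node
payoffs = {
    ("INTEGRATE", None):  (2.0, 0.0),   # grower markets the product himself
    ("CONTRACT", "HIGH"): (2.5, 0.0),   # competitive agent paid his cost of 0.5
    ("CONTRACT", "LOW"):  (1.0, 0.5),   # agent shirks, pockets part of the fee
}

# Fold back: first solve the agent's node, then the grower's node.
agent_choice = max(["HIGH", "LOW"], key=lambda a: payoffs[("CONTRACT", a)][1])

def grower_value(move):
    if move == "INTEGRATE":
        return payoffs[("INTEGRATE", None)][0]
    return payoffs[("CONTRACT", agent_choice)][0]

grower_choice = max(["INTEGRATE", "CONTRACT"], key=grower_value)
print(grower_choice, agent_choice)   # INTEGRATE LOW -- the unique SPE
```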

7This solution algorithm is effective so long as the game tree isn't too big or complicated. Circumstances where players are indifferent among alternatives can also create problems because the manner in which ties are resolved likely will affect play of the game. Usually the analyst is given leeway to resolve ties, and some justification from theory can often be given for a particular resolution. Figure 1 illustrates this point. In a great many games one type of player will be assumed to behave competitively and earn just some reservation level of payoff, usually normalized to zero. The agent in Figure 1 earns zero from accepting a contract and expending high effort and from staying out of the market under grower integration. Any payoff to the agent strictly above his reservation payoff cannot be an equilibrium because another contract that paid him slightly less could be proposed and would be accepted.


Subgame perfection eliminates one of the equilibria in the entry-deterrence game. Given a choice of IN by the entrant, the monopolist's best response in the ensuing subgame is to

ACCOMMODATE. Given accommodation, the entrant's best move at his play is to choose IN.

Thus, (IN, ACCOMMODATE) is the unique SPE, and the Nash equilibrium (OUT, PREDATE) is eliminated because predation is not an equilibrium response to IN by the incumbent. In this manner, subgame perfection eliminates equilibria that involve noncredible threats.

Finally, the game in Figure 4(b) had three Nash equilibria. It should be clear that two of them involve noncredible threats by B, and will not satisfy the requirements of subgame perfection. These are the threat to play EARLY in response to EARLY by A in the second equilibrium, and the threat to play LATE in response to LATE by A in the third. The unique SPE then involves A playing EARLY and B playing LATE.

Consider now dynamic games with "almost perfect" information. Two classic examples exist in the literature--the iterated prisoners' dilemma and the chainstore game made popular by

Selten (1978). They are useful to consider because they suggest the failure of subgame perfection in certain contexts, which has led to the consideration of further refinements of equilibrium.

Consider playing a prisoners' dilemma game some large but finite number of periods.

Whereas the Nash equilibrium of mutual finking and joint punishment is intuitive in any single play of the game, it seems sensible that as the players repeated the game several times they would eventually learn to cooperate with each other and, thus, each achieve a better payoff.

Such is not the case. Solving the game via backward induction, it is clear that mutual finking is the unique Nash equilibrium in the final period, because there can be no gain from playing a cooperative strategy. Since the final period's play is now determinate, there is no gain from

cooperating in the penultimate period, so mutual finking ensues there also. And so the game

unravels to produce a unique SPE wherein each player finks at any and every opportunity.

The chainstore game is essentially a many period replication of the entry-deterrence game

of Figure 3. The intuition here is that, whereas accommodation of a single entrant makes sense,

a firm that faces entry in different markets in successive periods ought to respond aggressively

early in the game (choose PREDATE) in hopes of deterring subsequent entrants. Such is not the

case, however, as the SPE calls for accommodation and entry in every period, a solution easily

verified by backward induction.

Infinitely Repeated Games

Game theorists have attacked the paradoxical outcomes in the repeated prisoners' dilemma and chainstore games by introducing uncertainty into the models in a manner to be

discussed shortly. Another line of attack, though, may be to consider what happens if the game

is repeated infinitely. The answer is that the backward induction algorithm that generated the

SPE described above breaks down; there is no final period to solve to begin folding the game

back.

The fundamental result for infinitely repeated games is the folk theorem which asserts that

almost any outcome can be a Nash equilibrium provided players are sufficiently patient (don't

discount the future too heavily). The idea is that any feasible, individually rational payoffs can

be supported as a Nash equilibrium by the players espousing strategies to punish anyone who

deviates from the prescribed equilibrium path. These strategies will satisfy the properties of a


Nash equilibrium if the one period gain from cheating does not exceed the subsequent discounted losses from punishment.
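This no-deviation condition can be stated schematically. The notation here is mine rather than the paper's: δ denotes the common discount factor, and π^D, π^C, and π^P the per-period payoffs from deviating, from following the prescribed path, and under punishment, assuming punishment lasts forever as in the trigger strategies of note 8:

\[
\pi^{D} - \pi^{C} \;\le\; \frac{\delta}{1-\delta}\,\bigl(\pi^{C} - \pi^{P}\bigr),
\]

i.e., patience (δ close to 1) makes the right-hand side large, so that almost any individually rational path can be enforced.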

Such strategies need not be subgame perfect, i.e., players may not have incentive to play their threat strategies if actually called upon to do so. However, restricting attention to SPE is not helpful in infinitely repeated games as another version of the folk theorem shows that this refinement does not reduce the limit set of equilibrium payoffs. 8

What are the implications of repeated games and the folk theorems for applied researchers who may wish to use game theory? Most fundamentally, considerable suspicion is called for if anyone puts much emphasis on a particular equilibrium for an infinitely repeated game. A second point is that infinitely repeated games are not very reflective of real-world contexts. Most decision makers do not have infinite horizons, but it is notable that this feature does not undermine the message of the folk theorems because the theorems also hold for games with a finite probability of ending in any period, provided this probability is sufficiently low.9

8Consider, for example, the following analysis due to Friedman (1971). Oligopolists adopt strategies that call for collusion in the initial period and all subsequent periods provided no cheating has been detected. If cheating is detected, the players punish it by playing their single period Nash strategies (e.g., Cournot) forever. Some reflection should reveal that these strategies comprise a SPE, provided players do not discount the future so heavily that the single-period gain to cheating outweighs the future discounted losses from earning Cournot rather than collusive profits. The "perfect" folk theorems indicate that an essentially unlimited number of other payoffs can be enforced as SPE, including the Stackelberg equilibrium. Again, the key is that discount rates are not too high. Once a critical discount value is exceeded, the only SPE is to play the single-period Nash equilibrium strategies forever. See Fudenberg and Tirole (pp. 279-82) for further discussion and folk theorem references.

9If γ < 1 is the discount parameter and θ ≤ 1 is the probability that play continues at each period, then players should merely use the factor γθ to discount the future.


A more significant indictment of repeated games (whether finite or infinite) is that life does not usually involve repeated play of the same game. For example, a firm that cheated in

Friedman's model (see note 8) might capture customer loyalties or achieve learning curve advantages that would influence play in subsequent periods. Consider also repeated play of

Figure 1's post-contractual opportunism game. LOW effort by an agent may be interpreted to mean letting product quality deteriorate. Consequently, consumers may be alienated from the product in subsequent periods, and, hence, the structure of those games is altered. In other words, what happens today usually affects the games to be played in the future.

The main virtue of repeated games lies not in their value as realistic modelling paradigms, but, rather, in suggesting, through the stark results they generate, that richer and more realistic specifications of the game environment are called for. 10 Providing richer game structures has also inspired further refinements in equilibrium that we now examine.

Games of Incomplete or Imperfect Information

An element missing from either the iterated prisoners' dilemma or chainstore games is reputation. It would seem that the "prisoners" have a great interest in acquiring a cooperative reputation. Similarly the chainstore should value a reputation as one who responds aggressively to entry. These elements have no way of emerging in the prototype finite-horizon versions of

10An important example in this tradition is the "trigger pricing" model of Green and Porter. In the prototypical repeated game players observe perfectly the outcomes from each period's play, and, hence, are in a position to punish deviations. Green and Porter consider an oligopoly model with demand uncertainty. Therefore, price decreases can be due either to cheating or to low demand. Since players cannot distinguish between the two signals, they must respond by playing noncooperatively whenever price falls below some trigger threshold. However, punishment has a finite duration and cooperation can ensue, unlike in Friedman's model (note 8). Thus, Green and Porter's model explains the episodic price wars that are common to cartels.

these games. Another important game that illustrates a shortcoming of finite-period, perfect-information games is Rosenthal's (1981) centipede game illustrated in Figure 5. By playing their cards right (i.e., DOWN), players A and B can each secure payoffs of 10 in this game. Yet the unique SPE results in A playing RIGHT at his first opportunity, leading to payoffs of (0,0).

The intuition in the iterated prisoners' dilemma or centipede games is that a player might

"take a chance" on playing cooperatively at the outset just to see what might happen. The backward induction algorithm of subgame perfection does not permit this intuition to emerge.

The environment where it can emerge is in games of incomplete information. Analysis of these games was facilitated by Harsanyi's observation (1967) that a game with incomplete information could be transformed into a game with imperfect information by introducing Nature as a player who moves first at the outset of a game. The choices made by Nature define a player's type, including possibly his strategy set, payoff functions, and knowledge concerning locations on the game tree--information partition in game theory parlance. When Nature moves in these environments, she is said to establish a state of the world.

I will illustrate the modelling procedure for games with incomplete information and describe the refinements in equilibrium they have inspired. We can then demonstrate how incomplete information can be used to unravel the logic that produces the paradoxical equilibria in the games just discussed.

Figure 6 illustrates the modelling process for the sequential-choice version of the coordination game among farmers. The incomplete information concerns player B's type. He might be either a "profit maximizer" or "mean spirited." A profit-maximizing B has the same payoffs as in Figure 4. A mean-spirited B, however, obtains utility from inflicting pain upon

his neighbor, and, hence, will always time his planting to diminish A's payoff. The way to

model this uncertainty is to let Nature choose between (maximizer, mean) with probabilities (P,

1-P).

Moves by Nature at the outset of a game convert the game to one of incomplete

information whenever at least one of the players is uninformed of Nature's choice. If some

players observe nature's choice and others do not, then the game involves asymmetric

information. In games of asymmetric information some players have valuable private information. 11

In Figure 6 the more sensible alternative is that A is uninformed, which produces the

extensive form in Figure 6(a). The less realistic alternative in this particular example but the

alternative with more important consequences for game theoretic modelling is that B is

uninformed as illustrated in Figure 6(b). The dotted lines depict information sets which are not

singletons. In Fig. 6(a) Farmer A does not know Nature's choice and, hence, whether the actual

node is A1 or A2. Player B's information sets are all singletons because he observes both

Nature's and A's move.

In Fig. 6(b) B cannot distinguish between B1 and B3 or between B2 and B4. The

introduction of incomplete information in the manner depicted in Fig. 6(a) does not complicate

solving the game in any meaningful way. A knows that Maximizer B will choose the opposite

11In technical terms private information means that some player's information partition is finer than some other player's partition. Games of asymmetric information are necessarily games of imperfect information because if the players' information partitions differ, the information sets cannot all be singletons. Games can have asymmetric information without having incomplete information. For example, players may undertake moves at the outset of a game that are not revealed to other players but which influence the way they play subsequently in the game.

of A's choice of EARLY or LATE, and Mean B will choose the same as A. To solve this type of game, A is assumed to have a von Neumann-Morgenstern utility function and choose between

{EARLY, LATE} to maximize his expected payoff. In this case EARLY is a dominant choice for A regardless of the value of P, so equilibrium involves A choosing EARLY and B choosing

EARLY (LATE) if he is mean spirited (a profit maximizer).

The type of game depicted in Fig. 6(b) is interesting because it possibly allows the uninformed player to update his information based upon the informed player's move. 12 This type of scenario has prompted further important refinements of Nash equilibrium.

Figure 6(b) illustrates the problem that arises for subgame perfection as a solution concept for these types of games. Because of the imperfect information, the nodes where B moves are no longer subgames; none of nodes B1-B4 are singletons. Thus, the only subgame is the entire game itself, and requiring subgame perfection does not eliminate either of the Nash equilibria that involve noncredible threats.

It is natural that a refinement of Nash equilibrium to accommodate games of incomplete and asymmetric information should consider both players' strategies and their beliefs and the manner in which those beliefs are updated as the game is played. A refinement that accomplishes this objective is perfect Bayesian equilibrium (PBE). In a PBE players' strategies are optimal given their beliefs and beliefs are obtained from strategies and observed actions using Bayes' rule whenever possible. 13

12Notice that this happens not to be the case in the Figure 6(b) game because A has the dominant strategy of EARLY regardless of B's type.

13Credit for the development of perfect Bayesian equilibrium is somewhat hard to pinpoint. The concept is aligned with Selten's work (1975) on perfection and Kreps and Wilson's work (1982a) on sequential equilibrium. Early signalling models such as Akerlof (1970) and Spence (1973) implicitly use the concept. The first explicit application is Milgrom and Roberts (1982a). Kreps (1990b) credits Fudenberg and Tirole (1988) with formalizing the concept.

The following formal definition of a PBE is due to Rasmusen: A PBE consists of a

strategy combination and a set of beliefs such that at each node of the game: (1) the strategies

are Nash for the remainder of the game given the beliefs and strategies of the other players, and

(2) the beliefs at each information set are rational given the evidence, if any, from previous play

in the game. Condition (1) is a perfectness condition, and condition (2) says that beliefs should

be formed using Bayesian updating whenever possible. 14

There is no general solution method to calculate PBE comparable to the backward

induction algorithm for SPE. Rather, solution is a thought process that involves proposing


14The following example illustrates using Bayes rule to calculate posterior probabilities. As a native Minnesotan, I construct it to commemorate the Minnesota Twins 1991 World Series appearance. Suppose the event I am interested in is whether the Twins faced a left-handed (LH) pitcher in a particular game. My prior probability on that event is P(LH) = 0.25. The data I observe is that Kirby Puckett, a right-handed batter, had multiple hits (MH) in the game. I know the conditional probabilities of observing this information if the pitcher was left or right handed (RH): P(MH/LH) = 0.6 and P(MH/RH) = 0.2. The marginal probability of observing Kirby Puckett getting MH is P(MH) = [P(MH/LH) x P(LH)] + [P(MH/RH) x P(RH)] = (0.6 x 0.25) + (0.2 x 0.75) = 0.3. Both terms in parentheses in the equation above equal 0.15, implying that if I observe Kirby Puckett getting MH in a game, the probability is 0.5 that the pitcher was LH. In other words the posterior probability of LH given MH is P(LH/MH) = [P(MH/LH) x P(LH)]/P(MH) = 0.5.

Because I observed data more consistent with LH than RH, it is intuitive that I should revise upward my prior on LH. Bayes rule provides the vehicle to do so. Although Bayes rule is most intuitive in the context of an example, the above equations can be converted to general formulae by replacing MH with "data," LH with "event," and RH with "not the event."

plausible strategy combinations and testing to see if they are best responses (i.e., Nash). Then each player's strategy is tested at each node to see if it is a best response given the player's beliefs at each node. Out-of-equilibrium beliefs and strategies are an important part of constructing a PBE. In particular, the analyst must check whether any player would like to take an out-of-equilibrium action in order to influence other players' beliefs.
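Because Bayesian updating of beliefs is the engine of a PBE, it may help to verify the arithmetic of the baseball example in note 14; a few lines of Python suffice:

```python
# A minimal numerical check of the Bayes-rule example in note 14.
p_lh = 0.25                       # prior: left-handed pitcher
p_mh_given_lh = 0.6               # P(multiple hits | LH)
p_mh_given_rh = 0.2               # P(multiple hits | RH)

p_mh = p_mh_given_lh * p_lh + p_mh_given_rh * (1 - p_lh)   # marginal = 0.3
posterior_lh = p_mh_given_lh * p_lh / p_mh                 # Bayes rule

print(p_mh, posterior_lh)         # 0.3 0.5
```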

To further develop the PBE concept, I will illustrate its application to an important class of incomplete information games--the signalling game.

Signalling Games

The basic signalling game is a two-period dynamic game. The player who moves first

(the leader) has private information about his type that affects the player who moves last (the follower). Signalling's origin is Spence's model of education published in 1973 without the benefit of the formal concept of PBE. The model has proven to be rich in application in the succeeding years.

The following definition of PBE for a signalling game is from Fudenberg and Tirole

(1989). Player 1 (the leader) observes private information as to his type t1 and chooses action a1. Player 2 observes a1 and chooses action a2. Payoffs for each player are πi(a1, a2, t1). Prior to play, player 2 has beliefs P1(t1) concerning player 1's type. Player 2 can update his belief about t1 based upon his observation of 1's action, a1. Denote this posterior probability as P1*(t1/a1). However, player 1 anticipates that his action will influence player 2's posterior beliefs and, hence, his action. A PBE is a set of strategies a1*(t1) and a2*(a1) and posterior beliefs P1*(t1/a1) that satisfy the following conditions:

1. a1*(t1) maximizes π1(a1, a2*(a1), t1),

2. a2*(a1) maximizes the sum over t1 of P1*(t1/a1) π2(a1, a2, t1),

3. P1*(t1/a1) is derived from the prior P1, a1*, and Bayes rule whenever possible.

Conditions 1 and 2 are perfectness conditions, and condition 3 is the Bayesian updating

requirement. Notice that condition 1 requires player 1 to take into account his role in

influencing player 2's action. The qualifier on condition 3 is important because Bayes rule is

not applicable for events that occur off the equilibrium path. These events occur with zero

probability, which implies a division by zero in Bayes formula (see note 14), making the

posterior undefined. Any posterior beliefs are compatible with Bayes rule in these cases. This

result, in turn, admits many perfect Bayesian equilibria for some games and has inspired a

search in recent years for what Rasmusen terms "exotic refinements" to eliminate some of the

equilibria.

To illustrate the construction of PBE in a signalling game, consider Spence's model of

education. Workers can be either HIGH or LOW ability based upon Nature's choice.

Employers cannot observe ability, but they know the distribution of abilities and can observe

workers' education levels. For simplicity a worker's strategy set is assumed to be dichotomous:

{EDUCATION, NO EDUCATION}. Education is costly, but it is less costly for high-ability

workers, i.e., they don't have to work as hard at it. Thus, education may provide a means for

high-ability workers to signal their attribute, but it does not augment their ability. However,

depending upon the relationship between wages and education, low-ability workers may also

acquire education and thereby masquerade as high-ability workers.

Whether or not players succeed in signalling their types is an important dimension of

signalling models. A PBE where signalling does distinguish among types is known as a separating equilibrium. A PBE where the types remain undistinguished is known as a pooling

equilibrium. Many signalling games have both types of equilibria. 15

In general three types of constraints must be satisfied to establish a separating

equilibrium:

1. Participation--in games where the uninformed player is offering contracts to the informed

players, the contracts offered in equilibrium must be financially viable for the uninformed

player.

2. Incentive compatibility--in the context of the education game, low-ability workers must not

be attracted to the high-ability workers' contract. 16

3. Nonpooling--high ability workers must prefer their contract to the contract that emerges if

all workers pool (and no one obtains education).

In a separating equilibrium observing the equilibrium choices of the informed players

allows a complete inference to be made as to their types. In the context of the education model

the posterior probability is P*(HIGH/EDUCATION) = 1.0. Moreover, in games like the education model with dichotomous strategy sets, there is no need to specify out-of-equilibrium beliefs because both actions (EDUCATION, NO EDUCATION) may be observed in equilibrium. However, in a model with a continuous strategy space it is still necessary to assign probabilities to actions neither type of player would take in equilibrium.
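To make the three separating-equilibrium constraints concrete, the following Python sketch checks them for one hypothetical parameterization of the education game. The numbers are mine, chosen only so that a separating equilibrium exists; Rasmusen (Ch. 9) develops formally parameterized examples.

```python
# A minimal sketch checking the three separating-equilibrium constraints for an
# education-signalling game. All parameter values are hypothetical.

value_high, value_low = 2.0, 1.0    # productivity (and competitive wage) by type
cost_high, cost_low = 0.4, 1.2      # cost of education by type (cheaper for HIGH)
prior_high = 0.5                    # employers' prior that a worker is HIGH

wage_educated = value_high          # proposed beliefs: education => HIGH
wage_uneducated = value_low         # no education => LOW
wage_pooled = prior_high * value_high + (1 - prior_high) * value_low

# 1. Participation: competitive wages equal expected productivity, so employers break even.
participation = True
# 2. Incentive compatibility: LOW types must not want to buy education.
incentive_compatibility = wage_educated - cost_low <= wage_uneducated
# 3. Nonpooling: HIGH types must prefer signalling to the pooled (no-education) wage.
nonpooling = wage_educated - cost_high >= wage_pooled

print(participation, incentive_compatibility, nonpooling)   # True True True
```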

15In addition, a third type of equilibrium may exist, where, in the context of the education model, the low-ability worker randomizes between obtaining and not obtaining education.

16The fact that education is more expensive for low-ability workers is the key feature in meeting this constraint.

Both types of players elect the same strategy in a pooling equilibrium. In the education

model, depending upon parameterizations, pooling equilibria may involve both types acquiring

education or neither type acquiring it. In a pooling equilibrium both types of players receive

the same payoff--a composite of the payoffs for high- and low-ability workers in a separating

equilibrium. The intuition for a pooling equilibrium is that it may not be worthwhile for the

high-ability workers to incur the cost necessary to signal their type, or, alternatively, that it may

be worthwhile for the low-ability workers to masquerade as high ability by acquiring education.

Because only one action is observed in equilibrium in a pooling equilibrium, the

specification of beliefs off the equilibrium path is a crucial part of defining the PBE. Changing

this specification may well cause the pooling equilibrium to break down. For example,

P*(LOW/EDUCATION) = 0 will not support a pooling equilibrium where neither type of

worker obtains education under reasonable parameterizations of the game because high-ability

types would want to acquire education. It is important to stress that, because Bayes rule does

not apply to out-of-equilibrium actions, one choice for P*() is technically as valid as any other

under the PBE concept. This feature, as noted, has inspired the search for further refinements.

Explicit derivation of equilibria for the education signalling game requires formal

parameterization of a model. Rasmusen, Ch. 9, provides several illustrations.

To further examine the application of PBE, consider Milgrom and Roberts' (1982a) limit

pricing model. Limit pricing is generally not rational in a perfect information environment

because potential entrants will base their entry decision upon their perceptions of the post-entry

environment, rather than upon any pre-entry posturing by the incumbent. Milgrom and Roberts

wished to show that limit pricing might emerge as a rational strategy under incomplete information.

The asymmetric information concerns the incumbent's unit costs, which may be either

HIGH or LOW and denoted respectively as cH and cL. If the entrant enters, he incurs a sunk cost K > 0, and post-entry play is assumed to be Cournot. Let the entrant's profits net of K be denoted by πE and assume that

πE(cH) > 0 > πE(cL),

i.e., entry is profitable if the incumbent is high cost but not if he is low cost. 17

Signalling enters the Milgrom-Roberts model because a low-cost incumbent produces more and charges less than a high-cost counterpart under normal conditions. For example, denote the static profit-maximizing monopoly outputs for high- and low-cost incumbents as qM(cH) and qM(cL), respectively. However, producing qM(cL) may not be sufficient for a low-cost incumbent to signal its type because a high-cost incumbent may be willing to produce this output, thereby reducing its period 1 profit in order to masquerade as low cost in hopes of deterring entry.

Milgrom and Roberts show that this model also tends to have both pooling and separating equilibria. A separating equilibrium involves the incumbent producing an output q*(cL) sufficiently in excess of qM(cL) that a high-cost version would not be tempted to pool (constraint no. 2 above) and, rather, would choose qM(cH). The entrant correctly infers this result and chooses not to enter if it observes q*(cL). To complete specification of the PBE, posterior

17A low-cost incumbent will produce more in a Cournot equilibrium than will a high-cost version, and, thus, post-entry profits will be lower if the incumbent is low cost.

beliefs P*(·) on the part of the entrant for outputs other than q*(cL) or qM(cH) must be specified that support the proposed equilibrium. These beliefs are arbitrary, so P*(HIGH/q') = 1 for all q' not in {qM(cH), q*(cL)} is a valid choice to support the equilibrium.
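The no-mimicking requirement (constraint no. 2) can be written schematically for a stylized two-period version of the game; the notation and the discount factor δ are my simplification rather than Milgrom and Roberts' exact formulation. Let Π₁(q; c) denote first-period profit from output q at unit cost c, and let Π_M(c_H) and Π_D(c_H) denote the high-cost incumbent's second-period monopoly and Cournot duopoly profits:

\[
\Pi_1\bigl(q^{*}(c_L);\,c_H\bigr) + \delta\,\Pi_M(c_H) \;\le\; \Pi_1\bigl(q^{M}(c_H);\,c_H\bigr) + \delta\,\Pi_D(c_H),
\]

i.e., the first-period profit a high-cost incumbent would sacrifice by mimicking q*(cL) must outweigh its discounted gain from deterring entry.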

If the cost of signalling is sufficiently great, a low-cost incumbent will instead choose

qM(cL) (constraint no. 3 above is violated) and a pooling equilibrium will ensue where the entrant

enters if its expected profit is positive, given its priors on the incumbent's type.

An important implication of this type of model is that the introduction of just a small

probability in the entrant's mind that the incumbent is high cost possibly causes the rational low-

cost incumbent to discretely increase its period 1 output above its profit-maximizing monopoly

level to signal its type.

Reconsidering the Paradoxical Equilibria in Finite-Horizon Games

The preceding observation is key to understanding the manner in which game theory's

leading scholars have unraveled the paradoxical equilibria in the iterated prisoners' dilemma,

chainstore, and centipede games. The key references are Kreps and Wilson (1982b) and

Milgrom and Roberts (1982b) on the chainstore game and Kreps, Wilson, Milgrom, and Roberts

on the prisoners' dilemma. The modelling approach to resolving the paradoxes is similar in each

case. The game is converted to one of incomplete and asymmetric information by introducing

the probability that a player's type is not as modelled in the original specifications of the game.

For example, Kreps et al. consider the possibility that one of the "prisoners" can only play a

"tit-for-tat" strategy that calls for him to cooperate at the outset of play and at any subsequent

period t if his opponent cooperated at period t-1. Or in the chainstore game, the possibility of

a "rapacious" incumbent who enjoys predation is introduced by Kreps and Wilson.


A key facet of these (and any other) games is that the game structure is common

knowledge. This means that each player knows the configuration of the game tree and the other

player(s) know that he knows and so on. This point is important because it means that an

informed player has an opportunity to exploit an uninformed player's uncertainty. For example

a rational (non-tit-for-tat) prisoners' dilemma player can play cooperatively at the outset of the game to give the impression that he is tit for tat. The other player is not fooled by this

behavior, but, nonetheless, as long as his partner is playing cooperatively, it may be in his

interest to play along by choosing to cooperate also.

Analogously, in the chainstore game, a nonrapacious incumbent has incentive to predate

during the early periods of play of this game to perpetuate the possibility in entrants' minds that

he is rapacious. Potential entrants, being aware that even a nonrapacious incumbent may fight

entry during early periods of play, elect rationally not to enter.

Introducing uncertainty into these models is, thus, seen to rather drastically alter the

equilibria from the stark results obtained by applying subgame perfection to the perfect

information versions of these games. The new equilibria call for players in the prisoners'

dilemma to cooperate in early periods and only fink towards the end of play, or in the chainstore

game for the incumbent to fight entry in early periods and accommodate only towards the end

of play. These outcomes comport better with intuition and, moreover, with actual play of the

games in experimental settings (see, for example, Axelrod 1984). A further key point is that

these new equilibria are obtained even with very modest degrees of uncertainty, e.g., low probabilities that a prisoner is tit for tat or an incumbent is rapacious.

Further Refinements

I turn now to discuss briefly other refinements to Nash equilibrium that have emerged

in the literature in recent years. Two equilibrium concepts that were developed

contemporaneously with PBE and have similar properties (and, hence, yield similar equilibria)

to PBE are Selten's (1975) concept of trembling-hand perfect equilibrium and Kreps and

Wilson's (1982a) sequential equilibrium. The idea behind trembling hand perfection is that

players may make mistakes (their hands may tremble) during play of a game. A trembling-hand

perfect equilibrium strategy continues to be optimal for a player even if there is a small chance

that some other player will pick an out-of-equilibrium action. 18

The concept of sequential equilibrium is also based upon the specification of strategy

profiles that are Nash for the remainder of the game, given the beliefs and strategies of the other

players, and updating beliefs using Bayesian inference whenever possible. Kreps and Wilson

add a further consistency requirement for sequential equilibrium which for some games limits

the range of equilibria relative to perfect Bayesian equilibrium. The consistency requirement

comes into play in rather complex games. For example, it would require that two players observing another player's actions should form the same beliefs as to that player's type. It also imposes consistency of beliefs over time.19

18For an example of how trembling-hand perfection refines equilibrium, consider the coordination game between farmers in Figure 4(b). One Nash equilibrium involves A, who moves first, playing EARLY and B playing (if EARLY then LATE; if LATE then LATE). As long as A plays EARLY, B's strategy is a best reply, but if there is a chance that A will tremble and play LATE, then it is certainly not optimal for B to respond with LATE, i.e., this Nash equilibrium is not trembling-hand perfect. The equilibrium where A plays LATE and B plays (if EARLY then EARLY, if LATE then EARLY) can be eliminated by the same argument.

The concepts of SPE, PBE, trembling-hand perfect equilibrium, and sequential equilibrium can be related as follows: Every sequential, perfect Bayesian, and trembling-hand perfect equilibrium is also subgame perfect. Every trembling-hand perfect equilibrium is a sequential equilibrium, and every sequential equilibrium is also a perfect Bayesian equilibrium but not vice-versa.

As noted, the problem of multiplicity of PBE due to the arbitrariness of out-of- equilibrium beliefs has stimulated the search for ways to restrict these beliefs and, hence, limit the admissible PBE. This is an area of considerable on-going research, and I will attempt here to only illustrate briefly the spirit of some of the refinements. A book by Van Damme (1987) provides a comprehensive discussion, although some work has been accomplished since its publication date.

A main motivation for the further refinements has been to eliminate out-of-equilibrium beliefs that do not make sense. These refinements are often called intuitive criteria. One specific avenue to pursue is the notion that if an action is dominated for some type of player

(conditional upon subsequent equilibrium behavior) but not another, then, upon observing that action, posterior beliefs should assign zero probability to the type for which the action is

19The additional restrictions imposed by sequential equilibrium relative to PBE suggest a mechanical check: examine each PBE to see whether it satisfies the consistency requirement of sequential equilibrium.


dominated. Milgrom and Roberts (1982a) applied this criterion in their limit pricing game to

find a unique separating equilibrium rather than a continuum of such equilibria. 20

Another criterion due to Cho and Kreps (1987) is to look at strategies dominated by the

proposed equilibrium outcome. This intuitive criterion tends to eliminate more strategies than

the simple dominance criterion discussed above. Consider a proposed equilibrium with payoff

π*(t₁) for a player of type t₁. Now consider that player 1 deviates from his equilibrium strategy and plays the out-of-equilibrium action a'. It is said that a' is equilibrium weakly dominated for type t₁ if for any optimal response a+ to a' by other player(s), the payoff for type t₁ is no greater than π*(t₁) and is strictly less for some a+. The point is that if players of a certain type have no incentive to take the observed out-of-equilibrium action, then other players should place no probability weight on those types upon observing the action, i.e., the posterior p*(t₁|a') = 0.

Further discussion of refinements is beyond the scope of this paper, but those interested

in serious pursuit of the subject can consult papers by McLennan (1985), Kohlberg and Mertens

(1986), Grossman and Perry (1986), Banks and Sobel (1987), and Fudenberg, Kreps, ·and Levine

(1988).

20Milgrom and Roberts defined a range of separating equilibria, say [q*, q**], with q* identifying the smallest output such that a high-cost incumbent prefers not to masquerade as low cost and, rather, accept his period 1 monopoly profit and invite entry in period 2. Conversely, q** is the maximum output that a low-cost incumbent is willing to produce to signal its type rather than accept a pooling equilibrium payoff. Therefore, if the entrant observes any q' in [q*, q**], he should put zero probability on the event that the incumbent is high cost and, hence, should not enter (i.e., outputs in this interval are dominated for the high-cost incumbent by his simple profit-maximizing monopoly output). Thus, the low-cost incumbent need not produce above q* to deter entry, and all other outputs in the interval are eliminated from consideration as equilibria.

Problems in Noncooperative Game Theory

Before concluding Part I of this paper and turning to applications, it is appropriate to summarize what one of modern game theory's leaders, David Kreps, considers to be its major problems--see Kreps (1990a) for a full discussion. Kreps' first observation is that game theory requires clear and precise specification of the rules of the game. This means that modes of "free-form" competition are not amenable to game theory analysis. More significant is the problem that the equilibria of games often shift dramatically due to seemingly minor modifications of the rules. This situation is observed most vividly in games of bargaining, a subject of discussion in Part II. Related to this point is Kreps' concern that the rules of the game are specified exogenously by the analyst and taken for granted. Where do the rules come from? Might they be endogenous?

Kreps' second concern is with the multiplicity of equilibria that often emerge and the associated problems of choosing among them. Of course, this problem has led to the search for refinements as we have just seen, but Kreps' third problem is with the method of most refinements. Most refinements focus upon out-of-equilibrium actions, but Kreps notes that most are "based on the assumption that observing a fact that runs counter to the theory doesn't invalidate the theory in anyone's mind for the rest of the game (p.114)." This concern has led

Kreps to focus on so-called complete theories, whereby no action is absolutely precluded, but out-of-equilibrium actions are held to be unlikely a priori (see Fudenberg, Kreps, and Levine

1988).

Kreps' final concern is with the mode of equilibrium analysis itself. Again, to quote:

Equilibrium analysis is based formally on the presumptions that every player maximizes perfectly and completely against the strategies of his opponents, that


the character of those opponents and their strategies are perfectly known (or any uncertainty on the part of one player about another player is fully appreciated by all the players and the strategy as a function of the other player's character is also known), and that players are able to evaluate all their options (p.139).

The point is that none of these conditions are met fully in reality, and the approximation may

be appropriate in some cases but not others.

Part II: Applications to Vertical Coordination

Noncooperative game theory is fundamentally a theory of imperfect competition. If the

tenets of classical competition are met, there is no scope for strategic behavior. In assessing this

statement recall that imperfect competition can be caused by either small numbers of players,

imperfect information, or both.

In considering applying noncooperative game theory to vertical relations in agricultural

markets, we must first evaluate the importance of imperfect competition in this sector. Figure

7 presents a schematic of the market chain to facilitate this discussion. The degree to which

agricultural markets are characterized by market power is a topic of some controversy. For

example, Wohlgenant (1989) in a recent study was unable to reject a competition hypothesis for

food manufacturing in most aggregate product categories. Other studies, though, offer quite

different conclusions. The comprehensive analysis of the food marketing system contained in

Connor, Rogers, Marion, and Mueller (1985) and Marion (1986) suggests that seller market

power may be important at most levels of the food chain, except the raw product (farm) level.


35 " - Econometric studies of single sectors in the food industry such as meat (Schroeter, Schroeter and

Azzam, Azzam and Pagoulatos) and fruit (Wann) support this conclusion. 21

A seldom-considered but possibly important dimension of imperfect competition in agricultural markets concerns the exercise of monopsony or oligopsony power by processors and handlers over farmers. Because agricultural products are often bulky and/or perishable, they are costly to transport. This observation implies that markets for raw agricultural products are spatial markets, an arena where imperfect competition is almost certain. 22

A characteristic of agricultural markets upon which there is probably general agreement is that imperfect information and uncertainty are often important. The analysis in Part I demonstrated that uncertainty opens the door to strategic behavior particularly when the uncertainty or lack of information is asymmetric across agents. Such informational asymmetries would seem to be important in agricultural markets. For example, processors are probably better informed about market demand conditions than are farmers. Processors have incentives to exploit these informational advantages, whereas farmers have incentives to encourage processors to reveal truthfully their knowledge of market conditions.

21 Modelling of oligopoly interactions is a prime area for application of game theory methods. These applications, however, are not within the scope of this paper, which focuses upon interactions among parties who are related vertically, not horizontally, in the market chain. Tirole (1988) and the Handbook of Industrial Organization (Schmalensee and Willig 1989) are sources of information on game theory modelling of interaction among oligopolists or potential oligopolists.

22High transportation costs generally limit the number of processor/handlers a farmer can access. The fewness of buyers within a market area, in turn, leads to market power. See Greenhut, Norman, and Hung for the general theory of spatial imperfect competition and Sexton and also Durham for discussions in an agricultural markets context.

By the same token, farmers in many cases will have informational advantages over

processor-handlers concerning their characteristics as growers. In the simplest signalling model

context, a grower might be HIGH or LOW quality, with HIGH-quality growers' problem being

to signal their type to processors, while LOW-quality types try to masquerade. Quality of the

agricultural product itself is an issue in many contexts, opening the door to interesting adverse

selection problems. Although product quality is always important, it becomes a subject for game

theory only when information as to quality is asymmetric, e.g., the handler knows whether the

produce is fresh, but the retailer does not and verification is costly.

Thus, the scope for application of game theory methods to vertical coordination questions

in agricultural markets appears to be rather promising. To date, however, such applications

have not been forthcoming. This circumstance is not surprising given that the noncooperative

game theory revolution is of recent vintage and its application to vertical control questions is of

even more recent vintage. I will discuss three categories of applications in this paper:

Principal-agent models with asymmetric information, the economics of vertical control, and

bargaining. Each seems relevant to analysis of agricultural markets. The distinction between

the first and second topics is artificial because vertical control problems are essentially principal-agent problems. I separate out the topic because it has a well-established literature in its own

right.

PRINCIPAL-AGENT MODELS

The principal-agent relationship is inherently vertical in character. The principal is the

entity who hires the agent to perform some task. In almost all cases, the agent acquires an

informational advantage at some point in the game as to his type, actions, or other states of the

world. Contexts for application of this basic model to vertical coordination in agricultural markets may be several. Some applications may involve the farmer or grower as the principal seeking to contract with a marketing firm as agent to sell his production. The agent may have specialized knowledge as to his own ability, market conditions, etc.

Alternatively, it may be desirable to reverse the roles. A processor/handler may be modelled as the principal who seeks farmers to grow products to his specifications. Growers may have specialized knowledge as to their types, production costs, etc.

Potential applications of the model need not be limited to the first-handler level either.

Referencing Figure 7, it may be appropriate, for example, to model the behavior of a large retail food chain seeking manufacturers of private-label products as a principal and the manufacturer as an agent. Or in some contexts it may be useful to consider a manufacturer as the principal and retailing firms as the agents. 23

Key references on principal-agent models are Arrow (1985) and Hart and Holmstrom

(1987). The models can be partitioned according to the nature of the information asymmetry.

Models where the agent takes actions unobserved by the principal are known as moral hazard models. Models where the agent has hidden knowledge prior to contracting with the principal are known as adverse selection models. Adverse selection models may involve signalling, with the agent taking actions to signal (or conceal) his type to (from) the principal.

Models with Moral Hazard

23This relationship is usually the implicit context of the literature on vertical controls to be discussed shortly.

I will frame the moral hazard problem in the context of a grower seeking a marketing agent to handle his production. This problem was introduced in Figure 1. In most principal-agent models with moral hazard the unobserved action is referred to as the agent's effort. This term must be interpreted broadly within the context of the application to include a wide range of actions (or, as the outcome may be, nonactions). In the context of a marketing firm, effort could refer to speed of transit to market for sake of freshness, proper refrigeration to retard spoilage, advertising and promotion activities, diligence in processing, etc.

The essence of the moral hazard problem is indicated by the SPE to the Figure 1 game, where, if given the opportunity, the agent accepts a contract and expends low effort, causing the grower to market the product himself at a cost in terms of inefficiency. The problem arose because the grower could not observe the agent's level of effort (i.e., the action was hidden).

A more sophisticated version of the moral-hazard model is obtained by assuming that, although effort is unobservable, a variable related to effort is observable. This variable may be profits, the level of output, or the per unit price that the grower receives net of any marketing costs.

In this case the problem is to design a contract based on the observed variable to elicit the optimal expenditure of the unobserved variable--effort. To model this problem, assume that the effort choice is not dichotomous but, rather, is distributed along the interval [E₁, E₂].

Suppose the grower cannot observe effort but can observe the revenue received for the product

R(E), R'(E) > 0. Given that production has already taken place, the grower's profit function is simply:

(1) π(E) = R(E) - W(R(E)),

and his problem is to choose a payment schedule, W(R(E)), for the marketing agent as a function of revenues received so as to maximize profit.

The formulation of this problem is completed by specifying a utility function for the

agent, U(W, E), which is increasing in W and decreasing in E, and a reservation level, U₁, of utility that specifies the agent's opportunity cost. Any contract that the grower offers must satisfy the individual rationality or participation constraint that

(2) max_E U(W(R(E)), E) ≥ U₁.

Secondly, the grower wishes the marketing agent to voluntarily expend the level of effort, E*, that maximizes π(E). This condition is known as the incentive compatibility constraint:

(3) E* = argmax_E U(W(R(E)), E).

The payment scheme that maximizes (1) subject to (2) and (3) is known as a forcing contract because it forces the agent to choose the level of E that maximizes the grower's profits.

An important layer of complication is added to this basic moral hazard problem when the observable variable, revenues in our illustration, is observable only with noise. This complication is a very realistic consideration for agricultural contexts where markets are often rather volatile. To depict this problem, let ε represent a random variable that affects revenue so that now R(E, ε) is the revenue function. A low observed revenue can now be due either to poor market conditions or shirking by the agent.

Specification of this more realistic problem is the same fundamentally as the nonstochastic problem depicted in (1), (2), and (3) except that expected values over possible realizations of ε must be taken for π and U. Solution of the modified problem has proven to be exceedingly difficult unless restrictions are placed on the problem. Discussion of these issues is beyond the scope of this paper; the crucial references are Grossman and Hart (1983) and Rogerson (1985).

Repeated play and agent reputation may be ways of mitigating moral hazard problems, but some of the lessons from Part I are instructive here. In a finite horizon setting, the subgame perfect equilibrium will unravel to reveal an agent producing low quality or low effort at every opportunity, if that is the optimal response for any single iteration of the game. For reputation to have its effect, the model must be specified with incomplete information as in Kreps and

Wilson and Milgrom and Roberts. For example, if the principal entertains even a slight probability that the agent is predisposed to produce high quality or effort, the agent has incentive to actually produce high quality or effort to perpetuate that perception at least until the latter plays of the game.

As noted, this framework may yield valuable insights regarding contract structure in agriculture when the processor/handler is modelled as the principal and the grower as the agent.

For example, product quality dimensions are increasingly important in today's food market.24

Raw product quality can be influenced by farmers' horticultural practices (effort), but it is also influenced by random factors that cannot be observed perfectly by the processor. Depending upon the raw product and the nature of the harvest technology, aspects of product quality may be discerned directly through grading. The processors' job in these cases is to specify contracts with growers that elicit the processor's desired quality level subject to incentive compatibility with growers and also their financial viability. Imperfect monitoring may involve inability to observe directly either farmers' horticultural practices or the characteristics of the harvested product.25

24I intend a very broad interpretation of the word "quality" here, much in the same way "effort" should be interpreted broadly. For example, quality may refer to the physical characteristics of the product itself, or it may refer to the specific time that the product is available for harvest.

25Although modelling and solving optimal contract problems in the presence of moral hazard is a difficult problem even without considering risk, one would be remiss not to mention the matter of risk in this context. In agriculture it is very realistic to consider that growers (as agents) are risk averse and a processor (as principal) is risk neutral, due, perhaps, to having diversified stockholders (obviously not the case if the processor is a cooperative). The processor has incentive in these cases to specify contracts to shift risk away from growers (i.e., they have to be compensated, ceteris paribus, to bear risk). A price schedule that is constant across realizations of random variables accomplishes this objective but will not yield growers' optimal incentives in the presence of moral hazard.

Contractual practices vary widely across raw agricultural product markets. I suspect the variation in contracts has much to do with differences across markets in the importance of and the variability in quality and, in turn, on the extent to which quality can be monitored by observation of the product or growers' horticultural practices.

Models with Adverse Selection

Adverse selection models differ from moral hazard models in that the former has hidden knowledge rather than hidden actions. In the principal-agent context, the principal's job is to sort out agents of alternative characteristics. These situations are modelled as games of incomplete information, where Nature selects the agent's type, and the choice is unobserved by the principal. The principal then offers one or more contracts to the agent who may accept one or reject them all.

Akerlof's seminal work on lemons (1970) introduced the problem of adverse selection.

The extensive form game representation of Akerlof's basic model in Figure 8 is due to

Rasmusen. In this model cars are either of two quality types (HIGH or LOW). Both buyer and

seller value HIGH = 6000 and LOW = 2000, so payoffs for a buyer are either 6000 - P or

2000 - P, depending upon whether HIGH or LOW quality is purchased, and similarly for the

seller we have either P - 6000 or P - 2000. Nature chooses between the two states of the world

with equal probability, and the extensive form in Fig. 8 indicates that the buyer cannot

distinguish between the states. In this model if all cars were put on the market, the expected

value of a car is 4000. But the owner of a high-quality car would not sell for this price. Hence

any car priced at 4000 or any other price less than 6000 must be low quality, so the buyer will

refuse to pay more than 2000. A price of 6000 will elicit the sale of high-quality cars, but also

the sale of masquerading low-quality cars, so no buyer will pay 6000. Perfect equilibrium in

this model is for only low-quality cars to be placed on the market and sold for 2000.

In this prototype model with identical buyer and seller valuations, there is no social loss

from the absence of a high-quality car market. With a little more work, however, the same

result can be derived when buyers' valuations exceed sellers' valuations, so trade is socially

desirable, or when quality is distributed along an interval rather than dichotomously. Consider,

for example, car quality distributed uniformly along the interval [2000, 6000]. Average quality

remains 4000, but a price of 4000 solicits offers to sell only from sellers whose cars are valued

at 4000 or less. The average value of these cars is 3000. If the price drops to 3000, the

average value of cars offered for sale at that price falls to 2500. Continuing this line of

reasoning demonstrates that the market disappears entirely.
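The unraveling can be verified with a few lines of arithmetic. The sketch below (Python; the uniform-quality interval follows the example in the text, while the iteration itself is only an illustrative device) lets the buyer's willingness to pay chase the average quality of the cars actually offered:

low, high = 2000.0, 6000.0
price = (low + high) / 2.0          # start at the unconditional average value, 4000

for step in range(1, 11):
    # only owners whose cars are worth no more than the going price offer to sell,
    # so average quality on the market is (low + price) / 2
    avg_offered = (low + min(price, high)) / 2.0
    print(f"round {step}: price {price:.0f} -> average quality offered {avg_offered:.0f}")
    price = avg_offered             # the buyer revises his willingness to pay downward

# The sequence 4000, 3000, 2500, ... converges to 2000: only the lowest-quality cars remain.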

A number of conditions may attenuate adverse selection problems. Contracts may specify

dimensions of product quality, products may be tested, and sellers may offer warranties.

Adverse selection also provides a rationale for government intervention in the form of quality standards, licenses, and certification.

Another important feature of adverse selection models is that they often will involve signalling of the type discussed in Part I. For example, high-quality sellers can provide a warranty more cheaply than low-quality sellers and have incentive to do so as a means of establishing their type. Whether low-quality types will also offer warranties and induce a pooling equilibrium hinges on the cost of providing a warranty versus the costs of being pinpointed as low quality. Price itself may be used as a signal, though, depending on the model specification, the high-quality firm may use either a high price or a low price as its signal.

Advertising provides another mechanism to signal quality. The reason is that the likelihood of repeat sales is greater for high-quality sellers than low-quality counterparts. Thus, advertising is relatively more valuable for high-quality sellers.

There also appears to be considerable scope for application of models of adverse selection to the agricultural sector. As noted in the prior subsection, consumers' emphasis on product quality places a premium on the sector's collective ability to provide the desired product attributes. A direct response to product quality concerns is to write contracts that specify quality standards or provide premiums or discounts for departures from a benchmark quality. Writing these contracts and monitoring them for compliance is, of course, an expensive process. Some dimensions of quality can be monitored only at considerable cost, if at all.

If the marketing sector at its various stages is unable to recognize and reward quality, the message of the adverse selection models is that high-quality will be driven out. Again, the pooling practices of cooperatives are especially worrisome in this regard. If cooperatives are less able to reward quality than other organizational forms, the equilibrium configuration across organizations calls for predominantly low-quality producers to patronize cooperatives. 26

In agriculture, the various quality provisions mandated by marketing orders may be justified as a response to adverse selection. If not for adverse selection, quality standards that proscribe products with certain characteristics merely limit consumers' choices. With asymmetric information, however, failure to impose quality standards also limits consumer choice by driving out high quality.

In the context of vertical control, an important question to consider is the extent to which vertical integration can attenuate problems of moral hazard and adverse selection. The position of Grossman and Hart (1986) is that vertical integration does nothing to alter the realm of feasible contracts. In other words, why should it be easier to solve such problems between two parties simply because they are lumped together in the same firm? This is a good point, but the position is probably extreme. The transactions cost savings that come from internalizing transactions (e.g., Williamson 1971) may, for example, make it simpler to inflict penalties on nonperforming parties than if the judicial system had to be used. Nonetheless one espousing vertical integration as a solution to these problems should be held to explain how such integration will improve contracting possibilities among the affected parties.

VERTICAL CONTROL

Vertical control refers to the contractual practices whereby an upstream entity, usually the manufacturer, restricts the behavior of a downstream entity, usually a dealer or retailer.

26One of the most recurring concerns expressed to me by growers over the years is that handlers fail to adequately reward high quality. The concern seems to be particularly acute among members of cooperatives and may well have just cause under common pooling practices.

Vertical restraints include such contractual arrangements as franchise fees, bundling of distinct goods into a single package, quantity fixing, royalties, exclusive sales arrangements

(requirements contracts), exclusive sales territories, and resale price maintenance.

In the hierarchy of vertical control, these contractual arrangements may be considered intermediate modes of control, ranging between the extremes of simple arm's length transacting with uniform prices and full vertical integration. Models of vertical control are essentially principal-agent models with the manufacturer as principal and a retailer as agent, so these interactions are well suited to modelling via the analytical devices considered so far.

The objective of the manufacturer is to select contractual instruments to maximize his profit. In modelling this interaction as a game the manufacturer moves first and offers one or more contracts. The dealer can either accept a contract or reject them all. To be accepted, a contract must insure the agent's financial viability. The dealer may take actions that cannot be monitored fully by the manufacturer or may possess private information, so much of the concern with vertical control is inspired by moral hazard or adverse selection problems.

In the absence of sophisticated contracts, a manufacturer's price, pm, to a retailer must be in excess of marginal costs, c, to obtain profit. This deviation of price from cost introduces a fundamental externality between the manufacturer and dealer in that any dealer action that affects consumer demand impacts on the manufacturer's profit, but this impact is not considered by the dealer.

The prototypical example of this externality is the "double marginalization" (Spengler

1950) that occurs when the dealer also has market power and marks price above his cost, pm. Double marginalization reduces the manufacturer's profits. In a simple perfect information setting, this externality can be overcome by the manufacturer setting pm = c and using a franchise fee to extract profit from the dealer or setting price at the monopoly level and imposing a resale price ceiling to prevent the dealer from implementing a further price markup. In more complex settings involving uncertainty and dealer risk aversion these contracts may no longer be desirable.27

A Single Manufacturer and Dealer

The following general framework for analyzing vertical control is due to Katz (1989).

Consider a game between a manufacturer and a dealer. Revenues in the downstream industry are denoted as R(X, Y, E, θ), where X is the upstream good produced by the manufacturer at constant marginal cost, c, Y is an input used by the dealer which is purchased competitively, E denotes dealer "effort," and θ is a parameter that may represent a dealer characteristic or a realization of market demand.

The dealer's utility is expressed as U(M, E; θ), where M is income or profits; the reservation utility is U₁. The payment made by the dealer to the manufacturer is W(X, Y, E, θ).

The manufacturer's objective is to implement a contract that maximizes the joint profit of the two production levels and transfers all profit to him. As noted, the prototype solution to this problem is for W to take the following form:

W(X) = F + cX,

where F is a franchise fee set to achieve U = U₁.

27In the presence of multiple dealers, discriminatory fixed fees may be considered illegal price discrimination.

In essence, the manufacturer has two objectives: to provide the dealer with correct economic signals and to transfer revenues to himself. Charging price equal to marginal cost accomplishes the first objective, and the franchise fee accomplishes the second.
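A small numerical illustration may help fix these ideas (Python; the linear demand curve and cost figure are hypothetical and are not taken from the paper or from Katz):

c = 20.0                                      # manufacturer's marginal cost (assumed)

def demand(P):                                # assumed linear retail demand
    return max(100.0 - P, 0.0)

# (a) Vertically integrated benchmark: choose P to maximize (P - c) * Q(P).
P_int = (100.0 + c) / 2.0                     # = 60
profit_int = (P_int - c) * demand(P_int)      # = 1600

# (b) Linear wholesale price pm: the dealer marks up again (double marginalization).
pm = (100.0 + c) / 2.0                        # manufacturer's best wholesale price = 60
P_dm = (100.0 + pm) / 2.0                     # dealer's retail price = 80
profit_mfr = (pm - c) * demand(P_dm)          # = 800
profit_dlr = (P_dm - pm) * demand(P_dm)       # = 400

# (c) Two-part tariff: pm = c restores correct signals; the franchise fee F
#     transfers the reconstituted integrated profit to the manufacturer.
F = profit_int

print(f"integrated profit      : {profit_int:.0f}")
print(f"double marginalization : channel profit {profit_mfr + profit_dlr:.0f} "
      f"(manufacturer {profit_mfr:.0f}, dealer {profit_dlr:.0f})")
print(f"two-part tariff        : manufacturer collects F = {F:.0f} with pm = c")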

Simple two-part tariffs are no longer optimal in the presence of uncertainty and risk aversion. Suppose there is asymmetric information concerning the realization of θ and, specifically, that the manufacturer is the uninformed party. An interesting possibility is that X is a function of θ, X(θ), with X' > 0. In this case it is optimal for the manufacturer to set pm > c to allow dealers to signal their value of θ. The manufacturer extracts profits based on the deviation of price from c but does not drive out low-θ agents, as would be the case if franchise fees were used, because F cannot be set conditional upon θ.

The converse case is that the manufacturer is informed about θ. The manufacturer can then use the contract specification to signal his value of θ to dealers. If, for example, θ refers to the sales potential of the product, high-θ types can signal by setting per unit price in excess of c. By tying his profits to the level of sales, the manufacturer credibly signals that the product has high sales potential.

A further reason for the manufacturer-dealer contract to specify price in excess of c is to prevent manufacturer moral hazard. For example, a manufacturer may commit to provide promotional support for a dealer, but if pm = c, the manufacturer has no incentive to carry out the promise, and the rational dealer will not believe it.28

28An implicit point in discussions of vertical control is that some aspects of principal-agent interactions are simply not contractible because a court would be unable to enforce the provision. This would be true, for example, if the court could not verify whether an action at issue had been undertaken. Some aspects of manufacturer or dealer commitment to provide promotional support undoubtedly fit into this category.

The preceding illustrations indicate that as the contractual environment becomes complex,

departures from marginal cost pricing may be desirable. In these cases, further complexity in

contract specification is called for to ameliorate the distortions caused by pm > c. The

distortions are twofold: (1) if inputs (X and Y in our formulation) are substitutable downstream,

setting pm > c induces distortions in the input mix (Vernon and Graham 1971), and (2) higher

costs borne by the dealer will cause him to restrict output. Alternative solutions to the first

problem are for the manufacturer to invoke a royalty scheme, where the manufacturer receives

a fraction of the dealers' final revenues, or a tie, where the manufacturer forces the dealer to

jointly purchase both X and Y, setting their relative prices to achieve the efficient input mix.

Finally, a retail price ceiling may be used to prevent a price mark up and, hence, output

contraction at retail.

Multiple Manufacturers and/or Dealers

Having several dealers compete to sell a single manufacturer's product succeeds in eliminating

the double marginalization problem, but creates other problems. The manufacturer can no

longer set pm = c and use a franchise fee to extract monopoly profits because the competing dealers will be unable to jointly establish the monopoly price downstream. Setting pm > c, of

course, encounters the distortion problems just discussed. A further problem is that competing

dealers may provide suboptimal promotion of the product because of free riding among

themselves. A contractual solution to this problem is to eliminate dealer competition through

imposing exclusive territories. Resale price maintenance may also preserve dealer incentives

by eliminating price cutting (Matthewson and Winter 1984).


Rey and Tirole (1986) have demonstrated, however, that use of these competition-reducing restraints may induce undesirable side effects in the presence of uncertainty. Let θ be a parameter representing demand-side uncertainty. If θ is not observed by the manufacturer or is not observed until after the dealer's price is set under retail price maintenance, then the dealer's price is completely unresponsive to demand conditions. Similarly, rigid retail prices cause the dealers to bear the full brunt of the uncertainty, which is undesirable in the oft-posited case of risk-averse dealers and risk-neutral manufacturers.

A new set of issues comes to the forefront in the realistic setting of multiple manufacturers. Just as multiple dealers could free ride on each other's promotional efforts, so may multiple manufacturers. When dealers carry multiple brands, the manufacturer whose advertisements attract consumers to the store may not end up getting the sale. Exclusive dealerships address this problem at a probable cost of reduced efficiency of the retail operation.

When dealers can choose from among multiple manufacturers, opportunism on the dealers' parts is a concern. In a rational-agent game setting a propensity on the part of one player to behave opportunistically can be mutually detrimental because the affected party will anticipate the behavior and respond accordingly. Opportunism becomes a problem when a party has sunk assets, i.e., assets that are dedicated to a particular task and cannot be recovered in the short term. One response to the threat of opportunism is to underinvest in dedicated assets.

The threat of opportunism is reduced if parties expect to interact over multiple periods.

The asymmetric information story of Kreps and Wilson and Milgrom and Roberts is once again instructive in this context. An innovative response to the threat of opportunism is for the party prone to opportunism (the dealer in our context) to also invest in dedicated assets that would not be recoverable if the contracting parties failed to reach agreement (Williamson 1983). These investments, called "hostages," are a way for a player to commit credibly to not behave opportunistically.

A final way for manufacturers to overcome free ridership among themselves is to delegate decision making authority to a common marketing agent who internalizes the externalities among dealers and maximizes joint industry profits. This possibility is considered by Bernheim and Whinston (1985) and by Katz (1989), who establishes the preceding result as a subgame perfect equilibrium of a multistage game.

Analysis of vertical control is important both for understanding contractual arrangements and for recommending policy, especially in terms of antitrust. The discussion here has been exclusively positive in orientation, focusing on private incentives to pursue various contractual arrangements. Whether some of these arrangements should be the focus of antitrust enforcement is an important question that exceeds the scope of inquiry here. Rather, see Katz (1989).

Welfare questions concerning vertical control in general are not clear cut. On the one hand, control usually operates to eliminate vertical or horizontal externalities, a positive effect, but also serves to consolidate the manufacturer's market power, a negative effect.

Application to Vertical Control in Agriculture

Again referencing the vertical market chain diagram in Figure 7, it would appear that the interactions where most of the important vertical control questions arise are between processor/handlers and retailers or large food service companies. The market structural information summarized in Connor et al. (1985) demonstrates that many food manufacturing industries are structural oligopolies, and the manners of control they employ in dealings with retailers have important implications for the performance of the sector and the welfare of farmers and consumers.

These games would be modelled in the usual mode with manufacturers as principals and retailers as agents. However, the emerging power of large retail food chains suggests that some role reversal with retailers as principals and food manufacturers as agents may prove illuminating. For example, an important trend in food retailing is for the retailer to impose slotting allowances--fees charged by the retailer to carry a manufacturer's product.29

The recent paper by McLaughlin and Rao (1990) on new product selection by supermarkets provides an excellent illustration of the potential application of noncooperative game theory to interactions at this stage of the food marketing chain. McLaughlin and Rao's study is empirical and does not employ game theory, but the process of new product selection they describe is very strategic in nature. Consider modelling the process of new product introduction as an extensive form game. A prototype model has the manufacturer moving first

(as principal) and offering a contract to the supermarket to carry its product. The supermarket either accepts or rejects the contract. McLaughlin and Rao speculate that slotting allowances may be linked to inferior products and that superior products will be stocked without extra inducements. This may be true, but if quality information is asymmetric with the manufacturer informed and the retailer uninformed as seems likely, then manufacturers of low quality products

29Negative franchise fees (the analytical equivalent of slotting allowances) may be compatible with manufacturer control in some cases. The casual empirics of slotting allowances suggests, however, that the fees are charged most often to smaller food manufacturers who lack power in their own right. Thus, they seem to be a manifestation of the retailer's power.

have incentive to conceal that fact by not offering slotting allowances, i.e., under McLaughlin

and Rao's logic, refusal to pay a slotting allowance is a signal that the product is high quality.

Test markets, also a topic of analysis for McLaughlin and Rao, are another interesting

illustration. Their empirical results showed a positive relation between acceptance of the product

and the presentation of test market results. Test markets again strike at incompleteness and

imperfection of information in the new product adoption game. Test market results can be used

to signal high quality, but a low-quality manufacturer may be able to manipulate these tests to

masquerade as high quality.

My intuition on these matters is that a manufacturer who knows his product is high quality has incentives to incur expenses for test markets and to pay slotting allowances because he knows the product will be successful if adopted. By the same logic, a low-quality

manufacturer will be unwilling to incur these costs because subsequent expected profits do not justify it. Thus, under a separating equilibrium, manufacturers of high-quality products may incur substantial introductory marketing costs, which otherwise may be wasteful, to signal their high quality.

Turning now to grower-handler interactions at the other end of the market chain, there is little doubt that in some sectors there is a substantial, and probably increasing, amount of vertical control exercised by the handler. Although monopsony power in these

first-handler sectors is a concern, my initial reaction is that the degree of control exercised is

mainly to address moral hazard and adverse selection problems rather than a manifestation of

the handler's market power as in the cases discussed above.

If a processor/handler has monopsony power, the uniform monopsony price, Wm, induces a deadweight loss because the marginal product of the resulting raw product production, R(Wm), exceeds the grower's marginal production cost, Wm. The processor/handler could correct this inefficiency, in principle, by raising the raw product price and charging the grower a "slotting allowance" to handle his product. However, this type of multipart pricing scheme is not common at the first handler level. One reason may be that farmers could negate such a scheme through arbitrage in some cases.
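A simple numerical sketch of the monopsony distortion may be useful (Python; the marginal value product and supply curve below are invented for illustration only):

MVP = 50.0                    # value of the raw product downstream, per unit (assumed)

def supply_price(q):          # growers' marginal production cost at quantity q (assumed)
    return q

# Competitive benchmark: purchase until marginal value product equals supply price.
q_comp = 50.0

# Uniform monopsony price: outlay is w(q)*q = q^2, so marginal outlay is 2q;
# the monopsonist stops where 2q = MVP.
q_mono = MVP / 2.0                            # = 25
w_mono = supply_price(q_mono)                 # = 25

# Deadweight loss: value net of marginal cost on the units no longer produced
# (computed here by crude numerical integration).
dwl = sum((MVP - supply_price(q / 100.0)) * 0.01
          for q in range(int(q_mono * 100), int(q_comp * 100)))
print(f"monopsony quantity {q_mono:.0f}, price {w_mono:.0f}, deadweight loss = {dwl:.1f}")
# A two-part scheme (per unit price of 50 plus a fixed charge on the grower)
# would restore q = 50 and eliminate the deadweight loss.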

"Market basket" pricing by processors is being observed in some industries. Here the grower's price is determined based on an index of the processor's price for the various processed products produced from the farm input. This arrangement is analogous to a royalty scheme but has the curious effect of shifting risk on to growers.

BARGAINING

Bargaining between growers and processors is important in agriculture, especially in the fruit, vegetable and dairy industries. Iskow and Sexton (1990) present information on the scope and processes of cooperative bargaining in the U.S.

Although various descriptive studies of bargaining in U.S. agriculture have been produced

(e.g., Lang 1977, Biggs 1982), the main attempt to date to analyze the process conceptually has been Helmberger and Hoos (1965), who employed a bilateral monopoly model. Bargaining has been one of the most important areas of application for noncooperative game theory in the last

10 years. I will examine this work for what it may offer in terms of understanding cooperative

bargaining in agriculture. 30

The fundamental problem in bargaining is the division of a fixed pie between two parties.

The value of the pie can be set at 1.0. To obtain a solution, players must have incentive to

come to an agreement. This is accomplished by discounting. Let δ₁, δ₂ < 1 denote the discount

rates for players 1 and 2, respectively. Another important feature in modelling the prototype

bargaining problem is to specify the order of play. The usual possibilities are seller offer with

buyer acceptance or refusal, buyer bid with seller acceptance or refusal, or alternating offers.

Not surprisingly, the bargaining equilibrium is affected by the set up of play.

The key paper on noncooperative game theory analysis of bargaining is Rubinstein

(1982). Rubinstein studied a game with alternating offers between players and an infinite

horizon with discounting. In other words, players may alternate offers forever unless they come

to an agreement. 31 Rubinstein showed that there was a unique subgame perfect equilibrium to

this game in which the players reach agreement immediately, and the payoffs are as follows

(assuming player 1 moves first):

(4) π₁ = (1 - δ₂)/(1 - δ₁δ₂),

(5) π₂ = δ₂(1 - δ₁)/(1 - δ₁δ₂).

30My focus here will be exclusively on noncooperative game theory models of bargaining. A cooperative game theory literature on the subject also exists that was inaugurated by Nash's seminal paper (1950). The cooperative game theory approach is axiomatic in character, specifying features that a solution should entail and then determining the types of solutions, if any, that satisfy the axioms. Roth (1979) summarizes work conducted under this framework.

31Notice that this specification is not a repeated game because play ends if the players ever reach agreement. Thus the folk theorem does not apply.



In the simple case of equal discount rates, the payoff to 1 is simply 1/(1 + δ).32 Examination of the payoffs yields two conclusions about bargaining in this context: It pays to go first,33 and it hurts to be impatient (have a low δ) relative to your rival. What if the costs from failure to reach agreement were fixed amounts c₁, c₂ > 0 per period, rather than a proportional discount rate? If c₁ = c₂ = c, any division that guarantees each player at least c can be supported as a perfect equilibrium. If c₂ > c₁, delay hurts 2 more than 1. In this case if 1 moves first he gets the entire pie. As noted in Part I, equilibria in bargaining games may be very sensitive to what seem to be modest changes in the specification of the model. This result illustrates the point.
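These comparative statics are easily verified numerically; the sketch below (Python, with arbitrary example discount rates) simply evaluates the payoffs in (4) and (5):

def rubinstein_split(d1, d2):
    # Subgame perfect payoffs in the alternating-offers game when player 1 offers first.
    pi1 = (1.0 - d2) / (1.0 - d1 * d2)
    pi2 = d2 * (1.0 - d1) / (1.0 - d1 * d2)
    return pi1, pi2

print(rubinstein_split(0.9, 0.9))   # equal patience: player 1 gets 1/(1 + 0.9), about 0.53
print(rubinstein_split(0.8, 0.9))   # a less patient player 1 does worse, about 0.36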

Much of the work on bargaining subsequent to Rubinstein has involved specifying richer bargaining environments and examining their impact on the bargaining equilibria. One realistic generalization is to consider that parties may have options to the bargaining process. For example, in agriculture growers may be able to dispose of their product in export markets, if they cannot reach agreement with domestic processors. By the same token, processors may be

able to source product externally. Let s₁, s₂ ≥ 0 denote the value of the outside option for players 1 and 2, respectively, and otherwise maintain the same structure of play as in Rubinstein's model (s₁ + s₂ < 1 is also needed to make agreement beneficial).

It can be shown (see Shaked and Sutton 1984 or Sutton 1986) that if the outside options are voluntary and sᵢ ≤ πᵢ, i = 1, 2, then the presence of the outside options does not matter.

32Rubinstein's proof of this result is rather difficult, but a simple, elegant proof was subsequently given by Shaked and Sutton (1984).

33As the time delay between periods goes to zero, this advantage disappears.

The unique perfect equilibrium remains as specified in (4) and (5). Thus, for example, threats on the part of processors to procure production from outside a bargaining association are meaningless to the bargaining process unless the value of this option exceeds what the processor would otherwise obtain in dealing with the association.

What if the threat to take an outside option is not voluntary? For example, what if an outside force can elect to randomly terminate bargaining? In this case it can be shown that as the likelihood of breakdown becomes large, the equilibrium payoffs converge to a "split the difference" solution where each player gets the value of his outside option and one-half of anything that is left over. The puzzling issue this result presents for potential bargainers is how to make the threat of the outside option credible.

Another mode of enrichment to the noncooperative bargaining model has been to incorporate incomplete and imperfect information.34 If one has read continuously to this point, the manner in which information impacts upon play will probably be quite intuitive.

Suppose one player's valuation of the product bargained for is known by the player but not his rival. For example, a buyer may have a high or a low reservation price. Assume a game structure where the seller makes offers and the buyer accepts or rejects the offer. A LOW-reservation buyer will be unwilling to accept certain seller offers that a HIGH-reservation buyer

This game environment offers the LOW buyer the opportunity to signal his reservation price by rejecting some of the seller's initial offers. Of course, a HIGH buyer may also reject

otherwise acceptable offers to mimic the LOW buyer in hopes of generating a pooling equilibrium. An attractive feature of these models is that delays in obtaining agreement (e.g., strikes) emerge in equilibrium. The equilibrium concept for these types of games is perfect

Bayesian equilibrium, where the uninformed player must update his beliefs at each node in a manner consistent with Bayes' rule.35 The problem discussed in Part I of a multiplicity of equilibria is encountered in bargaining models of asymmetric information. The multiplicity-of-equilibria problem is exacerbated if there is two-sided uncertainty (Fudenberg and Tirole 1983).

34Key papers that develop imperfect information models of bargaining are Fudenberg and Tirole (1983) and Sobel and Takahashi (1983).

Almost all of bargaining theory is bilateral. If Rubinstein's model is recast in an n-person bargaining context, there is no longer a unique subgame perfect equilibrium (Sutton

1986).

Application to Cooperative Bargaining in Agriculture36

The noncooperative game theory approach to bargaining has generated some useful insights regarding the bargaining process. The more impatient players do worse. Outside options do not matter if they are small relative to the equilibrium bargaining outcome, and if they are voluntary. Even modest outside options matter, if the choice to pursue the outside option is involuntary. There may be an advantage to moving first in an alternating-offers bargaining environment. Costly delays in failure to reach agreement may be the consequence

35This strand of the bargaining literature can dovetail with games of adverse selection by assuming that the seller knows the value of the good but the buyer does not. The buyer can attempt to infer value, however, based on the seller's bids (Evans 1989, Vincent 1989).

of imperfect information, as players attempt to use the bargaining process to either obtain or

convey information.

36Much of this subsection is based on the on-going Ph.D. thesis work being conducted by Julie Iskow.

In considering the relevance of these highly stylized models to agricultural bargaining,

we should consider how the structure of the bargaining models compares to the agricultural

bargaining environment. Surprisingly perhaps, there is a rather good fit in many industries.

Iskow and Sexton (1990) describe the cooperative bargaining environment in the U.S. fruit and vegetable industries. Although there is certainly variation in structure and practices across the

sector, a number of general principles can be distilled. Nearly all bargaining associations

negotiate for price and other factors related to pricing, such as division of costs for first-handler

services and quality premiums and discounts. In most instances quantity to be sold is fixed prior to bargaining, either because the crop is a perennial or because individual growers have standing

sales contracts with processor/handlers. This point is important because it establishes that in

many cases quantity sold is not a function of the bargaining outcome, i.e., bargaining's fixed pie assumption holds. 37

The percentage of output in the relevant market area controlled by the bargaining association varies across industries. In most cases the association controls in excess of 50% of production in the market, but does not have exclusive control. Associations usually interact with

multiple processors, but the bargaining environment is often structured so that the association

bargains initially with a single handler, often the dominant firm in the industry, and agreements

37This conclusion must be qualified by the observation that the quantity available in future periods may depend upon today's bargaining outcome.

with other handlers closely parallel the initial agreement. This structure, thus, is roughly

bilateral in nature and also conforms to the framework of bargaining theory.

Most of the associations in the Iskow-Sexton survey indicated having some outside

options if bargaining broke down. Most common among these were taking legal action,38

shipping to other processors, and relying on fresh product sales. Processors presumably also

have outside options through external sourcing or sourcing from nonassociation members. Thus,

the outside option feature of bargaining models may be an important feature to understanding

bargaining in agriculture.

In the realm of information, the asymmetry tends to favor processors. Given a volume of crop to be bargained for, the key items of information needed to determine its value are processors' costs and demand conditions for the processed product. Processors are apt to have

superior knowledge of both items. Conversations with association managers confirm this

asymmetry.

Presently we have no conceptual model of bargaining in agriculture apart from

Helmberger and Hoos' adoption of the bilateral monopoly framework.39 My conclusion is that

38Legal action becomes a viable outside option in states that have adopted fair bargaining legislation.

39In joint work with Terri Sexton (Sexton and Sexton 1987) I have examined the interaction between an incumbent monopolist and a potential cooperative entrant. The cooperative entrant in this noncooperative game model can be thought of as a bargaining association in that it consists of a horizontal affiliation of the monopolist's customers that may or may not vertically integrate into production. We showed that the incumbent would in most cases deter this type of entry by charging a "limit price" just low enough to dissipate any benefits to the coalition from integrating into production. This result can be interpreted as a bargaining outcome. In particular, it corresponds to the case where the monopolist can make the coalition a take-it-or-leave-it offer. As such, the monopolist just offers the coalition the value of its outside option (to actually integrate into production) and retains the remainder of its monopoly profits.

the recent progress in analyzing bargaining using noncooperative games can guide us in

constructing bargaining models for agriculture. Two key questions to be addressed are (1) What

are the key factors determining the division of benefits between growers and processors?

Clearly, the bargaining theory results give us some initial insights in this regard, and (2) When

is cooperative bargaining desirable for farmers? The market structure in which bargaining

emerges is generally oligopsony, not monopsony (Iskow and Sexton 1990), but the advent of

bargaining often converts the environment to one approximating bilateral monopoly. Under what

conditions is this shift in market environment good or bad for farmers?

Part III: Cooperative Game Theory and Applications

INTRODUCTION

In shifting gears to focus on cooperative games, recall that players in cooperative games

can form binding agreements that are precluded in noncooperative games. Two-person

cooperative games have been studied, primarily in the context of bargaining (e.g., Nash 1950).

However, the main use of cooperative games has been in studying players' incentives to form

and maintain coalitions, a process that comes into play only in an n-person game context. Here

the modelling approach is to specify the value that each possible coalition of players can obtain,

and on this basis analyze the types of coalition structures that may emerge under alternative

payoff arrangements.

The richest area for application of cooperative games to economics has been in the field

of club theory.40 Clubs are voluntary organizations whose members derive mutual benefit from

their association. Clubs may form to share costs of production, consumption of a good characterized by excludable benefits, or merely each other's characteristics.41 Agricultural cooperatives are an example of clubs formed to share production costs. Irrigation districts are examples of clubs formed to share consumption of a good.

40The first applications of cooperative game theory to the analysis of clubs were by Pauly (1967, 1970).

A second area of economics where cooperative games have been important is the area of public utility pricing. Utilities are club-like organizations in that the members share a decreasing-cost production technology. Membership may not be voluntary, though. Cooperative game theory has proven to be extremely useful in terms of identifying the characteristics of equitable or "subsidy free" pricing schemes for these organizations.42

COOPERATIVE GAME THEORY CONCEPTS

The cornerstone concept to analyzing cooperative games is the characteristic function. This is a function that assigns a value, V, to each possible coalition of players in an n-person game. The function V(S) indicates what payoff the coalition S ⊆ N can achieve on its own irrespective of what players outside S do. The arithmetic of computing V can be rather imposing because 2^n - 1 unique, nonempty coalitions can be formed from the group of n players.43

41 Comprehensive discussions of both game theoretic and nongame theoretic approaches to club theory analysis are available in Sandler and Tschirhart (1980) and Cornes and Sandler (1986).

42The seminal paper on cooperative game theory analysis of public utility pncrng is Faulhaber (1975). Sharkey (1982) summarizes much of the work conducted in this area.

43To state this point another way, there are 2^n - 1 nonempty subsets of the set N containing all n players.
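To make this bookkeeping concrete, the following sketch (in Python) enumerates the 2^n - 1 nonempty coalitions of a three-player game. The characteristic function used here is entirely hypothetical, with value depending only on coalition size, and is meant only to illustrate the combinatorics, not any application discussed in this paper.

    from itertools import combinations

    players = ("A", "B", "C")

    def V(coalition):
        # Hypothetical characteristic function: value depends only on size.
        # Singletons earn 1, pairs earn 3, the grand coalition earns 6.
        return {1: 1.0, 2: 3.0, 3: 6.0}.get(len(coalition), 0.0)

    # Enumerate all 2^n - 1 nonempty coalitions and report V(S) for each.
    coalitions = [frozenset(c)
                  for r in range(1, len(players) + 1)
                  for c in combinations(players, r)]

    for S in coalitions:
        print(sorted(S), V(S))   # prints 7 coalitions when n = 3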

Several comments about the characteristic function are in order. First, critics of cooperative game theory analyses observe that the game description often leaves unspecified the process by which a coalition, S, will achieve V(S). This is an important criticism, particularly when the cooperative game is being used for behavioral analysis. I will return to this point later when discussing application of cooperative game theory to cooperative organizations. Second, for the characteristic function to be represented by a simple scalar function requires that utility be freely transferable among members of a coalition. Transferable utility implies that side payments among players can be made that transfer utility on a one-for-one basis.44

Third, not all cooperative games can be represented in characteristic function form because in some cases what outsiders do will affect what S can achieve, and there is no solid basis to predict what the outsiders will do. Shubik (1982) defines two broad classes of cooperative games where the characteristic function specification is appropriate. These are constant-sum games and games of consent. In constant-sum games the total payoff is fixed.

Thus, any amount S achieves diminishes what N - S can achieve on a one-for-one basis, and it is reasonable to specify V(S) as the amount S can achieve assuming N - S acts in direct opposition to S.

In games of consent the value that S can achieve is independent of what N - S does, i.e., the only impact the players in N - S can have on S is through their decisions whether to join in coalition with S. In games of consent, either players cooperate or they ignore each other; they

44Cooperative games can be analyzed without the assumption of transferable utility and side payments. The characteristic function for the t-person coalition T is not scalar but, rather, consists of the t-dimensional vectors of payoffs that T can guarantee for itself (see, e.g., Friedman 1986).

cannot actively hurt each other. I would argue that most games of coalition building among farmers are games of consent.

Two fundamental properties of the characteristic function are that V(∅) = 0, i.e., the null set earns nothing, and

(6) V(S ∪ T) ≥ V(S) + V(T) for S ∩ T = ∅.

Equation (6) states a superadditivity property. It specifies that two disjoint coalitions can achieve at least as much acting together as on their own. Superadditivity is essential to guarantee incentives to form coalitions. Interestingly, we may encounter cases in economic applications when (6) does not hold. The most common situation would be in a production club like a cooperative when diseconomies of scale set in.

A payoff to the n-person cooperative game is a vector x = (x_1, . . ., x_n) indicating the amount to be received by each player. The essence of cooperative game theory lies in specifying properties that payoffs might be expected to satisfy and then determining which, if any, payoffs satisfy these properties for a given specification of the characteristic function. Two fundamental properties are individual and collective rationality:

(7) x_i ≥ V({i}) for all i ∈ N,

(8) V(N) = Σ_{i∈N} x_i.

Condition (7) merely requires that each player receive at least as much as obtainable on his own, while (8) is a Pareto optimality condition for the group N. Payoffs that satisfy (7) and (8) are called imputations.

Depending upon the nature of V, many different payoffs may satisfy (7) and (8).45 In selecting from among them the concept of dominance is important. An imputation y is said to dominate x through subset S if

(9) y_i > x_i for all i ∈ S, and

(10) Σ_{i∈S} y_i ≤ V(S).

Condition (9) indicates that all members of S prefer their payoff in y relative to x, and (10) indicates that they are capable of achieving at least their collective payoff under y.

Consideration of dominance leads to definition of the core, cooperative game theory's most important "solution" concept. The core to the cooperative game defined by {N, V(S)} consists merely of all undominated imputations. Thus, core payoffs must satisfy (7) and (8) and, based on (9) and (10), the following condition as well:

(11) Σ_{i∈S} x_i ≥ V(S) for all S ⊂ N.

Thus, core payoffs satisfy the individual and collective rationality conditions of an imputation and also an additional subgroup rationality condition specified in (11).

Payoffs that satisfy (7), (8), and (11) satisfy the appealing equilibrium property that no player or group of players has both the incentive and ability to disrupt the proposed outcome.
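As a sketch of how conditions (7), (8), and (11) operate together, the following check (using the same hypothetical size-based characteristic function as in the earlier sketch) tests whether a proposed payoff vector is a core allocation. The numbers are illustrative assumptions, not results from this paper.

    from itertools import combinations

    players = ("A", "B", "C")

    def V(coalition):
        # Same hypothetical size-based characteristic function as above.
        return {1: 1.0, 2: 3.0, 3: 6.0}.get(len(coalition), 0.0)

    def in_core(payoff):
        """Check conditions (7), (8), and (11) for a proposed payoff vector."""
        # (8) collective rationality: the payoff exactly exhausts V(N).
        if abs(sum(payoff.values()) - V(frozenset(players))) > 1e-9:
            return False
        # (7) and (11): every nonempty proper coalition S receives at least V(S).
        for r in range(1, len(players)):
            for c in combinations(players, r):
                if sum(payoff[i] for i in c) < V(frozenset(c)) - 1e-9:
                    return False
        return True

    print(in_core({"A": 2.0, "B": 2.0, "C": 2.0}))   # True: no coalition can block
    print(in_core({"A": 4.0, "B": 1.5, "C": 0.5}))   # False: {B, C} gets 2.0 < V(S) = 3.0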

Problems with the core as a cooperative game theory solution concept are that, depending upon

45Given that (6) holds, at least one payoff must satisfy (7) and (8). If V(N) = Σ_{i∈N} V({i}), the game is said to be inessential. There are absolutely no gains to coalition building, and the only imputation is x_i = V({i}) for all i ∈ N. For essential games there will be multiple imputations.


the specification of {N, V(S)}, no payoff may satisfy the conditions, i.e., the core is empty, or

a great many payoffs satisfy the conditions. Seldom does the core portend a unique solution.

Mere superadditivity, as defined in (6), is not sufficient to guarantee a nonempty core.

The implications of this result are that games may be encountered where there are distinct gains

to forming the grand coalition N, yet no payoff can be achieved that will ensure the stability

of N. Shapley (1971) defined a class of convex games for which a "large" core is known to

exist. Convex games satisfy the following condition:

(12) V(S ∪ {k}) - V(S) ≤ V(T ∪ {k}) - V(T), for S ⊆ T ⊆ N - {k}.

Condition (12) has been likened to a "bandwagon" effect in that it specifies that the incremental

benefit to a player joining a larger coalition is at least as great as the incremental benefit to his joining a smaller coalition.
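A direct, if brute-force, way to verify condition (12) is to compare player k's marginal contribution across every nested pair of coalitions not containing k. The sketch below does so for the hypothetical size-based game used in the earlier sketches; it illustrates the definition only and makes no claim about any particular application.

    from itertools import combinations

    def is_convex(players, V):
        """Check the 'bandwagon' condition (12): for every player k and every
        pair of coalitions S ⊆ T with k outside both, the gain from adding k
        to the larger coalition T is at least the gain from adding k to S."""
        players = list(players)
        for k in players:
            rest = [p for p in players if p != k]
            subsets = [frozenset(c)
                       for r in range(len(rest) + 1)
                       for c in combinations(rest, r)]
            for S in subsets:
                for T in subsets:
                    if S <= T:
                        if V(T | {k}) - V(T) < V(S | {k}) - V(S) - 1e-9:
                            return False
        return True

    # The hypothetical size-based game has increasing marginal contributions
    # (1, then 2, then 3 as coalitions grow), so it is convex:
    print(is_convex(("A", "B", "C"),
                    lambda S: {0: 0.0, 1: 1.0, 2: 3.0, 3: 6.0}[len(S)]))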

Emptiness or nonuniqueness of the core does not restrict its usefulness for some types

of behavioral analysis. For example, identifying a game situation where the superadditivity

condition (6) is satisfied yet the core is empty implies a coalition structure in a state of flux.

Players have incentives to merge into the grand coalition by virtue of (6), yet no satisfactory

way can be found to allocate the benefits from the grand coalition. 46

In games when the core is known to be nonempty (e.g., when (12) is satisfied), the analyst is assured that a stable solution can be achieved, and interest may turn to analyzing whether specific allocation schemes satisfy the core conditions.

46When the core of a game is empty, game theorists have developed the concept of the epsilon core, which consists of imputations that are within ε of being in the core, i.e., under the epsilon core an imputation must award each subset, S, at least V(S) - ε rather than V(S). Obviously, for large enough ε the epsilon core will be nonempty.


Thus, the core can be an important tool for behavioral analysis of club-type

organizations. It also can be a powerful tool of normative analysis. The reason is the precise

linkage between the requirements for the core and the requirements for a payment scheme to be

subsidy free . A vector of payments (charges) is subsidy free if no person or group is receiving

less (paying more) than is "fair." An obvious way of operationalizing "fair" is to equate it with

stand-alone costs or revenues a group could achieve on its own.

The example employed by Faulhaber (1975) in his seminal work on this topic was a

water utility seeking to achieve a breakeven allocation of costs among multiple municipalities.

A single city's stand-alone cost in this context is its cost of unilaterally supplying itself with

water. If the charge to city A exceeds its stand-alone cost, city A is cross subsidizing the other

cities in the utility. Of course, this principle should be extended to encompass subgroups of the

cities, so that cities A and B should not be charged more collectively than the cost they would

incur in a joint water project.

Faulhaber's fundamental insight was that the conditions for a subsidy-free allocation were

precisely the conditions defining a core allocation for the cooperative game of water utility

formation. Thus, the behavioral analyst might use the core in this context to predict instability

or dissolution of a water utility that persisted in implementing a noncore allocation scheme. The

welfare economist might use the core to evaluate the equity properties of the utility's pricing

scheme in the case where, say, the utility was a government franchise monopoly.
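The parallel between subsidy-free charges and core allocations can be illustrated with a small numerical sketch. The stand-alone costs below are hypothetical and are not taken from Faulhaber (1975); the check simply restates the core conditions in cost terms: a breakeven charge vector is subsidy free if no city or group of cities pays more than its stand-alone cost.

    # Hypothetical stand-alone costs for a three-city water project (illustrative only).
    stand_alone_cost = {
        frozenset({"A"}): 100.0, frozenset({"B"}): 80.0, frozenset({"C"}): 60.0,
        frozenset({"A", "B"}): 150.0, frozenset({"A", "C"}): 140.0,
        frozenset({"B", "C"}): 120.0, frozenset({"A", "B", "C"}): 180.0,
    }

    def subsidy_free(charges):
        """A breakeven charge vector is subsidy free (equivalently, a core
        allocation of the cost game) if charges sum to the joint cost and no
        group of cities pays more than its stand-alone cost."""
        joint_cost = stand_alone_cost[frozenset(charges)]
        if abs(sum(charges.values()) - joint_cost) > 1e-9:
            return False
        return all(sum(charges[c] for c in group) <= cost + 1e-9
                   for group, cost in stand_alone_cost.items())

    print(subsidy_free({"A": 80.0, "B": 60.0, "C": 40.0}))   # True
    print(subsidy_free({"A": 110.0, "B": 40.0, "C": 30.0}))  # False: A pays more than its stand-alone cost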

If for whatever reason the analyst desires a unique solution to an n-person cooperative

game, game theory has several to offer. Two that have received some application in economics are the Shapley value (Shapley 1953) and the nucleolus (Schmeidler 1969).47 The Shapley value allocates to each player a weighted average of his contribution to each possible coalition.48

Interestingly, the allocation generated from application of the Shapley value may not be an element of the core even when the core is known to be nonempty. However, it is an element of the core of a convex game.

Under the nucleolus, a group's characteristic function value less its proposed payoff is considered that group's objection to the payoff. The nucleolus is the payoff that lexicographically minimizes the maximum objection over all coalitions relative to any other allocation. By construction the nucleolus is part of any nonempty core.49
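The Shapley value formula in footnote 48 can be computed directly by enumerating coalitions. The sketch below does so for the hypothetical symmetric game used in the earlier sketches, where each player simply receives an equal share of V(N); it illustrates the formula itself rather than any specific application.

    from itertools import combinations
    from math import factorial

    def shapley_value(players, V):
        """Weighted average of each player's marginal contribution V(S) - V(S - {i})
        over all coalitions S containing i, with weights (s-1)!(n-s)!/n!."""
        players = list(players)
        n = len(players)
        phi = {}
        for i in players:
            others = [p for p in players if p != i]
            total = 0.0
            for r in range(len(others) + 1):
                for c in combinations(others, r):
                    S = frozenset(c) | {i}
                    s = len(S)
                    weight = factorial(s - 1) * factorial(n - s) / factorial(n)
                    total += weight * (V(S) - V(S - {i}))
            phi[i] = total
        return phi

    # For the symmetric hypothetical game, each player receives V(N)/3 = 2.0:
    print(shapley_value(("A", "B", "C"),
                        lambda S: {0: 0.0, 1: 1.0, 2: 3.0, 3: 6.0}[len(S)]))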

APPLICATIONS TO COOPERATIVE ORGANIZATIONS IN AGRICULTURE

Agricultural cooperatives are horizontal affiliations of farmers for the purpose of achieving vertical integration in the market chain. A purchasing or supply cooperative provides upstream integration for its members, while a marketing cooperative provides downstream integration.

Cooperative game theory has been used by Staatz (1983) and Sexton (1986) to analyze farmers' incentives to form cooperatives. Both studies focused on purchasing cooperatives and

47See Gately (1974) for application of the core and Shapley value to interregional investment in electric power in India, Suzuki and Nakayama (1976) for application of the nucleolus to cooperative provision of water resources among Japanese cities, and Littlechild and Thompson (1977) on use of both the Shapley value and nucleolus to compute aircraft landing fees at an airport.

48The Shapley value payoff is as follows: x_i = Σ_{S⊆N: i∈S} [(s - 1)!(n - s)!/n!][V(S) - V(S - {i})], where s and n denote the number of members in S and N, respectively.

49The nucleolus is hard to express in equation form but can be computed from a series of linear programs.

considered the core as a solution concept. Adapting the methodology to analyze marketing cooperatives, however, poses no additional problems.

In formulating the characteristic function for this type of game it is natural and useful to consider that farmers have some outside option other than the proposed cooperative for marketing their production or purchasing their inputs. Focusing on the marketing cooperative case, the characteristic function value for a coalition S ⊂ N is determined by the price, net of any marketing costs, the group can achieve for itself either through forming its own cooperative, bargaining with processors, or simply selling through existing market channels. Coalitions that cannot do better than the prevailing price are impotent and should be assigned V(S) = 0. The characteristic function for all other coalitions is then the value the coalition can achieve for its members over and above that attainable through normal market channels (e.g., selling to for-profit handlers).50
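One minimal way to operationalize this characteristic function is sketched below. The price schedule, marketing cost function, and volumes are all hypothetical assumptions chosen to illustrate the idea that small coalitions may be impotent (V(S) = 0) while larger ones capture a gain over the prevailing market price; they are not drawn from Staatz (1983) or Sexton (1986).

    def coop_net_price(q):
        # Hypothetical net price the cooperative can return per unit of raw product:
        # processed value of 10.0 less an average marketing cost that falls with volume.
        return 10.0 - (2.0 + 12.0 / max(q, 1e-9))

    def coalition_value(volumes, market_price=7.0):
        """V(S): the gain a coalition with the given member volumes can capture
        over selling through existing market channels; impotent coalitions get 0."""
        q = sum(volumes)
        gain = (coop_net_price(q) - market_price) * q
        return max(0.0, gain)

    print(coalition_value([5.0]))             # 0.0: a lone grower cannot beat the market price
    print(coalition_value([5.0, 5.0, 5.0]))   # 3.0: a larger coalition captures a positive surplus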

Sexton (1986), drawing upon club theory results due to Sorenson, Tschirhart, and

Whinston (1978), discusses core existence results for the purchasing cooperative formation game. The key results can be stated succinctly: When marginal costs of procuring the input for a purchasing cooperative are nonincreasing throughout the range of production, the game is convex and a "large" core exists. The core is also nonempty for ranges of production where average total cost is decreasing even though marginal cost is increasing, although the game is no longer convex. When production takes place in the range of increasing unit costs, existence

50Specification of the characteristic function in terms of coalitions' actual market alternatives addresses the criticism that cooperative game models are often vague on this point.

of a nonempty core cannot be assured. V( ) may still be superadditive over all subsets of N, but the core, nonetheless, may be empty.

Continuing to expand production into the range of diseconomies of size ultimately will cause V( ) to no longer be superadditive across N. At this point a single cooperative is no longer optimal, and the emergence of multiple cooperatives is predicted. The conditions whereby multiple cooperatives will have a stable membership, though, are unlikely to be obtained, and instability of membership is likely to result.51,52

Given the conditions on core existence for cooperative organizations, an interesting and important analysis would be to evaluate common pricing schemes relative to the conditions for the core from both a behavioral and normative perspective. Although concerns over cooperatives' financing schemes are often expressed, little analysis has been conducted in this vein. One result suggestive of the possible problems with cooperatives' financing methods is that traditional pure patronage finance need not generate a core allocation under even the most favorable conditions, namely when the cooperative game is convex (Sexton 1986).53,54

51 For those familiar with cooperative theory, a similar set of core existence results for marketing cooperatives can be attained from the preceding two paragraphs essentially by substituting net marginal revenue product in place of marginal costs, net average revenue product in place of average total costs, and substituting increasing for decreasing and vice versa.

52The reason for instability is that it is unlikely that players can be partitioned into equally efficient cooperatives. Failure to do so implies attempts by players to enter (exit) the more (less) efficient cooperatives. These considerations, in turn, may induce closed membership policies.

53The specific scheme examined was a two part tariff consisting of a per unit price to members set equal to marginal cost, plus a fixed (membership) fee set in proportion to each member's patronage with the co-op. Marginal cost pricing is necessary to achieve the Pareto optimality condition (8) of the core, but it necessarily incurs a deficit when marginal cost is decreasing.

Although the analysis has yet to be conducted, I believe a cooperative game analysis of marketing orders has the potential to yield valuable insights. Because marketing orders are governmental creations, membership in them is not voluntary, so some of the behavioral implications of the core are less applicable to analysis of marketing orders than to voluntary cooperatives.55 The considerable wrangling that often takes place on marketing order boards is an indication that marketing order provisions are not beneficial to all members.

For example, many orders have provisions enabling the order to mandate storage of a given percentage of the year's harvest or diversion of a given percentage of the harvest to a secondary market. These allocations are imposed across the board without regard to individual participants' costs and benefits from complying with the provisions. In essence, these policies have the potential to induce cross subsidization among participants in the order.

Conclusions

This paper has surveyed cooperative and noncooperative game theory concepts that might be useful for analyzing vertical coordination problems in agriculture. Over the last 10 to 15 years game theory has emerged as a prominent analytical tool among economists. Yet, in thumbing through various agricultural economics journals in the process of writing this paper I was reminded of how infrequently these methods were being utilized by agricultural economists.

54Pooling practices can also be studied within a cooperative game context. For example, cooperatives that process multiple products often pool at least partially the revenues from sale of these products. A game could be configured with growers of a particular product treated as a single, composite player. In this manner n is kept to a manageable number.

55However, in analyzing the decision to form a marketing order, cooperative game theory and the core might be used to characterize which players or groups of players will benefit or be harmed from the proposed order. These results could then be used to predict the voting outcome on the marketing order referendum.

Agricultural economics is fundamentally an applied field and game theory is a tool of economic theory, so perhaps the infrequency of usage is not surprising. Another factor is that many continue to regard agricultural markets as prototypical competitive markets, and game theory is a tool of imperfect competition.

I reject this latter argument for the infrequency of use of game theory in agricultural economics and will not repeat the bases for this rejection given at the outset of Part II of this paper. I agree, though, that agricultural economics is and should remain an applied field.

However, most would accept theory's fundamental importance in guiding application, and it is my opinion that agriculture as an industry is unique to the point where we cannot always rely upon theory developed without consideration of the unique features of agricultural markets.

First, concerns over monopsony or oligopsony power are relatively unique to agriculture, given the typical immobility of the raw product and the fewness of processors. Second, the fact that the marketing process for agricultural products is initiated by the production and sale of a particular raw product that is relatively nonsubstitutable for other inputs is also unique. Third, at the retail level, the emerging power of the large food chains is important and relatively unique. Given that manufacturers are also often powerful, this consideration raises important bilateral monopoly/oligopoly issues. Fourth, agriculture is quite unique among industries in that producers are allowed, even encouraged, to form coalitions for the purposes of procuring inputs and marketing production.

Thus, my conclusion is that there is considerable scope for application of game theory tools to agricultural markets, and it is unlikely that economists outside of agriculture will fully develop these applications. In closing I do not want to oversell game theory's virtues.

Although the subject is certainly in vogue among economists and probably will become even more popular as it integrates fully into graduate curricula, even the intellectual giants of the field such as Kreps warn of its overapplication. I hope this presentation has been even-handed in indicating both the strengths and weaknesses of game theory as a tool of economic analysis.

REFERENCES

Akerlof, G. (1970) "The Market for Lemons: Quality Uncertainty and the Market Mechanism." Quarterly Journal of Economics, 84:488-500.

Arrow, K. (1985) "The Economics of Agency." in Principals and Agents: The Structure of Business, J. Pratt and R. Zeckhauser, eds., Boston: Harvard Business School Press.

Axelrod, R. (1984) The Evolution of Cooperation. New York: Basic Books.

Azzam, A. and Pagoulatos, E. (1990) "Testing Oligopolistic and Oligopsonistic Behavior: An Application to the U.S. Meat Packing Industry." Journal of Agricultural Economics, 41:362-70.

Banks, J. and Sobel, J. (1987) "Equilibrium Selection in Signaling Games." Econometrica, 55:647-61.

Bernheim, B.D. and Whinston, M. (1985) "Common Marketing Agency as a Device for Facilitating Collusion." Rand Journal of Economics, 16:269-81.

Biggs, G.W. (1982) "Status of Bargaining Cooperatives." U.S. Department of Agriculture, ACS Research Report No. 16.

Cho, I-K., and Kreps, D.M. (1987) "Signalling Games and Stable Equilibria." Quarterly Journal of Economics, 102:179-221.

Connor, J.M., Rogers, R.T., Marion, B.W., and Mueller, W.F. (1985) The Food Manufacturing Industries: Structures, Strategies, Performance, and Policies. Lexington, Massachusetts: D.C. Heath.

Cornes, R. and Sandler, T. (1986) The Theory of Externalities, Public Goods, and Club Goods. Cambridge: Cambridge University Press.

Dasgupta, P. and Maskin, E. (1986) "The Existence of Equilibrium in Discontinuous Economic Games, I: Theory." Review of Economic Studies, 53: 1-26.

Durham, C. (1991) "Analysis of Competition in the Processing Tomato Market." Ph.D. Dissertation, University of California, Davis.

Evans, R. (1989) "Sequential Bargaining with Correlated Values." Review of Economic Studies, 56:499-510.

Faulhaber, G. (1975) "Cross Subsidization: Pricing in Public Enterprise." American Economic Review, 65:966-77.

Friedman, J. (1971) "A Noncooperative Equilibrium for Supergames." Review of Economic Studies, 38: 1-12.

Friedman, J. (1986) Game Theory with Applications to Economics. New York: Oxford University Press.

Fudenberg, D. and Tirole, J. (1983) "Sequential Bargaining with Incomplete Information." Review of Economic Studies, 50:221-47.

Fudenberg, D. and Tirole, J. (1988) "Perfect Bayesian and Sequential Equilibrium." Massachusetts Institute of Technology, mimeo.

Fudenberg, D. and Tirole, J. (1989) "Noncooperative Game Theory for Industrial Organization: An Introduction and Overview" in Handbook of Industrial Organization, Vol 1, R. Schmalensee and R. Willig eds., Amsterdam: North Holland.

Fudenberg, D., Kreps, D.M., and Levine, D. (1988) "On the Robustness of Equilibrium Refinements." Journal of Economic Theory, 44:354-80.

Gately, D. (1974) "Sharing the Gains From Regional Cooperation: A Game Theoretic Application to Planning Investment in Electric Power." International Economic Review, 15:195-208.

Green, E. and Porter, R. (1984) "Noncooperative Collusion Under Imperfect Price Information." Econometrica, 54:975-94.

Greenhut, M.L., Norman, G., and Hung, C-H. (1987) The Economics of Imperfect Competition: A Spatial Approach. Cambridge: Cambridge University Press.

Grossman, S., and Perry, M. (1986) "Perfect Sequential Equilibria." Journal of Economic Theory, 39:97-119.

Grossman, S., and Hart, 0. (1983) "An Analysis of the Principal-Agent Problem." Econometrica, 51:7-45.

Grossman, S., and Hart, 0. ( 1986) "The Costs and Benefits of Ownership: A Theory of Vertical and Lateral Integration." Journal of Political Economy, 94:691-719.

Harsanyi, J. (1967) "Games with Incomplete Information Played by 'Bayesian' Players, I: The Basic Model." Management Science, 14: 159-82.

Hart, O. and Holmstrom, B. (1987) "The Theory of Contracts." in Advances in Economic Theory: Fifth World Congress, T. Bewley, ed., Cambridge: Cambridge University Press.

Helmberger, P. and Hoos, S. (1965) "Cooperative Bargaining in Agriculture: Grower-Processor Markets for Fruits and Vegetables." University of California, Division of Agricultural Sciences.

Iskow, J. and Sexton, R.J. (1990) "Status of Cooperative Bargaining for Fruits and Vegetables in the United States." U.S. Department of Agriculture, ACS Service Report No. 28.

Katz, M. (1989) "Vertical Contractual Relations." in Handbook of Industrial Organization, Vol 1, R. Schmalensee and R. Willig, eds., Amsterdam: North Holland.

Kohlberg, E., and Mertens, J. (1986) "On the Strategic Stability of Equilibria." Econometrica, 54:1003-7.

Kreps, D.M. (1990a) Game Theory and Economic Modelling. New York: Oxford University Press.

Kreps, D.M. (1990b) A Course in Microeconomic Theory. Princeton: Princeton University Press.

Kreps, D.M. and Wilson, R. (1982a) "Sequential Equilibrium." Econometrica, 50:863-94.

Kreps, D.M. and Wilson, R. (1982b) "Reputation and Imperfect Information." Journal of Economic Theory, 27:253-79.

Kreps, D.M., Wilson, R., Milgrom, P., and Roberts, J. (1982) "Rational Cooperation in the Finitely Repeated Prisoners' Dilemma." Journal of Economic Theory, 27:245-52.

Lang, M.G. (1977) "Collective Bargaining Between Producers and Handlers of Fruits and Vegetables for Processing: Setting Laws, Alternative Rules, and Selected Consequences." Ph.D. Dissertation, Michigan State University.

Littlechild, S.C. and Thompson, G.F. (1977) "Aircraft Landing Fees: A Game Theoretic Approach." Bell Journal of Economics, 8:186-204.

Luce, R.D. and Raiffa, H. (1957) Games and Decisions: Introduction and Critical Survey. New York: John Wiley.

Marion, B., ed. (1986) The Organization and Performance of the U.S. Food System. Lexington, Massachusetts: D.C. Heath.

Matthewson, G.G. and Winter, R.A. (1984) "An Economic Theory of Vertical Restraints." Rand Journal of Economics, 15:27-38.

McLaughlin, E.W. and Rao, V.R. (1990) "The Strategic Role of Supermarket Buyer Intermediaries in New Product Selection: Implications for System-Wide Efficiency." American Journal of Agricultural Economics, 72:358-70.

McLennan, A. (1985) "Justifiable Beliefs in Sequential Equilibrium." Econometrica, 53:889-904.

Milgrom, P. and Roberts, J. (1982a) "Limit Pricing and Entry under Incomplete Information: An Equilibrium Analysis." Econometrica, 50:443-59.

Milgrom, P. and Roberts, J. (1982b) "Predation, Reputation, and Entry Deterrence." Journal of Economic Theory, 27:280-312.

Nash, J. (1950) "The Bargaining Problem." Econometrica, 18: 155-62.

Nash, J. (1951) "Non-Cooperative Games." Annals of Mathematics, 54:286-95.

Pauly, M.V. (1967) "Clubs, Commonality, and the Core: An Integration of Game Theory and the Theory of Public Goods." Economica, 34:314-24.

Pauly, M. V. (1970) "Cores and Clubs." Public Choice, 9:53-65.

Rasmusen, E. (1989) Games and Information: An Introduction to Game Theory. Oxford: Basil Blackwell.

Rey, P. and Tirole, J. (1986) "The Logic of Vertical Restraints." American Economic Review 76:921-39.

Rogerson, W. (1985) "The First-Order Approach to Principal-Agent Problems." Econometrica, 53: 1357-68.

Rosenthal, R. (1981) "Games of Perfect Information, Predatory Pricing, and the Chain-Store Paradox." Journal of Economic Theory, 25:92-100.

Roth, A. (1979) "Axiomatic Models of Bargaining." Lecture Notes in Economics and Mathematical Systems, 170, Berlin: Springer-Verlag.

Rubinstein A. (1982) "Perfect Equilibrium in a Bargaining Model." Econometrica, 50:97-109.

Sandler, T. and Tschirhart, J. (1980) "The Economic Theory of Clubs: An Evaluative Survey." Journal of Economic Literature, 18:1481-1521.

Schelling, T. (1960) The Strategy of Conflict. Cambridge, Massachusetts: Harvard University Press.

Schmalensee, R. and Willig, R., eds. (1989) Handbook of Industrial Organization. Amsterdam: North Holland.

Schmeidler, D. (1969) "The Nucleolus of a Characteristic Function Game." SIAM Journal of Applied Mathematics, 17: 1163-70.

Schroeter, J.R. (1988) "Estimating the Degree of Market Power in the Beef Packing Industry." Review of Economics and Statistics, 70: 158-62.

Schroeter, J.R. and Azzam, A. (1990) "Measuring Market Power in Multi-Product Oligopolies: The U.S. Meat Industry." Applied Economics, 22:1365-1376.

Selten, R. (1975) "Re-examination of the Perfectness Concept for Equilibrium Points in Extensive Games." International Journal of Game Theory, 4:25-55.

Selten, R. (1978) "The Chain-Store Paradox." Theory and Decision, 9:127-59.

Sexton, R.J. (1986) "The Formation of Cooperatives: A Game-Theoretic Approach with Implications for Cooperative Finance, Decision Making, and Stability:" American Journal of Agricultural Economics, 68:214-25.

Sexton, R.J. (1990) "Imperfect Competition in Agricultural Markets and the Role of Cooperatives: A Spatial Analysis." American Journal of Agricultural Economics, 72:709-20.

Sexton, R.J. and Sexton, T.A. (1987) "Cooperatives as Entrants." Rand Journal of Economics, 18:581-95.

Shaked, A. and Sutton J. (1984) "Involuntary Unemployment as a Perfect Equilibrium in a Bargaining Model." Econometrica, 52: 1351-64.

Shapley, L. (1953) "A Value for n-Person Games." in Contributions to the Theory of Games, H. Kuhn and A. Tucker, eds., Vol. I. Annals of Mathematics Studies, No. 28. Princeton: Princeton University Press.

Shapley, L. (1971) "Cores and Convex Games." International Journal of Game Theory, 1: 11-26.

Sharkey, W.W. (1982) The Theory of Natural Monopoly. Cambridge: Cambridge University Press.

Shubik, M. (1982) Game Theory in the Social Sciences: Concepts and Solutions. Cambridge, Massachusetts: MIT Press.

Shubik, M. (1984) A Game-Theoretic Approach to Political Economy. Cambridge, Massachusetts: MIT Press.

Sobel, J. and Takahashi, I. (1983) "A Multi-Stage Model of Bargaining." Review of Economic Studies, 50:411-26.

Sorenson, J.R., Tschirhart, J., and Whinston, A.B. (1978) "Private Good Clubs and the Core." Journal of Public Economics, 10:77-95.

Spence, A.M. (1973) "Job Market Signalling." Quarterly Journal of Economics, 87:355-74.

Spengler, J.J. (1950) "Vertical Integration and Antitrust Policy." Journal of Political Economy, 58:347-52.

Staatz, J.M. (1983) "The Cooperative as a Coalition: A Game-Theoretic Approach." American Journal of Agricultural Economics, 65:1084-89.

Sutton, J. (1986) "Non-Cooperative Bargaining Theory: An Introduction." Review of Economic Studies, 53:709-24.

Suzuki, M. and Nakayama, M. (1976) "The Cost Assignment of the Cooperative Water Resource Development: A Game Theoretic Application." Management Science, 22:1081-86.

Tirole, J. (1988) The Theory of Industrial Organization. Cambridge, Massachusetts: MIT Press.

Van Damme, E. (1987) Stability and Perfection of Nash Equilibrium. Berlin: Springer-Verlag.

Vernon, J. and Graham, D. (1971) "Profitability of Monopolization by Vertical Integration." Journal of Political Economy, 79:924-25.

Vincent, D.R. (1989) "Bargaining with Common Values." Journal of Economic Theory, 48:47-62.

von Neumann, J. and Morgenstern, O. (1944) The Theory of Games and Economic Behavior. New York: John Wiley.

Wann, J.J. (1990) "Imperfect Competition in Multiproduct Industries with Application to California Pear Processing." Ph.D. Dissertation, University of California, Davis.

Williamson, O. (1971) "The Vertical Integration of Production: Market Failure Considerations." American Economic Review, 61:112-23.

Williamson, O. (1983) "Credible Commitments: Using Hostages to Support Exchange." American Economic Review, 73:519-40.

Wohlgenant, M.K. (1989) "Demand for Farm Output in a Complete System of Demand Functions." American Journal of Agricultural Economics, 71:241-52.

Figure 1. An Extensive Form Game: Post-Contractual Opportunism

[Extensive-form game tree not reproduced; players: Grower and Agent; branch labels include Contract and Low; payoffs include (2.0, 0), (1.5, 0.5), and (2.5, 0).]

Figure 2. Multiple Nash Equilibria: The Battle of the Sexes

                           WOMAN
                    Prize fight     Ballet
MAN    Prize fight     2, 1         -1, -1
       Ballet         -5, -5         1, 2

Payoffs to: (MAN, WOMAN)

Figure 3. Multiple Nash Equilibria: Entry Deterrence

[Game tree not reproduced; the Entrant chooses whether to enter (IN), and the Monopolist responds with Predate or Accommodate.]

Figure 4. Coordination Games Between Farmers

(a) Simultaneous Choices
(b) Sequential Choices

[Payoff matrix and game tree for Farmer A and Farmer B not reproduced; payoffs include (2.0, 1.5), (3.0, 2.0), (2.5, 2.5), and (1.5, 1.0).]

Figure 5. The Centipede Game

[Game tree not reproduced; terminal payoffs include (0, 0), (-1, 3), (2, 2), (4, 4), (3, 7), (6, 6), (8, 8), (7, 11), and (10, 10).]

Figure 6. Coordination Between Farmers Under Incomplete Information

(a) Player A is uninformed
(b) Player B is uninformed

[Payoff matrices not reproduced; type labels include Maximizer and Mean; payoffs include (2.0, 1.5), (3.0, 2.0), (2.5, 2.5), (1.5, 1.0), (2.0, 5.0), (3.0, 0), (2.5, 0), and (1.5, 4.0).]

Figure 7. An Agricultural Market Chain

[Flow diagram of the market chain: raw material production (iron, crude oil, timber); raw materials processing (steel, refined oil, lumber); farm input manufacturing (tractors and implements; gas, oil, lubricants; wagons, hand tools); farm retail supply; farm production (livestock, dairy, grain, fruits and vegetables); food processing (fresh and processed meats; fluid milk, butter, cheese; milled grains, cereals, breads; canned, frozen and dried fruits and vegetables); food wholesaling; food service outlets (restaurants, hospitals, schools); food retailing; consumers.]

Figure 8. Akerlof's Lemon Model

[Game tree not reproduced; branches for a Good Car and a Lemon, each with an offer price P; payoffs include (0, 0), (6,000 - P, P - 6,000), and (2,000 - P, P - 2,000).]