Investment under uncertainty, competition and regulation

Published in the Journal of Dynamics and Games, AIMS, 2014, 1 (4), pp. 579–598. HAL Id: hal-00831263v4.

Adrien Nguyen Huu∗1

1 IMPA, Estrada Dona Castorina 110, Rio de Janeiro 22460-320, Brasil

February 3, 2014

Abstract

We investigate a randomization procedure undertaken in real option games which can serve as a basic model of regulation in a duopoly model of preemptive investment. We recall the rigorous framework of M. Grasselli, V. Leclère and M. Ludkovski (Priority Option: the Value of Being a Leader, International Journal of Theoretical and Applied Finance, 16, 2013), and extend it to a random regulator. This model generalizes and unifies the different competitive frameworks proposed in the literature, and creates a new one similar to a Stackelberg leadership. We fully characterize strategic interactions in the several situations following from the parametrization of the regulator. Finally, we study the effect of the regulator and of the uncertainty of outcome when agents are risk-averse, providing new intuitions for the standard case.

1 Introduction

Significant progress on the net present valuation method has been made by real option theory for new investment valuation. The latter uses recent methods from stochastic finance to price uncertainty and competition in complex investment situations. Real option games are the part of this theory dedicated to the competitive dimension. Following the seminal work of Smets [12], they correspond to an extensively studied situation, where two or more economic agents face over time a common project to invest into, and where there might be an advantage to being a leader (preemptive game) or a follower (attrition game). Grenadier [8], Paxson and Pinto [11] or Weeds [16], among many others, develop nice examples of this type of model. On these problems and the related literature, Azevedo and Paxson [1] or Chevalier-Roignant et al. [5] provide comprehensive reviews.

Real option games are especially dedicated to new investment opportunities following R&D developments, such as technological products, a new drug or even real estate projects. The latter investment opportunities indeed add competition risk to the uncertainty of future revenues in the strategic behavior of investors. The regulatory dimension is also an important part of such investment projects, and the present paper attempts to contribute in this direction.

The economic situation can be the following. We consider an investment opportunity on a new market for a homogeneous good, available for two agents labeled one and two. This investment opportunity is not constrained in time, but the corresponding market is regulated by an outside agent. Both agents have the same model for the project's revenues, and both also have access to a financial market with one risky asset (a market portfolio) and a risk-free asset (a bank account). The situation is a non-cooperative duopoly of preemptive timing game for a single investment opportunity.

∗Corresponding author: [email protected]

A simple approach to regulation is to consider that when an agent wants to take advantage of such an opportunity, he must satisfy some non-financial criteria which are scrutinized by the regulator. Even though a model of regulatory decision is at the heart of this matter, we take the most simple and naive approach: we just assume that the regulator can accept or refuse to let an agent proceed with the investment project, and that this decision is taken randomly. This extremely simple model finds its root in a widespread convenient assumption. In the standard real option game representing a Stackelberg duopoly, the settlement of leadership and followership is decided via the flip of a fair coin, independent of the model. The procedure is used for example in Weeds [16], Tsekrekos [13] or Paxson and Pinto [11], who all refer to Grenadier [8] for the justification, which appears in a footnote for a real estate investment project:

A potential rationale for this assumption is that development requires approval from the local government. Approval may depend on who is first in line or arbitrary considerations.

This justification opens the door to many alterations of this assumption, inspired from other similar economic situations: the aforementioned considerations might lead to favoring one competitor over the other, or lead to a different outcome.

Let us take a brief moment to describe one of those economic situations we have in mind. Assume that two economic agents are running for the investment in a project with the possibility of simultaneous investment, as in Grasselli et al. [7]. In practice, even if they are accurately described as symmetrical, they would never act at the exact same time, since instantaneous action in a continuous time model is just an idealized situation. Instead, they show their intention to invest to a third party (a regulator, a public institution) at approximately the same time. An answer to a call for tenders is a typical example of such interactions. Then the arbitrator is invoked to judge the validity of these intentions. For example, he can evaluate which agent is the most suitable to be granted the project as a leader regarding qualitative criteria. This situation arises in particular when environmental or health requirements are at stake. When simultaneous investment is impossible, the real estate market example of Grenadier [8] can also be cited again, bearing in mind that, in addition to safety constraints, an aesthetic or confidence indicator can intervene in the decision of a market regulator. Because these criteria can be numerous, an approach via randomness of the decision is a decent first approximation. In the extremal case, the arbitrator can be biased and show his preference for one agent. In general, this leads to an asymmetry in the chance of being elected leader and to an unhedgeable risk, and perfectly informed agents should take this fact into account in their decision to invest or to defer. Those are some of the situations the introduction of a random regulator can shed some light on.

Our model is presented in a very particular way by using a normal form game in an extended natural virtual time, as initially proposed by Fudenberg and Tirole [6], and followed by Thijssen et al. [15] or Grasselli et al. [7]. This model bears a very specific interpretation, but appears to encompass in a continuous manner the setting of Smets [12] and the Stackelberg competition of Grenadier [8].
The introduction of a regulator also renders the market incomplete. We then follow two classical approaches, namely the risk-neutral and the risk-averse cases respectively. The latter case actually brings up new insights on the strategic interactions of players in the Cournot and Stackelberg settings. With respect to the existing literature, the present paper is thus a contribution to the mathematical clarification of the standard real option game, with new angles.

Let us present how the remainder of the paper proceeds. Section 2 introduces the standard model and its extension to the random regulatory framework. Section 3 provides the study of the model and optimal strategies in the general case. In Section 4, we present how the proposed model encompasses the usual types of competition, and propose a new asymmetrical situation related to the Stackelberg competition framework. Section 5 introduces risk-aversion in agents' evaluation, to study the effect of a random regulator and the uncertain outcome of a coordination game on the equilibrium strategies of two opponents. Section 6 concludes with criticism and possible extensions.

2 The model

2.1 The investment opportunity

We assume for the moment that the regulator does not intervene. The framework is thus the standard one that can be found for example in Grasselli et al. [7]. Notations and results of this section follow from the latter, which will be cited repeatedly in the present article.

We consider a stochastic basis (Ω, F, F, P) where F := (Ft)t≥0 is the filtration of a one-dimensional Brownian motion (Wt)t≥0, i.e., Ft := σ{Ws, 0 ≤ s ≤ t}. The project delivers for each involved agent a random continuous stream of cash-flows (DQ(t)Yt)t≥0. Here, Yt is the stochastic profit per unit sold and DQ(t) is the quantity of units sold per agent actively involved on the market, when Q(t) agents are actively participating at time t. The increments of (Q(t))t≥0 inform on the timing decisions of agents, and we assume that (Q(t))t≥0 is an F-adapted right-continuous process: agents are informed of a competitor's entry into the market, and expected future cash-flows can be computed. It is natural to assume that being alone in the market is better than sharing it with a competitor:

0 =: D0 < D2 < D1, (1)

but we also assume these quantities to be known and constant. The process (Yt)t≥0 is F-adapted, non-negative and continuous with dynamics given by

dYt = Yt(νdt + ηdWt), t ≥ 0 , (2)

with (ν, η) ∈ R × R+*. We assume that (Yt)t≥0 is perfectly correlated to a liquid traded asset whose price dynamics are given by

dPt = Pt(µdt + σdWt) = Pt(rdt + σdWt^Q) (3)

where (µ, σ) ∈ R × R+*, r is the constant interest rate of the risk-free bank account available for both agents, and Wt^Q := Wt + λt is a Brownian motion under the unique risk-neutral measure Q ∼ P of the arbitrage-free market. The variable λ := (µ − r)/σ in (3) is the Sharpe ratio. The present financial setting is thus the standard Black-Scholes-Merton model [2] of a complete and perfect financial market.
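For concreteness, the market primitives can be evaluated and the profit process simulated directly. The following minimal Python sketch uses illustrative parameter values (of the same order of magnitude as those reported later in Figure 1) and is not part of the original analysis.

```python
import numpy as np

# Illustrative parameter values (not prescribed at this point of the paper).
mu, sigma, r = 0.04, 0.3, 0.03     # market portfolio drift, volatility, risk-free rate
nu, eta = 0.01, 0.2                # drift and volatility of the profit process Y, eq. (2)
y0, T, n = 1.0, 5.0, 1000          # initial level, horizon and number of time steps

lam = (mu - r) / sigma             # Sharpe ratio, lambda = (mu - r) / sigma
delta = eta * lam - (nu - r)       # shortfall rate delta := eta*lambda - (nu - r), used in Section 2.2

# One path of Y under the physical measure P, eq. (2): dY = Y (nu dt + eta dW).
dt = T / n
rng = np.random.default_rng(0)
dW = rng.normal(0.0, np.sqrt(dt), size=n)
Y = y0 * np.exp(np.cumsum((nu - 0.5 * eta**2) * dt + eta * dW))

print(f"lambda = {lam:.4f}, delta = {delta:.4f}, Y_T on this path = {Y[-1]:.4f}")
```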

Remark 1. The above setting defines Yt as a random unitary profit and D as a fixed quantity, whereas Grenadier [9] and Grasselli et al. [7] label Yt and D as the demand and the inverse demand curve respectively. This model choice has no incidence in a complete market and in Sections 2.2 and 2.3 hereafter. The choice will be discussed in Remark 3 regarding the introduction of the regulator in Section 2.4.

2.2 The follower’s problem

In this setting, since DQ(t) takes known values, the future stream of cash-flows can be evaluated under the risk-neutral measure under which

dYt = Yt((ν − ηλ)dt + ηdWt^Q). (4)

Assume that one of the two agents, say agent one, desires to invest at time t when Yt = y. If Q(t) = 1, then the available market for agent one is D2. The risk-neutral expectation of the project's discounted cash-flows is then given by

V^F(t, y) := E^Q[ ∫_t^∞ e^{−r(s−t)} D2 Ys ds ] = D2 y / (ηλ − (ν − r)) = D2 y / δ (5)

with δ := ηλ − (ν − r). We assume from now on that δ > 0. Now, to price the investment option of a follower, we recall that agent one can wait to invest as long as he wants, and that he pays a sunk cost K at the time τ at which he invests. In the financial literature, this is interpreted as a perpetual American call option with payoff (D2Yτ/δ − K)^+. The value function of this option is given by

F(t, y) := sup_{τ ∈ Tt} E^Q[ e^{−r(τ−t)} (D2 Yτ/δ − K)^+ 1{τ<+∞} | Ft ] (6)

where Tt denotes the collection of all F-stopping times with values in [t, ∞]. The solution to (6) is well known in the literature, see Huang and Li [10] and Grenadier [8]. A formal recent proof can be found in Grasselli et al. [7].

Proposition 1 (Prop. 1, [7]). The solution to (6) is given by

F(y) = (K/(β−1)) (y/YF)^β        if y ≤ YF,
F(y) = D2 y/δ − K                if y > YF,        (7)

with a threshold YF given by

YF := δKβ / (D2(β − 1)) (8)

and

β := 1/2 − (r − δ)/η² + [ (1/2 − (r − δ)/η²)² + 2r/η² ]^{1/2} > 1. (9)

The behavior of the follower is thus quite explicit. He will defer investment until Y reaches the level YF, at which the value of the installed project is D2YF/δ = Kβ/(β − 1) > K; this threshold depends on the profitability of the investment opportunity, the latter being conditioned by δ > 0. We thus introduce

τF := τ(YF ) = inf{t ≥ 0 : Yt ≥ YF } . (10)

2.3 The leader's problem

Assume now that instead of having Q(t) = 1 we have Q(t) = 0. Agent one, investing at time t, will receive a stream of cash-flows associated to the level D1 for some time, but he expects agent two to enter the market when the threshold YF is triggered. After the moment τF, both agents share the market and agent one receives cash-flows determined by the level D2. The project value is thus

V^L(t, y) := E^Q_{t,y}[ ∫_t^∞ e^{−r(s−t)} (D1 1{s<τF} + D2 1{s≥τF}) Ys ds ]
          = D1 y/δ − ((D1 − D2) YF/δ) (y/YF)^β

where the detailed computation can be found in Grasselli et al. [7]. This allows us to characterize the leader's value function L(t, y), i.e., the option to invest at time t for a demand y, as well as the value of the project S(t, y) in the situation of simultaneous investment.

Proposition 2 (Prop. 2, [7]). The value function of a leader is given by

L(y) = D1 y/δ − ((D1 − D2)/D2) (Kβ/(β−1)) (y/YF)^β − K        if y < YF,
L(y) = D2 y/δ − K                                              if y ≥ YF.        (11)

If both agents invest simultaneously, we have

S(y) := D2 y/δ − K. (12)

Remark 2. Notice that no exercise time is involved, as we consider the interest of exercising immediately, Y being non-negative. Notice also that L, F and S do not depend on t, since the problem is stationary. The payoff of the investment opportunity is then fully characterized for any situation in the case of no regulatory intervention.
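The closed forms (7)–(12) are straightforward to implement. Below is a minimal Python sketch using the illustrative parameter values of Figure 1; the helper names are our own and the snippet is only meant to make the payoff functions concrete.

```python
import math

# Illustrative parameters (K, nu, eta, mu, sigma, r, D1, D2), cf. Figure 1.
K, nu, eta, mu, sigma, r, D1, D2 = 10.0, 0.01, 0.2, 0.04, 0.3, 0.03, 1.0, 0.35

lam = (mu - r) / sigma
delta = eta * lam - (nu - r)                       # assumed > 0
half = 0.5 - (r - delta) / eta**2
beta = half + math.sqrt(half**2 + 2 * r / eta**2)  # positive root, beta > 1, eq. (9)
Y_F = delta * K * beta / (D2 * (beta - 1))         # follower threshold, eq. (8)

def follower_value(y):
    """F(y): value of the option to invest as a follower, eq. (7)."""
    if y <= Y_F:
        return K / (beta - 1) * (y / Y_F) ** beta
    return D2 * y / delta - K

def leader_value(y):
    """L(y): value of investing now as a leader, eq. (11)."""
    if y < Y_F:
        return (D1 * y / delta
                - (D1 - D2) / D2 * K * beta / (beta - 1) * (y / Y_F) ** beta
                - K)
    return D2 * y / delta - K

def sharing_value(y):
    """S(y): value of a simultaneous investment, eq. (12)."""
    return D2 * y / delta - K

# Sanity check: L, F and S coincide at the follower threshold Y_F.
print(Y_F, follower_value(Y_F), leader_value(Y_F), sharing_value(Y_F))
```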

2.4 The regulator

Let us define τ1 and τ2 the times at which agents one and two respectively express their desire to invest. In full generality, τi for i = 1, 2 can depend on a probability of acting in a game, see the next subsection. We assume that agents cannot predict the decision of the regulator, so that τ1, τ2 are F-adapted stopping times. The regulator only intervenes at such times. If at time τi for i = 1, 2 we have Q(τi−) = 0 but Q(τi) = 1, then his decision affects only agent i, who expresses his desire to be a leader. If however Q(τi) = 2, then τ1 = τ2 and the regulator shall decide if one or the other agent is accepted, none is, or both are. Finally, if Q(τi−) = 1, then the regulator decides upon the follower's fate. The regulator's decision thus depends on F.

We introduce a probability space (Λ, P(Λ), A) where Λ = {α0, α1, α2, αS}. We then introduce the product space (Ω × Λ, F × P(Λ), P+) and the augmented filtration F+ := (Ft+)t≥0 with Ft+ := σ{Ft, P(Λ)}. The regulator is modeled by an F+-adapted process.

Definition 2.1. Fix t and Yt = y. For i = 1, 2, if j = 3 − i is the index of the opponent and τj his time of investment, then agent i desiring to invest at time t receives

Ri(t, y) :=  0                                                    if α = α0,
             L(y) 1{t≤τj} + F(y) 1{t>τj}                          if α = αi,
             F(y) 1{t=τj}                                         if α = αj,
             L(y) 1{t<τj} + S(y) 1{t=τj} + F(y) 1{t>τj}           if α = αS.        (13)

Let us discuss briefly this representation. According to (13), agent i is accepted if alternative αi or αS is picked, and denied if α0 or αj is picked. It is therefore implicit in the model that time does not affect the regulator's decision upon acceptability. However, Q(t) affects the position of leader or follower. Probability P+ can thus be given by P × A, and probability A by a quartet {q0, q1, q2, qS}. However, as we will see shortly, the alternative α0 is irrelevant in what follows, due to the way regulatory intervention is modelled. We assume q0 < 1. Since in the unregulated model we assumed that agents are symmetrical, the general study of the regulator's parameters given by A will be conducted, without loss of generality, with q1 ≥ q2.

An additional major change comes in the evaluation of payoffs. Agents' information is reduced to F. Therefore the final settlement is not evaluable as in the complete market setting, and we shall introduce a pricing criterion. We follow Smets [12], Grenadier [8] and many others by making the usual assumption that agents are risk-neutral, i.e., they evaluate the payoffs by taking the expectation of the previously computed values under A. The incomplete market is also commonly handled via utility maximization, see Bensoussan et al. [3] and Grasselli et al. [7]. We postpone this approach to Section 5.

Remark 3. Expectations in (13) are taken under the minimal entropy martingale measure Q × A for this problem, meaning that uncertainty in the model follows a semi-complete market hypothesis: if we reduce uncertainty to the market information F, then the market is complete. See Becherer [4] for details. Recalling Remark 1, we can thus highlight that the payoffs L(y), F(y) and S(y) can be perfectly evaluated. Therefore the definitions of Y and D only matter for the interpretation of the model, and the present choice of probability is not affected by such interpretation.

Once the outcome in Λ is settled, the payoff then depends on Q(t), i.e., on τj. We now focus on the strategic interactions that determine Q(t).
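To make Definition 2.1 concrete, the regulator's randomization can be sketched as follows. This is only an illustration of (13): the function name, the encoding of the alternatives and the use of Python's random module are our own choices, and the payoffs L, F, S are assumed to have already been evaluated at the current level y.

```python
import random

def regulator_payoff(i, t, tau_j, L, F, S, q, rng=random):
    """Payoff R_i(t, y) of Definition 2.1 for agent i requesting entry at time t.

    tau_j   : investment time of the opponent j = 3 - i
    L, F, S : leader, follower and sharing payoffs evaluated at the current level y
    q       : dict with keys 'q0', 'q1', 'q2', 'qS' summing to one
    """
    alpha = rng.choices(["a0", "a1", "a2", "aS"],
                        weights=[q["q0"], q["q1"], q["q2"], q["qS"]])[0]
    if alpha == "a0":                      # nobody is accepted
        return 0.0
    if alpha == f"a{i}":                   # agent i alone is accepted
        return L if t <= tau_j else F
    if alpha == f"a{3 - i}":               # only the opponent is accepted
        return F if t == tau_j else 0.0
    # alpha == "aS": both agents are accepted
    if t < tau_j:
        return L
    if t == tau_j:
        return S
    return F
```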

2.5 Timing strategies

It has been observed since Fudenberg and Tirole [6] that real time t ≥ 0 is not sufficient to describe the strategic possibilities of opponents in a coordination game. If Q(t) = 0 and agents coordinate by deciding to invest with probabilities pi ∈ (0, 1) for i = 1, 2, then a one-round game at time t implies a probability (1 − p1)(1 − p2) of exiting without at least one investor. However, another coordination situation appears for the instant just after, and the game shall be repeated with the same parameters. The problem has been settled by Fudenberg and Tirole [6] in the deterministic case, and recently by Thijssen et al. [15] for the stochastic setting. We extend it to the model with a regulator in the following manner.

It consists in extending time t ∈ R+ to (t, k, l) ∈ R+ × N* × N*. The filtration F is augmented via F(t,i,j) = F(t,k,l) ⊆ F(t',i,j) for any t < t' and any (i, j) ≠ (k, l), and the state process is extended to Y(t,k,l) := Yt. Therefore, when both agents desire to invest at the same time t, we extend the time line by freezing real time t and indefinitely repeating a game on natural time k. Once the issue of the game is settled, the regulator intervenes on natural time l = 1. If no participant is accepted, i.e., if α0 is picked, the game is replayed for l = 2, and so on.

Some comments on this model are in order. At first glance, a natural assumption would be to make the regulator intervene once and for all, for each agent or for both at the same time. If the regulator refuses the entry of a competitor, one could assume for example that the game is over, or that a delay is imposed before any other action of the rejected agent. The time extension above and the following Definition 2.2 do not obey this rule. However, it can be related to a unique intervention of the regulator by the forthcoming Corollary 1 and Proposition 4, see Remark 4 at the end of Section 3.1. The motivation for the present model lies in presentation details. In Definition 2.1, it allows the generic expression (13) and avoids direct conditioning of Ri(t, y) on the position of agent i. Expression (13) can thus be directly handled and modified to represent a more realistic situation than one implying the systematic entry of at least one player, as will be shown in the next section. The present model also allows for the relevant specific cases of Section 4.1 and Section 4.2. By using the framework of Fudenberg and Tirole [6], the present model also shows its limitations as an idealized situation.

Definition 2.2. A strategy for agent i ∈ {1, 2} is defined as a pair of F-adapted processes (G^i(t,k,l), p^i(t,k,l)) taking values in [0, 1]² such that

(i) the process G^i(t,k,l) is of the type G^i(t,k,l)(Yt) = G^i_t(Yt) = 1{t≥τ(ŷ)} with τ(ŷ) := inf{t ≥ 0 : Yt ≥ ŷ};

i (ii) The process pi(t, k, l) is of the type pi(t, k, l) = pi(t) = p (Yt). The reduced set of strategies is motivated by several facts. Since the process Y is Markov, we can focus without loss of generality on Markov sub-game perfect equilibrium strategies. The i process G(t,k) is a non-decreasing c`adl`agprocess, and refers to the cumulative probability of agent i exercising before t. Its use is kept when agent i does not exercise immediately the option to

6 invest, and when exercise depends on a specific stopping time of the form τ(Yˆ ), such as the i follower strategy. The process p(t,k,l) denotes the probability of exercising in a coordinate game at round k, after l − 1 denials of the regulator, when α = α0. It should be stationary and not depend on the previous rounds of the game since no additional information is given. For both processes, the information is given by F and thus reduces to Yt at time t. Additional details can be found in Thijssen et al. [15].

3 Optimal behavior and Nash equilibria

3.1 Conditions for a coordination game

A first statement about payoffs can be immediately provided. A formal proof can be found in Grasselli et al. [7], following arguments initially developed in Grenadier [8, 9].

Proposition 3 (Prop. 1, [9]). There exists a unique point YL ∈ (0,YF ) such that

S(y) < L(y) < F(y)   for y < YL,
S(y) < L(y) = F(y)   for y = YL,
S(y) < F(y) < L(y)   for YL < y < YF,
S(y) = F(y) = L(y)   for y ≥ YF.        (14)

Fix t and Yt = y. In the deregulated situation, three different cases are thus possible, depending on the three intervals delimited by 0 < YL < YF < +∞. It appears that in our framework, this discrimination also applies. Consider the following situation. Assume Q(t) = 0 and t = τ1 < τ2: agent one wants to start investing in the project as a leader and agent two allows it, i.e., (p1(t), p2(t)) = (1, 0). By looking at (13), agent one receives L(y) with probability q1 + qS and 0 with probability q2 + q0. However, as noticed for the coordination game, if agent one is denied investment at (t, 1, 1), he can try at (t, 1, 2) and so on until he obtains L(y). Consequently, if q0 < 1, agents can act as if α0 were never picked, see Remark 4 below. The setting is limited by the continuous time approach, and Proposition 3 applies as well as in the standard case. The situation is identical if τ2 < τ1.

Corollary 1. Let t ≥ 0 and y > 0. Then for i = 1, 2,

(d) if y < YL, agent i defers action until τ(YL);

(e) if y > YF , agent i exercises immediately, i.e., τi = t.

Proof. (d) According to (14), if τ1 ≠ τ2, the expected payoffs given by (13) verify (qS + qi)L(y) < (qS + qi)F(y), and there is no incentive to act for agent i, i = 1, 2. (e) Since S(y) = F(y) = L(y) and τF = t, both agents act with probability (p1(t), p2(t)) = (1, 1). Since q0 < 1, they submit their request as many times as needed and receive S(y) with probability (1 − q0) Σ_{l∈N} q0^l = 1.

Corollary 1 does not describe which action is undertaken at τ(YL) or if y = YF. We are thus left to study the case where YL ≤ y ≤ YF. This is done by considering a coordination game. Following the reasoning of the above proof, definition (13) allows us to give the expected payoffs of the game. Let us define

(S1, S2) := ( (1/(1 − q0)) (q1 L + q2 F + qS S), (1/(1 − q0)) (q2 L + q1 F + qS S) ). (15)

Notice that the Si are bounded by max{L, F, S} since q1 + q2 + qS = 1 − q0.
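A short sketch of (15); the function name is ours and q0 < 1 is assumed.

```python
def regulated_sharing_values(L, F, S, q0, q1, q2, qS):
    """Expected payoffs (S1, S2) of eq. (15) when both agents request entry simultaneously."""
    S1 = (q1 * L + q2 * F + qS * S) / (1.0 - q0)   # requires q0 < 1
    S2 = (q2 * L + q1 * F + qS * S) / (1.0 - q0)
    return S1, S2
```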

                      Agent two: Exercise      Agent two: Defer
Agent one: Exercise   (S1(y), S2(y))           (L(y), F(y))
Agent one: Defer      (F(y), L(y))             Repeat

Table 1: Coordination game at (t, k, l).

Proposition 4. For any q0 < 1, agents play the coordination game given by Table 1 if YL < y < YF. Consequently, we can assume q0 = 0 without loss of generality.

Proof. If α = α0 after the settlement at round k of the game, time goes from (t, k, l) to (t, 1, l + 1) and the game is repeated. Therefore, according to (13), the game takes the form of Table 1 for a fixed l, and is settled with probability 1 − q0, or canceled and repeated with probability q0. If (p1, p2) is a given strategy for the game at t and (E1(p1, p2), E2(p1, p2)) the consequent expected payoffs for agents one and two in the game of Table 1, then the total expected payoff for agent i at time t is

Σ_{l∈N} Ei(p1, p2)(1 − q0) q0^l = Ei(p1, p2). (16)

The game is thus not affected by q0 < 1, which can take any value strictly lower than one. Without loss of generality, we can then take q0 = 0 and reduce the game to one intervention of the regulator, i.e., l ≤ 1. When τ1 ≠ τ2, the probability that the regulator accepts agent i's demand for investment is qi + qS, see Corollary 1. There is a complete equivalence of payoffs and strategies between the quartet {q0, q1, q2, qS} and the quartet {0, q1/(1 − q0), q2/(1 − q0), qS/(1 − q0)}. The probability q0 can thus be set to zero.

Remark 4. The extended time model of Section 2.5 implies strong limitations on the issues of the game: Corollary 1 and Proposition 4 impose the emergence of at least one player at τ1 and τ2. The confrontation thus has three outcomes and can bear the following interpretation. In the present model of competition, a regulator intervenes only if the two agents attempt to enter the investment opportunity at the same time. He then decides among three alternatives: letting one, or the other, or both agents enter the opportunity. The decision of the regulator is definitive, and it is thus possible to connect this procedure to the original one of Grenadier [8] mentioned in the introduction, see also Section 4.1.

According to Remark 4, we set q0 = 0 from now on. We now turn to the solution of Table 1.

3.2 Solution in the regular case

We reduce the analysis here to the case where 0 < q2 ≤ q1 < 1 − q2. We can now assume that

max(p1, p2) > 0 . (17)

We now introduce p0(y) := (L(y) − F (y))/(L(y) − S(y)) and two functions

Pi(y) := p0(y) / (qi p0(y) + qS) = (L(y) − F(y)) / (L(y) − Sj(y)),   i ≠ j ∈ {1, 2}. (18)

The values of Pi strongly discriminate the issue of the game. Since q1 ≥ q2, if YL ≤ y ≤ YF, then S1(y) ≥ S2(y) according to (14) on that interval, and P2(y) ≥ P1(y).

Lemma 3.1. The functions P2 and P1 are increasing on [YL,YF ].

Proof. By taking d1(y) := L(y) − F(y) and d2(y) := S(y) − F(y), we get

Pi(y) = (1/qi) d1(y) / ( d1(y) + γi (d1(y) − d2(y)) )

and

Pi'(y) = (1/qi) γi ( d1(y) d2'(y) − d2(y) d1'(y) ) / ( d1(y) + γi (d1(y) − d2(y)) )²

where γi := qS/qi, i ∈ {1, 2}. We are thus interested in the sign of the quantity g(y) := d1(y) d2'(y) − d2(y) d1'(y). Using the explicit forms (7), (11) and (12), a direct computation gives

g(y) = (K(D1 − D2)/δ) ( 1 + (β − 1)(y/YF)^β − β (y/YF)^{β−1} ).

The function x ↦ 1 + (β − 1)x^β − βx^{β−1} is non-increasing on (0, 1] (its derivative is β(β − 1)x^{β−2}(x − 1) ≤ 0) and vanishes at x = 1; it is therefore non-negative on (0, 1]. Hence g is non-negative on (0, YF], and P1 and P2 are increasing on [YL, YF].

We will omit for now the symmetric case and assume P1(y) < P2(y) for YL < y < YF. Recall now that qi > 0 for i = 1, 2. Then Pi(YF) = 1/(qi + qS) > 1 for i = 1, 2. Accordingly, and by Lemma 3.1, there exist YL < Y1 < Y2 < YF such that

F(Y1) = q1 L(Y1) + q2 F(Y1) + qS S(Y1) = S1(Y1),
F(Y2) = q2 L(Y2) + q1 F(Y2) + qS S(Y2) = S2(Y2).        (19)
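The thresholds Y1 and Y2 of (19), together with YL and YF, can be computed numerically. The sketch below is only a rough illustration (helper names, the use of scipy's brentq and the bracketing tolerances are our choices); run with the parameter values reported in Figure 1 below, it should approximately reproduce the thresholds quoted in that caption.

```python
import math
from scipy.optimize import brentq

# Parameters as reported in Figure 1.
K, nu, eta, mu, sigma, r, D1, D2 = 10.0, 0.01, 0.2, 0.04, 0.3, 0.03, 1.0, 0.35
q1, q2, qS = 0.5, 0.2, 0.3            # regulator's probabilities (q0 = 0)

lam = (mu - r) / sigma
delta = eta * lam - (nu - r)
half = 0.5 - (r - delta) / eta**2
beta = half + math.sqrt(half**2 + 2 * r / eta**2)
Y_F = delta * K * beta / (D2 * (beta - 1))

def F(y):  # follower value, eq. (7)
    return K / (beta - 1) * (y / Y_F) ** beta if y <= Y_F else D2 * y / delta - K

def L(y):  # leader value, eq. (11)
    if y < Y_F:
        return D1 * y / delta - (D1 - D2) / D2 * K * beta / (beta - 1) * (y / Y_F) ** beta - K
    return D2 * y / delta - K

def S(y):  # sharing value, eq. (12)
    return D2 * y / delta - K

Y_L = brentq(lambda y: L(y) - F(y), 1e-6, Y_F - 1e-6)       # preemption point, Prop. 3

def P(i, y):  # mixed-strategy probability P_i(y), eq. (18)
    p0 = (L(y) - F(y)) / (L(y) - S(y))
    return p0 / ((q1 if i == 1 else q2) * p0 + qS)

Y_1 = brentq(lambda y: P(2, y) - 1.0, Y_L + 1e-9, Y_F - 1e-6)   # P_2(Y_1) = 1
Y_2 = brentq(lambda y: P(1, y) - 1.0, Y_L + 1e-9, Y_F - 1e-6)   # P_1(Y_2) = 1

print(f"Y_L = {Y_L:.2f}, Y_1 = {Y_1:.2f}, Y_2 = {Y_2:.2f}, Y_F = {Y_F:.2f}")
```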

Proposition 5. Assume YL < y < YF. Then the solutions of Table 1 are of three types:

(a) If YL < y < Y1, the game has three Nash equilibria given by two pure strategies, (1, 0) and (0, 1), and one mixed strategy (P1(y), P2(y)).

(b) If Y1 ≤ y < Y2, the game has one Nash equilibrium given by the strategies (1, 0).

(c) If Y2 ≤ y < YF, the game has one Nash equilibrium given by the strategies (1, 1).

Proof. For YL < y < Y1, we have P1 < P2 < 1. Fix an arbitrary constant strategy (p1, p2) ∈ [0, 1]². Considering the A-expected payoffs, agent one receives L(y) at the end of the game with probability

a1 := Σ_{k∈N*} p1 (1 − p1)^{k−1} (1 − p2)^k = p1(1 − p2) / (p1 + p2 − p1p2). (20)

Symmetrically, he receives F(y), and agent two receives L(y), with probability

a2 := p2(1 − p1) / (p1 + p2 − p1p2), (21)

and they receive (S1(y), S2(y)) with probability

aS := p1p2 / (p1 + p2 − p1p2). (22)

The expected payoff of the game for agent one is given by

E1(p1, p2) := a1 L(y) + a2 F(y) + aS S1(y)
            = (a1 + aS q1) L(y) + (a2 + aS q2) F(y) + aS qS S(y). (23)

A similar expression E2 is given for agent two. Now fix p2. Since E1 is a continuous function of both variables, the maximum of (23) in p1 depends on

∂E1/∂p1 (p1, p2) = p2 ( L(y) − F(y) + p2 (S1(y) − L(y)) ) / ( p1(1 − p2) + p2 )². (24)

One can then see that the sign of (24) is the sign of (P2 − p2). A similar discrimination for agent two involves P1. (a) If YL < y < Y1, then according to (18) and (19), Pi < 1 for i = 1, 2. Three situations emerge:

(i) If p2 > P2, the optimal p1 is 0. Then, by the analogue of (24) for agent two, E2 does not depend on p2, and the situation is stable for any pair (0, p2) with p2 in (P2, 1].

(ii) If p2 = P2, E1 is constant and p1 can take any value. If p1 < P1, then by symmetry p2 should take value 1, leading to case (i). If p1 = P1, E2 is constant and either p2 = P2, or we fall in case (i) or (iii). The only possible equilibrium is thus (P1,P2).

(iii) If p2 < P2, E1 is increasing in p1 and agent one shall play with probability p1 = 1 > P1. Therefore p2 optimizes E2 when equal to 0, and E1 then becomes independent of p1. Altogether, the situation stays unchanged if p1 ∈ (P1, 1] or if p1 = 0. Otherwise, if p1 ≤ P1, we fall back into cases (i) or (ii). The equilibria here are (p1, 0) with p1 ∈ (P1, 1], and the trivial case (0, 0). Recalling constraint (17), we get rid of the case (0, 0).

Coming back to the issue of the game when k goes to infinity in (t, k, 1), three situations emerge from the above calculation. Two of them are pure coordinated equilibria, of the type (a1, a2) = (1, 0) or (0, 1), which can be produced by pure coordinated strategies (p1, p2) = (a1, a2), settling the game in only one round. The third one is a mixed equilibrium given by (p1, p2) := (P1, P2).

(b) According to (19), S1(Y1) = F(Y1). Following Lemma 3.1, agent one prefers being a leader for y ≥ Y1 and prefers a regulator intervention to the follower's position, i.e., S1(y) ≥ F(y). Thus p1 = 1. For agent two, deferring means receiving F(y) > F(Y1), and exercising implies a regulator intervention. Since y < Y2, q1F(y) + q2L(y) + qSS(y) < F(y), and deferring is his best option: p2 = 0. This means that on (Y1, Y2), the equilibrium strategy is (p1, p2) = (1, 0).

(c) On the interval [Y2, YF), the reasoning of (b) still applies for agent one by monotonicity, and p1 = 1. The second agent can finally bear the same uncertainty if y ≥ Y2, and p2 = 1. Here, 1 ≤ P1(y) ≤ P2(y) and both agents have a greater expected payoff by letting the regulator intervene rather than being the follower. Equilibrium exists when both agents exercise.
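The closed forms (20)–(23) are easy to evaluate for any constant pair (p1, p2). A minimal sketch (names are ours; it presupposes q0 = 0 and max(p1, p2) > 0):

```python
def game_payoffs(p1, p2, L, F, S, q1, q2, qS):
    """Outcome probabilities (20)-(22) and expected payoffs (23) of the coordination game.

    p1, p2  : constant exercise probabilities of agents one and two (max(p1, p2) > 0)
    L, F, S : leader, follower and sharing payoffs at the current level y
    """
    denom = p1 + p2 - p1 * p2          # probability that the game ends in a given round
    a1 = p1 * (1 - p2) / denom         # agent one emerges alone, eq. (20)
    a2 = p2 * (1 - p1) / denom         # agent two emerges alone, eq. (21)
    aS = p1 * p2 / denom               # simultaneous request, regulator decides, eq. (22)
    S1 = q1 * L + q2 * F + qS * S      # eq. (15) with q0 = 0
    S2 = q2 * L + q1 * F + qS * S
    E1 = a1 * L + a2 * F + aS * S1     # eq. (23)
    E2 = a2 * L + a1 * F + aS * S2
    return (a1, a2, aS), (E1, E2)
```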

Two reasons force us to prefer the strategy (P1, P2) over (1, 0) and (0, 1) in case (a). First, it is the only one which extends naturally the symmetric case. Second, it is the only trembling-hand equilibrium. Considering (P1, P2) on the interval (a), and combining (20), (21) and (22) with the regulator's decision, the probabilities for agent one of emerging as leader, of emerging as follower, and of sharing the market are

(a1, a2, aS) = ( (1 − p0)/(2 − p0), (1 − p0)/(2 − p0), p0/(2 − p0) ), (25)

where, with a slight abuse of notation, (a1, a2, aS) here stands for (a1 + aS q1, a2 + aS q2, aS qS) evaluated at (P1, P2).

With these probabilities, the expected payoffs of the respective agents do not depend on (q1, q2, qS):

E1(P1, P2) = E2(P1, P2) = ((1 − p0)/(2 − p0)) (L(y) + F(y)) + (p0/(2 − p0)) S(y). (26)

In the case qS > 0, these payoffs are equal to F. As noticed in Grasselli et al. [7], we retrieve a mathematical expression of the rent equalization principle of Fudenberg and Tirole [6]: agents are indifferent between playing the game and being the follower, and the time value of leadership vanishes with preemption. In addition, the asymmetry q1 ≥ q2 does not affect the payoffs, and the final outcome of the game after the decision of the regulator has the same probability as in the deregulated situation, see Grasselli et al. [7].
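The identity behind (26), namely that the mixed-equilibrium payoff reduces to F(y), can be checked symbolically. A minimal sympy sketch under the single assumption p0 = (L − F)/(L − S):

```python
import sympy as sp

L, F, S = sp.symbols("L F S", real=True)
p0 = (L - F) / (L - S)                     # p0(y) of Section 3.2

# Expected payoff (26) at the mixed equilibrium.
E = (1 - p0) / (2 - p0) * (L + F) + p0 / (2 - p0) * S

# The difference with the follower value simplifies to zero: rent equalization.
print(sp.simplify(E - F))                  # expected output: 0
```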


Figure 1: Values of p1(y) (blue) and p2(y) (red) for (q1, q2, qS) = (0.5, 0.2, 0.3). Areas (a), (b) and (c) are separated by lines at Y1 = 0.53 and Y2 = 0.72 on [YL,YF ] = [0.37, 1.83]. Area (d) is at the left of the graph and (e) at the right of it. Parameters set at (K, ν, η, µ, σ, r, D1,D2) = (10, 0.01, 0.2, 0.04, 0.3, 0.03, 1, 0.35).

3.2.1 Endpoints and overall strategy

We have to study the junctions of areas (d) with (a), and (c) with (e). The technical issue has been settled in Thijssen et al. [15].

Lemma 3.2. Assume y = YL. Then both agents have a probability 1/2 to be leader or follower, and receive L(YL) = F (YL).

Proof. The junction of [0, YL) with [YL, Y1] is a delicate point. To the left of point YL, no agent wants to invest. We thus shall use the strategy G^i(YL) for both agents. By right-continuity of this process, both agents shall exercise with probability 1 at point YL. Each agent receives Ri(y), which takes value L(y) = F(y) with probability q1 + q2 and S(y) with probability qS:

E1(y) = E2(y) = (q1 + q2)F (y) + qSS(y) . (27)

On the right side of point YL however, Pi converges to 0 when y converges to YL, for i = 1, 2. Therefore, so does (p1, p2) toward (0, 0). We cannot reconcile G^i(YL) with (p1(YL), p2(YL)) = (0, 0) and shall compare the payoffs. A short calculation provides

lim_{y↓YL} a1(y)/a2(y) = 1   and   lim_{y↓YL} aS(y) = 0. (28)

Therefore at point YL,(a1(YL), a2(YL), aS(YL)) = (1/2, 1/2, 0). It is clear at point YL that the second option is better for both agents.

Remark 5. There is a continuity of behavior between (d) and (a), owing to the fact that the regulatory intervention does not impact the outcome of the game from the point of view of the discrimination between the two agents: simultaneous investment is improbable at point YL, and the probability aS(y) is continuous and null at this point. Consequently, the local behavior of agents around YL is similar to the one given in Grasselli et al. [7]. Altogether, and observing Figure 1, mixed strategies are right-continuous.

Let us complete the description of the strategic interaction of the two agents by summarizing the previous results in the following theorem. As we will see in the next section for singular cases, it extends Theorem 4 in [7].

Theorem 3.3. Consider the strategic economic situation presented in Section2, with

min{q1, q2, qS} > 0 and q0 = 0 . (29)

Then there exists a Markov sub-game perfect equilibrium with strategies depending on the level of profits as follows:

(i) If y < YL, both agents wait for the profit level to rise and reach YL.

(ii) At y = YL, there is no simultaneous exercise and each agent has an equal probability of emerging as a leader while the other becomes a follower and waits until the profit level reaches YF .

(iii) If YL < y < Y1, each agent chooses a mixed strategy consisting of exercising the option to invest with probability Pi(y). Their expected payoffs are equal, and the regulator intervenes in the settlement of positions.

(iv) If Y1 ≤ y < Y2, agent one exercises his option and agent two becomes a follower and waits until y reaches YF .

(v) If Y2 ≤ y < YF , both agents express their desire to invest immediately, and the regulator is called. If one agent is elected leader, the other one becomes follower and waits until y reaches YF .

(vi) If y ≥ YF, both agents act as in (v), but if a follower emerges, he invests immediately after the other agent.

Some comments are in order. First, we shall emphasize that the regulator theoretically intervenes in any situation where y ≥ YL. However, as explained at the beginning of this section, its intervention becomes mostly irrelevant in continuous time when agents do not act simultaneously. Its impact is unavoidable to settle the final payoff after the game, but its influence on agents' strategies boils down to the interval (YL, Y2]. At point Y1, there is a strong discontinuity in the optimal behavior of both agents. For y < Y1, the mixed strategy used by agent two tends toward a pure strategy with systematic investment. However, at the point itself, the second agent defers investment and becomes the follower. This follows from the fact that agent one is indifferent between being the follower and letting the regulator decide the outcome at this point. He suddenly seeks the leader's position without hesitation, creating a discontinuity in his behavior. The same happens for agent two at Y2, creating another discontinuity in the strategy of the latter.
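Theorem 3.3 translates directly into a decision rule. The following sketch is only a schematic summary of the statement (not the authors' code); the thresholds and the probability function P(i, y) are assumed to have been computed as in the sketch of Section 3.2.

```python
def equilibrium_action(y, i, Y_L, Y_1, Y_2, Y_F, P):
    """Equilibrium behaviour of agent i at profit level y, following Theorem 3.3.

    Y_L, Y_1, Y_2, Y_F : thresholds of Section 3
    P                  : callable P(i, y) giving the mixed-strategy probability of eq. (18)
    """
    if y < Y_L:
        return "(i)   defer until Y reaches Y_L"
    if y == Y_L:
        return "(ii)  exercise; leader/follower settled with equal probability, no sharing"
    if y < Y_1:
        return f"(iii) mixed strategy: exercise with probability {P(i, y):.3f}"
    if y < Y_2:
        return ("(iv)  agent one exercises, agent two defers until Y_F" if i == 1
                else "(iv)  defer until Y_F (agent one takes the lead)")
    if y < Y_F:
        return "(v)   exercise; regulator settles the outcome, loser waits until Y_F"
    return "(vi)  exercise immediately (the follower, if any, invests right after)"
```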

4 Singular cases

The proposed framework encompasses in a natural way the two main competition situations encountered in the literature, namely the Cournot competition and the Stackelberg competition. By introducing minor changes in the regulatory intervention, it is also possible to represent the situation of Stackelberg leadership advantage. Finally, our setting allows to study a new and weaker type of advantage we call weak Stackelberg leadership.

4.1 The Cournot and Stackelberg competitions

The Cournot duopoly refers to a situation where both players adjust quantities simultaneously. This is the framework of Grasselli et al. [7], involving the payoff S(y) if agents invest at the same time. Recalling Definition 2.1, this framework corresponds to

(q1, q2, qS) = (0, 0, 1) . (30)

We notice that agents then become symmetrical. This appears from (18), which implies P1(y) = P2(y) = p0(y). Additionally, p0(y) ∈ (0, 1) if y ∈ (YL, YF), and it is increasing on this interval according to Lemma 3.1.

The Stackelberg competition refers to a situation where competitors adjust the level DQ(t) sequentially. In a preemptive game, this implies that in the case of simultaneous investment, one agent is elected leader and the other one becomes follower. This setting requires an exogenous randomization procedure, which by symmetry is given by the flip of a fair coin. The procedure is described as such in Grenadier [8], Weeds [16], Tsekrekos [13] or Paxson and Pinto [11]. Recalling Definition 2.1, the setting is retrieved by fixing

(q1, q2, qS) = (1/2, 1/2, 0) . (31)

The implication in our context is the following: by symmetry, P1(y) = P2(y) = 2. Therefore, the interval (YL, Y2) reduces to nothing, i.e., YL = Y1 = Y2, and the strategic behavior boils down to (i), (v) and (vi) in Theorem 3.3.

Remark 6. Notice that any combination q2 = 1 − q1 ∈ (0, 1) provides the same result as in (31), since Pi(y) = 1/qi > 1 for i = 1, 2. The strategic behavior is unchanged with an unfair coin flip. This is foreseeable, as qiL(y) + (1 − qi)F(y) > F(y) on (YL, YF). This will also hold for a convex combination, i.e., for risk-averse agents.

Remark 7. Assume now symmetry in the initial framework, i.e., q := q1 = q2 ∈ (0, 1/2). We have YS := Y1 = Y2, and the region (b) reduces to nothing. Recalling (19), we directly obtain

lim_{q↑1/2} YS = YL   and   lim_{q↓0} YS = YF. (32)

Therefore, the regulatory intervention encompasses in a continuous manner the two usual types of games described above.

4.2 Stackelberg leadership

This economic situation represents an asymmetrical competition where the roles of leader and follower are predetermined exogenously. It can be justified, as in Bensoussan et al. [3], by a regulatory or competitive advantage. However, Definition 2.1 does not allow to retrieve this case directly. Instead, we can extend it by conditioning the probability quartet {q0, q1, q2, qS}. In this situation, the probability P+ depends on F-adapted events in the following manner:

P+(α0 | t < τ1) = 1   and   P+(α1 | t ≥ τ1) = 1. (33)

This means that no investment is allowed until agent one decides to invest, which leads automatically to the leader position. The strategic interaction is then quite different from the endogenous attribution of roles. See Grasselli et al. [7] for a comparison of this situation with the Cournot game.

4.3 The weak Stackelberg advantage

We propose here a different type of competitive situation. Consider the investment timing problem in a duopoly where agent one has, as in the Stackelberg leadership, a significant advantage due to exogenous qualities. We assume that for particular reasons, the regulator only allows one agent at a time to invest in the shared opportunity. The advantage of agent one thus translates into a preference of the regulator, but only in the case of a simultaneous move of the two agents. This means that agent two can still, in theory, preempt agent one. This situation can be covered by simply setting

(q1, q2, qS) = (1, 0, 0). (34)

In this setting, the results of Section 3 apply without loss of generality.

Proposition 6. Assume (34). Then the optimal strategies for agents one and two are given by (G^1(YL), G^2(YF)).

Proof. It suffices to consider the game of Table 1 on the interval [YL, YF). The Nash equilibrium is given by (p1(y), p2(y)) = (1, 0) for any y ∈ [YL, YF]. Corollary 1 applies for the other values.

Remark 8. As in Remark 7, we can observe that this situation corresponds to strategy (iv) of Theorem 3.3 expanded to the interval [YL, YF). This situation can be obtained continuously with q1 converging to one.

Following Grasselli et al. [7], we compare the advantage given by such an asymmetry to the usual Cournot game. In the latter reference, the positive difference between the Stackelberg leadership and the Cournot game provides a financial advantage which is called a priority option. By similarity, we call the marginal advantage of the weak Stackelberg advantage over the Cournot game a preference option.

Corollary 2. Let us assume q2 = q0 = 0. Let us denote by E1^{(q1,qS)}(y) the expected payoff of agent one following from Theorem 3.3 when (q1, qS) is given, for a level y ∈ R+. Then the preference option value is given by π0(y) := E1^{(1,0)}(y) − E1^{(0,1)}(y) for all y ∈ R+. Its value is equal to

π0(y) = (L(y) − F(y))^+   for all y ∈ R+. (35)

Proof. The proof is straightforward and follows from (26), where E1^{(0,1)}(y) = F(y) for y ∈ [YL, YF]. Following Proposition 6, agent one shall invest at τ(YL) if y ≤ YL initially. In this case, his payoff is L(YL) = F(YL), which provides (35).

This option gives an advantage to its owner, agent one, without penalizing the other agent, who can always expect the payoff F. A comparison with the priority option is given in Figure 2.

A very similar situation to the weak Stackelberg advantage can be observed if we take q2 = 0 but qS > 0. The economic situation, however, loses a part of its meaning: it would convey the situation where agent two is accepted as an investor only if he shares the market with agent one or becomes a follower. In this setting we can also apply the results of Section 3. We observe from definition (19) that in this case, Y2 = YF and Y1 verifies F(Y1) = q1L(Y1) + (1 − q1)S(Y1). The consequence is straightforward: the interval [Y1, Y2) expands to [Y1, YF). The fact that Y1 > YL for q1 < 1 also has a specific implication. In that case, the equilibrium (1, 0) is more relevant than (P1(y), P2(y)) on [YL, Y1). Indeed, if agent one invests systematically on that interval, then agent two has no chance of being the leader since q2 = 0. Thus p2 = 0 and the payoffs become

(E1(y),E2(y)) = (L(y),F (y)) for y ∈ [YL,Y1) . (36)


Figure 2: Priority option value (red) and preference option value (blue) as functions of y. Vertical lines at YL = 0.37, Y1 = 0.64, Y2 = 1.37 and YF = 1.83. Option values are equal on [Y1, Y2]. Same parameters as in Figure 1.

In opposition to the trembling-hand equilibrium, this pure strategy can figuratively be called a steady-hand equilibrium. Comparing (26) to (36), agent one shall use the pure strategy, whereas agent two is indifferent between the mixed and the pure strategy. Setting q2 = 0 thus provides a preference option to agent one.

5 Aversion for confrontation

The introduction of the regulator rendering the market incomplete, utility indifference pricing can be invoked to evaluate payoffs. Although its action is purely random in the presented model, the regulator represents a third actor in the game. We study here the impact of risk aversion on the coordination game only, which can be seen as an original dimension we call aversion for confrontation.

5.1 Risk aversion in complete market

We keep the market model of Section 2. However, we now endow each agent with the same CARA utility function

U(x) = − exp(−γx) (37)

where γ > 0 is the risk-aversion of agents. The function U is strictly concave and is chosen to avoid dependence on initial wealth. Recalling Remark 3, the market without regulator is complete and free of arbitrage, so that both agents still price the leader, the follower and the sharing positions with the unique risk-neutral probability Q. Thus, agents compare the utilities of the different investment option market prices, denoted l(y) := U(L(y)), f(y) := U(F(y)) and s(y) := U(S(y)). The si, i = 1, 2, are defined in the same way. When needed, variables will be indexed with γ to make the dependence explicit. The definition of regulation is updated as follows.

Definition 5.1. Fix t and Yt = y. For i = 1, 2, if j = 3 − i is the index of the opponent and τj his time of investment, then agent i receives utility

ri(t, y) :=  0                                                    if α = α0,
             l(y) 1{t≤τj} + f(y) 1{t>τj}                          if α = αi,
             f(y) 1{t=τj}                                         if α = αj,
             l(y) 1{t<τj} + s(y) 1{t=τj} + f(y) 1{t>τj}           if α = αS.        (38)

From Definition 5.1, it appears that by monotonicity of U, the game is strictly the same as in Section 2, apart from the payoffs. Both agents defer for y < YL and both act immediately for y ≥ YF. The incomplete market setting is now handled with a utility maximization criterion. Each agent now uses a mixed strategy pi^γ, in order to maximize an expected utility slightly different from (23) on (YL, YF):

E1^γ(y) = (a1^γ + aS^γ q1) l(y) + (a2^γ + aS^γ q2) f(y) + aS^γ qS s(y). (39)

The results of Section 3 thus hold after changing (L, F, S) into (l, f, s). It follows that in that case

Pi,γ(y) = (l(y) − f(y)) / ( qi (l(y) − f(y)) + qS (l(y) − s(y)) )   with i ∈ {1, 2} (40)

play the central role, and the optimal strategic interaction of agents can be characterized by the value of y or, equivalently, on the interval [YL, YF], by the values of P1,γ(y) and P2,γ(y). The question we address in this section is how risk-aversion influences the different strategic interactions.
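A minimal sketch of (37) and (40); the function names are ours, the payoff values in the usage loop are purely illustrative, and L(y), F(y), S(y) are assumed to come from the complete-market formulas of Section 2.

```python
import math

def cara(x, gamma):
    """CARA utility U(x) = -exp(-gamma * x), eq. (37)."""
    return -math.exp(-gamma * x)

def mixed_probability_risk_averse(i, L, F, S, q1, q2, qS, gamma):
    """P_{i,gamma}(y) of eq. (40), computed from the utilities l, f, s."""
    l, f, s = cara(L, gamma), cara(F, gamma), cara(S, gamma)
    qi = q1 if i == 1 else q2
    return (l - f) / (qi * (l - f) + qS * (l - s))

# For fixed payoffs the probability decreases with risk aversion (cf. Propositions 7-8).
for g in (0.01, 0.1, 1.0):
    print(g, mixed_probability_risk_averse(1, L=12.0, F=10.0, S=8.0,
                                            q1=0.5, q2=0.2, qS=0.3, gamma=g))
```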

5.2 Influence of γ on strategies

First, aversion for confrontation is expressed through a probability of intervention that diminishes with γ.

Proposition 7. Assume qS = 1 and y ∈ (YL, YS). Denote pγ(y) := P1,γ(y) = P2,γ(y). Then pγ ≤ p0 and furthermore,

lim_{γ↓0} pγ = p0   and   lim_{γ↑∞} pγ = 0. (41)

Proof. According to (40) with qS = 1, pγ(y) ∈ (0, 1) on (YL, YS). From (40) we get

pγ(y) = (l(y) − f(y)) / (l(y) − s(y)) = (e^{−γF(y)} − e^{−γL(y)}) / (e^{−γS(y)} − e^{−γL(y)}) = (e^{γ(L(y)−F(y))} − 1) / (e^{γ(L(y)−S(y))} − 1). (42)

Since u(x) := −1 − U(−x) = e^{γx} − 1 is a positive, strictly convex function on R+ with u(0) = 0, we have that

pγ(y) = u(L(y) − F(y)) / u(L(y) − S(y)) < (L(y) − F(y)) / (L(y) − S(y)) =: p0(y). (43)

For γ going to zero, we apply l'Hôpital's rule to obtain that lim_{γ↓0} pγ = p0. The other limit follows by writing the utility function explicitly:

lim_{γ↑∞} pγ(y) = lim_{γ↑∞} e^{−γ(F(y)−S(y))} = 0. (44)

Remark 9. Notice that there is no uniform convergence, since p0 is continuous and p0(YF) = 1. The above convergence holds for all y ∈ [YL, YF). It is clear from (43) that pγ is monotone in γ on R+*. Then, according to (44), it is convex decreasing in γ.

For the general case qS < 1, the above result still holds, but on a reduced interval. Nevertheless, this interval depends on γ.

Proposition 8. Assume min{q1, q2, qS} > 0. Let Y1,γ ∈ [YL,YF ] be such that P2,γ(Y1,γ) = 1, and Y2,γ ∈ [YL,YF ] such that P1,γ(Y2,γ) = 1. Then for i = 1, 2, Yi,γ is increasing in γ and

lim_{γ↑∞} Yi,γ = YF. (45)

Proof. Consider i ∈ {1, 2}. First notice that, by Lemma 3.1, Yi,γ is uniquely defined on the designated interval. Following Remark 9, Pi,γ is a concave non-decreasing function of pγ. It is then a decreasing function of γ. Since Pi,γ(y) is decreasing in γ, Yi,γ is an increasing function of γ: the region (YL, Y1,γ) spreads to the right with γ. Adapting (19) to the present values, Yi,γ shall verify

qi (1 − e^{−γ(L(Yi,γ)−F(Yi,γ))}) + qS (1 − e^{−γ(S(Yi,γ)−F(Yi,γ))}) = 0,

and when γ goes to ∞, the second term can only stay bounded if F(Yi,γ) − S(Yi,γ) goes to 0, so that Yi,γ tends toward YF.

With risk aversion, the region (a) thus gains importance, and the competitive advantage of agent one decreases with γ. Figure 3 summarizes the evolution.
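The thresholds Y1,γ and Y2,γ can be traced numerically in the same way as Y1 and Y2. The sketch below reuses the illustrative parameters of the previous sketches and is not the code behind Figure 3; it should nevertheless display the same qualitative behaviour, both thresholds increasing toward YF with γ.

```python
import math
from scipy.optimize import brentq

K, nu, eta, mu, sigma, r, D1, D2 = 10.0, 0.01, 0.2, 0.04, 0.3, 0.03, 1.0, 0.35
q1, q2, qS = 0.5, 0.2, 0.3

lam = (mu - r) / sigma
delta = eta * lam - (nu - r)
half = 0.5 - (r - delta) / eta**2
beta = half + math.sqrt(half**2 + 2 * r / eta**2)
Y_F = delta * K * beta / (D2 * (beta - 1))

def F(y):
    return K / (beta - 1) * (y / Y_F) ** beta if y <= Y_F else D2 * y / delta - K

def L(y):
    if y < Y_F:
        return D1 * y / delta - (D1 - D2) / D2 * K * beta / (beta - 1) * (y / Y_F) ** beta - K
    return D2 * y / delta - K

def S(y):
    return D2 * y / delta - K

Y_L = brentq(lambda y: L(y) - F(y), 1e-6, Y_F - 1e-6)

def P_gamma(i, y, gamma):
    """Risk-averse mixed-strategy probability P_{i,gamma}(y), eq. (40)."""
    u = lambda x: -math.exp(-gamma * x)
    l, f, s = u(L(y)), u(F(y)), u(S(y))
    qi = q1 if i == 1 else q2
    return (l - f) / (qi * (l - f) + qS * (l - s))

# Y_{1,gamma}: P_{2,gamma} = 1 ; Y_{2,gamma}: P_{1,gamma} = 1 ; both increase toward Y_F.
for gamma in (0.5, 1.0, 2.0, 4.0):
    Y1g = brentq(lambda y: P_gamma(2, y, gamma) - 1.0, Y_L + 1e-9, Y_F - 1e-6)
    Y2g = brentq(lambda y: P_gamma(1, y, gamma) - 1.0, Y_L + 1e-9, Y_F - 1e-6)
    print(f"gamma = {gamma}: Y_1,gamma = {Y1g:.2f}, Y_2,gamma = {Y2g:.2f}")
```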


Figure 3: Values of Y1 (blue) and Y2 (red) as functions of the risk aversion γ. Y-axis limited to [YL, YF] = [0.37, 1.83]. Limit values of (Y1, Y2) for γ going to 0 correspond to (0.53, 0.72) of Figure 1. Same parameters as in the previous figures.

The interval (YL, Y1,γ), describing case (a), is the most relevant in terms of strategies, as the only one to involve mixed strategies. Propositions 7 and 8 imply the following result on this interval.

Corollary 3. Assume y ∈ [YL, Y1,γ). Let (a1^γ, a2^γ, aS^γ) be defined by (20), (21) and (22) where (p1, p2) is replaced by (P1,γ, P2,γ). Then

lim_{γ↑∞} aS^γ = 0   and   lim_{γ↑∞} a1^γ/a2^γ = 1. (46)

Proof. Denote pi^γ := Pi,γ and pγ := (l(y) − f(y))/(l(y) − s(y)), the analogue of p0 in the risk-averse setting. From (22), aS^γ is an increasing function of both p1^γ and p2^γ, and the first limit is a straightforward consequence of (41).

Plugging (40) into ai^γ, we obtain

ai^γ = pi^γ (1 − pj^γ) / (p1^γ + p2^γ − p1^γ p2^γ),   i ≠ j ∈ {1, 2}, (47)

and, differentiating in pγ, we obtain that ai^γ is increasing in γ. It also follows that

a1^γ / a2^γ = (p1^γ − p1^γ p2^γ) / (p2^γ − p1^γ p2^γ) = (qS − (1 − q2) pγ) / (qS − (1 − q1) pγ) ≤ 1. (48)

The second limit of (46) follows immediately from Proposition 7.

Remark 10. We can easily assert, as in Proposition 7, that aS^γ < aS for any γ > 0, where aS is the probability of simultaneous action in the game for risk-neutral agents. Equivalently,

a1/a2 = (F(y) − S1(y)) / (F(y) − S2(y)) < a1^γ/a2^γ,   for all γ > 0. (49)

The above results can be interpreted as follows. The parameter γ is an aversion for the uncertainty following from the coordination game and the regulatory intervention. As expected, the higher this parameter, the lower the probability Pi,γ of acting in the game. However, the game being indefinitely repeated until at least one agent acts, for values of Pi,γ lower than one the game is extended to a bigger interval [YL, Y1,γ) with γ. This naturally reduces the simultaneity of investment, but it also tends to an even chance of becoming the leader for both agents. A high risk-aversion γ thus has a clear consequence: agents synchronize to avoid playing the game and facing the regulator's decision. In some sense, the behavior of competitors in a Stackelberg competition is the limit, as risk-aversion grows, of their behavior in a Cournot game.

How does γ impact the outcome of the game? As above, since we are in a complete market, risk aversion makes no relevant modification to the cases where the coordination game is not played. On the interval (YL, Y1,γ), then, we need to compare the expected utilities of the options L, F and S with homogeneous quantities.

Proposition 9. For y ∈ [YL, Y1,γ), let E1^γ(y) be the expected utility of agent one in the coordination game when the strategies (P1,γ(y), P2,γ(y)) are played. Then the indifference value of the game for agent one is given by

e1,γ(y) := U^{−1}(E1^γ(y)) = −(1/γ) log( −a1^γ l(y) − a2^γ f(y) − aS^γ s1(y) ). (50)

We define e2,γ similarly. Assume now qS > 0. For y ∈ [YL,Y1,γ), we have for i = 1, 2 that

ei,γ(y) = Ei(y) = F(y)   for all γ > 0, (51)

with Ei(y) defined in (26).

Proof. Using (47), we can proceed as at the end of Section 3 to retrieve

a1^γ l(y) + a2^γ f(y) + aS^γ si(y) = f(y),   i = 1, 2, (52)

and (51) follows from the definition of ei,γ.

Risk-aversion in a complete market thus has the interesting property of preserving the rent equalization principle of Fudenberg and Tirole [6]: agents adapt their strategies so as to be indifferent between playing the game and obtaining the follower's position.
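The indifference value (50) is simply the certainty equivalent, under the CARA utility (37), of the expected utility of the game. A two-function sketch (names ours):

```python
import math

def certainty_equivalent(expected_utility, gamma):
    """U^{-1} applied to an expected CARA utility: e = -(1/gamma) * log(-E[U])."""
    return -math.log(-expected_utility) / gamma

def indifference_value(a1, a2, aS, l, f, s_i, gamma):
    """e_{i,gamma}(y) of eq. (50) for outcome probabilities (a1, a2, aS) and utilities l, f, s_i."""
    return certainty_equivalent(a1 * l + a2 * f + aS * s_i, gamma)
```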

6 Criticism and extensions

If real option game models are to be applied, the role of a regulator shall be introduced in order to suit realistic situations of competition. Indeed, regulators often intervene in important projects in energy, territorial acquisition and highly sensitive products such as drugs. Despite its simplicity and its idealization, the present model still has something to say. One can see it as an archetype model, unifying mathematically the Stackelberg and the Cournot competition frameworks and leading to simple formulas. The model we proposed is a first attempt, and could be improved on several grounds. Three main objections can be raised.

As observed in Remark 4 of Section 3, the extension of strategies in Section 2.5 in the fashion of Fudenberg and Tirole [6] has unrealistic and strongly constrained implications, due to continuous time and perfect information. This is emphasized here in the interpretation that the regulator can only intervene when agents simultaneously invest. A primary objective would be to propose a new setting for the standard game conveying a dynamical dimension to strategies.

A realistic but involved complication is the influence on the law P+ of explicit parameters on which agents have some control. The value of being preferred, introduced as a financial option in Section 4.3, would then provide a price for a competitive advantage and for side efforts to satisfy non-financial criteria. A game with the regulator as a third player also appears as a natural line of inquiry.

Finally, the introduction of risk-aversion should not be restricted to the coordination game. This issue is already tackled in Bensoussan et al. [3] and Grasselli et al. [7] for the incomplete market setting. But as a fundamentally different source of risk, the coordination game and the regulator's decision shall be evaluated with a different risk-aversion parameter, as we proposed in Section 5. An analysis of asymmetrical risk aversion, as in Appendix C of Grasselli et al. [7], shall naturally be undertaken.

References

[1] A.F. Azevedo and D.A. Paxson, Real options game models: A review, Real Options 2010, 2010.

[2] F. Black and M. Scholes, The pricing of options and corporate liabilities, The Journal of Political Economy, 81 (1973), 637–654.

[3] A. Bensoussan, J.D. Diltz and S. Hoe, Real options games in complete and incomplete markets with several decision makers, SIAM Journal on Financial Mathematics, 1(2010), 666–728.

[4] D. Becherer, Rational hedging and valuation of integrated risks under constant absolute risk aversion, Insurance: Mathematics and Economics, 33 (2003), 1–28.

[5] B. Chevalier-Roignant, C.M Flath, A. Huchzermeier and L. Trigeorgis, Strategic investment under uncertainty: a synthesis, European Journal of Operational Research, 215 (2011), 639–650.

[6] D. Fudenberg and J. Tirole, Preemption and rent equalization in the adoption of new technology. The Review of Economic Studies, 52 (1985), 383–401.

[7] M. Grasselli, V. Leclère and M. Ludkovski, Priority Option: The Value of Being a Leader, International Journal of Theoretical and Applied Finance, 16 (2013).

[8] S.R. Grenadier, The strategic exercise of options: Development cascades and overbuilding in real estate markets, The Journal of Finance, 51 (1996), 1653–1679.

[9] S.R. Grenadier, Option exercise games: the intersection of real options and game theory, Journal of Applied Corporate Finance, 13 (2000), 99–107.

[10] C.-f. Huang and L. Li, Entry and exit: perfect equilibria in continuous-time stopping games, working paper, 1991.

[11] D. Paxson and H. Pinto, Rivalry under price and quantity uncertainty, Review of Financial Economics, 14 (2005), 209–224.

[12] F. Smets, "Essays on Foreign Direct Investment", Ph.D. Thesis, Yale University, 1993.

[13] A. Tsekrekos, The effect of first-mover's advantages on the strategic exercise of real options, in "Real R&D Options" (ed. D. Paxson), Butterworth-Heinemann (2003), 185–207.

[14] J.J. Thijssen, Preemption in a real option game with a first mover advantage and player- specific uncertainty, Journal of Economic Theory, 145 (2010), 2448–2462.

[15] J.J. Thijssen, K.J. Huisman and P.M. Kort, Symmetric equilibrium strategies in game theoretic real option models, CentER DP 81, 2002.

[16] H. Weeds, Strategic delay in a real options model of R&D competition, The Review of Economic Studies, 69(2002), 729–747.
