Online Appendix to "Political Alignment and Bureaucratic Pay"

Jon H. Fiva∗, Benny Geys†, Tom-Reiel Heggedal‡, Rune Sørensen§

November 13, 2020

∗ BI Norwegian Business School, Department of Economics, Nydalsveien 37, N-0484 Oslo, Norway.
† BI Norwegian Business School, Department of Economics, Kong Christian Frederiks plass 5, N-5006 Bergen, Norway.
‡ BI Norwegian Business School, Department of Economics, Nydalsveien 37, N-0484 Oslo, Norway.
§ BI Norwegian Business School, Department of Economics, Nydalsveien 37, N-0484 Oslo, Norway.

Table of contents

Appendix A Principal-agent model
Appendix B Supplementary figures and tables
Appendix C Data sources and measurement
Appendix D RD analysis of changes in council majority

A Principal-agent model

In this section, we formally analyze how political preference alignment between a principal (politician) and an agent (CMO) affects the agent's expected pay. Preference alignment is thereby understood as similarity along relevant preference dimensions between principal and agent (see also below). We focus on two potential underlying mechanisms. First, preference alignment gives policy-motivated agents a direct stake in achieving the public output desired by the political principal (the political mission). This is equivalent to the assumptions on policy motivation in Besley and Ghatak (2005). Second, preference alignment streamlines communication and facilitates cooperation between contracting partners, and thereby improves the productivity of a match. This notion of productivity in a match is central to the literature on the ally principle (Bendor, Glazer and Hammond 2001; Huber and Shipan 2008; Dahlström and Holmgren 2019), and its micro-foundations (including improved communication, cooperation and control) have been extensively debated in the foregoing literature (Peters and Pierre 2004; Kopecky et al. 2016; Toral 2019).

Although our empirical setting is characterized by the absence of one-off bonus payments, the model developed below nonetheless builds on an incentive payment scheme. This is done to maintain comparability with previous work on performance contracts (e.g., Besley and Ghatak 2005), but it naturally implies that the model represents an imperfect analogy in our setting. To tie the model closer to the empirical application, however, one could think of the wage contract as consisting of a permanent wage plus an incentive-based, performance-related increase of this wage lasting over the entire employment spell. The net present value of this permanent wage increase would correspond to the value of the bonus, leaving the per-period incentives in the model unchanged (a numerical sketch of this equivalence follows at the end of this introduction).[1]

[1] Note also that a performance contract with an output-related bonus may equivalently be set up as a fixed-wage contract with a dismissal probability related to output. By convention, we discuss mechanisms using the former, while appreciating that the latter may fit better with our empirical setting.

The model can thus give us a sense of expected pay levels under different conditions, as well as additional clarity on how our two mechanisms push bureaucratic pay in opposite directions. In the main model, we focus on the moral hazard problem in the agency relation and on incentive pay. However, we also show that our main theoretical propositions persist in a framework without information problems.
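As a back-of-the-envelope illustration of the bonus equivalence described above, consider the following sketch (all numbers are hypothetical and chosen purely for illustration):

```python
# Back-of-the-envelope illustration of the bonus equivalence in the text:
# a permanent, performance-related wage increase over the employment spell
# has the same net present value as a one-off bonus of matching size.
# All numbers below are hypothetical.
discount_rate = 0.03   # assumed annual discount rate
tenure_years = 10      # assumed remaining employment spell (years)
annual_raise = 5_000   # assumed permanent wage increase per year

# NPV of receiving the raise in every remaining year of the spell.
npv_raise = sum(annual_raise / (1 + discount_rate) ** t
                for t in range(1, tenure_years + 1))

# A one-off bonus of this size leaves per-period incentives unchanged.
print(f"Equivalent one-off bonus: {npv_raise:,.0f}")  # ~42,651
```

Under these assumed values, a permanent raise of 5,000 per year over a ten-year spell corresponds to a one-off bonus of roughly 42,650, so the two contract forms carry the same per-period incentives.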
A.1 Matching and political alignment

Assume that a principal P and an agent B are randomly matched and choose whether to produce together. We denote the principal's and agent's party-types by $P_i$ and $B_i$, respectively, where $i$ is either $L$ (left) or $R$ (right). One can think of the policy placement of both players as being measured on the unit interval, with $L$-types placed at 0 and $R$-types placed at 1. Since preference alignment implies that principal and agent are similar along relevant preference dimensions, we say that political preferences are aligned when $A = 1 - |P_i - B_i| = 1$, while political preferences are not aligned when $A = 1 - |P_i - B_i| = 0$.

A.2 Technology and preferences

If the principal and agent in a match decide not to produce together, they receive their outside options (represented by $u$). If they do produce together, let output be given by

$$f = (a + A\lambda)e + \varepsilon,$$

where $a > 0$ is a productivity parameter and $\lambda \geq 0$ is the match-specific productivity parameter that interacts with preference alignment $A$. The agent's effort is $e$, and $\varepsilon$ is a stochastic element (noise). The agent's cost of effort is given by $c = e^2/2$.[2]

An agent's utility $U$ is increasing in the expected financial value of the contract $w$ (pay), and decreasing in the risk associated with the contract as well as in the cost $c$:

$$U = E[w] - 0.5\, r\, Var[w] - c + A\theta E[f],$$

where $r > 0$ measures the degree of risk aversion. The agent's utility may also be directly and positively affected by output $f$, depending on her potential intrinsic motivation for achieving output $f$ (reflected in the parameter $\theta \in [0, 1)$). The principal is risk neutral, with utility

$$\pi = E[f] - E[w].$$

Lastly, we assume that the principal can observe the agent's party-type, as these are mapped by party affiliation in the empirical application.

[2] Note that our results go through if we let alignment affect productivity in a match through the cost of effort. For instance, assuming that the cost of effort is given by $c = e^2/(2t)$, where the parameter $t \in (0, 1)$ represents the agent's cost-type, provides similar inferences to those reported below.

A.3 Optimal performance contract

In this section, we assume that the principal cannot observe effort and, hence, effort is not contractible (the next subsection solves the model assuming that the principal can observe effort). Restricting the analysis to linear contracts, let a contract $w$ be given by

$$w = \tau + kf,$$

where $\tau$ is a fixed transfer and $k$ is a fraction of output (the incentive part, or "power", of the contract). We analyze the optimal contract in two cases: in the first case, agents are not motivated by policies (i.e., $\theta = 0$); in the second case, agents have the same productivity in all matches (i.e., $\lambda = 0$).

Looking first at the case without policy motivation, the agent's utility of a contract is given by

$$U = E[w] - 0.5\, r\, Var[w] - c.$$

Inserting for $w$ and $f$, we get

$$U = \tau + k(a + A\lambda)e - 0.5\, r k^2 Var[\varepsilon] - c.$$

The agent maximizes $U$ with respect to effort $e$. This gives rise to the incentive compatibility constraint facing the principal:

$$k(a + A\lambda) = c'.$$
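Given the quadratic cost of effort assumed above, this constraint has an explicit solution (a worked step, using only the model's own definitions): $c(e) = e^2/2$ implies $c'(e) = e$, so the agent's optimal effort is

$$e^* = k(a + A\lambda),$$

which is increasing in both the contract's power $k$ and the match productivity $a + A\lambda$.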
This equality implies that, for a given $k$, agents in an aligned match ($A = 1$) will put in more effort than agents in a non-aligned match ($A = 0$), which has lower productivity. However, it is not optimal for the principal to give the same incentives $k$ to agents of different alignment-types. In fact, it is straightforward to show that the incentive part of the optimal contract following from the principal's maximization problem (taking the incentive compatibility and participation constraints as given) is[3]

$$k = \frac{(a + A\lambda)^2}{(a + A\lambda)^2 + r\, Var(\varepsilon)}.$$

The optimal output-related pay $k$ thus increases in agents' productivity in a match (which is higher for agents in aligned matches). The intuition is that the principal wants to incentivize the high-productivity agents more than other agents, and these agents need to be compensated for taking on more risk (and for the larger induced effort). Thus, agents in aligned matches demand higher expected pay to participate. The following proposition summarizes this result.

Proposition 1. Suppose $\theta = 0$ and $\lambda > 0$. Then politician-bureaucrat preference alignment increases bureaucrats' expected pay, i.e., $E[w \mid A = 1] > E[w \mid A = 0]$.

[3] The participation constraint states that the value of the contract to the agent must at least equal her outside option; at the optimum it binds, i.e., $U = E[w] - 0.5\, r\, Var[w] - c = u$.

Now, what happens when we allow for policy-motivated agents? In this case, we let $\theta > 0$ while $\lambda = 0$. The agent's utility of a contract $w$ is then given by

$$U = \tau + (A\theta + k)ae - 0.5\, r k^2 Var[\varepsilon] - c.$$

It follows that the agent's first-order condition is

$$(A\theta + k)a = c'.$$

Thus, for a given $k$, the agent puts in more effort when the match is aligned ($A = 1$). Equivalently, the same level of effort can be achieved with a lower $k$, although this does not constitute an optimal contract. In fact, it turns out that the optimal $k$ is the same for agents of different alignment-types. This result stems from the fact that the change in effort induced by changing $k$ is the same at all effort levels when $c''$ is constant (i.e., $de/dk = a$ is invariant to $\theta$). The principal's trade-off when increasing $k$, between the marginal gain in production (through effort) and the marginal cost of risk-shifting, is then the same for all agents regardless of the match's (mis-)alignment.

Turning to expected pay, note first that, for a given transfer $\tau$, the surplus is larger for agents in aligned matches. Both aligned and misaligned agents face the same risk and receive the same performance pay for a given output level. However, the aligned agents also have a direct stake in output. Moreover, these agents work harder, creating additional surplus for themselves. That this is indeed a surplus follows from the fact that these agents could choose the same effort level as the misaligned ones, but optimally choose higher effort for the given $k$.
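Both closed-form results above (the expression for the optimal $k$ in the productivity case, and its independence of $\theta$ in the policy-motivation case) can be verified symbolically. Below is a minimal sketch in Python using sympy; the symbol names are ours, s2 stands for $Var[\varepsilon]$, u for the outside option, and the participation constraint is imposed with equality:

```python
# Symbolic check of the two optimal-contract results above (a sketch;
# symbol names are ours, s2 stands for Var[eps], u is the outside option).
import sympy as sp

e, k, tau, a, lam, theta, A, r, s2, u = sp.symbols(
    'e k tau a lam theta A r s2 u', positive=True)

def optimal_k(prod, motive):
    """Optimal contract power k for productivity 'prod' and policy stake 'motive'."""
    # Agent's certainty-equivalent utility under w = tau + k*f, f = prod*e + eps,
    # quadratic effort cost e**2/2, and direct policy payoff motive*prod*e.
    U = tau + (k + motive) * prod * e - sp.Rational(1, 2) * r * k**2 * s2 - e**2 / 2
    e_star = sp.solve(sp.diff(U, e), e)[0]                    # incentive compatibility
    tau_star = sp.solve(sp.Eq(U.subs(e, e_star), u), tau)[0]  # participation binds
    pi = prod * e_star - (tau_star + k * prod * e_star)       # principal: E[f] - E[w]
    return sp.simplify(sp.solve(sp.diff(pi, k), k)[0])

# Case 1 (theta = 0, lam > 0): k* rises with match productivity a + A*lam.
print(optimal_k(a + A * lam, 0))   # (A*lam + a)**2/((A*lam + a)**2 + r*s2)

# Case 2 (lam = 0, theta > 0): k* is independent of theta.
print(optimal_k(a, A * theta))     # a**2/(a**2 + r*s2)
```

The first call reproduces $k = (a + A\lambda)^2 / ((a + A\lambda)^2 + r\, Var(\varepsilon))$, and the second returns $a^2 / (a^2 + r\, Var(\varepsilon))$ with $\theta$ dropping out, consistent with the argument above.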
