Composing Policy Interventions for Antibiotic Development

Christopher Okhravi

Dissertation presented at Uppsala University to be publicly examined in Lecture Hall 2, Ekonomikum, Kyrkogårdsgatan 10A, Uppsala, Tuesday, 29 September 2020 at 13:15 for the degree of Doctor of Philosophy. The examination will be conducted in English. Faculty examiner: Jonathan Michie, Professor of Innovation and Knowledge Exchange at the University of Oxford, Director of the University's Department for Continuing Education.

Abstract Okhravi, C. 2020. Composing Policy Interventions for Antibiotic Development. 173 pp. Uppsala: Uppsala University. ISBN 978-91-506-2838-8.

Antibiotic resistance is eroding the efficacy of the drugs we have and, unless future science dictates otherwise, bacteria will eventually become resistant to whatever new antibiotics we discover. We must therefore plan for a continuous stream of innovation. Unfortunately, pharmaceutical firms have left the scene to pursue more profitable areas. While the free market may eventually give rise to a solution, the question is how much destruction we are willing to accept on the way, and whether it eventually will be too late. A plethora of policy interventions, aimed at stimulating antibiotic research and development, have been suggested, and simulation modelers have begun estimating their effects. Suggested interventions range from prizes, grants, and competitions to regulatory fast-tracking and non-profit development. No unified picture of what to do has emerged. From the perspective of policy-makers, the need does not seem to be for more but for better information. This thesis suggests that to truly compare policy interventions, aimed at stimulating antibiotic development, we should draw on simulation model alignment techniques. To support such an endeavor this thesis presents the seeds of a compositional language capable of formally expressing policy interventions as offers that can be actualized into contracts. The language is not merely theoretical but implementable and usable within actual simulation models. The language is not only derived from previous research on compositional contracts in functional languages and the resources- events-agents ontology, but also the author's unique position as a participant in DRIVE-AB, which comprised 16 public and 7 private partners from 12 countries, and finally six separately published simulation experiments that are all based on work by the author. A constructive proof is provided to establish the utility of the solution in terms of its capacity to capture important facets of important policy interventions.

Keywords: evidence-based policy, simulation, antibiotics, ontology, composition, alignment, docking, contracts, REA, research and development, design science research, domain-specific language

Christopher Okhravi, Department of Informatics and Media, Kyrkogårdsg. 10, Uppsala University, SE-751 20 Uppsala, Sweden.

© Christopher Okhravi 2020

ISBN 978-91-506-2838-8
urn:nbn:se:uu:diva-417037 (http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-417037)

By making things as similar as possible their differences emerge.

Acknowledgements

My sincerest gratitude is extended to the following few individuals who in various ways have supported me at critical junctures on this journey.

Steve McKeever
Simone Callegari
Enrico Baraldi
Francesco Ciabuschi
Carl Anderson Kronlid
Olof Lindahl
Jenny Eriksson Lundström
Martin Stojanov
Görkem Paçacı
Madelen Hermelin
Mattias Nordlindh
Anneli Edman
Owen Eriksson

Many names are left unmentioned to emphasize the weight of my gratitude extended to the above. I am convinced that you who remain unmentioned already know that you truly are valued as a colleague or friend. Finally, I wish to express my heartfelt gratitude to my significant other and my parents for their love, strength, and unending support.

Funding
Part of this work has received support from the Innovative Medicines Initiative Joint Undertaking under grant agreement n° 115618, resources of which are composed of financial contribution from the European Union's Seventh Framework Programme (FP7/2007-2013) and EFPIA companies' in-kind contribution.

Disclosure
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Contents

1 Introduction ...... 9
   1.1 Problem ...... 11
   1.2 Research question ...... 15
   1.3 Contributions ...... 18
   1.4 Context ...... 20
   1.5 Related simulation work ...... 21
   1.6 Overview ...... 24

2 Background ...... 26
   2.1 Antibiotic resistance ...... 26
   2.2 Development of antibiotics ...... 27
   2.3 Economics of antibiotics ...... 30
   2.4 Policy interventions for antibiotics ...... 33
   2.5 Evidence-based policy ...... 34
   2.6 Docking and alignment ...... 37
   2.7 Domain-specific languages ...... 40

3 Theory ...... 42
   3.1 Agent-based modeling ...... 42
   3.2 Resources, events, and agents ...... 47
   3.3 Compositional financial contracts ...... 58

4 Methodology ...... 72
   4.1 Paradigm ...... 72
   4.2 Research strategy ...... 72
   4.3 Research output ...... 73
   4.4 Evaluation ...... 75
   4.5 Delimitations ...... 76

5 Experiments ...... 78
   5.1 Summary of experiments ...... 78
   5.2 Detailed case ...... 80

6 Objectives of a solution ...... 92
   6.1 Compositionality ...... 93
   6.2 Actualizability ...... 93
   6.3 Prospectability ...... 95
   6.4 Atomicity ...... 97
   6.5 Transferability ...... 99
   6.6 Transformability ...... 99
   6.7 Optionality ...... 100
   6.8 Parallel conjunctivity ...... 101
   6.9 Sequential conjunctivity ...... 102
   6.10 Conditionality ...... 104
   6.11 Scalability ...... 106
   6.12 Causality ...... 108
   6.13 Finality ...... 109
   6.14 Cyclicity ...... 110
   6.15 Reducibility ...... 111

7 Solution space ...... 113
   7.1 Behavers ...... 113
   7.2 State ...... 116
   7.3 Execution ...... 117

8 Proposal ...... 121
   8.1 Contracts ...... 121
   8.2 Observables ...... 125
   8.3 Reduction ...... 128
   8.4 Done ...... 133
   8.5 Events ...... 133
   8.6 Actualization ...... 135

9 Evaluation ...... 141
   9.1 Proof of utility ...... 141

10 Conclusion ...... 145
   10.1 Revisiting the research question ...... 145
   10.2 Limitations ...... 146
   10.3 Future work ...... 147
   10.4 Closing thoughts ...... 148

References ...... 149

Appendix A: Complete language ...... 161

Appendix B: Executable example ...... 168

1. Introduction

A post-antibiotic era, where common infections and minor injuries may kill, is no longer an apocalyptic fantasy but instead “a very real possibility for the 21st century” (World Health Organization, 2014). The increasing prevalence of antibiotic resistance is eroding the efficacy of the currently available antibiotics (Laxminarayan et al., 2013). Yet, our healthcare system depends on their availability (Towse et al., 2017). Antibiotics are the backbone of modern medicine and a necessary prerequisite for treating medical conditions ranging from cancer to broken bones to pneumonia (Harbarth et al., 2015). We need them, not only to treat primary bacterial infections but also to treat secondary bacterial infections incurred while a patient is being treated for some other (possibly viral) infection. The global coronavirus pandemic in the spring of 2020, where COVID-19 spread like wildfire, not only serves as a warning of how quickly a global need for large volumes of effective antibiotics could arise, but may even “exacerbate the rise of antibiotic resistant superbugs” (Manohar et al., 2020) following the increased prophylactic use.

Even without a pandemic, antimicrobial-resistant¹ infections are responsible for at least 50,000 deaths each year across Europe and the US alone, and while estimates vary, some have suggested that antimicrobial resistance will claim 10 million lives each year by 2050, and accumulate a cost of 100 trillion USD (O’Neill, 2016). Others have argued that the increased annual healthcare costs in the USA, as a consequence of antibacterial resistance, could be as high as 20 billion dollars, with annual lost productivity costs as high as 35 billion (Centres for Disease Control and Prevention, 2013). The true numbers may, however, be much higher, as many parts of the world lack surveillance and reporting systems. The severity of the situation is evident, and the need for global action even more so (Laxminarayan et al., 2013).
Unfortunately, there has been a lack of research and development (R&D) into new antibiotics to replace the old ones facing resistant bacteria (Rex & Outterson, 2016). The number of companies engaged in antibiotic research and development has decreased (Cooper & Shlaes, 2011), while antibiotic resistance and the rate at which resistance spreads have both increased (Eliopoulos et al., 2003). Promisingly, we are now witnessing an unprecedented progress in the history of antibiotics, with a strong trend towards non-traditional, pathogen-specific and adjunctive approaches (Theuretzbacher et al., 2019). Further, antibiotics have been put on the public agenda by, for example, consortiums like DRIVE-AB² and public-private partnerships like ENABLE.³ Yet, more work, focus and funding is needed if these efforts are to result in “effective antibacterial therapies” (Theuretzbacher et al., 2019). The potential of this progress is, in the words of Theuretzbacher et al. (2019), “encouraging but fragile”.

Similar to the challenge of climate change, the invisible hand of the free market ought to eventually give rise to a solution. The question, however, is how much destruction we are willing to accept on the way, and whether the solution will arrive before it’s ‘too late’. Put differently, both climate change and the lack of effective antibiotics can be described as “market failures”, meaning that their outcomes under a free market are not obviously Pareto efficient.

¹ Antimicrobial resistance is a broader term than antibiotic resistance as it considers all antimicrobials and not just antibiotics.

Relying on growth in MDR [multi-drug resistant] infections to make the existing commercial model for drug development work would require such a high prevalence of MDR infections that the sustainability of developed country health systems would be threatened. (Towse et al., 2017)

In attempts to mitigate this catastrophic combination of increasing resistance and decreasing antibiotic research and development, a plethora of policy interventions (also known as incentives or simply policies) have been proposed (Mossialos, 2010; Renwick, Brogan, et al., 2016). Such policy interventions seek to incentivize pharmaceutical firms to either increase their efforts, or recommit to the field.

Substantial effort (Grace & Kyle, 2009; Kozak & Larsen, 2018; Mossialos, 2010; Renwick, Brogan, et al., 2016; Towse & Sharma, 2011) has been put into summarizing, documenting, and balancing the predicted advantages and disadvantages of various policy interventions. While consistent terminology is beginning to emerge, we are far from a unified picture of unambiguously differentiable policy interventions for antibiotic research and development, and also far from a unified understanding of their effects. “Even though there is consensus on the need to do ‘something’, there seems to be disparate views on exactly how to do it” (Sertkaya et al., 2017), or rather what to do. The ‘wickedness’ of policy design is made abundantly clear in the following words of Cartwright and Stegenga (2011):

Will the policy work? Does it have unpleasant side effects? Does it have beneficial side effects? How much does it cost? Have we made the correct choice of target outcomes? Is the policy morally, politically and culturally acceptable? Can we get the necessary agreement to get it enacted? Do we have the resources to implement it? (Cartwright & Stegenga, 2011, p. 290)

² http://drive-ab.eu/
³ http://nd4bb-enable.eu/

Generating evidence for evidence-based policy is hard, especially in cases where we cannot perform randomized controlled trials (RCTs). In the case of antibiotic research and development there simply aren’t enough samples (i.e. new discoveries or antibiotics in the pipeline) to experiment with. And even if we had samples, withholding funding from antibiotic projects that need money, merely to obtain a control group and see if they ‘make it on their own’, would be not only irresponsible but possibly even a violation of antitrust law. Unsurprisingly, the community has, beyond logical argument, resorted to economic modeling and simulation.

1.1 Problem

Substantial modeling efforts surrounding everything from the rise (Massad et al., 1993) and spread (Almagor et al., 2018) of antibiotic resistance, to its impact on public health (Eliopoulos et al., 2003) have been undertaken. More recently, attempts have also been made to estimate the possible impact of various policy interventions in terms of, for example, their public cost and cost efficiency, as well as their effects on go/no-go decisions of antibiotic projects (Baraldi et al., 2019; Okhravi, 2020; Okhravi et al., 2018; Okhravi et al., 2017; Sertkaya et al., 2014; Sertkaya et al., 2017; Sharma et al., 2011; Towse et al., 2017). Unfortunately, no clearly unified answers appear to emerge from all these models.

According to Cartwright and Hardie (2012), there are two main problems in evidence-based policy. The first is generating or finding evidence that a policy has worked “somewhere”. The second is finding evidence supporting that the same policy, when enacted, will work “elsewhere”. Cartwright and Hardie (2012) suggest that the more difficult of these two is the second. Proof that it worked “there” is not proof that it will work “here”, since the devil tends to be found in the details.

“If we are to be able to trust the simulations we use, we must independently replicate them. An unreplicated simulation is an untrustworthy simulation – do not rely on their results, they are almost certainly wrong.” (Edmonds & Hales, 2003, para. 12.2)

The process of translating policy evidence from “here” to “there” can, when both places are models ‘in silico’, be likened to the idea of alignment of computational models. Alignment was originally introduced by Axtell et al. (1996) as a suggestion for how to validate social science models that do not readily lend themselves to empirical testing such as randomized controlled trials. To abbreviate, they employed the term “docking”, suggesting an analogy with “orbital docking of dissimilar spacecraft”. Alignment is concerned with identifying differences and similarities in model input, behavior, and output, with

the ultimate intent of determining whether some models are equivalent under some specification of equivalence. One way of approaching simulation model alignment is to use a subsuming language and to re-express both models in that language. By making things as similar as possible, their differences emerge. Aligning simulation models by means of subsumption thus has the potential to aid in comparing models “here” and “there” so that we may begin to form informed opinions on ‘where’ a given policy intervention has ‘what’ effects.

Policy interventions aimed at stimulating antibiotic research and development can often be portrayed as bilateral contracts between benefactors and beneficiaries. Consider for example the simple notion of a financial grant. A benefactor (such as a state) agrees to pay some amount of money, in some number of instalments, over some period of time, to some beneficiary (such as a pharmaceutical developer) in exchange for that beneficiary promising to undertake some activity (such as the development of some antibiotic) to the best of its abilities. To take an example of a slightly more esoteric, but still bilateral, policy intervention, consider what has been called (among other names) a ‘partially delinked market entry reward’. In this case, the benefactor agrees to pay some amount of money, in some number of instalments, over some period of time, to the beneficiary starting as soon as (or if) that beneficiary manages to bring a product that meets some specification (set by the benefactor) to the market.

While the above policy interventions essentially capture monetary transactions from benefactor to beneficiary in exchange for some activity, we can of course capture policy interventions that are not directly monetary as contracts as well. Consider for example the idea of ‘fast-tracking’ of drug approval.
In such an intervention, the benefactor (in this case the drug approval agency) agrees to process some new drug application sent by the beneficiary and respond with either an approval or rejection before some (earlier than without fast-tracking) due date, in exchange for the beneficiary showing that the drug in question meets some target criteria set forth by the benefactor.

While the above monetary and non-monetary policy interventions are all bilateral, some policies are clearly multilateral. In fact, we can trivially turn the bilateral market entry reward policy above into a multilateral policy by redesigning it such that multiple countries partake to pay the total prize, where each country pays a piece proportional to some objectively measurable metric such as GDP, level of resistance, stewardship programs, and so forth. Alternatively, consider the proposal by Årdal et al. (2020) where countries contribute to an antibiotics fund in exchange for promised future access. Describing such a contract in detail is evidently less trivial but just as important nonetheless.

Following this logic it is reasonable to assume that many policy interventions aimed at stimulating antibiotic research and development could be described in a language that resembles that of contracts. It should be noted, however, that the formalizations above are by no means the only ways to interpret what people call ‘grants’ or ‘partially delinked market entry rewards’. Yet,

this is precisely the problem. Policy interventions referred to by name tend to leave important details unspecified. In short, they are ambiguous. Consider for example how the term “market entry reward” sometimes (Okhravi et al., 2018) is prefixed with “fully” or “partially” to suggest whether the intellectual property of the antibiotic receiving the prize would be transferred to the benefactor or remain in the hands of the beneficiary. Yet, the insufficiency of such prefixes becomes evident when observing policy intervention proposals such as Outterson et al. (2016) that additionally parameterize market entry rewards by who sets sales prices, who arranges manufacturing, and who performs post-approval studies. As if these added parameters were not enough, they also consider the notion of IP licensing (as opposed to a simple transfer), which itself requires additional parameterization.

The same essential problem was observed by Peyton Jones et al. (2000), who proposed that function combinators could be used to model contracts for the financial industry. Function combinators stem from combinatory logic, which was originally invented in the 1920s by Moses Schönfinkel and later rediscovered by Haskell Curry. A combinator is informally defined as a (higher-order) function that takes functions as arguments and produces its result using only function application and other combinators. Peyton Jones et al. (2000) stated that the problem with building a large catalogue of contract types, where each type is entirely different from any other, is that “someone will soon want a contract that is not in the catalogue”. The answer to such a problem, Peyton Jones et al. (2000) propose, is to instead design a set of combinators that can be composed to build the same contracts, but also any other arbitrarily complex contract. Works building on Peyton Jones et al. (2000) and the updated Peyton Jones and Eber (2003), such as Andersen et al.
(2006) and Stefansen (2005), have managed to capture (among other things) multi-party contracts over arbitrary resources. Given the existence of formal combinators that can be used to describe multilateral contracts of future transfers of arbitrary resources, it is reasonable to assume that combinators that capture what can be interpreted as formal or informal contracts of policy interventions for antibiotic research and development can indeed be designed.

There are however two major variables to consider when simulating policy interventions: the policy intervention itself and the context in which the policy is enacted. Even if contracts can capture the core of any policy intervention, that does not necessarily tell us anything about the context in which the contract is to be interacted with. A policy intervention cannot be simulated in isolation since it by definition intends to intervene in some system. Borrowing terminology from Cartwright and Stegenga (2011), we will use the term ‘causal model’ to refer to binary functions that map policy interventions and system states to policy intervention ‘outcomes’ or ‘effects’. This is similar to the way Cartwright and Stegenga (2011) use the term, but different in that what is called “auxiliary factors” or “support factors” are assumed to be

    InterventionA          = | =          InterventionB
         in                                    in
       ModelA              = | =             ModelB
        yields                                yields
  ModelA(InterventionA)    = | =      ModelB(InterventionB)

Figure 1.1. Comparing two hypothetical policy intervention experiments involves comparing input policy interventions, simulations (causal models), and output.

contained within the causal model itself. Examples of auxiliary factors for our domain include development phase durations, scientific probabilities of success, availability and costs of private capital, assumed interventions already in place, and so forth.

Imagine two policy intervention simulation experiments, where the causal models are encoded in different languages, and where the policy interventions are embedded in the code of each model. Treating both models as black boxes for a moment, we may wish to know whether the policy interventions embedded in the two models are equivalent, whether the causal models excluding the policy interventions are equivalent, and whether the resulting outputs are equivalent. These three questions are visualized in Figure 1.1, and can in a generalized sense be thought of as three pairs of ‘things’ that can be compared for equality: namely input, model/function, and output. If you, for some two policy intervention simulation experiments, find that both the causal models and the outputs are, for all intents and purposes, equivalent, but that the input policy interventions are not, then you have identified two different policy interventions that yield equivalent effects. Say that you, for some two policy intervention simulation experiments, find that both the input policy interventions and the output effects are, for all intents and purposes, equivalent, but that the causal models are different. Then you have found evidence suggesting that the same policy intervention can produce the same effect in different models. In Section 2.5 we explore different notions of equality and how to interpret different combinations of equality and non-equality between the left and right hand sides of Figure 1.1. All this is in some sense trivially obvious, but the point is that without a way to compare policy intervention models, causal models, and output for equality, it is not trivial to do.
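The three equality questions can be sketched as code. The following is a hypothetical Python illustration, not taken from the thesis: each experiment is a pair of intervention and causal model, equality of functions is crudely approximated extensionally over sampled inputs, and all names and the toy integer state are assumptions made for this example.

```python
# Hypothetical sketch of the three comparisons behind Figure 1.1:
# comparing inputs (interventions), models, and outputs for equality.

def model_a(intervention):
    # Toy causal model: the effect is the intervened-upon baseline state.
    return intervention(10)

def model_b(intervention):
    # A second model that happens to behave identically.
    return intervention(10)

grant = lambda state: state + 5   # intervention A: fund 5 more projects
reward = lambda state: state + 5  # intervention B: different name, same behavior

output_a = model_a(grant)
output_b = model_b(reward)

# Function equality is undecidable in general; here we crudely
# approximate it by comparing outputs over a sample of states.
same_interventions = all(grant(s) == reward(s) for s in range(100))
same_models = all(model_a(i) == model_b(i) for i in (grant, reward))
same_outputs = output_a == output_b
```

Under this toy notion of equality all three comparisons succeed; with real simulation models, choosing the sample and the notion of equivalence is precisely the hard part.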

For the purpose of completeness we should mention that some simulation models parameterize assumptions beyond the policy intervention to be explored. Such parameters include development phase durations, probabilities of success, assumed interventions already in place, and so forth. These can, as previously mentioned, be viewed as “auxiliary factors” in the terminology of Cartwright and Stegenga (2011) or “support factors” in that of Cartwright and Hardie (2012). In fact, in some models the auxiliary factors might even be bundled with the intervention under consideration, which means that the simulation experiment runner must manually draw the line between what constitutes the intervention and what constitutes the assumptions, where the latter again can be thought of as the auxiliary factors, meaning the ‘environment’ or ‘context’. In some very rigid models, both the auxiliary factors and the policy intervention are contained within the model itself. Viewing the model as a pure function, such a model is thus a constant, or a nullary function.

For our purposes we assume that it is possible to re-interpret any simulation model of policy interventions, regardless of its implementation details, as a unary function from policy intervention to effect, where auxiliary factors are expressed within the model, as visualized in Figure 1.1. Whether the model function is produced by means of partial application of a binary function that first takes auxiliary factors (which appears to be a reasonable design choice) is, in the context of this work, an implementation detail.

In conclusion, the thesis is that it is possible to design a subsuming language, based on primitive composable constructs, which we can use to express policy interventions for antibiotic development, for the purpose of simulation model alignment, so that we can begin to systematically approach the question of ‘what’ works ‘where’.
The ultimate goal is to provide decision-support for policy-makers in finding interventions that ensure access to a sustainable stream of new and effective antibiotics. In this monograph, I provide a model that can be used to express contract offers that are at the heart of policy interventions aimed at stimulating antibiotic development. This model serves as a proof-of-concept to justify the claim that the above thesis might be true, and hence ought to be further explored.
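To give a flavor of what building contracts from composable primitives can look like, here is a minimal sketch in the spirit of Peyton Jones et al. (2000). It is illustrative only and is not the language proposed in this thesis; the names (`zero`, `pay`, `both`, `scale`) and the list-of-payments representation are assumptions made for this example.

```python
# Illustrative combinator sketch (not the thesis language): a contract
# is represented as a list of (amount, recipient) obligations, and
# larger contracts are composed from smaller ones.

def zero():
    """The empty contract: no obligations."""
    return []

def pay(amount, recipient):
    """Primitive contract: one payment to one recipient."""
    return [(amount, recipient)]

def both(c1, c2):
    """Conjunction: the obligations of both sub-contracts."""
    return c1 + c2

def scale(k, contract):
    """Scale every payment in a contract by a factor k."""
    return [(k * amount, r) for (amount, r) in contract]

# A toy grant paid in three equal instalments, built compositionally:
instalment = pay(1_000_000, "developer")
grant = both(instalment, both(instalment, both(instalment, zero())))

# A toy reward can reuse the same primitives at a different scale:
reward = scale(10, instalment)

total_grant = sum(amount for (amount, _) in grant)
```

The point is not the particular representation but that a small, fixed set of combinators can express an open-ended catalogue of contracts.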

1.2 Research question

Up to this point the terms ‘policy’, ‘interventions’, and ‘policy interventions’ have been employed without any formal definitions. Before we can approach the research question we must thus agree on definitions. The term ‘policy intervention’, as used in this thesis, is best understood through its constituent parts. The word ‘policy’ denotes some agent’s deliberate adoption or proposal of some course or principle of action. A change in policy is thus a change in behavior. The word ‘intervention’ denotes the act of exogenously intercepting some system with the intent of transforming

it in some intended manner. Whether the transformation succeeds or not is irrelevant. What matters is the existence of intent related to the introduction of the intervention. An intervention is an exogenous force that, when applied to a system, becomes endogenous to that very system. In other words, when applied, it may become an indistinguishable part of the system itself.

Drugs are a prime example of how the word ‘intervention’ is employed in this thesis. When a patient suffering from some condition ingests some (exogenous) drug intended to treat that condition, then the drug is absorbed by the body (the system) and (in some respects) becomes a part of the body (becomes endogenous to the system) and thus necessarily alters the body (transforms the system), hopefully by eradicating the disease without harming the host.

Definitions of interventionism from political science or economics have here been deliberately avoided as these usually imply that the agent of change is a state or central government. While it could be argued that it is the responsibility of global states and consortiums, possibly multilaterally, to steer the world towards sustainable solutions in this crisis of antibiotic resistance, they are not the only ones capable of autonomously introducing change. For this subtle reason, the term ‘intervention’ is used in the simpler sense of inducing a change to a system by means of policy, meaning by means of some agents adopting some new behaviors.

In the medical example above we used the terminology of ‘interventions’, ‘systems’, ‘exogenous’, and ‘endogenous’ to build a model of drug ingestion. We modeled a scenario where the drug induced no effects on the patient before it was ingested, that is, while it was outside the body. One may very well conceive of policy interventions where the very existence of some drug has an effect on patient behavior, and in such cases both the system and intervention would have to be defined in broader ways.
If, for example, we were attempting to model social distancing behavior during a pandemic, we might assume that the very existence of a vaccine would alter individual behavior. In such a model we might thus define the ‘system’ as the population and the intervention as the introduction of a vaccine.

The term ‘system’ is here used in a very general sense and is perhaps best understood in terms of the analogous words ‘state’ or ‘computational context’. What we have previously described as ‘causal models’ will in this thesis be presumed to always contain some form of such an intervenable state, or what above was called a ‘system’. To rehash and formalize, causal models can, in this thesis, be defined as instances of the following unary function type:

Model = Intervention → Effect    (1.1)

that take policy interventions and yield their expected effects. Interventions, in turn, can be defined as:

Intervention = State → State (1.2)

where some old state is mapped to some new state. Note that if the intervention function is the identity function then the model will yield the effect in the absence of any intervention. In Chapter 4 we further capture the idea that models contain state and that interventions operate by means of behavioral changes. Having formally defined the terms ‘policy intervention’, ‘policy’, ‘intervention’, ‘state’, and ‘causal model’, we are now ready to formulate the research question.
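Before turning to the research question itself, definitions (1.1) and (1.2) can be rendered directly as code. The sketch below is a hypothetical Python rendering; the concrete State (a toy integer, say a count of active antibiotic projects), the `make_model` helper, and the example interventions are illustrative assumptions, not part of the thesis.

```python
from typing import Callable

State = int   # toy state: number of active antibiotic projects
Effect = int  # toy effect: projects remaining after the model runs

Intervention = Callable[[State], State]   # (1.2): State -> State
Model = Callable[[Intervention], Effect]  # (1.1): Intervention -> Effect

def make_model(baseline: State) -> Model:
    # Auxiliary factors (here just a baseline state) are closed over,
    # yielding a unary model function, akin to partially applying a
    # binary function that takes the auxiliary factors first.
    def model(intervention: Intervention) -> Effect:
        return intervention(baseline)
    return model

model = make_model(baseline=10)

identity = lambda s: s   # the identity intervention: no change
grant = lambda s: s + 5  # a toy intervention adding five projects

effect_without = model(identity)  # effect in the absence of intervention
effect_with = model(grant)
```

Note how applying the identity intervention yields the baseline effect, mirroring the remark above that the identity function recovers the no-intervention case.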

RQ: What is the fundamental language of policy interventions usable within causal models of antibiotic development?

The term ‘fundamental’ is meant in the sense of the apocryphal Ockham’s razor, or more precisely of its recent rewording by Schaffer (2015): “The Laser: Do not multiply fundamental entities without necessity” [emphasis mine]. Meaning that a language construct that can be reconstructed as the composition of two other constructs is not fundamental, and any proposed language masquerading composite constructs as fundamental is inferior. Simplistically speaking, fundamentality matters because a solution that invents a unique terminal symbol for every possible policy intervention is, as we saw in the argument by Peyton Jones et al. (2000) on financial contracts, an evidently useless language. Such a solution lacks structure, yet policy evidently has structure. The insistence on fundamentality emphasizes that complex policy interventions must be built by composing simple building blocks, which in turn reveals why the word ‘composing’ made its way into the title of the thesis.

Fundamentality must however also be balanced with utility, which is why we find the word ‘usable’ in the research question. Comparing two languages is thus not as simple as counting production rules in a grammar. To appreciate why, consider for example SKI combinator calculus, or cellular automata like Conway’s Game of Life or Stephen Wolfram’s Rule 110 (Cook, 2004). While these are all Turing complete, they are not particularly useful when seeking to express real-world complex programs involving for example business transactions and multilateral contracts. The language of policy interventions must not merely be fundamental, but also useful in the context of a causal model for antibiotic development.

The term ‘the’ is used rather than ‘a’ to imply that solutions form a total order under binary comparisons of fundamentality and utility. This means that any language must exhibit either more, less, or equal fundamentality as well as utility.
This insistence on solutions forming a total order enables us, in theory, to qualitatively discuss and compare solution quality. The term ‘language’ is used to concretely refer to the notion of domain-specific languages, and abstractly to that of grammars endowed with semantics. The term ‘policy intervention’ is used, as explained, in the sense of deliberate system-to-system changes by means of the deliberate introduction, elimination, or alteration of some agent behavior.

Finally, the term ‘development’ is used in place of the broader term ‘research and development’, or the even broader term ‘research, development, and commercialization’, in order to reduce scope. To improve the realism of policy intervention simulations for this domain, both research and commercialization ought to be explored in further detail, and this thus serves as an important avenue for future research.

It must be noted that while the research question is formulated as if this thesis presents the fundamental language of policy interventions for antibiotic development, this is indeed not the case. This thesis presents the most fundamental language unveiled by the process of this research. Yet, it is entirely possible, and my sincere hope, that future researchers will find a way to increase fundamentality, deliberately abstract away from the domain of policy interventions for antibiotics, and instead focus on compositional modeling of policy interventions in general.

Epistemologically, this question inherently provokes prescriptive as opposed to descriptive knowledge, and this research is, in the sense of Simon (1956), thus not concerned with proving “what is” (in the naturalistic sense) but rather what “ought to be”. In other words, the aim is not to discover some ‘naturalistic’ language of policy interventions for antibiotic development, but rather to design a “satisficing” solution. Philosophically, this research is grounded in the ideas of Pragmatism, as we are concerned with utility rather than truth. Poetically, this research is concerned with making things similar so that we can find and describe what sets them apart, and its aim is perhaps best summarized in the following words of Herbert Simon.

How complex or simple a structure is depends critically upon the way in which we describe it. Most of the complex structures found in the world are enormously redundant, and we can use this redundancy to simplify their description. But to use it, to achieve the simplification, we must find the right representation. (Simon, 1996, p. 215)

1.3 Contributions
This thesis proposes a contract model suitable for expressing the offers that underlie policy interventions for antibiotic development. The model draws on the work of Peyton Jones and Eber (2003) and derivatives like Andersen et al. (2006) and Stefansen (2005). The proposal includes types for observables in a vein similar to Peyton Jones and Eber (2003), and economic transfer and transformation events in a vein similar to the extended REA (resources, events, agents) ontology of Geerts and McCarthy (2000b), but from the perspective of the “trading partner view” (Hruby, 2006, p. 353). The proposal also includes reduction semantics, in the vein of Andersen et al. (2006) and Stefansen

(2005), that show how contracts, under the arrival of events, can be reduced to residual contracts that have taken the events into consideration.

It is shown that these composable contracts can be used in a context which is argued to be capable of capturing many synchronous simulation models. Specifically, this context draws on the agent-based paradigm and emphasizes subjective behavior and interpretation of messages as opposed to centrally controlled behavior. The bridge between this very general definition of simulation, and the composable contracts underlying policy interventions for antibiotic development, lies in a shared message (event) type.

The key insight is two-fold. The first realization is that much (if not all) activity in social simulations can, in a vein similar to REA, be captured as economic events carried out by agents acting upon some resources. The second is that this same event type can be used in what is termed contract ‘reduction’. Informally, reduction simply refers to the unfolding of contracts over time as a consequence of the actions of other agents, where even seemingly objective phenomena, such as the growing of a tree or the decaying of fruit, are captured in terms of economic events transferred between agents in reference to resources. By making use of the same event type in the compositional contracts, we can use these contract combinators in any domain.

It is argued that policy interventions should be construed as contract offers rather than as actual contracts. Thus, the proposed contracts are made parametrically polymorphic over events (‘things that can happen’) and commitments (‘things that should happen’) so that the contract type forms a profunctor. This enables what is termed ‘actualization’, which allows a contract over some pair of event and commitment types to be mapped to another pair, so long as there exists an isomorphism between the two pairs.
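The notion of reduction can be made concrete with a small sketch, loosely in the vein of the compositional-contracts literature cited above. This is not the thesis’s actual language: the constructs (`Done`, `Expect`, `Both`), the `Transfer` event, and the toy ‘grant’ contract are hypothetical names invented purely for illustration.

```python
# A minimal, hypothetical sketch of composable contracts with reduction
# semantics, loosely in the vein of Peyton Jones and Eber (2003) and
# Andersen et al. (2006). All names are illustrative, not the thesis's.
from dataclasses import dataclass

@dataclass(frozen=True)
class Transfer:
    """An economic event: `frm` transfers `resource` to `to` (REA-style)."""
    frm: str
    to: str
    resource: str

@dataclass(frozen=True)
class Done:
    """No remaining obligations."""

@dataclass(frozen=True)
class Expect:
    """Await a specific event, then continue as `then`."""
    event: Transfer
    then: object

@dataclass(frozen=True)
class Both:
    """Conjunction: both sub-contracts must unfold."""
    left: object
    right: object

def reduce(contract, event):
    """Return the residual contract after `event` has arrived."""
    if isinstance(contract, Done):
        return contract
    if isinstance(contract, Expect):
        # The awaited event happened: the obligation is discharged.
        return contract.then if contract.event == event else contract
    if isinstance(contract, Both):
        left = reduce(contract.left, event)
        right = reduce(contract.right, event)
        return Done() if (left, right) == (Done(), Done()) else Both(left, right)
    raise TypeError(contract)

# A toy 'grant' offer: the funder pays, and the firm delivers trial results.
grant = Both(Expect(Transfer("funder", "firm", "money"), Done()),
             Expect(Transfer("firm", "funder", "results"), Done()))
after_payment = reduce(grant, Transfer("funder", "firm", "money"))
```

Reducing `grant` by the payment event discharges one leg and leaves the delivery obligation as the residual contract; a second matching event reduces the residual to `Done`, while unrelated events leave a contract unchanged.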
This allows contract offers to be expressed as contracts, which in turn allows the composition not only of contracts between actual agents and actual resources, but also of potential contracts between potential agents and potential resources.

This work contributes to the field of accounting information systems by (1) demonstrating another way in which REA can be combined with compositional contracts, (2) demonstrating that the complexity of REA commitments can be reduced by viewing all transfers as fulfilments of contractual obligations, (3) demonstrating how REA can be interpreted in functional programming, and (4) demonstrating that REA does not have to be a design pattern but can be treated as a framework by implementing contracts and a simple type for economic events.

This work contributes to the field of evidence-based policy by (1) providing the seeds of a language capable of expressing contracts underlying domain-agnostic yet composable policy interventions, and (2) providing the seeds of an agent-based language capable of capturing any single-threaded causal model in which policy interventions can be applied.

This work contributes to the field of social simulation by (1) serving as a case study of how it is possible to simplify alignment of social simulation

models by translating models into a subsuming language and then pursuing syntactic comparison, and (2) actually providing the seeds of such a language for policy interventions for antibiotic development.

This work contributes to the professional and academic debate on policy interventions for antibiotics by (1) re-emphasizing the policy conclusions of the case reported in Chapter 5, (2) furthering the state of the art in quantitative modeling of policy interventions for antibiotic development by letting the separately published experiments, summarized in Chapter 5, serve as examples, and (3) providing the seeds of a language that appears capable of formally and compositionally capturing many policy interventions suggested to date.

Finally, this work contributes to the field of information systems by providing a proof of concept suggesting that it seems possible to develop a compositional information model of policy interventions for antibiotic development for purposes of simulation and simulation model alignment.

1.4 Context
I was an active member of DRIVE-AB from November 2014 to September 2017. DRIVE-AB was a project composed of 16 public and 7 private partners from 12 countries (DRIVE-AB, n.d.). It was funded by the Innovative Medicines Initiative (IMI), which is a joint undertaking between the European Union (EU) and the European Federation of Pharmaceutical Industries and Associations (EFPIA) (DRIVE-AB, n.d.). DRIVE-AB aimed to identify solutions for how to (1) “reduce AMR through responsible antibiotic use”, and (2) “identify how, through new economic models, to incentivise the discovery and development of new novel antibiotics for use now and in the future” (DRIVE-AB, n.d.).

Logistically, DRIVE-AB was composed of three work packages, each of which was further divided into tasks. I was part of what was known as Task 9 of Work Package 2. The goal of Task 9 was to model and simulate intervention mechanisms brought forth by other tasks with the purpose of providing quantitative evidence for or against, and/or providing useful insights into the qualitative behavior of, said interventions.

DRIVE-AB culminated in a final report (Årdal et al., 2017), in which I co-authored Appendix C. During DRIVE-AB, I also co-authored the two peer-reviewed publications Okhravi et al. (2017) and Okhravi et al. (2018) and the two technical reports Kronlid et al. (2017a, 2017b), where the former is a private report (available upon request) and the latter is public. After my participation in the DRIVE-AB project I single-handedly conceived of and authored the peer-reviewed publication Okhravi (2020).

1.5 Related simulation work
Despite all the complications outlined in this chapter, the community seems to, in the sense of Box (1976), take solace in the idea that a model might be incorrect but still useful. In this section, I account for some key works in the field, to demonstrate how modeling has the ability to yield profound insights for policy-makers and researchers alike in the domain of antibiotic development.

Sharma et al. (2011) provide the first substantial published modeling effort of policy interventions for antibiotics R&D. They also summarize their efforts in the peer-reviewed publication Towse and Sharma (2011). Using mathematical modeling they estimate the impact that 11 different policy interventions have on the expected net present value (ENPV) of a prototypical antibiotic project at the beginning of pre-clinical. By comparing the ENPV improvement to a baseline ENPV, and to that of other therapeutic areas, they arrive at a series of policy recommendations. Sharma et al. (2011) assume that an investment must exhibit an ENPV of at least 200 million USD to be considered a reasonable investment for a pharmaceutical firm. NPV is a financial valuation tool that takes the timing of cash flows into consideration, while ENPV, in addition, also considers the risk of failure. ENPV is further explained in Section 2.3.

While a seminal work, Sharma et al. (2011) also had to make a series of ad hoc assumptions to reformulate every policy intervention as either a change in a parameter used to calculate ENPV or simply as a change to the final ENPV value. The former was chosen for most of the interventions, the latter for modeling IP extensions. Reformulating an intervention into ENPV parameters is not equally straightforward in all cases. Further, varying assumptions across interventions may make it difficult to compare effects within a study, but certainly makes comparison across studies more difficult.
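The NPV/ENPV distinction can be sketched in a few lines of code. The function and all numbers below are hypothetical, invented only for illustration; they do not reproduce any of the cited models.

```python
# Hypothetical sketch: ENPV as NPV with cash flows weighted by the
# probability that the project is still alive when they occur.

def enpv(cash_flows, gate_probs, discount_rate):
    """cash_flows[t]: net cash flow in year t (millions USD).
    gate_probs[t]: probability of passing the gate entered in year t
    (use 1.0 for years without a gate)."""
    value, p_alive = 0.0, 1.0
    for t, (cf, p) in enumerate(zip(cash_flows, gate_probs)):
        p_alive *= p  # probability of surviving up to and including year t
        value += p_alive * cf / (1 + discount_rate) ** t
    return value

# Three years of development cost, then two years of revenue if it survives.
flows = [-50.0, -80.0, -120.0, 300.0, 300.0]   # hypothetical, millions USD
probs = [1.0, 0.6, 0.5, 0.8, 1.0]              # hypothetical gate probabilities
# With gate_probs all 1.0 the same function computes plain NPV.
```

With these toy numbers the risk-adjusted value is negative even though the unadjusted NPV is positive, which is precisely the kind of gap between face-value and risk-adjusted attractiveness that the cited works describe for antibiotics.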
Orphan drug legislation is a prime example of the complexity involved in modeling the effects of a policy intervention. Sharma et al. (2011) report eight parameters, originally from Mossialos (2010), that could all be important for modeling the effects of the intervention. The eight parameters are: eligibility criteria, market exclusivity length, data exclusivity length, additional funding, associated tax credits, protocol assistance, accelerated review, and whether rejection reconsideration is possible. However, they choose to bundle all parameters into an assumption of increased revenue per region. Regions are here as important as the legislation itself, since some of the parameters (such as the eligibility criteria and the length of market exclusivity) differ between the EU and the US (Sharma et al., 2011). Yet, some unmodeled parameters might be crucial to estimating the effect of the intervention. Regulatory advice, for example, has encouraged smaller companies to seek approval for orphan products (Sharma et al., 2011), which may have positive systemic effects as there is

a “strong proven correlation between scientific support and success” (Sharma et al., 2011).

In summary, while Sharma et al. (2011) elucidate a plethora of important policy intervention details, and quantify the impact of different interventions, not all considerations are captured in the quantitative modeling. Consequently, there is still a need for more comprehensive efforts to model policy interventions for antibiotics R&D.

Sertkaya et al. (2014) progress the work of Sharma et al. (2011) by constructing an “analytical framework for examining the value of antibacterial products”. Using a decision-tree model they move the state of the art forward in four important ways. (1) They consider input parameter uncertainty (through sensitivity analysis), as well as (2) data variation based on targeting different conditions. Further, they not only explore antibiotics but also (3) a vaccine and a rapid point-of-care diagnostic. Lastly, they (4) not only compute the private value of an antibiotic but also a corresponding social value, i.e. the public health value of a future hypothetical antibiotic for different indications.

To estimate the private value of hypothetical antibiotics at the start of pre-clinical they compute ENPV through a decision-tree model. To estimate the social value they compute the ENPV of the social burden of illness in terms of quality-adjusted life years (QALYs) lost, where the social burden is computed by extrapolating from individual burden in terms of morbidity and mortality. While Sharma et al. (2011) assume that the cutoff ENPV for a reasonable investment is 200 million USD, Sertkaya et al. (2014) assume 100 million. They conclude that four of the six conditions considered had positive ENPV but none were above the threshold (Sertkaya et al., 2014).
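A decision-tree valuation of this kind can be sketched as a backward induction over phase gates. Everything below (phase costs, durations, probabilities, launch value) is hypothetical, chosen only to illustrate the mechanics; it does not reproduce the figures of Sertkaya et al. (2014).

```python
# Hypothetical backward-induction sketch of a phase-gated decision tree.
# At each gate the firm pays the phase cost now and, with some probability,
# receives the discounted downstream value; on failure the value is zero.

def tree_enpv(phases, launch_value, rate):
    """phases: list of (cost, years, p_success), earliest phase first."""
    value = launch_value
    for cost, years, p in reversed(phases):
        # Value at the start of this phase: cost paid up front, downstream
        # value discounted over the phase duration and risk-weighted.
        value = -cost + p * value / (1 + rate) ** years
    return value

phases = [
    (5.0, 1, 0.7),   # pre-clinical (hypothetical numbers throughout)
    (15.0, 1, 0.6),  # phase I
    (30.0, 2, 0.4),  # phase II
    (80.0, 3, 0.6),  # phase III
]
launch_value = 600.0  # discounted value of future sales at launch, millions USD
```

With these numbers the same project is unattractive at pre-clinical but attractive at the start of phase III, which foreshadows the stage-dependence of intervention value discussed below.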
Along with other conclusions, the authors emphasize that “only a combination of incentives has the potential to sufficiently move the ENPV above the $100 million threshold”, but state that identifying satisfactory combinations was beyond the scope of their work. Sertkaya et al. (2014) model all interventions as changes to model parameters (tax incentives, for example, were modeled as changes to the discount rate, and extended IP as extended time to generic entry). In contrast to Sharma et al. (2011), this is a useful step forward as all modeled interventions can be understood within the same framework. This is a fruitful approach as it is easy to see how additional interventions could be modeled by altering parameters as opposed to introducing completely new constructs. However, no subsequent works claim parameterization of all proposed interventions.

Sertkaya et al. (2017) continue the work of Sertkaya et al. (2014) by focusing only on the task of expanding the antibacterial pipeline by incentivizing private engagement through improved returns. Sertkaya et al. (2017) show that uniform application of a policy intervention risks under-incentivizing early-phase companies while over-incentivizing those in later phases. To under-incentivize is to fail to achieve the desired outcome, by not actually stimulating further investments, while to over-incentivize is to achieve the desired

outcome, albeit at a higher cost than needed (Sertkaya et al., 2017). Over-incentivizing has also been referred to as overpaying (Mossialos, 2010; Towse & Sharma, 2011), overcompensation (Okhravi et al., 2017), and overspending (Okhravi et al., 2018). Sertkaya et al. (2017) therefore conclude that a policy-maker must take into account not only the indication(s) targeted by the antibacterial drug in question, but also its stage of development.

As explained by Sertkaya et al. (2017), the reasons that the value of interventions is larger the closer a drug is to launch are three-fold: (1) less time remaining means fewer years for the cost of capital to compound over, (2) past costs are “sunk” and hence do not affect the current valuation, and finally (3) the probability of reaching the market increases. This means that intervention levels must be higher to “entice companies to enter [...] than to encourage those that are already in [...] to continue” (Sertkaya et al., 2017).

Beyond the issue of over- and under-incentivization raised above, Sertkaya et al. (2017), in their specific tests, also find that: (1) delaying the entry of generics is not sufficient by itself, (2) cost of capital reductions achieved through tax incentives may need to be as high as 80% (depending on the indication targeted by the drug), and (3) clinical trial times may have to be reduced by as much as 80% (depending on the indication targeted by the drug). In conclusion, Sertkaya et al. (2017) argue that designing optimal policies is “not straightforward”.

Towse et al. (2017) first examine some commonly suggested policy interventions, namely (1) public-private partnerships (PPPs), (2) alternative regulatory pathways, (3) both 1 and 2 in combination, and (4) extended market exclusivity, used both in combination with the base case and with interventions 1 to 3.
They conclude that none of the interventions, in isolation or in combination, manage to bring ENPV up to the target level, which is here also set to 100 million USD. Towse et al. (2017) then proceed to explore two alternative interventions: (5) premium prices, and (6) the insurance model. The former is an increase in price per unit, and the latter a global flat annual fee, proportionally paid by each healthcare system, along with a unit price paid to the manufacturer for each unit of drug used. Towse et al. (2017) conclude that while both premium prices and the insurance model may suffice to incentivize private R&D of antibiotics, the latter ought to be preferred as it reduces financial risk for both healthcare systems and manufacturers.

Towse et al. (2017) use mathematical modeling and compute the ENPV of hypothetical antibiotics entering pre-clinical. They elegantly base their market estimates on resistance forecasts and thus on the number of individual treatments. Even if such calculations are (as the case was) based on very recent forecasts, they will naturally need to change as forecasts are refined over time. As a first step, modelers need to be able to rerun the same model but with updated market figures; as a second step, it would be useful if policy-makers who are not also computer programmers or spreadsheet-savvy economists could redefine model parameters and trivially rerun the model.

Almagor et al. (2018) use an agent-based model to understand the spread of resistance in hospitals, given that hospitals are “focal points” for the spread of antibiotic-resistant bacteria. In their model, resistant bacteria are transferred between patients, and between health care workers and patients. Among other results, they show that “increasing the proportion of patients receiving antibiotics, increases the rate of acquisition non-linearly”. Almagor et al. (2018) show that hospitals play an important role in “determining the speed of resistance”, and that stewardship interventions that regulate the use of antibiotics have the potential to “rapidly reduce the spread of resistance”. For modeling purposes, this means that the market size of an antibiotic does not merely depend on the use of the antibiotic, but depends on it non-linearly, possibly even chaotically so. Further, as the future market size of antibiotics depends on resistance, it also depends on use, which subsequently will depend on the stewardship interventions we manage to put in place.

I myself have been involved in a number of publications using Monte Carlo simulation, with and without agent-based characteristics, to simulate policy interventions for antibiotic development. Specifically, Okhravi et al. (2017), Kronlid et al. (2017b), Kronlid et al. (2017a), Årdal et al. (2017, Appendix C), Okhravi et al. (2018), and Okhravi (2020). These works are summarized in Chapter 5 and move the state of the art further in that they explore a broader set of input parameters by employing simulation techniques. Colleagues have also published a report (Baraldi et al., 2019), commissioned by the public health agency of Sweden, which employs a similar methodology but focuses primarily on the role of Sweden.
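The feedback loop between antibiotic use, resistance, and future market size can be illustrated with a deterministic toy model, vastly simpler than the agent-based model of Almagor et al. (2018); the functional form and all parameter values are hypothetical.

```python
# Deterministic toy sketch (not Almagor et al.'s model; all parameters
# hypothetical): the colonized fraction of a ward as a function of the
# fraction of patients receiving antibiotics, where treatment is assumed
# to amplify transmission of resistant bacteria.

def equilibrium_colonized(treated_fraction, beta=0.3, amplification=2.0,
                          clearance=0.2, steps=5000):
    c = 0.01  # initial colonized fraction of patients
    for _ in range(steps):
        # New acquisitions scale with contact between colonized and
        # uncolonized patients, amplified by the treated fraction.
        new = beta * (1 + amplification * treated_fraction) * c * (1 - c)
        c = c + new - clearance * c  # acquisitions minus clearances
    return c
```

Iterating to equilibrium, the colonized fraction, and with it the future market for a drug active against the resistant strain, rises as the treated fraction rises, illustrating why market-size estimates cannot be decoupled from stewardship policy.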

1.6 Overview
This thesis is structured as follows:

Chapter 1 introduces the problem, formulates and explains the research question, summarizes the contributions, accounts for the context in which this research has been conducted, and finally briefly summarizes some key works that simulate facets of antibiotic research, development, and commercialization.

Chapter 2 provides a brief history of antibiotics, explains the cause of resistance, and broadly charts the agents involved in discovery, development, and commercialization of antibiotics. The chapter explains why the research and development of antibiotics is considered an unsound investment and grants a deeper understanding of why policy interventions for antibiotics have been suggested in the first place. Finally, this chapter further explains the concepts of evidence-based policy, simulation model alignment, and domain-specific languages, as these are important facets of how the research question is framed.

Chapter 3 provides a detailed description of the three strands that constitute the theoretical framework of this thesis. These three are agent-based modeling,

the resources-events-agents (REA) ontology/model, and finally compositional contracts. Limitations of the two latter are discussed, as these form the pathway to the objectives of a solution that later guide the proposal of the thesis.

Chapter 4 accounts for the methodological stance and assumptions by stating the paradigm in which this work is conducted, the research strategy used, the evaluation strategy employed, and finally the deliberate delimitations.

Chapter 5 gives a brief account of the six simulation experiments I participated in conceiving of and executing during the course of this thesis work. The final experiment is discussed in significant detail, both to enrich the reader’s understanding of the complexities involved in simulation of policy interventions for antibiotic development, and because one of the policy interventions from that experiment serves as the basis for the evaluation chapter.

Chapter 6 accounts for the objectives of a solution as inferred from the problem described in Chapters 1 and 2, the theoretical framework accounted for in Chapter 3, the simulation experiments reported in Chapter 5, and finally my unique position as a participant in DRIVE-AB, described in Section 1.4.

Chapter 7 suggests a very broad yet simple solution space that ought to capture many synchronous simulations of policy interventions for antibiotics. This model space serves to show how the proposed contract language of policy interventions for antibiotics is not merely theoretical but could be practically employed in any simulation that follows, or can be reformulated into, this structure.

Chapter 8 draws on the objectives of a solution and proposes a contract and contract-offer language that is usable within the solution space and capable of capturing policy interventions for antibiotic development.
Chapter 9 provides a constructive proof to establish the utility of the solution, by showing that the language indeed can be used to capture important facets of important policy interventions.

Chapter 10 revisits the research question in light of the proposal, outlines some key limitations in what can be concluded, suggests some interesting avenues for future research, and finally shares some closing thoughts on the role of this research in the wider context of policy intervention modeling in general.

2. Background

In this chapter we take a brief look at the history of antibiotics and the source of resistance. I broadly chart the agents involved in the discovery, development, and commercialization of antibiotics, but also further elucidate what these activities entail. We explore the profitability of antibiotics from the perspective of pharmaceutical companies and why it has been suggested that they are unsound investments. Lastly, we take a further look into the nature of the current policy debate on supporting antibiotic research and development. Overall, I elucidate the problem of antibiotics from a technical, economic, political, and ethical point of view. The purpose of this chapter is first to provide enough information on antibiotics and the surrounding discussion to enable the unfamiliar reader to appreciate the chapters that follow. Second, the purpose is to elucidate the complexity of the bigger picture to motivate the narrowness of this thesis. In the last three sections of this chapter I further explore the concepts of evidence-based policy, simulation model alignment, and domain-specific languages, as these are important facets of how the research question is framed.

2.1 Antibiotic resistance
Antibiotics are used to treat and prevent bacterial infections. For more than 60 years they have been regarded as a panacea, but even as far back as 1945, the discoverer of penicillin, Alexander Fleming, in his Nobel Prize speech, warned that bacteria would eventually become resistant (World Health Organization, 2014). Antibiotics are a form of antimicrobials, and antibacterial resistance is thus a form of antimicrobial resistance (World Health Organization, 2015). Resistance evolves due to the survival of particular random mutations of bacteria.

This was recently, and beautifully, visualized by researchers at Harvard Medical School (Baym et al., 2016) in a giant petri dish. A regular petri dish is a shallow glass or plastic cylinder used to culture cells such as bacteria in order to study them. This petri dish, however, was a shallow cuboid measuring 60 by 120 centimeters at a thickness of 11 millimeters. In one experiment, the cuboid petri dish was divided into 9 slices, where neither the leftmost slice nor the rightmost slice was treated with any antibiotics. The next slice inwards was treated with 3 times the wild-type minimum inhibitory concentration (MIC) of the antibiotic trimethoprim (an antibiotic usually used to treat urinary tract infections). The treatment of the following slices was sequentially increased by an order of magnitude. The giant petri dish was thus treated as visualized in Figure 2.1.

Figure 2.1. Antibiotic treatment scheme in a ‘giant’ (1200 × 600 × 11 mm) petri dish experiment conducted by Baym et al. (2016). The nine slices were treated, from left to right, with 0, 3, 30, 300, 3000, 300, 30, 3, and 0 times the wild-type MIC.

The researchers inoculated the antibiotic-free outer slices with the bacterium Escherichia coli (also known as E. coli). In short, bacteria swam and spread within one slice until a sufficiently resistant mutant was able to breach the next slice. Over time, the winning lineage ultimately overtook the slice with the highest drug concentration. When allowed to experience a spatially gradual increase, bacteria grew resistant to antibiotic treatments 1,000 times stronger than the initial treatment over the course of 8 days.

Bacteria can broadly be divided into two classes: gram positives and gram negatives, where the latter are currently more resistant to antibiotics than the former. MRSA and acne are diseases caused by gram positives, while Lyme disease and pneumonia are caused by gram negatives (Brunning, 2014). The main cause for concern is currently multidrug-resistant gram-negative bacteria (Laxminarayan et al., 2013). The spectrum of susceptible bacteria varies per antibiotic. Broad-spectrum antibiotics target both gram-positive and gram-negative bacteria, while narrow-spectrum antibiotics may target only a limited set of species. Using narrow-spectrum in lieu of broad-spectrum antibiotics when possible may “help to slow induction and spread of resistance” (Laxminarayan et al., 2013). As resistance grows, the development of species-specific antibiotics may grow increasingly important. The lesson here is, in short, that we do not merely need new antibiotics, but that it also matters which bacteria, and how broad a range of them, these new antibiotics manage to target.

2.2 Development of antibiotics
Antibiotics are mainly developed by three classes of organizations: universities (which mainly partake in the early phases), large pharmaceutical organizations (often referred to simply as ‘big pharma’), and small and medium-sized enterprises (SMEs). Antibiotic research, development, and commercialization is, as with most drugs, regulated and monitored by regulatory agencies. In the European Union the corresponding authority is the European Medicines Agency (EMA), while that in the United States is the U.S. Food and Drug Administration (FDA).

Figure 2.2. The antibiotic research, development, and commercialization landscape. Universities, big pharma, and SMEs carry out discovery and development; pharmacies and hospitals carry out sales; regulatory agencies oversee all stages, which are supported by public/private funding and investments.

As we will see throughout this thesis, both the public and private sectors support pharmaceutical research, development and commercialization in various monetary and non-monetary ways. Monetary ways include, for example, the issuance of grants (Savic & Årdal, 2018) and the reimbursement of hospital drug purchases by states, while non-monetary ways include, for example, synergetic knowledge sharing within public-private partnerships (Croft, 2005) and fast-tracking of approval (Kubler, 2018). Depending on one’s point of view, any of these could be considered interventions in the terminology of Section 1.2.

The life cycle of a pharmaceutical product consists of three major stages: discovery, development, and commercialization (Blau et al., 2004). These are summarized in Figure 2.2. In the discovery stage, molecules that have effects on the target are identified. Variations are experimented with and tested for toxicity. Finding something that will “kill the microorganism but that will leave its host (us) unharmed” is not trivial (Shlaes, 2010, p. 9). If some variation suggests a successful drug and no “worrisome toxic effects are observed”, then the molecule is promoted to a “lead” and thus becomes a candidate for development (Blau et al., 2004). In the development stage the molecule is tested in healthy volunteers, then in patients with the disease, and finally in large-scale clinical studies conducted in concert with the relevant regulatory authority, meaning for example the FDA in the U.S. and the EMA in the European Union. Generally speaking, the intent is to either establish evidence of efficacy, by for instance superiority to a control such as placebo, or establish comparative efficacy, meaning superiority, non-inferiority, or other benefits such as improved safety (Food and Drug Administration, 2001). All this, while avoiding unacceptable side-effects.
Finally, a commercial plant is designed and constructed. If the drug is approved, it may then proceed to launch and be commercially marketed and sold. Each of the three major stages can be divided into several detailed phases. However, since innovation processes often are more iterative than commonly depicted (Van de Ven, 1999), how to splice a pharmaceutical’s lifecycle into a series of discrete phases is, to some extent, a matter of interpretation.

Figure 2.3. Pharmaceutical discovery, development, and sales as a simplified process: Discovery, Pre-clinical, Phase I, Phase II, Phase III, Approval, and Sales, with failure or termination possible throughout.

This work employs the seven-phase split depicted in Figure 2.3 and explained in the following paragraphs. By only exploding the development stage into further detail, and leaving both discovery and sales as high-level definitions, it ought to be clear that this work is mostly concerned with the development of antibiotics. We will at times touch upon both discovery (sometimes referred to as research) and sales, since development cannot be understood in total isolation. At stage-gates (meaning before entering into a new phase) the development of the new drug may be terminated due to reasons such as unwanted side effects, marginal efficacy, or competition from in-house or competitor candidates (Blau et al., 2004).

An antibiotic project can at any point during its lifecycle fail or be terminated. We here make a terminological distinction between termination as a consequence of poor financial prospects (such as negative return on investment) and failure as a consequence of objective scientific failure (such as a lack of effect on the target, or toxicity).

In the discovery phase, researchers try to find “hits” and then promote them to “leads”. A hit is a molecule that has an effect on the target, and a lead is a hit that also does not exhibit any worrisome toxicity (Blau et al., 2004). To find hits, thousands of molecules are applied to targets that simulate different disease groups (Blau et al., 2004).

During pre-clinical, toxicity is further explored in vitro (meaning in “test tube” like environments) and sometimes in vivo in animals. Preparations for the first human dose, also known as “first-in-man”, are initiated and include pharmacokinetic studies (Blau et al., 2004). The results produced in pre-clinical are gathered and submitted to regulatory authorities (such as the FDA in the US or the EMA in Europe) as an investigational new drug application. If the application is approved then clinical development can be initiated.
In phase I, the first clinical trials are performed. The drug is administered to healthy human volunteers and to animals (rats/mice) in hopes of identifying “acceptable absorption, distribution, or elimination patterns” (Blau et al., 2004). Unacceptable results may terminate the study (Blau et al., 2004). In phase II the drug is administered to humans with the disease (Blau et al., 2004). “Long-term oncogenic toxicological studies” are carried out on animals while market research is initiated to obtain sales estimates (Blau et al., 2004). Failure to treat the disease, or product inferiority, either terminates the study or sends the drug back to discovery (Blau et al., 2004). In phase III, “large-scale clinical studies on humans with the disease” are carried out with involvement of regulatory authorities such as the FDA (Blau et al., 2004). Results either confirm the results of phase II and the project moves to approval, or the study is terminated (Blau et al., 2004). In the approval phase, “all information (for example efficacy, toxicology, process, drug-drug interactions, side effects) is combined” (Blau et al., 2004) and submitted to the regulatory authorities (e.g. the FDA) in what is known as a ‘new drug application’. This is also known as first submission for approval. A marketing strategy is developed and a commercial plant constructed (Blau et al., 2004). Approval is expected, but failure is possible (Blau et al., 2004). Note that Blau et al. (2004) separate first submission for approval and prelaunch activities, while we here bundle them under the term ‘approval’.

The product is then sold in an increasing number of markets across the globe, until patents expire or “competition is realized either from competitors or from planned cannibalization” (Blau et al., 2004). Global market penetration can take years since not all markets are entered at the same time (Blau et al., 2004). Pharmacovigilance monitors the product after launch, in search of unforeseen adverse effects (World Health Organization, 2002). This effectively means that a drug can, in our terminology, fail even after it has been launched.

The excruciating level of detail in the above account of pharmaceutical development shows that while the discovery of new potential antibiotics might be inherently unpredictable, developing a drug from post-discovery to market approval is not necessarily so. Drug development is a heavily regulated process. Innovation is commonly regarded as an inherently uncertain process which by definition can only become known through performing the process itself (Fagerberg et al., 2005).
From this perspective, drug development is quite a peculiar process, seeing that it is comparatively well formalized and understood (possibly as a consequence of heavy regulation). Perhaps the word ‘innovation’ is thus better used in relation to drug discovery than to drug development and commercialization. In clinical development, the challenge is seemingly neither what to do nor how to do it, but simply (a) whether or not it will succeed, and (b) whether any given developer will choose to try. This is not to say that clinical development is trivial, but merely that what must be done is ‘known’. If one doubts the sensibility of attempting to model policy interventions for antibiotic development it is useful to remember that we are not talking about simulating the discovery of new antibiotics, but merely the development of potential antibiotics into actual antibiotics.

2.3 Economics of antibiotics

Unfortunately, large pharmaceutical organizations (often referred to simply as “big pharma”) have left the antibiotics space to pursue more profitable therapeutic areas (Towse & Sharma, 2011). Simplistically, the lack of antibiotic research and development stems from pharmaceutical firms either not engaging in discovery, which results in no new projects entering the pipeline (meaning the process between discovery and market approval), and/or prematurely discontinuing projects already in the pipeline.

Figure 2.4. Valuation of hypothetical antibiotic projects from Okhravi (2020) at the start of pre-clinical, using different valuation methods (four panels: Cashflow, Cumulative Expected Value, Net Present Value (NPV), and Expected Net Present Value (ENPV); valuation in million USD over roughly 30 years). Solid lines represent mean values across all target indications while dashed lines represent a standard deviation away from the mean.

Taking the undiscounted cashflow value of antibiotic projects at face value, one might conclude that the profit requirements of ‘big pharma’ are preposterous and that antibiotics certainly are profitable. Consider for example the significantly positive cumulative cashflow in the top left plot of Figure 2.4. The figure is based on data underpinning one of the published experiments (Okhravi, 2020) that we explore more closely in Chapter 5. In fact, most projects are not just cashflow positive, but heavily so, with total cumulative cashflows in excess of a billion USD.

Unfortunately, a simple cashflow, or return on investment (ROI), analysis fails to take risk and opportunity costs into account. The crux is that this billion is only ever accumulated after 22 years, and only if the project, against all odds, happens to succeed. More often than not, the project simply loses a tremendous amount of capital and then fails. Consequently, one cannot merely

consider cashflow, but must look at the expected value of the set of possible outcomes.

Figure 2.4 presents different views of the same data. While the top left plot is neither capitalized (meaning that it does not consider opportunity cost) nor risk-adjusted, the top right is risk-adjusted, the bottom left capitalized, and the bottom right both risk-adjusted and capitalized. The top left can thus be thought of as the cumulative face value of the project, while the top right, which is known as the cumulative expected value (cumulative EV), takes the risk of failure into consideration, and the bottom left, which is known as Net Present Value (NPV), takes the lost opportunity cost of capital into consideration. Finally the bottom right, which is known as Expected Net Present Value (ENPV), takes both the risk of failure and the lost opportunity cost of capital into consideration. ENPV can thus be thought of as answering the question: what is this investment worth, given that I have to invest now and only possibly will receive revenue in the future?

It should be noted that these are not esoteric valuation methods, but textbook methods from economics that are employed in, among other places, pharmaceutical research and development (Blau et al., 2004). Assuming that ENPV plays a role in the biotech industry is not particularly controversial, see for example Blau et al. (2004), Kellogg et al. (1999), Villiger and Bogdan (2005), or Sertkaya et al. (2014). Whether that usage is sensible or not, however, is debated, see for example Blau et al. (2004) and Villiger and Bogdan (2005). Some authors (Blau et al., 2004) have proclaimed that it is a “truly meaningless” metric as any given project will either have a negative or positive NPV depending on whether it fails. The NPV distribution of a given project is thus not a single-peaked distribution (as ENPV would imply) but a two-peaked distribution.
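To make the distinction between the valuation views concrete, they can be sketched in Haskell, the language used throughout this thesis. The function names and the figures in the usage example are illustrative assumptions, not the thesis data:

```haskell
-- A minimal sketch of the valuation methods discussed above.

-- | Discount a cashflow occurring in year t at yearly rate r.
discount :: Double -> Int -> Double -> Double
discount r t cf = cf / (1 + r) ^ t

-- | Net Present Value of a yearly cashflow series (year 0 first):
-- capitalized, but not risk-adjusted.
npv :: Double -> [Double] -> Double
npv r = sum . zipWith (discount r) [0 ..]

-- | Expected Net Present Value: each year's cashflow is additionally
-- weighted by the probability that the project is still alive to
-- produce it, so the result is both capitalized and risk-adjusted.
enpv :: Double -> [(Double, Double)] -> Double
enpv r = sum . zipWith (\t (p, cf) -> p * discount r t cf) [0 ..]
```

With made-up numbers, `npv 0.1 [-100, 60, 60]` is positive while `enpv 0.1 [(1, -100), (0.5, 60), (0.5, 60)]` is heavily negative, illustrating how risk adjustment alone can flip the sign of a valuation.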
Villiger and Bogdan (2005) discuss the potential of Real-Options Valuation (ROV) as an alternative to ENPV, while Blau et al. (2004) employ Monte Carlo simulation to explore “project selection and sequencing decisions jointly” as opposed to in isolation.

For our purposes, this means that we, when modeling, cannot simplistically assume that pharmaceutical developers apply ENPV. Criticism and alternatives like those outlined here are readily available to key decision-makers. Further, there is a huge range of non-obvious parameters that might affect any pharmaceutical developer’s decision-making. Consider for example the following arbitrary aspects: corporate social responsibility, staff skills, expected drug legislation, competition, strategic fit, spillover effects, corporate power struggles, acquisition of disruptive competition, etc. The list is seemingly endless.

2.4 Policy interventions for antibiotics

Disregarding the complexities of modeling the decision-making underlying pharmaceutical investments, designing policy to mitigate the risks of antibiotic resistance by promoting the development of antibiotics unfortunately happens, in and of itself, to be particularly difficult. Tackling antibiotic resistance requires a three-pronged approach (Hoffman & Outterson, 2015) that simultaneously takes access, conservation, and innovation into consideration. Access entails ensuring that everyone who has a medical need for antibiotics has access to effective antibiotics. Conservation entails ensuring that we are not wasting our currently existing antibiotics by letting bacteria develop resistance as a consequence of consumption without a medical need. Innovation entails the continuous development of new, novel antibiotics that can replace the current and future antibiotics that eventually will become ineffective due to bacteria developing resistance.

Failing to focus on innovation is naive, and akin to the fallacious plan to become rich by not spending. Current science dictates that resistance inevitably arises from use (Van den Bogaard & Stobberingh, 2000), so until science dictates otherwise, the goal must be a continuous stream of new, effective antibiotics, ad infinitum. Failing to focus on access is clearly unethical. Unconstrained access, however, speeds resistance (Hoffman & Outterson, 2015) and thus exacerbates the demand for new innovation. To emphasize this balancing act between conservation and access, some prefer the word ‘stewardship’ as opposed to ‘conservation’. Failing to focus on conservation, and instead relying on innovation, might in the case of antibiotics be audacious, as some argue (Shlaes, 2010, p. 67) that it is becoming increasingly difficult to discover new antibiotics.
From the perspective of private pharmaceutical developers, the crux is that the incentive to innovate is negatively correlated with conservation efforts, since conservation (or sustainable use) conventionally reduces the prospective market size.

At GSK between 1995 and 2001, 67 screening campaigns on antibacterial targets were run against the SmithKline Beecham 260–530,000 compound collection. Only five chemical molecules were thought worthy of pursuing and none of those resulted in a drug. (Shlaes, 2010, p. 67)

Some have argued that the three-pronged approach is not enough, and that a “strong” incentive (Kozak & Larsen, 2018) also must contain elements of stability and sustainability. Stability emphasizes that interventions should minimize disruptive effects to avoid unintended consequences. They argue that patent life extension vouchers, for example, may generate a secondary market that causes higher prices of more widely used medications for longer periods in other disease areas (Kozak & Larsen, 2018). Sustainability emphasizes that drug investors and developers must be able to rely on the promises made by policy-makers (Kozak & Larsen, 2018). Suspecting that the nature of an

intervention may be affected by ‘political whim’ may drastically reduce its effectiveness.

Lurking under all these practical ideas are a number of questions that are ethical at core. Successfully balancing the three pillars of antibiotic policy may inevitably require us to make a number of tough decisions (Littmann et al., 2015). Can some animals, some pets, some people, be left untreated to slow resistance (Foster & Grundmann, 2006)? Can social distancing measures (such as quarantines) be employed to control the rate at which resistance spreads (Littmann et al., 2015)? Who should bear the social and economic cost of antibiotic resistance?

The case of antibiotic resistance has been likened to the idea of the “tragedy of the commons” (Baquero & Campos, 2003), which unfortunately lacks technical solutions (Hardin, 1968). The tragedy of the commons is a natural consequence of a free market economy in an environment of shared resources, where agents own the right to act out of their own self-interest in all matters. A rational agent having to choose between, so called, ‘cooperating’ or ‘defecting’ in the game theory model known as the Prisoner’s Dilemma will, if assuming it is facing another rational agent, choose to defect. Unless technical innovation changes the rules of the game significantly, the tragedy of the commons seems to “at its root” require a “change of human relationships” (Diekert, 2012), which inevitably makes it a political problem. The potential of, for example, sustainable use efforts in small countries like Sweden is limited, as resistance will, in our globalized world, spread from countries with more lax prescription policies. On a global scale we may wish to draw on work from the field of game theory and ideas surrounding evolutionarily stable cooperation strategies discussed in works like Axelrod and Hamilton (1981).
All this is meant to emphasize that to truly analyze policy interventions for antibiotic research, development, and commercialization we would have to take all these scientific, economic, political, and ethical facets into consideration. My focus on enabling development merely represents a fraction of the problem at hand. Yet, the proposed language for policy intervention contracts certainly could be usefully employed in future studies into any of the other facets above.

2.5 Evidence-based policy

Consider two economic models that purport to be testing the same policy yet claim two different results. Even if neither of them can be ‘right’ in the naturalistic sense, what should policy-makers and researchers make of these two results? A first question we might ask is whether the two models, for our intents and purposes, are equivalent, or differ in some important respects. Then we might ask the same of the policies. The ability to “determine whether two models claiming to deal with the same phenomena can, or cannot, produce the same results”

is, in the words of Axtell et al. (1996), a fundamental process in the realm of social science simulation. They refer to this process as “alignment of simulation models” (or “docking” for short) and argue that it is necessary to support two “hallmarks of cumulative disciplinary research”: critical experiment and subsumption.

Alignment, as proposed by Axtell et al. (1996), is however ad hoc in that to dock two models, one must first gain a deep understanding of each model, reproduce them (if the source code is not available), and then somehow dock. While Axtell et al. (1996) admit that this can be daunting, especially with poorly documented models, they propose that docking should become easier over time as more model authors hold docking in mind when publishing. I argue that we can do better. By designing a domain-specific language (DSL), capable of expressing any policy intervention aimed at supporting antibiotic development and any model of antibiotic development to which said intervention can be applied, we simplify the problem by nipping the combinatorial growth in the bud. A DSL is a “means to describe and generate members of a family of programs in the domain” (Van Deursen & Klint, 2002) and I here use the term ‘language’ in the ontological sense.

Combining all these pieces, a picture emerges. If we could design a domain-specific language capable of expressing policy intervention models for stimulating antibiotic development, then we would know whether two given models align or, if they do not, in which important sense they differ. This would enable generating evidence for why something works “there” and evidence for how to apply the same strategy “here”, even if both “here” and “there” are in silico. In the following sections I expand on the three parts of the sketch above, to show in detail how the problem has been conceptualized.
Evidence that a policy “worked somewhere” is not evidence that the policy will “work here”, so how do we get from “it worked there” to “it will work here”? This is the essential question raised by Cartwright and Hardie (2012) in their book on evidence-based policy, where they expand on the theory of evidence for evidence-based policy outlined by Cartwright and Stegenga (2011). To truly move from ‘there’ to ‘here’, Cartwright and Stegenga (2011) suggest that we need causal models and auxiliary factors, which in Cartwright and Hardie (2012) were rephrased as “causal roles” and “support factors”. As seen in Section 1.2, the former formulation is employed in this thesis.

The three pillars of the monumental “theory of evidence for evidence-based policy” by Cartwright and Stegenga (2011) can be summarized as follows: (1) Policy effectiveness claims are “causal counterfactuals” that, to be evaluated, require a causal model that not only maps the causes and effects, but also captures how they combine. (2) Causes are INUS conditions, meaning that there might be multiple “causal complexes” that together are sufficient for producing some outcome, where the “causal components” may play varying roles in each complex. (3) For the policy to produce the desired effect there

might be a set of auxiliary factors that must be in place along with the policy if the policy is to operate successfully.

Both Cartwright and Stegenga (2011) and Cartwright and Hardie (2012) are focused on modeling causality through equations. Given our interest in using simulation to estimate the impact of policy, we thus assume that what is meant by equations in Cartwright and Stegenga (2011) is not limited to closed-form or analytical expressions. By treating the causal model as a fully parameterized black box we can simplify to:

Model = Environment × Intervention → Effect (2.1)

meaning that a causal model is a mapping from combinations of policies and environments to effects. This should be an intuitive interpretation. When implementing a change, the effect follows not only as a consequence of the change itself, but also from the context in which the change is applied. Mapping terminology back to Cartwright and Stegenga (2011), auxiliary factors are here contained within the environment, and wanted and unwanted effects (if any) are both contained within the resulting effect.

This definition of causal models does however not align with the one already given in Equation 1.1, which is also visualized in Figure 1.1. Remember that Equation 1.1 defined causal models as a unary function from intervention to effect, specifically as: Model = Intervention → Effect. Equation 2.1 on the other hand defines a binary function. The only difference lies in whether the environment is explicitly modeled or not. In Equation 2.1 the environment is explicitly passed as an argument to the causal model function, while in Equation 1.1 the environment is assumed to be captured by the function. As this thesis is concerned with capturing policy interventions rather than environments, we will use the definition of Equation 1.1. It should however be noted that it is trivial to produce the unary function from the binary one by means of partial function application.
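In Haskell, the relationship between the binary model of Equation 2.1 and the unary model of Equation 1.1 is literally partial function application. The types and the toy model body below are hypothetical placeholders, not the language developed later in the thesis:

```haskell
-- Hypothetical stand-ins for the domain types.
newtype Environment  = Environment { baseline :: Double }
newtype Intervention = Grant Double
type Effect = Double

-- Equation 2.1: a binary causal model over environment and intervention.
model2 :: Environment -> Intervention -> Effect
model2 env (Grant amount) = baseline env + amount

-- Equation 1.1: a unary causal model, obtained by partially applying
-- the binary model to one fixed environment.
model1 :: Intervention -> Effect
model1 = model2 (Environment 100)
```

The environment has not disappeared from `model1`; it has merely been captured by the function, exactly as assumed in Equation 1.1.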
Cartwright and Stegenga (2011) emphasize that a causal model is comprised of two parts: a list of causes, and a specification of how these causes combine to produce an effect. They emphasize that the list of causes must both specify how the causes operate independently of any policy application (in our words ‘intervention’), as well as how they are altered by the application. We have black-boxed the list of causes and how they combine into what we above called Model. Yet, by maintaining the split between environments and policies we are implying that a black-boxed environment can exist independently from policy and that the application of policy can alter its behavior.

Comparing the effectiveness of two policies in order to, say, rank them is non-trivial. Predicting that the average outcome will be better with the policy as opposed to without is often easier than predicting the actual average outcome (Cartwright & Hardie, 2012). The weakest form of effectiveness is thus that the “overall difference due just to the policy itself is positive” (Cartwright & Hardie, 2012, p. 30), which happens if the negative effects subtracted

from the positives still yield an overall positive result. This serves as a reminder that when implementing policy, we often affect more than the one intended cause (Cartwright & Hardie, 2012, p. 31).

2.6 Docking and alignment

The purpose of this section is to justify the suggestion to, in the context of social simulation model alignment, determine input, model, and output equality by means of refactoring into a subsuming language. The complexity of equality was discussed extensively in Axtell et al. (1996), who emphasized the need for further research into the matter.

What is the alternative to confronting these difficulties, to look away and rest our theorizing on unverified assumptions of equivalence? (Axtell et al., 1996, p. 134)

Axtell et al. (1996) explored equality as either “distributional” or “relational”, where the former is the stronger form that consequently is more difficult to attain. Distributional equivalence is when the produced results of the two models cannot be distinguished statistically, in the sense that samples appear to be drawn from the same distribution. Relational equivalence is when two models produce results with the same “internal relationship”, in the sense that they are similar in ‘form’. Axtell et al. (1996) exemplify relational equivalence by suggesting that a variable might be a “quadratic function of time” or that something might grow “monotonically with population size”.

Determining equality by comparing how black-boxed functions map inputs to outputs only makes sense if we assume, call it, ‘exhaustiveness’ of output, meaning that there is nothing of importance ‘inside’ the function that is not represented in its output. In other words, we do not care about how the two functions map input to output, only that the resulting mapping is the same. To appreciate the weight of this assumption, consider an asinine example of two countries, where in one a fictitious job is invented for every unemployed individual, while in the other any individual who loses their job is deported. Depending on the output exhaustiveness, we might conclude that these two countries are equivalent in terms of unemployment. Yet they are evidently different in significant ways.

In the context of social simulation, it is questionable whether one can assume exhaustiveness of output. It is for this reason that the thesis is concerned with determining model equality by ‘opening up’ the black boxes and actually comparing the models by refactoring them into a subsuming language. This circumvents the need for exhaustiveness and only requires resorting to comparing input and output relations if we find that the models cannot be refactored into the same model without altering semantics.
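The asinine example can be rendered directly in code. The two ‘countries’ below are hypothetical functions of my own invention: extensionally identical as black boxes, yet their definitions reveal a difference that no amount of output comparison could surface.

```haskell
-- Both policies drive reported unemployment to zero, so as black boxes
-- mapping population size to unemployment count they are identical.

unemploymentA :: Int -> Int
unemploymentA _ = 0  -- a fictitious job is invented for every unemployed individual

unemploymentB :: Int -> Int
unemploymentB _ = 0  -- any individual who loses their job is deported

-- Output comparison over any sample cannot distinguish the two.
indistinguishable :: [Int] -> Bool
indistinguishable = all (\n -> unemploymentA n == unemploymentB n)
```

Only by ‘opening up’ the black boxes, i.e. reading the definitions (or here, the comments), does the significant difference become visible.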

Neither distributional nor relational equality, in the sense of Axtell et al. (1996), make sense in the context of assessing model equality, as opposed to model output equality. We will thus presume that equality can be either (a) syntactic (=), meaning literally the same, (b) semantic (≈), meaning for all intents and purposes the same, or (c) isomorphic (≅), meaning structure-preservingly similar, or informally: “translatable”. Note that syntactic equality implies semantic equality, which in turn implies isomorphic equality, but that the reverse is not true. Also note that semantic equality is a stronger claim than isomorphic equality. We are not merely demanding structural preservation but that the two structures have the same ‘meaning’.

To syntactically compare two models that are either expressed in different languages or expressed differently in the same language, we must first refactor either one into the language of the other, or both into some subsuming language. Assuming that the two languages are on the same grammar level in the Chomsky hierarchy (Chomsky, 1956), say recursively enumerable (meaning Turing machines), then two semantically equivalent structures (possibly expressed in different languages) can always be refactored into two syntactically equivalent structures without semantic loss. Given two semantically equivalent models, we can thus conclude the existence of a semantics-preserving transformation function that maps them into two models that are syntactically equivalent to each other. Note that anything weaker than recursively enumerable can of course also be subsumed by a recursively enumerable language.

Depending on what we want to conclude with a given alignment exercise, this ternary equivalence formalism is quite useful.
If for example we seek to show that two models, f1 : A1 → B1 and f2 : A2 → B2, are two different causal models that produce the same results for some given input policy interventions (i1 and i2), then we must show that:

f1 ≉ f2 ∧ i1 ≈ i2 ∧ f1(i1) ≈ f2(i2) (2.2)

meaning that the models are not semantically equivalent, but that the inputs are, and that both models yield semantically equivalent output when given these inputs. If however we seek to show that the two causal models are essentially equivalent and that we are looking at two different policy interventions that have the same essential effect, then we must show that:

f1 ≈ f2 ∧ i1 ≉ i2 ∧ f1(i1) ≈ f2(i2) (2.3)

As a last example, if we are attempting to replicate some published simulation experiment and find that:

∃i1∃i2. i1 ≈ i2 ∧ f1(i1) ≉ f2(i2) (2.4)

then we must conclude that f1 ≉ f2, which in turn must mean that either (1) they misrepresented their model, (2) we misinterpreted their model, or (3) one or both of us has a bug in our model code.
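As a toy illustration in Haskell (with hypothetical models of my own invention, not real simulation code), the replication check of Equation 2.4 can be phrased as a search for semantically equivalent inputs that yield non-equivalent outputs:

```haskell
type Intervention = Double
type Effect = Double

-- Two hypothetical causal models: our reimplementation and a 'published'
-- model that, unbeknownst to us, differs by an off-by-one bug.
f1, f2 :: Intervention -> Effect
f1 i = 2 * i
f2 i = 2 * i + 1

-- Semantic equivalence of effects, here crudely approximated as
-- numeric closeness.
semEq :: Effect -> Effect -> Bool
semEq x y = abs (x - y) < 1e-9

-- Equation 2.4, checked over a finite sample of (already semantically
-- equivalent) inputs: if any input yields non-equivalent outputs, we
-- must conclude that the models themselves are not equivalent.
modelsRefuted :: [Intervention] -> Bool
modelsRefuted = any (\i -> not (semEq (f1 i) (f2 i)))
```

Of course, over a Turing-complete language no such finite check is conclusive in the positive direction; it can only refute equivalence, which is precisely the role Equation 2.4 plays above.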

Thus far we have seen the utility of comparing model syntax instead of model output. However, I have yet to account for why this ought to be done by means of refactoring models into a subsuming language instead of comparing the models directly. The argument is simple. Without first converting to the subsuming language, semantic model equality must be determined by selecting seemingly arbitrary ‘chunks’ of one model and comparing them to ‘chunks’ of the other model until all pairs have been compared. This is tedious at best, and complicated by the fact that models often are underreported in publications. By first translating both models into the subsuming language and (by means of comparing output, e.g. in the sense of Axtell et al. (1996)) ensuring that we have accurately translated them, we can then resort to simple syntactic comparison to isolate the differences between the two models. In theory, it might even be possible to algorithmically reduce statements in the subsuming language to some minimal standard form, much like relational database normalization or Chomsky normal form for context-free grammars.

Under current standards of reporting a simulation model it will often not be possible to resolve all questions for an alignment exercise. Thus it will be nec- essary either to contact the author of the target model, to have access to the source code, or to have access to a documentation of the target model more complete than is generally provided in accounts published in contemporary journals. (Axtell et al., 1996, p. 133)

A lot has happened in the almost 25 years since Axtell et al. (1996) proclaimed that social science simulation models are, for purposes of alignment, underreported. Online-only journals, such as PLOS¹ and Frontiers², are becoming mainstream, and specialized social simulation journals like JASSS (Journal of Artificial Societies and Social Simulation) explicitly ask reviewers to assess reproducibility and strongly encourage authors to consider it (“JASSS: How to submit a paper”, 2020). JASSS even recommends that authors upload their model code to the CoMSES Net Computational Model Library, which gives the model itself a digital object identifier (DOI) and makes it a citable resource. Yet, the publication outlet ecosystem for social science simulation is of course very diverse, and not all simulation studies are published in simulation-oriented outlets. Simulation studies are thus still published in a terse fashion that complicates replication.

Clearly, some agent-based models readily lend themselves to equational specification, see for example the free market wealth distribution model by Chakraborti (2002). Models of this kind are significantly easier to compare for semantic equality due to their high information-to-noise ratio. If for example you have two closed-form equational models you could rewrite the equations until they become as similar as possible, making their differences

¹ https://plos.org/
² https://www.frontiersin.org/

(if any) apparent. However, rewriting Turing machines, ad hoc, to make their differences apparent is not trivial. General purpose programming languages tend to be overly concerned with implementation details such as variable incrementation and stopping conditions in loops, none of which are relevant to model specification. Agent-based models, for example, are particularly problematic since they are traditionally implemented and reported in object-oriented languages.

The ODD (Overview, Design concepts and Details) protocol (Grimm et al., 2006) was proposed as a way of standardizing model reporting in scientific publications. The original authors, along with a large number of co-authors prominent in the field of agent-based modeling, recently published an updated commentary (Grimm et al., 2020) on the protocol, marking the 10th anniversary of the first update (Grimm et al., 2010). In the latest update, the authors argue that “automated links between written description and software are [...] an ideal”. One avenue for approaching this, they suggest, is “literate programming”, the notion originally introduced by Knuth (1984) where “programs are written as an uninterrupted exposition of logic in an ordinary human language, much like the text of an essay, in which macros are included to hide abstractions and traditional source code”. In short, sufficiently and strategically abstract languages can be read by humans and machines alike. This vision resonates well with the contract language of policy interventions for antibiotic development presented in this thesis.

2.7 Domain-specific languages

A domain-specific language (DSL) is “a means to describe and generate members of a family of programs in the domain” (Van Deursen & Klint, 2002). I use the term “language” in the ontological sense. By designing a language for the domain we allow users of the domain to subjectively but unambiguously express “what is”. The key reason for using a DSL instead of a general-purpose programming language is that the DSL allows us to reason “directly within the domain semantics, rather than within the semantics of the programming language” (Hudak, 1996), even if the DSL itself is expressed in that language. It has been argued that the ‘ideal’ abstraction for a given application is a programming language “designed precisely for that application” (Hudak, 1996), meaning a DSL. Tying back to Section 2.6, this supports the view that using a DSL as a subsuming language is superior even if any Turing-complete language in theory could serve the same role.

A DSL can be thought of as a formal grammar plus semantics. In the context of policy interventions for antibiotic development, the grammar without the semantics thus describes how to generate policy interventions without being concerned with the meaning of these instances. The grammar thus describes the ‘form’ or syntax by defining atomic parts and how they compose. To ascribe meaning to utterances in such a language we must define compositional semantics, meaning that we must define the meaning of atomic parts, and what it means to compose them.

Formal grammars can be classified, based on the complexity of their production rules, in the Chomsky hierarchy, originally introduced by Chomsky (1956). Simplistically, a grammar in the Chomsky hierarchy consists of non-terminals, terminals, production rules, and a start symbol. Starting at the start symbol we can (often recursively) choose which non-terminals to expand until we eventually reach terminal symbols. The production rules thus define the grammar of a language by defining how to generate valid sentences in it.

In this thesis I use the language Haskell to reason about and to produce DSL fragments. Since any type (in a Turing-complete language) has a corresponding (meaning: can be converted into a) recursively enumerable grammar (meaning a type-0 Chomsky grammar), types in Haskell are from this view grammars. In this thesis I sometimes employ the term “type” to emphasize the practical utility in terms of computability, but wish to emphasize that when we define a type for L we do not merely define a practically useful type, but also a grammar of L. By endowing the grammar of L with semantics, we have formally defined a language. On the ontological plane, this thesis is thus concerned with designing a language of policy interventions and causal models for and of antibiotic development.

Terms that could have been used in lieu of domain-specific languages and formal grammars include ontologies, conceptual models, meta models, taxonomies, frameworks, and libraries.
The notion of domain-specific languages, however, is well-defined in the context of programming languages, and, paired with the theoretical notion of formal grammars, it gives us a direct link between practice (programming) and theory (linguistics).
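To make the correspondence between types, grammars, and semantics concrete, consider a minimal sketch. The toy language below is a hypothetical illustration of my own, not the contract language developed later in this thesis: an algebraic data type in Haskell acts as a grammar whose constructors are the production rules, and a compositional evaluator supplies the semantics.

```haskell
-- A minimal illustrative DSL (hypothetical; not the contract language
-- developed in this thesis). The type Expr is, from the view above, a
-- grammar: its constructors are production rules for generating valid
-- sentences in the language.
data Expr
  = Lit Int        -- terminal: an integer literal
  | Add Expr Expr  -- non-terminal: composition by addition
  | Mul Expr Expr  -- non-terminal: composition by multiplication

-- Compositional semantics: the meaning of a compound sentence is
-- defined in terms of the meanings of its parts.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

main :: IO ()
main = print (eval (Add (Lit 1) (Mul (Lit 2) (Lit 3))))  -- prints 7
```

The grammar (the type) can be discussed, extended, and type-checked independently of the semantics (the evaluator), which is precisely the separation of form and meaning described above.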

3. Theory

In this chapter we elucidate the theoretical framework used to guide the search for a ‘satisficing’ design solution, and clarify why the framework has been chosen. First, we explore the modeling paradigm known as agent-based modeling (ABM), along with its surrounding concepts and raison d’être. We then introduce the REA (resources, events, and agents) ontology that originated from accounting but has been extended with various concepts and patterns to allow the modeling of complex business processes. We then discuss the notion of compositional financial contracts and their subsequent extensions and generalizations beyond the financial domain. In essence, agent-based modeling is used as an over-arching simulation paradigm, meaning to frame the endeavor. REA is used to capture the notion of resource transfers and transformations by means of economic events between agents. Compositional contracts, when merged with concepts from REA, are used to formally capture the core of policy interventions for antibiotic development as offers to commit to future actions under certain conditions.

3.1 Agent-based modeling

Modeling and simulation have been used to provide decision support, with varying levels of success, in domains ranging from economics to biology. In pharmacology, for example, an in silico human model was recently reported to predict pro-arrhythmic cardiotoxicity with higher accuracy than in vivo animal models (Passini et al., 2017). While modeling and simulation still remain at the periphery of the social sciences, agent-based modeling holds great promise for the field (Earnest & Frydenlund, 2017) and has even been called a revolution (Bankes, 2002). In the social sciences, quantitative methods, such as statistics, have a tendency to over-generalize while qualitative methods, such as ethnography, under-generalize (Earnest & Frydenlund, 2017). Simulation modeling appears to strike a good balance between the two.

Contemporary approaches to simulation in the social sciences can broadly be divided into two categories. On one hand we have equation-based models, and on the other we have agent-, object-, and event-based models (Gilbert & Troitzsch, 2005, p. 7). The latter can simulate the former as they are conventionally implemented in Turing-complete languages as opposed to mathematical equations. Agent-based modeling has experienced wider acceptance

than macroeconomic approaches like system dynamics (Earnest & Frydenlund, 2017), and it is particularly well suited for the social sciences (Earnest & Frydenlund, 2017), partly due to its emphasis on explanation rather than prediction.

Equation-based models like quantitative macroeconomic models are not without problems. Firstly, they fail to account for agent heterogeneity. Models that explicitly consider heterogeneity may yield qualitatively different policy recommendations than models that explore the dynamics of “average” agent characteristics (Dawid & Neugart, 2011). Consider for example how the expected value of a two-peak distribution, a “fat-tailed” distribution suffering leptokurtosis, or a “power law”, is misguiding if you can only afford a single trial. In fact, the high risk of failure intrinsically makes drug development profit a two-tailed probability distribution, leading some authors (Blau et al., 2004) to argue that measures based on expected value, such as ENPV, are “truly meaningless”. Nevertheless, ENPV is, as discussed in Section 2.3, common praxis in biotech company valuation (Stewart et al., 2001).

Secondly, if model input values are uncertain (due, for example, to the acquisition of empirical data being difficult, as is often the case in the social sciences) then forecasts may be vastly inaccurate. The old computer science maxim ‘garbage in, garbage out’ is illuminating. A telling example is the realization by Lorenz (1963) that what may seem like the same initial conditions may yield completely different trajectories in weather prediction.

It has been stressed, possibly for reasons like the above, that simulation of social systems should only ever attempt to predict broad trends over short terms, as even such conservative efforts are “notoriously prone to embarrassing error” (Wooldridge, 2009, p. 8).
While modeling and simulation have classically been used for prediction, many (Dawid & Neugart, 2011; Earnest & Frydenlund, 2017; Lempert, 2002; Moss, 2002) thus argue that social science simulation¹ should focus on generating explanations rather than predictions, through for example multi-scenario simulation. To truly appreciate the spectacular failure of macro-scale prediction, consider the following quote.

“Plate tectonics surely explains earthquakes, but does not permit us to predict the time and place of their occurrence. Electrostatics explains lightning, but we cannot predict when or where the next bolt will strike. In all but certain (regrettably consequential) quarters, evolution is accepted as explaining speciation, but we cannot even predict next year’s flu strain.” (Epstein, 2008, para. 1-10)

Alternatively, consider how Moss (2002) asked members of the International Institute of Forecasters to refute the statement below. While he did receive responses, no one was able to point to a correct forecast.

¹Some specifically discuss policy intervention effects (Dawid & Neugart, 2011; Lempert, 2002; Moss, 2002) while others discuss ABM prediction in general (Earnest & Frydenlund, 2017).

“Since the invention of econometrics by Jan Tinbergen in the 1930s, there has not been a single correct econometric forecast of an extreme event such as a turning point in a trade cycle or a stock market crash. Every such forecast—without exception—has yielded either a type I or a type II error.” (Moss, 2002, p. 7267)

Predicting the future based on the idea of causal determinism is an old idea, proposed by for example Pierre Simon Laplace already in 1814 (Mitchell, 2009, p. 19). It has unfortunately been shown that even purely deterministic systems such as the logistic map suffer from sensitivity to initial conditions and therefore exhibit trajectories resembling random noise (May et al., 1976). In other words, arbitrarily close initial conditions of the logistic map can, after a sufficiently long time, diverge widely (May et al., 1976). Perfect Laplace prediction may thus be impossible, even in principle, as we cannot empirically measure initial conditions with infinite accuracy (Mitchell, 2009, p. 33).

In 1948, New Zealand economist William Phillips built one of the earliest economic simulation machines, MONIAC (Monetary National Income Analogue Computer). It used fluid logic to perform its computation. The MONIAC was larger than a cubic meter and consisted of transparent tanks and pipes. The flow of water between tanks represented the flow of money between various parts of an economy. While originally introduced as a teaching aid it was successfully used as an economic simulator (Bissell, 2007). MONIAC is a prime example of a macroeconomic simulation, or more broadly, of an equation-based model. All that MONIAC did could in theory be done with equations, albeit perhaps not as captivatingly.

On the other end of the spectrum we have agent-based modeling, which could be said to have started when the American economist Thomas Schelling realized that population segregation may be viewed as a macro phenomenon emerging from micro interactions (Schelling, 1971). He represented households as coins on a checkerboard. All households were given a “tolerance level” which was used to determine how many immediate neighbors of a different “kind” the household could tolerate. When the number of neighbors exceeded the tolerance level, the household chose to move to a new random location. While the model is based on simple (yet abstractly representative) rules, it manages to reproduce segregated societies from these. Stark segregation would arise even if individuals would accept being the minority in their neighborhood. Schelling’s findings were summarized in his seminal book “Micromotives and Macrobehavior” (Schelling, 1978).

Axelrod (1986) “led the forefront of the social science ABM movement when he simulated the emergence of behavioral norms” and grounded agent-based modeling in the fields of sociology and political science (Earnest & Frydenlund, 2017). Later, Epstein and Axtell (1996) developed a seminal model, known as “Sugarscape”, that managed to make abstract representations of complex social phenomena such as trade, warfare, migration, group formation, cultural transmission, etc. emerge from agents following simple rules on a two-dimensional plane. Epstein and Axtell (1996) referred to their efforts as “growing artificial societies”. It was quickly realized that agent-based models can be used to explore policy intervention (Doran et al., 2001).

Viewing complex social phenomena as emergent macro-consequences of social micro-interactions has proven useful in a wide range of domains. Contemporary works have, to pick a few arbitrary examples, explored anything between relationship formation in business networks (Prenkert & Følgesvold, 2014), crime prevention strategies (Malleson et al., 2010), transmission of resistant bacteria in hospitals (Almagor et al., 2018) and group formation in refugee camps (Collins & Frydenlund, 2016).

The term individual-based modeling is essentially synonymous with agent-based modeling. Beyond simulation, the term multi-agent systems (MAS) is used to refer to computer systems composed of agents that can independently figure out what to do to achieve their objectives instead of requiring constant input from humans (Wooldridge, 2009, p. 3). The term ‘multi-agent systems’ is thus mostly used in the field of artificial intelligence (Niazi & Hussain, 2011) but I here partly draw on the literature on multi-agent systems due to its similarity with agent-based modeling.

An agent in a multi-agent system is defined by Wooldridge (2009, p. 21) as “a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its delegated objectives”. Simplistically, we can visualize this as in Figure 3.1.

Figure 3.1. An agent and its environment: the agent receives sensor input from the environment and produces action output upon it.

In agent-based modeling, both environments and agents are conventionally modeled in silico, meaning in software, while the environment in a multi-agent system might be cyberphysical in the sense that the environment is some actual physical space. Agents must, like humans, not only reason, but also communicate, cooperate, negotiate, and reach agreements (Wooldridge, 2009). Encounters in multi-agent systems are economic encounters (Wooldridge, 2009, p. 9), that is, encounters between rational, self-interested entities. Consequently, the interdisciplinary field takes inspiration from areas such as economics, philosophy, logic, ecology, and the social sciences (Wooldridge, 2009, p. 5). At its core, an agent in a multi-agent system is a machine in a software or physical environment, capable of autonomously, pro-actively, reactively, and socially working towards its sought states (achievement tasks) while simultaneously avoiding unsought states and maintaining states sought to preserve (maintenance tasks) (Wooldridge, 2009, p. 41). The agent “senses” the environment, reasons, and then pursues “action” upon it. Even though we might think of communication as a phenomenon occurring between two individuals directly, agent-based modelers have argued that it is useful to treat the environment as a channel that stores (i.e. buffers) messages (Gilbert, 2008, p. 26). In other words, while one agent transmits messages into the environment, another consumes them, and thereby communication arises.

Agent-based modeling is not concerned with the macro-level definition of a system, because “spatio-temporal averaging gives us little understanding of actual agent behavior” (Epstein & Axtell, 1996) or, as previously emphasized: the “average” is not always representative. By instead building artificial economies from the bottom up, one can study “the relative performance of distinct trade rules in producing agent welfare” (Epstein & Axtell, 1996) while maintaining heterogeneity in the population.

In the field of complexity economics, economies are thought of as complex adaptive systems (Gallegati & Kirman, 2012), and complex adaptive systems can be operationalized as agent-based models (Niazi, 2017). While some economists have argued that complexity science does not fundamentally affect policy evaluation (Durlauf, 2012), others argue that, with complexity, the age of certainty is over, and since (for example) emergent facts are transient phenomena, policy interventions must be analyzed and evaluated with respect to contextual dependencies (Gallegati & Kirman, 2012). A growing number of economists are pursuing economic analysis using approaches of interacting heterogeneous agents.
While modeling an agent-based economy may still be considered a complicated endeavor, the benefits gained entail, for example, identifying how different idiosyncratic economic conditions yield different policy effects, and how the effects do not merely vary across populations but may also need to be analyzed in terms of micro, meso and macro effects (Gallegati & Kirman, 2012). I do not draw on works from complexity economics since the main concern is finding a language to express the foundations of policy interventions for antibiotic development in terms of contracts, and thus merely to ensure that the formalization is useful within the broader context of agent-based modeling.

The development of antibiotics can be viewed as an emergent consequence of pharmaceutical firms making strategic micro-decisions surrounding whether or not to invest in a stream of prospective and current antibiotic project opportunities. The key point of this whole section is thus that agent-based modeling can be used to simulate the development of antibiotics, and that agent-based models subsume all other models such as equation-based models. By ensuring that the proposed contract language is useful in the context of agent-based models, we ensure by extension that it is useful for all subsumed modes of modeling.
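This micro-to-macro framing can be caricatured in a few lines of Haskell. The sketch below is purely illustrative: the firms, numbers, and decision rule are hypothetical inventions of mine, not taken from any model in this thesis. It shows how a positive ENPV at the “average” level need not imply that heterogeneous firms individually choose to invest.

```haskell
-- Toy micro-to-macro sketch (hypothetical rules and numbers; not a
-- model from this thesis). Each firm decides individually whether to
-- invest in an antibiotic project; the macro outcome (number of
-- projects funded) emerges from heterogeneous micro-decisions.

data Firm = Firm { capital :: Double, riskTolerance :: Double }

-- A project with a fixed cost, success probability, and payoff.
cost, pSuccess, payoff :: Double
cost = 50; pSuccess = 0.1; payoff = 1000

-- Expected net present value (simplistically, without discounting).
enpv :: Double
enpv = pSuccess * payoff - cost

-- Micro-decision: invest only if the firm can afford the cost and its
-- risk tolerance exceeds the probability of total loss.
invests :: Firm -> Bool
invests f = capital f >= cost && riskTolerance f >= 1 - pSuccess

main :: IO ()
main = do
  let firms = [ Firm c t | c <- [10, 60, 200], t <- [0.5, 0.95] ]
  putStrLn ("ENPV: " ++ show enpv)                              -- 50.0
  putStrLn ("Firms investing: "
            ++ show (length (filter invests firms)))            -- 2
```

Although the ENPV is positive, only two of the six heterogeneous firms invest, echoing the point above that models of “average” agent characteristics may yield qualitatively different policy conclusions.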

3.2 Resources, events, and agents

At the heart of any business information system is the modeling of inputs and outputs of “various economic resources into a chain of value-adding processes or activities” (Geerts & McCarthy, 1997) in the sense of Porter and Millar (1985). By for example providing cash in exchange for raw materials and labor, and then converting these into goods, that in turn can be exchanged for cash again, a company has entered a cycle that can turn an initial investment into profit (Geerts & McCarthy, 1997). The primary information model used to track these activities in modern accounting systems is the all-permeating idea of double-entry bookkeeping (Geerts & McCarthy, 1997).

The roots of double-entry bookkeeping trace back to “Summa de Arithmetica, Geometria, Proportioni et Proportionalita” (“Everything About Arithmetic, Geometry and Proportions”), published in 1494 and written by the Franciscan friar Luca Pacioli (M. Smith, 2018). While often attributed as the “father of accounting”, Pacioli did not invent double-entry bookkeeping but rather described the method used by merchants of Venice during the Italian Renaissance (M. Smith, 2018). Geerts and McCarthy (1997) describe double-entry bookkeeping as one of the “stellar achievements of the Renaissance” while simultaneously arguing that it is a very narrow view of a rich data environment, specifically designed to support the generation of monetary net worth and net income reports.

While double-entry bookkeeping has seen numerous enhancements and augmentations over the five centuries since, the essential structure remains the same (Geerts & McCarthy, 1997). In the words of Geerts and McCarthy (1997), “forward-thinking accountants” have realized that these ancient information structures are now “dysfunctional” in modern commerce.
While Geerts and McCarthy (1997) noted that there are signs of a shift towards information architectures with a “more semantic orientation”, they noted that the “primacy” of double-entry bookkeeping still persists.

According to Geerts and McCarthy (1997), the first two papers on semantic modeling in the financial systems domain are McCarthy (1979) and McCarthy (1980). These two works were based on his doctoral dissertation work completed in 1977 (McCarthy, 1999), in which he was inspired by the then groundbreaking paper by Chen (1976) on entity relationship (ER) modeling. All this work fed into the first proposal of the REA model presented in McCarthy (1982). REA is not based on double-entry bookkeeping and was according to Geerts and McCarthy (1997) derived using “the same abstraction methods that gave rise to the object orientation paradigm”. REA’s historical roots in first ER modeling and then object orientation have arguably, as shown further ahead in this section, had unfortunate consequences for its implementability.

The word REA is an acronym for the three entities known as economic resources, economic events, and economic agents (or simply resources, events, and agents for short). The model in its original form is reproduced in Figure 3.2.

Figure 3.2. The REA model as originally proposed by McCarthy (1982): resources are related to events via stockflow relationships, events to agents via control relationships, events to events via duality, and units to units via responsibility.

A major objective of REA was to, in the words of Geerts and McCarthy (2011), introduce an “integrated information structure” that could help organizations employ a single “cross functional enterprise wide information structure” rather than having to re-invent the wheel for every facet of an organization, whether it be shipping or billing or anything in between. Beyond these three entities, the original model also contained an entity called economic units.

Economic resources are essentially equivalent to the usual notion of ‘assets’, or as objects with utility that are “scarce” and are “under the control of an enterprise” (McCarthy, 1982). Importantly however, McCarthy (1982) stresses that in the REA model, “accounts receivable” for example is not automatically a resource, since it can be computed from more primitive data. In a philosophical sense, it is a non-fundamental view of data that can be derived by composing fundamental data. Economic events are changes in economic resources resulting from production, exchange, consumption, or distribution (McCarthy, 1982). Economic agents are either people or agencies participating in economic events or responsible for subordinate agents (McCarthy, 1982). Economic units are sets of economic agents inside an organization, meaning subsets of all inside agents (McCarthy, 1982).

Relationships between resources and events are called “stockflow”, and essentially denote the increment or decrement of some stock (McCarthy, 1982). Economic units can according to McCarthy (1982) be in a relationship with other economic units, and such a relationship denotes “responsibility” of one unit over another.
“Control” is for McCarthy (1982) a ternary relationship between events, units, and agents, where the event is an increment or decrement event (meaning that it increments or decrements some resource for the organization), the economic unit denotes the “inside party” (meaning ‘our’ party engaging in the exchange), and the agent denotes the “outside party” (meaning ‘their’ party engaging in the exchange). In subsequent works, such as Geerts and McCarthy (2000a), the notion of economic units and the responsibility relationship are for the sake of simplicity both omitted. The control relationship is thus seen as a binary rather than a ternary relationship. Geerts and McCarthy (2000a) argue that this omission does not come at a “loss of generality”, and as such we employ the same simplification here.

Lastly, a relationship between events is called “duality”, and this is a core idea of REA. REA duality simply means that when homo economicus ‘gives’, he must also ‘take’, such that the overall value of the exchange leaves him better off, meaning that every increment event must be balanced by at least one decrement event, and vice versa. Duality imbalances imply that some agent holds a claim over some other agent, and here we begin to see how notions such as ‘accounts receivable’ can be derived.

Let us now clarify at what level of abstraction REA is expressed. While originally referred to as a “framework” by McCarthy (1982), it has frequently been described as a “design pattern” in later works such as Geerts and McCarthy (1997) and Hruby (2006). We employ the latter terminology since a framework, in contrast, is defined by Gamma et al. (1995, p. 26) as “a set of cooperating classes that make up a reusable design for a specific class of software”. Note that the word ‘classes’ may, to generalize, be exchanged for the word ‘types’, since the word ‘framework’ might as well be used to describe a set of types and functions in for example a functional programming language. REA, on the other hand, is a design pattern in the sense that it is a meta model that must be ‘implemented’ or ‘realized’ as, what Hruby (2006) calls, a REA “application model” for any given domain. More exhaustively, Hruby (2006, p. 356) suggests that REA must be understood from four levels that are also depicted in Figure 3.3. Note that the economic resource entity is, in the figure, merely chosen as an example.
At the top of Figure 3.3 we find the metamodel level, meaning the level at which we discuss the very notions of economic resources, events, etc. Below the metamodel we find any given REA “application model” containing concrete implementations of the meta-level concepts, such as cash as an economic resource and cash disbursements as economic events. Note that at this level we might find types that represent the general concepts of economic resources, events, etc. for this particular domain, and not just types of specific economic resources such as cash. To exemplify, the cash class in Figure 3.3 could for example implement a resource interface, while the cash disbursement class might implement a decrement event interface. Below the application model we find the runtime model, which in object-oriented terms would mean instantiated objects, or in relational database terms would mean rows in a database. At this level we are representing actual holdings in cash and cash disbursements that have actually occurred. At the final level we find the “real world”, meaning the actual physical or institutional objects existing that we are trying to represent at higher levels. When discussing REA in this thesis we are, unless otherwise specified, discussing REA at the metamodel level.

Understanding how to use REA, meaning how to use the metamodel to build an application model, is perhaps best explained by means of an example.

Figure 3.3. Abstraction levels of REA: the REA metamodel (e.g. Economic Resource related to Decrement Event via outflow), the REA application model (e.g. the classes Cash and Cash Disbursement), the runtime model (instances of these classes), and the real world (actual cash and actual transactions) that the levels above represent.

A prototypical example taken from McCarthy (1982) is illustrated in Figure 3.4, where inventory and cash are economic resources, purchase and cash disbursement are economic events, and buyer, vendor, and cashier are all economic agents. This example is prototypical since variations of it can be found in virtually any publication on REA. I have simplified the example, as already discussed, by omitting the notion of economic units. In the example, inventory is purchased from some vendor, by a buyer, while cash is disbursed by the cashier to the vendor. In Figure 3.4 the buyer and cashier entities serve as inside parties, while the vendor entity serves as the outside party. On the whole, this fictitious example is a model of an economic exchange that we might for example call ‘inventory purchases’.

Geerts and McCarthy (1997) argue that most complex accounting software does not exhibit reusability, interoperability, and portability as a consequence of being based on a non-semantic model of business, i.e. that of double-entry bookkeeping. Geerts and McCarthy (2002) proclaim that many scholars consider REA “a more solid foundation for enterprise information systems of the future”. Yet, assimilation of REA into the mainstream has in the words of McCarthy (1999) “not been without problems and impediments”. McCarthy (1999) emphasizes that while semantic modeling is likely to be met with resistance in any domain, such resistance is especially prominent in a very “traditional discipline” like accounting.
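The prototypical inventory-purchase example can be rendered as an REA-flavored application model in Haskell. The sketch below is an illustration of my own (hypothetical types and values, not the formalization proposed in this thesis): it encodes resources, events with inside and outside parties (the simplified, binary control relationship), and duality as a pairing of increment and decrement events.

```haskell
-- Illustrative REA-style sketch (hypothetical; not the contract
-- language developed in this thesis). Application-model types for the
-- prototypical inventory-purchase example.

data Agent     = Buyer | Cashier | Vendor     deriving (Eq, Show)
data Resource  = Inventory Int | Cash Double  deriving (Eq, Show)
data Direction = Increment | Decrement        deriving (Eq, Show)

-- An economic event: a stockflow of some resource, with an inside
-- and an outside party (the simplified control relationship).
data Event = Event
  { direction    :: Direction
  , stockflow    :: Resource
  , insideParty  :: Agent
  , outsideParty :: Agent
  } deriving (Eq, Show)

-- Duality pairs increment events with decrement events.
data Exchange = Exchange { increments :: [Event], decrements :: [Event] }

-- A runtime-model instance: inventory in, cash out.
purchase :: Exchange
purchase = Exchange
  [Event Increment (Inventory 10) Buyer   Vendor]
  [Event Decrement (Cash 100)     Cashier Vendor]

-- Note that duality here ends up as a check over values rather than
-- a compile-time guarantee, a point taken up in the next subsection.
dualitySatisfied :: Exchange -> Bool
dualitySatisfied e = not (null (increments e)) && not (null (decrements e))

main :: IO ()
main = print (dualitySatisfied purchase)  -- prints True
```

An exchange with, say, an empty decrement list would represent a duality imbalance, from which a claim such as ‘accounts payable’ could be derived.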

3.2.1 Axioms

Geerts and McCarthy (2000b) explored REA as a domain ontology, and proposed that in order to be able to reason and derive certain types of information (such as accounts receivable), certain conditions must hold. In this light, Geerts and McCarthy (2000b) proposed a set of three axioms for REA. In the proposed contract language for antibiotic development, we draw on REA terminology, yet violate these axioms. This subsection serves to justify why. The axioms reported by Geerts and McCarthy (2000b) are as follows:

1. “At least one inflow event and one outflow event exist for each economic resource; conversely inflow and outflow events must affect identifiable resources.”
2. “All events effecting an outflow must be eventually paired in duality relationships with events effecting an inflow and vice-versa.”
3. “Each exchange needs an instance of both the inside and outside subsets.”

If the second axiom, for example, would not hold, then we cannot reliably use momentary imbalances in exchange dualities to compute either accounts receivable or accounts payable. Consider for example the payment in Figure 3.4, from the cashier to the vendor. If the payment is not recorded as a cash disbursement (decrement event) dual to the purchase (increment event) in question, but rather recorded as for example some entirely isolated decrement event, then there is no satisfied duality relationship. Hence we cannot deduce what purchase the payment is for. The third axiom is supposed to ensure that exchanges involve both a representative of the firm, and a representative of some other firm.

Figure 3.4. Prototypical example of a REA instance, meaning a part of a REA application model: inventory flows in through a purchase, to which the buyer is a party, while cash flows out through a cash disbursement, to which the cashier is a party; the vendor is the outside party, and the disbursement pays for the purchase.

Whether these axioms must hold at runtime or at compile-time is not explicitly addressed by Geerts and McCarthy (2000b). Expressing constraints at compile-time (for example via types) is evidently superior to expressing them at runtime (for example via if-else checks), and is arguably part of the point of having a type system in the first place. Unfortunately it seems that neither REA as originally proposed by McCarthy (1982) nor as revamped and extensively detailed by Hruby (2006) is able to support compile-time checking of these constraints. To understand why, we must first explore the notions of the so-called trading partner view and the independent view.

The terminology of giving/taking, decrements/increments, outflows/inflows, outside/inside, and so forth, suggests that REA models and instances must be expressed from the perspective of a singular business. When I ‘give’ some resource to you, you perceive ‘taking’ the same resource from me. Figure 3.5 is a reproduction of a figure from Hruby (2006, p. 353) where it indeed is evident that if we model a two-party exchange process then each party’s model instance will be a ‘mirror image’ of the other party’s. Sales for me become purchases for you while cash receipts become cash disbursements, and so forth. Hruby (2006, p. 353) calls this perspective the “trading partner view”. In contrast, when expressing all events from an objective perspective, or what Hruby (2006, p. 353) calls the “independent view”, increment and decrement events are replaced by “transfers”, while inflows and outflows are replaced by “stockflows” (Hruby, 2006, p. 353).

Figure 3.5. Trading partner views of a prototypical REA exchange: the buyer’s system records a purchase (inflow of inventory) dual to a cash disbursement (outflow of cash), while the seller’s system records the same two events as a sale (outflow of inventory) dual to a cash receipt (inflow of cash).

Note that in Figure 3.4 there are two inside parties (buyer and cashier) but only one outside party. If the buyer and the cashier are the same agent, then there would only be a single inside party. In Figure 3.5 there are (from each perspective) only two agents, one inside party and one outside party. Also note that in Figure 3.5, the diamonds have been omitted from the entity relationship diagram (with the relationship names written next to the arrows) since all relationships in this figure are binary.

Having established the independent and trading partner views, we now return to the discussion of the axioms. Axiom 2 is fundamentally not provable using types that do not capture the notion of time, since the word “eventually” suggests dependence on it. The argument for the provability of Axiom 3, however, depends on whether we employ the trading partner or independent view. Suppose that we are using the independent view. If we take two arbitrary agents from some full set of agents, then how can we statically determine whether these two agents belong to two different organizations or not? By definition this is a value rather than a type question, since the two agents must in theory be of the same parent type and may in theory also be of the same sub-type, but be instantiated with differing values. Note that I use parent/sub-type here in a broad sense where it does not matter whether we concretely employ class inheritance, interface implementation, or ad-hoc polymorphism. Without resorting to sophisticated type systems like those based on dependent types, asserting Axiom 3 in the independent view can thus only be done at runtime.

Suppose then instead that we are using the trading partner view, meaning that instead of stockflows and transfers we have inflows/outflows and increments/decrements.
If in this view we retain the separation between economic agents (outside parties) and economic units (inside parties), then we are, like McCarthy (1982), able to differentiate between ‘our’ parties and ‘theirs’, meaning that we can constrain the exchange type (i.e. exchange duality) such that it must contain at least one inside agent (i.e. economic unit) and one outside agent (i.e. economic agent). If however not all agents are outside parties, not all units are inside parties, or we dispense with the very distinction between units and agents and model them as the same type, then this constraint is no longer meaningful since we cannot guarantee that all economic units are inside and all agents outside. Combining the notions of economic units and economic agents into one notion of economic agents is, as already discussed, customary in REA and used in for example Geerts and McCarthy (2000a). Thus, we conclude that Axiom 3 is unsatisfiable at compile time even in the trading partner view.

On the flip side, the deeper question is whether the axioms presented by Geerts and McCarthy (2000b) cement characteristics that are desirable in the first place. While this is a question that ultimately ought to be debated in the fields of accounting or microeconomics, McCarthy (1982) does state in the original REA proposal that there are times when “duality requirements may be discarded” and that “nonreciprocal transfers” do exist, meaning that there

are “occasions when increments and decrements occur quite legitimately in isolation”. This suggests that what Geerts and McCarthy (2000b) established as the second axiom was originally, in McCarthy (1982), never intended to be viewed as something axiomatic. Stefansen (2004) suggests that “it is not obvious whether the constraint is desirable or not” and calls it “somewhat unsettling” that what in one moment is called an axiom is allowed to be violated in the next under the banner of what in REA lingo (Geerts & McCarthy, 1997) is called “implementation compromises” away from “full REA modeling”, which is also known as “epistemologically adequate enterprise schemas”.

For our purposes, we conclude both that it is questionable whether the REA axioms can be satisfied at compile time, and that it has been questioned whether the axioms are sensible in the first place. This serves as the justification for why the proposed contract language draws upon REA terminology yet violates these so-called axioms.
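The compile-time portion of this argument can be made concrete with a short Haskell sketch. This is illustrative only: the type and function names are mine, not drawn from the REA literature. It shows how giving inside and outside parties distinct types makes the duality constraint checkable by the compiler, and how merging units and agents into one type degrades the same constraint into a runtime check.

```haskell
-- Illustrative only: these names are not from the REA literature.
newtype EconomicUnit  = EconomicUnit  String deriving (Show, Eq) -- inside party
newtype EconomicAgent = EconomicAgent String deriving (Show, Eq) -- outside party

-- The duality constraint as a type: an exchange must involve one of each.
data Exchange = Exchange EconomicUnit EconomicAgent deriving (Show, Eq)

-- Type-checks; Exchange (EconomicUnit "a") (EconomicUnit "b") would not compile.
ok :: Exchange
ok = Exchange (EconomicUnit "our cashier") (EconomicAgent "their buyer")

-- With units and agents merged into one type, as is customary in later REA,
-- the same constraint degenerates to a runtime (value-level) check.
data Party = Party { partyName :: String, isInside :: Bool } deriving (Show, Eq)

exchange :: Party -> Party -> Maybe (Party, Party)
exchange p q
  | isInside p /= isInside q = Just (p, q) -- exactly one inside party
  | otherwise                = Nothing     -- rejected only at runtime
```

In the first half the invalid exchange is unrepresentable; in the second half it is representable and must be rejected by a check such as `exchange`.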

3.2.2 Extensions
While REA was originally introduced as a “domain specific theory” of how to design accounting information systems, it has over the years been extended to allow modeling of “economic phenomena in general” (Geerts & McCarthy, 2011). A rich account of these extensions is provided in David et al. (2002). For our purposes, three extensions are of direct relevance. These are transformations, commitments, and type images, and they are all introduced in Geerts and McCarthy (2000b). The contracts in the language proposed by this thesis (Chapter 8) correspond not only to what here is called ‘commitments’ but also to what here is called ‘agreements’ (which are aggregations of commitments). Type images are achieved by what the proposal calls ‘actualization’, and transformations are a type of economic event very similar to how they are defined here.

Geerts and McCarthy (2000b) realized that the notion of “duality” can extend beyond exchanges of economic resources into the transformation, or what Hruby (2006) calls “conversion”, of resources. Geerts and McCarthy (2000b) thus split the notion of “duality” into “transfer duality” and “transformation duality”. Transfer duality is simply another name for what we have discussed and what was previously called exchange duality or simply duality, but transformation duality is the idea that resource decrements dual to resource increments can be used to model resource transformations that increase the overall value of the enterprise’s resources. In other words, transformations (conversions) turn input into output in a way that is intended to add value for the agent performing the transformation.

Commitments were introduced by Geerts and McCarthy (2000b) to capture the notion of agents agreeing to execute some economic event in a well-defined future, which given the definition of economic events will result in either a

Figure 3.6. REA extended with transformations, commitments (and agreements), and type images.

decrement or increment of some resource. Much like McCarthy (1982), who argued that economic events exhibit exchange duality, Geerts and McCarthy (2000b) argue that economic commitments must come in pairs, but refer to such relationships as “reciprocal” commitments. What on the level of events we call ‘duality’ we should thus call ‘reciprocal’ on the commitment level.

Since pairs of economic events can be used to express either transfer dualities (called exchanges) or transformation dualities (called conversions), commitments can be used to commit to either of these two. Commitment pairs thus form either “transfer duality commitments” or “transformation duality commitments”, meaning exchange commitments or conversion commitments respectively. As can be seen in Figure 3.6, commitments² can reserve resources and specify the parties of the commitment. Geerts and McCarthy (2000b) propose that a collection of commitments can form an “economic agreement”

² Note that the ‘event’ and ‘commitment’ types of Figure 3.6 are not actually separated in Geerts and McCarthy (2000b), but their ambiguous representation allows for this interpretation.

which must either be a “contract” or a “schedule”. Intuitively, a contract consists of transfer commitments, while a schedule consists of transformation commitments. Hruby (2006, p. 101) does begin outlining essential parts of a contract, such as clauses and terms, and does suggest that agreements can be self-referential. Yet, compared to the work of Peyton Jones et al. (2000), which is explored in Section 3.3, these facilities are much too vague.

Geerts and McCarthy (2000b) realized that application designers may at times want to refer to ‘classes of things’ rather than ‘actual things’ and consequently introduced the notion of “type images”. They suggested that there are at least three kinds of type images: policies, prototypes, and characterizations. Policies restrict legal configurations of actual phenomena, meaning that they define what “should, could, or must be occurring sometime in the future” (Geerts & McCarthy, 2006). Prototypes define “blueprints” rather than restrictions, which can be likened to prototype-based inheritance or what is sometimes called ‘classless’ or instance-based programming. Characterizations are groupings that can be used to, for example, describe “substitutable resource types”. By allowing type images corresponding to resources, events, agents, and commitments, in the vein of Geerts and McCarthy (2000b), we can in Figure 3.6 see that it ought to be possible to express contracts on the type level.
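As a rough illustration of the notions of commitments, reciprocity, and agreements introduced above, they can be sketched as plain data types. All names and payload types below are hypothetical simplifications of my own, not Geerts and McCarthy’s actual model:

```haskell
-- Illustrative simplification: commitments come in two kinds, pair up
-- reciprocally, and aggregate into agreements.
data Kind = Transfer | Transformation deriving (Show, Eq)

data Commitment = Commitment
  { kind     :: Kind
  , provider :: String
  , receiver :: String
  , resource :: String
  } deriving (Show, Eq)

-- Reciprocal commitments pair up, much like dual events.
data Reciprocal = Reciprocal Commitment Commitment deriving (Show, Eq)

-- An agreement aggregates commitments; intuitively a contract holds
-- transfer commitments and a schedule holds transformation commitments.
data Agreement = Contract [Reciprocal] | Schedule [Reciprocal]
  deriving (Show, Eq)

-- Example: a simple exchange agreement (goods for cash).
sale :: Agreement
sale = Contract
  [ Reciprocal
      (Commitment Transfer "us"   "them" "goods")
      (Commitment Transfer "them" "us"   "cash") ]
```

Nothing in this sketch yet captures conditions or time, which is precisely the gap that Section 3.3 addresses.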

3.2.3 Limitations
The purpose of this section is to justify the choice to replace REA commitments with contract function combinators in the vein of Peyton Jones et al. (2000), and to solve typification by means of isomorphisms and functors using what Chapter 8 calls ‘actualization’.

There is a serious number of issues pertaining to the latest extensions of the REA model, such as commitments, policies, the contract state machine, and types. We feel that these have not been adequately described to warrant further investigation [...] (Stefansen, 2005)

In McCarthy (1982) REA was influenced by, and expressed within, the entity relationship model of Chen (1976), that is, one of the foundations of relational database technology. Later, it was found that REA maps well to object-oriented programming (Geerts & McCarthy, 1997) and it was argued that objects might be a better fit for REA than relational databases (Geerts & McCarthy, 2011).

When applying REA in object-oriented terms, economic resources, events, and agents can all be trivially modeled as interfaces, abstract classes, or superclasses that domain-specific classes can in turn implement or inherit from. However, since REA is a meta model (Stefansen, 2005), application designers

must define their own specializations at compile time. The absurdity is particularly evident when considering that the programmer has to define a specialized class for every type of exchange and conversion that can occur. To allow runtime configuration of subclasses one either needs metaprogramming, which Stefansen (2005) argues increases complexity significantly, or intricate design patterns as employed by Nakamura and Johnson (1998), which by themselves increase complexity. Nakamura and Johnson (1998) had, for example, to resort to using the “Type Object Pattern” among other patterns, and to use OQL (Object Query Language) in lieu of the well-studied SQL (Structured Query Language).

All patterns in the seminal book by Gamma et al. (1995) solve domain-agnostic, general problems, while REA patterns are all focused on microeconomic activity. Consider for example the following interpretations of patterns from Gamma et al. (1995). The Decorator pattern enables runtime composition, the Observer pattern enables event-driven programming, the Factory pattern enables treating classes as objects (which allows instantiation of types determined at runtime), the Iterator pattern enables structure-agnostic traversal, while the Visitor pattern enables double dynamic dispatch. Evidently, REA design patterns are much more narrow in scope. Geerts and McCarthy (1997) themselves stressed this difference in generality, and suggested that a key to advancing work in the accounting domain is “the idea of building a REA system Framework” (as opposed to a design pattern).

While treating REA as a ‘design pattern’ appears entirely valid in light of the definitions of R. Smith (1987) and that of Beck and Cunningham (1987), as we have seen, it seems to cause quite a commotion. Whether the REA community has remained unaware of, or simply rejects, ideas of favoring object composition over inheritance (Gamma et al., 1995, p. 20), function composition (Hughes, 1989), or parametric polymorphism³ remains unsaid.

3.2.4 Relevance
Up to this point we have introduced the REA entities agents, events, exchanges (transfers), conversions (transformations), commitments, and type images. To understand why all of these concepts are key to the modeling of policy interventions for antibiotic development, consider again the simple policy intervention known as a market entry reward. Simplistically viewed, a market entry reward is a prize (i.e. an economic resource) transferred (in an economic event) to a developer (i.e. an economic agent) from some funding body (i.e.

³ Parametric polymorphism, also known as universal types, universally quantified types, or generics, has been known for a long time, and was independently discovered by both Jean-Yves Girard (1972) in his “System F” and John Reynolds (1974) in his “polymorphic lambda calculus” (Pierce, 2002, p. 341). It reached the common object-oriented language Java in 2004.

an economic agent) as a consequence of the developer successfully bringing an antibiotic (i.e. an economic resource) to market.

Conversions (i.e. transformations) are relevant since they help us formulate how a potential antibiotic in pre-clinical development can be refined into a market-ready antibiotic. Conversions border on being beyond the scope of this thesis, since they are more concerned with how to model antibiotic development than with how to model policy interventions. They are however still included in this thesis since they form such a fundamental part of what will constitute the bridge between arbitrary antibiotic development models and contracts. Specifically, they serve as a specific type of agent message (Section 8.5).

Commitments are relevant because they are the foundation of contracts, and the problem statement of the thesis (Section 1.1) is centered around the understanding of policy interventions for antibiotics seen as contracts. Type images are relevant because while we want to be able to model policy interventions as specific agreements between specific agents concerning specific resources, we also want to be able to express them in general.

Given the problems surrounding commitments and type images, the type images in the proposed solution (Chapter 8) draw on functors and parametric polymorphism, while the contract formalization builds upon Peyton Jones and Eber (2003), whose work is elucidated in the next section.

3.3 Compositional financial contracts
Agreements in extended REA do not capture the notions of conditionals, predicates, or time. While REA agreements formally capture future commitments, the associated “terms and constraints are usually described in natural language and as such live outside the scope of the entity-relationship model” (Andersen et al., 2006). In other words, REA does not capture the circumstances under which these potential commitments are manifested. An “obvious path” forward (Stefansen, 2004) is to combine REA with the idea of compositional financial contracts, as pioneered by Peyton Jones et al. (2000) and then revised in Peyton Jones and Eber (2003).

Andersen et al. (2006) argue that contracts may either be “express” or “implied”, which can be taken to mean that all exchange can be thought of as the execution of some possibly implied underlying contract. An implied contract is when some parties engage in economic exchange without an explicit contract. Consider for example the act of purchasing a cup of coffee at a coffee shop. There is an “implied” contract of the form ‘The customer (A1) pays some amount of money (R1) in exchange for the provision of coffee (R2) by the waiter (A2)’. Andersen et al. (2006) thus suggest that the term ‘contract’ must be understood in the broader sense of a “structure that governs any trade or production even if it is not verbal”. The word ‘contract’ is used in this broader sense in this thesis as well.

The objectives, and hence the proposal of this thesis, presented in Chapters 6 and 8 respectively, suggest a contract language similar to Peyton Jones and Eber (2003) altered with concepts from REA. This section briefly outlines the underlying works, namely the compositional financial contracts of Peyton Jones and Eber (2003) and the extensions by Andersen et al. (2006) and Stefansen (2005).

The finance industry has an enormous vocabulary of jargon for typical combinations of financial contracts (swaps, futures, caps, floors, swaptions, spreads, straddles, captions, European options, American options, ...the list goes on). Treating each of these individually is like having a large catalogue of prefabricated components. The trouble is that someone will soon want a contract that is not in the catalogue. (Peyton Jones et al., 2000, p. 280)

The compositional financial contracts of Peyton Jones et al. (2000) and Peyton Jones and Eber (2003) are essentially a set of primitive function combinators that can be composed in order to formally express arbitrarily complex financial contracts between two parties. Financial contracts are agreements that are traded on financial markets, such as options and bonds.

Andersen et al. (2006) cite estimates from personal communication with Jean-Marc Eber, one of the authors of Peyton Jones et al. (2000), when stating that a major French bank has had yearly losses of around 50 million euros as a consequence of disagreements about the obligations of contracts, and breach or malexecution of contracts.

It seems that contract coding is a healthy process in the sense that it will of- ten unveil underspecification and errors in the natural language contract being coded. (Andersen et al., 2006, p. 20)

As the initial paper on compositional financial contracts (Peyton Jones et al., 2000) was published, Eber founded the company LexiFi⁴, which aimed to commercialize the combinator library using the functional language OCaml. LexiFi is still running and its technology is reported (Bloomberg, 2014) to have been licensed by Bloomberg. Compositional financial contracts are also a cornerstone of the company Netrium⁵, which “enables financial engineers to precisely describe and execute exotic and hybrid contracts”. In their words, they enable the “definition and operational execution of financial and physical energy contracts, with arbitrary optionality and conditionality”. The core mechanisms of Netrium remain open source⁶ and upon inspection it is evident that, while the project has progressed beyond the initial implementations of Peyton Jones et al. (2000) and Peyton Jones and Eber (2003), the core ideas remain the same.

⁴ https://www.lexifi.com/
⁵ http://netrium.org/
⁶ https://github.com/netrium/Netrium

3.3.1 Combinators
The primitive combinators presented in the original paper (Peyton Jones et al., 2000) and the updated paper (Peyton Jones & Eber, 2003) are not all the same. In the updated version the authors do away with the notion of a “contract horizon” and instead shift their focus towards arbitrary predicates. The term “horizon” was in the first paper used to denote the “expiry date” beyond which a contract can no longer be “acquired”. While a contract can in this original formulation not be acquired beyond its horizon, its rights and obligations may “extend well beyond”. In the updated paper the notion of a horizon is no longer necessary due to the introduction of the combinators when and until, as well as the altered semantics (and syntax) of the combinator anytime. The authors also added the combinator cond, which captures the logic of binary conditionals. As all these were based on arbitrary predicates dressed as “observables”, this allowed the authors to drop the idea of contract horizons, as well as the combinators truncate, get, and then. We will henceforth be referring to the updated paper, namely Peyton Jones and Eber (2003), unless otherwise noted.

What follows is a brief overview and explanation of the combinators presented in Peyton Jones and Eber (2003). While neither Peyton Jones et al. (2000) nor Peyton Jones and Eber (2003) define the datatype Contract that all combinators yield, the former paper does mention that the type Contract in their implementation indeed is “an algebraic data type, with one constructor for each primitive”.

zero :: Contract
This is a contract with no rights and no obligations, which is useful in a sense similar to how the number zero is useful. Consider for example a function that returns a valuable contract if the recipient is eligible, but no contract if the recipient is not.
Such functions won’t have to wrap the returned contract in for example a Maybe type (which would cause users of it to have to deal with unwrapping) but can simply return the zero contract when the recipient is not eligible. This worthless contract can also be used to express an option as a binary choice between some possibly valuable contract and a worthless contract. The zero contract can also be understood as a ‘null object’ in the sense of the object-oriented “Null Object Pattern” first published in Martin et al. (1998, p. 5).

one :: Currency -> Contract
Acquiring the contract (one k) means that you have the right to immediately receive one unit of currency k. As we shall see, Peyton Jones et al. (2000) realized that by employing scalars, the underlying ‘atomic’ contract does not have to specify an amount but can content itself with simply specifying a currency.

give :: Contract -> Contract
This combinator ‘mirrors’ a contract such that all rights become obligations and vice versa. In economics parlance, the atomic contract (one k) can be thought of as a ‘take’ of k, while (give (one k)) can be thought of as a ‘give’ of k. Much like how REA was originally formulated in what later became known as the “trading partner view” rather than the “independent view” (see Figure 3.5), so too are compositional financial contracts expressed from the view of a single enterprise. If one party holds the contract c the other must therefore hold (give c).

and :: Contract -> Contract -> Contract
This combinator combines two subcontracts into a composite contract with all the rights and obligations of both contracts. All the rights and obligations are immediately acquired.

or :: Contract -> Contract -> Contract
This combinator encodes an immediate choice between two mutually exclusive contracts. The acquirer of this contract must immediately choose which of the two subcontracts to acquire. Choosing one subcontract eliminates the ability to choose the other.

cond :: Obs Bool -> Contract -> Contract -> Contract
If you acquire the contract (cond o c1 c2) then you will immediately acquire c1 if o yields true, or immediately acquire c2 if o yields false. The cond combinator is, in the proposal of this thesis (Chapter 8), called IfElse. Observables are explained further in Section 3.3.2 but are essentially “objective, but perhaps time-varying” (Peyton Jones et al., 2000) values. Contracts containing observables are thus context-dependent.

scale :: Obs Double -> Contract -> Contract
This combinator returns a new contract where the subcontract is scaled by the value yielded by the numeric observable. Acquiring the contract (scale o c) therefore results in immediately acquiring the underlying contract c but where all rights and obligations have been scaled (multiplied) by the numeric value yielded by the observable o.
This behavior is, as shown in Section 6.11, difficult to maintain when generalizing to arbitrary, possibly non-fungible, resources.

when :: Obs Bool -> Contract -> Contract
This contract demands that you acquire the underlying contract as soon as the observable becomes true. If the observable will never again be true, then the contract is worthless.

anytime :: Obs Bool -> Contract -> Contract
If you acquire the contract (anytime o c) then you may acquire the underlying contract c at your discretion whenever the observable o is true. You may however only acquire the underlying contract c once.

until :: Obs Bool -> Contract -> Contract
If you acquire the contract (until o c) then you immediately acquire c, but c must be given up as soon as the observable o becomes true. If the observable is already true at the time of acquisition, then the contract is worthless.

Contracts built from these combinators can trivially be visualized as trees, or as graphs if recursive. Yet, a strength of this approach when compared to REA commitments is arguably that complex contracts need not be visualized to be human readable.
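Although the papers leave the datatype abstract, the quoted remark that Contract is “an algebraic data type, with one constructor for each primitive” suggests a sketch along the following lines. The Obs representation, the helper at, and the constructor payloads are assumptions made here for illustration, not the authors’ actual implementation:

```haskell
-- An observable sketched as a function of discrete time (an assumption).
newtype Obs a = Obs (Int -> a)

konst :: a -> Obs a
konst x = Obs (const x)

at :: Int -> Obs Bool -- hypothetical helper: true exactly at time t
at t = Obs (== t)

type Currency = String

-- One constructor per primitive combinator.
data Contract
  = Zero
  | One Currency
  | Give Contract
  | And Contract Contract
  | Or Contract Contract
  | Cond (Obs Bool) Contract Contract
  | Scale (Obs Double) Contract
  | When (Obs Bool) Contract
  | Anytime (Obs Bool) Contract
  | Until (Obs Bool) Contract

-- Example: receive 100 USD at time 10 (a zero-coupon-bond shape).
zcb :: Contract
zcb = When (at 10) (Scale (konst 100) (One "USD"))

-- Functions over contracts become structural recursion, e.g. counting
-- the atomic (leaf) contracts.
leaves :: Contract -> Int
leaves Zero            = 1
leaves (One _)         = 1
leaves (Give c)        = leaves c
leaves (And a b)       = leaves a + leaves b
leaves (Or a b)        = leaves a + leaves b
leaves (Cond _ a b)    = leaves a + leaves b
leaves (Scale _ c)     = leaves c
leaves (When _ c)      = leaves c
leaves (Anytime _ c)   = leaves c
leaves (Until _ c)     = leaves c
```

The point of the representation is that evaluation, visualization, and simplification can all be written as folds over this one datatype.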

3.3.2 Observables
Observables, as designed by Peyton Jones et al. (2000), are objective but possibly time-varying values. These can usefully be thought of as ‘external’ values, in the sense that the meaning of contracts varies with the observables they contain. Note that observables are parametrically polymorphic over the type of value that is to be observed, such that (Obs a) will yield a value of type a.

Peyton Jones and Eber (2003) provide an insightful example of when such external dependency is useful. Consider for instance so-called “weather derivatives”. Ski resorts might offer a ‘snow guarantee’ along the lines of the (simplified) example below:

cond (snowInMeters %>= 0.5) (give c) zero

where we assume that c is the contract specifying your payment to the resort, and that the observable snowInMeters has the type (Obs Double). In essence, the contract above states that the holder of the contract is obliged to follow contract c only if the amount of snow measured is at least half a meter.

In the example above, we make use of the operator %>=, which Peyton Jones and Eber (2003) define by ‘lifting’ the binary operator >= to observables. Readers unfamiliar with lifting can informally think of this as taking a unary or binary function that works on primitives and ‘lifting’ it into the space of observables so that it now works on primitive values ‘inside’ observables. The power to lift operations into observables is very useful since we can trivially formulate arbitrarily complex expressions based on multiple observables. Unary, binary, and ternary lifting in general have the following types:

lift  :: (a -> b) -> f a -> f b
lift2 :: (a -> b -> c) -> f a -> f b -> f c
lift3 :: (a -> b -> c -> d) -> f a -> f b -> f c -> f d

and if these can be defined for observables then we can trivially define arithmetic and relational operations on observables. In Haskell we would implement the Num type class:

instance Num a => Num (Obs a) where ...

which means that observables containing numbers essentially can be treated as numbers. For relational operators we would however have to define a set of custom functions, which Peyton Jones et al. (2000) define as:

(%<) :: Ord a => Obs a -> Obs a -> Obs Bool
(%<) = lift2 (<)
...

Finally, Peyton Jones et al. (2000) suggest that there ought to exist a constant observable that always yields the same value:

konst :: a -> Obs a

Given this desire to lift, one may ask whether observables in fact are, say, applicative functors. In the proposal given in Chapter 8, observables are in fact profunctors, meaning that they are bifunctors that are contravariant in the first parameter and covariant in the second.
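To make the lifting machinery concrete, here is a minimal sketch in which an observable is represented as a function of discrete time. The representation, the runObs accessor, and the snowInMeters example are assumptions of mine; the papers keep Obs abstract:

```haskell
-- An observable as a function of (discrete) time: an illustrative choice.
newtype Obs a = Obs { runObs :: Int -> a }

konst :: a -> Obs a
konst x = Obs (const x)

lift :: (a -> b) -> Obs a -> Obs b
lift f (Obs o) = Obs (f . o)

lift2 :: (a -> b -> c) -> Obs a -> Obs b -> Obs c
lift2 f (Obs o1) (Obs o2) = Obs (\t -> f (o1 t) (o2 t))

-- Relational operators lifted to observables, as in the papers.
(%>=) :: Ord a => Obs a -> Obs a -> Obs Bool
(%>=) = lift2 (>=)

-- Numeric observables can be treated as numbers.
instance Num a => Num (Obs a) where
  fromInteger = konst . fromInteger
  (+)    = lift2 (+)
  (*)    = lift2 (*)
  (-)    = lift2 (-)
  abs    = lift abs
  signum = lift signum

-- A hypothetical snow-depth observable: one decimeter per time step.
snowInMeters :: Obs Double
snowInMeters = Obs (\t -> fromIntegral t / 10)
```

With this in place, an expression such as (snowInMeters %>= konst 0.5) is itself an observable of booleans that can be sampled at any time.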

3.3.3 Extensions
We now move to explore some extensions of Peyton Jones and Eber (2003) that are directly relevant to modeling policy interventions for antibiotic development. The extensions stem from Stefansen (2005) and Andersen et al. (2006). The syntax used in Stefansen (2005) and Andersen et al. (2006) is more terse than that of Peyton Jones and Eber (2003), yet arguably less reader-friendly. As such, we here and in the proposal (Chapter 8) translate these extensions into a syntax much more similar to that of Peyton Jones and Eber (2003).

Generalized resources
By omitting the scale combinator, Andersen et al. (2006) manage to generalize contracts away from currency and up to arbitrary resources, which in their words include information. As previously mentioned, generalizing to arbitrary (possibly non-fungible) resources is non-trivial due to the scale combinator. Consider for instance how to define scaling of an antibiotic project. Is there any universally sensible interpretation of the statement “multiply that antibiotic by 1.5”? Bar the complications of scaling, the contracts of Peyton Jones and Eber (2003) can trivially be universally quantified over resources. While we are not using the syntax of Andersen et al. (2006), the basic idea is simply:

one :: r -> Contract r

meaning to make contracts parametrically polymorphic over the resource type.

Multiple parties
The compositional contracts by Peyton Jones et al. (2000) have no explicit notion of agents. Stefansen (2004) argues that when combining this contract

model with REA, an obvious extension is to make agents explicit. This, consequently, enables multi-party contracts but also highlights the need to parameterize contracts over the agent type.

Drawing on REA terminology (McCarthy, 1982) we can trivially see how to generalize the one combinator to allow multi-party contracts. Any transfer (actually: increment or decrement economic event) in REA must be associated with a resource and two agents, a “provider” and a “receiver” (Hruby, 2006, p. 17). Similarly, Andersen et al. (2006) specify transfers as

transmit(v1, v2, r, t)

where v1 and v2 are members of the set of agents, r is a member of the set of resources, and t is some time. This is a simplistic interpretation of Andersen et al. (2006) since they actually make a distinction between commitments (meaning transfers that must be executed) and transfers (meaning executed transfers that might or might not satisfy some commitment), where commitments specify predicates that transfers must satisfy. At its core, however, the idea is that an atomic transfer consists of two parties and a resource. Using syntax similar to Peyton Jones and Eber (2003) we can, again, reformulate the combinator one to:

one :: a -> a -> r -> Contract r a

where a defines agents and r resources. With this, we can express multi-party contracts over arbitrary resources. We have however still, as previously mentioned, lost the ability to scale contracts since we do not know how to scale arbitrary resources. This limitation is addressed in the proposal by generalizing scaling away from multiplication and up to function application.
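A minimal sketch of this doubly generalized atomic contract might look as follows. The constructor names, the Both combinator, and the example resources are illustrative choices of mine, not from Andersen et al. (2006):

```haskell
-- Parametrically polymorphic over both resources (r) and agents (a).
data Contract r a
  = Zero
  | One a a r                          -- provider, receiver, resource
  | Both (Contract r a) (Contract r a) -- parallel conjunction
  deriving (Show, Eq)

one :: a -> a -> r -> Contract r a
one = One

-- Money between named parties ...
payment :: Contract (String, Double) String
payment = one "funder" "developer" ("USD", 1000000)

-- ... and, with the very same combinator, a non-fungible resource.
newtype Antibiotic = Antibiotic String deriving (Show, Eq)

handover :: Contract Antibiotic String
handover = one "developer" "public" (Antibiotic "market-ready antibiotic")
```

Note that nothing here attempts to scale an Antibiotic, which is precisely the operation that resists generalization.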

Reduction
In the words of Andersen et al. (2006), a contract “specifies a set of alternative performing event sequences (contract executions), each of which satisfies the obligations expressed in the contract and concludes it”. The word ‘performing’ is here used since complying with a contract is, in economics parlance, known as “performance” while violating a contract, that is breaching a contract, is known as “nonperformance” (Andersen et al., 2006). The realization here is that a contract can be thought of as a (possibly infinite) set of valid transfer sequences. In Section 3.3.3 we discuss deviations from valid paths, meaning contract violations (also known as breaches of contract), but here we discuss what Andersen et al. (2006) term “reduction”. The approach of Andersen et al. (2006) can be contrasted to the (albeit less developed) proposal of Østerbye (2004) where a contract at any point in time consists of some “instantiated” parts and some open options. This thesis follows the approach of Andersen et al. (2006) by considering contracts reducible under event sequences.

When a transfer expected by some contract is executed, then that full contract should be “reduced” into a contract containing the “residual” obligations, where the transfer in question is no longer expected. Contracts should thus be

“reduced” as a consequence of incoming events, where events in this context are transfers. Since Andersen et al. (2006) draw on process calculus, the word ‘reduce’ is used in the sense of reduction semantics. In this thesis however, ‘reducing’ and ‘reduction semantics’ are used to point the reader’s attention to the concept of folding (which in some languages is known as ‘reduce’). A right-fold in general has the type signature:

foldr :: (a -> b -> b) -> b -> f a -> b

where in our case a is an event, f a is some form of stream of events, and b is a (possibly residual) contract. All this requires (1) matching events with contracts to determine whether they should simplify the contract or not, and (2) defining how a transfer reduces a contract.

Andersen et al. (2006) entertain two models for matching and reducing contracts that they call “deferred” and “eager”. They attempt to match arbitrary events with arbitrary contracts, and it appears as if contracts cannot be unambiguously reduced in an eager fashion, meaning that one must either defer matching or allow eager matching to present a set of possible paths. While this is very important work, it is not critical for the question of this thesis, and Andersen et al. (2006) state that eager matching can be performed if “events are accompanied by control information that unambiguously prescribes how a contract is to be reduced”. As such, we, in the proposal (Chapter 8), employ universally unique identifiers (UUIDs) to match events with atomic contracts. This can be likened to what often happens in reality, where bill payments are matched by some unique identifier on an invoice. Consider for example the OCR (Optical Character Recognition) system used on invoices in Sweden.
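Under the assumption that events carry identifiers that unambiguously match atomic contracts, reduction as a fold can be sketched as below. The contract representation is a deliberately tiny stand-in of my own, using plain integers in place of UUIDs:

```haskell
type Id = Int

-- A tiny stand-in contract type: expected transfers tagged with identifiers.
data Contract
  = Zero
  | Expected Id String
  | Both Contract Contract
  deriving (Show, Eq)

newtype Event = Executed Id deriving (Show, Eq)

-- One event discharges the atomic contract carrying the matching identifier.
reduce :: Event -> Contract -> Contract
reduce _ Zero = Zero
reduce (Executed i) c@(Expected j _)
  | i == j    = Zero
  | otherwise = c
reduce e (Both a b) =
  case (reduce e a, reduce e b) of
    (Zero, Zero) -> Zero       -- both sides discharged: nothing remains
    (a', b')     -> Both a' b'

-- The residual contract is a fold of reduce over the event stream.
residual :: Contract -> [Event] -> Contract
residual = foldr reduce
```

Executing both expected transfers thus folds the contract all the way down to Zero, while executing only one leaves the other as a residual obligation.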

Sequential conjunction
While the conjunction combinator and has been around since the first paper on compositional contracts (Peyton Jones et al., 2000), Andersen et al. (2006) realized that it is important to distinguish between “parallel” and “sequential” conjunction. The combinator and of Peyton Jones and Eber (2003) is parallel, since it demands that both subcontracts be acquired immediately.

To appreciate the rationale behind sequential conjunction, consider a contract where my obligation is to pay you, but only after you have delivered some specified goods. While this contract clearly could be modeled using an observable that tracks the conclusion of the first contract (delivery of goods) in order to activate the second (payment for goods), this is unnecessarily complicated. By instead expressing the two contracts as a sequential conjunction, the payment would never be required until the goods have been delivered. Sequential conjunction of two subcontracts thus specifies that the second contract should be activated as soon as (or if) the first is completed, given some definition of completeness. Note that the proposal (Section 8.1) contains a combinator for sequential conjunction called andThen. However, this must not be mistaken for the combinator then from Peyton Jones et al. (2000), since it has entirely different semantics. The then combinator behaves such that the contract is equal to the first subcontract as long as that subcontract has not expired, but when it does, the contract is equal to the second subcontract. The then combinator was removed in the updated publication, Peyton Jones and Eber (2003). Sequential conjunction of contracts is only meaningful if the first contract can be reduced to something essentially equivalent to zero. Andersen et al. (2006) allow contracts to reduce either to Success or to Failure, where the second subcontract in a sequential conjunction is only entered when (or if) the first succeeds. To implement sequential conjunction for our purposes we only need to be able to reduce to success, that is, to zero.
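A minimal sketch of these semantics, with entirely illustrative constructor names (this is not the andThen of Section 8.1), might look as follows:

```haskell
-- A toy contract type: Zero is the trivially fulfilled contract,
-- One n is a single expected transfer (identified by a name), and
-- AndThen is sequential conjunction.
data Contract
  = Zero
  | One String
  | AndThen Contract Contract
  deriving (Eq, Show)

-- A contract is complete when it has reduced to (something
-- equivalent to) Zero.
complete :: Contract -> Bool
complete Zero          = True
complete (AndThen a b) = complete a && complete b
complete _             = False

-- Reduction under a named transfer. Crucially, in AndThen the second
-- subcontract is untouched until the first is complete: the payment
-- is never expected before the goods have been delivered.
reduce :: String -> Contract -> Contract
reduce n (One m) | n == m = Zero
reduce n (AndThen a b)
  | complete a = AndThen a (reduce n b)
  | otherwise  = AndThen (reduce n a) b
reduce _ c = c
```

Here One "deliver" `AndThen` One "pay" ignores a premature "pay" event, but reduces to completion once "deliver" and then "pay" arrive in order.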

Violations
We cannot, however, completely ignore notions of non-performance, meaning contract violations. The critical question to ask is: Should all undesirable but possible paths be encoded into a contract, or should they be considered violations and be dealt with ad hoc? Consider for instance the act of paying a bill after the due date, but also paying the appropriate penalty fee. Whether or not there ever was a violation is a domain-specific question that depends on how the contract issuer in question behaves. In a sense, the concept of violation is less useful without semantics surrounding what to do in case of one. If the business keeps track of the number of times I have paid late, in order to send me a ‘warning’ and eventually, perhaps, to choose not to re-enter into business with me, then yes, there are consequences of me paying late, but whether this should be termed a violation is entirely a question of domain-specific terminology. Further, it is evidently possible to model debt collectors as agents performing debt collection via contracts in a REA-like system extended with compositional contracts.

When coding the contract, one notices that the contract fails to specify the ramifications of the client’s non-approval of a deliverable. One also sees that the contract does not specify what to do if due to delay, some approval deadline comes before the postponed delivery date. (Andersen et al., 2006, p. 20)

As we have already seen, Andersen et al. (2006) emphasize how contract coding is a “healthy process” that can often “unveil underspecification”. Taking this logic to its extreme, it ought to be possible to specify all potential paths through a contract such that no violations can occur without them being considered a choice (or the lack thereof) made within the potential paths of the contract. Andersen et al. (2006) have done extensive work on the automatic detection of contract violations, and reason that the “focal point is being able to decide if a predicate can not hold true for any future values of its parameters”. In

practice this often means determining whether a deadline has passed or not. In the proposal in Section 8.1 we take this to the extreme and make deadlines based on arbitrary predicates the only way a contract can fail. If a boolean observable, however, is an arbitrary predicate, then the obvious downside is that we cannot deterministically ‘reason’ about the future states of such predicates, meaning that we cannot actually speculate about when a deadline will come without resorting to brute force.

3.3.4 Limitations
We now move to explore some key limitations, for our purposes, of the compositional contracts proposed by Peyton Jones and Eber (2003) and extended by Stefansen (2005) and Andersen et al. (2006).

Arbitrary decision-makers
The semantics of the choice combinator or in both Peyton Jones et al. (2000) and Peyton Jones and Eber (2003) suggest that the contract acquirer is the decision-maker. This has two unfortunate consequences. First, two people cannot hold mirror contracts where only one individual is the decision-maker, since there is no ‘inverse’ of or, unless choice is considered a right that can be inverted by the give combinator. This is possible, albeit slightly awkward. To express a contract where the other party has the choice of either giving us c1 or c2 we would have to say:

give ((give c1) `or` (give c2))

where the outer give inverts the right to choose and the inner give inverts the transfer obligation. Note that this contract is not equivalent to the left-hand side of:

give (give (c1 `or` c2)) = c1 `or` c2

since that merely is a reformulation of the right-hand side, which states that the holder has the right to choose between ‘taking’ c1 or c2. We, however, wanted to express that the other party has the right to choose whether we should take c1 or c2. Whether this is consistent with the other semantics outlined by Peyton Jones and Eber (2003) is a different question. Awkwardness aside, we are still left with the second shortcoming: namely that we cannot express contracts where a third party gets to choose. Considering real-world phenomena like arbitration or escrow, we quickly realize that such contractual structures do exist. In the context of policy interventions for antibiotic development, consider for example an organization acting as an independent evaluator of prize eligibility for some prize-based intervention. The independent evaluator would examine the resource underlying the contract, and decide whether to activate the ‘left’ or the ‘right’ subcontract, that is, whether the prospective antibiotic is eligible or not.

Transfer UUIDs are sufficient for matching transfers with commitments under the assumption that instances of agents and resources can be compared for equality, so that we can check whether the transferred resources match the specification and whether the recipient is the correct one. However, using transfers as a marker for decision-making is only useful if the decision-maker is responsible for the next immediately pending transfer. If the decision-maker is not responsible for the next transfer under the chosen path, then transfers alone cannot be used to reduce contracts, regardless of whether UUIDs are used or not. It is for this reason that the proposal in Section 8.5 contains events that specifically pertain to the selection of paths.
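As an illustration of the idea, and an assumption on my part rather than the actual design of Section 8.5, path selection can be modeled as a dedicated event type alongside transfers:

```haskell
-- Which branch of a choice is selected.
data Side = L | R deriving (Eq, Show)

-- Events are either transfers or explicit path selections. Both
-- carry the UUID of the atomic contract or choice point they target,
-- so matching is unambiguous even when the decision-maker is not the
-- party responsible for the next pending transfer.
data Event
  = Transfer String     -- discharge the commitment with this UUID
  | Select String Side  -- resolve the choice point with this UUID
  deriving (Eq, Show)

-- A toy contract: a single expected transfer, or a choice point.
data Contract
  = Zero
  | One String                   -- an expected transfer, by UUID
  | Or String Contract Contract  -- a choice point, by UUID
  deriving (Eq, Show)

-- Top-level reduction only, for brevity: a transfer discharges a
-- matching commitment, a selection resolves a matching choice, and
-- everything else (notably a transfer aimed at an unresolved choice)
-- leaves the contract untouched.
reduce :: Event -> Contract -> Contract
reduce (Transfer u) (One v)    | u == v = Zero
reduce (Select u s) (Or v l r) | u == v = if s == L then l else r
reduce _ c = c
```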

Actual observables
Both Peyton Jones et al. (2000) and Peyton Jones and Eber (2003) suggest that their observables are similar to the “behaviors” of the functional reactive animation system FRAN, published by Elliott and Hudak (1997). Elliott and Hudak (1997) suggest that behaviors trivially could be implemented as:

data Behavior a = Behavior (Time -> a)

which essentially means that they are functions from time to some value. They immediately suggest, however, that in their case, for reasons of optimization, the implementation should rather be:

data Behavior a = Behavior (Time -> (a, Behavior a))

meaning that behaviors not only return values but also simplified versions of themselves7. In either of the two implementations above, however, the essence lies in values that depend on time. Literally, since any instance of the type (Behavior a) would contain a function from Time to a. Neither Peyton Jones et al. (2000) nor Peyton Jones and Eber (2003), however, provide any such details about how observables are to be implemented. Whether, for instance, the type parameter of the observable type is a phantom type or not is neither discussed nor implied. Peyton Jones was even questioned by an audience member, during a presentation at Ericsson (Peyton Jones, 2008), about how ‘functional’ (in the sense of functional programming) the notion of observables really is. The gist of his reply was that how to construct observables essentially is an implementation detail. While it seems reasonable to consider observables an implementation detail, two reasons suggest that implementation ought to be discussed anyway. First, observables are expected to support unary and binary lifting (Section 3.3.2), which (at least) in the case of phantom types constrains permissible implementations. Second, when agents reason about the value of a contract, this ought to involve predicting future values of involved observables.
Consider for instance a policy intervention that states that the first three eligible antibiotics to reach

7. Elliott and Hudak (1997) state that their actual implementation is a bit more complicated.

market will receive some substantial prize. If a developer with a prospective antibiotic is targeting the prize, then this developer ought to assign a probability to the likelihood of being one of the first three to reach market. Valuing this contract as if being one of the first three were a certainty is naive. An even more complicated example is what Mossialos (2010, p. 94) calls a “best-entry tournament”, where a prize is awarded to whoever has made the most progress towards a predefined goal at a predefined date. Andersen et al. (2006) employ the term “contract analysis” to refer to asking contracts and portfolios of contracts questions such as: When is my next deadline? How much stock of resource r am I likely to have at time t? Andersen et al. (2006) parameterized their contract language over both predicates and arithmetic. They conclude that language designers must balance expressiveness with the undecidability of analyses.

There is a clear trade-off in play here: a sophisticated language buys expres- siveness, but renders most of the analyses undecidable. (Andersen et al., 2006)

Netrium, the previously discussed company with a contract engine based on the work of Peyton Jones and Eber (2003), has open sourced8 part of their work, which means that we can explore their implementation. In Netrium, observables are either constants or named observables that can be extracted either from the local environment or from some external environment (using XML as a transportation format). In contrast to Elliott and Hudak (1997), Netrium observables do not store the functions themselves. Instead, observables are ‘symbols’ that can be provided to an observable evaluation function that in Netrium essentially has the type signature:

eval :: Time -> Obs a -> a

where we say ‘essentially’ since the actual return type is (Steps a), but this is truly an implementation detail as the type is merely used to extract the value from the environment. Eventually the call to eval will result in a single a wrapped in the Result constructor. The bottom line is that while both Netrium observables and the behaviors of Elliott and Hudak (1997) are time-dependent values, their implementations are starkly different. Netrium observables hold symbols whose computation may require IO, while FRAN behaviors hold pure functions. One benefit of holding symbols, in the vein of Netrium, is that reasoning about future states of an observable using heuristics is likely to be significantly simpler, since a (possibly unique) reasoning function can be implemented for each symbol. In the case of observables being functions this is not possible in a language such as Haskell, where functions cannot be compared for equality. How can we, for instance, tell the difference between a function that returns the

8. https://github.com/netrium/Netrium/blob/master/src/Observable.hs

number of current antibiotics meeting some specification, and a function returning the number of patients currently in need of some particular antibiotic, if both of them have the type Time -> Int, meaning that both return integers? Interestingly, the type (Obs a) works both in the case of observables as functions and in the case of observables as symbols. In both cases, lifting is implementable, albeit in different ways. However, the ability to reason about the future value of some observable depends heavily on the implementation in question. Regardless of whether we implement observables as functions or as symbols, decoupling from the notion of time as input and instead allowing observables to be polymorphic over input is trivial. With observables as functions we could say:

data Obs i o = Obs (i -> o)

meaning that observables simply wrap functions. In the proposal, specifically Section 8.2, we choose the route of observables as functions, but also let them store their last computed value, in the sense of a memoized function. In conclusion, the point here is that the structure of observables has important consequences for contract analysis, and can thus not be entirely written off as an implementation detail.
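To illustrate the contrast, here is a sketch of both representations. All names are illustrative; this is neither the implementation of Netrium, nor of FRAN, nor of the proposal. Lifting is implementable in both, but only the symbolic representation retains a structure that can be inspected when reasoning about future values:

```haskell
{-# LANGUAGE GADTs #-}

-- Observables as functions: values that depend on some input i
-- (time, or a whole world state).
newtype ObsF i o = ObsF (i -> o)

-- Observables as symbols, in the vein of Netrium: constants, named
-- values to be looked up in some environment, and lifted operations.
data ObsS o where
  Konst :: o -> ObsS o
  Named :: String -> ObsS Double   -- looked up externally
  Lift2 :: (a -> b -> o) -> ObsS a -> ObsS b -> ObsS o

-- Lifting is implementable for observables-as-functions ...
lift2F :: (a -> b -> c) -> ObsF i a -> ObsF i b -> ObsF i c
lift2F f (ObsF g) (ObsF h) = ObsF (\i -> f (g i) (h i))

-- ... and for observables-as-symbols, where evaluation additionally
-- needs an environment resolving the named symbols. The symbolic
-- tree can also be traversed for analysis, which the opaque
-- functions of ObsF cannot.
evalS :: (String -> Double) -> ObsS o -> o
evalS _   (Konst x)     = x
evalS env (Named n)     = env n
evalS env (Lift2 f a b) = f (evalS env a) (evalS env b)
```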

3.3.5 Relevance
The heart of any policy intervention for antibiotic development can, as argued in Section 1.2, be viewed as a compositional contract, where some components (in the real world) may, in the terminology of Andersen et al. (2006), be “implied” while others are “express”. In Section 1.2 we established our view of policy as behavior, and of interventions as mappings from state to state by means of additions, alterations, or eliminations of behavior. In that terminology, we can at this point only conclude that contracts somehow are interacted with within behaviors. We cannot conclude how behaviors interact with contracts, only that they should. Understanding this interaction is an important question that, however, resides beyond the scope of this research. While we in Chapter 8 and Chapter 9 examine a number of stylized contracts for common policy interventions, I here offer a few natural-language examples that serve as indications of how policy can be formalized using compositional contracts. A partially delinked market entry reward is essentially a prize awarded to a developer who successfully brings a new eligible antibiotic to market. This can for example be modeled as a when contract wrapped by a cond, where the observable of the outer cond checks eligibility, and the observable of the inner when checks whether the antibiotic project in question has reached market. The contract within the when would then specify the prize transfer, using the simple combinator one. Note that if one insists on exchange duality, then the observable that checks whether the antibiotic

has reached market could for example be replaced by a sequential conjunction of two transfers, where the first is a voucher that proves market entry and the second is the prize. A fully delinked market entry reward, on the other hand, is essentially a partially delinked market entry reward where the intellectual property of the antibiotic in question is transferred to the benefactor. As such, the ‘only’ difference between the contracts of a partially and a fully delinked market entry reward is the reciprocal transfer of the antibiotic IP, which in terms of compositional contracts simply equates to the prize being one leg in a parallel conjunction (meaning the and combinator) where the IP transfer is the other.
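Using a purely symbolic sketch of the combinators mentioned above (the observable and resource values are illustrative placeholders, and no reduction semantics is given), the two rewards can be written down as follows:

```haskell
-- A symbolic fragment of a contract language in the style of
-- Peyton Jones and Eber (2003), just large enough to express the
-- two market entry rewards. Obs and Resource are placeholders.
data Obs = Eligible | OnMarket deriving (Eq, Show)

data Resource = Prize | AntibioticIP deriving (Eq, Show)

data Contract
  = Zero
  | One Resource               -- immediate transfer of a resource
  | Give Contract              -- invert the direction of transfers
  | And Contract Contract      -- parallel conjunction
  | Cond Obs Contract Contract -- branch on an observable
  | When Obs Contract          -- wait until an observable holds
  deriving (Eq, Show)

-- Partially delinked market entry reward: if the antibiotic is
-- eligible, then when it reaches market, pay the prize.
partiallyDelinked :: Contract
partiallyDelinked = Cond Eligible (When OnMarket (One Prize)) Zero

-- Fully delinked: the same, except the prize is one leg of a
-- parallel conjunction whose other leg is the reciprocal transfer
-- of the antibiotic IP.
fullyDelinked :: Contract
fullyDelinked =
  Cond Eligible
       (When OnMarket (One Prize `And` Give (One AntibioticIP)))
       Zero
```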

4. Methodology

This chapter begins by situating the research within the design science research (DSR) paradigm, and then provides a brief overview of the research strategy. The research output is then described in the language of design science research artifacts, followed by a section on the evaluation strategy used to establish the utility of the designed artifacts in light of the research problem at hand. Lastly, the delimitations of this work are staked out.

4.1 Paradigm
Following the plea of Weber (1987), this information systems research will be carried out within an explicit paradigm, specifically the design science research (DSR) paradigm. This in turn means that the work rests on the philosophical foundations of pragmatism (March & Smith, 1995), meaning that we are concerned with what works rather than with what is true. The knowledge produced is thus prescriptive rather than descriptive, and specifically surrounds the question of how to simulate policy interventions, aimed at stimulating antibiotic development, in order to allow alignment of simulation models. While the main contribution of this thesis is the language of policy intervention contracts for the domain of antibiotic development, instances in the language could of course be used within simulation models to produce policy-relevant prescriptive knowledge for the domain.

4.2 Research strategy
This work was conducted in two major, and overlapping, stages. First, six simulation experiments were devised, executed, and separately published. The intent was not only to provide concrete decision-support for policy-makers in general and within DRIVE-AB in particular, but also to gain a practical understanding of the complexities involved in simulation modeling of policy interventions for antibiotic development. One of these published experiments is presented in detail as a case in Chapter 5, and the rest are summarized. Second, the learnings gathered from these published experiments were used to inform the design-science-oriented search for a more abstract and subsuming model, capable of capturing policy interventions for antibiotic development as contracts. Specifically, the cases led to a set of questions that here form the

foundation of what, in design science research, is known as the objectives of a solution. These derived objectives are given in Chapter 6. Methodologically, the second stage of this work is loosely based on the design science research process (DSRP) as presented by Peffers et al. (2006), with the addition of the notion of a problem space, which I will refer to as a design solution space by drawing on Simon (1996, p. 109). Employing a solution space has been discussed by other design science researchers (Hevner et al., 2004), but here it is particularly useful as it helps connect the structure of causal models of antibiotic development in general with the structure of models for contracts and contract offers underlying policy interventions, by ensuring that the latter makes sense in the context of the former. In summary, the strategy has, from a design science research perspective, entailed:

1. Identifying the problem (Chapters 1 and 2).
2. Formulating some objectives of a solution (Chapter 6), by drawing on the problem formulation (Chapters 1 and 2), the theoretical framework (Chapter 3), the experiments (Chapter 5), and my unique position as a participant in DRIVE-AB (Section 1.4).
3. Identifying a design solution space (Chapter 7) by drawing on the problem formulation (Chapters 1 and 2).
4. Designing a proposal (Chapter 8) that fulfills the objectives (Chapter 6) and is useful in the solution space (Chapter 7).
5. Evaluating the proposal (Chapter 9) by demonstrating utility.

Peffers et al. (2006) make a point of evaluating the proposal in light of the objectives, but in this work the proposal is so intimately tied to the objectives that demonstrating the connection appears superfluous.

4.3 Research output
Following Carter et al. (2015), I refrain from dressing up information technology artifact instantiations as research contributions and instead argue that the proposed model, along with its constituent fundamental (terminal) and composite (non-terminal) constructs, constitutes a domain-specific definition of information. In essence, the claim is thus that the main contribution of the thesis is an information model. In the terminology of March and Smith (1995), the contribution consists of constructs, models, and methods. By casting the search space defined in Chapter 7 in the language of design science research, using the terminology from March and Smith (1995), we realize that the expected output of this research is a model and its constituent constructs. More specifically, the objectives reported in Chapter 6 are constructs, while the proposal reported in Chapter 8 constitutes a model. The constructs are conceptual while the model is concrete, in the sense that the model is an implementation of the constructs. The model should not be considered an instantiation in the sense of design science research as it is void

of almost all context-specificity. The fact that the model is implemented in Haskell (or a functional language for that matter) is entirely incidental. The model, as built from the constructs, constitutes a domain-specific language for expressing policy interventions for antibiotic development as contracts and offers. The research output of this thesis is, in the terminology of Gregor and Hevner (2013), by definition a “level 2” contribution, seeing that the deliverable is comprised of models and constructs. While I therefore cannot claim that the contribution is a “well-developed design theory about embedded phenomena” (level 3), I do not have to settle for a “situated implementation” of an artifact (level 1). In reality, the contribution probably resides somewhere in between, namely at the level of “knowledge as operational principles/architecture”, or what Gregor and Hevner (2013) call “nascent design theory”. In the same paper, Gregor and Hevner (2013) propose that we can think of design science research contributions in terms of (at least) two dimensions: solution maturity and application domain maturity. The question is how well-studied, how known, our solution and application domain are. A simplified version of the framework is depicted in Figure 4.1. Gregor and Hevner (2013) propose terms to describe the quadrants of the framework. If a new solution is proposed to a new problem, then we are concerned with “invention”. If a new solution is proposed to an old problem, then we are concerned with “improvement”. If a known solution is proposed to a new problem, then we are concerned with “exaptation”. Finally, if a known solution is proposed to a known problem, then we are merely concerned with “routine design”, and this should not be considered a scientific knowledge contribution. This thesis appears, in the terminology of Gregor and Hevner (2013), to be concerned with exaptation, since I am applying known solutions to a new problem.
The known solutions together constitute the theoretical framework as reported in Chapter 3: specifically, agent-based modeling; the REA ontology, originally introduced by McCarthy (1982) but as reconceptualized by Geerts and McCarthy (2000b) amongst others; and the financial contracts model of Peyton Jones and Eber (2003) along with some derivative works (Andersen et al., 2006; Stefansen, 2005). The ‘new problem’, reported in Section 1.1, is that of social simulation model alignment within the domain of policy interventions for antibiotic development. It could be argued that whatever holds for social simulation model alignment in general should also hold for social science simulation model alignment of policy interventions for antibiotic development specifically. To this I respond that simulation model alignment in the social sciences is, in general, not yet sophisticated enough to deal with the issues I report in Section 2.6. The low application domain maturity does thus not stem from a lack of solutions to the problem of modeling policy interventions for antibiotic research and development, as such works, while limited in number, do indeed exist (see Section 1.5). Instead, the low maturity stems from a lack of attempts

to design a subsuming model that can be used to unify (align) all these now distinct approaches.

[Figure 4.1 depicts a two-by-two matrix with solution maturity (low/high) on the vertical axis and application domain maturity (high/low) on the horizontal axis. Its quadrants are: Improvement (develop new solutions for known problems; research opportunity and knowledge contribution), Invention (invent new solutions for new problems; research opportunity and knowledge contribution), Exaptation (extend known solutions to new problems, e.g., adopt solutions from other fields; research opportunity and knowledge contribution), and Routine Design (apply known solutions to known problems; no major knowledge contribution).]

Figure 4.1. Design Science Research knowledge contribution framework (Gregor & Hevner, 2013).

While I claim exaptation in the sense of Gregor and Hevner (2013), I wish to emphasize that one could argue invention. As made evident in Chapters 6 and 8, non-mechanistically combining the so-called ‘known’ solutions, meaning all the disjoint parts of the theoretical framework, into a single coherent argument was non-trivial, to the point where one could argue that the solution was in fact not entirely “known”. While I do not argue invention, I wish to emphasize that Gregor and Hevner (2013) present the framework as two continuous dimensions, and thus argue that the contribution likely lies somewhere between exaptation and invention, since the solution is somewhere between known and unknown.

4.4 Evaluation

Whereas natural science tries to understand reality, design science attempts to create things that serve human purposes. [...] Its products are assessed against criteria of value or utility – does it work? (March & Smith, 1995)

Evaluation is a key activity in design science research, and advice on how it should be executed is plentiful (March & Smith, 1995; Pries-Heje et al., 2008; Venable, 2006). Yet, how to design an evaluation strategy for a contract language designed to express policy interventions for antibiotic development for the purpose of simulation model alignment is not obvious. The true test of utility would be to use the language to first (and exhaustively) encode every proposed policy intervention available in the literature and then proceed to align all encoded interventions by means of syntactic alignment as described in Section 2.6. Such an undertaking is, in the context of this thesis work, unfortunately prohibitively expensive. In lieu thereof, a proof by construction is offered in Chapter 9 to show that some key facets of key policy interventions indeed can be encoded in the language. All this is along the lines of what Hevner et al. (2004) might call an “analytical” evaluation by means of “static analysis”.

4.5 Delimitations
This work prefers semantic simplicity and elegant fundamentality over the time or space efficiency of executable artifacts. In the code examples provided, performance is readily sacrificed in favor of simplicity. Along the same lines, the code presented makes sparing use of syntactic sugar that might unduly confuse readers with a surface-level understanding of Haskell. On the topic of technicalities, neither denotational nor operational semantics are provided for the proposed language. This work strives for the ‘bigger picture’ and thus emphasizes that it indeed is possible to capture important facets of important policy interventions using a fairly simple composable contract language. In this vein, precise semantics beyond what in this thesis is already expressed in natural language are not of great importance. Similarly, the proposed model (Chapter 8) is of lesser importance when compared to the conceptual constructs (Chapter 6) that it is built from. Technicalities aside, this work makes a point of discussing antibiotic development rather than antibiotic research, development, and commercialization. These other two facets are of course also of great importance, but this delimitation is put in place to reduce scope to a manageable chunk. Consider for example how, as indicated in Section 1.5, taking sales into consideration might actually require theoretical exploration of epidemiological modeling, since antibiotic use affects resistance, which in turn means that sales affect not only direct customers, but also potential customers never sold to. Commercialization is also complicated by fragmentation in regulatory standards and the fact that developers have to get their drugs approved in multiple individual regions.
Even more complications arise from the fact that some antibiotics are also sold for animal use, which too affects resistance levels, as it has been shown that resistant bacteria found in food animals treated with antibiotics eventually reach humans (Van den Bogaard & Stobberingh, 2000).

In the design solution space, given in Chapter 7, we are only concerned with single-threaded simulation models. While translating the solution space to encompass multi-threaded simulation models may in fact be simple, it is a conversion not considered here. As a final and perhaps obvious remark, this thesis is not concerned with any questions related to the economic, social, environmental, political, etc., feasibility of policy interventions. Nor are we here concerned with the practical implementation details of any given policy intervention. Instead, this thesis simply asks how we can capture policy interventions as contract offers so that we can reason about such questions.

5. Experiments

The thesis work was, as emphasized in Chapter 4, conducted in two major and overlapping stages. In the first, a series of concrete simulation experiments were carried out to provide decision-support for policy-makers within and outside of DRIVE-AB, while in the second the contract language was designed. The first section of this chapter provides a brief overview of all major experiments conducted during this thesis work. Connections between learnings acquired during the experiments and decisions made in the second part of this thesis work are provided here. The second section of this chapter gives a detailed account of the final experiment. This account is provided both to give the reader a better understanding of the complexities involved in estimating the effects of policy interventions for antibiotic development by means of simulation, and to account for the source of some decisions in this work.

5.1 Summary of experiments
While many formal and informal experiments were carried out during the course of this research, some resulted in publications. The experiments that did result in publications are listed in Table 5.1. Suffice it to say that while my name is not first on all these publications, I have had a key role in conceiving, executing, and analyzing the experiments underlying each of them.

5.1.1 Experiment 1
This experiment resulted in the first simulation publication of Task 9 in DRIVE-AB, namely Okhravi et al. (2017). We simulated go/no-go decisions of pharmaceutical developers in consideration of antibiotic projects that were publicly

Table 5.1. Simulation experiments that resulted in publications.

#  Publication                      Type
1  Okhravi et al. (2017)            Conference paper
2  Kronlid et al. (2017b)           Report (private)
3  Kronlid et al. (2017a)           Report
4  Årdal et al. (2017, Appendix C)  Report
5  Okhravi et al. (2018)            Journal article
6  Okhravi (2020)                   Journal article

reported to be in the pipeline. The 33 antibiotics then assumed to be in the pipeline were sourced from Pew Charitable Trusts (2016). Each antibiotic project was, based on target indication, matched with phase-based estimations of development times, costs, and probabilities of success taken from Sertkaya et al. (2014). Go/no-go decisions were determined on the basis of expected net present value (ENPV). To model the imperfection of information, an uncertainty parameter was, in the vein of Abbott and Vernon (2007), introduced. The experiment explored fully delinked and partially delinked market entry rewards, and made an initial attempt at parametrizing interventions, which resulted in the three intervention types that Okhravi et al. (2017) called “replacements”, “alterations”, and “additions”. Replacements replace future phases of a project with an alternative sequence of future phases. Alterations alter values of phases by means of an operator and an operand, while additions add activities to be carried out in parallel with other additions. Jumping the gun to Chapter 6, we can see how replacements are related to the conditionality construct (Section 6.10), alterations are related to the scalability construct (Section 6.11), and finally additions are related to the parallel conjunction construct (Section 6.8).
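The ENPV logic underlying such go/no-go decisions can be sketched as a backward induction over remaining phases. The discounting scheme and any numbers used with it are illustrative simplifications, not the model of Okhravi et al. (2017) or the estimates of Sertkaya et al. (2014):

```haskell
-- A development phase with a duration (in years), a cost (assumed
-- paid at phase start), and a probability of success.
data Phase = Phase
  { years    :: Double
  , cost     :: Double
  , pSuccess :: Double
  }

-- Expected net present value of entering the first remaining phase,
-- given a discount rate r and a terminal reward earned on reaching
-- market. Costs are incurred up front; later cash flows are
-- discounted and weighted by the probability of getting there.
enpv :: Double -> Double -> [Phase] -> Double
enpv _ reward [] = reward
enpv r reward (Phase t c p : rest) =
  -c + p * discount * enpv r reward rest
  where discount = (1 + r) ** (-t)

-- A go decision is then simply a positive ENPV.
go :: Double -> Double -> [Phase] -> Bool
go r reward phases = enpv r reward phases > 0
```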

5.1.2 Experiments 2 and 3

The second experiment was centered around the production of Kronlid et al. (2017b), an internal report (available upon request) used at a Task 9 meeting in London. The third experiment was centered around the production of Kronlid et al. (2017a), a public report published on the DRIVE-AB website. The underlying models of the two experiments are very similar, which is why we discuss them both at the same time. These experiments introduced the idea of a “discovery rate” (or entry rate), which elucidates how early this work was delimited from antibiotic research in favor of antibiotic development.

These experiments considered two kinds of organizations, namely big pharmaceutical organizations as well as small and medium-sized enterprises (SMEs). This division was introduced to explore the idea of scarcity of capital: big pharmaceutical organizations were assumed to have virtually infinite capital but to be more selective about where said capital is applied, while SMEs were assumed to have finite capital and to be less selective about where it is applied. SMEs in these experiments had to acquire venture capital, at costs of capital higher than those they themselves use to make decisions, in order to fund development. This shows how agent-based thinking was at the core of this thesis work from the outset.

These experiments also explored the idea of multiple types of antibiotic projects, where different interventions can target different types or multiple types of antibiotics. This provoked the need for the eligibility dimension of the conditionality objective (Section 6.10.1). Remnants of the complexity of managing multiple qualitatively different antibiotics can be found in the proposal (Section 8.5), where non-commodity-like resources (such as an antibiotic project) are instead treated as a pair of an identifier and a list of exhibited properties.

Lastly, the experiments explored project-dependent characteristics in the shape of grants that fund a percentage of projects with a percentage of the costs of some phase for the project in question. The need to express grants that target particular phases provoked the need for the conditionality and causality objectives (Sections 6.10 and 6.12). The need, on the other hand, to alter grant sizes based on phase costs provoked the need for the scalability objective (Section 6.11). Finally, the need to introduce a level of serendipity in who gets a grant and who does not provoked the second dimension of the conditionality objective, namely availability (Section 6.10.2).

5.1.3 Experiments 4 and 5

The fourth experiment is the foundation of Appendix C of the final report of DRIVE-AB (Årdal et al., 2017). The fifth experiment led to the special issue publication Okhravi et al. (2018). The two models are very similar to each other and to those of Experiments 2 and 3. All significant differences are found in the sophistication of the output analysis. Årdal et al. (2017), for example, present heatmap plots that show how combinations of what is known as push and pull funding, specifically market entry rewards and grants, under various magnitudes, can or cannot collaborate to incentivize pharmaceutical developers to reach go-decisions. Both publications report sensitivity analyses, and while Årdal et al. (2017) discuss multiple types of antibiotic projects in the vein of earlier experiments, Okhravi et al. (2018) focus entirely on sensitivity analysis.

5.1.4 Experiment 6

The sixth and final experiment took a slightly different approach in an attempt to reduce assumptions and instead focus on analysis of output data. While all work up to this point is joint work, it should be noted that the modeling, simulation, and analysis were performed almost exclusively by myself, or by myself in collaboration with Simone Callegari. This final experiment, however, led to the publication Okhravi (2020), of which I am the sole author and contributor.

5.2 Detailed case

Issuing monetary incentives, such as market entry rewards, to stimulate private firm engagement has been championed as a solution to our urgent need for

new antibiotics, but we ask whether it is economically rational to simply take public ownership of antibiotics development instead. We show that the cost of indirectly funding antibiotics development through late phase policy interventions, such as market entry rewards may actually be higher than simple direct funding. [...] We conclude that while indirect funding may be necessary for the current pipeline we may want to prefer direct funding as a cost effective long-term solution for future antibiotics. (Okhravi, 2020, abstract)

What follows is a fairly detailed account of Okhravi (2020), to illustrate some interesting conclusions that we can derive from modeling in this domain, but more importantly to show (1) how a simulation model for this domain might be approached, and (2) the nature of some of the assumptions that must be made along the way.

The intent of Okhravi (2020) is to explore the cost difference, for some hypothetical benefactor, between what is termed “direct” and “indirect” funding. The former (i.e. direct funding) is defined as simply paying for all antibiotic development, at-cost and when needed. By eliminating profitability requirements, funding decisions could be based entirely on public health needs. The latter (i.e. indirect funding) is the notion of issuing policy interventions that aim to incentivize others to somehow participate in antibiotic development.

While a huge number of indirect interventions are conceivable, Okhravi (2020) claims that phase entry rewards, also known as prizes, milestone-based prizes, or prize competitions, are a “prototypical” indirect intervention. A market entry reward is a common example of a phase entry reward where the reward phase is market entry. Okhravi (2020), however, asks whether the time value of money, meaning the time-sensitivity, of private developers and the state would cause some phase prizes to be cheaper than others. Or, more extremely, whether simply paying at-cost, which entirely eliminates the profit requirements of private developers inflated by time-sensitivity, would under some conditions be cheaper. Okhravi (2020) thus sets out to compare the cost difference between paying for antibiotics at-cost (i.e. direct funding) and paying for antibiotics by incentivizing private developers to pursue them with cash prizes.

Input data for the simulation model is mostly sourced from Sertkaya et al. (2014), who report data for hypothetical antibiotics targeting six different indications.
The indications are: acute bacterial otitis media (ABOM), acute bacterial skin and skin structure infections (ABSSSI), community acquired bacterial pneumonia (CABP), complicated intra-abdominal infections (CIAI), complicated urinary tract infections (CUTI), and hospital acquired/ventilator associated bacterial pneumonia (HABP/VABP). Okhravi (2020) divides antibiotic development into five phases: pre-clinical (PC), phase 1 (P1), phase 2 (P2), phase 3 (P3), and phase 4 (P4), where the term ‘phase 4’ is (unconventionally) used to denote “all activities between the end of phase 3 and the first year of sales”.

Each phase is parameterized in terms of development time, cost, and probability of success, all sourced from Sertkaya et al. (2014). Beyond development, various forms of market data are sourced from Sertkaya et al. (2014) and sampled in a similar fashion. Sales revenue is reduced when generics are presumed to enter (i.e. following patent expiry). Additional costs, such as those of post-approval studies and plant construction, are also sourced from Sertkaya et al. (2014). Okhravi (2020) assumes that the benefactor paying for the intervention is a public-sector-like agent and thus uses a lower (social) discount rate for the benefactor (i.e. the public sector) than for the beneficiary (i.e. a private developer). The social discount rate assumptions stem from Moore et al. (2004). Okhravi (2020) also introduces an inefficiency parameter that models the assumption that the benefactor, as a planning agency, might introduce additional inefficiencies. This proxies and respects the often held assumption that public institutions are less efficient than private companies. Public sector inefficiencies in Okhravi (2020) are only assumed to apply to costs. The assumption is thus that the public sector is capable of delivering results within the same time-frame and with the same probability of success as the private sector, but that doing so will cost (possibly significantly) more.

Prize intervention sizes are sampled logarithmically, on the form 10^X where X is a uniformly distributed random variable. Logarithmic sampling is employed to allow exploration of a wide space within a reasonable computational time frame, since early experiments showed that the probability of a go-decision varied greatly at low prize sizes but that very large prize sizes were required for the probability of go to approach 1.
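Logarithmic sampling of this kind can be sketched as follows. This is a minimal illustration, not the author's actual code; the bounds 10^6 to 10^10 are assumptions based on the prize-size axis of Figure 5.1:

```python
import random

def sample_prize_sizes(n, lo_exp=6, hi_exp=10, seed=42):
    """Sample n prize sizes of the form 10^X, with X uniform on [lo_exp, hi_exp].

    Sampling the exponent uniformly spreads draws evenly across orders of
    magnitude, so small and large prizes are explored equally often.
    """
    rng = random.Random(seed)
    return [10 ** rng.uniform(lo_exp, hi_exp) for _ in range(n)]

prizes = sample_prize_sizes(5)
assert all(10**6 <= p <= 10**10 for p in prizes)
```

Had the sizes been sampled uniformly on the linear scale instead, nearly all draws would have landed in the top order of magnitude, leaving the low-prize region, where go-probability varies the most, almost unexplored.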
Okhravi (2020) employs ENPV (expected net present value) as a valuation method, and suggests that ENPV is not only a suitable method for private companies to compute value, but also a suitable method for public institutions to compute probabilistic, discounted cost. This is because ENPV quantitatively considers both the risk of failure at multiple points in time and the lost opportunity cost of capital. Okhravi (2020) subsequently computes value from the following four perspectives: (1) private value is the value that the private owner assigns to the project and in turn uses as the basis for a go/no-decision. (2) Intervened private value is that same value but taking the cash prize (such as a market entry reward) into consideration. (3) Indirect cost is the cost, from the benefactor’s perspective, of issuing said prize. (4) Direct cost, finally, is the cost, from the benefactor’s perspective, of simply paying for the project whenever needed, at-cost. Computing ENPV from all four perspectives leads, correspondingly, to what Okhravi (2020) terms: (1) private ENPV, (2) intervened private ENPV, (3) indirect ENPV, and finally (4) direct ENPV. Okhravi (2020) computes private ENPV as:

\sum_{t \in T} \frac{(R_t - C_t)\, P_0}{(1 + r)^t\, P_t} \qquad (5.1)

Table 5.2. Probability (%) of go/no-decisions before interventions.

Decision    CUTI   ABSSSI   CIAI   ABOM   CABP   HABP/VABP    Min   Max   Mean
Go            89       83     77     75     75          60     60    89     77
No            11       17     23     25     25          40     11    40     23

where T is the set of all time steps of the project (i.e. all development and market years), r is the discount rate of the evaluating private agent, and R_t − C_t is the cashflow at time step t, computed by subtracting costs from revenues. P_0 is the probability of reaching the market from the point (in our case pre-clinical) at which ENPV is calculated, and P_t is the probability of reaching the market from the entry point of time step t, which means that P_0/P_t is equivalent to the probability of completing time step t − 1. Okhravi (2020) assumes that an agent will reach a go-decision without an intervention if private ENPV ≥ 0, and with an intervention if intervened private ENPV ≥ 0. Intervened private ENPV is computed as:

\sum_{t \in T} \frac{(R_t - C_t + Z_t)\, P_0}{(1 + r)^t\, P_t} \qquad (5.2)

where Z_t is the prize (if any) associated with time step t. Indirect ENPV is computed as:

\sum_{t \in T} \frac{-Z_t\, P_0}{(1 + r)^t\, P_t} \qquad (5.3)

where the only considered cashflow is the issuing of the prize, and r is the discount rate of the benefactor, i.e. of a potentially public agent. Lastly, direct ENPV is computed as:

\sum_{t \in T} \frac{-(1 + i)\, C_t\, P_0}{(1 + r)^t\, P_t} \qquad (5.4)

where i is the inefficiency fraction of the direct funding agency. Finally, Okhravi (2020) emphasizes that since indirect ENPV and direct ENPV correspond to indirect and direct cost but are expressed in terms of revenue, they will always be negative.

Okhravi (2020) then samples all distributions 2,000 times (resulting in 2,000 projects) for each indication, and computes all four valuation metrics for each project. Any project that has a private ENPV greater than or equal to zero at the outset of pre-clinical is presumed to reach a go-decision. Note that Sertkaya et al. (2014) employed a threshold of 100 million USD, so the assumption of Okhravi (2020) is even more permissive and thus inclined to ‘deflate’ the required prize size to reach a go-decision in an indirect intervention. The resulting probability that any project reaches a go-decision as compared to a no-decision is reported in Table 5.2.
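The four valuation perspectives can be sketched in code. The following is an illustrative reimplementation of Equations 5.1–5.4 under the definitions above, not the author's actual model; the toy cashflows in the usage example are invented:

```python
def enpv(cashflows, reach_prob, r):
    """Probability-weighted, discounted sum of cashflows (Eqs. 5.1-5.4).

    cashflows[t]  : net cashflow at time step t (time steps indexed from 0 here)
    reach_prob[t] : P_0 / P_t, i.e. the probability that step t is reached
    r             : discount rate of the evaluating agent
    """
    return sum(cf * p / (1 + r) ** t
               for t, (cf, p) in enumerate(zip(cashflows, reach_prob)))

def private_enpv(revenues, costs, reach, r_private):
    # Eq. 5.1: the developer's own valuation of the project.
    return enpv([rv - c for rv, c in zip(revenues, costs)], reach, r_private)

def intervened_private_enpv(revenues, costs, prizes, reach, r_private):
    # Eq. 5.2: the developer's valuation including prize cashflows Z_t.
    return enpv([rv - c + z for rv, c, z in zip(revenues, costs, prizes)],
                reach, r_private)

def indirect_enpv(prizes, reach, r_public):
    # Eq. 5.3: the benefactor's cost of issuing the prize (always <= 0).
    return enpv([-z for z in prizes], reach, r_public)

def direct_enpv(costs, reach, r_public, inefficiency):
    # Eq. 5.4: the benefactor's cost of paying development at-cost,
    # inflated by the inefficiency fraction i (always <= 0).
    return enpv([-(1 + inefficiency) * c for c in costs], reach, r_public)

# Invented toy project: three development years, then two market years.
revenues = [0, 0, 0, 500, 500]
costs    = [100, 150, 200, 0, 0]
prizes   = [0, 0, 0, 800, 0]             # prize Z_t paid at market entry
reach    = [1.0, 0.6, 0.4, 0.25, 0.25]   # P_0 / P_t per step
assert private_enpv(revenues, costs, reach, 0.1) < 0              # no-decision
assert intervened_private_enpv(revenues, costs, prizes, reach, 0.1) > 0  # go
```

Note how, in this toy example, the prize flips a no-decision into a go-decision, and how the benefactor discounts at a lower (social) rate than the private developer.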

Okhravi (2020) is focused on estimating total intervention cost as a consequence of the probability of turning a no-go-decision (no-decision for short) into a go-decision. Okhravi (2020) postulates that go-decisions should be correlated with intervention prize size, since (as is shown in the paper) intervened private ENPV is (obviously) correlated with prize size, and since go is defined as intervened private ENPV ≥ 0. Since go is a binary variable, Okhravi (2020) employs logistic regression, by first filtering out all projects that reach a go-decision irrespective of prize size (i.e. where private ENPV ≥ 0) and then fitting the following model to the remaining data:

\ln\frac{P(\text{go})}{1 - P(\text{go})} = \beta_0 + \beta_1 \log_{10}(\text{prize\_size}) \qquad (5.5)

which can be reformulated from log-odds to a prediction of P(go):

P(\text{go}) = \frac{e^{\beta_0 + \beta_1 \log_{10}(\text{prize\_size})}}{e^{\beta_0 + \beta_1 \log_{10}(\text{prize\_size})} + 1} \qquad (5.6)

and which allows solving for prize size:

\log_{10}(\text{prize\_size}) = \frac{\ln\frac{P(\text{go})}{1 - P(\text{go})} - \beta_0}{\beta_1} \qquad (5.7)

With this we can predict the probability of turning a no-decision into a go-decision, i.e. what above is denoted P(go), from a given prize size. However, we can also predict the prize size required to yield some target P(go). Note again that P(go) does not denote the general probability of a go-decision, but the conditional probability of turning a no-decision into a go-decision. Note also that the independent variable prize size is log10-transformed due to heteroscedasticity in the correlation between prize size and intervened private ENPV. Predicting P(go) from prize size yields the curves depicted in Figure 5.1.

While setting a target P(go) is a policy concern, Okhravi (2020) suggests that 90% ought to be a fair target if we are to turn the vast majority of no-decisions into go-decisions without spending extraordinary amounts on prizes for tiny increments of improvement. The prize sizes required to achieve a target P(go) of 90% are reported in Table 5.3. Note that M1 is used to denote the first market year, which in prize terms is analogous to a market entry reward.

In line with previous works (Sertkaya et al., 2014), Table 5.3 supports the view that the later a prize is awarded the higher it must be if the resulting P(go) is to remain constant. From the table we can also conclude that there is a great risk of what in Section 1.5 was called “over-incentivizing” or “overspending”. To achieve a P(go) of 90% without differentiating between target indications we must, for some given phase, issue the largest prize across all indications, since the most ‘expensive’ indication would otherwise be unable to achieve a P(go) of 90%.
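Equations 5.5–5.7 can be illustrated as follows. The coefficients are made up for the sake of the example, not values fitted in Okhravi (2020):

```python
import math

def p_go(prize_size, b0, b1):
    """Eq. 5.6: predicted probability of turning a no-decision into a
    go-decision, given a prize size and fitted log-odds coefficients."""
    z = b0 + b1 * math.log10(prize_size)
    return math.exp(z) / (math.exp(z) + 1)

def prize_for(target_p_go, b0, b1):
    """Eq. 5.7: prize size required to reach a target P(go)."""
    log10_prize = (math.log(target_p_go / (1 - target_p_go)) - b0) / b1
    return 10 ** log10_prize

# Hypothetical coefficients for illustration only.
b0, b1 = -20.0, 2.5
prize = prize_for(0.9, b0, b1)
assert abs(p_go(prize, b0, b1) - 0.9) < 1e-9  # round-trip consistency
```

The round-trip check reflects that Equations 5.6 and 5.7 are inverses of each other: solving for the prize that yields a target P(go) and then predicting P(go) from that prize recovers the target.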

[Figure 5.1 appears here: a grid of six panels (ABOM, ABSSSI, CABP, CIAI, CUTI, HABP/VABP), each plotting P(go) from 0.0 to 1.0 against prize size from 10^6 to 10^10 USD, with one curve per prize phase (P1, P2, P3, P4, M1).]

Figure 5.1. Predicting the conditional probability of turning a no-decision into a go-decision, P(go), from prize size for each combination of prize phase and target indication, using data from Okhravi (2020).

Table 5.3. Prizes yielding P(go) = 90% (million USD).

Phase    ABOM   ABSSSI    CABP    CIAI    CUTI   HABP/VABP      Min     Max    Mean
P1         79       81      92     105      85          98       79     105      90
P2        323      305     326     291     295         295      291     326     306
P3        940      884     719     863     840         907      719     940     859
P4      1,841    2,432   2,083   4,008   1,533       2,859    1,533   4,008   2,459
M1      2,456    3,066   2,598   4,494   2,188       3,800    2,188   4,494   3,100

By only looking at the face-value prize sizes of Table 5.3, one might conclude that paying later is substantially more expensive than paying early. While this, as we shall see later, is indeed true, we need a slightly more sophisticated analysis to conclude it. Okhravi (2020) emphasizes that we must also take the lost opportunity cost of capital and the risk of failure into consideration when comparing intervention costs. Hence the previously outlined formulas for indirect ENPV and direct ENPV.

The intuition behind the need for indirect and direct ENPV is as follows. Due to the high risk of project failure, an early phase prize will be successfully acquired by more developers than a late phase prize. However, as we have seen, an early phase prize can be substantially lower than a late phase prize and still achieve the same probability of turning no-decisions into go-decisions, meaning the same P(go). In essence this means that while we pay less every time we pay an early phase prize, we pay more times. Similarly, we pay more every time we pay a late phase prize, but we pay fewer times (since fewer projects make it that far). The question is therefore the following: when taking into account (1) the lost opportunity cost of capital by means of a discount rate, meaning the time-value of money, and (2) the probability of not having to pay due to project failure, which prize phase is cheaper? Consequently, we cannot use plain prize size when comparing costs of an indirect intervention but must instead use indirect ENPV.

Okhravi (2020) computes the within-subject cost difference between indirect ENPV and direct ENPV, for every prize phase and every indication, but only for projects with an intervened private ENPV greater than or equal to zero. The removal of projects where the intervention has no effect is important, as it is nonsensical to compare the cost of an intervention that doesn’t work with the cost of one that does.
The within-subject cost difference is computed as indirect ENPV subtracted from direct ENPV, which results in a positive number if direct funding is cheaper and a negative number if indirect funding is cheaper. This is because both direct and indirect ENPV are costs expressed as cashflows, that is, necessarily negative numbers. The within-subject cost difference computed as above thus corresponds to the cost savings of direct funding, for some given project.
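Given the sign conventions of Equations 5.3 and 5.4, the within-subject comparison amounts to a single subtraction. The following sketch uses invented numbers purely to illustrate the sign logic:

```python
def cost_savings_of_direct(direct_enpv, indirect_enpv):
    """Within-subject cost difference: indirect ENPV subtracted from
    direct ENPV. Both inputs are costs expressed as (negative) cashflows,
    so a positive result means direct funding is cheaper."""
    return direct_enpv - indirect_enpv

# Invented example: direct funding costs 400M (ENPV -400) while the
# prize-based indirect intervention costs 950M (ENPV -950).
assert cost_savings_of_direct(-400.0, -950.0) == 550.0   # direct cheaper
assert cost_savings_of_direct(-900.0, -400.0) == -500.0  # indirect cheaper
```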

While we have at this point established a way of computing the capitalized, expected value of the cost difference, we must find a way to reason about cost difference in relation to P(go), prize size, and public inefficiency. Okhravi (2020) argues that cost difference should be dependent on both benefactor inefficiency and prize size, since inefficiency affects direct ENPV (by increasing development costs) and prize size affects indirect ENPV (since it is the only cashflow in the calculation). Further, since P(go) is correlated with prize size, cost difference must also be dependent on it. Plotting cost difference against P(go) and inefficiency yields a simpler interpretation than if we had logarithmically plotted the prize size instead of P(go). This is the case since P(go) is comparable across indications and prize phases, while prize size, as we’ve seen in Figure 5.1, is not.

Okhravi (2020) chooses to rerun the simulation for a fair number of combinations of P(go) and inefficiencies, rather than attempting to fit a new model. This is because it turns out that indirect ENPV is nonlinearly correlated with P(go), which means that cost difference ought to be nonlinearly correlated with it, which in turn would necessitate a nonlinear model. By then computing the mean cost savings for every combination of P(go) and inefficiency in the rerun of the simulation, Okhravi (2020) presents a heatmap essentially equivalent to the one presented in Figure 5.2. The thick line crosscutting each plot is a quadratic polynomial fitted to the points closest to a cost difference of zero. Values above the thick line denote the cost savings of direct funding, while values below it denote those of indirect funding.

To give a brief interpretation of Figure 5.2, consider the following few notes. In all indications, but especially in some, indirect funding in early phases is, virtually regardless of the target P(go) and benefactor inefficiency, cheaper for the benefactor than direct funding. Conversely, in late phases, direct funding is substantially cheaper for the benefactor than indirect funding, unless the target P(go) is very low or the benefactor inefficiency very high. Importantly, it should also be noted that the potential cost savings of choosing direct funding are, in the explored space, significantly higher than the potential cost savings of indirect funding, meaning that in the unluckiest of circumstances, indirect funding may become disproportionately more expensive.

The density of Figure 5.2, unfortunately, means not only that it captures a lot of information but also that it is quite difficult to draw conclusions from. To mitigate this, Okhravi (2020) devises a hypothetical scenario and runs the simulation a third time. In this scenario, prizes are set to sizes that yield a P(go) of 90% while benefactor inefficiency is held constant at 50%. The cost savings of direct funding (as compared to indirect funding) can then be computed in the same way as before. The resulting cost differences of this scenario are reported in Table 5.4, while the actual costs are depicted in Figure 5.3.

Figure 5.3 shows that the cost of direct funding, in the simulated scenario, remains comparatively constant across phases, despite minor variances in the

[Figure 5.2 appears here: a grid of heatmap panels, one row per indication (ABOM, ABSSSI, CABP, CIAI, CUTI, HABP/VABP) and one column per prize phase (P1, P2, P3, P4, M1), plotting P(go) (0.5–0.9) against inefficiency (0.0–1.0), colored by mean cost savings (500–5,000 million USD).]

Figure 5.2. Mean cost savings of direct funding (above the thick line) and indirect funding (below the thick line) per combination of P(go) and direct funding inefficiency. Lines delimit bins of 250 million. Using data from Okhravi (2020).

Table 5.4. Mean cost savings of direct funding (in million USD), per market entry, at a benefactor inefficiency of 50%, over indirect funding with prize sizes that yield P(go) = 90%.

Phase    ABOM   ABSSSI    CABP    CIAI    CUTI   HABP/VABP      Min     Max    Mean
P1       -448     -451    -371    -297    -421        -438     -451    -297    -404
P2        -47     -118     -57    -163    -141        -238     -238     -47    -127
P3        336      258      15     215     200         188       15     336     202
P4        372      837     570   1,993     137         928      137   1,993     806
M1        576    1,001     692   1,894     402       1,225      402   1,894     965

sample of projects that happen to be stimulated to a go-decision by the prize in question. Compare this to how the costs of indirect funding rise when prizes are paid in later phases. In the simulated scenario, both Figure 5.3 and Table 5.4 support the view that early phase prizes are cheaper than direct funding, but conversely that late phase prizes are (sometimes substantially) more expensive than direct funding. Both Figure 5.3 and Table 5.4 further strengthen the claim that designing a ‘one size fits all’ intervention might result in either significant overspending or understimulation. Lastly, we again observe that the possible cost savings of choosing direct funding over late phase prizes greatly outweigh the possible cost savings of choosing indirect funding over early phase prizes.

Let us conclude this section by highlighting some of the policy-relevant conclusions that Okhravi (2020) draws from this analysis. Previous authors (Sertkaya et al., 2014) have pointed to the principal-agent problem whereby awarding prizes too early incentivizes developers to overstate the potential of their drug candidates to ensure prize eligibility, and possibly even to abandon development after receiving funding. Okhravi (2020) likens this to ‘buying the pig in the poke’, but argues that even if early phase prizes are too uncertain, late phase prizes seem to be too expensive. This means that if we want to avoid the risks associated with early phase prizes then there is ample reason to further investigate the potential of direct funding rather than blindly jumping into late phase prizes.
Recommending late phase prizes over direct funding may prove difficult if, as indicated, they may result in additional costs of up to around 1.9 billion USD.

Concerns have been raised, for example by Rome and Kesselheim (2019), about the economic sensibility of “pull” interventions for antibiotics. Specifically they analyzed market exclusivity vouchers, but Rome and Kesselheim (2019) also state that market entry rewards in excess of 1 billion USD are “politically unpalatable and financially unsustainable unless they are accompanied by a method of revenue generation”. Yet, they were met with critique from Boucher et al. (2020) for neither taking societal value into account nor providing alternatives. In the approach of Okhravi (2020), however, the question of societal value is irrelevant since the analysis is ‘within-subject’ as

[Figure 5.3 appears here: costs of funding (roughly 500–3,500 million USD) plotted for direct and indirect funding, for each indication (ABOM, ABSSSI, CABP, CIAI, CUTI, HABP/VABP) within each prize phase (P1, P2, P3, P4, M1).]

Figure 5.3. Costs per market entry (in million USD) of indirect funding (at prize sizes that yield a P(go) of 90%) and direct funding of the same projects (at a benefactor inefficiency of 50%). Outliers not shown.

opposed to ‘between’. In fact, direct funding may arguably result in higher societal value, since the benefactor has the ability to prioritize projects entirely on the basis of societal need while completely neglecting private industry’s perspective on prospective profitability.

Further, the complexities of prize sizing are not limited to the balance between over-incentivizing (Okhravi, 2020; Okhravi et al., 2018; Okhravi et al., 2017; Sertkaya et al., 2017; Towse & Sharma, 2011) and under-incentivizing (Okhravi et al., 2018; Sertkaya et al., 2017); it has also been noted that financial rewards can be unrelated to a drug’s societal value and/or innovativeness (Rome & Kesselheim, 2019). Rex and Outterson (2016) even argued that “rewarding all drugs with the same payments could create perverse incentives to produce drugs that provide the least possible innovation”. Direct funding, on the other hand, entirely circumvents the issue by removing private profitability from the equation.

This is not a radical suggestion. Interventions effectively similar to direct funding can be enacted without involvement of the public sector, through non-profit enterprises or public benefit corporations (Årdal et al., 2017; Outterson & Rex, 2020), which, as postulated by Outterson and Rex (2020), can play an important role before the political will to enact reward-based interventions is in place. On the flip side, Singer et al. (2020) suggest a two-pronged approach where indirect funding ensures short-term extraction of the antibiotics currently in the pipeline, and direct funding ensures cost-effective, long-term access to new antibiotics.

6. Objectives of a solution

In light of the design science research process (Peffers et al., 2006), the objectives of a solution are here inferred from the problem described in Chapters 1 and 2, the literature as accounted for in Chapter 3, the concrete simulation experiments reported in Chapter 5, and finally my unique position as a participant in DRIVE-AB, described in Section 1.4.

Based on these four sources, I composed a long (but by no means exhaustive) list of significant challenges faced when attempting to model policy interventions aimed at stimulating antibiotic development. Not all of these issues, however, pertain to the research question at hand. The challenges that do pertain to the research question have instead been rephrased as objectives and are outlined here. They stake out what must be expressible in a language of policy interventions aimed at stimulating antibiotic development.

The key question driving all these objectives is how a prospective beneficiary can value their (current and prospective) portfolio in the presence of an offer to enter into a contract, or a contract they have already entered into. This is in order to determine what opportunities to pursue, in what order, and what to do with the rest. In the evaluation, performed in Chapter 9, we tie back to the objectives reported here, in order to show how the proposed contract language satisfies them.

Note that we do not deal with the question of how to abstract over valuation methods, or even how to transform a policy intervention into a form suitable for valuation. Nor do we deal with how to evaluate multiple policy interventions simultaneously. Recall that the research question does not ask how to deal with policy interventions, only what they ought to look like to enable activities such as valuation.

To appreciate why these objectives are critical to simulation model alignment, consider how Okhravi et al. (2017), Kronlid et al. (2017b), Kronlid et al. (2017a), Årdal et al.
(2017, Appendix C), Okhravi et al. (2018), and Okhravi (2020) all assumed that each simulated antibiotic project, from day one, knew whether or not it was and would remain eligible for the market entry reward, that the reward was and would remain available, and that its size would remain constant. Some of the publications also modeled probabilistically awarded grants, but agents in these simulations were not even able to reason about their likelihood of receiving them. Both these assumptions are simplistic at best, and discharging the first one would likely lead to (among other things) inflation of the required prize sizes reported in these publications. This is because increased risk (by means of uncertain eligibility and availability) must be offset by increased reward.

Before further elucidating these objectives we must discuss what might usefully be described in terms of known unknowns and unknown unknowns. A contract language should help policy intervention designers to design contract offers that contain known unknowns from the perspective of evaluators of the contract or offer, meaning from the perspective of, for example, potential beneficiaries. To elucidate the distinction between known unknowns and unknown unknowns in this context, consider how a benefactor might choose to, in the future, withdraw their contract offer or disregard all obligations of some existing contract without conferring with anyone. From the perspective of anyone else this is an entirely unknown unknown. It is not only unpredictable, but unexpected. While we may be varyingly successful in reasoning about the likelihood of various agents performing such sudden actions, such questions are outside the scope of this thesis.

Now let us consider a known unknown. If a grant contract offer is encoded such that its availability varies depending on whether there is currently any money available to hand out, and the grant later becomes unavailable due to a lack of remaining funds, then this is a known unknown. It is, in a sense, entirely unsurprising. It might be that no one could have predicted exactly when the funds would run out, yet everyone knew that it could happen and what the cause (but not necessarily the root cause) of the unavailability would be.

All this is to say that in this thesis we are concerned with allowing contract designers to encode known unknowns into a contract in a way that makes it possible for contract analyzers, for example potential beneficiaries, to reason about the current and future states of the contract in light of these contingencies. We now move on to describe each of the objectives of a solution in detail.

6.1 Compositionality

Underpinning all the other objectives is that of compositionality. As discussed in Chapter 1, the research question specifically pertains to composable contracts and not merely to contracts. Again, the emphasis is, in the vein of Peyton Jones et al. (2000), to enable the construction of complex contracts from simple building blocks, instead of producing a tremendously long list of policy intervention contracts that are all entirely unique.

6.2 Actualizability

Thus far we have explored contracts without addressing how a policy intervention applicable to anyone can be encoded as a contract between some known concrete parties. The term policy intervention is ambiguously used in the literature, and can refer to either state-to-state changes (see Equation 1.2), offers to enter into a contract, or contracts already entered into. At the macro-level a

policy intervention alters the available offers in a system. On the micro-level a policy intervention can be enacted as a contract between two agents. Consider for example how we might ask whether a particular agent is eligible for a given intervention. In that question we are assuming that the macro-level intervention has been introduced to the system, and we are instead referring to whether the policy intervention is applicable on the micro-level, meaning whether some set of participants actually can enter into a contract. We are however not referring to an actual contract between some actual agents but to some potential contract between some potential agents. In this sense, a policy intervention on the micro-level is an offer to enter into a contract, not an actual contract. An offer is in this thesis used to mean a formally structured, binding proposal to enter into a contract. Offers are thus similar to, but not exactly equivalent to, quotes in business. Contracts are specific, in the sense that they must be agreements between actual agents on how to transfer actual resources. Offers however are general, in the sense that they can express what a contract looks like without being concerned about which agents and which resources will play what 'roles'.
Consider for example the policy intervention sometimes called a fully delinked market entry reward. A market entry reward is a general offer that in the light of some prospective project and project owner can be concretized into an actual contract. On the general plane, meaning as an offer, a market entry reward states that when a beneficiary brings an antibiotic meeting certain eligibility criteria to the market, they will receive a prize from the benefactor in exchange for the intellectual property of the antibiotic. That is, if the beneficiary chooses to accept the offer so that the two parties enter into a contract.
To turn such a general offer into a specific contract we must thus 'fill in the blanks', or rather apply the offer to some specific agents and resources that can play a role in the contract. In this case we need two agents, namely a beneficiary and a benefactor, along with two resources, namely an antibiotic project and a prize. We can thus think of this fully delinked market entry reward (fdmer) offer as a function:

fdmer1 :: Resource -> Resource -> Agent -> Agent -> Contract

from some two agents and two resources to a contract. Unfortunately, this doesn't generalize very well. Consider for example a trilateral fully delinked market entry reward where the beneficiary brings the antibiotic to market approval, the benefactor supplies the prize, and some third party manufactures the drug. See for instance Outterson et al. (2016) for suggestions along these lines. In this case we ought to define the function as:

fdmer2 :: Resource -> Resource -> Agent -> Agent -> Agent -> Contract

meaning that we need three agents before we can build the contract. Importantly, however, offers must, just like contracts, be composable. If contracts

are composable but offers are not, then we have not gained much, since we still need to build an entire catalog of policy intervention offers where every offer is unique. Our simple conceptualization of offers as functions above appears unsatisfactory. To capture conversions from offers to contracts in the general case we need a reusable function, call it actualize:

actualize :: Offer -> Contract

that turns offers into contracts. By taking the advice of Simon (1996, p. 215) to heart we here realize that we can simplify the problem of defining compositionality for both contracts and offers by re-representing the problem in a way that makes them the same. If contracts are parametrically polymorphic over economic events then we can redefine the actualization function to:

actualize :: (e1 -> e2) -> Contract e1 -> Contract e2

where e1 is some platonically ideal event type while e2 is some concrete event type. The function (e1 -> e2) is a context-specific mapping from ideals to concrete types, meaning from 'roles to play' to 'players of roles'. The function is context-specific since players of roles can vary. If economic events represent resource transfers between agents then the ideal event type (e1) should specify ideal transfers of ideal resources between ideal agents. Similarly, the concrete event type (e2) should specify actual transfers of actual resources between actual agents. Note that 'actual' events are here not events that have occurred but specifications of events to be carried out. In the terminology of extended REA we are referring to 'commitments' rather than 'events', meaning events that ought to be carried out rather than events that have been carried out. If an actualization function like the one above exists for some e1 and e2, then we have gained compositionality of contract offers for free, by representing the problem in a way that removes the distinction between contracts and offers.
In the new representation, offers are contracts and since contracts are composable, offers are composable as well.
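To make the idea concrete, here is a minimal Haskell sketch under the assumption that Contract is a functor over its event type. The constructor names (Atom, Both), the event types, and the agents ('G10 fund', 'DevCo') are all invented for illustration and not part of the proposal:

```haskell
-- A contract is parametrically polymorphic over its event type.
data Contract e
  = Atom e                          -- a single obligation (an event to carry out)
  | Both (Contract e) (Contract e)  -- parallel conjunction of two subcontracts
  deriving (Show, Eq)

-- Actualization is a structure-preserving map over events.
instance Functor Contract where
  fmap f (Atom e)   = Atom (f e)
  fmap f (Both l r) = Both (fmap f l) (fmap f r)

-- Ideal events name 'roles to play'; concrete events name 'players of roles'.
data IdealEvent    = PrizeToBeneficiary | IpToBenefactor deriving (Show, Eq)
data ConcreteEvent = Pay String String Int | Assign String String String
  deriving (Show, Eq)

actualize :: (e1 -> e2) -> Contract e1 -> Contract e2
actualize = fmap

-- A context-specific mapping from ideals to concrete events (invented parties).
cast :: IdealEvent -> ConcreteEvent
cast PrizeToBeneficiary = Pay "G10 fund" "DevCo" 1000
cast IpToBenefactor     = Assign "DevCo" "G10 fund" "antibiotic IP"
```

Because actualize is just fmap, actualization preserves the structure of the offer, which is exactly why composability carries over from contracts to offers.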

6.3 Prospectability

Seeping through all the objectives is that of prospectability. This essentially regards the realization that if properties of an offer (such as for example the yet to be discussed notions of eligibility, availability, optionality, and scaling) vary with state, then there must be some way for contract and contract offer analyzers (such as for example potential beneficiaries) to reason about the future values of these properties.
To concretize, consider the following. Even if a project or agent is deemed eligible today, we cannot necessarily conclude that said agent or project will remain eligible tomorrow. The project might for example be further refined,

while the agent may engage in further economic activity. Consider for example phase-specific grants. If the intention of some grant is to help developers with insufficient funds to survive the transition from pre-clinical to phase 1, colloquially called the "valley of death" (So et al., 2011; So et al., 2012), then it would be irrational for such a grant to consider a phase 1 project eligible on the basis that it once was in pre-clinical.
Seen from the perspective of a contract analyzer (such as a potential beneficiary), it is critical to somehow be able to reason about how properties of a contract or offer will vary in the future. Consider for example a hypothetical grant that renews bi-annually, but accepts applications on a rolling basis, and hands out a fixed sum to each successful applicant. If all the money has been handed out and we have yet to reach the bi-annual renewal point, then it is important for prospective beneficiaries to be able to understand that the current unavailability is momentary.
A naive approach to this question might be structured like:

unfold :: Contract -> [Contract]

where a list of prospective future contracts is extracted from a contract alone. To appreciate the complexity of this approach, consider something as simple as how a grant fund behaves over time. Extensive discussions on this topic preceded the publication of both Okhravi et al. (2018) and Årdal et al. (2017, Appendix C). The discussions were however omitted from the manuscripts as we took an altogether different approach to sidestep the complexity. If a grant is based on a fund, then we might for example wonder how often that fund is renewed. We might wonder whether it always resets to the same amount every cycle or whether money simply is added at every renewal. We might wonder whether there is a particular application window such that we can only apply to the fund within that window.
We might wonder what happens if we are eligible (for example due to having a project in pre-clinical) when applying but not when the decision is made (for example since we have then been able to take the project to phase 1). The list of questions can be made long. Designing a data type to capture such state-based variations is non-trivial.
In the vein of Simon (1996, p. 215) we can simplify this problem by simplifying its representation. Instead of asking about the future states of a contract, we ask what it means for an offer to be state-dependent in the first place. Instead of demanding that a contract be able to tell us about its future states, a contract demands that we tell it the current state in order for it to tell us what it is. If contracts (and thereby offers) are functions of state, the future of a contract can be trivially computed by applying the function to some future state. To reason about the future states of a contract we should thus reason about future states. Our illustrative unfolding function can thus be re-expressed as:

unfold :: Contract -> State -> Contract

where the state of the world now is an input. It may appear perplexing that we here employ the same contract type to represent both contract input and contract output. This stems from the realization that a contract can always be viewed as a function of state even if it doesn't change over changes in state. A contract that doesn't change with state is akin to the constant function with the type (a -> b -> a). In the section on reducibility (Section 6.15) we further discuss how contracts and offers can change over changes in state by returning new versions of themselves.
The upside of this approach is the decoupling of descriptions of future states from descriptions of contracts. The downside however is that analyzing the future of a contract is turned into a black-box exercise. Since contracts are functions of state, agents have no qualitative characteristics (beyond type information) to apply heuristics to. Making informed guesses about the future of state in general is likely more difficult than making guesses about state in relation to a specific contract. Complications and trade-offs surrounding the unfolding of future states are a prime avenue for subsequent research.
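A minimal sketch of this re-representation, assuming an invented Status type and treating state as nothing more than the funds remaining in a grant pool:

```haskell
-- State is illustrative: here, the money left in a grant fund.
type State = Int

data Status = Available | Unavailable deriving (Show, Eq)

-- A contract is a function of state: tell it the current state and it
-- tells you what it is.
newtype Contract = Contract { runContract :: State -> Status }

-- A state-independent contract is akin to the constant function.
constant :: Status -> Contract
constant st = Contract (\_ -> st)

-- A grant that is available only while money remains in the fund.
grant :: Contract
grant = Contract (\funds -> if funds > 0 then Available else Unavailable)

-- Prospecting: evaluate the contract at hypothetical future states.
unfold :: Contract -> [State] -> [Status]
unfold c = map (runContract c)
```

Reasoning about the future of the contract thus reduces to reasoning about future states and applying the function to them.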

6.4 Atomicity

Rights and obligations are at the core of the financial contracts of Peyton Jones et al. (2000). Similarly, we assume that all contracts are either atomic obligations or composite contracts that 'bottom' in atomic obligations. Note that in the independent view (Section 3.2.1) there is no need to distinguish between rights and obligations since every right is an obligation for some other party and vice versa. Unfortunately, since the ideality objective states that any attempt to actualize an offer must return a contract, we cannot simply assume that all contracts bottom in obligations. Consider for example what we would return when an agent asks to actualize some offer into a contract, if the project used in the mapping function is ineligible for the contract from the perspective of the offer. We cannot return any non-empty set of obligations since no one should be obliged to do anything.
This objective thus states that a non-composite (atomic) contract must contain either one obligation or no obligations, and that it must be possible to distinguish between the two. By extension, a composite (non-atomic) contract must in every branch bottom in either an obligation or no obligation. From this view, and in the light of the prospectability objective, a contract is nothing more than a state-dependent, possibly empty, set of obligations. This objective thus captures what Peyton Jones et al. (2000) call one and zero, which in their case refer to a monetary transfer between two agents, and the lack of obligations, respectively. We here take a more general stance and suggest that an atomic contract is either one obligation or none, but that obligations are not necessarily transfers. In general, a contract can be seen as the parametrically polymorphic type

(Contract a) where a is some type whose inhabitants are obligations. In the following sections we will discuss three types of obligations, meaning three types that in this proposal together make up the type parameterized as a. These obligations are transfers, transformations, and choices. Transfers stem from both the compositional contracts of Peyton Jones et al. (2000) and the original REA formulation by McCarthy (1982). Transformations stem from REA as extended by, amongst others, Geerts and McCarthy (2000b). Choices stem from the realization, gained during the experiments, that the right to choose a path through a contract can be endowed to any arbitrary agent.
Note that this objective assumes that any definition of obligations only allows obligations that somehow hold value for someone under some circumstances. No obligation is thus not the same as an obligation of infinitesimal value. No obligation is the true absence of value. While this may appear strict, remember that it says nothing about the magnitude of said value. Claiming that a grain of sand holds value is not preposterous if a million grains of sand make a heap.
In contrast to Peyton Jones et al. (2000), this objective makes no demands on when an obligation must be fulfilled. In Peyton Jones et al. (2000) and Peyton Jones and Eber (2003), the one combinator states that the counterparty is obliged to "immediately" transfer the resource underlying the contract. The notion of 'immediacy' is however a "vague predicate" and ought to be better defined if used. To appreciate that immediacy is vague, consider the Sorites paradox: if a transfer today counts as an immediate transfer, and if an immediate transfer delayed by one second still counts as an immediate transfer, then why is an immediate transfer delayed by a month not an immediate transfer? In other words, does 'immediately' mean before end of day, before end of next work day, within the next minute, or something completely different?
Immediacy is inevitably domain-specific. To avoid premature assumptions about immediacy, we dispense with the notion altogether. Obligations should thus be understood as outstanding debt from one agent to another. Without composing such an atomic obligation contract with other contracts there is no time limit and no interest rate, which means that the agent responsible for the obligation is allowed to postpone the obligation indefinitely without any direct negative consequences. This is also visualized in Figure 6.1, where the arrow depicts the passage of time, or rather the evolution of state.

Obligation | Nothing

Figure 6.1. Contract obligations extend into infinity or until fulfilled.
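The atom of the proposal can be sketched as follows, where a Maybe-like encoding distinguishes one obligation from no obligation; the constructor names are illustrative only:

```haskell
-- An atomic contract holds one obligation or none; composite contracts
-- bottom out in such atoms in every branch.
data Contract a
  = Atom (Maybe a)                  -- Just an obligation, or Nothing at all
  | Both (Contract a) (Contract a)  -- parallel conjunction (illustrative)
  deriving (Show, Eq)

-- The 'zero' of Peyton Jones et al. (2000): the true absence of value.
zero :: Contract a
zero = Atom Nothing

-- The 'one': a single obligation, not necessarily a transfer.
one :: a -> Contract a
one = Atom . Just

-- A contract is then nothing more than a possibly empty set of obligations.
obligations :: Contract a -> [a]
obligations (Atom Nothing)  = []
obligations (Atom (Just o)) = [o]
obligations (Both l r)      = obligations l ++ obligations r
```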

6.5 Transferability

One form of obligation is the notion of a transfer of an economic resource between two economic agents. This is akin to a marriage between the one combinator of Peyton Jones et al. (2000) and a REA (McCarthy, 1982) transfer. Specifically, a transfer is a triple of two agents and one resource where one agent is the provider of the resource, while the other is the receiver. This closely resembles the definition of "transmit" from Andersen et al. (2006), which builds on the same works. Since a transfer is a form of obligation there are, as previously discussed, no requirements surrounding when the transfer must be executed. The only obligation is that it eventually must be executed. This is also visualized in Figure 6.2.

(Agent, Agent, Resource)

Figure 6.2. Transfers extend into infinity or until fulfilled.

In contrast to REA exchanges (McCarthy, 1982), neither this definition of transfers nor any other objective here outlined demands "duality". To the contrary, transfer duality is by this definition not possible, since a transfer captures the directional movement of a single resource from one provider to one recipient. It is akin to what in the "trading partner view" of Hruby (2006, p. 353) or the original REA formulation (McCarthy, 1982) is known as "increment" and "decrement" events, or to what in the "independent view" is known as "transfers". To model dual transfers, meaning the exchange of some resources for some others, we need composite contracts that can conjunct atomic obligation contracts. This is later discussed as parallel (Section 6.8) and sequential conjunction (Section 6.9) respectively. Exchange duality is omitted as a consequence of the discussion in Section 3.2.1. It is not obvious that such a demand is necessary, and Stefansen (2004) argued that it is unsettling that REA authors themselves encourage violations of the duality axiom in the name of pragmatic compromises when implementing systems.
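A sketch of the triple, with hypothetical record names (provider, receiver, resource) and invented party names:

```haskell
data Agent    = Agent String    deriving (Show, Eq)
data Resource = Resource String deriving (Show, Eq)

-- A transfer is a triple: the directional movement of a single resource
-- from one provider to one receiver. No duality is demanded here; an
-- exchange needs two transfers, conjuncted by a composite contract.
data Transfer = Transfer
  { provider :: Agent      -- who gives the resource up
  , receiver :: Agent      -- who takes the resource on
  , resource :: Resource   -- what is moved
  } deriving (Show, Eq)

-- One half of a market entry reward exchange (illustrative names).
prizePayment :: Transfer
prizePayment = Transfer (Agent "benefactor") (Agent "beneficiary")
                        (Resource "market entry prize")
```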

6.6 Transformability

The second form of obligation stems from REA, as extended by Geerts and McCarthy (2000b). If contracts can express REA transfer commitments then one might ask whether they also should allow expressing REA transformation commitments. Consider for example how a grant might be awarded to a developer of an antibiotic in exchange for the developer promising to undertake some activity, meaning to attempt some transformation such as moving the antibiotic from phase 2 to phase 3.
As discussed in Section 3.2.2, Geerts and McCarthy (2000b) argue that collections of transfer commitments form a contract, while collections of transformation commitments form a schedule. Given the structural similarities between REA contracts and REA schedules, as made evident by Figure 3.6, we here propose that both transfers and transformations can constitute commitments in a unified type which we refer to as a contract.
In Geerts and McCarthy (2000b) transformations are claimed to either "consume", "use", or "produce" a resource, where the distinction between consumption and usage is that the former decrements the resource in "chunks" while the latter may use up the resource in its entirety or so that it loses its "form so as to be unrecognizable". This definition of consumption however demands that application designers specify what constitutes a chunk, how to decrement a chunk from a given resource, and how many chunks a given transformation should decrement. Given the desire to express transformations of arbitrary resources, such as for example an antibiotic project, this definition of consume seems unduly complicated. Thus, we omit the notion of chunks and simply suggest that when a resource is consumed it is consumed in its entirety. To model the partial consumption of some resource one must match a consumption event with a production event where the ex-post resource is produced. This definition also allows omitting the otherwise ambiguous definition of resource usage.
Similar to how transfers only oblige the agent in question to transfer the underlying resource eventually, so too does a transformation not make any claims surrounding when the transformation must be fulfilled. The notion of transformability is visualized in Figure 6.3.

(Use | Consume, Agent, Resource)

Figure 6.3. Transformations extend into infinity or until fulfilled.
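A sketch of such transformation commitments, with invented names; partial consumption appears as a consume/produce pair:

```haskell
-- A transformation obliges one agent to use, consume, or produce a
-- resource in its entirety (no 'chunks').
data Kind     = Use | Consume | Produce deriving (Show, Eq)
data Agent    = Agent String            deriving (Show, Eq)
data Resource = Resource String         deriving (Show, Eq)

data Transformation = Transformation Kind Agent Resource deriving (Show, Eq)

-- Moving an antibiotic from phase 2 to phase 3, modeled as consuming the
-- ex-ante resource whole and producing the ex-post resource (illustrative).
advanceToPhase3 :: [Transformation]
advanceToPhase3 =
  [ Transformation Consume (Agent "developer") (Resource "phase 2 project")
  , Transformation Produce (Agent "developer") (Resource "phase 3 project")
  ]
```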

6.7 Optionality

The third and last form of obligation is that of options or choices. Consider for instance the "optional reward system" discussed by Mossialos (2010, p. 93) where a developer who manages to bring an eligible drug to the market gets to choose between a monetary reward and a patent. There are four key considerations in the above sentence. Who is allowed to make the choice? What are the alternatives? When can the choice be made? Finally, when must the choice be made and what happens if the choice-maker doesn't make a choice?
We can trivially conceive of alternative schemes where the benefactor (or even some external third party) plays the role of the choice-maker. Hence, the choice-maker is not bound to be the beneficiary. Consequently, there must exist some choice event (or token) that can be issued to specify what particular

choice is being made. In fact, cases where a single agent gets to choose between two subcontracts can be considered a special case of a situation where two agents can choose whether or not to activate some subcontract given to them, but where only the agent who chooses first prevails. If the two agents happen to be the same, then we have expressed a mutually exclusive choice for a single agent.

if (a1, s1) then c1

((a1, s1, c1), (a2, s2, c2))

if (a2, s2) then c2

Figure 6.4. Optionality as the obligation to choose between two things.

The introduction of a choice event, however, raises the question of whether this token can serve as an obligation as well as an event. Options, in contrast to transfers and transformations, can thus be viewed from two perspectives. On one hand we can contractually oblige agents to choose between two things, and on the other to make a particular choice in a choice between two things. The former is depicted in Figure 6.4 and the latter in Figure 6.5, where c1 and c2 both are subcontracts, s1 and s2 are tokens used to refer to the choice of c1 or c2 respectively, and a is some agent.

(a, s1 | s2, ((a1, s1, c1), (a2, s2, c2)))

Figure 6.5. Optionality as the obligation to make a particular choice.

Note again that optionality, like the other obligations, extends into infinity. While Peyton Jones and Eber (2003) hold that optionality obliges the choice-maker to choose "immediately", we here hold the previously discussed position that immediacy is a vague predicate and instead let choice be a 'blocking' contract. If we need to limit the period in which the choice is available we need more complex contract capabilities such as for example the notion of deadlines described in Section 6.13. Importantly, and in the vein of Peyton Jones et al. (2000), binary options can of course also be used to express the option between activating a contract or not, by making the empty contract atom serve as one of the subcontracts.
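The generalized case described above can be sketched as follows, where Option pairs two (agent, token, subcontract) triples and choose activates whichever branch matches the issued choice event; all names are invented. When both triples name the same agent, the option collapses to a mutually exclusive choice for a single agent:

```haskell
data Agent = Agent String deriving (Show, Eq)
data Token = S1 | S2      deriving (Show, Eq)

-- Two guarded subcontracts; the first matching choice event prevails.
data Option c = Option (Agent, Token, c) (Agent, Token, c)

-- Issuing a choice event activates the matching subcontract, if any.
choose :: (Agent, Token) -> Option c -> Maybe c
choose (a, s) (Option (a1, s1, c1) (a2, s2, c2))
  | a == a1 && s == s1 = Just c1
  | a == a2 && s == s2 = Just c2
  | otherwise          = Nothing

-- Mossialos's optional reward system, sketched: one developer chooses
-- between a monetary reward and a patent (subcontracts are plain strings).
optionalReward :: Option String
optionalReward = Option (Agent "developer", S1, "monetary reward")
                        (Agent "developer", S2, "patent")
```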

6.8 Parallel conjunctivity

Parallel conjunction is the simplest form of contract composition. It is akin to the and combinator of Peyton Jones and Eber (2003), and is visualized in Figure 6.6, where c1 and c2 are subcontracts. It captures the simple idea that the rights and obligations of a contract can be defined as the rights and obligations of two subcontracts, where the execution order of the rights and obligations in the subcontracts is entirely defined within the subcontracts themselves.

c1
c2

Figure 6.6. Parallel conjunction of contracts.

A general example of parallel conjunction is the concept of exchange duality from McCarthy (1982), or what often is called reciprocity. Any simple exchange of two resources between two agents can be expressed as a parallel conjunction of transfers. Note however again, that by restricting ourselves to only parallel conjunction and transfers, the horizon of the exchange extends into infinity, which means that while the agents indeed do 'owe' each other the resources in question they are allowed to postpone the transfers indefinitely. Parallel conjunction can however of course be used with more complex subcontracts than simple transfers.
Let us look at an example of parallel conjunction in the domain of policy interventions for antibiotics, assuming for a moment that we have the ability to express more complex subcontracts (using properties that are discussed further ahead in this chapter). Rex and Outterson (2016) present a prize-based policy intervention where prizes are paid annually over five years, after an eligible antibiotic makes it to market. The additive aspect arises as there are multiple tiers that the antibiotic can achieve. These "bonus payments" are conditioned on desirable characteristics such as whether the antibiotic is approved for oral use, is the first approved drug to act via a given mechanism of action, targets one or more urgent pathogens from the threat assessment report by the Centers for Disease Control and Prevention (2013)1, and so forth. In the context of policy interventions for antibiotic development, parallel conjunction appears to be key.
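Assuming the invented constructors below (and invented payment figures, which are not those of Rex and Outterson), a tiered reward of this kind can be sketched as a parallel conjunction folded over the applicable subcontracts:

```haskell
-- Transfers labeled with a description and an amount (figures invented).
data Contract = Transfer String Int | Both Contract Contract
  deriving (Show, Eq)

-- Parallel conjunction of a non-empty list of subcontracts.
both :: [Contract] -> Contract
both = foldr1 Both

-- Total monetary value bottoming out in the transfer atoms.
value :: Contract -> Int
value (Transfer _ n) = n
value (Both l r)     = value l + value r

-- Base payment plus achieved bonus tiers, conjuncted in parallel.
tieredReward :: Contract
tieredReward = both
  [ Transfer "base annual payment"   100
  , Transfer "oral-use bonus"         20
  , Transfer "novel-mechanism bonus"  30
  ]
```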

6.9 Sequential conjunctivity

Sequentiality captures the idea that some contracts are conjuncted sequentially as opposed to in parallel. The sequentiality objective stems from two sources. On one hand, Andersen et al. (2006) suggested dividing the conjunction contract combinator of Peyton Jones and Eber (2003) into sequential and parallel conjunction, but called the former "sequential execution of subcontracts". On the other hand, Hruby (2006, p. 336), when discussing REA commitments, gave an example of a contract with a term for "failure to sell". In that example

1Both the CDC and the WHO have since issued newer reports on antibiotic resistance threats.

the seller would be obliged to pay a penalty fee if unable to deliver the goods agreed upon in the contract.
Sequentiality is depicted in Figure 6.7, where c1 and c2 are two subcontracts and D(c1) is a predicate that yields true when c1 in some state is considered 'done'.

c1
D(c1)
c2

Figure 6.7. Sequential conjunction of contracts.

The notion of contract completion does not need to be a domain-specific definition, as it can be defined generally if we take it to mean that all obligations have been fulfilled. If we thus can determine whether an arbitrary set of obligations should be considered completed, then we can also determine whether the contract containing these obligations and rights is completed. Importantly, a subcontract can of course contain obligations involving multiple parties, and in such a case it appears reasonable that the contract only ought to be considered 'done' when all parties have fulfilled their obligations. The notion of reducing a contract to a completed or 'done' state is, in the proposal, dealt with in Section 8.3.
Let us now consider an example from the domain of policy intervention contracts for antibiotic development. Consider again the notion of a market entry reward, but let us now assume that the benefactor wishes to ensure not only that the antibiotic is market-ready before handing out the prize, but also that it is delivered to some customers according to some specification. The first subcontract (c1) thus states that the developer (meaning the beneficiary) must deliver some amount of the antibiotic to some customers, and only when this obligation is fulfilled do we enter the second subcontract (c2), which obliges the benefactor to pay the developer the prize.
We can also invert the sequentiality above to generate an alternative version of a market entry reward which too sounds like a reasonable policy intervention. In this inverted version, the benefactor is required to first pay the beneficiary, and only when the benefactor has issued the payment do we enter the second subcontract, which then obliges the beneficiary to actually distribute the antibiotic to some specified customers in some specified amount.
This contract is perhaps especially useful in a context where the payment of some large reward is divided across multiple legally distinct parties, where no single party is liable for all payments. Consider for example the hypothetical example outlined in Section 1.1, where some countries (say the G10 nations) enter into a multilateral contract that obliges them to pay a portion of the prize, based on for example GDP, to the beneficiary upon that beneficiary successfully bringing the antibiotic to market. Arguably the risk profile, from the perspective of the beneficiary, of such a contract is different from that of a bilateral contract where a single benefactor is responsible for the payment. As such, the contract could

for example be formulated as a (parallel) set of sequences where the beneficiary is responsible for delivering the antibiotic into each country only after the country in question has paid their proportion of the prize.
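A sketch of sequential conjunction with a general done predicate, where obligations are plain strings for illustration and the market entry reward example above provides the test case:

```haskell
type Obligation = String

-- Atoms carry the obligations still outstanding; Seq hides c2 until c1 is done.
data Contract = Atoms [Obligation] | Seq Contract Contract
  deriving (Show, Eq)

-- General definition of completion: all obligations have been fulfilled.
done :: Contract -> Bool
done (Atoms os) = null os
done (Seq l r)  = done l && done r

-- The currently active obligations: those of c2 only appear once D(c1) holds.
active :: Contract -> [Obligation]
active (Atoms os) = os
active (Seq l r)
  | done l    = active r
  | otherwise = active l

-- Market entry reward with delivery: the prize is owed only after delivery.
mer :: Contract
mer = Seq (Atoms ["beneficiary delivers antibiotic"])
          (Atoms ["benefactor pays prize"])
```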

6.10 Conditionality

Conditionality follows from the realization that not all contracts are linear. Peyton Jones and Eber (2003) introduced the cond combinator that builds a composite contract from two subcontracts and an observable. Observables are extensively described in Section 3.3.2. Conditionality here means that it must be possible to express a contract (or offer) as a binary choice between two subcontracts where the path chosen is determined by some predicate of the current state. This is visualized in Figure 6.8, where c1 and c2 are two alternative subcontracts, P(s) is a predicate, and s is the current state. If P(s) is true then c1 should be activated, and c2 if not.

if True then c1
P(s)
if False then c2

Figure 6.8. Conditionality of contracts.

In the context of policy interventions for antibiotic development there are at least two obvious conditions that have come up during the experiments and in DRIVE-AB discussions. These are the notions of eligibility and availability. The following two sections further elucidate the nuances of conditionality from these two perspectives.
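A sketch of the cond combinator in this state-dependent style, using an invented grant contract whose availability depends on the funds remaining:

```haskell
-- State is illustrative: the funds remaining in a grant pool.
type State = Int

data Contract = Grant Int | Unavailable deriving (Show, Eq)

-- cond builds a state-dependent binary choice between two alternatives:
-- c1 is activated when P(s) holds, c2 otherwise.
cond :: (State -> Bool) -> a -> a -> (State -> a)
cond p c1 c2 s = if p s then c1 else c2

-- A grant (invented size) that is only on offer while money remains.
availableGrant :: State -> Contract
availableGrant = cond (> 0) (Grant 50) Unavailable
```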

6.10.1 Eligibility

On the micro-level, when an agent is entertaining the thought of accepting a contract offer underlying a policy intervention, the agent might wonder whether the offer is even directed towards them in the first place. There are two aspects of eligibility: project eligibility and agent eligibility. Project eligibility regards whether some project (assuming that the contract requires a project to play a role) is eligible for the intervention, meaning is allowed to play a role in the contract. Agent eligibility regards whether some agent (presumably a project owner) is eligible for the intervention, meaning is allowed to play a role in the contract. The distinction between agent and project eligibility exists since a project can, during its lifetime, change hands by means of for example mergers, acquisitions, spinoffs, and stock sales. The project could

thus remain eligible even though the new owner is not, rendering the overall eligibility false.
To take an example of project eligibility, consider how a prize-based intervention might be designed so that only projects that meet some eligibility criteria would be allowed to receive the prize. Such criteria could be based upon public health need, innovativeness, expected levels of resistance, expected sales, and so forth. To determine public health need one could for example use the pathogen priority list, published by the World Health Organization (Tacconelli et al., 2018), that outlines what types of antibiotics are urgently needed. Such eligibility schemes might help avoid awarding substantial prizes to, say, me-too drugs. Basing eligibility on expected sales has been suggested (Okhravi et al., 2018) as a means to avoid overcompensation, in the sense of for example providing a prize to a developer for pursuing a project that the developer would have pursued even in the absence of the prize.
To take an example of agent eligibility, consider how certain ownership structures might prohibit public benefactors from supporting beneficiaries. A grant might for example only be available to non-profit organizations, or only be available to small or medium-sized enterprises. Agent eligibility might also depend on whether the agent in question has received support from the benefactor before. Previous support might for example indicate ineligibility if, say, any beneficiary is only ever allowed to receive a single grant from the benefactor. Alternatively it might indicate immediate eligibility since the benefactor, say, considers the beneficiary already 'approved'.
The here discussed eligibility objective stems from how the word 'pre-qualification' often was employed in DRIVE-AB discussions around market entry rewards.
The idea is that the eligibility of a project might be determinable in early development phases even though the prize would be paid upon market entry. The reasoning is that this would bring predictability to the developer, meaning the potential beneficiary. Being able to determine whether one now is, or in the future might be, eligible for some benefit is important if the benefit is to affect one's behavior.
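The two levels of eligibility can be sketched in Haskell. Everything below (the record fields, the sales threshold, and the one-grant-per-beneficiary rule) is invented purely for illustration; the thesis's actual language is far richer:

```haskell
-- Illustrative sketch: eligibility as two independent predicates whose
-- conjunction gates an offer. All fields and criteria are hypothetical.
data Project = Project
  { meetsPriorityList :: Bool    -- e.g. the WHO pathogen priority list
  , expectedSales     :: Double
  }

data Agent = Agent
  { isNonProfit :: Bool
  , priorGrants :: Int
  }

-- Project eligibility: priority-list pathogen with low expected sales
-- (the 1e9 threshold is an invented example to avoid overcompensation).
projectEligible :: Project -> Bool
projectEligible p = meetsPriorityList p && expectedSales p < 1e9

-- Agent eligibility: a hypothetical grant open only to non-profits that
-- have never received support from this benefactor before.
agentEligible :: Agent -> Bool
agentEligible a = isNonProfit a && priorGrants a == 0

-- Overall eligibility: a project may stay eligible while a new owner,
-- after e.g. an acquisition, is not.
eligible :: Agent -> Project -> Bool
eligible a p = agentEligible a && projectEligible p
```

Separating the two predicates makes the ownership-change scenario above directly expressible: after an acquisition only the Agent argument changes, and `eligible` may flip to false while `projectEligible` still holds.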

6.10.2 Availability
While eligibility, broadly speaking, determines path variations in a contract offer as a consequence of project or beneficiary state, availability determines path variations as a consequence of benefactor state. Consider for example how the availability of a grant might vary based on whether there actually is any grant money left in the pool or whether it has all already been handed out. We can trivially concoct an enormous number of reasons for why availability might vary. In an extreme case, a benefactor could simply wish to give the impression that they will indeed allow beneficiaries to enter into the contract in the future, when they in fact will not. While this may seem absurd, consider

for a moment how much the public health sector currently is concerned with publicly researching, discussing, and promoting possible policy interventions. Both Mossialos (2010) and Baraldi et al. (2019), for example, are reports commissioned by the comparatively tiny country Sweden. The first documents a large number of policy interventions for antibiotic development while the second explores what the role of Sweden should be. It is therefore plausible to assume that some developers today assign a probability to the likelihood that there indeed will be prizes issued in the future and that said developers might be able to secure some of them.

6.11 Scalability
While conditionality enables path variations in a contract as a consequence of state, scalability enables resource transformations as a consequence of state. This objective draws on the scale combinator introduced in Peyton Jones et al. (2000) but is generalized to arbitrary resources. Consider for example something as simple as a late payment fee or a loan, where the next payment might be computed as the outstanding amount multiplied by an interest rate. Such a contract would be laborious to express if all we had at our disposal was the ability to discretely fork a path. Scalability thus ensures that values in a contract can be continuously (as opposed to discretely) dependent on some outside state. In essence, the objective of scalability states that any time a number is mentioned in a contract or offer, it should be possible to express that number as a function of some external state as opposed to a constant. Examples of state that a contract might depend on include, but are by no means limited to, the history or current state of a project, the history of some or all other projects, the public economic actions of another agent, and so forth.
To take a concrete example, consider the suggestion of “clawbacks” (Renwick, Simpkin, et al., 2016) in relation to market entry rewards. One way to implement clawbacks is to demand repayment of grants received during development, if a project successfully receives a market entry reward. This means that the actual size of the reward depends on the history of the recipient project in question. Specifically, it depends on the amount of applicable grants that the project has received.
Another example of scalability is cost-based grants, which for instance are simulated in Årdal et al. (2017, Appendix C). Grants as a “push” (Grace & Kyle, 2009) mechanism usually aim to incentivize the beneficiary to undertake some activity and might as such for example be based on the cost of that activity.
Consider for instance a hypothetical grant that funds 50% of the expected costs of carrying out phase 1 and is paid upon entry into phase 1. The actual size of any executed grant thus depends on the characteristics of the beneficiary's project. At this point the reader might realize that for such a contract it matters whether we are talking about, say, future prospective costs,

costs incurred up to this point, or some historical costs. Such a question is too detailed at the level of solution objectives, but generally the point is that, as noted by Peyton Jones et al. (2000) when discussing observables, we need precise semantics for when a computed value is realized into an actual value. This question is dealt with in Sections 8.1 to 8.3.
A more complex dependency could be an intervention that's dependent on the realized sales of an antibiotic when (or if) that antibiotic manages to reach the market. Consider for example the proposal of an “insurance license” which might be structured as a “cap” or “cap and collar” intervention (Årdal et al., 2017). The cap specifies the minimum amount of unit sales that the benefactor guarantees the beneficiary, while the collar specifies the maximum amount before revenue sharing occurs. In this model, the amount of units that the benefactor acquires from the beneficiary depends on actual yearly sales.
In the case of the scale combinator introduced in Peyton Jones et al. (2000), from which this objective is derived, its implementation is simpler since all contracts in their language express transfers of currency. In the context of policy interventions for antibiotic development, and given the atomicity objective (Section 6.4), we cannot guarantee that all resources that underlie an atomic contract are currencies, let alone numeric. In fact, we cannot even guarantee that all resources are fungible. This problem was briefly outlined in Section 3.3.2. For example, ask yourself whether there is any sensible interpretation of the request to, say, multiply some antibiotic project by 1.5. To combat this complication I here propose that a type, call it a, only is valid as a contract atom (Contract a) if it is possible to define a scaling function of the form:

Double -> a -> a

where a decimal number and an atom form a new atom. Note that this does not demand that all things expressible as an atom in fact change their shape. If an atom denotes, say, the transfer of some specific antibiotic then scaling by some number might simply yield the same antibiotic in an unchanged fashion. Given the diversity of the above example we can, in the general case, thus only conclude that it should be possible to scale contracts based on arbitrary observable values in the vein of Peyton Jones et al. (2000) and as discussed in Section 3.3.2. Scalability is depicted in Figure 6.9 where the contract is defined as a pair of a subcontract and a function that maps from state to a scalar that the contract's obligations should be scaled by.

(State → Double, Contract)

Figure 6.9. Scalability of contracts
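The proposed constraint can be made concrete as a type class. The class name Scalable and the two example instances below are illustrative assumptions, not part of the thesis's language:

```haskell
-- A minimal sketch of the proposed constraint: a type a is a valid
-- contract atom only if a scaling function Double -> a -> a exists.
class Scalable a where
  scale :: Double -> a -> a

-- A fungible, numeric resource scales in the obvious way...
newtype Cash = Cash Double deriving (Eq, Show)

instance Scalable Cash where
  scale k (Cash x) = Cash (k * x)

-- ...while a non-fungible one may legitimately ignore the scalar, as
-- noted for the transfer of some specific antibiotic: scaling does not
-- demand that the atom actually changes shape.
newtype Antibiotic = Antibiotic String deriving (Eq, Show)

instance Scalable Antibiotic where
  scale _ a = a
```

Under this reading, a clawback or cost-based grant becomes a contract whose Cash atoms are scaled by a factor computed from project state, while non-numeric atoms pass through unchanged.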

6.12 Causality
The objective of causality suggests that some contracts and contract offers undergo what might be thought of as phase transitions as a consequence of some ‘trigger’. In physics, the term phase transition is used to denote an abrupt change of a thermodynamic system from one phase to another. Consider for example the melting of ice, or the vaporization of boiling water. Phase transitions are often incurred due to changes in the external environment. It is this detail that makes the analogy particularly useful. Upon the arrival of some (known unknown) event the contract or offer transitions abruptly from one state to another.
The causality objective stems from the realization that the two combinators when and until from Peyton Jones and Eber (2003), described in Section 3.3.1, can be combined. Both combinators take an arbitrary predicate and a subcontract and yield a new contract that in the first case is worthless until the predicate is true, and in the second case is worthless after the predicate is true. The first thus states that the subcontract only is valid as soon as the predicate has become true, while the second that the subcontract is valid until the predicate has become true. These can trivially be combined into a single combinator where the first subcontract is valid until the predicate yields true, upon which the second subcontract is valid. Achieving equivalent behavior is of course already possible in Peyton Jones and Eber (2003) but requires conjoining the two combinators with an and. The causality objective is visualized in Figure 6.10 where c1 and c2 both are contracts or offers and P(s) is some arbitrary predicate that upon the arrival of some event becomes true. The horizontal axis represents time or more

c1 c2 P(s)

Figure 6.10. Causality of contracts.

specifically the successive arrival of events, or unfolding of state. In short, contract causality suggests that a contract can have one form, but upon the arrival of some predetermined event, it will immediately shift to the other form. The first form, or rather the first subcontract, is in the figure denoted as c1, while the second as c2.
In the domain of policy interventions for antibiotic development, causality is common. Consider for example a canonical prize intervention such as a market entry reward. In a market entry reward, the benefactor pays the beneficiary a prize when the beneficiary manages to successfully bring an antibiotic specified by the contract to market. The ‘trigger’ that, in this case, causes the subcontract specifying the payment to be activated is any transformation that turns the antibiotic in question into a market-ready antibiotic.
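The combined when/until behavior can be illustrated with a toy encoding. The types below are deliberate simplifications of the contract language (a single-constructor Atom stands in for arbitrary subcontracts, and the state is reduced to a market-readiness flag):

```haskell
-- A toy sketch of the causality combinator: c1 governs until P(s)
-- becomes true, after which c2 governs. All names are illustrative.
type Predicate s = s -> Bool

data Contract s
  = Atom String                                   -- placeholder subcontract
  | Causal (Predicate s) (Contract s) (Contract s)

-- Resolve which subcontract currently governs, given observed state s.
current :: s -> Contract s -> Contract s
current s (Causal p c1 c2)
  | p s       = current s c2
  | otherwise = current s c1
current _ c = c

-- A market entry reward: develop until market-ready, then pay the prize.
-- Here the state is simply a Bool meaning 'market-ready'.
marketEntryReward :: Contract Bool
marketEntryReward = Causal id (Atom "develop") (Atom "pay reward")
```

Before the trigger fires the contract behaves as the development subcontract; once the antibiotic is market-ready it abruptly behaves as the payment subcontract, mirroring the phase-transition analogy above.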

It is important to realize that since the objective of causality states that a contract can undergo a phase transition as a consequence of some arbitrary predicate becoming true, sequential conjunction is rendered non-fundamental. This is because we can build the notion of sequentiality from the notion of causality by expressing the predicate as the fulfilment of the first subcontract. However, this would turn the definition of ‘done’, or of contract completeness, into a domain-specific definition. Since sequentiality is an evidently useful concept it would, from the perspective of simulation model alignment, be unfortunate if it was domain-dependent. With a domain-independent definition of sequentiality we can trivially determine whether two otherwise equal contracts are the same. If all sequential contracts however were expressed as causality contracts with domain-dependent definitions of ‘done’ we would first have to untangle the predicates in order to realize that they indeed are definitions of ‘done’. Said differently, teasing out sequentiality allows us to separate the concern of defining ‘done’ from whether any particular contract is to be considered ‘done’. Given the research question's emphasis (Section 1.2) on balancing fundamentality with utility we conclude that the utility of sequential conjunction outweighs its non-fundamentality and thus let it remain an objective.

6.13 Finality
The finality objective stems from the realization that if sequentiality defines what follows from subcontract performance then there must exist some dual that specifies what follows from subcontract non-performance. Recall that ‘performance’ and ‘non-performance’ are terms used to refer to adhering and not adhering to a contract respectively. In our terminology, if we have a binary definition of ‘done’ then it follows that we have a definition of ‘not done’. Finality is depicted in Figure 6.11 where c1 is the subcontract that defines

if D(c1) then ct

c1 if P(s) then cf

Figure 6.11. Finality of contracts.

the obligations that must be fulfilled, D(c1) is a predicate that yields true if c1 is ‘done’, and P(s) is an arbitrary predicate that defines by when the predicate D(c1) must be true. Finally, ct is the subcontract that follows if c1 was performed in time, meaning before the advent of P(s), and cf is the subcontract that follows if not. The path taken is thus determined by whichever of the two predicates P(s) and D(c1) yields true first.
The careful reader might at this point realize that this definition of finality renders sequential conjunction (Section 6.9) non-fundamental. Sequentiality

can be defined in terms of finality where the predicate P(s) is the false constant. A contract exhibiting such finality would thus never enter the cf subcontract since the obligations of subcontract c1 may be infinitely postponed. Subsequently, we will either forever remain in c1 or eventually enter ct. This duplication is eliminated in Section 8.1 of the proposal.
Interestingly, even the causality objective (Section 6.12) is rendered non-fundamental by this definition of finality. Causality can be expressed in terms of finality by letting the subcontract that follows after the trigger play the role of both ct and cf. Given that this requires duplicating the contract we still implement causality as a standalone combinator in Section 8.1 but it should be noted that it, performance aside, is entirely unnecessary.
Examples of finality are also readily available in the context of policy interventions for antibiotic development. Consider what has been described (Okhravi et al., 2017) as a fully delinked market entry reward, namely a reward that's awarded in exchange for the intellectual property of a market-ready antibiotic meeting some eligibility criteria. Assume that the benefactor enters into such a contract with a developer (meaning a beneficiary) today, but that the developer struggles and only manages to bring the antibiotic through approval and then transfer the intellectual property after a period of, say, 20 years. Evidently one may wonder whether this equates to performance or to non-performance of the contract. Unless a deadline was specified in the contract, this must be considered performance. A simple example of finality would thus be to restrict the subcontract containing the reward, such that it can only ever be entered if the first subcontract, namely the transfer of the intellectual property of an antibiotic meeting some specification, is performed before time step t.
The predicate in question thus checks whether the current time is greater than or equal to t.
To take a more complex example, consider a fully delinked market entry reward that's structured like a tournament where the first successful antibiotic is issued a large prize, the second a slightly smaller one, and the third an even smaller one. In this case, the contract undergoes a transition each time a prize is issued. After the third transition, it is no longer possible to receive a prize. Such a policy intervention contract could be understood by means of three nested contracts exhibiting finality. The predicate in question is of course the awarding of a prize.
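The race between the done-predicate D(c1) and the deadline-predicate P(s) can be sketched over a stream of states. This is a simplification for illustration only: an Outcome value stands in for the continuation subcontracts ct and cf, and all names are invented:

```haskell
-- Illustrative encoding of finality: whichever of the two predicates
-- fires first over the unfolding state selects the continuation.
data Outcome = Pending | Performed | Failed deriving (Eq, Show)

-- done-predicate D(c1), deadline-predicate P(s), stream of states
finality :: (s -> Bool) -> (s -> Bool) -> [s] -> Outcome
finality _ _ [] = Pending
finality done deadline (s : ss)
  | done s     = Performed   -- c1 completed in time: continue with ct
  | deadline s = Failed      -- deadline fired first: continue with cf
  | otherwise  = finality done deadline ss

-- Sequentiality as finality where P(s) is the false constant: the
-- cf branch is unreachable and obligations may be infinitely postponed.
sequentially :: (s -> Bool) -> [s] -> Outcome
sequentially done = finality done (const False)
```

With states as time steps, a delinked market entry reward with deadline t behaves as `finality transferDone (>= t)`: transferring the intellectual property before t yields ct (the reward), while reaching t first yields cf.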

6.14 Cyclicity
Cyclicity is here introduced to emphasize that while cycles in contracts, in general, can be implemented by means of recursion, it is certainly not the only way. The need for cyclicity was emphasized in Section 6.3 where an example of a bi-annually renewing grant was provided. Another example of cyclicity is loans that demand repayments on a recurring schedule until the loan is paid

back in full. The cyclicity objective is depicted in Figure 6.12 where P(s) is some predicate, s some state, Cn is a subcontract and the addition operator denotes that the rights and obligations of the subcontract are added to whatever rights and obligations are already owned.

P(s) +Cn

Figure 6.12. Cyclicity of contracts.
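As a minimal illustration of cyclicity implemented by recursion, consider the loan example: repayments recur until the outstanding balance reaches zero. The installment structure below is invented for the sketch:

```haskell
-- A toy sketch of cyclicity via recursion: a loan demanding recurring
-- repayments until the outstanding balance is paid back in full.
repaymentSchedule :: Double -> Double -> [Double]
repaymentSchedule balance installment
  | balance <= 0 = []                 -- predicate P(s): loan repaid, stop cycling
  | otherwise    = paid : repaymentSchedule (balance - paid) installment
  where
    paid = min installment balance    -- final payment may be smaller
```

Each recursive call corresponds to one turn of the cycle in Figure 6.12: the subcontract Cn (one repayment) is added again until the predicate holds.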

6.15 Reducibility
As discussed in Section 3.3.3, Andersen et al. (2006) realized that a contract can be thought of as a (possibly infinite) set of transfer sequences. Seen from a different perspective, any contract can be reduced to either success or failure (or in contract lingo, performance or non-performance) given a sequence of transfers. Put differently, a contract specifies what transfers must be executed between what agents and in what order. So, given a sequence of transfers, we can determine whether the contract ‘reduces’ to success or failure. In a nutshell, this is the notion of contract reduction.
However, in the above exposition we have ignored the complications that arise as a consequence of state-dependent values. Andersen et al. (2006) employ a predicate language in lieu of the observables of Peyton Jones et al. (2000) and Peyton Jones and Eber (2003). The predicates of Andersen et al. (2006) are mostly focused on dealing with time, while the observables of Peyton Jones and Eber (2003) are, as discussed in Section 3.3.4, insufficiently concretized to allow us to reason about implications for reducibility.
We here mix the two approaches. Contracts can be reduced to ‘done’ or ‘not done’ under any sequence of events, but ‘not done’ makes no claims surrounding the permanence of this state; it simply concludes that the contract in question has yet to be completed, which may or may not still be possible. In this definition, ‘done’ is equivalent to contract performance, but ‘not done’ is not equivalent to non-performance. “Failure” in the work of Andersen et al. (2006) denotes non-performance, meaning contract violation in the sense that the residual contract is “impossible to fulfill”. The argument here is that in the context of simulation modeling, non-performance with unspecified repercussions is less useful than demanding that all contracts specify all possible paths, including non-performing ones.
This argument was outlined in greater detail under the banner of violations in Section 3.3.3.
An event must either be a transfer of a resource from one agent to another, a transformation (refinement) of one agent's resource into another, or a choice by an agent of either the left or right path of an option. Note that

the first two types of events stem from the economic events of REA as introduced by McCarthy (1982), while the last is a pragmatic addition to avoid the non-determinism that, as highlighted by Andersen et al. (2006), arises as a consequence of trying to reduce optionality by means of structural transfer matching.
Reducibility thus simultaneously addresses three related problems: (1) matching actual transfers with expected transfers, (2) matching actual choices with expected choices, and (3) updating state-dependent values (observables) given some updated state. In all three cases, the contract must be reduced to what Andersen et al. (2006) call a “residual contract”.
To elucidate the meaning of residual contracts when matching actual transfers with expected transfers, consider an example contract expressed as a parallel conjunction of transfers, meaning a contract that contains two transfers that can be executed in parallel. Given the arrival of an event, where the event is a transfer that satisfies the first subcontract in the conjunction, the contract should be reduced to a new contract that does not consist of a parallel conjunction but only a transfer. In other words, we go from a contract specifying two transfers to a contract specifying one. If another event then arrives, and if this other event is a transfer that satisfies the second subcontract from the original contract, which now is the only contract in the residual contract, we again reduce the contract and end up with nothing (meaning the absence of obligations). The second residual contract is thus ‘done’ since it only exhibits the absence of obligations.
To elucidate the meaning of residual contracts when matching actual choices with expected choices, consider an example contract expressed as the option between two transfer subcontracts, meaning a contract that contains a choice for some agent between which of two subcontracts, each containing a single transfer, to execute.
Upon the arrival of an event equivalent to the choice of one of the subcontracts (left or right), the contract should be reduced to the residual contract containing only the chosen subcontract.
Finally, reducing state-dependent values under the arrival of any event is based upon the idea that all state-dependent values can be expressed in terms of either transfer events, conversion events, or choice events. Note that this applies to all state-dependent predicates discussed in other objectives in this section, such as conditionality and transformability. Also note that reduction as per this specification is a streaming algorithm.
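The transfer-matching case of reduction can be illustrated with a toy contract type. This is a sketch under heavy simplifying assumptions (string-labeled resources, no agents, no choices or observables); all names are invented:

```haskell
-- Toy illustration of reduction to a residual contract: matching
-- transfer events against a parallel conjunction of transfers.
data Contract
  = Done                      -- the absence of obligations
  | Transfer String           -- an expected transfer of a named resource
  | Both Contract Contract    -- parallel conjunction
  deriving (Eq, Show)

newtype Event = TransferOf String deriving (Eq, Show)

-- Reduce a contract under one event; unmatched events leave it unchanged.
reduce :: Event -> Contract -> Contract
reduce (TransferOf r) (Transfer r') | r == r' = Done
reduce e (Both a b) =
  case reduce e a of
    a' | a' /= a -> simplify (Both a' b)            -- matched the left branch
    _            -> simplify (Both a (reduce e b))  -- try the right branch

reduce _ c = c

-- Discharge performed branches of a conjunction.
simplify :: Contract -> Contract
simplify (Both Done c) = c
simplify (Both c Done) = c
simplify c             = c
```

Feeding the two-transfer conjunction one matching transfer event yields a residual contract with a single transfer; a second matching event yields Done, exactly as in the prose walkthrough above.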

7. Solution space

Every problem-solving effort must begin with creating a representation for the problem–a problem space in which the search for the solution can take place. (Simon, 1996, p. 108)

Chapter 6 outlined the objectives of a contract and offer language for expressing policy interventions for antibiotic development. This language must however, as emphasized by the research question (Section 1.2), be useful in the context of causal models of antibiotic development. As such, a model solution or search space, in the vein of Simon (1996, p. 109), is here outlined.

7.1 Behavers
The notion of ‘causal models’ (Equation 1.1) and the notion of ‘interventions’ (Equation 1.2) have both already been defined as mappings from interventions to effects, and as mappings from system state to system state, respectively. We have used the word ‘behavior’ frequently, and stated that interventions act by either introducing, eliminating, or altering agent behavior. By extension, causal models thus consist of behaviors that somehow can be altered by interventions.
At first glance, it may appear as if this conceptualization of interventions cannot capture interventions that themselves change over, for example, time. However, this can be done by realizing that in this representation, it is not that interventions themselves change over time but rather that their underlying behavior does. This is a critical realization since its logical extension is the idea that agent behavior simply is a function of some external state which may or may not include the notion of time. This aligns with how agents in multi-agent systems, or agent-based simulation models, by for example Wooldridge (2009, p. 22), are described as non-terminating processes that perceive the environment and act upon it.
This interaction between agent and environment was earlier visualized in Figure 3.1. The key pieces of this interaction are: sensor input, action output, the environment, and agents themselves. Behavior can thus be thought of as the mapping Input → Output, where the input is the agent's sensor input or interpretation of the environment and the output the agent's corresponding

action output or actuation. The notion of behaviors that change over time can in this representation be accounted for by rephrasing the mapping to:

Behavior = Input → Output × Behavior    (7.1)

meaning that behavior now not only reacts to input data by yielding output actions, but also recursively yields some new behavior. Yielding new behavior from input can semantically be thought of as the process of learning.
In order to build an actual simulation, we must of course settle on some structure for sensor input, action output, the environment, and agent behaviors. Yet, we've up to this point only established the structure of behavior and claimed that agents somehow are related to it. Since we at this stage are attempting to identify the space of possible solutions, rather than suggesting the utility of some particular solution, it is important to retain generality. As such it would be unfortunate to suggest that all agent-based models share some given input and output types. Clearly, this is an unwanted assumption since these types, in theory, would have to approximate the top type (⊤) or the ‘type of all types’. Instead we can employ parametric polymorphism and propose that behavior is a function that can be defined for any combination of input and output types. In set theory we might express this as:

∀i. ∀o. Behaviorᵢₒ = i → (o × Behaviorᵢₒ)    (7.2)

and in Haskell as:

newtype Behavior i o = Behavior (i -> (o, Behavior i o))

where Behavior is a type constructor that, when given two types corresponding to input and output, yields a new type containing a function that maps input to output and a new behavior for those two types.
To appreciate the generality of this definition of agent behavior notice that since behavior can redefine itself when given new information, we have not only captured the idea of behavior but also that of knowledge. This, since the ‘new’ behavior can be codified such that it respects whatever new information the agent has gathered. Informally, when agents encounter new information they alter their behavior such that their new behavior is equivalent to their old behavior but adapted to this new piece of information. More accurately, the new behavior is a function of the old behavior and the new piece of information.
The question of how behaviors relate to agents however remains unanswered. While there are many useful models of autonomous reasoning, such as the beliefs-desires-intentions (BDI) model (Bratman et al., 1988), committing to one at this stage is premature. The most conservative assumption we can make about the structure of agents seems to be to take a phenomenological stance, in the sense that the only environment that ultimately matters is the one inside the mind of every agent. This is akin to what Thaler and Siebers (2019) calls

the “actor” update-strategy, based on actor theory as originally proposed by Hewitt et al. (1973). In this view, there is no global environment, but only “as many local environments as there are agents” (Thaler & Siebers, 2019). In the vein of phenomenology, agents are, in this view, free to interpret and ascribe meaning to all and any information flowing their way. Agents are, in Don Quixoteian terms, free to tilt at windmills at their own discretion. This is especially useful in social simulation where interpretation as opposed to objective truth often serves as the basis for decision-making. Consider for example usage of stock market models to dictate investment decisions. The ultimate determinant of investment decisions is not some true fact of the world, but individuals' subjective interpretation of it. Even in algorithmic trading the very selection and design of algorithms are shaped by subjectivity.
Though slightly awkward, we can even argue that entirely objective events like hitting one's foot on a stool can be sensibly modeled by these phenomenological agents. If agent a1 perceives that it places a stool in position x and agent a2 perceives the placement and proceeds to claim position x, then agent a2 may perceive pain as it bangs its foot into the stool. Whether there ever was a stool, or whether any pain could have been objectively measured, is in one sense entirely a matter of perspective, and in another an entirely irrelevant question.
By combining these phenomenological agents with the recursive behavior of Equation 7.2 we find that there is no essential difference between agents and behaviors. In this sense, agents are behaviors, and should thus perhaps rather be called ‘behavers’, in the sense that Agent = Behavior.
At first sight, it may appear as if this phenomenological stance prohibits the modeling of ‘global truth’.
On the contrary, there are many ways of simulating global environments, meaning shared objective state, through local-only environments. One such way is to designate a single agent the role of an ‘oracle’, which can be thought of as ‘mother nature’. Consider for example an agent keeping a ledger of transactions on a stock exchange to determine the current balance of traders. While agents are free to interpret their balance as they please, the bank (meaning the oracle) will not confirm the validity of what it deems to be invalid transactions, such as for example attempts to overdraft. In the realm of social simulation this stance appears to be a useful starting point as it natively supports the notion of subjectivity but still allows the modeling of a single source of objective truth.
At first sight, it may also appear as if it is not possible to simulate communication without global identities. How can we otherwise, for example, determine whether we in two encounters are talking to the same agent? Fortunately, global identity is simply another example of a global environment or shared state and can thus too be simulated in a distributed fashion. ‘Behavers’ could quite simply respond to questions about identity with some token that can be used for identification. In such a solution we don't even need an ‘oracle’.
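To see the recursive Behavior type of Equation 7.2 in action, consider a behaver that ‘learns’ by replacing itself after every input. The helpers counter and runBehavior are assumptions made for this sketch, not definitions from the thesis:

```haskell
-- A usage sketch of the recursive Behavior type from Equation 7.2.
newtype Behavior i o = Behavior (i -> (o, Behavior i o))

-- A behavior that outputs how many inputs it has seen so far,
-- 'learning' by replacing itself with an updated version each step.
counter :: Int -> Behavior a Int
counter n = Behavior (\_ -> (n + 1, counter (n + 1)))

-- Feed a list of inputs through a behavior, collecting the outputs.
runBehavior :: Behavior i o -> [i] -> [o]
runBehavior _ [] = []
runBehavior (Behavior f) (x : xs) =
  let (o, b') = f x in o : runBehavior b' xs
```

The ‘new’ behavior returned at each step encodes whatever the agent has gathered so far; here that knowledge is just a count, but nothing prevents it from being an arbitrarily rich interpretation of the environment.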

7.2 State
Further, if all truth resides within agents and there is no external environment, then the objective state of a system can simply be defined as:

∀i. ∀o. Stateᵢₒ = {Agentᵢₒ} = {Behaviorᵢₒ}    (7.3)

meaning as a set of agents that essentially are behaviors. Moving to Haskell, we might for example define state as:

type State i o = [Agent i o]

but it is in fact unnecessary to couple to concrete data types such as List, and in Haskell we can instead make use of ad-hoc polymorphism and well studied type classes. To do so it would be beneficial if State was only parameterized over a single type rather than two. This provokes the question of what the essential difference is between agent input (perception) and agent output (actuation). Given what we have termed the phenomenological stance, the intention ought to be to minimize the amount of, from the perspective of agents, objective algorithms. If there is a difference between the agent input type and the agent output type however, then this by definition means that there must exist some mapping Output → Input so that the consequences of agents' actions can be consumed by agents. If we are looking to minimize objectivity then the logical conclusion is to propose that Input = Output. This means that we can redefine behavior as:

∀m. Behaviorₘ = m → (m × Behaviorₘ)    (7.4)

meaning that it is only parametrically polymorphic over a single type. Moving back to Haskell we might define behavior as:

newtype Behavior m = Behavior (m -> (m, Behavior m))

where the letter m is not used to suggest monad but the word ‘message’.
This means that all agent behavior can be encapsulated in the idea of responding with a message when receiving a message. Ironically we at this point find ourselves in a terminology similar to that of some object-oriented languages where the sole mode of communication is message passing between objects.
We can now avoid defining the concrete type of State by instead expressing type constraints on the introduction, elimination, and alteration functions. If behavior alteration is functorial:

alter :: Functor s => (a -> a) -> s a -> s a
alter = fmap

then alteration is equivalent to mapping functions over a functor, which in Haskell is known as fmap. Notice however that, unlike regular functorial mapping, the type of the transformation function is constrained to a → a rather than

a → b since behavior transformations must return new behaviors. For lists this is simply equivalent to the function, in Haskell (and in most languages), known as map.
To implement introductions and eliminations however we need to make stronger assumptions about the structure of state. Assuming that the state functor is applicative and that it forms a monoid over agents, then we can define the introduction function as:

introduce :: (Applicative s, Monoid (s a)) => a -> s a -> s a
introduce = mappend . pure

meaning that introduction simply means wrapping (pure) the agent in the ‘context’ and then concatenating (mappend) that context with whatever context we had before. For lists, this is, in Haskell, equivalent to the cons operator (:), meaning equivalent to the notion of prepending.
If we also assume that the state type is filterable:

eliminate :: Filterable s => (a -> Bool) -> s a -> s a
eliminate = ffilter

then elimination can simply be implemented as filtering based on some predicate. Note that Filterable is not a type class that ships with the Glasgow Haskell Compiler but has the following trivial definition:

class Filterable f where
  ffilter :: (a -> Bool) -> f a -> f a

Since Haskell lists are both applicative functors and filterable we get these implementations for free¹ when choosing to represent state using lists.
Importantly however, many other structures are also members of these type classes and as such this very general definition of simulation state captures many conceivable simulation structures. The spatial configuration of agents in an agent-based model varies significantly across simulations. Examples range from lattices, rings, toruses, and networks, to highly detailed environments supported by geographical information systems (Hammond, 2015). Such structural differences could thus be encoded directly in the state type. How to update such structure itself as a consequence of agent messages brings us to the next question.
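Assembling the pieces above with lists as the state functor gives a small, runnable check of introduction, alteration, and elimination. The Filterable class is the trivial one from the text, here instantiated for lists:

```haskell
-- The trivial Filterable class from the text, instantiated for lists.
class Filterable f where
  ffilter :: (a -> Bool) -> f a -> f a

instance Filterable [] where
  ffilter = filter

-- Alteration: mapping a behavior-preserving function over the state.
alter :: Functor s => (a -> a) -> s a -> s a
alter = fmap

-- Introduction: wrap the agent (pure), then concatenate (mappend).
introduce :: (Applicative s, Monoid (s a)) => a -> s a -> s a
introduce = mappend . pure

-- Elimination: keep only the agents satisfying the predicate
-- (following filter's keep-semantics, as in the text).
eliminate :: Filterable s => (a -> Bool) -> s a -> s a
eliminate = ffilter
```

For lists, `introduce 1 [2, 3]` yields `[1, 2, 3]` (prepending, as the cons analogy suggests), `alter` maps a function over every agent, and `eliminate` retains the agents for which the predicate holds. Any applicative, monoidal, filterable structure would serve equally well as state.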

7.3 Execution

At this point we have yet to talk about what it means to ‘run a model’, meaning how to actually simulate. To maintain semantic simplicity the proposed model structure ignores multi-threaded simulations and thus assumes that a simulation can be thought of as a pure function from simulation input to simulation

1In practice we would have to supply an implementation of ffilter for lists, in the vein of instance Filterable [] where ffilter = filter.

output. Do note that pure functions are fully capable of modeling randomness. Also, simulation output may still be arbitrarily complex and contain information about multiple output variables. As emphasized by Thaler and Siebers (2019), how you update agents in an agent-based model may have significant consequences for the output of the model. If our search space structure is to capture all single-threaded social simulation models then we of course cannot at this stage couple to any particular update-strategy but must maintain generality. We know that it must somehow involve messages, since the only way that agents can interact is through message passing, but we cannot make a decision as to whether, in the terminology of Thaler and Siebers (2019), the iteration-order is sequential or parallel, or whether changes are visible “in-iteration” or only “post-iteration”. We can thus only make the very general assumption that:

∀m. Next m = State m → {m} → (State m × {m}) (7.5)

or, if we make the assumption that the set of messages is ordered, we can express it in Haskell as:

next :: State m -> [m] -> (State m, [m])

meaning that the next function maps a state and some ordered sequence of input messages into a new state and some ordered sequence of output messages. Unraveling our type synonyms perhaps makes the interpretation of the next function more clear:

next :: [Behavior m] -> [m] -> ([Behavior m], [m])

Since we’ve defined agent behavior as being parametrically polymorphic over messages, the function next can be understood as a mapping from behaviors and input messages to updated behaviors and output messages. In other words, the messages on the left-hand side are input messages while the messages on the right-hand side are output messages. Importantly, the updated state, meaning the state on the right-hand side, is state that has not ‘responded’ to the messages yielded as output. This is easiest understood if we consider how agents (meaning behavers) themselves receive messages as input, and respond with an updated version of themselves and whatever messages they wish to emit as output. Other agents only update themselves as a consequence of these output messages in the ‘next’ iteration, since the output messages will there serve as input messages. It is possible that this structure or some variant of it could be reformulated as a monad, but this question is left open for future research. To summarize the design decisions made up to this point, we can wrap up our functions into the following Haskell type class:

class (Functor s, Applicative s, Filterable s) => State s where
  next :: s a -> [m] -> (s a, [m])

  introduce :: Monoid (s a) => a -> s a -> s a
  introduce = mappend . pure

  eliminate :: (a -> Bool) -> s a -> s a
  eliminate = ffilter

  alter :: (a -> a) -> s a -> s a
  alter = fmap

where s suggests state, a agent, and m message. This type class makes it evident that messages and agents (meaning behavers) are left entirely undefined (due to parametric polymorphism) while the function next merely lacks an implementation since it is assumed to be domain-specific. The latter (i.e. the next function) should remain undefined, since its implementation depends on which update-strategy the model designer wishes to employ. The two former (i.e. messages and agents) should remain undefined since they entirely depend on what the simulation model in question is actually modeling. It is possible that the work of Thaler and Siebers (2019) on update-strategies could be used to capture the notion of the next function in terms of well-studied type classes, in the sense of how we have captured introductions, eliminations, and alterations. This is however left as an open question for further research. It should be noted that since introductions, eliminations, and alterations all end in state, we can trivially compose them in order to build infinitely complex interventions that themselves contain contract definitions. In fact, a single canonical replacement function can trivially be defined in terms of introduction and elimination:

replace p b = introduce b . eliminate p

where . denotes right-to-left function composition. The replacement function removes any agents matching predicate p and introduces the behaver b. In order to answer the research question posed in Section 1.2 we must find a fundamental structure of agent messages that can be used to interact with the contracts that underlie policy interventions for antibiotic development. Importantly, the message type does not have to be concerned with behavior since we have defined message passing as the interface between behaviors, which means that behaviors can remain domain-specific.
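To make the openness of next concrete, here is one minimal, assumed update strategy for list-shaped state (not a strategy prescribed by this thesis, since next is deliberately left undefined): every behavior consumes every input message in order, and output messages only become visible in the next iteration.

```haskell
-- A minimal sketch of one conceivable next implementation for lists.
-- The Behavior type is as defined earlier in this chapter; the update
-- strategy (sequential iteration, post-iteration visibility) is an
-- assumption, and the counter agent is purely illustrative.
newtype Behavior m = Behavior (m -> (m, Behavior m))

next :: [Behavior m] -> [m] -> ([Behavior m], [m])
next behaviors = foldl step (behaviors, [])
  where
    -- Feed one input message to every behavior; collect the responses
    -- as output messages for the next iteration.
    step (bs, out) msg =
      let responses = map (\(Behavior f) -> f msg) bs
      in (map snd responses, out ++ map fst responses)

-- Illustrative agent: accumulates incoming integers and echoes the sum.
counter :: Int -> Behavior Int
counter n = Behavior (\m -> (n + m, counter (n + m)))
```

Under this strategy, snd (next [counter 0] [1, 2, 3]) yields [1, 3, 6]: each input message produces one output message per agent, and those outputs would serve as inputs in the following iteration.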
Finally, do note how the search space has been demarcated without making unnecessarily restrictive theoretical assumptions regarding how cognition and decision-making should be structured. It has for example been argued (Kahneman, 2003; Simon, 1955) that the choice processes of organisms are not entirely rational, so agent-based modelers commonly apply the notion of “bounded rationality” (Gilbert, 2008, p. 15). While such ideas are critical when implementing actual simulation models, we can safely ignore such concerns as we have encapsulated all such questions in the domain-varying notion of behavior. Whether to, for example, apply the previously discussed beliefs-desires-intentions model of Bratman et al. (1988) is thus entirely up to the simulation model designer. The search space as presented here ought to, due to its high level of abstraction, cater to many theoretical assumptions that do not demand multi-threaded execution.

8. Proposal

The more general problem may be easier to solve. (Polya, 2014, p. 109)

In this chapter we explore the proposed language for expressing contracts and contract offers underlying policy interventions aimed at stimulating antibiotic development. The proposed design addresses the problem, outlined in Chapters 1 and 2, by fulfilling the objectives, provided in Chapter 6, while remaining usable in the solution space, given in Chapter 7. The proposal builds upon the theory base, outlined in Chapter 3, and draws on the learnings from the experiments, summarized in Chapter 5, as well as my unique position as a participant in DRIVE-AB, depicted in Section 1.4. Notable contributions include: (1) enabling arbitrary decision-makers in contracts, (2) giving reduction semantics for observables and contracts with observables, (3) simplifying reduction semantics by abandoning the notion of ‘immediacy’, and most importantly (4) enabling compositionality of contract offers by means of actualization of ideals.

8.1 Contracts

The proposed contract model builds heavily on that of Peyton Jones and Eber (2003) but is further generalized by means of parametric polymorphism, simplified by eliminating non-fundamental combinators and a vague predicate, and finally extended by adding additional capabilities. The increased generalization stems from the realization that contracts can, in the sense of Andersen et al. (2006), be thought of as sets of valid event sequences, while observables, in the sense of Elliott and Hudak (1997), can be thought of as event-dependent values. Whether the events that actually are emitted and thus consumed in observables are of the same type as that of event specifications, let’s call them commitments, is entirely irrelevant so long as there exists a function of type Event -> Commitment -> Bool so that we can determine whether some event satisfies some commitment. Whether these commitments and events contain, say, economic transfers or not is likewise entirely irrelevant. All this leads us to a very general definition of contracts which can be parametrically polymorphic, in the sense of (Contract e s), over events (e) and commitments (s). Informally, the event type defines ‘what can be done’ and the commitment type defines ‘what we can commit to do’.

The notion of an observable can then, as we shall see, be defined as a type whose instances consume incoming events of type e in order to produce their observed values of some other arbitrary type. The increased simplification stems from two changes. First we simplify by merging the two combinators when and until into what we here call whentil. Secondly we simplify by eliminating the vague predicate of immediacy, which demanded that atomic obligations be fulfilled immediately without a strict definition of what immediately means. Instead we insist that all demands on immediacy be expressed in terms of deadlines with repercussions. Lastly, the language extensions entail adding support for sequential conjunction, finality (meaning deadlines), and cycles. Sequential conjunction enables linear sequencing of contract obligations, based on contract performance, in the vein of Andersen et al. (2006). Deadlines are a means to avoid having to determine irrevocable contract non-performance as is done in Andersen et al. (2006), by demanding the specification of all paths (including non-performing ones) through a contract. Cycles are introduced to allow expression of infinitely recurring contracts without using infinite recursion. The proposed model of reduction does not work on infinitely recursive contracts since all observables in a contract must be eagerly updated upon the arrival of events. This is different from the reduction semantics of Andersen et al. (2006) and Stefansen (2005). However, Stefansen (2005) argued that full recursion makes even very simple contract analysis surprisingly involved. Still, I encourage future researchers to explore whether the here proposed conceptualization of observables as functions of events is reconcilable with infinitely recursive contracts by lazily reducing contracts. We now move to discuss each contract combinator in turn.
zero :: Contract e s

The zero combinator represents the constant contract with no rights and no obligations. It represents, as discussed in Section 6.4, the true absence of value. Note that, as previously discussed, some agent’s right can always be expressed as some other agent’s obligation, and we will thus only hereon discuss obligations.

one :: s -> Contract e s

The atomic one combinator establishes a single obligation as specified by the commitment s. As discussed in Section 6.4, there are no demands set on when the obligation in question must be fulfilled, only that over an infinite time horizon it eventually must be.

or :: s -> s -> Contract e s -> Contract e s -> Contract e s

The or combinator yields a composite contract which fulfills the optionality objective (Section 6.7). When given two arbitrary commitments and two arbitrary subcontracts, it states that whichever commitment is first fulfilled determines which subcontract is to be executed. If the left commitment is fulfilled, then the left subcontract is activated, and vice versa for the right.

and :: Contract e s -> Contract e s -> Contract e s

The and combinator expects two subcontracts and yields a composite contract inheriting all the obligations of the two. As discussed in Sections 6.8 and 6.9 this combinator addresses parallel rather than sequential conjunction, meaning that the resulting contract can only be considered done when both subcontracts are considered done, but there are no restrictions placed on the order in which these two subcontracts must be completed.

andThen :: Contract e s -> Contract e s -> Contract e s

The second additive contract combinator is that of andThen, which addresses the objective of sequential conjunction (Section 6.9). The combinator for sequential conjunction has the same type signature as that of parallel conjunction, but the obligations of the subcontracts are treated differently. In sequential conjunction, the second subcontract is only activated when the obligations of the first subcontract are all fulfilled. Much like the atomic combinator one, there are no restrictions placed on when the first subcontract must be completed. Unless restrictions are added by means of the more complex combinators explained below, the fulfilment of the obligations of the first subcontract may be postponed indefinitely, which means that the second subcontract would only ever be activated in a future infinitely far away. It should be noted that while andThen is a useful combinator it is non-fundamental, and is implemented in terms of the before combinator described further ahead.

ifElse :: Obs e Bool -> Contract e s -> Contract e s -> Contract e s

The ifElse combinator captures the conditionality objective provided in Section 6.10.
It essentially mirrors the cond combinator of Peyton Jones and Eber (2003), but instead of specifying that the condition is to be evaluated at the moment of contract acquisition we specify that the condition must be evaluated upon the arrival of the next event. This combinator is the first combinator that expects an observable as input. While observables are discussed in detail in Section 8.2 we should here note that the observable in question returns a boolean and is expecting events of type e rather than s, since it is a function of events (meaning of things that ‘have occurred’) rather than a function of commitments (meaning of things that ‘should occur’). As discussed in the beginning of this section, events and commitments need not be represented by the same type.

whentil :: Obs e Bool -> Contract e s -> Contract e s -> Contract e s

The name of the whentil combinator stems from fusing the words ‘when’ and ‘until’. It combines the two combinators when and until of Peyton Jones

and Eber (2003) and addresses the causality objective given in Section 6.12. From the whentil combinator we can trivially retrieve the practically useful composite combinators when and until, which both have the type:

when, until :: Obs e Bool -> Contract e s -> Contract e s

and can be implemented as:

when o c = whentil o zero c
until o c = whentil o c zero

that is, by fixing the first subcontract of whentil to the zero constant in the first case, and the second subcontract to zero in the second case. The next combinator, before, satisfies the finality objective provided in Section 6.13. It can usefully be thought of as the deadline combinator. The combinator expects an observable that yields a boolean and three subcontracts:

before :: Obs e Bool  -- Deadline.
  -> Contract e s     -- Subcontract to perform.
  -> Contract e s     -- Subcontract that follows performance.
  -> Contract e s     -- Subcontract that follows non-performance.
  -> Contract e s

From this it returns a composite contract that is equivalent to the first subcontract until the observable yields true. When the observable is true (meaning when the deadline has been reached), the composite contract becomes equivalent to the second subcontract if the first subcontract does not have any remaining obligations (meaning that it is done), and to the third subcontract if the first subcontract does have remaining obligations (meaning that it is not done). If the first subcontract is completed before the deadline is reached we immediately enter the second subcontract. The before combinator can thus be used to model deadlines, based on arbitrary predicates, where performance and non-performance each result in an arbitrary subcontract. This combinator captures the previously discussed idea that all paths, including what might be considered non-performing paths, should be refactored into performing paths through a contract.
As emphasized in Section 6.13, this definition of deadlines renders sequential conjunction non-fundamental, since we can implement andThen as:

andThen c1 c2 = before (constObs False) c1 c2 zero

that is, by applying the before combinator to a constant observable that remains false and by setting the third (now irrelevant) subcontract to the zero contract. The function constObs is defined in Section 8.2 but essentially has the type (a -> Obs e a) and when applied to a value returns an observable that always yields that value.

everytime :: Obs e Bool -> Contract e s -> Contract e s

As discussed in the introduction to this section, the reduction semantics proposed in Section 8.3 do not permit infinitely recursive contracts, since reduction in this proposal is eager rather than lazy. To mitigate this limitation a specific combinator is introduced to deal with the notion of recurrence in contracts. The combinator expects an observable boolean and a subcontract. Each time an incoming event turns the observable boolean from false to true, this composite contract is turned into a parallel conjunction (using the and combinator) between the composite contract itself and the subcontract. This means that every time the observable becomes true the obligations of the subcontract are added to the set of current obligations, which includes the recurring contract itself. Importantly, the subcontract is only added whenever the currently consumed event causes the observable to switch from false to true, not for every event under which the observable remains true. Without this restriction the subcontract would be added upon the arrival of every unrelated event until the observable was switched back to false, which would unduly complicate the process of designing observables for cycles. The last combinator in the proposal of this thesis is the scale combinator, which expects an observable number and a subcontract and yields a composite contract where the subcontract is scaled by the factor yielded by the observable. This combinator mirrors the scale combinator of Peyton Jones and Eber (2003) but again with slightly different semantics. Peyton Jones and Eber (2003) suggest that scaling should be applied whenever the contract is acquired. This is sensible in a context where the atomic one combinator demands that the underlying obligation be satisfied immediately, but not in our context where we suggest that atomic obligations may be postponed indefinitely.
Instead, and as further described in Section 8.3, we delay the application of the scale, and hence also what Peyton Jones and Eber (2003) call the ‘sampling’ of the observable, until an arrived event is matched against a commitment. The suggestion is thus that scaling is lazy, in the sense that it is only applied to some underlying commitment when we seek to determine whether a given event satisfies the commitment or not. It should be noted that all fundamental combinators outlined above return instances of a contract type that is implemented as an algebraic sum type. This implementation is given in Appendix A but is also trivial since the type of each fundamental combinator is exactly matched by a variant of the sum type. The data type underlying contracts is also made obvious by Section 8.3 where we, in the implementation of reduction, pattern match against these constructors.
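For orientation, a sketch of that sum type could read as follows. This is consistent with the constructors pattern-matched in Section 8.3, though the authoritative definition remains the one in Appendix A; the Obs type is repeated from Section 8.2 so the sketch stands alone.

```haskell
{-# LANGUAGE GADTs #-}

-- Obs as defined in Section 8.2, repeated here so the sketch stands alone.
data Obs e v where
  Obs   :: v -> (e -> v -> v) -> Obs e v
  UnOp  :: (a -> b) -> Obs e a -> Obs e b
  BinOp :: (a -> b -> c) -> Obs e a -> Obs e b -> Obs e c

-- One variant per fundamental combinator; andThen, when, and until are
-- non-fundamental and therefore have no constructors of their own.
data Contract e s
  = Zero
  | One s
  | Or s s (Contract e s) (Contract e s)
  | And (Contract e s) (Contract e s)
  | IfElse (Obs e Bool) (Contract e s) (Contract e s)
  | Whentil (Obs e Bool) (Contract e s) (Contract e s)
  | Before (Obs e Bool) (Contract e s) (Contract e s) (Contract e s)
  | Everytime (Obs e Bool) (Contract e s)
  | Scale (Obs e Double) (Contract e s)
```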

8.2 Observables

Peyton Jones et al. (2000) suggest that their observables are akin to behaviors in the functional reactive animation model by Elliott and Hudak (1997). We

take this comment to heart and model observables as functions from events to values. However, given the choice to, as discussed in Chapter 6, treat state as a stream, or possibly infinite list, of events, not all events pertain to all observables. Consequently, any given event cannot necessarily compute the value of any given observable, and as such we should consider observables to be streaming algorithms themselves. Observables are thus stateful values whose state is updated upon incoming events. As such, the basic type of an observable must be akin to a folding function of the form e -> v -> v, where e is an incoming event, the first v the old value, and the second v the updated value. However, Peyton Jones et al. (2000) also show how unary and binary lifting can be implemented for their observables. Our observables ‘remember’ their last computed value so that we at any time can extract the current value even when the currently consumed event is unable to compute a new value for the observable. Since we must store values of the observed type we cannot compose the observing function (meaning the streaming algorithm) with an arbitrary transformation function v -> w, since the observing function expects the old value (meaning the second parameter in the folding function) to be of type v. As such, we must introduce composite observables that not only store their updating function and their current value, but also any applied transformations, or rather any functions that we have lifted into the observable. Observables can thus be implemented as:

data Obs e v where
  Obs   :: v -> (e -> v -> v) -> Obs e v
  UnOp  :: (a -> b) -> Obs e a -> Obs e b
  BinOp :: (a -> b -> c) -> Obs e a -> Obs e b -> Obs e c

where e is the type of the events that are being consumed, and v is the type of the value being observed, meaning the value that the observable yields. Note that we make use of generalized algebraic data types to allow the type variables a, b, and c to remain unmentioned in the type Obs e v.
Unary and binary lifting is thus trivially achieved by storing the lifted functions using the corresponding constructor. We canonically implement unary and binary lifting by showing how the observable type forms both a functor and an applicative functor in Haskell.

instance Functor (Obs e) where
  fmap f o = UnOp f o

instance Applicative (Obs e) where
  pure x = constObs x
  liftA2 f o1 o2 = BinOp f o1 o2

Since observables form applicative functors we also get ternary lifting (liftA3 in Haskell) for free. Lifting, for example, arithmetic, relational, and logical operations is trivial and similar to the examples given in Section 3.3.2. To take a

few examples however, consider how we can lift the relational operator greater than:

(%>) :: Ord a => Obs e a -> Obs e a -> Obs e Bool
(%>) o1 o2 = liftA2 (>) o1 o2

logical disjunction:

(%||) :: Obs e Bool -> Obs e Bool -> Obs e Bool
(%||) = liftA2 (||)

logical negation:

obsNot :: Obs e Bool -> Obs e Bool
obsNot = fmap not

or something entirely arbitrary like the length function:

obsLength :: Obs e [v] -> Obs e Int
obsLength = fmap length

With ternary lifting at our disposal however we interestingly also get the ability to express observable conditionals:

obsIf :: Obs e Bool -> Obs e a -> Obs e a -> Obs e a
obsIf = liftA3 if'

assuming that if' is defined as:1

if' :: Bool -> a -> a -> a
if' b x y = if b then x else y

Unfortunately it appears difficult to implement the folding of observables over arbitrary functions in the vein of:

foldObs :: (a -> b -> b) -> b -> Obs e a -> Obs e (b, a)

which is unfortunate since folding is at the heart of a streaming algorithm. Consider for example how we might want to have observables that track both whether some particular project is currently in phase 2, and whether some project has ever entered phase 2. The latter is the same as the former folded over logical disjunction. Consider also the previously discussed idea of clawbacks, where a developer must pay back some grants received during development when awarded a market entry reward. This requires aggregation of grant payouts, which lends itself quite well to folding. Of course, instead of folding over observables we can simply implement both the folding and the observable in a given observable itself. Thus, this is not a question of what can be expressed, but rather how elegantly it can be expressed. Either way, this facet of observables is a prime avenue for future research, as it might be that observables are better understood as, for example, monads.

1Alternatively, we can lift the function bool from Data.Bool.

Further, Peyton Jones et al. (2000) suggested that there ought to be a constant observable that always returns the same value. Such an observable can in this proposal trivially be implemented as:

constObs :: v -> Obs e v
constObs x = Obs x (flip const)

and we have already made use of it when, in Section 6.13, showing how sequential conjunction (andThen) can be implemented in terms of finality (before). Before closing this section, allow me to emphasize that the current value of an observable can at any time be extracted:

value :: Obs e v -> v
value (Obs x _) = x
value (UnOp f o1) = f (value o1)
value (BinOp f o1 o2) = f (value o1) (value o2)

by recursively applying any transformations to the stored value and then returning the result.
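To see these pieces working together, consider the following sketch of a running-total observable with a lifted comparison on top. The Event type, the payment amounts, and the threshold are hypothetical; Obs, value, and update are as defined in this section (BinOp omitted for brevity).

```haskell
{-# LANGUAGE GADTs #-}

-- Obs, value, and update as in this section, restricted to the unary case.
data Obs e v where
  Obs  :: v -> (e -> v -> v) -> Obs e v
  UnOp :: (a -> b) -> Obs e a -> Obs e b

value :: Obs e v -> v
value (Obs x _)  = x
value (UnOp f o) = f (value o)

update :: e -> Obs e v -> Obs e v
update e (Obs x f)  = Obs (f e x) f
update e (UnOp f o) = UnOp f (update e o)

-- Hypothetical event type: payments of some amount.
newtype Event = Payment Double

-- Running total of all payment amounts seen so far.
totalPaid :: Obs Event Double
totalPaid = Obs 0 (\(Payment x) acc -> acc + x)

-- Whether the total exceeds 100, via unary lifting.
over100 :: Obs Event Bool
over100 = UnOp (> 100) totalPaid
```

Feeding events in order, value (foldl (flip update) over100 [Payment 60, Payment 50]) evaluates to True, since the stored total becomes 110 before the lifted comparison is applied.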

8.3 Reduction

Reduction is introduced in Section 3.3.3 and turned into an objective in Section 6.15. In essence, reduction refers to the idea that if a contract specifies a set of permissible event sequences then, when given an event, we should be able to reduce the contract to a new and possibly different set of permissible event sequences. In other words, given an event and a contract, we must be able to reduce the contract down to a new contract that has taken the event into consideration. Since contracts contain observables, and since observables are defined as event-dependent values, we must also be able to reduce observables. We will refer to the reduction of observables as update and the reduction of contracts as reduce. Reducing or updating observables is trivial:

update :: e -> Obs e v -> Obs e v
update e (Obs x f) = Obs (f e x) f
update e (UnOp f o1) = UnOp f (update e o1)
update e (BinOp f o1 o2) = BinOp f (update e o1) (update e o2)

as it simply entails applying the updating function stored inside the observable to the incoming event and the previous value, which in turn produces the next value to be stored. Contract reduction is however slightly more involved and we will as such discuss each individual pattern match in turn. The type of the reduction function for contracts is:

reduce :: (Double -> s -> s)  -- Scaling function.
  -> (e -> s -> Bool)         -- Settling function.
  -> e                        -- Event.
  -> Contract e s             -- Contract to reduce.
  -> Contract e s

where the first parameter is a scaling function, the second parameter is a settling function, the third parameter is the event in question, and the final parameter is the contract that we wish to reduce. The scaling function denotes how to scale contract specifications (s) given some number (Double) yielded by an observable. The scaling function is necessary since we’ve abstracted away from currency and up into arbitrary resources, or rather up into arbitrary commitments that in theory could specify transfers of resources of some arbitrary type. This problem of scaling arbitrary resources was discussed in Section 3.3.3 and Section 6.11 and the here proposed solution is to simply make the scaling function a domain-specific concept. The simplest scaling function is the one which entirely ignores the scalar and can be implemented as (flip const) or (\s x -> x). The settling function stems from a similar realization. By parametrizing contracts over events and commitments we’ve abstracted away from not only transfers of currency but even transfers of resources. As such, we’ve turned the settlement of commitments into a domain-specific concept. If events and commitments are modeled by the same type then the simplest useful settling function is simply equality, namely the function (==). We now move to discuss each matched pattern in the implementation of the function reduce, as it precisely describes the semantics of contract reduction in this proposal. We begin with the trivial empty contract:

reduce _ _ _ Zero = Zero

which simply always remains empty.
The atomic obligation contract however has a slightly more interesting implementation:

reduce f g e (One s)
  | g e (f 1 s) = Zero
  | otherwise = One s

that applies the scaling function to the commitment with the scalar 1 and then uses the settling function to determine whether the event in question settles the scaled commitment. This assumes that the scaling function treats the scalar 1 as the identity element. The reason that the scaling function is applied to the obligation with the identity element will become obvious when discussing how the Scale constructor is reduced. In short, the idea is that we delay the application of scaling for as long as possible and, when recursively traversing a contract to reduce, instead transform the scaling function to carry all scalars in the correct order down to the atomic contract. In other words, if we only

have an atomic contract then the application of the scaling function will be pointless and equivalent to the identity function, but if we wrap the contract using the scale combinator then the scaling will no longer be pointless since the scaling is ‘carried’ down into the atomic contract. Reducing a choice contract involves matching events with the two obligations and then choosing to recursively reduce and return the left contract if the left obligation was matched, and vice versa.

reduce f g e (Or s1 s2 c1 c2)
  | g e (f 1 s1) = reduce f g e c1
  | g e (f 1 s2) = reduce f g e c2
  | otherwise = Or s1 s2 (refresh e c1) (refresh e c2)

Note that we here introduce the function refresh, which has the type:

refresh :: e -> Contract e s -> Contract e s

and recursively updates (meaning reduces) all observables in a contract without actually reducing the contract. Since observables must consume all incoming events in the same order to compute their state, we must update all observables in all subcontracts since we cannot tell beforehand which subcontracts we will enter in the future. Note that the time-efficiency of reduction might be improved by for example storing a list of events and only lazily reducing subcontracts and observables as needed. This question is however left open for future research. Performance issues aside, consider for example the case where we have a choice between two subcontracts c1 and c2 marked by the choice commitments s1 and s2. If the event e does not settle either s1 or s2 then we must refresh both c1 and c2 and not reduce them, since we do not yet know which subcontract is to be entered. Note that choices are short-circuit evaluated in the sense that if an event matches the first commitment then the second is never checked. Reducing a parallel conjunction of two subcontracts however is trivial and simply involves reducing both subcontracts and combining them in a parallel conjunction.
reduce f g e (And c1 c2) = And (reduce f g e c1) (reduce f g e c2)

Reducing a sequential conjunction however is slightly more involved, but since we have defined our sequential conjunction (andThen) in terms of the deadline combinator (before) we need only provide reduction semantics for before. Before considering the deadline combinator, let us first consider a simpler composite contract that contains observables. Reducing an ifElse composition means that we must first reduce the observable and check whether it, in its updated state, is true or not. If it is true then we will proceed to reduce and return the first subcontract, and if it is false then we reduce and return the second.

reduce f g e (IfElse o c1 c2)
  | value (update e o) = reduce f g e c1
  | otherwise = reduce f g e c2

Reducing a before contract involves first checking whether the deadline, as specified by the observable, when updated based on the new event, has been reached. If so then we must reduce and return the third subcontract (or what is called cf in Section 6.13) since we have failed to complete the first subcontract before the deadline. If the deadline however has not yet been reached then we must check whether the first subcontract, when reduced under the incoming event, is settled (meaning is done) or not. If the first subcontract is settled then we return the second subcontract (or what is called ct in Section 6.13), reduced under the incoming event. If the first subcontract is not settled then we return a new before composition where the first subcontract is reduced under the event and the second and third are both refreshed.

reduce f g e (Before o c1 c2 c3)
  | value (update e o) = reduce f g e c3
  | done (reduce f g e c1) = reduce f g e c2
  | otherwise = Before (update e o) (reduce f g e c1) (refresh e c2) (refresh e c3)

To re-iterate, the second and third subcontracts are not reduced since we have not yet entered them. Avoiding reducing contracts that we have yet to enter is important, as combinators such as for example when should not take effect yet even if the observable currently happens to be true. However, we must still refresh such contracts since their contained observables must consume all events in the appropriate order. Reducing a whentil contract requires first checking whether the observable, when updated under the incoming event, is true. If so, then we can reduce and return the second subcontract. If not, then we have to return a new whentil composition where we’ve updated the observable, reduced the first subcontract and refreshed the second.
reduce f g e (Whentil o c1 c2)
  | value (update e o) = reduce f g e c2
  | otherwise = Whentil (update e o) (reduce f g e c1) (refresh e c2)

It bears repeating, however, that whentil, as was argued in Section 6.13, is non-fundamental in the face of the before combinator. It could have been implemented as:

whentil'' o c1 c2 = before o c1 c2 c2

that is, by letting the second subcontract of the whentil play the role of both the second and third subcontracts in the before composition. Alternatively,

we can choose to consider whentil superfluous and implement when and until in terms of before directly:

when' o c = before o zero c zero
until' o c = before o c zero zero

where when' sets the first subcontract to the immediately completed zero and lets the passed subcontract play the role of ct. Conversely, in until' the passed contract is set to play the role of the first subcontract while both ct and cf are set to zero.

Moving on to cyclicity, reducing an everytime contract requires that we first check whether it is time for another cycle or not. This is determined by checking whether the observable is currently false but turns true when updated based on the current event. If it is time, then we return a new parallel conjunction (and) between the everytime composition itself and the subcontract reduced under the current event. In a sense we 'add' another instance of the subcontract to the contract. Before returning, the observable in the everytime is updated and the subcontract in it is refreshed to ensure that all observables are up to date.

reduce f g e (Everytime o c)
  | not (value o) && value (update e o) =
      And (Everytime (update e o) (refresh e c)) (reduce f g e c)
  | otherwise = Everytime (update e o) (refresh e c)

If it is not time, then we simply return a new everytime composition where the observable is updated and the subcontract refreshed. Finally, reducing a scale composition is slightly more involved as we must first update the observable given the incoming event, then compose a new scaling function based on the old one and the current value of the observable, in order to then reduce the subcontract based on this new scaling function.

reduce f g e (Scale o c) = Scale o' (reduce f' g e c)
  where o'   = update e o
        f' x = f x . f (value o')

Note that we do not return the reduced subcontract but rather the reduced subcontract still wrapped in the scaling function.
Since the scaling function is not persisted anywhere in the contract, the applied scaling will be 'forgotten' and then 'reapplied' upon the arrival of the next event. This allows us to apply scaling at the last possible moment by rescaling commitments upon the arrival of every event. This explains why we, in the beginning of this section, when reducing the contracts containing atomic commitments (meaning one and or), applied the scaling function with the identity element. If we did not apply the scaling function in these cases then the altered scaling function would never be applied and hence the observed scalar would be pointless.
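The last-moment rescaling idea can be illustrated with a small, self-contained sketch. Here the scaling function is given the hypothetical type v -> s -> s (an observed value yields a commitment transformer), mirroring how the reduction rule composes the old function with the newly observed value, and how the identity element is applied only once an atomic commitment is reached. The names and the multiplicative interpretation are assumptions for illustration, not the actual implementation.

```haskell
-- A scaling function maps an observed value to a commitment transformer.
type Scaling v s = v -> s -> s

-- Compose the old scaling function f with a newly observed value v,
-- mirroring the reduction rule for Scale: f' x = f x . f v.
composeScaling :: Scaling v s -> v -> Scaling v s
composeScaling f v = \x -> f x . f v

-- Hypothetical multiplicative scaling over numeric commitments.
mulScaling :: Num s => Scaling s s
mulScaling v s = v * s

-- At an atomic commitment (one or or) the accumulated scaling is
-- applied with the identity element, here 1 for multiplication.
applyAtAtom :: Num s => Scaling s s -> s -> s
applyAtAtom f = f 1
```

Given an observed scalar of 2, applyAtAtom (composeScaling mulScaling 2) 100 yields 200: the accumulated scaling is only realized when it reaches an atomic commitment.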

8.4 Done

In this section we deal with the question of what it means for a contract to be done. Reduction of both the then and before combinators assumes the existence of a done function of the type (Contract e s -> Bool) that tells us whether all the obligations of a contract in its current state should be considered fulfilled. In other words, whether the contract in its current state is essentially equivalent to zero. Since then is implemented in terms of before it would be more accurate to state that the before combinator necessitates a definition of done.

done :: Contract e s -> Bool
done Zero = True
done (One {}) = False
done (Or _ _ c1 c2) = False
done (And c1 c2) = done c1 && done c2
done (Whentil o c1 c2) = (done c1 && done c2) || (value o && done c2)
done (Before o c1 c2 c3)
  | not (value o) = done c1 && done c2
  | otherwise = done c3
done (Everytime o c) = not (value o)
done (IfElse o c1 c2) = if value o then done c1 else done c2
done (Scale o c) = done c

Note that all definitions of done are recursive except in the cases of Zero, One, Or, and Everytime. The only two cases that return true, meaning the only two cases where a contract can be considered done, are either when it 'bottoms out' in only Zeros or when it is wrapped in an Everytime whose observable is currently not true.
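To make the 'bottoming out' behavior concrete, the following self-contained sketch reimplements done for a deliberately stripped-down contract type in which observables are simplified to bare booleans and only the Zero, One, And, and Everytime constructors are kept. It is an illustrative reduction of the definitions above, not the full implementation.

```haskell
-- A stripped-down contract type for illustration: observables are
-- simplified to Bool and only a subset of the constructors is kept.
data Contract s
  = Zero
  | One s
  | And (Contract s) (Contract s)
  | Everytime Bool (Contract s)

-- done, restricted to the simplified type, following the definitions above.
done :: Contract s -> Bool
done Zero            = True
done (One _)         = False
done (And c1 c2)     = done c1 && done c2
done (Everytime o _) = not o
```

A conjunction is done exactly when it bottoms out in Zeros, so done (And Zero Zero) is True while done (And Zero (One "pay")) is False, and an Everytime is done exactly when its observable is currently false.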

8.5 Events

Andersen et al. (2006) observed that their base language could be separated from their contract language, which would make their compositional model independent of REA as well as any other data model. In the same vein, by abstracting contracts and observables over events and commitments rather than, say, resource transfers between agents, the contract language in this proposal is entirely independent from the adaptation of REA constructs

outlined in this section. In the next section, however, we discuss actualization and show that actualization is applicable within the context of this adaptation of REA constructs.

Geerts and McCarthy (2000b) introduced two types of REA commitments, namely transformations and transfers. These have been extensively accounted for in Section 3.2.2. Essentially, transformation commitments capture the notion of committing to consume or use some resources in order to produce some others. Transfer commitments, on the other hand, capture the notion of committing to transfer some resources to some other agents. Such commitments are, as depicted in Figure 3.6, expected to be matched by REA events, meaning actual transformations and transfers. REA commitments are akin to what we have called commitments, while REA events are akin to what we have called events.

To maintain simplicity we can use the same type for both obligations and events, which, as discussed in Section 8.1, enables us to use the simple equivalence function (==) as the settlement function (assuming that it is implemented for the type). When a value of our type is used in place of s in a contract of type Contract e s then it refers to a specification, meaning to a commitment. When it is used in place of an e, however, it refers to an actual event over which the contract can be reduced.

Given the definition of the or combinator (Section 8.1) and the optionality objective (Section 6.7) we need a way to express not only the commitment to select between two alternatives but also the commitment to select some particular alternative, and the actual event containing a selection of some alternative. As such, we will add a third event type that we call Choose.
To model contracts that underlie policy interventions for antibiotic development, it appears that we need an event type along the lines of:

data Event k p r e a
  = Transfer e a a (ResourceTransfer k r)
  | Transform Type e a (ResourceTransformation k p r)
  | Choose e a

where the parametrically polymorphic parameter k is a type representing constant, commodity-like, resources (such as screws or currency), p is a type that captures properties or characteristics that refinable (non-commodity-like) resources can at any time either possess or not possess, r is a type used to identify resources, e is a type used to identify events, and a is a type used to identify agents. While it may seem as if a single identification type parameter would suffice, the three must, for our implementation of actualization to work, be distinct. All events are endowed with identifiers to, as mentioned in Section 3.3.3, avoid ambiguity in contract reduction. A transfer can, beyond being identified by an e, be thought of as a more typed version of a triple of two agents and a resource, where the transferred resource is either a commodity or some identifiable refinable.

data ResourceTransfer k r
  = CommodityTransfer k
  | RefinableTransfer r

This additional complication stems from needing to allow expression of both simple commodities like currency, and complex refinable resources like projects. This draws on how Hruby (2006) distinguishes between individually identifiable and individually unidentifiable resources.

A transformation is likewise identified by an e and denotes a transformation of some resource by some agent a, where the transformation must be classified as being of some Type. Transformation types are in turn defined as:

data Type = Produce | Consume

meaning that transformations are either producing or consuming some resource. Note that while Hruby (2006) distinguishes between usage and consumption of resources we make no such distinction and suggest that all instances of usage can be refactored into transformation pairs where the used resource is both consumed and produced. This simplification is applied since Hruby (2006) suggests that the definition of how to interpret usage might vary across domains and across resources anyway. This is similar in vein to the argument (see Sections 3.3.3, 6.13 and 8.4) that if all paths through a contract are expressed as performing paths then there is no need to determine non-performance and no need for additional logic surrounding how to deal with it.

Resources to be transformed (meaning consumed or produced) are defined as:

data ResourceTransformation k p r
  = CommodityTransformation k
  | RefinableTransformation r p

meaning that an instance is either a commodity, or a property pertaining to some refinable (identifiable) resource.

Lastly, it should be noted that, as discussed in Section 6.5, the axiom of exchange duality is omitted altogether as we have failed to prove that its benefits outweigh its costs. For the same reasons, transformation duality, discussed in Section 3.2.2, is also omitted.
This is exhibited in the proposed event type in that transfers only move a single resource in a single direction, while transformations only consume or produce a single resource.
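The event type can be illustrated by instantiating its five type parameters with concrete types and constructing one transfer and one transformation event. The concrete types and names below (EUR, InPhase, the agent strings, the project identifier, and so on) are hypothetical assumptions for illustration, not part of the thesis code.

```haskell
-- The event types from this section, reproduced verbatim.
data ResourceTransfer k r = CommodityTransfer k | RefinableTransfer r
  deriving (Show, Eq)

data Type = Produce | Consume deriving (Show, Eq)

data ResourceTransformation k p r
  = CommodityTransformation k
  | RefinableTransformation r p
  deriving (Show, Eq)

data Event k p r e a
  = Transfer e a a (ResourceTransfer k r)
  | Transform Type e a (ResourceTransformation k p r)
  | Choose e a
  deriving (Show, Eq)

-- Hypothetical instantiations of the type parameters.
newtype EUR = EUR Int deriving (Show, Eq)   -- k: commodity-like resources
data Prop = InPhase Int deriving (Show, Eq) -- p: refinable properties
type ProjectId = String                     -- r: resource identifiers
type EventId = Int                          -- e: event identifiers
type AgentId = String                       -- a: agent identifiers

-- Event 1: the benefactor pays 100 euros to the developer.
payment :: Event EUR Prop ProjectId EventId AgentId
payment = Transfer 1 "benefactor" "developer" (CommodityTransfer (EUR 100))

-- Event 2: the developer refines a project to possess the property InPhase 2.
refinement :: Event EUR Prop ProjectId EventId AgentId
refinement =
  Transform Produce 2 "developer" (RefinableTransformation "abx-1" (InPhase 2))
```

Note how the commodity transfer carries no resource identifier while the refinement names both the project (r) and the produced property (p), matching the identifiable/unidentifiable distinction above.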

8.6 Actualization

To satisfy the actualization objective, contracts must also be able to express offers. The distinction lies in that offers specify contracts based on 'roles to be played'. By providing some set of players of roles, meaning some actual resources, agents, and events, we must be able to turn an offer into a contract

between said agents, identified by said events, and in regards to transfers and transformations of said resources. In essence this is achieved by realizing that: (1) contracts and observables both form profunctors, meaning that they are bifunctors that are contravariant in the first parameter and covariant in the second, and that (2) events form trifunctors.

At first sight, this argument may appear nonsensical. Yet, the intuition truly is quite simple. In order to map an observable expressed in a general domain (meaning that of offers) to some specific domain (meaning that of actual contracts) we must be able to map the events that the observable is consuming (contravariantly) from specific to general. In order to be able to map a contract expressed in a general domain to some specific domain we must be able to (1) map the events that the contract is consuming (contravariantly) from specific to general (because contracts contain observables), and (2) map the commitments (covariantly) from general to specific.

The intuition is as follows: If we forget about observables for a moment then an offer, call it an 'ideal' contract, specifies ideal obligations. When we turn an offer into an actual contract we must turn these ideal obligations into actual obligations so that actual agents know what actual obligations they are expected to execute. Consequently, contracts are covariant since we need to transform ideal obligations to concrete obligations. However, since observables essentially are functions, they are not covariant but contravariant in the 'input type', meaning in the event parameter. To compute values of an observable we need to convert every specific event into its ideal counterpart so that we can apply the ideal event to the ideal function that computes the observable's new value. Remember that functions in general are contravariant in their input type, and observables are in a sense nothing more than memoized functions.
Hence, contracts are also contravariant since observables are contravariant.

A type class for profunctors is not part of standard Haskell, but is commonly found in libraries and conventionally has the following implementation:

class Profunctor p where
  dimap :: (a -> b) -> (c -> d) -> p b c -> p a d
  dimap f g = lmap f . rmap g
  lmap :: (a -> b) -> p b c -> p a c
  lmap f = dimap f id
  rmap :: (b -> c) -> p a b -> p a c
  rmap = dimap id

where the first function in dimap is contravariant and the second covariant. The functions lmap and rmap are short-hands for applying only the first or second function respectively.

The implementation of dimap for contracts essentially just passes the first function (the event transformer) to the profunctor functions of observables

and applies the second function (the commitment transformer) to each atomic commitment:

instance Profunctor Contract where
  dimap fe fs Zero = Zero
  dimap fe fs (One s) = One (fs s)
  dimap fe fs (Or s1 s2 c1 c2) =
    Or (fs s1) (fs s2) (dimap fe fs c1) (dimap fe fs c2)
  dimap fe fs (And c1 c2) = And (dimap fe fs c1) (dimap fe fs c2)
  dimap fe fs (Whentil o c1 c2) =
    Whentil (lmap fe o) (dimap fe fs c1) (dimap fe fs c2)
  dimap fe fs (Before o c1 c2 c3) =
    Before (lmap fe o) (dimap fe fs c1) (dimap fe fs c2) (dimap fe fs c3)
  dimap fe fs (Everytime o c) = Everytime (lmap fe o) (dimap fe fs c)
  dimap fe fs (IfElse o c1 c2) =
    IfElse (lmap fe o) (dimap fe fs c1) (dimap fe fs c2)
  dimap fe fs (Scale o c) = Scale (lmap fe o) (dimap fe fs c)

where everything else merely is recursive application aimed at achieving these two effects. For observables, dimap converts incoming events using the first function before applying the observable's reduction function, and composes the second function with the observable's transformation function in order to alter the output value.

instance Profunctor Obs where
  dimap fe fv (Obs v f) = fmap fv (Obs v (\e x -> f (fe e) x))
  dimap fe fv (UnOp f o) = fmap (fv . f) (lmap fe o)
  dimap fe fv (BinOp f o1 o2) =
    liftA2 ((fv .) . f) (lmap fe o1) (lmap fe o2)

To further simplify actualization it helps to realize that our event type forms a covariant trifunctor. The implementation of events as a covariant trifunctor is however trivial and is thus left to Appendix A.

Having established the profunctoriality of contracts and observables as well as the trifunctoriality of events, we can now define a simplified actualization function suitable for contract offers that underlie policy interventions in the domain of antibiotic development. Assume two event types that each are parameterized over resources, events, and agents. One event type represents

the general domain, meaning that of offers, and one represents the specific domain, meaning that of actual contracts. If we can define three total isomorphisms between the resources, events, and agents underlying the two types then we can trivially actualize offers as contracts. Actualization under these assumptions can be implemented like this:

actualize
  :: (r1 -> r2) -- Contravariant resource transformation.
  -> (e1 -> e2) -- Contravariant event transformation.
  -> (a1 -> a2) -- Contravariant agent transformation.
  -> (r3 -> r4) -- Covariant resource transformation.
  -> (e3 -> e4) -- Covariant event transformation.
  -> (a3 -> a4) -- Covariant agent transformation.
  -> Contract (Event k p r2 e2 a2) (Event k p r3 e3 a3)
  -> Contract (Event k p r1 e1 a1) (Event k p r4 e4 a4)
actualize fr' fe' fa' fr fe fa =
  dimap (trimap fr' fe' fa') (trimap fr fe fa)

where the implementation simplicity stems from the fact that the contract type forms a profunctor and events a covariant trifunctor. The translation between offers and contracts (or actually: between two contracts whose types are defined by applying the contract type constructor to different values) by means of isomorphisms is informally depicted in Figure 8.1.

Figure 8.1. Informal depiction of isomorphic resource, event, and agent identifiers, which allow events to be isomorphic, which in turn allow contracts to be isomorphic, which in turn allow actualization.

It should be noted that the REA isomorphisms need not be unique. To the contrary, if a unique isomorphism can be found in a given domain then there is likely no reason to convert offers to contracts in the first place. In such a domain there is no essential difference between offers and contracts.

In the definition of the actualization objective given in Section 6.2 we stated that offers can be viewed as functions that when given some number of concrete resources, events, and/or agents will yield a contract between said parties over said resources and events.
The problem of such an approach, however, was argued to be the lack of generalizability. Considering only agents for a moment, we readily realize that contracts can be bilateral, trilateral, quadrilateral and so forth. A general type signature for offers as functions is thus not readily available, which makes it difficult to achieve compositionality of offers as sought by the compositionality objective outlined in Section 6.1. In this proposal, however, we have achieved compositionality of offers by treating them as contracts and moving the application of concrete resources, events, and/or agents to domain-specific functions that produce isomorphisms between resource offers and actual resources.

Examples of domain-specific actualization are given in Appendix B, but the key realization here is that isomorphisms between two REA sets can only (in the usual case) be produced by partial application of some specific information. If an agent is to, for example, turn a bilateral offer into a bilateral contract in order to, say, evaluate the worth of the contract, then the agent would partially apply some domain-specific function for bilateral offers with itself in order to receive an isomorphism that can be passed to the domain-agnostic actualization function.

A final key detail of actualization is that of what we will call ideals. Assume that we have defined the following type for expressing bilateral offers, meaning offers that pertain to two unknown agents, or two 'roles' that must be taken on by concrete agents when actualizing the offer into a contract:

data Bilateral = Benefactor | Beneficiary deriving (Eq)

To define an isomorphism between the bilateral type and some other arbitrary type (a) we must define a specialization function (Bilateral -> a) and a generalization function (a -> Bilateral). The former acts covariantly while the latter contravariantly in the context of the contract profunctor.
While we can define the (covariant) specialization function:

specialize :: a -> a -> Bilateral -> a
specialize x y Benefactor = x
specialize x y Beneficiary = y

the (contravariant) generalization function is partial:

generalize :: Eq a => a -> a -> a -> Bilateral
generalize x y z
  | z == x = Benefactor
  | z == y = Beneficiary
  | otherwise = undefined -- Partial function!

Intuitively the issue is that we cannot map a specific event pertaining to some third party up into a general domain where that third party is not mentioned. We simply have no generalized definition of this third party.

To address this issue we introduce the notion of ideality, which allows us to talk about specific things even in the general domain. Intuitively this means that when defining an offer, we define either roles to be played or specific

things whose identities we already know. This has the added benefit of allowing what can be thought of as partial application of some roles without applying all roles at the same time. This is important since we lose the ability to partially apply when leaving the realm of contract offers as functions that expect players of roles. By defining ideality as:

data Ideal s k = Ideal s | Known k deriving (Eq)

we can then build a total generalization function:

generalize :: Eq a => a -> a -> a -> Ideal Bilateral a
generalize x y z
  | z == x = Ideal Benefactor
  | z == y = Ideal Beneficiary
  | otherwise = Known z

and reformulate our specialization function to match:

specialize :: a -> a -> Ideal Bilateral a -> a
specialize x _ (Ideal Benefactor) = x
specialize _ y (Ideal Beneficiary) = y
specialize _ _ (Known z) = z

which gives us an isomorphism when the first two arguments are applied. Note that the first two arguments in the specialization and generalization functions above represent the two agents that we actually wish to let play the roles of the benefactor and beneficiary respectively. As previously emphasized, few domains will have single isomorphisms. Therefore we must, when actualizing, specify which isomorphism we wish to use by applying the appropriate players of roles.
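Under hypothetical agent names ("gov" and "firm" below are illustrative assumptions), the round-trip behavior of the two total functions above can be checked directly: once the two role players are applied, generalize and specialize form an isomorphism in which role players travel through Ideal and third parties are preserved through Known.

```haskell
-- The bilateral roles and the ideality type from this section.
data Bilateral = Benefactor | Beneficiary deriving (Eq, Show)

data Ideal s k = Ideal s | Known k deriving (Eq, Show)

-- Total generalization: role players become ideals, others stay known.
generalize :: Eq a => a -> a -> a -> Ideal Bilateral a
generalize x y z
  | z == x    = Ideal Benefactor
  | z == y    = Ideal Beneficiary
  | otherwise = Known z

-- Matching specialization: ideals are replaced by the applied players.
specialize :: a -> a -> Ideal Bilateral a -> a
specialize x _ (Ideal Benefactor)  = x
specialize _ y (Ideal Beneficiary) = y
specialize _ _ (Known z)           = z

-- With the two role players applied, the round trip is the identity.
roundTrip :: String -> String
roundTrip = specialize "gov" "firm" . generalize "gov" "firm"
```

roundTrip maps "gov" through Ideal Benefactor and back to "gov", "firm" through Ideal Beneficiary and back to "firm", and any third party through Known unchanged.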

9. Evaluation

The constructs underpinning this language of policy intervention contracts for antibiotic development were presented in Chapter 6. The solution space, in which many simulation models of policy interventions for antibiotic development reside, was given in Chapter 7. To ensure that the proposal is usable within causal models, it was shown in Chapter 8 how a single message type can bridge the two. The utility of solving the research problem in the first place was established already in Chapter 1. To evaluate the proposal, in the sense of design science, we must establish utility. This is here achieved by providing a constructive proof showing that key facets of key policy interventions can be encoded as contracts in the proposed language. The policy interventions in question are fully delinked and partially delinked phase entry rewards.

9.1 Proof of utility

The case reported in Chapter 5 explores the two interventions often referred to as phase entry rewards and direct funding. A phase entry reward is, at its core, a financial reward awarded to a developer who successfully manages to bring an antibiotic product meeting some target specification to some target phase. A market entry reward is a specialization of a phase entry reward, where the phase in which the prize is awarded is the market.

While the market entry rewards discussed in Section 5.2 are akin to what Okhravi et al. (2017) and Okhravi et al. (2018), amongst others, call "partially delinked" as opposed to "fully delinked" market entry rewards, we here discuss both partial and full delinkage. The difference between partial and full delinkage is that in partial delinkage the prize is awarded in addition to whatever sales revenue the product owner also manages to secure, while in full delinkage the prize is awarded instead of sales. This difference is visualized in Figure 9.1, where the underlying data stem from the simulation experiment published in Okhravi et al. (2018) and the lines demarcate the full range of input values. In the case of full delinkage the intellectual property (IP) of the antibiotic product is transferred from the beneficiary to the benefactor, while the IP in the case of partial delinkage remains with the beneficiary (meaning the developer).

Let us now proceed to, by construction, prove that important facets of partially and fully delinked phase entry rewards can be formally captured in the proposed contract language. We here assume some domain-specific types and

Figure 9.1. Visualization of a partially (left) and fully (right) delinked market entry reward, plotting cash flow (M$) from preclinical through ten market years, with R&D costs, global net revenues, and the MER (Okhravi et al., 2018).

helper functions that are provided in Appendix B. While most types are given in the appendices we here omit them for presentational simplicity and since they can be inferred by the compiler. With the assumed helpers, we can define the following two general contract functions (or combinators):

pdprize ps k =
  when (hasPropsObs Single ps)
       (one $ transferC 1 Benefactor Beneficiary k)

fdprize ps k =
  when (hasPropsObs Single ps)
       (and (one $ transferC 1 Benefactor Beneficiary k)
            (one $ transferR 2 Beneficiary Benefactor Single))

where the parameter ps is a list of conditions expressed as characteristics that the project must exhibit in order to be eligible, k is some prize to be awarded if eligible, and the hard-coded numbers are event identifiers. The list of conditions is combined into a single observable using logical and (&&) lifted into observables. By applying values to these two functions we can generate partially and fully delinked market entry reward offers:

offer1 = pdprize [InPhase M1] (eur 100)
offer2 = fdprize [InPhase M1] (eur 100)

where in this example offer1 is a partially delinked market entry reward while offer2 is a fully delinked one. Both offers in the example specify that the benefactor must pay 100 euros to the beneficiary, but only when the project in question successfully reaches its first market year. Note how the when combinator is used to condition the payment and how the hasPropsObs observable is used to observe when the project is successfully refined to exhibit the property InPhase M1.

Since these are contract offers and not concrete contracts, we do not yet know who will act as the benefactor or the beneficiary, nor what antibiotic project we are actually looking to observe. Hence, we use the symbols Benefactor, Beneficiary, and Single to refer to these 'roles to be played'. Importantly, we are still able to express the relationships between these roles in a well-typed contract.

Okhravi (2020) is, however, not merely concerned with market entry rewards but with the more general notion of phase entry rewards. Changing the prize phase of the offers above is a simple act of changing from M1 to, for example, P1 for a phase 1 entry reward. We could also pass further conditions to create more complex phase entry rewards that express further contingencies. Consider the following example:

offer3 = pdprize [InPhase P2, Targets CUTI] (eur 100)

where we offer a prize of 100 euros to any project targeting complicated urinary tract infections (CUTI) as soon as the project enters phase 2. It is important to note that none of these contracts specify any deadlines and thus no repercussions for non-performance. While we will not do so here, such repercussions could be added by means of the before combinator. Importantly, we have constructed these offers by composing contracts that express roles to be played rather than by returning monolithic contracts from functions that must be applied to players of these roles.
The only difference between an offer and a contract is, as suggested in Sections 6.2 and 8.6, the event type that the contracts in question consume and the type that is used to specify atomic obligations. This gives us compositionality of offers.

To further emphasize the utility of composable offers, and to go beyond the policy interventions of Okhravi (2020), consider how we might add the clawbacks discussed in Section 6.11. Clawbacks are essentially grant paybacks, where a recipient of a phase entry reward would have to pay back some portion of previously awarded grants. Let us define:

clawbackPrize ps k1 k2 = pdprize ps k1 `and` clawback ps k2

where ps again is a list of conditions, k1 a prize, and k2 a resource that will be scaled by the amount of grants that the project has received. For the sake of simplicity we naively ignore currency conversions. Also note that (c1 `and` c2) is equivalent to (and c1 c2) as it simply makes use of the infix notation of Haskell. With this definition of clawback prizes we could express the following:

offer4 = clawbackPrize [InPhase M1] (eur 100) (eur 0.75)

which offers to pay 100 euros when the project enters the market but demands a payback of 75% of all grants received. All this assumes that we have defined the clawback combinator as:

clawback ps k =
  when (hasPropsObs Single ps)
       (scale grantsReceivedObs
              (one $ transferC 3 Beneficiary Benefactor k))

where ps again is a list of conditions and k the resource to be paid back.

It should be noted that while we have used parallel conjunction (and) to chain obligations in this chapter, we can switch to sequential conjunction by simply replacing the combinator and with andThen. Offer combinators could even be parameterized to take a combinator as an argument. Switching to sequential conjunction would allow expressing variations such as the one discussed in Section 6.9, where the beneficiary only is obliged to transfer the IP after the benefactor has executed the prize payment.
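The suggested parameterization over the conjunction can be sketched with a deliberately minimal stand-in contract type. The real Contract type and the and/andThen combinators are far richer; the names C, andC, andThenC, and fdprizeWith below are illustrative assumptions only.

```haskell
-- A minimal stand-in for contracts, for illustration only.
data C = Atom String | Conj C C | Seq C C deriving (Show, Eq)

andC, andThenC :: C -> C -> C
andC     = Conj -- parallel conjunction
andThenC = Seq  -- sequential conjunction

-- An offer combinator parameterized over which conjunction to use, in the
-- spirit of a fully delinked prize: a payment and an IP transfer.
fdprizeWith :: (C -> C -> C) -> C
fdprizeWith conj = Atom "pay prize" `conj` Atom "transfer IP"
```

Applying fdprizeWith andC obliges both parties in parallel, while fdprizeWith andThenC obliges the IP transfer only once the prize payment is complete, mirroring the Section 6.9 variation.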

10. Conclusion

In this concluding chapter the research question is first revisited in light of the derived constructs (Chapter 6), the solution space (Chapter 7), and the designed model (Chapter 8). Some key limitations of this research are disclosed, and some provoked avenues for future research outlined. Lastly, I share some closing thoughts on the role of this research as a stepping stone on the path towards a formal language of policy interventions in general.

10.1 Revisiting the research question

The research question given in Section 1.2 asked what the fundamental language of policy interventions usable within causal models of antibiotic development is. To answer this question, Chapter 6 proposed a set of constructs, or objectives, that were derived from the problem description given in Chapters 1 and 2, the literature as accounted for in Chapter 3, the experiments as outlined in Chapter 5, and finally my unique position as a participant in DRIVE-AB. These constructs, or objectives that a solution must exhibit, were then used to drive the implementation of a language to address the question.

To ensure that the proposed language satisfies the second part of the research question, Chapter 7 introduced a simple conceptualization of causal models, not only for antibiotic development specifically but for single-threaded social simulation in general. This conceptualization was presented under the banner of a solution space. The solution space suggested that messages can be seen as the sole mode of communication between agents, where agents are nothing but behavior and state is nothing but agents.

On the basis of the solution space and the objectives, an implemented contract language was proposed in Chapter 8. By letting the message type of the solution space be that of both events and commitments we observe that the contract language is practically useful within the solution space. While this establishes the utility of the language in terms of it being usable within causal models of antibiotic development, the constructive proof provided in Chapter 9 establishes the utility of the language in terms of its ability to actually express policy interventions for antibiotic development.

Whether the proposed language is the fundamental language or not is, as discussed in Section 1.2, to an extent irrelevant. The proposed language is certainly sufficiently fundamental in the light of alternative contract formalisms.
Yet, the ability to model offers as contracts renders its utility, in the context of alignment of simulation models of policy interventions for antibiotic development, greater than that of the alternatives.

10.2 Limitations

While the contributions of this work and its implications for research and practice are listed in Section 1.3, this section briefly exposes some key limitations of this research.

First, most readers presumably agree that while there now is evidence suggesting that a vast number of policy interventions can be captured as contract offers, there certainly are those that either cannot yet be expressed or only awkwardly so. In Chapter 9 it was shown that one of the two policy interventions, namely phase entry rewards, discussed in Chapter 5 and published in Okhravi (2020), can be captured in the proposed contract language. The other policy intervention of that publication, namely direct funding, is however not so trivial to model as a contract offer. While explained in further detail in Section 5.2, the idea of direct funding is essentially that a benefactor somehow 'takes over' a project and attempts to bring it to completion without performing any form of profitability analysis and while paying for all costs itself. While we certainly can use sequential conjunction and transformations to oblige the benefactor to attempt all refinements required to bring the prospective antibiotic to market, the issue is that in direct funding, the benefactor is assumed to operate without profitability requirements. If the benefactor does not have a financial incentive to pursue the contract then the optimal course of action might be to infinitely postpone the contracted obligations. Therefore we must either oblige the benefactor to perform certain actions before certain deadlines (whatever form such deadlines may take), or we must oblige the benefactor to costs that it can seek to avoid (such as lost population health) or award some other non-financial gain (such as increased population health).
Measuring quality-adjusted life years (QALYs) is a common (albeit criticized) approach to measuring the impact of a drug and should thus serve as a sensible starting point for capturing such reasons. Yet, it should now be clear that direct funding is an example of a policy intervention that is significantly more complicated to capture as a contract in the proposed language. Consequently, it cannot at this point be claimed that the proposed contract language sufficiently captures the contractual aspects of all policy interventions for antibiotic development.

Second, this thesis has focused almost exclusively on policy interventions as contract offers. As the example above shows, however, some policy interventions might be better described in terms of behavior. While the event type does serve as a bridge between the solution space given in Chapter 7 and the proposal given in Chapter 8, it is not clear what facets of a policy intervention can be captured as a contract offer and what must be captured as behavior. Consequently, it cannot at this point be claimed that there even exists a contract language sufficient to capture all policy interventions for antibiotics as contracts or contract offers.

In light of the two limitations above, we cannot conclude that the proposed contract language allows alignment of all policy interventions for antibiotics. However, since some important facets of important policy interventions can be captured as contracts and contract offers in the proposed language, and since the language is employable within the causal model search space, the language can in some cases be used to support alignment, by means of syntactic comparison, for some policy intervention simulations.

10.3 Future work

Andersen et al. (2006) and Stefansen (2005) both employed process calculus to reason about which fundamental function combinators are needed. In retrospect it appears as if some formalization, while not necessarily process calculus, would be helpful in teasing out which combinators are fundamental. This view is supported by how the objectives proposed in Chapter 6 appear to lend themselves to grouping into three sets that we might call constraints, terminals, and non-terminals. To the constraints group I would add: compositionality, atomicity, actualizability, prospectability, and reducibility. To the terminals group I would add: transferability, transformability, and optionality. Finally, to the non-terminals group I would add: optionality, parallel conjunctivity, sequential conjunctivity, conditionality, scalability, causality, finality, and cyclicity. Note that optionality appears in both the terminal and the non-terminal group: in the former (terminal) it refers to the obligation to choose a particular thing, and in the latter (non-terminal) to the obligation to choose between two things.

In this division, the first set is concerned with the basic constraints and capabilities of the design. Consider for example how atomicity states that we must be able to express an atomic obligation as well as the absence of one. Further, actualizability states that it must be possible to express both offers and contracts within the same language, and that it must be possible to convert from the former to the latter. The second set is concerned with defining what constitutes an atomic obligation, and the third set is concerned with the ways in which we can combine atoms in order to build complex contracts. The suitability of a division of this sort is also indicated by how the contract type in Section 8.1 could be abstracted away from transfers and up to arbitrary events and commitments.
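The terminal/non-terminal split above could be sketched as a pair of data types, with terminals as constructors of an atom type and non-terminals as constructors of a contract type. This is a minimal illustration under assumed, simplified names; it is not the grammar of Chapter 8, and the constraints group is deliberately left implicit since it constrains the design rather than appearing in it.

```haskell
type Party      = String
type Resource   = String
type Observable = String

-- Terminals: what constitutes an atomic obligation.
data Atom
  = Transfer Party Party Resource  -- transferability
  | Transform Resource Resource    -- transformability
  | Choose [Resource]              -- optionality: choice of a particular thing
  deriving Show

-- Non-terminals: the ways atoms combine into complex contracts.
data Contract
  = Atomic Atom               -- atomicity: a single obligation
  | Empty                     -- atomicity: the absence of an obligation
  | Or Contract Contract      -- optionality: choice between two contracts
  | Both Contract Contract    -- parallel conjunctivity
  | Then Contract Contract    -- sequential conjunctivity
  | If Observable Contract    -- conditionality
  | Scale Observable Contract -- scalability
  deriving Show
```

Read this way, a formalization of the fundamental combinators amounts to asking which of the non-terminal constructors are primitive and which are derivable from the others.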
Finding some higher-level division of this sort would also simplify comparing contract language proposals.

Another potential refinement is to increase the domain-specificity. Almost all the domain-specificity presented in this thesis pertains to formal contracts in general rather than to the contracts underlying policy interventions for antibiotic development in particular. Even though the design is guided by requirements stemming from the domain of policy interventions for antibiotic development, very few specifics ended up in the proposal. While this increases the generality of this work, it is also the reason that the proposal given in Chapter 8 requires the domain-specific helpers that have been relegated to Appendix B.

Another aspect worthy of future research stems from the choice to store functions inside observables, and hence inside contracts. Functions in Haskell are neither serializable nor comparable under equality, which limits possible simulation designs in important ways. Because contracts cannot be compared for equality, an agent cannot determine whether a contract it holds is the same as a contract held by some counterparty; determining contract equivalence would instead require reducing both contracts under all possible event sequences and comparing which sequences render each contract effectively equivalent to zero. Because contracts cannot be serialized, we cannot, for example, serialize a simulation that contains contracts in order to, say, persist its state.
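The equality limitation can be made concrete with a small sketch. `Obs` below is a simplified, hypothetical stand-in for the observable type, not the definition used in Chapter 8.

```haskell
type Time = Int

-- An observable is (here, simplistically) a function of time.
newtype Obs a = Obs (Time -> a)

-- There is no Eq instance for functions, so this cannot compile:
--   instance Eq (Obs a) where
--     Obs f == Obs g = f == g   -- rejected: (Time -> a) has no Eq instance

-- The best we can do is sample both observables at finitely many times,
-- which establishes equality only at the sampled points.
sampleEq :: Eq a => [Time] -> Obs a -> Obs a -> Bool
sampleEq ts (Obs f) (Obs g) = all (\t -> f t == g t) ts
```

Two observables that agree on every sampled time may still diverge elsewhere, which is why syntactic or reduction-based comparison, rather than pointwise sampling, would be needed for genuine contract equivalence.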

10.4 Closing thoughts

In the end, I hold that this thesis has convincingly argued that policy interventions for antibiotic development can be captured as contract offers, and that doing so will help us understand the strengths and weaknesses of various interventions by enabling the use of simulation model alignment to compare them. All this provokes the question of whether such an exercise is possible for policy interventions in general, and not just within the specific domain of antibiotic development. I sincerely hope that I have inspired future researchers to approach this question so that we, as information modelers, do not have to settle for providing disparate decision-support to policy-makers but can aid in unifying it. While access to data, in the words of Harari (2016), used to mean having power, power today is knowing what to ignore. Harmonizing the cacophony of decision-support will therefore become increasingly important in light of the vital questions to come.

Models are, for the most part, caricatures of reality, but if they are good, then, like good caricatures, they portray, though perhaps in distorted manner, some of the features of the real world. The main role of models is not so much to explain and to predict – though ultimately these are the main functions of science – as to polarize thinking and to pose sharp questions. (Kac, 1969)

References

Abbott, T. A., & Vernon, J. A. (2007). The cost of US pharmaceutical price regulation: A financial simulation model of R&D decisions. Managerial and Decision Economics, 28(4/5), 293–306. https://doi.org/10.1002/mde.1342
Almagor, J., Temkin, E., Benenson, I., Fallach, N., & Carmeli, Y. (2018). The impact of antibiotic use on transmission of resistant bacteria in hospitals: Insights from an agent-based model [On behalf of the DRIVE-AB Consortium]. PLOS ONE, 13(5), e0197111. https://doi.org/10.1371/journal.pone.0197111
Andersen, J., Elsborg, E., Henglein, F., Simonsen, J., & Stefansen, C. (2006). Compositional specification of commercial contracts. International Journal on Software Tools for Technology Transfer, 8(6), 485–516. https://doi.org/10.1007/s10009-006-0010-1
Årdal, C., Findlay, D., Savic, M., Carmeli, Y., Gyssens, I., Laxminarayan, R., Outterson, K., & Rex, J. H. (2017). Revitalizing the antibiotic pipeline: Stimulating innovation while driving sustainable use and global access. DRIVE-AB.
Årdal, C., Lacotte, Y., & Ploy, M.-C. (2020). Financing pull mechanisms for antibiotic-related innovation: Opportunities for Europe [On behalf of the European Union Joint Action on Antimicrobial Resistance and Healthcare-Associated Infections (EU-JAMRAI)]. Clinical Infectious Diseases. https://doi.org/10.1093/cid/ciaa153
Axelrod, R., & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211(4489), 1390–1396. https://doi.org/10.1126/science.7466396
Axelrod, R. (1986). An evolutionary approach to norms. The American Political Science Review, 80(4), 1095–1111. https://doi.org/10.2307/1960858
Axtell, R., Axelrod, R., Epstein, J. M., & Cohen, M. D. (1996). Aligning simulation models: A case study and results. Computational and Mathematical Organization Theory, 1(2), 123–141. https://doi.org/10.1007/BF01299065
Bankes, S. C. (2002). Agent-based modeling: A revolution? Proceedings of the National Academy of Sciences, 99(suppl 3), 7199–7200. https://doi.org/10.1073/pnas.072081299
Baquero, F., & Campos, J. (2003). The tragedy of the commons in antimicrobial chemotherapy. Rev Esp Quimioter, 16(1), 11–13. https://pubmed.ncbi.nlm.nih.gov/12750754/

Baraldi, E., Ciabuschi, F., Callegari, S., & Lindahl, O. (2019). Economic incentives for the development of new antibiotics: Report commissioned by the Public Health Agency of Sweden (tech. rep.). http://urn.kb.se/resolve?urn=urn%3Anbn%3Ase%3Auu%3Adiva-375258
Baym, M., Lieberman, T. D., Kelsic, E. D., Chait, R., Gross, R., Yelin, I., & Kishony, R. (2016). Spatiotemporal microbial evolution on antibiotic landscapes. Science, 353(6304), 1147–1151. https://doi.org/10.1126/science.aag0822
Beck, K., & Cunningham, W. (1987). Using pattern languages for object-oriented programs. OOPSLA-87 workshop on the specification and design for object-oriented programming. http://c2.com/doc/oopsla87.html
Bissell, C. (2007). Historical perspectives: The MONIAC, a hydromechanical analog computer of the 1950s. IEEE Control Systems, 27(1), 69–74. https://doi.org/10.1109/MCS.2007.284511
Blau, G. E., Pekny, J. F., Varma, V. A., & Bunch, P. R. (2004). Managing a portfolio of interdependent new product candidates in the pharmaceutical industry. Journal of Product Innovation Management, 21(4), 227–245. https://doi.org/10.1111/j.0737-6782.2004.00075.x
Bloomberg. (2014, March 31). Press announcement – Bloomberg licenses LexiFi's technology to strengthen derivatives and structured products coverage. Retrieved August 11, 2020, from https://www.bloomberg.com/company/press/bloomberg-licenses-lexifis-technology-strengthen-derivatives-structured-products-coverage/
Boucher, H. W., File, T. M., Fowler, V. G., Jezek, A., Rex, J. H., & Outterson, K. (2020). Antibiotic development incentives that reflect societal value of antibiotics. Clinical Infectious Diseases. https://doi.org/10.1093/cid/ciaa092
Box, G. E. (1976). Science and statistics. Journal of the American Statistical Association, 71(356), 791–799. https://doi.org/10.1080/01621459.1976.10480949
Bratman, M. E., Israel, D. J., & Pollack, M. E. (1988). Plans and resource-bounded practical reasoning. Computational Intelligence, 4(3), 349–355. https://doi.org/10.1111/j.1467-8640.1988.tb00284.x
Brunning, A. (2014). An overview of antibiotics. Retrieved August 11, 2020, from https://longitudeprize.org/blog-post/overview-antibiotics
Carter, M., Petter, S., & Randolph, A. (2015). Desperately seeking information in information systems research. ICIS 2015 Proceedings.
Cartwright, N., & Hardie, J. (2012). Evidence-based policy: A practical guide to doing it better. Oxford University Press.
Cartwright, N., & Stegenga, J. (2011). A theory of evidence for evidence-based policy. Evidence, Inference and Enquiry (p. 291). https://doi.org/10.5871/bacad/9780197264843.003.0011

Centers for Disease Control and Prevention. (2013). Antibiotic resistance threats in the United States (tech. rep.). U.S. Department of Health and Human Services. Retrieved July 19, 2020, from https://www.cdc.gov/drugresistance/threat-report-2013/pdf/ar-threats-2013-508.pdf
Chakraborti, A. (2002). Distributions of money in model markets of economy. International Journal of Modern Physics C, 13(10), 1315–1321. https://doi.org/10.1142/S0129183102003905
Chen, P. P.-S. (1976). The entity-relationship model—toward a unified view of data. ACM Transactions on Database Systems, 1(1), 9–36. https://doi.org/10.1145/320434.320440
Chomsky, N. (1956). Three models for the description of language. IRE Transactions on Information Theory, 2(3), 113–124. https://doi.org/10.1109/TIT.1956.1056813
Collins, A. J., & Frydenlund, E. (2016). Agent-based modeling and strategic group formation: A refugee case study. 2016 Winter Simulation Conference (WSC). https://doi.org/10.1109/WSC.2016.7822184
Cook, M. (2004). Universality in elementary cellular automata. Complex Systems, 1–40.
Cooper, M. A., & Shlaes, D. (2011). Fix the antibiotics pipeline. Nature, 472(7341), 32–32. https://doi.org/10.1038/472032a
Croft, S. L. (2005). Public-private partnership: From there to here. Transactions of The Royal Society of Tropical Medicine and Hygiene, 99, S9–S14. https://doi.org/10.1016/j.trstmh.2005.06.008
David, J. S., Gerard, G. J., & McCarthy, W. E. (2002). Design science: Building the future of AIS. American Accounting Association, 69.
Dawid, H., & Neugart, M. (2011). Agent-based models for economic policy design. Eastern Economic Journal, 37(1), 44–50. https://doi.org/10.1057/eej.2010.43
Diekert, F. K. (2012). The tragedy of the commons from a game-theoretic perspective. Sustainability, 4(12), 1776–1786. https://doi.org/10.3390/su4081776
Doran, J. et al. (2001). Intervening to achieve co-operative ecosystem management: Towards an agent based model. Journal of Artificial Societies and Social Simulation, 4(2), 1–21.
DRIVE-AB. (n.d.). About DRIVE-AB. Retrieved July 11, 2020, from http://drive-ab.eu/about/
Durlauf, S. N. (2012). Complexity, economics, and public policy. Politics, Philosophy & Economics, 11(1), 45–75. https://doi.org/10.1177/1470594X11434625
Earnest, D. C., & Frydenlund, E. (2017). Flipping coins and coding turtles. Guide to simulation-based disciplines (pp. 237–259). https://doi.org/10.1007/978-3-319-61264-5_11

Edmonds, B., & Hales, D. (2003). Replication, replication and replication: Some hard lessons from model alignment. Journal of Artificial Societies and Social Simulation, 6(4). http://jasss.soc.surrey.ac.uk/6/4/11.html
Eliopoulos, G. M., Cosgrove, S. E., & Carmeli, Y. (2003). The impact of antimicrobial resistance on health and economic outcomes. Clinical Infectious Diseases, 36(11), 1433–1437. https://doi.org/10.1086/375081
Elliott, C., & Hudak, P. (1997). Functional reactive animation. Proceedings of the second ACM SIGPLAN international conference on Functional programming, 263–273. https://doi.org/10.1145/258948.258973
Epstein, J. M. (2008). Why model? Journal of Artificial Societies and Social Simulation, 11(4), 12. http://jasss.soc.surrey.ac.uk/11/4/12.html
Epstein, J. M., & Axtell, R. (1996). Growing artificial societies: Social science from the bottom up. Brookings Institution Press.
Fagerberg, J., Mowery, D. C., & Nelson, R. R. (2005). The Oxford handbook of innovation. Oxford University Press.
Food and Drug Administration. (2001). Guidance for industry: E 10 choice of control group and related issues in clinical trials. U.S. Department of Health and Human Services.
Foster, K. R., & Grundmann, H. (2006). Do we need to put society first? The potential for tragedy in antimicrobial resistance. PLoS Med, 3(2), e29. https://doi.org/10.1371/journal.pmed.0030029
Gallegati, M., & Kirman, A. (2012). Reconstructing economics. Complexity Economics, 1(1), 5–31. https://doi.org/10.7564/12-coec2
Gamma, E., Helm, R., Johnson, R., & Vlissides, J. (1995). Design patterns: Elements of reusable object-oriented software. Pearson Education.
Geerts, G. L., & McCarthy, W. E. (1997). Modeling business enterprises as value-added process hierarchies with resource-event-agent object templates. Business object design and implementation (pp. 94–113). Springer.
Geerts, G. L., & McCarthy, W. E. (2000a). Augmented intensional reasoning in knowledge-based accounting systems. Journal of Information Systems, 14(2), 127–150. https://doi.org/10.2308/jis.2000.14.2.127
Geerts, G. L., & McCarthy, W. E. (2000b). The ontological foundation of REA enterprise information systems. Annual Meeting of the American Accounting Association, Philadelphia, PA, 362, 127–150.
Geerts, G. L., & McCarthy, W. E. (2002). An ontological analysis of the economic primitives of the extended-REA enterprise information architecture. International Journal of Accounting Information Systems, 3(1), 1–16. https://doi.org/10.1016/S1467-0895(01)00020-3
Geerts, G. L., & McCarthy, W. E. (2006). Policy-level specifications in REA enterprise information systems. Journal of Information Systems, 20, 37–63. https://doi.org/10.2308/jis.2006.20.2.37

Geerts, G. L., & McCarthy, W. E. (2011). Using object templates from the REA accounting model to engineer business processes and tasks. Review of Business Information Systems (RBIS), 5(4), 89. https://doi.org/10.19030/rbis.v5i4.5372
Gilbert, N. (2008). Agent-based models. SAGE Publications.
Gilbert, N., & Troitzsch, K. (2005). Simulation for the social scientist. McGraw-Hill Education (UK).
Grace, C., & Kyle, M. (2009). Comparative advantages of push and pull incentives for technology development: Lessons for neglected disease technology development. Global Forum Update on Research for Health, 6, 147–151.
Gregor, S., & Hevner, A. R. (2013). Positioning and presenting design science research for maximum impact. MIS Quarterly, 37(2). https://www.jstor.org/stable/43825912
Grimm, V., Berger, U., Bastiansen, F., Eliassen, S., Ginot, V., Giske, J., Goss-Custard, J., Grand, T., Heinz, S. K., Huse, G., Huth, A., Jepsen, J. U., Jørgensen, C., Mooij, W. M., Müller, B., Pe'er, G., Piou, C., Railsback, S. F., Robbins, A. M., . . . DeAngelis, D. L. (2006). A standard protocol for describing individual-based and agent-based models. Ecological Modelling, 198(1-2), 115–126. https://doi.org/10.1016/j.ecolmodel.2006.04.023
Grimm, V., Berger, U., DeAngelis, D. L., Polhill, J. G., Giske, J., & Railsback, S. F. (2010). The ODD protocol: A review and first update. Ecological Modelling, 221(23), 2760–2768. https://doi.org/10.1016/j.ecolmodel.2010.08.019
Grimm, V., Railsback, S. F., Vincenot, C. E., Berger, U., Gallagher, C., DeAngelis, D. L., Edmonds, B., Ge, J., Giske, J., Groeneveld, J., Johnston, A. S. A., Milles, A., Nabe-Nielsen, J., Polhill, J. G., Radchuk, V., Rohwäder, M.-S., Stillman, R. A., Thiele, J. C., & Ayllón, D. (2020). The ODD protocol for describing agent-based and other simulation models: A second update to improve clarity, replication, and structural realism. Journal of Artificial Societies and Social Simulation, 23(2), 7. https://doi.org/10.18564/jasss.4259
Hammond, R. A. (2015). Considerations and best practices in agent-based modeling to inform policy. Assessing the use of agent-based models for tobacco regulation. National Academies Press (US).
Harari, Y. N. (2016). Homo Deus: A brief history of tomorrow. Random House.
Harbarth, S., Theuretzbacher, U., & Hackett, J. (2015). Antibiotic research and development: Business as usual? [On behalf of the DRIVE-AB Consortium]. Journal of Antimicrobial Chemotherapy, dkv020. https://doi.org/10.1093/jac/dkv020
Hardin, G. (1968). The tragedy of the commons. Science, 162(3859), 1243–1248. https://doi.org/10.1126/science.162.3859.1243

Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105. https://doi.org/10.2307/25148625
Hewitt, C., Bishop, P., & Steiger, R. (1973). A universal modular actor formalism for artificial intelligence. Advance Papers of the Conference, 3, 235.
Hoffman, S. J., & Outterson, K. (2015). What will it take to address the global threat of antibiotic resistance? The Journal of Law, Medicine & Ethics, 43(2), 363–368. https://doi.org/10.1111/jlme.12253
Hruby, P. (2006). Model-driven design using business patterns. Springer Science & Business Media.
Hudak, P. (1996). Building domain-specific embedded languages. ACM Computing Surveys, 28(4).
Hughes, J. (1989). Why functional programming matters. The Computer Journal, 32(2), 98–107. https://doi.org/10.1093/comjnl/32.2.98
JASSS: How to submit a paper. (2020). Retrieved August 7, 2020, from http://jasss.soc.surrey.ac.uk/admin/submit.html
Kac, M. (1969). Some mathematical models in science. Science, 166(3906), 695–699. https://www.jstor.org/stable/1727775
Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58(9), 697–720. https://doi.org/10.1037/0003-066X.58.9.697
Kellogg, D., Charnes, J. M., & Demirer, R. (1999). Valuation of a biotechnology firm: An application of real-options methodologies. Proc. 3rd Annu. Intl Confer. on Real Options.
Knuth, D. E. (1984). Literate programming. The Computer Journal, 27(2), 97–111. https://doi.org/10.1093/comjnl/27.2.97
Kozak, M. L., & Larsen, J. C. (2018). Economic incentives for antibacterial drug development: Alternative market structures to promote innovation. Antimicrobial Resistance in the 21st Century (pp. 721–753). Springer International Publishing. https://doi.org/10.1007/978-3-319-78538-7_24
Kronlid, C. A., Baraldi, E., Callegari, S., Ciabuschi, F., Lindahl, O., McKeever, S., & Okhravi, C. (2017a). Work package 2, task 9: Preliminary simulation report (tech. rep.).
Kronlid, C. A., Baraldi, E., Callegari, S., Ciabuschi, F., Lindahl, O., McKeever, S., & Okhravi, C. (2017b). Work package 2, task 9: Preliminary simulation report (London) (tech. rep.) [Available upon request].
Kubler, P. (2018). Fast-tracking of new drugs: Getting the balance right. Australian Prescriber, 41(4), 98–99. https://doi.org/10.18773/austprescr.2018.032
Laxminarayan, R., Duse, A., Wattal, C., Zaidi, A. K. M., Wertheim, H. F. L., Sumpradit, N., Vlieghe, E., Hara, G. L., Gould, I. M., Goossens, H., Greko, C., So, A. D., Bigdeli, M., Tomson, G., Woodhouse, W., Ombaka, E., Peralta, A. Q., Qamar, F. N., Mir, F., . . . Cars, O. (2013). Antibiotic resistance—the need for global solutions. The Lancet Infectious Diseases, 13(12), 1057–1098. https://doi.org/10.1016/S1473-3099(13)70318-9
Lempert, R. (2002). Agent-based modeling as organizational and public policy simulators. Proceedings of the National Academy of Sciences, 99, 7195–7196. https://doi.org/10.1073/pnas.072079399
Littmann, J., Buyx, A., & Cars, O. (2015). Antibiotic resistance: An ethical challenge. International Journal of Antimicrobial Agents, 46(4), 359–361. https://doi.org/10.1016/j.ijantimicag.2015.06.010
Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20(2), 130–141. https://doi.org/10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2
Malleson, N., Heppenstall, A., & See, L. (2010). Crime reduction through simulation: An agent-based model of burglary. Computers, Environment and Urban Systems, 34(3), 236–250. https://doi.org/10.1016/j.compenvurbsys.2009.10.005
Manohar, P., Loh, B., & Leptihn, S. (2020). Will the overuse of antibiotics during the coronavirus pandemic accelerate antimicrobial resistance of bacteria? Infectious Microbes & Diseases, Latest Articles. https://doi.org/10.1097/IM9.0000000000000034
March, S. T., & Smith, G. F. (1995). Design and natural science research on information technology. Decision Support Systems, 15(4), 251–266. https://doi.org/10.1016/0167-9236(94)00041-2
Martin, R. C., Riehle, D., & Buschmann, F. (1998). Pattern languages of program design 3. Addison-Wesley.
Massad, E., Lundberg, S., & Yang, H. M. (1993). Modeling and simulating the evolution of resistance against antibiotics. International Journal of Bio-Medical Computing, 33(1), 65–81. https://doi.org/10.1016/0020-7101(93)90060-J
May, R. M. et al. (1976). Simple mathematical models with very complicated dynamics. Nature, 261(5560), 459–467. https://doi.org/10.1038/261459a0
McCarthy, W. E. (1979). An entity-relationship view of accounting models. The Accounting Review, 54(4), 667–686. https://www.jstor.org/stable/245625
McCarthy, W. E. (1980). Construction and use of integrated accounting systems with entity-relationship modelling. Proceedings of the 1st International Conference on the Entity-Relationship Approach to Systems Analysis and Design, 625–637.
McCarthy, W. E. (1982). The REA accounting model: A generalized framework for accounting systems in a shared data environment. Accounting Review, 554–578. https://www.jstor.org/stable/246878

McCarthy, W. E. (1999). Semantic modeling in accounting education, practice, and research: Some progress and impediments. Conceptual Modeling (pp. 144–153). Springer. https://doi.org/10.1007/3-540-48854-5_12
Mitchell, M. (2009). Complexity: A guided tour. OUP USA.
Moore, M. A., Boardman, A. E., Vining, A. R., Weimer, D. L., & Greenberg, D. H. (2004). "Just give me a number!" Practical values for the social discount rate. Journal of Policy Analysis and Management, 23(4), 789–812. https://doi.org/10.1002/pam.20047
Moss, S. (2002). Policy analysis from first principles. Proceedings of the National Academy of Sciences, 99(suppl 3), 7267–7274. https://doi.org/10.1073/pnas.092080699
Mossialos, E. (Ed.). (2010). Policies and incentives for promoting innovation in antibiotic research. European Observatory on Health Systems and Policies.
Nakamura, H., & Johnson, R. E. (1998). Adaptive framework for the REA accounting model. Proceedings of OOPSLA'98 Business Object Workshop IV.
Niazi, M. (2017). Towards a novel unified framework for developing formal, network and validated agent-based simulation models of complex adaptive systems (Doctoral dissertation). University of Stirling, Scotland, UK.
Niazi, M., & Hussain, A. (2011). Agent-based computing from multi-agent systems to agent-based models: A visual survey. Scientometrics, 89(2), 479.
Okhravi, C. (2020). Economics of public antibiotics development. Frontiers in Public Health, 8, 161. https://doi.org/10.3389/fpubh.2020.00161
Okhravi, C., Callegari, S., McKeever, S., Kronlid, C., Baraldi, E., Lindahl, O., & Ciabuschi, F. (2018). Simulating market entry rewards for antibiotics development. The Journal of Law, Medicine & Ethics, 46(1_suppl), 32–42. https://doi.org/10.1177/1073110518782913
Okhravi, C., McKeever, S., Kronlid, C., Baraldi, E., Lindahl, O., & Ciabuschi, F. (2017). Simulating market-oriented policy interventions for stimulating antibiotics development. SpringSim-ANSS 2017. https://doi.org/10.5555/3106388.3106390
O'Neill, J. (2016). Tackling drug-resistant infections globally: Final report and recommendations (tech. rep.). Review on Antimicrobial Resistance.
Østerbye, K. (2004). Structured REA contracts [Position paper at: First International REA Technology Workshop, Copenhagen, Denmark, April 22-24]. http://www.itu.dk/people/kasper/REA2004/pospapers/KasperOsterbye.pdf
Outterson, K., Gopinathan, U., Clift, C., So, A. D., Morel, C. M., & Røttingen, J.-A. (2016). Delinking investment in antibiotic research and development from sales revenues: The challenges of transforming a promising idea into reality. PLOS Medicine, 13(6), e1002043. https://doi.org/10.1371/journal.pmed.1002043
Outterson, K., & Rex, J. H. (2020). Evaluating for-profit public benefit corporations as an additional structure for antibiotic development and commercialization. Translational Research. https://doi.org/10.1016/j.trsl.2020.02.006
Passini, E., Britton, O. J., Lu, H. R., Rohrbacher, J., Hermans, A. N., Gallacher, D. J., Greig, R. J. H., Bueno-Orovio, A., & Rodriguez, B. (2017). Human in silico drug trials demonstrate higher accuracy than animal models in predicting clinical pro-arrhythmic cardiotoxicity. Frontiers in Physiology, 8, 668. https://doi.org/10.3389/fphys.2017.00668
Peffers, K., Tuunanen, T., Gengler, C. E., Rossi, M., Hui, W., Virtanen, V., & Bragge, J. (2006). The design science research process: A model for producing and presenting information systems research. Proceedings of the first international conference on design science research in information systems and technology (DESRIST 2006), 83–106.
Pew Charitable Trusts. (2016). Antibiotics currently in clinical development (tech. rep.). Retrieved January 9, 2017, from http://www.pewtrusts.org/~/media/assets/2016/12/antibiotics_datatable_201605.pdf
Peyton Jones, S. (2008, February 21). Composing contracts: An adventure in financial engineering [Seminar at Ericsson]. Retrieved August 11, 2020, from https://www.youtube.com/watch?v=dd1F6GrivhI
Peyton Jones, S., & Eber, J.-M. (2003). How to write a financial contract.
Peyton Jones, S., Eber, J.-M., & Seward, J. (2000). Composing contracts: An adventure in financial engineering (functional pearl). Proceedings of the Fifth ACM SIGPLAN International Conference on Functional Programming, 35, 280–292. https://doi.org/10.1145/357766.351267
Pierce, B. C. (2002). Types and programming languages. MIT Press.
Polya, G. (2014). How to solve it: A new aspect of mathematical method. Princeton University Press.
Porter, M. E., & Millar, V. E. (1985). How information gives you competitive advantage. Harvard Business Review, 63(4), 149–160.
Prenkert, F., & Følgesvold, A. (2014). Relationship strength and network form: An agent-based simulation of interaction in a business network. Australasian Marketing Journal (AMJ), 22(1), 15–27. https://doi.org/10.1016/j.ausmj.2013.12.004
Pries-Heje, J., Baskerville, R., & Venable, J. R. (2008). Strategies for design science research evaluation. ECIS 2008 Proceedings.
Renwick, M. J., Brogan, D. M., & Mossialos, E. (2016). A systematic review and critical assessment of incentive strategies for discovery and development of novel antibiotics. The Journal of Antibiotics, 69(2), 73. https://doi.org/10.1038/ja.2015.98

Renwick, M. J., Simpkin, V., & Mossialos, E. (2016). Targeting innovation in antibiotic drug discovery and development: The need for a one health – one Europe – one world framework (Vol. 45). European Observatory on Health Systems and Policies.
Rex, J. H., & Outterson, K. (2016). Antibiotic reimbursement in a model delinked from sales: A benchmark-based worldwide approach. The Lancet Infectious Diseases, 16(4), 500–505. https://doi.org/10.1016/S1473-3099(15)00500-9
Rome, B. N., & Kesselheim, A. S. (2019). Transferrable market exclusivity extensions to promote antibiotic development: An economic analysis. Clinical Infectious Diseases. https://doi.org/10.1093/cid/ciz1039
Savic, M., & Årdal, C. (2018). A grant framework as a push incentive to stimulate research and development of new antibiotics. The Journal of Law, Medicine & Ethics, 46(1_suppl), 9–24. https://doi.org/10.1177/1073110518782911
Schaffer, J. (2015). What not to multiply without necessity. Australasian Journal of Philosophy, 93(4), 644–664. https://doi.org/10.1080/00048402.2014.992447
Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1(2), 143–186. https://doi.org/10.1080/0022250X.1971.9989794
Schelling, T. C. (1978). Micromotives and macrobehavior. Norton.
Sertkaya, A., Eyraud, J. T., Birkenbach, A., Franz, C., Ackerley, N., Overton, V., & Outterson, K. (2014). Analytical framework for examining the value of antibacterial products. https://ssrn.com/abstract=2641820
Sertkaya, A., Jessup, A., & Wong, H.-H. (2017). Promoting antibacterial drug development: Select policies and challenges. Applied Health Economics and Health Policy, 15(1), 113–118. https://doi.org/10.1007/s40258-016-0279-5
Sharma, P., Towse, A., & Office of Health Economics (London, England). (2011). New drugs to tackle antimicrobial resistance: Analysis of EU policy options. Office of Health Economics. https://doi.org/10.2139/ssrn.2640028
Shlaes, D. (2010). Antibiotics: The perfect storm. Springer Netherlands.
Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99. https://doi.org/10.2307/1884852
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129. https://doi.org/10.1037/h0042769
Simon, H. A. (1996). The sciences of the artificial. MIT Press.
Singer, A. C., Kirchhelle, C., & Roberts, A. P. (2020). (Inter)nationalising the antibiotic research and development pipeline. The Lancet Infectious Diseases, 20(2), e54–e62. https://doi.org/10.1016/S1473-3099(19)30552-3

158 Smith, M. (2018). Luca Pacioli: The father of accounting. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2320658 Smith, R. (1987). Panel on design methodology. Addendum to the Proceedings on Object-Oriented Programming Systems, Languages and Applica- tions (Addendum), 23, 91–95. https://doi.org/10.1145/62138.62151 So, A. D., Gupta, N., Brahmachari, S. K., Chopra, I., Munos, B., Nathan, C., Outterson, K., Paccaud, J. P., Payne, D. J., Peeling, R. W., Spigelman, M., & Weigelt, J. (2011). Towards new business models for R&D for novel antibiotics. Drug Resistance Updates, 14(2), 88–94. https: //doi.org/10.1016/j.drup.2011.01.006 So, A. D., Ruiz-Esparza, Q., Gupta, N., & Cars, O. (2012). 3Rs for innovating novel antibiotics: Sharing resources, risks, and rewards. BMJ, 344. https://doi.org/10.1136/bmj.e1782 Stefansen, C. (2004). Transforming the resources/events/agents model into a formal process-oriented enterprise framework. First International REA Technology Workshop, Copenhagen, Denmark. Stefansen, C. (2005). A declarative framework for enterprise information sys- tems (Doctoral dissertation). University of Copenhagen (DIKU). Stewart, J. J., Allison, P. N., & Johnson, R. S. (2001). Putting a price on biotechnology. Nature biotechnology, 19(9), 813–817. https : / / doi . org/10.1038/nbt0901-813 Tacconelli, E., Carrara, E., Savoldi, A., Harbarth, S., Mendelson, M., Mon- net, D. L., Pulcini, C., Kahlmeter, G., Kluytmans, J., Carmeli, Y., Ouellette, M., Outterson, K., Patel, J., Cavaleri, M., Cox, E. M., Houchens, C. R., Grayson, M. L., Hansen, P., Singh, N., . . . Zorzet, A. (2018). Discovery, research, and development of new antibiotics: The WHO priority list of antibiotic-resistant bacteria and tuberculosis. The Lancet Infectious Diseases, 18(3), 318–327. https://doi.org/10.1016/S1473-3099(17)30753-3 Thaler, J., & Siebers, P.-O. (2019). The art of iterating: Update-strategies in agent-based simulation. 
Social Simulation for a Digital Society: Ap- plications and Innovations in Computational Social Science (pp. 21– 36). Springer International Publishing. https://doi.org/10.1007/978- 3-030-30298-6_3 Theuretzbacher, U., Outterson, K., Engel, A., & Karlén, A. (2019). The global preclinical antibacterial pipeline. Nature Reviews Microbiology, 1–11. Towse, A., Hoyle, C. K., Goodall, J., Hirsch, M., Mestre-Ferrandiz, J., & Rex, J. H. (2017). Time for a change in how new antibiotics are reimbursed: Development of an insurance framework for funding new antibiotics based on a policy of risk mitigation. Health Policy. https://doi.org/10. 1016/j.healthpol.2017.07.011 Towse, A., & Sharma, P. (2011). Incentives for R&D for new antimicrobial drugs. International Journal of the Economics of Business, 18(2), 331–350. https://doi.org/10.1080/13571516.2011.584434

159 Van de Ven, A. H. (1999). The innovation journey. Oxford University Press. Van den Bogaard, A. E., & Stobberingh, E. E. (2000). Epidemiology of resis- tance to antibiotics: Links between animals and humans. International Journal of Antimicrobial Agents, 14(4), 327–335. https://doi.org/10. 1016/S0924-8579(00)00145-X Van Deursen, A., & Klint, P. (2002). Domain-specific language design requires feature descriptions. Journal of computing and information technol- ogy, 10(1), 1–17. Venable, J. (2006). The role of theory and theorising in design science re- search. Proceedings of the 1st International Conference on Design Science in Information Systems and Technology (DESRIST 2006),1– 18. https://doi.org/20.500.11937/20936 Villiger, R., & Bogdan, B. (2005). Getting real about valuations in biotech. Nature Biotechnology, 23(4), 423–428. https : / / doi . org / 10 . 1038 / nbt0405-423 Weber, R. (1987). Toward a theory of artifacts: A paradigmatic base for infor- mation systems research. Journal of Information Systems, 1(2), 3. Wooldridge, M. (2009). An introduction to multiagent systems. Wiley. World Health Organization. (2002). The importance of pharmacovigilance: Safety monitoring of medicinal products (tech. rep.). World Health Organization. Retrieved August 11, 2020, from https://apps.who.int/ iris/handle/10665/42493 World Health Organization. (2014). Antimicrobial resistance: Global report on surveillance 2014 (tech. rep.). Retrieved August 11, 2020, from http://www.who.int/drugresistance/documents/surveillancereport/en/ World Health Organization. (2015). Antimicrobial resistance: Fact sheet n◦194 (tech. rep.). Retrieved August 11, 2020, from http://www.who. int/mediacentre/factsheets/fs194/en/

Appendix A. Complete language

{-# OPTIONS_GHC -fwarn-incomplete-patterns #-}
{-# LANGUAGE GADTs #-}

module Language where

import Prelude hiding (and)
import Control.Applicative
import Data.Bifunctor
import Data.Bool

------ Contracts ------
-- Fundamental combinators.
data Contract e s
  = Zero
  | One s
  | Or s s (Contract e s) (Contract e s)
  | And (Contract e s) (Contract e s)
  | Whentil (Obs e Bool) (Contract e s) (Contract e s)
  | Before (Obs e Bool) (Contract e s) (Contract e s) (Contract e s)
  | Everytime (Obs e Bool) (Contract e s)
  | IfElse (Obs e Bool) (Contract e s) (Contract e s)
  | Scale (Obs e Double) (Contract e s)

-- Fundamental combinator aliases.
zero = Zero
one = One
or = Or
and = And
whentil = Whentil
before = Before
everytime = Everytime
ifElse = IfElse
scale = Scale

-- Non-fundamental combinators.
andThen c1 c2 = before (constObs False) c1 c2 zero
when o c = whentil o zero c
until o c = whentil o c zero
whentil' o c1 c2 = before o c1 c2 c2

when' o c = before o zero c zero
until' o c = before o c zero zero

-- Alias for simple contracts.
type Montract e = Contract e e

-- Contract as profunctor.
instance Profunctor Contract where
  dimap fe fs Zero = Zero
  dimap fe fs (One s) = One (fs s)
  dimap fe fs (Or s1 s2 c1 c2) =
    Or (fs s1) (fs s2) (dimap fe fs c1) (dimap fe fs c2)
  dimap fe fs (And c1 c2) = And (dimap fe fs c1) (dimap fe fs c2)
  dimap fe fs (Whentil o c1 c2) =
    Whentil (lmap fe o) (dimap fe fs c1) (dimap fe fs c2)
  dimap fe fs (Before o c1 c2 c3) =
    Before (lmap fe o) (dimap fe fs c1) (dimap fe fs c2) (dimap fe fs c3)
  dimap fe fs (Everytime o c) = Everytime (lmap fe o) (dimap fe fs c)
  dimap fe fs (IfElse o c1 c2) =
    IfElse (lmap fe o) (dimap fe fs c1) (dimap fe fs c2)
  dimap fe fs (Scale o c) = Scale (lmap fe o) (dimap fe fs c)

-- Reduce contract based on event.
reduce
  :: (Double -> s -> s) -- Scaling function.
  -> (e -> s -> Bool)   -- Settling function.
  -> e
  -> Contract e s
  -> Contract e s
reduce f g e Zero = Zero
reduce f g e (One s)
  | g e (f 1 s) = Zero -- Apply f in case of scaling.
  | otherwise   = One s
reduce f g e (Or s1 s2 c1 c2)
  | g e (f 1 s1) = reduce f g e c1
  | g e (f 1 s2) = reduce f g e c2
  | otherwise    = Or s1 s2 (refresh e c1) (refresh e c2)
reduce f g e (And c1 c2) = And (reduce f g e c1) (reduce f g e c2)
reduce f g e (Whentil o c1 c2)
  | value (update e o) = reduce f g e c2
  | otherwise =
      Whentil (update e o) (reduce f g e c1) (refresh e c2)
reduce f g e (Before o c1 c2 c3)
  | value (update e o)     = reduce f g e c3
  | done (reduce f g e c1) = reduce f g e c2
  | otherwise =
      Before (update e o) (reduce f g e c1) (refresh e c2) (refresh e c3)
reduce f g e (Everytime o c)
  | not (value o) && value (update e o) =
      And (Everytime (update e o) (refresh e c)) (reduce f g e c)
  | otherwise = Everytime (update e o) (refresh e c)
reduce f g e (IfElse o c1 c2)
  | value (update e o) = reduce f g e c1
  | otherwise          = reduce f g e c2
reduce f g e (Scale o c) = Scale o' (reduce f' g e c) where
  o' = update e o
  f' x = f x . f (value o') -- Compose with scaling function.

-- Updating observables in a contract based on event.
refresh :: e -> Contract e s -> Contract e s
refresh _ Zero = Zero
refresh _ (One t) = One t
refresh e (Or s1 s2 c1 c2) = Or s1 s2 (refresh e c1) (refresh e c2)
refresh e (And c1 c2) = And (refresh e c1) (refresh e c2)
refresh e (Whentil o c1 c2) =
  Whentil (update e o) (refresh e c1) (refresh e c2)
refresh e (Before o c1 c2 c3) =
  Before (update e o) (refresh e c1) (refresh e c2) (refresh e c3)
refresh e (Everytime o c) = Everytime (update e o) (refresh e c)
refresh e (IfElse o c1 c2) =
  IfElse (update e o) (refresh e c1) (refresh e c2)
refresh e (Scale o c) = Scale (update e o) (refresh e c)

-- Checks if contract is effectively zero.
done :: Contract e s -> Bool
done Zero = True
done (One {}) = False
done (Or _ _ c1 c2) = False
done (And c1 c2) = done c1 && done c2
done (Whentil o c1 c2) = (done c1 && done c2) || (value o && done c2)
done (Before o c1 c2 c3)
  | not (value o) = done c1 && done c2
  | otherwise     = done c3
done (Everytime o c) = not (value o)
done (IfElse o c1 c2)
  | value o   = done c1
  | otherwise = done c2
done (Scale o c) = done c
------

------ Observables ------
data Obs e v where
  Obs   :: v -> (e -> v -> v) -> Obs e v
  UnOp  :: (a -> b) -> Obs e a -> Obs e b
  BinOp :: (a -> b -> c) -> Obs e a -> Obs e b -> Obs e c

-- Extract value from observable.
value :: Obs e v -> v
value (Obs x _) = x
value (UnOp f o1) = f (value o1)
value (BinOp f o1 o2) = f (value o1) (value o2)

-- Update observable based on event.
update :: e -> Obs e v -> Obs e v
update e (Obs x f) = Obs (f e x) f
update e (UnOp f o1) = UnOp f (update e o1)
update e (BinOp f o1 o2) = BinOp f (update e o1) (update e o2)
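To make the `value`/`update` state-threading concrete, the following is a minimal self-contained sketch of the `Obs` fragment above, together with a hypothetical event-counting observable (`countObs` is illustrative only and is not part of the thesis language):

```haskell
{-# LANGUAGE GADTs #-}

-- Re-statement of the Obs fragment, for a standalone demo.
data Obs e v where
  Obs   :: v -> (e -> v -> v) -> Obs e v
  UnOp  :: (a -> b) -> Obs e a -> Obs e b
  BinOp :: (a -> b -> c) -> Obs e a -> Obs e b -> Obs e c

value :: Obs e v -> v
value (Obs x _)       = x
value (UnOp f o)      = f (value o)
value (BinOp f o1 o2) = f (value o1) (value o2)

update :: e -> Obs e v -> Obs e v
update e (Obs x f)       = Obs (f e x) f
update e (UnOp f o)      = UnOp f (update e o)
update e (BinOp f o1 o2) = BinOp f (update e o1) (update e o2)

-- Hypothetical observable: counts events, ignoring their payload.
countObs :: Obs e Int
countObs = Obs 0 (\_ n -> n + 1)

main :: IO ()
main = print (value (update () (update () countObs))) -- prints 2
```

Each `update` folds one event into the observable's state, and `value` merely projects the current state; this is what lets `reduce` and `refresh` thread events through contracts without re-evaluating any history.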

-- The constant observable.
constObs :: v -> Obs e v
constObs x = Obs x (flip const)

-- Observable as profunctor.
instance Profunctor Obs where
  dimap fe fv (Obs v f) = fmap fv (Obs v (\e x -> f (fe e) x))
  dimap fe fv (UnOp f o) = fmap (fv . f) (lmap fe o)
  dimap fe fv (BinOp f o1 o2) =
    liftA2 ((fv .) . f) (lmap fe o1) (lmap fe o2)

-- Observable as functor.
-- Allows lifting unary functions.
instance Functor (Obs e) where
  fmap f o = UnOp f o

-- Observable as applicative functor.
-- Allows lifting binary and ternary functions.
instance Applicative (Obs e) where
  pure x = constObs x
  liftA2 f o1 o2 = BinOp f o1 o2

-- Observable relations.
(%==) :: Eq a => Obs e a -> Obs e a -> Obs e Bool
(%==) o1 o2 = liftA2 (==) o1 o2
(%<=) o1 o2 = liftA2 (<=) o1 o2
(%>=) o1 o2 = liftA2 (>=) o1 o2
(%<) o1 o2 = liftA2 (<) o1 o2
(%>) o1 o2 = liftA2 (>) o1 o2

-- Observable arithmetic expressions.
instance Num a => Num (Obs e a) where
  fromInteger x = constObs (fromInteger x)
  (+) o1 o2 = liftA2 (+) o1 o2
  (-) o1 o2 = liftA2 (-) o1 o2
  (*) o1 o2 = liftA2 (*) o1 o2
  abs o = fmap abs o
  signum o = fmap signum o

-- Observable conditionals.
obsIf :: Obs e Bool -> Obs e a -> Obs e a -> Obs e a
obsIf = liftA3 if'
obsBool :: Obs e a -> Obs e a -> Obs e Bool -> Obs e a
obsBool = liftA3 bool

-- Boolean operations.
obsNot :: Obs e Bool -> Obs e Bool
obsNot = fmap not
(%&&) :: Obs e Bool -> Obs e Bool -> Obs e Bool
(%&&) = liftA2 (&&)
(%||) :: Obs e Bool -> Obs e Bool -> Obs e Bool
(%||) = liftA2 (||)
------

------ REA-like events ------
data Event k p r e a
  = Transfer e a a (ResourceTransfer k r)
  | Transform Type e a (ResourceTransformation k p r)
  | Choose e a
  deriving (Eq)

-- Transformations either produce or consume.
data Type = Produce | Consume deriving (Eq)

-- Resource in transfer.
data ResourceTransfer k r
  = CommodityTransfer k
  | RefinableTransfer r
  deriving (Eq)

-- Resource in transformation.
data ResourceTransformation k p r
  = CommodityTransformation k
  | RefinableTransformation r p
  deriving (Eq)

-- Event as covariant trifunctor.
instance Trifunctor (Event k p) where
  trimap fr fe fa (Transfer e a1 a2 r) =
    Transfer (fe e) (fa a1) (fa a2) (second fr r)
  trimap fr fe fa (Transform t e a r) =
    Transform t (fe e) (fa a) (trimap id id fr r)
  trimap fr fe fa (Choose e a) = Choose (fe e) (fa a)

-- Resource in transfer as covariant bifunctor.
instance Bifunctor ResourceTransfer where
  bimap f g (CommodityTransfer k) = CommodityTransfer (f k)
  bimap f g (RefinableTransfer r) = RefinableTransfer (g r)

-- Resource in transformation as covariant trifunctor.
instance Trifunctor ResourceTransformation where
  trimap f g h (CommodityTransformation k) = CommodityTransformation (f k)
  trimap f g h (RefinableTransformation r p) =
    RefinableTransformation (h r) (g p)
------

------ Actualization ------
actualize
  :: (r1 -> r2) -> (e1 -> e2) -> (a1 -> a2)
  -> (r3 -> r4) -> (e3 -> e4) -> (a3 -> a4)
  -> Contract (Event k p r2 e2 a2) (Event k p r3 e3 a3)
  -> Contract (Event k p r1 e1 a1) (Event k p r4 e4 a4)
actualize fr' fe' fa' fr fe fa =
  dimap (trimap fr' fe' fa') (trimap fr fe fa)

-- Ideality is useful when defining isomorphisms
-- between REA triads.
data Ideal s k = Ideal s | Known k deriving (Eq, Show)
------

------ General assumptions ------
if' :: Bool -> a -> a -> a
if' b x y = if b then x else y

class Trifunctor f where
  trimap :: (a -> a') -> (b -> b') -> (c -> c') -> (f a b c -> f a' b' c')

class Filterable f where
  filterr :: (a -> Bool) -> f a -> f a

class Profunctor p where
  dimap :: (a -> b) -> (c -> d) -> p b c -> p a d
  dimap f g = lmap f . rmap g
  lmap :: (a -> b) -> p b c -> p a c
  lmap f = dimap f id
  rmap :: (b -> c) -> p a b -> p a c
  rmap = dimap id
------

Appendix B. Executable example

{-# OPTIONS_GHC -fwarn-incomplete-patterns #-}

module Example where

import Prelude hiding (and)
import Language

------ Basic domain types ------
data Project = Project Phase deriving (Eq)
data Phase = PC | P1 | P2 | P3 | App | M1 deriving (Eq)
data Cash = Cash Currency Double deriving (Eq)
data Currency = EUR | SEK deriving (Eq)
eur = Cash EUR -- Helper function
sek = Cash SEK -- Helper function
data Indication = ABOM | CUTI deriving (Eq)
data Prop = InPhase Phase | Targets Indication deriving (Eq)

-- "Known" types used in offers.
type AgentId = Int
type ProjectId = Int
type EventId = (EventIdPrefix, Int)
type ResourceId = Int
type EventIdPrefix = Int

-- "Ideal" types used in offers.
data Bilateral = Benefactor | Beneficiary deriving (Eq)
data Single = Single deriving (Eq)
------

------ Helpers for building events ------
-- Commodity-specific transfer helper for ideals.
transferC
  :: e -> a -> a -> k
  -> Event k p (Ideal r r') (Ideal e e') (Ideal a a')
transferC e a1 a2 k =
  Transfer (Ideal e) (Ideal a1) (Ideal a2) (CommodityTransfer k)

-- Refinable-specific transfer helper for ideals.
transferR
  :: e -> a -> a -> r
  -> Event k p (Ideal r r') (Ideal e e') (Ideal a a')
transferR e a1 a2 r =
  Transfer (Ideal e) (Ideal a1) (Ideal a2) (RefinableTransfer $ Ideal r)

-- Refinable-specific transformation helper for ideals.
transformR
  :: e -> a -> Type -> r -> p
  -> Event k p (Ideal r r') (Ideal e e') (Ideal a a')
transformR e a t r p =
  Transform t (Ideal e) (Ideal a) (RefinableTransformation (Ideal r) p)
------

------ Example observables ------
hasPropObs :: r -> p -> Obs (Event k p r e a) Bool
hasPropObs r p = Obs False f where
  f (Transform Produce _ _ (RefinableTransformation r p)) _ = True
  f (Transform Consume _ _ (RefinableTransformation r p)) _ = False
  f _ x = x
hasPropsObs :: r -> [p] -> Obs (Event k p (Ideal r r') e a) Bool
hasPropsObs r =
  foldr (%&&) (constObs True) . map (hasPropObs $ Ideal r)
  -- constObs True is the identity of (%&&).
grantsReceivedObs :: Obs (Event k p (Ideal Single r') e a) Double
grantsReceivedObs = undefined
------

------ Example offers ------
-- Fully delinked market entry reward.
-- Defined without helper functions to demonstrate verbosity.
fdmer
  :: k
  -> Montract (Event k Prop
       (Ideal Single r)
       (Ideal Int e)
       (Ideal Bilateral a))
fdmer k =
  when
    ((%&&)
      (hasPropObs (Ideal Single) (Targets CUTI))
      (hasPropObs (Ideal Single) (InPhase M1)))
    (and
      (one (Transfer
        (Ideal 1)
        (Ideal Beneficiary)
        (Ideal Benefactor)
        (RefinableTransfer $ Ideal Single)))
      (one (Transfer
        (Ideal 2)
        (Ideal Benefactor)
        (Ideal Beneficiary)
        (CommodityTransfer $ k))))

-- Partially delinked prize.
pdprize
  :: [p] -> k
  -> Montract (Event k p
       (Ideal Single r)
       (Ideal Int e2)
       (Ideal Bilateral a))
pdprize ps k =
  when (hasPropsObs Single ps)
    (one $ transferC 1 Benefactor Beneficiary k)

-- Fully delinked prize.
fdprize
  :: [p] -> k
  -> Montract (Event k p
       (Ideal Single r)
       (Ideal Int e2)
       (Ideal Bilateral a))
fdprize ps k =
  when (hasPropsObs Single ps)
    (and
      (one $ transferC 1 Benefactor Beneficiary k)
      (one $ transferR 2 Beneficiary Benefactor Single))

-- Clawbacks.
clawback
  :: [p] -> k
  -> Montract (Event k p
       (Ideal Single r)
       (Ideal Int e2)
       (Ideal Bilateral a))
clawback ps k =
  when (hasPropsObs Single ps)
    (scale grantsReceivedObs
      (one $ transferC 3 Beneficiary Benefactor k))

-- Partially delinked prize with clawbacks.
-- Defined by composing offers.
clawbackPrize
  :: [p] -> k -> k
  -> Montract (Event k p
       (Ideal Single r)
       (Ideal Int e2)
       (Ideal Bilateral a))
clawbackPrize ps k1 k2 = pdprize ps k1 `and` clawback ps k2
------

------ Domain-specific idealizers ------
-- Could be generalized to Monus (Monoid with inverse) if
-- (Ideal a a) of which strings under prepend should be a
-- member.
specializeByTag :: t -> Ideal v (t, v) -> (t, v)
specializeByTag t (Ideal v) = (t, v)
specializeByTag _ (Known x) = x
generalizeByTag :: Eq t => t -> (t, v) -> Ideal v (t, v)
generalizeByTag t1 (t2, v)
  | t1 == t2  = Ideal v
  | otherwise = Known (t2, v)
specializeBilateral :: a -> a -> Ideal Bilateral a -> a
specializeBilateral x _ (Ideal Benefactor) = x
specializeBilateral _ y (Ideal Beneficiary) = y
specializeBilateral _ _ (Known z) = z
generalizeBilateral :: Eq a => a -> a -> a -> Ideal Bilateral a
generalizeBilateral x y z
  | z == x    = Ideal Benefactor
  | z == y    = Ideal Beneficiary
  | otherwise = Known z
specializeSingle :: r -> Ideal Single r -> r
specializeSingle x (Ideal Single) = x
specializeSingle _ (Known y) = y
generalizeSingle :: Eq r => r -> r -> Ideal Single r
generalizeSingle x y
  | x == y    = Ideal Single
  | otherwise = Known y
------
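As a sanity check on the idealizers above, generalizing after specializing acts as a retraction: ideal values survive the round trip, and already-known values are passed through unchanged. A self-contained sketch, re-stating only `Ideal` and the tag idealizers:

```haskell
-- Re-statement of Ideal and the tag-based idealizers, for a standalone demo.
data Ideal s k = Ideal s | Known k deriving (Eq, Show)

specializeByTag :: t -> Ideal v (t, v) -> (t, v)
specializeByTag t (Ideal v) = (t, v)
specializeByTag _ (Known x) = x

generalizeByTag :: Eq t => t -> (t, v) -> Ideal v (t, v)
generalizeByTag t1 (t2, v)
  | t1 == t2  = Ideal v
  | otherwise = Known (t2, v)

main :: IO ()
main = do
  -- An ideal value specialized under tag 1 generalizes back to itself.
  print (generalizeByTag (1 :: Int) (specializeByTag 1 (Ideal 'a')))
  -- A known value carrying a different tag is passed through untouched.
  print (generalizeByTag (1 :: Int) (specializeByTag 1 (Known (2, 'b'))))
```

The same retraction property holds for `generalizeBilateral`/`specializeBilateral` and `generalizeSingle`/`specializeSingle`, which is what makes `actualize` safe to run in both directions.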

------ Domain-specific actualizer ------
actualizeBilateralSingleResource
  :: (Eq r, Eq t, Eq a)
  => r -> t -> a -> a
  -> Montract (Event k p
       (Ideal Single r)
       (Ideal e (t, e))
       (Ideal Bilateral a))
  -> Montract (Event k p r (t, e) a)
actualizeBilateralSingleResource r e a1 a2 =
  actualize
    (generalizeSingle r)
    (generalizeByTag e)
    (generalizeBilateral a1 a2)
    (specializeSingle r)
    (specializeByTag e)
    (specializeBilateral a1 a2)
------

------ Actualization examples ------
proj = 1 :: ProjectId
pfx = 1 :: EventIdPrefix
a1 = 1 :: AgentId
a2 = 2 :: AgentId
o1 = pdprize [InPhase M1] (eur 100)
c1 = actualizeBilateralSingleResource proj pfx a1 a2 o1
o2 = fdprize [InPhase P2] (eur 100)
c2 = actualizeBilateralSingleResource proj pfx a1 a2 o2
o3 = clawbackPrize [InPhase M1] (eur 100) (eur 0.75)
c3 = actualizeBilateralSingleResource proj pfx a1 a2 o3
------

------ Reduction examples ------
e1 :: Event Cash Prop ResourceId EventId AgentId
e1 = Transfer (1, 1) 1 2 (CommodityTransfer $ eur 2)
c1' = reduce scaling (==) e1 c1
c2' = reduce scaling (==) e1 c2
c3' = reduce scaling (==) e1 c3

-- Assuming some definition of scaling.
scaling :: Double -> Event Cash p r e a -> Event Cash p r e a
scaling x (Transfer e a1 a2 (CommodityTransfer (Cash c y))) =
  Transfer e a1 a2 (CommodityTransfer (Cash c (x * y)))
scaling x (Transform t e a (CommodityTransformation (Cash c y))) =
  Transform t e a (CommodityTransformation (Cash c (x * y)))
scaling _ e = e -- Don't scale anything else.
------
