
Advances in Technology Commercialization Models

ADVANCES IN TECHNOLOGY COMMERCIALIZATION MODELS

DISSERTATION

to obtain the degree of doctor

at the University of Twente, under the authority of

the rector magnificus, prof.dr. H. Brinksma,

on account of the decision of the graduation committee,

to be publicly defended

on Friday, October 14, 2016 at 16.45 by

Yorgos D. Marinakis

born on May 21, 1963, in Oxnard, California, USA.


The promotors and assistant promotor have approved this dissertation.

Copyright University of Twente 2016

ISBN 978-90-365-4211-1


This dissertation was printed by Gildeprint, Enschede.

ISBN 978-90-365-4211-1, DOI 10.3990/1.9789036542111


TABLE OF CONTENTS

Abstract

Introduction

Lists of Publications

1. Forecasting Technology Diffusion with the Richards Model

2. How do Technology Adoption Models Diffuse?

3. Explaining New Product Adoption at the Base of the Pyramid

4. The Precautionary Principle: what is it?, where did it come from?, how should we use it?

5. Indexes of serious adverse events reported in interventional clinical trials

Conclusion


ABSTRACT

Through the lens of the philosophy of science, we reconsider things that are currently taken for granted and locate issues that are not currently being treated. In general, that lens has been focused more on views of scientific theories than on theories of models. A philosophy of science-type approach to technology commercialization models is underrepresented in the academic literature. In this dissertation I investigated whether the study of technology commercialization models can benefit from a philosophy of science-type approach. I proceeded by addressing fundamental questions about the efficacy, ontology and epistemology of selected technology commercialization models that are unaddressed by the field of the Management of Technology. These selected models are the technology diffusion model, the Technology Acceptance Model, the Precautionary Principle, and economic index numbers as applied to technology commercialization. I contributed to the literature by authoring a series of papers that make philosophy of science-based recommendations for improving the efficacy of these models. At the same time, through these authored works, I increased our understanding of their ontology and epistemology. The importance of improving the efficacy of models is central to advancing the field. The importance of improving our understanding of their ontology and epistemology derives from the onto-epistemic perspective, which holds that how we think about the world and what the world is are interdependent. For example, we think models are representational (observational), but in fact they may be interventional. This result has bearing on our notions of the objectivity of the observer, the influence of the observer's logic on his observations, and the relation between understanding and truth. I concluded that the study of technology commercialization models can benefit from a philosophy of science-type approach.

Keywords: index numbers; onto-epistemic; philosophy of science; Precautionary Principle; Technology Acceptance Model; technology commercialization; technology diffusion


INTRODUCTION

The importance of models and their improvement or creation is widely recognized in the sciences1 (Frigg and Hartmann 2006, Contessa 2007), including in the academic field of management. Science, as it is practiced, is centered around models (Giere 2010, Potochnik 2012). It has been suggested that scientific models link theory and experiment, such that we cannot understand the relationship between theory and experiment without studying the process of constructing scientific models (Portides 2011). It has been argued that knowledge about the nature and purpose of models, or metamodel knowledge, is a prerequisite both to understanding the nature of science (Adúriz-Bravo 2013) and to using and understanding scientific models (Portides 2007). Academic courses are often designed around a few basic models (Schwarz and White 2005). In management, the field known as "management science"2 in particular has allegedly been "dominated by an empiricist philosophy that has led it to see quantitative modelling and statistical analysis as the only legitimate type of research method" (Mingers 2006: 202).

Epistemology and Ontology of Models

What is a model (what is the ontology of models) and how reliable is the knowledge it produces (what is the epistemology of models)? Nearly all discussions of scientific models are directed to the question of how scientific models represent, rather than to the issue of their ontologies (Ducheyne 2008) or epistemologies. Beginning at least in 1934 with the publication of Rudolf Carnap's Logische Syntax der Sprache (Carnap 1934), models were viewed as alternative interpretations of an axiomatic calculus, where the calculus comprised a theory (Frigg and Hartmann 2008). More recently, models were viewed as set-theoretic structures3 (Suppes 1961), or at least as something that can be represented by set-theoretic structures (Da Costa and French 2003, Contessa 2006), that satisfy linguistic formulations of a theory. Another researcher asserted that models are fictions because they are not physical objects, but that they are not merely set-theoretical structures because they would be physical things if they were real (Frigg 2010). This view treated models as imagined physical systems or hypothetical entities. Yet another view treated models as epistemic representations of certain aspects of the world, vis-à-vis denotations (the London Underground brand logo is a denotation, whereas a map of the Underground is an epistemic representation, as a user can perform surrogative inferences from the map to the network) (Contessa 2007, Suárez 2010). At their cores, these are all treatments of how models represent. In contrast, in a genuinely ontological treatment, models have been viewed as mental representations (Morgan 2014) that allow surrogative reasoning (i.e., that allow a user to perform specific inferences from the model to the system) leading to empirically testable results, whereby an empirically adequate model is said to be scientific (Ducheyne 2008). Here, the close relation to empirical results is asserted to render a model not fictional. Finally, Ducheyne (2012) agrees with Contessa (2007) that denotation is too weak to provide a standard for scientific representation. Rejecting pragmatic similarity, partial isomorphism and homomorphism, Ducheyne (2012: 83, note 13) instead offered the idea of a pragmatic higher-order limiting case, where calling a model (M) "a pragmatic limiting case of its target means that: (1) M provides a ceteris paribus and ceteris absentibus conceptualization of its target—i.e. it treats its target in a highly abstracted and idealized way, as it cuts loose from the complexity of the empirical world and deliberately distorts it—and (2) M allows for the inference of certain relations which are not inferable from the target itself, but which hold approximately for the target relative to a purpose P."

1 "We use the word 'science' here in its widest sense, including all theoretical knowledge, no matter whether in the field of natural sciences or in the field of the social sciences and the so-called humanities, and no matter whether it is knowledge found by the application of special scientific procedures, or knowledge based on common sense in everyday life" (Carnap 1991 [1938]: 394-395).

2 Management science is "taken to include management science, operational research (OR), information systems, systems thinking and other management disciplines insofar as they adopt an approach towards investigating and intervening in the organizational world based on explicit quantitative modelling" (Mingers 2006: 202).

Why study the epistemology of models? Because we want to know the reliability of the knowledge we are deriving from them. Why study the ontology of models? Because knowing what models are will help us evaluate the knowledge that we are deriving from them. For example, suppose models are epistemic representations (Bolinska 2013). Then we can ask, what is the epistemic value of representationally inadequate models (Eronen and van Riel 2015)? An epistemological approach also helps us avoid metaphysical essentialism (Love 2009). Understanding the nature of models as surrogative approximations enables us to ask pointed questions based on that nature, such as how the manipulation of surrogate systems provides an empirical understanding of the world, and how approximations can provide explanations (Kuorikoski and Ylikoski 2015).

3 "A structure S = [U, R] consists of a non-empty set U of objects (the domain of the structure), and a non-empty indexed set R of relations on U. This definition does not assume anything about the nature of the objects in U" (Frigg 2010: 253).

The Philosophy of Science

Scientific models are used throughout the sciences, and their algebra is studied in mathematics (Hodges 1997, Marker 2002), but their ontologies and epistemologies are studied in the field known as the philosophy of science. Chang (1999) proposed thinking about the philosophy of science as a venue for addressing general questions that science could address but did not, due to scientific specialization. In this complementary role, the philosophy of science reconsiders things that are currently being taken for granted and locates issues that are not currently being treated. Chang also distinguished the social science perspective, which seeks to discredit science as a social construct, from the philosophy of science perspective, which (in part) seeks to uncover scientific theory that is epistemologically unfounded. In other words, the philosophy of science provides a venue for original scientific thinking. Like deconstructive postmodernism, it makes us critically aware of our assumptions. Unlike deconstructive postmodernism, it is not nihilistic; rather, the philosophy of science institutionalizes Nietzsche's moment of eternal recurrence (as interpreted by Vattimo 1988 [1985]: 166). Chang (1999: 414) also stated that "...specialist science cannot afford to be completely open-minded. Some elements of knowledge must be taken for granted, since they have to be used as foundations or tools for studying other things. This also means that some ideas and questions must be suppressed, since they are heterodox enough to contradict or destabilize those items of knowledge that need to be taken for granted. Such are the necessities of specialist science..." The philosophy of science probes the soundness of the foundations of science. It often asks questions that are out of bounds of existing disciplines.


One such ontologically-focused out-of-bounds question is whether models are science or whether they are technology.4 This is important because science is observational (representational), whereas technology is interventional. What if we are intervening when we think we are observing? What does that mean for our results? In terms of epistemology, we obtain our understanding of the world through models that are incomplete approximations or even fictions; thus we must either accept that we have not achieved understanding, or we must sever the link between truth and understanding (and accept that we achieved understanding without achieving the truth; De Regt 2015). The latter choice takes us to the onto-epistemic perspective. I discuss this and other epistemological questions in Chapter Two.

Technology Commercialization Models

The present dissertation in part comprises a philosophy of science approach to technology commercialization models. Technology commercialization is "a process by which a new technology is converted into products, services, or activities that have market value" (Hsu and Chang 2013: 476). It comprises "product design and marketing based on developed technology or technology transfer through licensing or other cooperative arrangements" (Park and Ryu 2015: 340). Modes of technology commercialization include: "internal approaches, quasi-internal approaches (e.g. incubators), university research parks, regional clusters, academic spin-offs and start-ups, licensing, contract research and consultancy, corporate venture capital, and open science and innovation" (Markman et al. 2008: 1401). Technology commercialization is said to comprise three main stages (Park and Ryu 2015): researching potential markets for new technologies (Rogers 2010, Marinakis 2012, Cho and Lee 2013); designing new products and/or services for those potential markets using those new technologies (Marquis 1969, Abernathy and Clark 1985, Abernathy and Utterback 1978, Linton and Walsh 2003, Verganti 2011, Wentzel et al. 2013, Marinakis et al. 2014, Walsh et al. 2015); and marketing those new products and/or services to those potential markets. Models have also been developed for factors that affect commercialization results (Sohn and Moon 2003) and for commercialization paths for "misfit" technologies (Anokhin et al. 2011).

4 The same question has been asked about Operations Research and Management Science (Mingers 2000).

Technology commercialization began in earnest in the United States in 1980 with the Bayh-Dole Act. However, the Bayh-Dole Act had its roots in a move by President Roosevelt during World War II. To bring technology to bear on the war effort, President Roosevelt brought Vannevar Bush to Washington, D.C. to manage 6,000 scientists. After the war, Bush published a report that would become the blueprint for government-funded technology development, transfer and commercialization programs (Miller and Acs 2013).

I realized that there are fundamental questions about technology commercialization models that were unaddressed by the field of the Management of Technology. In this dissertation I investigated whether the study of technology commercialization models can benefit from a philosophy of science-type approach. In Chapter One, I developed an analytical tool necessary for carrying out the study in Chapter Two, namely I applied a four-parameter model to technology diffusion curves. In Chapter Two, I addressed the hitherto unaddressed question: are models (technology commercialization and otherwise) science, or are they technology? In Chapter Three, I addressed the issue of harmonizing academic studies in technology adoption and the diffusion of innovations, with each other and with field studies, in developed-nation contexts and at the Base of the Pyramid. In Chapter Four, I revisited an established legal principle of European Union and international law (and arguably American law), the Precautionary Principle, which is invoked in cases such as genetically modified organisms and nanotechnology-based products. About this Principle, I asked, what is it?, where did it come from?, and how should we use it? In Chapter Five, I developed an analytical tool for clinical research informatics, namely indexes of aggregates of percents of clinical trials reporting serious adverse events. I used the study as an opportunity to revisit the validity of constructs of formative measures such as indexes.

In my investigations I also advanced the efficacy of several specific technology commercialization models, namely the technology diffusion curve, the Precautionary Principle, and index numbers for clinical research informatics. In some cases, I also advanced our understanding of their ontology (what is it?) and epistemology (what do we know about it, and how reliable is our knowledge?). I chose to investigate technology commercialization models that were not only of interest to me but are among the most widely used technology commercialization models. Their ability to hold academic and practitioner interest is evident from the number of citations to them, as shown in the bibliometric analysis in Chapter Two. My contribution to advancing the efficacy of technology commercialization models included increasing their accuracy and/or precision, which improves our ability to promote technology. Advances in technology are a significant source of long-term economic growth (Solow 1956). Understanding their ontology and epistemology is important because, as I discuss in Chapter Two, how we think about the world and what the world is are interdependent. We think models are representational (observational), but in fact they may be interventional.

This Dissertation

The philosophical topics in this dissertation include the views of management scientific theories, technoscience, the ontology and epistemology of models, and the onto-epistemic and socio-epistemic perspectives. The technical tools include bibliometrics, index number theory, the theory of formative measures, and iterated function systems. The computational tools include nonlinear regression and large database manipulation. My specific contributions to the development of technology commercialization models follow.

In Chapter One, "Forecasting technology diffusion with the Richards model," I advanced the practice of the technology diffusion curve by providing a four-parameter mathematical model that increases the precision and accuracy of technology diffusion modeling and forecasting. This article was published in Technological Forecasting and Social Change.

In Chapter Two, "How do technology adoption models diffuse?," I advanced our understanding of the ontology and epistemology of technology adoption (commercialization) models. Chapter Two is based on the syllogism:

● The adoption of innovation occurs by means of social learning with conformist bias (Henrich 2001).

● Technology adoption models are forms of innovation (this is an assumption).

● Therefore the adoption of technology adoption models occurs by means of social learning with conformist bias.

This article is currently in review at Axiomathes.

In Chapter Three, "Explaining new product adoption at the Base of the Pyramid," I showed that Base of the Pyramid field studies contained evidence of new product adoption through replicator dynamics (imitation). This article was presented at IAMOT 2016 and is currently in review at Technology Analysis & Strategic Management.

In Chapter Four, "The Precautionary Principle: what is it?, where did it come from?, how should we use it?," I advanced our understanding of the Precautionary Principle by recasting it as an index of formative measures of risk and fear. The Precautionary Principle is a model for how to commercialize a technology when the public fears it. This article was recently accepted by the Journal of International & Interdisciplinary Business Research.

In Chapter Five, “Indexes of serious adverse events reported in interventional clinical trials,” I advanced the practice of R&D management by developing a new model that is an index of technology commercialization failure, viz. indexes of Serious Adverse Events (SAEs) in clinical trials. I used the study as an opportunity to revisit the validity of constructs of formative measures such as indexes.

Acknowledgements

I would like to take this opportunity to thank my Committee, namely Professor Doctors Steven T. Walsh, Rainer Harms and Aard Groen, for their support over this lengthy project (January 2011–October 2016). I would also like to acknowledge assistance from the Faculty of Management, University of Twente. With their assistance, in addition to the present work focusing on modeling improvement and development, I was able to publish eight articles in archival academic research journals and book chapters. Among the staff at the Faculty of Management I would like to thank Monique Zuithof, Joyce Holsbeeke and Brenda Kroeze.


Literature Cited

Abernathy, W.J. and Clark, K.B., "Innovation: Mapping the winds of creative destruction," Research Policy, 14(1), (1985), pp. 3-22.

Abernathy, William J., and James M. Utterback. "Patterns of industrial innovation." Technology Review 64 (1978): 254-228.

Adúriz-Bravo, Agustín. "A 'semantic' view of scientific models for science education." Science & Education 22, no. 7 (2013): 1593-1611.

Anokhin, Sergey, Joakim Wincent, and Johan Frishammar. "A conceptual framework for misfit technology commercialization." Technological Forecasting and Social Change 78, no. 6 (2011): 1060-1071.

Bolinska, Agnes. "Epistemic representation, informativeness and the aim of faithful representation." Synthese 190, no. 2 (2013): 219-234.

Carnap, R., "Logical foundations in the unity of science," in Boyd, Richard, Philip Gasper, and John D. Trout, The Philosophy of Science. MIT Press, 1991.

Carnap, Rudolf. Logische Syntax der Sprache. Springer-Verlag, 2013 [1934].

Chang, Hasok. "History and philosophy of science as a continuation of science by other means." Science & Education 8, no. 4 (1999): 413-425.

Cho, Jaemin, and Jaeho Lee. "Development of a new technology product evaluation model for assessing commercialization opportunities using Delphi method and fuzzy AHP approach." Expert Systems with Applications 40, no. 13 (2013): 5314-5330.

Contessa, Gabriele. "Scientific models, partial structures and the new received view of theories." Studies in History and Philosophy of Science Part A 37, no. 2 (2006): 370-377.

Contessa, Gabriele. "Scientific Representation, Interpretation, and Surrogative Reasoning." Philosophy of Science 74, no. 1 (2007): 48-68.

Da Costa, Newton C.A., and Steven French. Science and Partial Truth: A Unitary Approach to Models and Scientific Reasoning. Oxford University Press, 2003.

De Regt, Henk W. "Scientific understanding: truth or dare?" Synthese 192, no. 12 (2015): 3781-3797.

Ducheyne, Steffen. "Towards an ontology of scientific models." Metaphysica 9, no. 1 (2008): 119-127.


Ducheyne, Steffen. "Scientific Representations as Limiting Cases." Erkenntnis 76, no. 1 (2012): 73-89.

Eronen, Markus I., and Raphael van Riel. "Understanding through modeling: the explanatory power of inadequate representation." Synthese 192, no. 12 (2015): 3777-3780.

French, Steven. "Keeping quiet on the ontology of models." Synthese 172, no. 2 (2010): 231-249.

Frigg, Roman. "Models and fiction." Synthese 172, no. 2 (2010): 251-268.

Frigg, Roman, and Stephan Hartmann. "Models in science." (2006). In: Sarkar, Sahotra, and Jessica Pfeifer, The Philosophy of Science: N-Z. Vol. 2. Taylor & Francis, 2006.

Giere, Ronald N. Explaining science: A cognitive approach. University of Chicago Press, 2010.

Hodges, Wilfrid. A shorter model theory. Cambridge University Press, 1997.

Hsu, Chiung-Wen, and Pao-Long Chang. "Innovative evaluation model of emerging energy technology commercialization." Innovation 15, no. 4 (2013): 476-483.

Kuorikoski, Jaakko, and Petri Ylikoski. "External representations and scientific understanding." Synthese 192, no. 12 (2015): 3817-3837.

Linton, Jonathan D., and Steven T. Walsh. "From bench to business." Nature Materials 2, no. 5 (2003): 287-289.

Love, Alan C. "Typology reconfigured: from the metaphysics of essentialism to the epistemology of representation." Acta Biotheoretica 57, no. 1-2 (2009): 51-75.

Marinakis, Yorgos D. "Forecasting technology diffusion with the Richards model." Technological Forecasting and Social Change 79, no. 1 (2012): 172-179.

Marinakis, Yorgos D., Harms, R. and Walsh, S.T., (2014), "Zomeworks: design driven innovation." Ivey Business Publishing.

Marker, David (2002). Model Theory: An Introduction. Graduate Texts in Mathematics 217. Springer.

Markman, Gideon D., Donald S. Siegel, and Mike Wright. "Research and technology commercialization." Journal of Management Studies 45, no. 8 (2008): 1401-1423.

Marquis, D. G., (1969), "The anatomy of successful innovations," Innovation, 1, pp. 28-37.


Miller, David J., and Zoltan J. Acs. "Technology commercialization on campus: twentieth century frameworks and twenty-first century blind spots." The Annals of Regional Science 50, no. 2 (2013): 407-423.

Mingers, John. "The contribution of critical realism as an underpinning philosophy for OR/MS and systems." Journal of the Operational Research Society 51, no. 11 (2000): 1256-1270.

Mingers, John. "A critique of statistical modelling in management science from a critical realist perspective: its role within multimethodology." Journal of the Operational Research Society 57, no. 2 (2006): 202-219.

Morgan, Alex. "Representations gone mental." Synthese 191, no. 2 (2014): 213-244.

Park, Taekyung, and Dongwoo Ryu. "Drivers of technology commercialization and performance in SMEs: The moderating effect of environmental dynamism." Management Decision 53, no. 2 (2015): 338-353.

Portides, Demetris P. "The relation between idealisation and approximation in scientific model construction." Science & Education 16, no. 7-8 (2007): 699-724.

Portides, Demetris. "Seeking representations of phenomena: Phenomenological models." Studies in History and Philosophy of Science Part A 42, no. 2 (2011): 334-341.

Potochnik, Angela. "Feminist implications of model-based science." Studies in History and Philosophy of Science Part A 43, no. 2 (2012): 383-389.

Rogers, Everett M. Diffusion of innovations. Simon and Schuster, 2010.

Schwarz, Christina V., and Barbara Y. White. "Metamodeling knowledge: Developing students' understanding of scientific modeling." Cognition and Instruction 23, no. 2 (2005): 165-205.

Sohn, So Young, and Tae Hee Moon. "Structural equation model for predicting technology commercialization success index (TCSI)." Technological Forecasting and Social Change 70, no. 9 (2003): 885-899.

Solow, Robert M. "A contribution to the theory of economic growth." The Quarterly Journal of Economics (1956): 65-94.

Suárez, Mauricio. "Scientific representation." Philosophy Compass 5, no. 1 (2010): 91-101.

Suppes, Patrick. "A comparison of the meaning and uses of models in mathematics and the empirical sciences." In The Concept and the Role of the Model in Mathematics and Natural and Social Sciences, pp. 163-177. Springer Netherlands, 1961.


Vattimo, Gianni, 1988 [1985], The End of Modernity: Nihilism and Hermeneutics in Postmodern Culture, Jon R. Snyder (trans.), Baltimore: Johns Hopkins University Press.

Verganti, Roberto. "Radical design and technology epiphanies: a new focus for research on design management." Journal of Product Innovation Management 28, no. 3 (2011): 384-388.

Walsh, Steven T., Yorgos D. Marinakis, and Robert Boylan. "Teaching case and teaching note systems equipment division at Ferrofluidics." Technological Forecasting and Social Change 100 (2015): 29-38.

Wentzel, J.P., Sundar Diatha, K., and Yadavalli, V.S.S., (2013), "An application of the extended Technology Acceptance Model in understanding technology-enabled financial service adoption in South Africa," Development Southern Africa, Volume 30, Issue 4-5, pp. 659-673.


Lists of Publications

Conference Proceedings

Marinakis, Yorgos D., Harms, R. and Walsh, S.T. “Catalyzing New Product Adoption at the Base of the Pyramid.” IAMOT 2016.

Marinakis, Yorgos D., Harms, R. and Walsh, S.T. "A test of a design process scale." In: Management of Engineering and Technology (PICMET), 2015 Portland International Conference on, 2-6 Aug. 2015 (2015): 988-991.

Marinakis, Y., and Walsh S., (2015), TSensor System Roadmap, TSensor Summit, Florida.

Marinakis, Yorgos D., Walsh, S., and Chavez, V. "Here there be dragons: The TSensors Systems Technology Roadmap." In: Management of Engineering and Technology (PICMET), 2016 Honolulu International Conference on, 4-8 Sept. 2016.

Academic Archival Publications

Harms, Rainer, Yorgos Marinakis, and Steven T. Walsh. "Lean startup for materials ventures and other science-based ventures: under what conditions is it useful?" Translational Materials Research 2, no. 3 (2015): 035001.

Marinakis, Yorgos D. "Forecasting technology diffusion with the Richards model." Technological Forecasting and Social Change 79, no. 1 (2012): 172-179. [Chapter 1]

Walsh, S. T., Tierney, R., Davenport, C., Hermina, W., Marinakis, Y., Haak, R., Tolfree, D., and Giasolli, R., (2014), Chapter 7: The Pharmaceutical Landscape Tool, Karlsruhe Institute of Technology, pp. 114-129.

Walsh, S. T., Tierney, R., Tolfree, D., Marinakis, Y., and White, C., (2014), Chapter 1: An Introduction to technology road mapping and its evolution to Landscaping, Karlsruhe Institute of Technology, pp. 1-18.

Marinakis, Y.D., Harms, R., and Walsh, S.T., (2014), Zomeworks: design driven innovation, Ivey Business Publishing.

Walsh, Steven T., Yorgos D. Marinakis, and Robert Boylan. "Teaching case and teaching note systems equipment division at Ferrofluidics." Technological Forecasting and Social Change 100 (2015): 29-38.


Berg, Dan, H. S. Mani, Yorgos George Marinakis, Robert Tierney, and Steven Walsh. "An introduction to Management of Technology pedagogy (andragogy)." Technological Forecasting and Social Change 100 (2015): 1-4.

Marinakis, Yorgos D. "Book review: Facing the fold: essays on scenario planning, James A. Ogilvy." Technological Forecasting and Social Change 90 (2015): 648-649.

Academic Submitted Works

Marinakis, Yorgos D., Harms, R. and Walsh, S.T. "How do technology adoption models diffuse?" In review, Axiomathes. [Chapter 2]

Marinakis, Yorgos D., Harms, R. and Walsh, S.T. "Catalyzing New Product Adoption at the Base of the Pyramid." In review, Technology Analysis and Strategic Management. [Chapter 3]

Marinakis, Yorgos D., Harms, R. and Walsh, S.T. "The Precautionary Principle: what is it, where did it come from, and how should we use it?" In review, Journal of International & Interdisciplinary Business Research. [Chapter 4]




1. FORECASTING TECHNOLOGY DIFFUSION WITH THE RICHARDS MODEL

Marinakis, Yorgos D. "Forecasting technology diffusion with the Richards model." Technological Forecasting and Social Change 79, no. 1 (2012): 172-179.


Forecasting Technology Diffusion with the Richards Model

Abstract

The Richards model has a shape parameter m that allows it to fit any sigmoidal curve. This article demonstrates the ability of a modified Richards model to fit a variety of technology diffusion curvilinear data that would otherwise be fit by Bass, Gompertz, Logistic, and other models. The performance of the Richards model in forecasting was examined by analyzing fragments of data computed from the model itself, where the fragments simulated either an entire diffusion curve but with sparse data points, or only the initial trajectory of a diffusion curve but with dense data points. It was determined that accurate parameter estimates could be obtained when the data was sparse but traced out the curve at least up to the third inflection point (concave down), and when the data was dense and traced out the curve up to the first inflection point (concave up). Rogers' Innovation I, II and III are discussed in the context of the Richards model. Since m is scale-independent, the model allows for a typology of diffusion curves and may provide an alternative to Rogers' typology.

Keywords: Technology diffusion; Technology forecasting; S-shaped curve; Sigmoid curve; Growth curve; Richards model


1. Introduction

Technology diffusion is widely accepted as tracing a sigmoidal trajectory that resembles a biotic growth curve [1,2]. Accordingly, empirical growth models are commonly fit to technology diffusion data. Much of the activity surrounding empirical growth models relates to model selection, where for example the Fisher–Pry model [3] is appropriate to technology diffusion in general and the Gompertz [4] as a mortality model is appropriate to cases of technology diffusion involving replacement [5].

The Richards model [6] is an empirical model developed for fitting growth data. Through the use of a shape parameter that enables the curve to stretch or shrink, the Richards model encompasses the Gompertz, Fisher–Pry and every other imaginable sigmoidal model (Fig. 1). When m=0, the model approximates the exponential growth function. When m=0.67, the model behaves like the von Bertalanffy [7]. When m approaches 1, the model behaves like the Gompertz. When m=2, the model behaves like the Logistic model [8]. Thus one of the advantages of the Richards model is that it, in effect, selects the model for you.

The Richards model has been investigated in at least two white papers as a tool for technology forecasting [9,10], but it has not been reported in the peer-reviewed literature in the context of technology diffusion or diffusion forecasting. Neither has its behavior under a variety of data qualities been examined. This article demonstrates the ability of a modified Richards model to fit a variety of technology diffusion curves that would otherwise be fit by the Bass [11], Gompertz, Logistic, and other models.


Fig. 1. The Richards model at various m-values. W∞, T∞, and k also vary between curves.

2. Methods

The Richards model was introduced in 1959 in the context of plant growth [6]. Richards conceived of the model as an extension to the von Bertalanffy model, but he recognized its ability to emulate the Gompertz and Logistic as well. The model has been modified and reparameterized by several researchers. As modified by Sugden et al. [12], the model is:

W_t = W∞ [1 − (1 − m) exp(−k(t − T∞) / m^{m/(1−m)})]^{1/(1−m)}

where W_t is the weight or growth at time t, W∞ is the asymptotic weight, k is the maximum relative growth rate per unit time, T∞ is the time at which the maximum growth rate occurs, and m is a shape parameter with the property that m^{1/(1−m)} is the relative weight at time T∞.
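Reconstructed this way, the model is straightforward to implement. The following is a minimal sketch in Python (the dissertation's fits were done with SAS Proc Nlin; the function name and the NumPy-based implementation below are illustrative assumptions, not the author's code):

```python
import numpy as np

def richards(t, w_inf, k, t_inf, m):
    """Richards model, Sugden et al. parameterization (as reconstructed above).

    w_inf : asymptotic level (W-infinity)
    k     : maximum relative growth rate per unit time
    t_inf : time at which the maximum growth rate occurs (T-infinity)
    m     : shape parameter; the relative level at t_inf is m**(1/(1-m))

    Note: m = 1 is a removable singularity (the Gompertz limit); this
    sketch assumes m != 1.
    """
    # Inner time scaling uses m**(m/(1-m)); the outer exponent is 1/(1-m).
    inner = 1.0 - (1.0 - m) * np.exp(-k * (t - t_inf) / m ** (m / (1.0 - m)))
    return w_inf * inner ** (1.0 / (1.0 - m))
```

Setting m=2 reduces this expression to a logistic in t, and values of m near 1 approach the Gompertz, consistent with the special cases listed in the Introduction.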

In terms of technology diffusion, W∞ is the asymptotic weight, i.e., the asymptotic or maximum level of diffusion achieved. k is the maximum relative growth rate per unit time; when applied to diffusion data, k signifies the maximum diffusion per unit time relative to the size of the population. As shown in Fig. 2a, the k-value is associated with the lagging effect: the higher the k-value, the longer the lag. T∞ is the time at which the maximum rate of diffusion occurs. The m-value is the shape parameter that determines the position of the concave up–concave down inflection point. As shown in Fig. 2b, all other parameters held equal, the relation between an initial lag and the m-value is complex. As the m-value increases, the curve becomes more sigmoidal, moving from something close to a straight line at m=1.3026 through a series of increasingly steep sigmoidal shapes.


Fig. 2. a. Shapes of curves in Table 1, all other parameters held equal. b. Richards model curves for m=1.3026, but k varies from 0.05 to 0.1.

The Richards model has been used in contexts other than body growth. White and Marinakis [13] for example used a reparameterized Richards model [14] to quantify nitrogen mineralization patterns. That particular form of the equation was not used here because it is restricted to phase space, i.e., it fits dW/dt=f(W), such that it cannot be used to forecast in the temporal domain.

The model was fit to the data with Proc Nlin in SAS PC 9.1.3. Proc Nlin requires a grid search, and the failure of a model to converge often indicates the grid search was too restricted in some sense. Note that it is necessary to renormalize the start time of the data to zero.
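No SAS code is reproduced in the text; as a rough Python analogue of the Proc Nlin workflow described here (a grid search over starting values, with the start time renormalized to zero), a sketch assuming SciPy, the richards() helper above, and hypothetical grid values:

```python
import numpy as np
from itertools import product
from scipy.optimize import curve_fit

def fit_richards(t, w):
    """Fit the Richards model by nonlinear least squares, trying a coarse
    grid of starting values (a rough analogue of Proc Nlin's grid search)."""
    t = np.asarray(t, dtype=float)
    w = np.asarray(w, dtype=float)
    t = t - t.min()  # renormalize the start time of the data to zero

    best = None
    # Hypothetical grid; widen it if no start point converges.
    for m0, k0 in product([1.3, 2.0, 4.0, 10.0], [0.05, 0.1, 0.3]):
        p0 = (w.max(), k0, float(np.median(t)), m0)  # (W_inf, k, T_inf, m)
        try:
            popt, _ = curve_fit(richards, t, w, p0=p0, maxfev=10000)
        except (RuntimeError, ValueError):
            continue  # this start point did not converge; try the next
        rmse = float(np.sqrt(np.mean((w - richards(t, *popt)) ** 2)))
        if best is None or rmse < best[1]:
            best = (popt, rmse)
    return best  # ((W_inf, k, T_inf, m), RMSE), or None if nothing converged
```

This mirrors the observation above that a failure to converge often indicates too restricted a grid: widening the m and k starting values is the first remedy.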

The model was run on several datasets previously analyzed in the peer-reviewed literature (Table 1). P-values and graphical comparisons of the computed models and the data are presented to visually demonstrate the accuracy of the model (Fig. 3a–f).


3. Results

3.1. Fitting technology diffusion data with the Richards model

The Richards model accurately modeled all of the datasets (Table 1, Fig. 3a–f). In all cases, regarding the model, the probability of the F ratio being as large or greater (under the null hypothesis) than the value that was obtained was P=0.0001. Root mean squared errors (RMSE) were always smaller than the standard deviation of the data.

The Richards model fit the following data sets:

Rogers [1] page 45, on the diffusion of diffusion research publications (Fig. 3a, Table 1), with m=1.30, W∞=3996.7, T∞=28.47, and k=0.0395;

Rogers [1] page 258, on the adopters of hybrid seed corn in two Iowa communities (Fig. 3b, Table 1), with m=4.19, W∞=257.5, T∞=11.22, and k=0.2170;

Chu et al. [15] and Wu and Chu [16], on the diffusion of mobile telephony in Taiwan (Fig. 3c, Table 1), with m=2.56, W∞=104.5, T∞=5.21, k=0.3306, and P=0.0001. The authors had fit the data using Bass, Gompertz, Logistic and ARMA models;

The chemical patent data set of Andersen [17] (Fig. 3d, Table 1), with m=32.10, W∞=3522.9, T∞=18.77, and k=0.0366;

The CATV data set from Porter et al. [5], with m=18.31, W∞=48.36, T∞=30.29, and k=0.1045 (Fig. 3e, Table 1). Those authors had used the data to demonstrate the Fisher–Pry and Gompertz models;

Michalakelis et al. [18], on mobile telephony in Greece, with m=1.39, W∞=12,703,310, T∞=6.8, and k=0.1608 (Fig. 3f, Table 1). In that article the authors fit the data using Bass, Gompertz, Logistic, Box-Cox [19], FLOG [20] and TONIC [21] models.

To further test the validity of the Richards model, the model was fit to the first half of each data set. The results support a finding of validity (Table 1). In four out of six cases, RMSE is less than the standard deviation of the data, and Pr>F is less than 0.05.

| Data generated using parameters from | m | W∞ | T∞ | k | Converged? | Pr>F | RMSE | Data standard deviation | RMSE < StdDev? |
|---|---|---|---|---|---|---|---|---|---|
| Rogers [1], page 45 (diffusion research publications), days 1–52 | 1.30 | 3996.7 | 28.47 | 0.0385 | Yes | 0.0001 | 45.90 | 2243497.05 | Yes |
| Using only days 1–28 | 0.1499 | 17132.1 | 60.124 | 0.0018 | No | 0.736 | 810.53 | 579.28 | No |
| Rogers [1], page 258 (adopters of hybrid seed corn in two Iowa communities), days 1–15 | 4.19 | 257.5 | 11.22 | 0.2170 | Yes | 0.0001 | 4.09 | 99.95 | Yes |
| Using only days 1–7 | 2.608 | 483.4 | 16.57 | 0.0947 | No | 0.0077 | 3.30 | 8.70 | Yes |
| Chu et al. [15] (mobile telephony), days 1–13 | 2.56 | 104.5 | 5.21 | 0.3306 | Yes | 0.0001 | 4.99 | 44.26 | Yes |
| Using only days 1–7 | 2.23 | 103.9 | 5.1 | 0.3207 | Yes | 0.0001 | 2.27 | 38.79 | Yes |
| Andersen [17], Fig. 2, chemical patents, days 1–26 | 32.10 | 3522.9 | 18.77 | 0.0366 | Yes | 0.0001 | 115.04 | 717.77 | Yes |
| Using only days 1–13 | 23.9 | 3341.7 | 40.09 | 0.010 | No | 0.0001 | 124.38 | 198.13 | Yes |
| Porter et al. [5], page 142, 175 ff., CATV, days 1–33 | 18.31 | 48.36 | 30.29 | 0.1045 | Yes | 0.0001 | 1.41 | 14.91 | Yes |
| Using only days 1–33 | 5.0 | 50.0 | 30.0 | 0.1100 | No | 0.0348 | 2.58 | 2.02 | No |
| Michalakelis et al. [18], days 1–12 | 1.39 | 12,703,310 | 6.80 | 0.1608 | Yes | 0.0001 | 1.23E5 | 4.65E6 | Yes |
| Using only days 1–6 | 6.2 | 14,757,550 | 10 | 0.2100 | No | 0.2018 | 1.04E6 | 1.4E6 | Yes |

Table 1. Richards model parameters. Pr>F refers to the probability of the F ratio being as large or greater (under the null hypothesis) than the value that was obtained. Two results are shown for each data set. The first result corresponds to the analysis of the entire data set. The second result is a validity analysis that corresponds to the analysis of the first half of the data set.


3.2. Forecasting

As discussed in Ref. [10], logistic and Gompertz models are the most commonly used to fit diffusion data. Compared to the Richards model, these models are relatively inflexible. They force a concave up–concave down inflection point at 50% and 37% of the asymptote, respectively. Moreover, the logistic is symmetric about that inflection point, and the Gompertz always has a larger speed of adoption after that inflection point. The Richards model has no fixed concave up–concave down inflection point, and it allows for any and all types of asymmetry about that inflection point.
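Both fixed inflection levels follow from the shape-parameter property given in the Methods section, that the relative level at T∞ is m^{1/(1−m)}; a two-line check in Python (the near-1 value stands in for the Gompertz limit m→1):

```python
# Relative level at the inflection point is m**(1/(1 - m)):
print(2.0 ** (1.0 / (1.0 - 2.0)))        # m = 2 (logistic): 0.5, i.e. 50%
print(1.0001 ** (1.0 / (1.0 - 1.0001)))  # m -> 1 (Gompertz): ~0.368, i.e. ~37% (1/e)
```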

Sokele [9] notes that the Richards model is not suitable for modeling market adoption immediately after a product is introduced, because as t→−∞ the Richards model goes to zero; and that the Richards model is more flexible than the logistic or Bass in fitting data with an asymmetric inflection point. Data requirements for achieving accurate parameter estimates will now be examined.

The performance of the Richards model in forecasting was examined by analyzing fragments of data computed from the model itself, where the fragments simulated either an entire diffusion curve but with sparse data points, or only the initial trajectory of a diffusion curve but with dense data points (Table 2; figures not shown). It was determined that accurate parameter estimates could be obtained when the data was sparse but traced out the curve at least up to the third inflection point (concave down; the midpoint is the second inflection point), and when the data was dense and traced out the curve up to the first inflection point (concave up).

| Data generated using parameters from | m | W∞ | T∞ | k | Converged? |
|---|---|---|---|---|---|
| Rogers [1], page 45 (diffusion research publications) | 1.30 | 3996.7 | 28.47 | 0.0385 | Yes |
| Data up to approximately the midpoint | 4.7703 | 3632.2 | 78.8723 | 0.0121 | No |
| Data to just below the first inflection point, dense data | 1.30 | 3996.7 | 28.47 | 0.0385 | Yes |
| Rogers [1], page 258 (adopters of hybrid seed corn in two Iowa communities) | 4.19 | 257.5 | 11.22 | 0.2170 | Yes |
| Data up to and including the first inflection point | 4.1898 | 257.5 | 11.2243 | 0.0395 | Yes |
| Using data up to and including the first inflection point but decreasing the data density | 87.3543 | 107.7 | 6.6743 | 0.0307 | No |
| Using only the first five data points (do i=1 to 5 by 1) | 5.1975 | 226.1 | 25.5092 | 0.1225 | No |

Table 2. Experimental design for forecasting. The bold entries correspond to an analysis of the entire data set. The following entries correspond to analyses of subsets and transformations of that data.

For example, with data generated by the model using the same parameters as those from Rogers [1] page 45 (Fig. 3a, Table 1; m=1.3026, W∞=3996.7, T∞=28.4696, and k=0.0395), with only iterations that traced out the data up to approximately the midpoint (w=3165; i=1 to 40 by 1), the model failed to converge and the parameter estimates were marginally accurate (m=4.7703, W∞=3632.2, T∞=78.8723, k=0.0121). But using 170 iterations that traced out the data to just below the first inflection point (i=1 to 17 by 0.1), the model converged and the parameter estimates were perfectly accurate. With lesser amounts of data (e.g., i=1 to 16 by 0.1), the model failed to converge or provide accurate parameter estimates.

In another experiment, using relatively dense data generated by the model using the same parameters as those from Rogers [1] (Fig. 3b, Table 1; m=4.1898, W∞=257.5, T∞=11.2243, and k=0.2170), statistically significant fits and nearly perfectly accurate parameters (m=4.1898, W∞=257.5, T∞=11.2243, k=0.0395, and P=0.001, converged) were obtained using the data that traced a curve up to and including the first inflection point (eight data points; i.e., do i=1 to 8 by 1). Using data up to and including the first inflection point but decreasing the density (i.e., do i=1 to 8 by 2) resulted both in inaccurate parameter estimates (m=87.3543, W∞=107.7, T∞=6.6743, and k=0.0307) and a failure of the regression to converge. Reasonably accurate but statistically insignificant parameter estimates were obtained using data that traced out even less of the curve. For example, using only the first five data points (do i=1 to 5 by 1), the regression produced m=5.1975, W∞=226.1, T∞=25.5092, k=0.0145, and P=0.1225, though the model failed to converge. Increasing the density of the data (do i=1 to 5 by 0.1) did not improve the performance. Again it can be seen that accurate parameter estimates can be obtained from relatively dense data that traces out the curve up to at least the first inflection point.

For the data from the Taiwan telecommunications industry [15,16] (Fig. 3c, Table 1), the model was run again using only the first four data points, which traced the curve up to the first inflection point. The model converged but the parameters were inaccurate. The model did not both converge and provide accurate parameters until the addition of the seventh data point, which traced the curve up to the third inflection point. However, using the data computed from the model using the above referenced values for the parameters, with only the first four data points sparsely populated as in the original data set (i=1 to 4 by 1), the model converged with perfectly accurate parameters. Four data points up to the first inflection point were sufficient here, but they were insufficient in other data sets, e.g. Rogers [1] page 45. Thus the difference between relatively dense data and relatively sparse data is a difficult line to draw. One must look for convergence to determine whether parameters are accurate or not. In addition, as a safety check, the analyst is advised to use the derived parameters (m, W∞, T∞, and k) to compute data to compare visually against the original data.
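For readers who want to reproduce the fragment experiments and the safety check, a sketch reusing the richards() and fit_richards() helpers above, with the Rogers [1] page 45 parameters reported in the text (the dense-fragment bounds mirror the i=1 to 17 by 0.1 example; exact numerical agreement with Table 2 is not guaranteed, since the optimizer here differs from Proc Nlin):

```python
import numpy as np

# Parameters reported for Rogers [1] page 45, time renormalized to zero.
true = dict(w_inf=3996.7, k=0.0395, t_inf=28.47, m=1.30)

t_full = np.arange(0.0, 52.0)
w_full = richards(t_full, **true)      # noise-free data generated from the model

# Dense fragment up to just below the first inflection point.
t_frag = np.arange(0.0, 17.0, 0.1)
w_frag = richards(t_frag, **true)

result = fit_richards(t_frag, w_frag)
if result is not None:
    popt, rmse = result
    print("fitted (W_inf, k, T_inf, m):", popt, "fragment RMSE:", rmse)
    # Safety check: recompute the full curve from the fitted parameters and
    # compare it (visually, or by RMSE) against the original data.
    w_check = richards(t_full, *popt)
    print("full-curve RMSE:", float(np.sqrt(np.mean((w_full - w_check) ** 2))))
```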


Fig. 3. a. Data from Rogers page 45 [1]. b. Data from Rogers page 258 [1]. c. Data from Chu et al. [15]. d. Data from Andersen [17]. e. Data from Porter [5]. f. Data from Michalakelis et al. [18].


4. Discussion

Rogers [1] identifies three types of technology curves, which he refers to as Innovation I for rapid adoption, Innovation III for slower adoption, and Innovation II between them (Fig. 4). When placed adjacently the distinctions between these curves can be discerned, but the theory does not tell us how to categorize any given curve in isolation. We can begin to approximate these types with the Richards model by using decreasing m-values, all other parameters held equal (Fig. 5), in which case the lowest m-value has the slowest take-off (i.e., the lowest slope in the initial portion of the curve). The shapes of the curves analyzed in this article (Table 1) can be similarly depicted (Fig. 6), all other parameters held equal. Such an analysis however reveals that simply changing the m-value fails to capture other features of the transition between Innovation I, II and III, namely a much steeper take-off (slope) in Innovation I. To capture that feature, it is necessary to alter other parameters along with the shape parameter m. As shown in Fig. 2a, in which the k-value is changed but all other parameters are held equal, the higher the k-value, the steeper the take-off. In Fig. 4, by varying m, T∞ and k, we have a closer facsimile of Rogers' three types of Innovation, vis-à-vis Fig. 5 in which only m is varied. Thus the differences between the rapid adoption of Innovation I and the slower adoption of Innovations II and III are a complex of shapes (m) that vary unsystematically, combined with decreasing growth rates (T∞ and k). It may be that Rogers' Innovations I, II and III are oversimplifications. Or it may be that the Richards model (at least the four-parameter version) is not suited to generalizing Rogers' types.

If Rogers' typology is set aside for a moment, it can be seen that the m-values provide a scale-independent method for characterizing technology diffusion curves, such that the (m, W∞, T∞, and k) phase space (or subsets thereof) may provide the basis for an alternative to Rogers' typology. The data presented herein represent a wide variety of diffusion processes, with m-values varying from 1.3 to 32. The m-value (m=1.39) for the data in Michalakelis [18] was close to that of Rogers [1] page 45 (m=1.30). Despite a difference of four orders of magnitude in the asymptotic growth parameter values, and differences by a factor of four in their growth rates, they trace out a similar shape (m~1.3) in their respective reference frames. This similarity may indicate similar processes. Thus one potentially informative analysis would be to examine m vs. ln(T∞) for a large number of cases.

The m-value facilitates both hypothesis testing and quantitative comparisons. For example, Noh and Yoo [22] compare the average diffusion of internet use by income inequality (Fig. 6). They show that "countries with higher income inequality lagged behind those with lower income inequality in internet diffusion." An analysis with the Richards model enables us to see that the three curves have three different m-values, clustered into two sets (Fig. 6). A suitable null hypothesis would be that the three curves have the same shape. The "lagging" by countries with high income inequality can be described as a lower m-value.

The Richards model tells us that this is more than a lagging: the model is predicting lower asymptotic values for the total number of users, by several orders of magnitude. It also tells us that the diffusion of internet use in countries with higher income inequality is characterized by a maximum rate of growth per unit time that is an order of magnitude lower than those of the other two categories. Moreover, whereas the maximum growth rates differ, the maximum relative growth rates for all three categories do not differ, or do not differ by much. Relative growth rate is the growth rate relative to the size of the population.

A recent study [23] by the Pew Internet and American Life Project, analyzing the use of social networking websites by age group, illustrates the type of analysis that could be done if data were collected according to the requirements of the model, i.e., relatively dense data points at least up to the first inflection point. The model was unable to fit all of the data at statistical significance, most likely due to the sparseness of the data (Fig. 7). The fit for the age group 18–29 was particularly bad and is not presented here. Given those caveats, the Richards model allows quantification of the patterns: as age increases, m increases, W∞ decreases, T∞ increases, and k increases. The m-values from the Pew study on social networking in the United States are much higher than the m-values from the internet diffusion data in Noh and Yoo [22]; but as might be expected, they are closer to the m-value for the countries with relatively low income inequality (m=6.07), which include the United States.


Figure 4. Types of innovation per Rogers [1].

Figure 5. Approximating Rogers' [1] innovation types with the Richards model by using decreasing m-values, all other parameters held equal.


Figure 6. Average diffusion of internet use, by income inequality. Data from Noh and Yoo [22].

Figure 7. Use of social networking websites, by age groups. Data from Madden [23].

5. Conclusion


The Richards model accurately modeled technology diffusion data from several sources (Table 1, Fig. 3a–f). In all cases, regarding the model, the probability of the F ratio being as large or greater (under the null hypothesis) than the value that was obtained was P=0.0001. It was demonstrated that the model is also useful in forecasting. Accurate parameter estimates could be obtained when the data was sparse but traced out the curve at least up to the third inflection point (concave down), and when the data was dense and traced out the curve up to the first inflection point (concave up). The Richards model parameter (m, W∞, T∞, and k) phase space (or subsets thereof) is suggested as providing the basis for an alternative to Rogers' technology diffusion typology (Fig. 4), and as a means to facilitate both hypothesis testing and quantitative comparisons.


Literature Cited

[1] Everett M. Rogers, Diffusion of Innovations, Fourth ed., Free Press, New York, 1995.
[2] Luiz C.M. Miranda, Carlos A.S. Lima, A new methodology for the logistic analysis of evolutionary S-shaped processes: application to historical time series and forecasting, Technol. Forecast. Soc. Change 77 (2010) 175–192.
[3] R.C. Lenz, Rates of Adoption/Substitution in , Technology Inc., Austin, 1985.
[4] G.C. Chow, Technological change and the demand for computers, Am. Econ. Rev. 57 (1967) 1117–1130.
[5] Alan L. Porter, A. Thomas Roper, Thomas W. Mason, Frederick A. Rossini, Jerry Banks, Forecasting and Management of Technology, Wiley Series in Engineering and Technology Management, John Wiley and Sons, New York, 1991.
[6] F.J. Richards, A flexible growth function for empirical use, J. Exp. Bot. 10 (1959) 290–300.
[7] L. von Bertalanffy, Quantitative laws in metabolism and growth, Q. Rev. Biol. 32 (1957) 217–231.
[8] N. Meade, T. Islam, Modelling and forecasting the diffusion of innovation — a 25-year review, Int. J. Forecasting 22 (3) (2006) 519–545.
[9] M. Sokele, Growth models for the forecasting of new product market adoption, Telektronikk 3 (2008) 144–154.
[10] Pedro Pereira, J.C. Pernías-Cerrillo, The diffusion of cellular telephony in Portugal before UMTS: a time series approach, CEPR Discussion Papers Number 2598, 2005.
[11] F. Bass, T. Krishnan, D. Jain, Why the Bass model fits without decision variables, Market. Sci. 13 (3) (1994) 203–223.
[12] L.G. Sugden, E.A. Driver, M.C.S. Kingsley, Growth and energy consumption by captive mallards, Can. J. Zool. 59 (1981) 1567–1570.
[13] Carleton S. White, Yorgos D. Marinakis, A flexible model for quantitative comparisons of nitrogen mineralization patterns, Biol. Fertil. Soils 11 (1991) 239–244.
[14] I.L. Brisbin Jr., G.C. White, P.B. Bush, PCB intake and the growth of waterfowl: multivariate analyses based on a reparameterized Richards sigmoid model, Growth 50 (1986) 1–11.
[15] Wen-Lin Chu, Feng-Shan Wu, Kai-Sheng Kao, David C. Yen, Diffusion of mobile telephony: an empirical study in Taiwan, Telecommunications Pol. 33 (2009) 506–520.
[16] Feng-Shang Wu, Wen-Lin Chu, Diffusion models of mobile telephony, J. Bus. Res. 63 (2010) 497–501.
[17] Birgitte Andersen, The hunt for S-shaped growth paths in technological innovation: a patent study, J. Evol. Econ. 9 (1999) 487–526.


[18] Christos Michalakelis, Dimitris Varoutas, Thomas Sphicopoulos, Diffusion models of mobile telephony in Greece, Telecommunications Pol. 32 (2008) 234–245.
[19] J.C. Westland, E.W.K. See-To, The short-run price-performance dynamics of microcomputer technologies, Res. Pol. 36 (2007) 591–604.
[20] R. Bewley, D.G. Fiebig, A flexible logistic growth model with applications in telecommunications, Int. J. Forecasting 4 (1988) 177–192.
[21] TONIC, EU IST-2000-25172 TONIC project. Deliverables and related publications available from http://www.nrc.nokia.com/tonic/S, 2000.
[22] Y.-H. Noh, K. Yoo, Internet, inequality and growth, J. Pol. Modeling 30 (2008) 1005–1016.
[23] Mary Madden, Older Adults and Social Media, http://pewresearch.org/pubs/1711/older-adults-social-networking-facebook-twitter, 2010.


2. HOW DO TECHNOLOGY ADOPTION MODELS DIFFUSE?

Marinakis, Yorgos D., Rainer Harms and Steven T. Walsh. "How do technology adoption models diffuse?" Submitted to Axiomathes.


How do technology adoption models diffuse?

Abstract

The simultaneous rise of models and technology (at least on an historical time frame) begs the question whether models are technology. But the question needs further justification before it can even be asked. We first need to accumulate sufficient evidence that supports posing the question, or put another way, that supports proposing the hypothesis that models are technology. Therefore in the present study, in the manner of abductive logic, we instead seek to determine whether models behave like technology. We ask a very specific question about a very specific type of model, namely: how do technology adoption models diffuse? Is their pattern of diffusion consistent with the pattern of diffusion of technology, or not? Since the model for the diffusion of technology is the sigmoidal curve, we investigate whether the sigmoidal curve can fit the adoption patterns of the Technology Acceptance Model, the diffusion of innovations and the Precautionary Principle. If a sigmoidal model can be fit to the number of publications citing a model, then we can say that the pattern of diffusion is consistent with the pattern of diffusion of technology, and that the model is diffusing by cultural transmission bias. All three models investigated here diffused in a sigmoidal fashion, indicating that they diffused like technology and that their primary means of diffusion was cultural transmission bias. Thus via abductive reasoning we are justified in proposing the hypothesis that models are technology. If models are technology, then they are also interventional vis-à-vis representational (observational). We discuss what it means to say that a model is interventional, and we suggest that views of scientific theories are topoi. We also hypothesize that technoscience is a socio-epistemic way of thinking about science and technology, and that the sigmoidal curve is a technoscientific model.

Keywords: abductive reasoning; bibliometric; diffusion of innovations; onto-epistemic; sigmoidal model; socio-epistemic; Technology Acceptance Model; technology diffusion; technoscience; topos theory


Introduction

Two related developments in the latter half of the twentieth century indicated a heightened interest in scientific models. One was the rise of the semantic view of scientific theories, which recast scientific theories as "causally-possible collections of state-transition models of data for which there is a representation theorem" (Suppe 2000: S111; the common characterization of the semantic view, as holding theories to be classes of set-theoretical models, is far too simplistic and is susceptible to use as a straw man by critics). The other was the development of mathematical model theory (Tarski and Vaught 1956, Chang and Keisler 1990, Shelah 1990). It has been suggested that the one arose from the other, that developers of the semantic view made use of model theory because model theory was the new tool at hand (French 2010: 234-235).

At the same time, technology began to be perceived as more important than science, a reversal of roles that has been alleged (Forman 2007) but not conclusively established (Ma and Van Brakel 2014); and the term "technoscience" began to be widely used. In the first decade following the rise of the semantic view, U.S. government science policy dramatically morphed into technology policy (Forman 2007, Elzinga 1985), and the thinking of many intellectuals followed suit. In 1976, two weeks before his death, Heidegger sent a letter of greetings to the 10th annual meeting of the North American Heidegger Society in which he suggested that technology was the same as science (he had expressed that idea as early as 1944/1945; Ma and Van Brakel 2014). Though Bachelard had used the term "science technique" in Le nouvel esprit scientifique (Bachelard 1968 [1934]), the term technoscience was introduced by Hottois in the late 1970s (Hottois 1984). By the mid-1980s, technoscience had morphed into two distinct perspectives (Weber 2010, Fiedeler 2011, Kastenhofer and Allhutter 2010, Kastenhofer and Schwarz 2011): an epistemological perspective itself comprising two branches (Weber 2010), namely one derived from the traditional philosophy (epistemology) of sciences (Popper 2005 [1934], Carnap 1934, Kuhn 2012 [1962]), and another within the Edinburgh School (Barnes 1974, Bloor 1991 [1976], MacKenzie 1981, Shapin 1982); and a social science perspective (Haraway 1985/1991, Latour 1987, Anderson 2002; Bensaude-Vincent et al. 2011).


The simultaneous rise of models and technology (at least on an historical time frame) is the type of "curious circumstance" (Psillos 2004: 125) that Peirce was referring to when he proposed abductive logic. It raises the question whether models are technology vis-a-vis whether models are science. Moreover, science is said to be a matter of observing (representing) but technology is said to be a matter of intervening (Fiedeler 2011); and under the two main modern views of scientific theories, models are representational (or synonymously "observational"),[5] not interventional. But if models are technology, then they are also interventional, which is not how we currently understand them. Thus this "curious circumstance" is a classic philosophy of science question, because it "re-consider[s] things that are taken for granted in current science" and it "locate[s] issues that are not treated at all by specialist science" (Chang 1999: 417).

But the question of whether models are technology needs further justification before it can even be asked. We first need to accumulate sufficient evidence that supports posing the question, or put another way, that supports proposing the hypothesis, that models are technology.[6] Technology has many definitions, which makes direct categorization of models as technology a very difficult exercise indeed. Therefore in the present study, in the manner of abductive logic,[7] we instead seek to determine whether models behave like technology. We ask a very specific question about a very specific type of model, namely: how do technology adoption models diffuse through the academic community? Is their pattern of diffusion consistent with the pattern of diffusion of technology, or not? Since the model for the diffusion of technology is the sigmoidal curve, we investigate whether the sigmoidal curve can fit the temporal trajectory of the number of publications citing the Technology Acceptance Model (TAM; Davis 1986, 1989, Davis et al. 1989, Venkatesh and Davis 2000, Venkatesh and Bala 2008), the diffusion of innovations (Rogers 2010 [1962]), and the Precautionary Principle (O'Riordan and Jordan 1995, Jordan and O'Riordan 1999, Petrenko and McArthur 2010). Our research question and method are somewhat novel. Kolkman et al. (2016: 14) recently showed that the criteria for model acceptance by government policymakers "overlap" with the TAM and the diffusion of innovations, but their focus was on model acceptance and not on whether models are technology. As in TAM studies, Kolkman et al. (2016) studied cognitive, reasoned, epistemological facets of technology acceptance, but not social elements. Their purpose was not to investigate whether models are technology.

[5] Under the syntactic view of scientific theories, which holds that theories are linguistic representations (Hartmann 2008), models are alternative interpretations (representations) of some asserted underlying logico-linguistic calculus (French 2010: 232). Under the semantic view of scientific theories, models are commonly said to be constitutive (interventional?) of the theory, but that is an oversimplification. It may be more accurate to say that models are "representational, in the sense that we draw on set theory to represent the structure of the theory" (French 2010: 237). An attempt has been made to hybridize the syntactic and semantic views (French 2010). Mathematical model theory, the study of the interpretation of any language with set-theoretic structures, using Tarski's truth definition as developed in 1933 (Tarski 1933a) and revised in 1956 (Tarski and Vaught 1956), does not address the issue. Models have been studied in their various aspects, including ontology (Ducheyne 2008, Levy 2015), realist and non-realist types (bin Abdul Murad 2011), and phenomenological models for representing physical systems (Portides 2011); but the question of whether models are technological-interventional has not yet been addressed.

[6] Rogers (2010 [1962]: 14) defined technology as "... a design for instrumental action that reduces the uncertainty in the cause-effect relationships involved in achieving a desired outcome. A technological innovation usually has at least some degree of benefit for its potential adopters, but this advantage is not always clear-cut to those intended adopters." Arthur (2009: 28) defined technology as a means to fulfill a human purpose, an assemblage of practices and components, and the collection of devices and engineering practices available to a culture. Bleed (1997) reviewed how anthropologists have defined technology. He modified Spier's (1970) approach, which viewed technology as a process having knowledge, applications and standards as contents (inputs) and material culture, environmental modifications, social arrangements and adaptive success as results (outputs). Fleck and Howells (2001) reviewed definitions of technology across many disciplines. Their solution was to construct a technology complex comprising a list of elements common to all definitions, and they defined technology as knowledge and activity related to artefacts.

[7] Abductive reasoning is exemplified by James Whitcomb Riley's statement "When I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck."

Fundamentally, we performed a bibliometric analysis. Bibliometric analyses have revealed s-shaped curves for articles citing analogous technologies (Daim and Suntharasaj 2009), topics (Jiang et al. 2016), and news stories (Paananen and Mäkinen 2013), as well as for patents (Daim et al. 2006, Trappey et al. 2011, Leu et al. 2012); but not for articles citing specific models. The s-shaped curves were not interpreted beyond comparison to logistic-type growth curves for forecasting purposes.

In addition to addressing the research question, in our Discussion we point out the dual epistemological-social nature of the sigmoidal curve. If models are technology, then they are also interventional vis-a-vis representational (observational). Thus in our Conclusion we discuss what it means to say that a model is interventional, and we suggest that views of scientific theories are topoi. We also hypothesize that technoscience is a socio-epistemic way of thinking about science and technology, and that the sigmoidal curve is a technoscientific model. Thus this research is of broader academic and philosophical interest than suggested by the research question.

Theoretical Background

Before turning to the models, we first review the scientific theoretical context in which models appear. This provides us with a language for addressing the notion, in the Discussion below, that models are interventions. We then introduce the models. We end the section with a discussion of cultural transmission.

The modern views of scientific theories

The modern views of scientific theories arguably began in 1934 with the publication of Rudolf Carnap's Logische Syntax der Sprache (Carnap 2013 [1934]; hereafter Syntax). According to Carnap himself (Creath 2013), this particular work, written eight years after accepting a position at the University of Vienna, was a response to Wittgenstein's concept of logical form and to Hilbert's metamathematics as transmitted by Tarski and Gödel.[8] A reading of Wittgenstein's (1922) Tractatus Logico-Philosophicus (hereafter Tractatus) locates Carnap's inspiration in the section 3.325 proposal for a logical syntax; but it is equally clear that Carnap's Syntax was also written in opposition to Wittgenstein's assertion in section 4.121 that propositions cannot represent logical form (Awodey and Carus 2007). Carnap's position was evidence of the years in which he had been steeped in the axiomatization movement of Hilbert and Frege. As a student he had studied mathematical logic with the ageing Gottlob Frege; had been attracted to (Stöltzner 2015) Hilbert's axiomatizations of geometry (Hilbert 1899) and physics (Hilbert 1916, 1917); and in 1921 had written his first, albeit rejected, thesis presenting an axiomatic theory of space and time. The thesis later resulted in an unpublished work, Untersuchungen zur allgemeinen Axiomatik (Carnap 2000), written in 1928-29 in part as a response to Fraenkel's Einleitung in die Mengenlehre (Creath 2013). Untersuchungen arguably represented an improvement over Hilbert's approach to axiomatization (Stöltzner 2015); but on its face it showed the influence of Russell, to whom Carnap had famously written in 1922, and nothing of Hilbert's metamathematics. As Creath (2013: 67) writes, "The result was considerable confusion, but help was on the way." Carnap had been at the University of Vienna since 1926, and was a member of the Vienna Circle (Uebel 2013), the source of logical positivism. Tarski visited there in February 1930, and Carnap visited Tarski in Warsaw in November 1930 (Creath 2013). In his notes to Logical Syntax of Language, entitled Versuch einer Metalogik, which he said "came to me like a vision during a sleepless night in January 1931, when I was ill," Carnap ("in light of Gödel's results," Landry 2012: 38) adopted Tarski's metalogical point of view, in which the logical language is a system of uninterpreted marks (Coffa 1987). A contemporary of Carnap testified that Syntax had been influenced by Tarski's (1933b) model-theoretic method of semantics (Rand, undated), though Creath (2013: 67) asserts that the main influence transmitted by Tarski was Hilbert's metamathematical approach.

[8] It has been said that the modern views of scientific theories originated in logical empiricism's (more precisely, logical positivism's; Uebel 2013) misunderstanding of Hilbert's axiomatic method (Torretti 1978, Landry 2012). Hilbert's axiomatic method in turn has its roots in the mathematical structuralism of the nineteenth century, as in Riemann's 1854 lecture on manifolds (published in Riemann 1868); Galois' 1830 paper to the Paris Academy of Sciences, and Joseph Liouville's related work in the 1840s; and Johann Carl Friedrich Gauss' 1821 theorem on curvature (Hintikka 2011). In mathematical structuralism, axioms define relations between objects rather than defining the objects themselves (Friederich 2010). Hilbert, a structuralist, understood an axiom system as a relational structure. The logical positivists, in contrast, misunderstood an axiom system to be a system of statements about a defined subject matter. Hilbert's stance corresponds to De Saussure's structuralism, and logical positivism corresponds to Peirce's semiotics (Peregrin 1997).

Given the influence of Frege and Russell on the young Carnap (Reck 2004), and given Carnap's interest in Hilbert's axiomatization project, it is not surprising that Carnap's Syntax presented an axiomatic theory. Termed the "Received View" by Putnam (1962), the syntactic view is associated with logical positivism and the Axiomatization Movement (Portides 2011, Krause et al. 2011). It holds that a theory is identified or formalized by a set of logical propositions that are stated in a particular language (Suppe 1972, Lutz 2014). Carnap's original version of the syntactic view proposed that theories were to be construed as axiomatic calculi in which theoretical terms are given explicit definition by means of correspondence rules, and in which the nonlogical terms of the theory are bifurcated into an observational vocabulary and a theoretical vocabulary (Suppe 1972, Da Costa and French 2000). It is notable that Carnap's original proposal treated correspondence rules as being explicit definitions (Suppe 1977: 12, 16-17). Previously, in his 1928 Der Logische Aufbau der Welt (Carnap 1967 [1928]), Carnap had asserted the verifiability principle of logical empiricism, namely that a statement is meaningful only if every non-logical term is explicitly definable by means of a very restricted phenomenalistic language. Soon after Syntax was published, Carnap realized that a phenomenalistic language was insufficient to define physical concepts, and he quickly replaced explicit definitions with reduction sentences (Carnap 1936, 1937). He also augmented the syntactic logic with modal operators, which must be interpreted with semantics, and gave the observation language a semantic interpretation.

Later, in the 1960s, the syntactic view was attacked on several grounds. Perhaps the most fundamental of these allegations were that the observational-theoretical distinction was untenable, and that the correspondence rules were a heterogeneous confusion of meaning relationships, experimental design, measurement, and causal relationships, some of which are not properly parts of theories (Suppes 1960). It was later argued that this attack could not be sustained, but that the syntactic view should nevertheless be abandoned because the correspondence rules combined a number of widely disparate aspects of the scientific enterprise so as to obscure epistemologically important and revealing aspects of scientific theorizing (Suppe 1972).

It was not until 1960, and the work of another researcher, that a semantic view of scientific theories would emerge. The semantic view (Suppes 1960, van Fraassen 1980, Da Costa and French 2003, Portides 2005, Contessa 2006, Portides 2007, Vorms 2011, Lutz 2014) holds that theories are causally-possible collections of state-transition models of data for which there is a representation theorem (Suppe 2000: S111). The semantic view is very occasionally attacked, mainly by setting up an oversimplified caricature (Suppe 2000, Van Fraassen 2014), such as the claim that the semantic view holds theories to be classes of models (Halvorson 2012, 2013).

A theme in this brief history, as pointed out by French (2010), is that philosophers of science make use of whatever resources are at hand at the time. At the time of the formulation of the syntactic view, first-order logic was being developed. At the time of the formulation of the semantic view, model theory was being developed. Looking ahead, category theory is currently being developed, and Lawvere and others (Landry 2007) suggest viewing some theories as categories (with suitable structure) and models for these theories as functors (preserving that structure) (Szabo 1981, Peruzzi 2006). French also points out that a pluralistic, pragmatic stance is possible. A more formal way of stating French's observation is given by topos theory. A topos (plural: topoi) is an abstract world or a universe for discourse (Trifonov 1995) in which the observer's or researcher's logic is conceptualized as part and parcel of what he is observing (Zimmermann 1999, 2002, Marinakis 2008). A view of scientific theories is a topos. We return to this theme below in the Conclusion.

The Models

Davis (1986, 1989, Davis et al. 1989) derived the Technology Acceptance Model (TAM) from the Theory of Reasoned Action (TRA) (Fishbein and Ajzen 1975). The TRA assumes that individuals are rational decision makers who constantly calculate and evaluate the relevant behavior beliefs in the process of forming their attitude toward the behavior. Along these lines, the TAM assumes that perceived usefulness and perceived ease of use, both matters of environmental learning, drive the intention to adopt. Venkatesh and Davis’ (2000) extension, so­called TAM2, postulated that subjective norm, image, job relevance, output quality and result demonstrability were determinants of perceived usefulness. Venkatesh and Bala (2008) later proposed TAM3, in which computer self­efficacy, computer anxiety, and computer playfulness, perceptions of external control, perceived enjoyment and objective usability comprise determinants of perceived ease of use.
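To make the structure of these assumptions concrete, the core TAM can be sketched as a pair of regression-style relations in which perceived ease of use feeds perceived usefulness, and both feed intention. The following Python sketch is purely illustrative; the function name and weights are ours, not estimates from Davis or from any TAM study.

```python
def tam_intention(peou: float, pu_external: float,
                  w_pu: float = 0.6, w_peou: float = 0.3,
                  w_peou_on_pu: float = 0.4) -> float:
    """Toy rendering of the core TAM (Davis 1989): perceived usefulness (PU)
    and perceived ease of use (PEOU) drive behavioral intention (BI), with
    PEOU also contributing to PU. All weights are placeholders."""
    pu = pu_external + w_peou_on_pu * peou  # PEOU raises perceived usefulness
    return w_pu * pu + w_peou * peou        # BI as a weighted sum of PU and PEOU

# e.g., item scores on a 7-point scale:
bi = tam_intention(peou=4.0, pu_external=3.5)
```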

Rogers' (2010 [1962]) diffusion of innovations is a social science theory of how novel ideas and technologies spread through a social system. Rogers defined diffusion as "the process in which an innovation is communicated through certain channels over time among the members of a social system" (Rogers 2010 [1962]: 5). He defined innovation as "... an idea, practice or object that is perceived as new by an individual or other unit of adoption" (Rogers 2010 [1962]: 12). In his theory he posited characteristics of innovations, of individual adopters, and of organizations; a five-step individual decision making process for technology acceptance, which resembles the TAM; and perhaps the best known feature of his theory, the categories of adopters, viz. innovators, early adopters, early majority, late majority and laggards. The theory has been applied to a wide variety of ideas and technologies (e.g., Greenhalgh et al. 2004, Byambaa et al. 2015, Sriwannawit and Sandström 2015).

The Precautionary Principle is a technoscientific regulatory model, as it has both epistemological and social facets. It is epistemological because it seeks to introduce science into environmental law. It is social because it relates to public fear. It has been said that the Precautionary Principle "captures an underlying misgiving over the growing technicalities of environmental management at the expense of ethics, environmental rights in the face of vulnerability, and the facilitative manipulation of cost-benefit analysis" (O'Riordan and Jordan 1995: 192).

Technology diffusion is generally agreed to trace a sigmoidal or s-shaped curve (Singh 2008, Miranda and Lima 2010, Marinakis 2012). Sigmoidal curves have also been observed in a wide variety of technologies (e.g., Michalakelis et al. 2008, Chen and Liu 2011, Ruohonen et al. 2015). In relation to Rogers' diffusion of innovations, the s-shaped pattern implies that there are first a few innovators, then slightly more early adopters, then slightly more in the early majority, etc. This of course is an empirical or phenomenological observation. Various mechanistic models have been used to model the sigmoidal curve, each having their own assumptions, but a non-mechanistic four-parameter flexible model has been proposed that fits all sigmoidal patterns (Marinakis 2012).
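A minimal Python sketch can make the flexibility claim concrete, using the Sugden et al. (1981) parameterization given in the Methods below; the function name and parameter values are ours, chosen only for illustration.

```python
import numpy as np

def richards(t, W_inf, k, T, m):
    """Richards sigmoid, Sugden et al. (1981) parameterization.
    W_inf: asymptote; k: maximum relative growth rate; T: time at which
    the curve reaches the fraction m**(1/(1-m)) of W_inf; m: shape."""
    return W_inf * (1 - (1 - m) * np.exp(-k * (t - T) / m ** (m / (1 - m)))) ** (1 / (1 - m))

t = np.linspace(1990, 2015, 26)
# m = 2 recovers the symmetric logistic curve; larger m skews the curve,
# so one four-parameter form spans the whole sigmoidal family.
logistic_shape = richards(t, W_inf=1000.0, k=0.3, T=2002.0, m=2.0)
skewed_shape = richards(t, W_inf=1000.0, k=0.3, T=2002.0, m=5.0)
```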

Henrich (2001) showed that environmental learning (i.e., acquiring payoff­relevant or cost­benefit­relevant information through action and interaction in local social, economic, and ecological environments) alone never produces S­shaped adoption dynamics. Conformity bias produces the S­curve; and a combination of environmental learning and biased cultural transmission can also generate S­dynamics, but only when conformity bias is the predominate force in the spread of new behaviors. This takes us into a discussion of conformity and cultural transmission.

Cultural Transmission

Another trend in the late 20th century was research into conformity. Much of this research was conducted in the areas of social psychology (Asch 1955), cultural evolution modeling (Boyd and Richerson 1985), and social learning studies (Morgan and Laland 2012). We are interested in cultural evolution modeling in particular because this field deals with cultural transmission bias. Cultural evolution is studied in terms of sociobiology, memes and dual inheritance, each of which has shortcomings (Read 2003). Sociobiology treats culture as a form of behavior such that cultural evolution is subsumed under biological evolution (Read 2003: 18). This approach has numerous problems, not the least of which is that it unrealistically limits time scales for cultural change to 1,000s to 10,000s of years (Read 2003: 20). Meme theory oversimplifies culture as an aggregated linear combination of memes, when the more accurate representation may be that culture is a nonlinear emergent construct of memes (Read 2003: 28). The dual inheritance model, which assumes phenotypic transmission through the genome and through nongenetic transfer (Read 2008: 21, Henrich and McElreath 2003), suffers deficiencies similar to those of sociobiology because it treats culture as behavior rather than as ideas. Cultural transmission biases fall under this latter field of study.

Cultural evolution modeling hypothesizes that individuals possess a “wide range of cultural transmission biases that dictate when they copy others and who they copy” (Morgan and Laland 2012: 2). The two main ideas in this hypothesis are copying and cultural transmission. The notion of copying, or conforming, is operationalized by the rule that an individual is disproportionately likely to adopt the majority decision, because it leads them to acquire valuable fitness­enhancing information. Cultural transmission biases comprise “content­based biases, where inherent features of the cultural traits at stake determine the choice [i.e., to choose when, what, or from whom, to copy], and context­based biases, where the choice relates instead on features extracted from the social context” (Acerbi and Bentley 2014: 228). Context­based biases include biases in which the choice (to choose when, what and from whom to copy) is based on commonality (conformist bias), and those in which the choice is based on the fact that the thing in question is possessed by individuals perceived as more successful or knowledgeable (Henrich and Boyd 1998, Henrich and McElreath 2003, Acerbi and Bentley 2014).

Henrich (2001), through an analysis of an environmental learning model, showed that cultural transmission bias is the predominate force in behavioral change. The environmental learning model assumes unbiased transmission, in which "the driver of behavioral change lies in the cost-benefit evaluation of alternatives based on low-cost experimentation" (Henrich 2001: 996). However, the graphs produced from the equation of such a model are r-shaped. He then showed that a biased cultural transmission model derived from replicator dynamics produced graphs that are s-shaped.
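The qualitative contrast can be reproduced in a few lines of Python. The sketch below uses a simplified replicator-style update of the Boyd and Richerson (1985) form, delta p = p(1-p)[B + alpha(2p-1)]; it is an illustration under our own parameter choices, not a reimplementation of Henrich's (2001) exact equations. Note that the payoff bias B must exceed the conformist strength alpha for a rare trait to begin spreading at all.

```python
import numpy as np

def biased_transmission(p0=0.01, B=0.35, alpha=0.3, steps=200):
    """Trait frequency under payoff-biased plus conformist transmission:
    delta p = p(1-p)[B + alpha(2p - 1)].  Slow start, acceleration,
    saturation: an s-shaped trajectory."""
    p = [p0]
    for _ in range(steps):
        pt = p[-1]
        p.append(pt + pt * (1 - pt) * (B + alpha * (2 * pt - 1)))
    return np.array(p)

def environmental_learning(p0=0.01, rate=0.1, steps=200):
    """Unbiased individual learning: a fixed fraction of current
    non-adopters adopts each step, so growth is fastest at the start
    and decelerates: an r-shaped trajectory."""
    p = [p0]
    for _ in range(steps):
        p.append(p[-1] + rate * (1 - p[-1]))
    return np.array(p)

s_curve = biased_transmission()     # sigmoidal, like technology diffusion
r_curve = environmental_learning()  # r-shaped, per Henrich's (2001) argument
```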

Humans are said to be a cultural species (Heine and Norenzayan 2006). Evolutionary anthropologists define culture minimally as socially transmitted information (Alvard 2003). The information underlying the choices in cultural transmission bias is not Rogers' (2010) innovation-evaluation information, which is used for individual environmental learning (individual cost-benefit analyses); rather, it is "information about such things as who have adopted a particular practice (how prestigious they are) or how many others have adopted the practices" (Henrich 2001: 1008).

Methods

We address our research question "how do technology adoption models diffuse?" through abductive reasoning. Abduction was Peirce's attempt "to illustrate that the process of scientific discovery is not irrational and that a methodology of discovery is possible" (Magnani 2005: 265). Abduction "is typically understood as the process of looking for an explanation for a surprising observation" (Velázquez-Quesada 2015: 51-52). "[A]bductive reasoning is for constructing hypotheses for puzzling phenomena...abductive reasoning can only offer hypotheses which may be refuted with additional information" (Aliseda 2003: 25). "Abduction is thinking from evidence to explanation" (Aliseda 2003: 30). Abductive reasoning is related to "'inference to the best explanation,' where an inference is made from given data to a hypothesis that would explain the data, and no better explanation can be found" (Lyne 2005: 114); but the result is a plausible hypothesis rather than a conclusion.

To gather data to model, a keyword search was performed on "technology acceptance model" by year, from the date of publication of the first journal article (1989) through the latest full year. The following databases were searched: WorldCat.org; ERIC; Academic Search Complete; Business Source Complete; ArticleFirst; and JSTOR Arts & Sciences Collections I through XII. Articles, chapters and books were counted (Figure 1). A similar keyword search was also performed on ["diffusion of innovations" and Rogers] (Figure 2) and "Precautionary Principle" (Figure 3).
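For concreteness, assembling such a series amounts to tallying hits per year; a minimal sketch, shown here as a cumulative tally (one natural way to construct a diffusion trajectory), with placeholder numbers rather than the counts behind Figures 1-3:

```python
# Hypothetical per-year hit counts from the keyword searches; the values
# are placeholders, not the data behind Figures 1-3.
hits_per_year = {1989: 1, 1990: 2, 1991: 4, 1992: 7, 1993: 12}

years = sorted(hits_per_year)
running_total = 0
series = []  # (year, cumulative publications): a trajectory to be fit
for year in years:
    running_total += hits_per_year[year]
    series.append((year, running_total))
```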


Figure 1. The Richards model and the Technology Acceptance Model publications data.

Figure 2. The Richards model and the Diffusion of Innovations publications data.


Figure 3. The Richards model and the Precautionary Principle publications data.

The Richards model was then fit to the data (Figures 1­3, Tables 1­6). The benefit of the Richards model is that it is a flexible, four­parameter model, and is able to fit the full range of sigmoidal shapes. The Richards model was introduced in 1959 in the context of plant growth (Richards 1959). It was recently applied to technology diffusion data (Marinakis 2012a). The model has been modified and reparameterized by several researchers. As modified by Sugden et al. (1981), the model is:

\[
W_t \;=\; W_\infty\left[\,1-(1-m)\,\exp\!\left(\frac{-k\,(t-T_\infty)}{m^{\,m/(1-m)}}\right)\right]^{1/(1-m)}
\]

where W_t is the weight or growth at time t, W∞ is the asymptotic weight, k is the maximum relative growth rate per unit time, T∞ is the time to asymptote, and m is a shape parameter with the property that m^{1/(1-m)} is the relative weight at time T∞. In application to the present study, W∞ is the asymptotic number of publications, k signifies the maximum diffusion per unit time relative to the number of publications, T∞ is the time to asymptote, and m is a shape parameter.
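In practice the four parameters are estimated by nonlinear least squares. The following Python sketch, using scipy's curve_fit on placeholder data, is ours (the study's actual fitting software is not stated in this section); nonlinear fits of this model are sensitive to starting values, and m > 1 keeps the expression real-valued.

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, W_inf, k, T, m):
    # Sugden et al. (1981) parameterization, as in the equation above.
    return W_inf * (1 - (1 - m) * np.exp(-k * (t - T) / m ** (m / (1 - m)))) ** (1 / (1 - m))

# Placeholder series (year, publications); not the study's data.
t = np.array([1989, 1992, 1995, 1998, 2001, 2004, 2007, 2010, 2013], dtype=float)
W = np.array([2, 8, 30, 90, 250, 520, 800, 950, 1000], dtype=float)

# Rough starting values read off the data: asymptote, rate, midpoint
# year, and a logistic-like shape (m = 2).
p0 = [W.max(), 0.3, 2001.0, 2.0]
params, cov = curve_fit(richards, t, W, p0=p0, maxfev=10000)
W_inf, k, T, m = params  # compare with the structure of Tables 2, 4 and 6
```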

Results

Statistically significant fits of the Richards model were achieved for the TAM (Tables 1 and 2, Figure 1), the diffusion of innovations (Tables 3 and 4, Figure 2), and the Precautionary Principle (Tables 5 and 6, Figure 3). Qualitatively, the diffusion patterns are clearly sigmoidal (Figures 1-3). The results indicate that asymptotic levels of diffusion (W∞) have already been achieved (at time T∞) for all three data sets. The rate parameter k and the shape parameter m are highest for the TAM data and lowest for the Precautionary Principle data. The rate parameter interpretation is straightforward: maximum diffusion per unit time relative to the number of publications. The shape parameter m may correspond at least in part to Henrich's parameters α[9] and/or B,[10] where higher m-values correspond to lower α-values and higher m-values correspond to higher B-values.[11]
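The fitted m values also have a direct geometric reading through the property stated in the Methods, namely that m^{1/(1-m)} is the relative diffusion level reached at T∞. A worked check against Tables 2 and 6:

\[
\frac{W_{T_\infty}}{W_\infty} = m^{1/(1-m)}:\qquad
1.9979^{\,1/(1-1.9979)} \approx 0.50, \qquad
114.1^{\,1/(1-114.1)} \approx 0.96 .
\]

That is, the Precautionary Principle curve (m ≈ 2) is essentially a symmetric logistic passing through half its asymptote at T∞, whereas the TAM's very large m places T∞ at roughly 96% of the asymptote.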

Source              DF   Sum of Squares   Mean Square   Approximate F Value   Pr > F
Model                4         11549608       2887402               1406.06   < 0.0001
Error               22            45178        2053.5
Uncorrected Total   26         11594786

Table 1. Richards model goodness-of-fit for the Technology Acceptance Model publications data.

[9] "The symbol α, which varies between 0 and 1, gives the relative strength of conformist transmission in human cognition—it scales the cognitive weight given to the frequency of a behavior relative to other biases" (Henrich 2001: 1002).

[10] B = r1 − r2, where the r-values are the replicatory propensities for each of the traits.

[11] Compare Henrich (2001) Figures 4, 9 and 11 with Marinakis (2012) Figures 1 and 2. However, replicating the variety of curves in Henrich's (2001) Figures 4, 9 and 11 requires manipulating all four Richards model parameters.

Parameter   Estimate   Standard Error   Approximate 95% Confidence Limits (lower, upper)
m              114.1           432864                             -897584, 897812
W∞            1167.0          18.5002                              1131.6, 1208.4
T∞            2009.1            399.9                              1179.7, 2838.5
k             0.3000          48.0740                            -99.3986, 99.9986

Table 2. Richards model parameters for the Technology Acceptance Model publications data.

Source              DF   Sum of Squares   Mean Square   Approximate F Value   Pr > F
Model                4          4017275       1004319                350.01   < 0.0001
Error               50           143469        2869.4
Uncorrected Total   54          4160744

Table 3. Richards model goodness-of-fit for the Diffusion of Innovations publications data.

Parameter   Estimate   Standard Error   Approximate 95% Confidence Limits (lower, upper)
m            62.6435          65614.2                            -131727, 131852
W∞             618.6          20.6137                               577.2, 660.0
T∞            2008.2            220.6                             1565.2, 2451.2
k             0.2252          16.0702                           -32.0527, 32.5031

Table 4. Richards model parameters for the Diffusion of Innovations publications data.

Source              DF   Sum of Squares   Mean Square   Approximate F Value   Pr > F
Model                4          2444436        611109                941.53   < 0.0001
Error               37          24015.3         649.1
Uncorrected Total   41          2468451

Table 5. Richards model goodness-of-fit for the Precautionary Principle publications data.

Parameter   Estimate   Standard Error   Approximate 95% Confidence Limits (lower, upper)
m             1.9979           0.6591                              0.6624, 3.3334
W∞             436.6          11.2070                               413.9, 459.3
T∞            2000.4           0.7038                               1999, 2001.9
k             0.1113           0.0122                              0.0865, 0.1361

Table 6. Richards model parameters for the Precautionary Principle publications data.

Discussion

Our research question was: how do technology adoption models diffuse? Is their pattern of diffusion consistent with the pattern of diffusion of technology, or not? The three models investigated here diffused in a sigmoidal fashion, suggesting that they diffused like technology. Apparently model choice is a cultural process and models diffuse through cultural transmission. Thus via abductive reasoning we are justified in proposing the hypothesis that models are technology. We are not asserting or concluding that the three models diffused according to the same process. Rather, based on the results of these analyses, and using abductive reasoning, we propose the hypothesis that the three models diffused (i.e., may have diffused) according to the same process. Rogers himself asserted that the diffusion of innovations occurred through "imitation by potential adopters of their network partners" (Rogers 2010: 18, Henrich 2002), but he probably had no idea that his book would also diffuse through imitation.


Our use of the sigmoidal model adds depth to the inquiry and takes us back to the epistemology-social science dichotomy of technoscience mentioned in the Introduction. If a sigmoidal model can be fit to the number of publications citing a model, then we can say that the pattern of diffusion is consistent with the pattern of diffusion of technology; and because Henrich (2001) showed that the basic form of the sigmoidal curve arises from cultural transmission bias (and that variations thereof arise from rational decisions), we can say that the model is diffusing by cultural transmission bias. This is also a relatively under-researched area. Social factors have been incorporated into a model of scientific theory choice (Brock and Durlauf 1999), but research on social factors in model choice (diffusion) is underrepresented. The question of model selection has been addressed, viz. given two or more equally plausible conceptual models, how does one choose which one to use (Morrison 2011)? Some researchers respond by constructing chimeras of two or more models (Mun et al. 2006, Park and Chen 2007, Phuangthong and Settapong 2008, Chen et al. 2010, Lee et al. 2011, Närman et al. 2012, Zampou et al. 2012). Others use two or more models and quantitatively compare the results (Lin 2007). It has been argued that mechanistic explanations can be distributed across sets of models (Hochstein 2015); generalized, it may be that the representation of some phenomena requires a bundle of models.

The sigmoidal curve, informed by Henrich's (2001) work, organically combines the two perspectives of technoscience. Thus the present study is epistemological, because it relates to "more general aspects such as the convergence of science and technology, of representing and intervening, of understanding and performing, and of the natural and the artificial" (Fiedeler 2011: 85). It is also social science, because its hypothesis of model diffusion through cultural transmission bias relates to "social aspects within the production of knowledge" (Fiedeler 2011: 85), and to how science is actually performed or "Science in Action" (the title of Latour's 1987 book). That duality enables the present work to contribute to bridging "the estrangement of sociologists of science from philosophers of science, a consequence of their divergent responses to Kuhn's Structure of scientific revolutions (1962)" (Bycroft 2012: 425). Those divergent responses arose because Kuhn wrote about how "progress in natural science is considerably dependent on extra-scientific aspects, such as personality, power, and culture" (Fiedeler 2011: 84), which places him in the social sciences branch of technoscience. Yet a closer reading of Kuhn shows that he studied the social dimensions of inquiry without being a social (de-)constructivist (Wray 2011, Bycroft 2012: 428), which also places him in the epistemological branch of technoscience. Perhaps the proper response to Kuhn is a synthesis, a hybridization that is a reflection of the technoscience chimera. The sigmoidal curve offers such a synthesis.

Technoscience is defined as technology-driven science performed in a technology milieu (Kastenhofer and Schwarz 2011). As a descendant of modern science, technoscience is said to contain the DNA of both modern science and of technology. Technoscience is said to be a new species, a hybrid slave-and-master (previously, it was thought that technology served science), a chimera of opposites (the pure and the applied, the natural and the artificial). Science is a matter of observing, technology is a matter of intervening, and technoscience blurred the distinction between them. A leading example is the scanning tunnelling microscope, which observes by interventionally "groping" a surface (Nordmann 2005: 7, 2006). Dissenters to the technoscientific concept counter that technology (technique) has been an indispensable part of modern science since its inception (Hacking 1983, Bensaude-Vincent et al. 2011, Fiedeler 2011), that the issue is more nuanced, and that this was recognized as early as the sixteenth century. Francis Bacon, for example, noted that experimental measuring (observing) entails intervention because it involves the artificial separation of a natural phenomenon or it involves the creation of an artificial environment (Fiedeler 2011: 89). It has been suggested that models "give structure, coherence, and direction to intervention" (Adúriz-Bravo 2014: 174) and that science is "a model-based intervention on the natural world" (Adúriz-Bravo 2014: 169). Technoscience has already hybridized technology and science. It has been argued that scientific theories are intervening representations (or representational interventions; Ibarra and Mormann 2006). It has been said that "scientific intervention entails the construction and use of theoretical models, which give rise to a range of conceptual and symbolic tools that act as mediators in our activity on the world (this is what Ibarra and Mormann (2006) call 'representational intervention')" (Adúriz-Bravo 2014: 176). Could models also be intervening representations (representational interventions), some type of technoscientific hybrid, such as when they represent an intervention (as when they represent a laboratory experiment, sensu Bacon)?


The implicit mentality behind technoscience comports with Heidegger's (1977) view of technology. Heidegger asserted that technology's way of revealing is a rapacious challenging-forth (Waddington 2005: 569). Everything is a raw material that must be maximally consumed. Nothing is sacred. Everything is disposable. Technoscience is a subculture that operationalizes this mentality by using high technology to consume and reconfigure the natural world, sometimes at micro- and nano-scales, sometimes at high energies and temperatures. Technoscience is thus distinguished by its novel products or "objects" (Bensaude-Vincent et al. 2011). These include transgenic mice (the oncomouse™; Salvi 2001), products of "post-genomic technologies"[12] (Styhre 2011) such as systems biology and synthetic biology (Knuuttila and Loettgers 2013, Vincent 2013), modular pieces of DNA called BioBricks (Knuuttila and Loettgers 2013), transhumans and posthumans (Lee 2016), and cyborgs (Hoffmannesque underground body hackers; Duarte 2014, Olivares 2014). Are interventional models a novel product of technoscience? Or have models always been interventional, in the way that science and technology have always been intertwined (Klein 2005)? Heidegger may have been right. The technoscience turn, the potentially epochal change (Nordmann et al. 2011), may have been a matter of degree, the difference between bringing-forth and challenging-forth.

[12] The post-genomic era is "a period when the insights from the human genome mapping program (HUGO), conventional in vivo research models, and use of new 'omics technologies' (proteomics, metabolomics, transcriptomics, etc.) are being combined in new ways" (Styhre 2011: 376).

Conclusion

We colloquially understand the model as a replica, a representation, not as an interventional tool. What does it mean to say that models are interventional? One answer lies in topos theory (Trifonov 1995, Zimmermann 2000, 2007). As mentioned above in the Theoretical Background, a topos is a discourse in which

"the process of the concrete, physical unfolding of the world (as it can be assessed in empirical terms) is essentially identical with the process of reflecting about it, in a cyclic manner which secures that the egg comes before the hen. Following Sandkühler here, we call this aspect "onto-epistemic", in the sense that both the ontological and epistemological components of the human mode of grasping the world operate very much on the same footing"

(Zimmermann 2000: 31; footnote omitted). The onto-epistemic approach "visualize[s] physical properties of systems (and hence also biological properties) as a result of human cognition which is initializing the modeling in the first place" (Zimmermann 2007: 49). Our logic affects how we see the world, e.g., an observer with Boolean logic perceives her environment as a four-dimensional Lorentzian manifold (Trifonov 1995). A model is a topos. A model entails reflecting about the world, and this reflecting is identical to the physical unfolding of the world. When we say that a model is an intervention, we are being self-aware of the impact of our logic on our perception. The views of scientific theories are also topoi, because ceteris paribus, the physical unfolding of the world changes with the process of reflecting about it (the syntactic, or the semantic view, etc.).

The sigmoidal model, read in light of Henrich (2001), indicates that models diffuse primarily through cultural transmission bias and secondarily through reasoned processes; primarily and secondarily not in sequential terms, but in causal terms. The primacy of cultural transmission bias in technology diffusion, and the secondary role of rationality, were both presaged by Kuhn (2012 [1962]), who intuited that "[s]ubjective factors may dominate the early stages of a debate, but in the long run they are swamped by epistemic factors" (Bycroft 2012: 428). Kuhn's hypothesis, however, is a sequential conceptualization that lacks the causal synthesis of the onto-epistemic approach: epistemology eventually subsumes the subjective. In other words, Kuhn proposed a two-stage temporal model. The onto-epistemic approach, in contrast, is not a temporal model. It proposes placing ontology and epistemology on equal footing.

The onto-epistemic approach suggests to us a hybrid approach to Kuhn and to technoscience. Dimopoulos and Koulaidis (2002) describe technoscience as the socio-epistemic constitution of science and technology. We hypothesize that a socio-epistemic approach, analogous to the onto-epistemic approach, visualizes properties of societies (societies at all scales, whether nations or communities) as a result of the human cognition which is initializing the modeling. The process of the concrete physical unfolding of society is essentially identical with the process of reflecting about it. A socio-epistemic view places social and cognitive aspects on the same footing. Thus in technoscience we have the social science perspective promiscuously iterating with the epistemological perspective, the intervening promiscuously iterating with the representing. We hypothesize that technoscience is a socio-epistemic way of thinking about science and technology. The sigmoidal curve and the three models analyzed herein are all technoscientific models because they all engage society (through their coverage of social processes) and epistemology (through their rational aspects, as in the TAM's TRA lineage) simultaneously.

Now that we are justified in proposing the hypothesis that models are technology, the next step will be to test the hypothesis. Testing it would first require selecting a definition of technology. Though the notion that models are technology is appealing and interesting, the number and abstract nature of the definitions of technology suggest that it may be more tenable to test whether models are interventional. It may be even more elegant to test whether a model is a topos. That experiment would entail selecting "a given model of the universe, to construct models of the observer, and find out how the observer's perception of the universe changes if his logic is changed" (Trifonov 1995: 1-2).


Literature Cited

Acerbi, Alberto, and R. Alexander Bentley. "Biases in cultural transmission shape the turnover of popular traits." Evolution and Human Behavior 35, no. 3 (2014): 228-236.
Adúriz-Bravo, Agustín. "Teaching the Nature of Science with Scientific Narratives." Interchange 45, no. 3-4 (2014): 167-184.
Aliseda, Atocha. "Mathematical reasoning vs. abductive reasoning: a structural approach." Synthese 134, no. 1 (2003): 25-44.
Alvard, Michael S. "The adaptive nature of culture." Evolutionary Anthropology: Issues, News, and Reviews 12, no. 3 (2003): 136-149.
Anderson, Warwick. "Introduction: postcolonial technoscience." Social Studies of Science 32, no. 5/6 (2002): 643-658.
Arthur, W. Brian. The Nature of Technology: What It Is and How It Evolves. Simon and Schuster, 2009.
Asch, Solomon E. "Opinions and social pressure." Readings About the Social Animal 193 (1955): 17-26.
Awodey, Steven, and A. W. Carus. "Carnap's dream: Gödel, Wittgenstein, and Logical Syntax." Synthese 159, no. 1 (2007): 23-45.
Bachelard, Gaston. Le nouvel esprit scientifique. Paris: Les Presses universitaires de France, 10th edition, 1968, 181 pp. First edition 1934. Collection: Nouvelle encyclopédie philosophique.
Barnes, B. Scientific Knowledge and Sociological Theory. Routledge, London, 1974.
Bensaude-Vincent, Bernadette, Sacha Loeve, Alfred Nordmann, and Astrid Schwarz. "Matters of interest: the objects of research in science and technoscience." Journal for General Philosophy of Science 42, no. 2 (2011): 365-383.
bin Abdul Murad, Mohd Hazim Shah. "Models, scientific realism, the intelligibility of nature, and their cultural significance." Studies in History and Philosophy of Science Part A 42, no. 2 (2011): 253-261.
Bleed, P. "Content as Variability, Result as Selection: Toward a Behavioral Definition of Technology." Archaeol. Pap. Am. Anthropol. Assoc. 7 (1997): 95-104.


Bloor, David. Knowledge and Social Imagery. University of Chicago Press, 1991 [1976].
Boyd, R., and P. J. Richerson. Culture and the Evolutionary Process. Chicago: The University of Chicago Press, 1985.
Braithwaite, Richard B. "Models in the Empirical Sciences." In Ernest Nagel, Patrick Suppes, and Alfred Tarski (eds.), Logic, Methodology and the Philosophy of Science: Proceedings of the 1960 International Congress. Stanford: Stanford University Press, 1962, 224-231.
Brock, William A., and Steven N. Durlauf. "A formal model of theory choice in science." Economic Theory 14, no. 1 (1999): 113-130.
Byambaa, Tsogtbaatar, Craig Janes, Tim Takaro, and Kitty Corbett. "Putting Health Impact Assessment into practice through the lenses of diffusion of innovations theory: a review." Environment, Development and Sustainability 17, no. 1 (2015): 23-40.
Bycroft, Michael. "Kuhn's evolutionary social epistemology." Studies in History and Philosophy of Science Part A 43, no. 3 (2012): 425-429.
Carnap, Rudolf. Logische Syntax der Sprache. Springer-Verlag, 2013 [1934].
Carnap, Rudolf. "Testability and meaning." Philosophy of Science 3, no. 4 (1936): 419-471.
Carnap, Rudolf. "Testability and meaning." Philosophy of Science 4, no. 1 (1937): 1-40.
Carnap, Rudolf. The Logical Structure of the World and Pseudo-Problems in Philosophy (English translation by R. A. George of Der Logische Aufbau der Welt, Leipzig: Felix Meiner Verlag, 1928). Berkeley: University of California Press, 1967 [1928].
Carnap, R. Untersuchungen zur allgemeinen Axiomatik. Ed. by T. Bonk and J. Mosterin. Darmstadt: Wissenschaftliche Buchgesellschaft, 2000.
Cartwright, Nancy. The Dappled World: A Study of the Boundaries of Science. Cambridge University Press, 1999.
Chang, Chen Chung, and H. Jerome Keisler. Model Theory. Second edition. Elsevier, 1990.
Chang, Hasok. "History and philosophy of science as a continuation of science by other means." Science & Education 8, no. 4 (1999): 413-425.
Chakravartty, Anjan. "Informational versus functional theories of scientific representation." Synthese 172, no. 2 (2010): 197-213.
Chen, Qiangbing, and Yali Liu. "The diffusion of a process innovation with gently declining production cost." Journal of Industry, Competition and Trade 11, no. 2 (2011): 109-129.


Chen, Jengchung, Yangil Park, and Gavin J. Putzer. "An examination of the components that increase acceptance of smartphones among healthcare professionals." Electronic Journal of Health Informatics 5, no. 2 (2010): 16.
Chin, Chuanfei. "Models as interpreters (with a case study from pain science)." Studies in History and Philosophy of Science Part A 42, no. 2 (2011): 303-312.
Coffa, Alberto. "Carnap, Tarski and the search for truth." Noûs (1987): 547-572.
Contessa, Gabriele. "Scientific models, partial structures and the new received view of theories." Studies in History and Philosophy of Science Part A 37, no. 2 (2006): 370-377.
Creath, R. "Carnap's move to semantics: gains and losses." In Jan Wolenski and Eckehart Köhler (eds.), Alfred Tarski and the Vienna Circle: Austro-Polish Connections in Logical Empiricism. Vol. 6. Springer Science & Business Media, 2013, 65-76.
Da Costa, Newton, and Steven French. "Models, theories, and structures: Thirty years on." Philosophy of Science (2000): S116-S127.
Da Costa, Newton C. A., and Steven French. Science and Partial Truth: A Unitary Approach to Models and Scientific Reasoning. Oxford University Press, 2003.
Daim, Tugrul U., Guillermo Rueda, Hilary Martin, and Pisek Gerdsri. "Forecasting emerging technologies: Use of bibliometrics and patent analysis." Technological Forecasting and Social Change 73, no. 8 (2006): 981-1012.
Daim, Tugrul, and Pattharaporn Suntharasaj. "Technology diffusion: forecasting with bibliometric analysis and Bass model." Foresight 11, no. 3 (2009): 45-55.
Davis Jr., Fred D. "A technology acceptance model for empirically testing new end-user information systems: Theory and results." PhD diss., Massachusetts Institute of Technology, 1986.
Davis, F. D. "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology." MIS Quarterly 13, no. 3 (1989): 319-339.
Davis, F. D., P. R. Bagozzi, and P. Warshaw. "User acceptance of computer technology: A comparison of two theoretical models." Management Science 35 (1989): 982-1003.
Dimopoulos, Kostas, and Vasilis Koulaidis. "The socio-epistemic constitution of science and technology in the Greek press: an analysis of its presentation." Public Understanding of Science 11, no. 3 (2002): 225-241.


Duarte, Bárbara Nascimento. "Entangled Agencies: New Individual Practices of Human-Technology Hybridism Through Body Hacking." NanoEthics 8, no. 3 (2014): 275-285.
Ducheyne, Steffen. "Towards an ontology of scientific models." Metaphysica 9, no. 1 (2008): 119-127.
Fiedeler, Ulrich. "When does the co-evolution of technology and science overturn into technoscience?" Poiesis & Praxis 8, no. 2-3 (2011): 83-101.
Fishbein, M., and I. Ajzen. Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research. Addison-Wesley, Reading, MA, 1975.
Fleck, James, and John Howells. "Technology, the technology complex and the paradox of technological determinism." Technology Analysis & Strategic Management 13, no. 4 (2001): 523-531.
Forman, Paul. "The primacy of science in modernity, of technology in postmodernity, and of ideology in the history of technology." History and Technology 23, no. 1-2 (2007): 1-152.
French, S. "Symmetry, invariance and reference." In M. Frauchiger and W. K. Essler (eds.), Representation, Evidence and Justification. Frankfurt: Ontos Verlag, 2008, 127-156.
French, Steven. "Keeping quiet on the ontology of models." Synthese 172, no. 2 (2010): 231-249.
Friederich, Simon. "Structuralism and meta-mathematics." Erkenntnis 73, no. 1 (2010): 67-81.
Greener, Ian. "The potential of path dependence in political studies." Politics 25, no. 1 (2005): 62-72.
Greenhalgh, Trisha, Glenn Robert, Fraser Macfarlane, Paul Bate, and Olivia Kyriakidou. "Diffusion of innovations in service organizations: systematic review and recommendations." Milbank Quarterly 82, no. 4 (2004): 581-629.
Hacking, I. Representing and Intervening: Introductory Topics in the Philosophy of Natural Science. Cambridge: Cambridge University Press, 1983.
Halvorson, Hans. "What Scientific Theories Could Not Be." Philosophy of Science 79, no. 2 (2012): 183-206.
Halvorson, Hans. "The semantic view, if plausible, is syntactic." Philosophy of Science 80, no. 3 (2013): 475-478.


Haraway, D. J. "A cyborg manifesto: science, technology, and socialist-feminism in the late twentieth century." In D. Haraway (ed.), Simians, Cyborgs and Women: The Reinvention of Nature. Routledge, New York, 1985/1991, 149-181 (first published in Socialist Review 80, 1985, 65-108).
Hartmann, Stephan. "Modeling in philosophy of science." In M. Frauchiger and W. K. Essler (eds.), Representation, Evidence, and Justification: Themes from Suppes. Frankfurt: Ontos Verlag, 2008, 95-121.
Heidegger, M. "The Question Concerning Technology." In The Question Concerning Technology and Other Essays, W. Lovitt (trans.). Harper and Row, 1977, 3-35.
Heine, Steven J., and Ara Norenzayan. "Toward a psychological science for a cultural species." Perspectives on Psychological Science 1, no. 3 (2006): 251-269.
Henrich, Joseph. "Cultural transmission and the diffusion of innovations: Adoption dynamics indicate that biased cultural transmission is the predominate force in behavioral change and much of sociocultural evolution." American Anthropologist 103, no. 4 (2001): 992-1013.
Henrich, Joseph. "Decision-making, cultural transmission and adaptation in economic anthropology." In Jean Ensminger (ed.), Theory in Economic Anthropology. Rowman Altamira, 2001, 251-295.
Henrich, Joseph, and Robert Boyd. "The evolution of conformist transmission and the emergence of between-group differences." Evolution and Human Behavior 19, no. 4 (1998): 215-241.
Henrich, Joseph, and Richard McElreath. "The evolution of cultural evolution." Evolutionary Anthropology: Issues, News, and Reviews 12, no. 3 (2003): 123-135.
Hilbert, D. Grundlagen der Geometrie. In Festschrift zur Feier der Enthüllung des Gauss-Weber Denkmals in Göttingen. Herausgegeben von dem Fest-Comitee. Leipzig: Teubner, 1899. (Original work published 1902b).
Hilbert, D. "Die Grundlagen der Physik (Erste Mitteilung)." Nachrichten von der Königlichen Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse aus dem Jahre 1915 (1916): 395-407.
Hilbert, D. "Die Grundlagen der Physik (Zweite Mitteilung)." Nachrichten von der Königlichen Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse aus dem Jahre 1916 (1917): 53-76.
Hintikka, Jaakko. "What is the axiomatic method?" Synthese 183, no. 1 (2011): 69-85.


Hochstein, Eric. "One mechanism, many models: a distributed theory of mechanistic explanation." Synthese (2015): 1-21.
Hottois, G. Le signe et la technique. La philosophie à l'épreuve de la technique. Aubier, 1984.
Ibarra, Andoni, and Thomas Mormann. "Scientific theories as intervening representations." THEORIA. Revista de Teoría, Historia y Fundamentos de la Ciencia 21, no. 1 (2006): 21-38.
Jiang, Hanchen, Maoshan Qiang, and Peng Lin. "A topic modeling based bibliometric exploration of hydropower research." Renewable and Sustainable Energy Reviews 57 (2016): 226-237.
Jordan, Andrew, and Timothy O'Riordan. "The precautionary principle in contemporary environmental policy and politics." In C. Raffensberger and J. Ticker (eds.), Protecting Public Health and the Environment: Implementing the Precautionary Principle. Island Press, Washington, DC, 1999, 15-35.
Kastenhofer, Karen, and Doris Allhutter. "Technoscience and ." Poiesis & Praxis: International Journal of Technology Assessment and Ethics of Science 7, no. 1 (2010): 1-4.
Kastenhofer, Karen, and Astrid Schwarz. "Probing technoscience." Poiesis & Praxis: International Journal of Technology Assessment and Ethics of Science 8, no. 2 (2011): 61-65.
Klein, Ursula. "Technoscience avant la lettre." Perspectives on Science 13, no. 2 (2005): 226-266.
Knuuttila, Tarja, and Andrea Loettgers. "Basic science through engineering? Synthetic modeling and the idea of biology-inspired engineering." Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 44, no. 2 (2013): 158-169.
Kolkman, D. A., P. Campo, T. Balke-Visser, and N. Gilbert. "How to build models for government: criteria driving model acceptance in policymaking." Policy Sciences online (2016): 1-16.
Kuhn, Thomas S. The Structure of Scientific Revolutions. University of Chicago Press, 2012 [1962].
Landry, E. M. "Methodological structural realism." In E. M. Landry and Dean Rickles (eds.), Structural Realism: Structure, Object, and Causality. Vol. 77. Springer Science & Business Media, 2012, 29-58.
Latour, Bruno. Science in Action: How to Follow Scientists and Engineers Through Society. Harvard University Press, 1987.


Lee, Joseph. "Cochlear implantation, enhancements, and posthumanism: some human questions." Science and Engineering Ethics 22, no. 1 (2016): 67-92.
Lee, Yi-Hsuan, Yi-Chuan Hsieh, and Chia-Ning Hsu. "Adding Innovation Diffusion Theory to the Technology Acceptance Model: Supporting Employees' Intentions to use E-Learning Systems." Educational Technology & Society 14, no. 4 (2011): 124-137.
Leu, Hoang-Jyh, Chao-Chan Wu, and Chiu-Yue Lin. "Technology exploration and forecasting of biofuels and biohydrogen energy from patent analysis." International Journal of Hydrogen Energy 37, no. 20 (2012): 15719-15725.
Levy, Arnon. "Modeling without models." Philosophical Studies 172, no. 3 (2015): 781-798.
Lin, Hsiu-Fen. "Predicting consumer intentions to shop online: An empirical test of competing theories." Electronic Commerce Research and Applications 6, no. 4 (2008): 433-442.
Lutz, Sebastian. "What's Right with a Syntactic Approach to Theories and Models?" Erkenntnis 79, no. 8 (2014): 1475-1492.
Lyne, J. "Book review: Abductive Reasoning by Douglas Walton." Argumentation & Advocacy 42, no. 2 (2005): 114-116.
Ma, Lin, and Jaap Van Brakel. "Heidegger's thinking on the "Same" of science and technology." Continental Philosophy Review 47, no. 1 (2014): 19-43.
MacKenzie, D. Statistics in Britain, 1865-1930. Edinburgh University Press, Edinburgh, 1981.
Magnani, Lorenzo. "An abductive theory of scientific reasoning." Semiotica 2005, no. 153-1/4 (2005): 261-286.
Marinakis, Yorgos D. "Ecosystem as a topos of complexification." Ecological Complexity 5, no. 4 (2008): 303-312.
Marinakis, Yorgos D. "Forecasting technology diffusion with the Richards model." Technological Forecasting and Social Change 79, no. 1 (2012a): 172-179.
Marinakis, Yorgos. "On the Semiosphere, Revisited." Signs - International Journal of Semiotics 6 (2012b): 70-126.
Michalakelis, Christos, Dimitris Varoutas, and Thomas Sphicopoulos. "Diffusion models of mobile telephony in Greece." Telecommunications Policy 32, no. 3 (2008): 234-245.

70

Miranda, Luiz CM, and Carlos AS Lima. "A new methodology for the logistic analysis of evolutionary S-shaped processes: Application to historical time series and forecasting." Technological Forecasting and Social Change 77, no. 2 (2010): 175-192.

Morgan, Thomas Joshua Henry, and Kevin N. Laland. "The biological bases of conformity." Frontiers in Neuroscience 6, article 87 (2012): 1-7.

Morrison, Margaret. "Models as Autonomous Agents." In: Mary S. Morgan and Margaret Morrison (eds.), Models as Mediators: Perspectives on Natural and Social Science. Cambridge: Cambridge University Press, 1999, 38-65.

Morrison, Margaret. "One phenomenon, many models: Inconsistency and complementarity." Studies in History and Philosophy of Science Part A 42, no. 2 (2011): 342-351.

Mun, Y. Yi, Joyce D. Jackson, Jae S. Park, and Janice C. Probst. "Understanding information technology acceptance by individual professionals: Toward an integrative view." Information & Management 43, no. 3 (2006): 350-363.

Närman, Per, Hannes Holm, David Höök, Nicholas Honeth, and Pontus Johnson. "Using enterprise architecture and technology adoption models to predict application usage." Journal of Systems and Software 85, no. 8 (2012): 1953-1967.

Niiniluoto, Ilkka. "Representation and truthlikeness." Foundations of Science 19, no. 4 (2014): 375-379.

Nordmann, A. "Was ist TechnoWissenschaft? Zum Wandel der Wissenschaftskultur am Beispiel von Nanoforschung und Bionik." In: Rossmann, T., Tropea, C. (eds), Bionik. Springer, Berlin, 2005.

Nordmann, Alfred. "Collapse of distance: epistemic strategies of science and technoscience." Danish Yearbook of Philosophy 41, no. 7 (2006): 7-34.

Nordmann, Alfred, Hans Radder, and Gregor Schiemann, eds. Science Transformed?: Debating Claims of an Epochal Break. University of Pittsburgh Press, 2011.

O'Riordan, Timothy, and Andrew Jordan. "The precautionary principle in contemporary environmental politics." Environmental Values (1995): 191-212.

Olivares, Lissette. "Hacking the Body and Posthumanist Transbecoming: 10,000 Generations Later as the mestizaje of Speculative Cyborg Feminism and Significant Otherness." NanoEthics 8, no. 3 (2014): 287-297.

Paananen, Aija, and Saku J. Mäkinen. "Bibliometrics-based foresight on renewable energy production." Foresight 15, no. 6 (2013): 465-476.

Park, Yangil, and Jengchung V. Chen. "Acceptance and adoption of the innovative use of smartphone." Industrial Management & Data Systems 107, no. 9 (2007): 1349-1365.

Peregrin, Jaroslav. "Structure and meaning." Semiotica 113, no. 1-2 (1997): 71-88.

Peruzzi, Alberto. "The meaning of category theory for 21st century philosophy." Axiomathes 16, no. 4 (2006): 424-459.

Petrenko, Anton, and Dan McArthur. "Between same-sex marriages and the large hadron collider: Making sense of the precautionary principle." Science and Engineering Ethics 16, no. 3 (2010): 591-610.

Phuangthong, Dulyalak, and Settapong Malisuwan. "User acceptance of multimedia mobile internet in Thailand." International Journal of the Computer, the Internet and Management 16, no. 3 (2008): 22-33.

Popper, Karl. The logic of scientific discovery. Routledge, 2005 [1934].

Portides, Demetris P. "Scientific models and the semantic view of scientific theories." Philosophy of Science 72, no. 5 (2005): 1287-1298.

Portides, Demetris P. "The relation between idealisation and approximation in scientific model construction." Science & Education 16, no. 7-8 (2007): 699-724.

Portides, Demetris. "Seeking representations of phenomena: Phenomenological models." Studies in History and Philosophy of Science Part A 42, no. 2 (2011): 334-341.

Psillos, Stathis. "An explorer upon untrodden ground: Peirce on abduction." In: Gabbay, Dov M., and John Hayden Woods (eds), Handbook of the History of Logic: Inductive Logic. Vol. 10. Elsevier, 2004, 117-151.

Putnam, Hilary. "What theories are not." In: Nagel, Ernest, Patrick Suppes, and Alfred Tarski, eds. Logic, Methodology and Philosophy of Science: Proceedings of the 1960 International Congress. Vol. 1. Stanford University Press, 1962: 240-251.

Rand, Rose. "Reading Notes and Summaries on Works by Rudolph Carnap, 1932 and Undated." Rose Rand Papers. Special Collections Department, University of Pittsburgh, online: http://digital.library.pitt.edu/u/ulsmanuscripts/pdf/31735061817825.pdf. Retrieved May 16, 2013.

Read, Dwight W. "From behavior to culture: An assessment of cultural evolution and a new synthesis." Complexity 8, no. 6 (2003): 17-41.

Reck, Erich. "From Frege and Russell to Carnap: Logic and logicism in the 1920s." In: Awodey, Steve, and Carsten Klein (eds), Carnap Brought Home: The View from Jena. Vol. 2. Open Court Publishing, 2004, 151-180.

Richards, F. J. "A flexible growth function for empirical use." Journal of Experimental Botany 10, no. 2 (1959): 290-301.

Riemann, B. "Über die Hypothesen, welche der Geometrie zu Grunde liegen. (Mitgetheilt durch R. Dedekind)." Abhandlungen der Königlichen Gesellschaft der Wissenschaften in Göttingen 13 (1868): 133-152.

Rogers, Everett M. Diffusion of innovations. Simon and Schuster, 2010 [1962].

Ruohonen, Jukka, Sami Hyrynsalmi, and Ville Leppänen. "The sigmoidal growth of operating system security vulnerabilities: An empirical revisit." Computers & Security 55 (2015): 1-20.

Salvi, Maurizio. "Transforming animal species: the case of 'oncomouse'." Science and Engineering Ethics 7, no. 1 (2001): 15-28.

Schäfer, L. Das Bacon-Projekt. Von der Erkenntnis, Nutzung und Schonung der Natur. Suhrkamp, Frankfurt a. M., 1999.

Shapin, Steven. "History of science and its sociological reconstructions." In: Cognition and Fact, pp. 325-386. Springer Netherlands, 1986.

Shelah, Saharon. Classification theory: and the number of non-isomorphic models. Elsevier, 1990.

Singh, Sanjay Kumar. "The diffusion of mobile phones in India." Telecommunications Policy 32, no. 9 (2008): 642-651.

Spier, Robert FG. From the hand of man: primitive and preindustrial technologies. Houghton-Mifflin, 1970.

Sriwannawit, Pranpreya, and Ulf Sandström. "Large-scale bibliometric review of diffusion research." Scientometrics 102, no. 2 (2015): 1615-1645.

Stöltzner, Michael. "Hilbert's axiomatic method and Carnap's general axiomatics." Studies in History and Philosophy of Science Part A 53 (2015): 12-22.

Styhre, Alexander. "Institutionalizing technoscience: Post-genomic technologies and the case of systems biology." Scandinavian Journal of Management 27, no. 4 (2011): 375-388.

Suárez, Mauricio. "Scientific representation." Philosophy Compass 5, no. 1 (2010): 91-101.

Sugden, Lawson G., E. A. Driver, and Michael CS Kingsley. "Growth and energy consumption by captive mallards." Canadian Journal of Zoology 59, no. 8 (1981): 1567-1570.

Suppe, Frederick. "What's wrong with the received view on the structure of scientific theories?" Philosophy of Science (1972): 1-19.

Suppe, Frederick, ed. The structure of scientific theories. University of Illinois Press, 1977.

Suppe, Frederick. "Understanding scientific theories: An assessment of developments, 1969-1998." Philosophy of Science (2000): S102-S115.

Suppes, Patrick. "A comparison of the meaning and uses of models in mathematics and the empirical sciences." Synthese 12, no. 2 (1960): 287-301.

Tarski, A. "The concept of truth in the languages of the deductive sciences" (Polish). Prace Towarzystwa Naukowego Warszawskiego, Wydzial III Nauk Matematyczno-Fizycznych 34, Warsaw (1933a); reprinted in Zygmunt 1995, pp. 13-172; expanded English translation in Tarski 1983, pp. 152-278.

Tarski, A. Pojęcie prawdy w językach nauk dedukcyjnych. Nakładem / Prace Towarzystwa Naukowego Warszawskiego, wydzial III, 34 (1933b).

Tarski, A., and Vaught, R. "Arithmetical extensions of relational systems." Compositio Mathematica 13 (1956): 81-102.

Torretti, R. Philosophy of Geometry from Riemann to Poincaré. Dordrecht: Reidel, 1978. (Corrected reprint: Dordrecht: Reidel, 1984.)

Trappey, Charles V., Hsin-Ying Wu, Fataneh Taghaboni-Dutta, and Amy JC Trappey. "Using patent data for technology forecasting: China RFID patent analysis." Advanced Engineering Informatics 25, no. 1 (2011): 53-64.

Trifonov, Vladimir. "A linear solution of the four-dimensionality problem." EPL (Europhysics Letters) 32, no. 8 (1995): 621.

Uebel, Thomas. "'Logical Positivism'—'Logical Empiricism': What's in a Name?" Perspectives on Science 21, no. 1 (2013): 58-99.

Van Fraassen, B.C. The scientific image. Oxford: Clarendon Press, 1980.

Van Fraassen, Bas C. "One or two gentle remarks about Hans Halvorson's critique of the semantic view." Philosophy of Science 81, no. 2 (2014): 276-283.

Velázquez-Quesada, Fernando R. "Reasoning processes as epistemic dynamics." Axiomathes 25, no. 1 (2015): 41-60.

Venkatesh, V., and Bala, H. "Technology Acceptance Model 3 and a Research Agenda on Interventions." Decision Sciences 39, no. 2 (2008): 273-312.

Venkatesh, V., and Davis, F.D. "A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies." Management Science 46 (2000): 186-204.

Vincent, Bernadette Bensaude. "Discipline-building in synthetic biology." Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 44, no. 2 (2013): 122-129.

Vorms, Marion. "Representing with imaginary models: Formats matter." Studies in History and Philosophy of Science Part A 42, no. 2 (2011): 287-295.

Waddington, David I. "A field guide to Heidegger: Understanding 'the question concerning technology'." Educational Philosophy and Theory 37, no. 4 (2005): 567-583.

Weber, Jutta. "Making worlds: epistemological, ontological and political dimensions of technoscience." Poiesis & Praxis 7, no. 1-2 (2010): 17-36.

Wittgenstein, Ludwig. Tractatus logico-philosophicus. London: Routledge & Kegan Paul, 1922.

Wray, K. Brad. Kuhn's evolutionary social epistemology. Cambridge University Press, 2011.

Zarmpou, Theodora, Vaggelis Saprikis, Angelos Markos, and Maro Vlachopoulou. "Modeling users' acceptance of mobile services." Electronic Commerce Research 12, no. 2 (2012): 225-248.

Zimmermann, R.E. The Klymene principle: a unified approach to emergent consciousness. Kassel: Universität Gesamthochschule Kassel, 1999.

Zimmermann, Rainer E. "Loops and knots as topoi of substance. Spinoza revisited." arXiv preprint gr-qc/0004077 (2000).

Zimmermann, Rainer E. "Spinoza in context: A holistic approach in modern terms." In: Infinity, Causality, and Determinism. Cosmological Enterprises and their Preconditions (2002): 165-186.

Zimmermann, Rainer E. "Topological aspects of biosemiotics." tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society 5, no. 2 (2007): 49-63.

3. EXPLAINING NEW PRODUCT ADOPTION AT THE BASE OF THE PYRAMID

Marinakis, Yorgos D., Rainer Harms and Steven T. Walsh. Submitted to Technology Analysis and Strategic Management.


Explaining Product Adoption and Diffusion at the Base of the Pyramid

Abstract

Product adoption and diffusion dynamics observed in field studies vary with sample size and population structure. The purpose of this study was to identify a single model that could produce this variety of dynamics. A single model was in fact identified in this study, namely replicator dynamics. To identify this single model, a case study approach based on secondary sources was utilized, supplemented by methods of qualitative research. Product adoption cases were selected from the Base of the Pyramid (BOP) because the richness and variety of that context provided a unique opportunity to study the phenomenon. It was shown that replicator dynamics could produce a variety of product adoption and diffusion dynamics that had been observed at the Base of the Pyramid. These findings imply that replicator dynamics, hence imitation and biased cultural transmission, may lie at the core of all product adoption and diffusion.

Keywords: Base of the Pyramid, biased cultural transmission, product adoption, qualitative research, replicator dynamics


Introduction

Product adoption and diffusion dynamics observed in field studies vary with sample size and population structure. At large sample size and in well­mixed populations, the dynamics trace out sigmoidal curves. At small sample size and in structured populations, the dynamics are described by non­sigmoidal individual learning models. The challenge is that no rationale has been provided for why different types of models or mechanisms are activated in different contexts. Alternatively, no single model has been identified that can explain this variety of dynamics.

Researchers studying product adoption and diffusion models have focused on combining individual learning models. For example, a conceptual framework was offered to organize the variety of variables involved in the diffusion of innovations (Wejnert 2002). This framework is similar to Rogers’ (2010 [1962]) diffusion of innovations, which is an individual learning model. Researchers have also combined Rogers’ diffusion of innovations with other individual learning models, such as the Technology Acceptance Model (e.g., Seyal et al. 2011), Lin’s (2003) interactive communication technology adoption model (Atkin et al. 2015), and Sen’s (1999) theory of poverty (Nakata and Weidner 2012).

However, these combined individual learning models cannot produce the sigmoidal behavior that characterizes large scale product diffusion. A consolidated product adoption model must explain both sigmoidal dynamics and individual learning dynamics. Sigmoidal curves are produced by epidemic models and simple replicator dynamics. Individual learning models and complex update dynamics produce a wider variety of dynamics such as dual equilibria, reversed equilibria, and the failure to reach equilibrium.

The purpose of this study was to identify a single model that could produce the variety of dynamics observed in product adoption and diffusion field studies. A single model was in fact identified by this study, namely replicator dynamics. To identify this single model, a case study approach based on secondary sources was utilized, supplemented by methods of qualitative research. Product adoption cases were selected from the Base of the Pyramid (BOP) because the richness and variety of that context provided a unique opportunity to study the phenomenon. These BOP product adoption and diffusion cases were then qualitatively compared to results from replicator dynamics simulations. It was shown that replicator dynamics could produce a variety of product adoption and diffusion dynamics that had been observed at the Base of the Pyramid. The replicator dynamic framework was then used to explain product adoption in BOP markets.

One motive for performing the present research was the aforementioned lack of a single model, and the diversity of results in field studies and product launches. A second motive was that sustainable development is said to depend on the adoption of new technology­based goods and services (Paradis 2011). Moreover, practitioners who are informed by ill­fitting models may develop ineffective strategies, and the adoption by the poor of new goods may be delayed.

Theoretical Background

We begin with a review of recent research on product adoption models and game­theoretical approaches at the BOP. Replicator dynamics, which model imitative social dynamics and which produce sigmoidal curves (Henrich 2001), are then discussed. The section concludes with a review of the literature on conformity bias, which is a type of imitative social dynamics.

Product Adoption at the Base of the Pyramid

Selling goods and services at the BOP can empower the poor, while at the same time delivering tremendous business opportunities (Prahalad and Hart 2002, Prahalad 2006, Karnani 2007, Agnihotri 2013). The term Base of the Pyramid (BOP) refers to those 4 to 4.5 billion people living on $3,260 or less per year. At its very base are 1.5 billion people who live on less than $1.25 a day and an additional 1 billion who live on less than $2.50 a day. Interest has been increasing in how to catalyze new product acceptance or adoption and diffusion in BOP markets. The temporal evolution of the number or share of users of a product through a social system (e.g., an organization, a social network, a country) is referred to as diffusion (Rogers 2010 [1962]). An individual makes a decision to accept or adopt a new product, and the sum of those individual decisions creates diffusion. The challenge of catalyzing new product adoption at the BOP is that current models might be neglecting a key social dynamic in the adoption process. Thus while these models in their current forms have value as epistemic tools (Knuuttila 2011), their value as empirical models might be wanting.

All published BOP product adoption studies utilized the Technology Acceptance Model (TAM) or its ancestor, the Theory of Planned Behavior (TPB) (e.g., Tobbin 2012, Urmee and Gyamfi 2014, Chen and Huang 2016; verified through a literature analysis on the keywords "Base of the Pyramid" and "diffusion"). TAM and TPB models emphasize individual learning. They do not address imitative social dynamics without learning. However, sigmoidal curves can be created by imitative social dynamics without learning. As we show below, diffusion at the BOP has traced out a sigmoidal curve in the case of M-Pesa in Kenya. In addition, individual learning models such as the Technology Acceptance Model cannot create sigmoidal curves; they create r-shaped curves (Henrich 2001). Hence, the theoretical models used in BOP diffusion studies cannot explain the empirically observed diffusion curves.

Researchers studying markets at the BOP have been constructing new product adoption models that are based mainly on individual learning. This construction is generally derived from and justified by the literature, and to some extent is confirmed by empirical studies. These empirical studies have small sample sizes, e.g., Lal Day et al. (2013) with a sample size of 50 and Jebarajakirthy et al. (2015) with a sample size of 795. This distinction between small and large sample sizes provides an important clue for identifying a single explanatory model. Later in this section it will be shown that replicator dynamics demonstrate two entirely different modes of dynamics, depending on sample size and whether the population is well-mixed. At large sample sizes in well-mixed populations, the replicator equations produce sigmoidal curves. At small sample sizes in spatially correlated populations, the replicator equations produce a variety of bizarre dynamics that qualitatively resemble the results of individual learning-based dynamics. The difference between large sample size and small sample size replicator dynamics is due to the fact that achieving well-behaved replicator dynamics requires meeting four assumptions (discussed below). These four assumptions can be approximated as well-mixed populations and large sample sizes. It is easier to provide examples of populations that are not well-mixed. These include subpopulations in vertices in generic graphs (Madeo and Mocenni 2015), and populations on regular lattices (Roca et al. 2009). Exactly what constitutes a small or large sample size depends on the game being produced with the replicator equations. Researchers investigating Dawkins' Battle of the Sexes identified the transition from small to large sample size around N = 125 (Traulsen et al. 2005). In 2x2 games with fixed payoff matrices, researchers identified the transition at N = 16 for one game and N = 650 for another (Taylor et al. 2004). In a Prisoners' Dilemma game, researchers identified the transition at N = 10,000 (Traulsen et al. 2006).

Replicator Dynamics: mean field approximation

Product adoption and diffusion models are varied (Sarkar 1998, Geroski 2000). They range from epidemic models and simple replicator dynamics, to individual choice models such as Probit, to individual learning models using a variety of update rules such as the Fermi rule (imitation, but also allowing individuals to make mistakes) and endogenous evolution (Roca et al. 2009, Xia et al. 2012). Replicator dynamics in particular are one of several types of what is referred to as update dynamics of strategies (Roca et al. 2009, Xia et al. 2012). Replicator dynamics and the Fermi rule are stochastic update rules. In replicator dynamics, an agent randomly chooses a neighbor and compares the payoff of his own strategy with his neighbor's payoff. If his own payoff is greater than his neighbor's payoff, the agent does not change his strategy. If his own payoff is less than his neighbor's payoff, the agent adopts the neighbor's strategy with a probability proportional to the difference between the two payoffs. Under the Fermi rule, each agent stochastically chooses a neighbor and imitates its strategy at the Fermi probability without considering the actual payoff. Under the best response rule, an agent adopts the optimum strategy (the best response) with probability p. Update rules are interesting for a number of reasons. Replicator dynamics produce the sigmoidal curves characteristic of large scale diffusion of innovations curves (Henrich 2001; Table 1). In addition, the Fermi and best response update rules are strong enough to suppress the effects of a structured (i.e., not well-mixed) population (Roca et al. 2009).
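For concreteness, the following is a minimal sketch of the pairwise proportional-imitation (replicator) update rule just described, assuming a simple agent/neighbor bookkeeping of our own devising; it illustrates the rule and is not an implementation from the cited studies.

```python
import random

def replicator_update(strategies, payoffs, neighbors, max_payoff_diff):
    """One round of the pairwise proportional-imitation (replicator) rule.

    Each agent picks a random neighbor. If the neighbor earned more, the
    agent copies the neighbor's strategy with probability proportional
    to the payoff difference; otherwise the agent keeps its strategy.
    """
    new_strategies = dict(strategies)
    for agent in strategies:
        neighbor = random.choice(neighbors[agent])
        payoff_diff = payoffs[neighbor] - payoffs[agent]
        if payoff_diff > 0 and random.random() < payoff_diff / max_payoff_diff:
            new_strategies[agent] = strategies[neighbor]
    return new_strategies
```

Here `strategies`, `payoffs` and `neighbors` are hypothetical dictionaries keyed by agent identifier, and `max_payoff_diff` normalizes the payoff difference into a probability.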


Replicator dynamics, in contrast to the Fermi and best response rules, are not strong enough to suppress the effects of a structured population. Replicator dynamics produce the sigmoidal curves only under conditions in which four assumptions are met. These assumptions guarantee the “mean­field approximation” in which “any individual effectively interacts with a player which uses the average strategy within the population” (Roca et al. 2009: 210). The four assumptions are (Hilbe 2011: 2069):

“(i) The population is well mixed, meaning that any two players interact with the same probability.

(ii) Before reproduction [reproducing a strategy, here: adoption or not], individuals play against a representative sample of the population.

(iii) Players may choose among a finite set of strategies, and

(iv) the population needs to be infinite.”

When the population is well mixed, “the rate at which individuals meet and interact is independent of their strategies” (Ji and Xian­Jia 2013: 37­38). When the population is finite, “there will be some random interference in the individuals’ strategy selection process” (Ji and Xian­Jia 2013: 38).

The replicator dynamics equations (Taylor and Jonker 1978, Schuster and Sigmund 1985) assume a population of n types, where x_i is the frequency of type i. The state of the population is x = (x_1, ..., x_n)^T, where x is an element of the unit simplex S_n (i.e., x ∈ S_n). Let x(t) be a law of motion. Let individuals meet randomly and engage in a symmetric game with payoff matrix A, where (Ax)_i is the expected payoff for an individual of type i and x^T Ax is the average payoff in the population state x. The replicator equation is

$$\frac{dx_i}{dt} = x_i \left( (Ax)_i - x^{T} A x \right)$$

Thus this equation describes the temporal evolution of frequencies of strategies (Alboszta and Miekisz 2004, Zhang and Hofbauer 2015), where more successful strategies spread in the population (Hofbauer and Sigmund 2003), and the growth rate of the frequency of a strategy is proportional to the difference between the payoff of individuals playing this strategy and the mean payoff in the population (Mertikopoulos and Viossat 2016). With the proper payoff matrix, it is a model of the imitation of successful individuals (Hofbauer and Sigmund 2003: 491 ff.).
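As an illustration, here is a minimal numerical integration of the replicator equation for a hypothetical two-strategy adoption game. The payoff matrix is one we chose so that the "adopt" strategy strictly dominates, in which case the replicator equation reduces to the logistic equation and the adopter frequency traces a sigmoidal curve.

```python
import numpy as np

def replicator_rhs(x, A):
    """Right-hand side of the replicator equation dx_i/dt = x_i((Ax)_i - x^T A x)."""
    fitness = A @ x
    return x * (fitness - x @ fitness)

# Hypothetical payoff matrix: strategy 0 ("adopt") earns 2 against anyone,
# strategy 1 ("do not adopt") earns 1; the replicator equation then
# reduces to the logistic form dx0/dt = x0 * (1 - x0).
A = np.array([[2.0, 2.0],
              [1.0, 1.0]])

x = np.array([0.01, 0.99])      # start with 1% adopters
dt, adopter_share = 0.01, []
for _ in range(2000):           # Euler integration to t = 20
    adopter_share.append(x[0])
    x = x + dt * replicator_rhs(x, A)
# adopter_share rises sigmoidally from 0.01 toward 1.
```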

Replicator Dynamics: non­mean field approximation

Replicator dynamics at large sample size produce a sigmoidal curve; but if the sample size is relatively small or the population is not well-mixed, then replicator dynamics can also produce the irregular dynamics that are characteristic of small or structured populations. Above we gave some examples of small sample sizes, and we showed that the definition of small depended on the game being simulated. We also gave examples of what it means to not be well-mixed. Game theory researchers have been investigating "beyond replicator dynamics" and "beyond the mean-field approximation," studying fluctuations and correlations (Roca et al. 2009: 210). Extended replicator dynamics include imitative dynamics in finite populations (Borkar et al. 2004, Taylor et al. 2004, Hilbe 2011, Madeo and Mocenni 2015) and with non-uniform interaction rates (Taylor and Nowak 2006, Van Veelen 2011, Ji and Xian-Jia 2013).

The non­mean field replicator dynamic literature was surveyed. The survey was focused on finding the stable coexistence of two product designs, the dominance of inferior product designs, and the failure to move towards an asymptote. Current representative examples were identified (Table 1). It will be shown below in the case discussions that non­mean field dynamics resemble small sample size new product adoption at the Base of the Pyramid (Table 1).

Model | Dynamics | Reference | Case

Replicator dynamics: mean field approximation | Sigmoidal curve | Henrich (2001) | M-Pesa, Tata Nano

Replicator dynamics: non-mean field approximation:

Small (finite) populations (replicators under conditions that do not satisfy the replicator dynamics assumptions) | Bistable or dual equilibria | Madeo and Mocenni (2015) | Sudan Fuel Stoves

Non-uniform interaction rates in finite populations (replicators under conditions that do not satisfy the replicator dynamics assumptions) | Reversed dominance at equilibrium, spatial clustering | Roca et al. (2009) | Sudan Fuel Stoves

Non-Nash equilibrium solutions (replicator dynamics with an additional strategy) | Failure to reach equilibrium | Viossat (2007) | Sudan Fuel Stoves

Intrinsic noise (replicators under conditions that do not satisfy the replicator dynamics assumptions) | Failure to reach equilibrium | Li et al. (2016) | Sudan Fuel Stoves

Short length of time (replicators under conditions that do not satisfy the replicator dynamics assumptions) | Failure to reach equilibrium | Flåm and Morgan (2004) | Sudan Fuel Stoves

Table 1. Replicator dynamics in both the mean field and non-mean field approximations produce the wide variety of dynamics observed in new product diffusion field studies at the BOP.

Biased Cultural Transmission and Conformity Bias

The replicator dynamics framework is a model of biased cultural transmission such as conformity bias. Humans rely on cultural learning to acquire the majority of their behaviors (Henrich 2001). This learning occurs through biased cultural transmission such as direct bias, prestige bias and conformity bias. Direct biases derive from the intersection of our ideas, beliefs, practices and values with our learning psychology. Here, people copy a behavior or an idea because there is something about it that appeals to them, given their particular cultural traits and their particular psychologies (preferences, ease of storage and recall from memory, goals). Under prestige bias, people copy the ideas or practices of prestigious or successful individuals, even if those ideas or practices have nothing to do with the prestige or success.


Under conformity bias, people preferentially imitate ideas and behaviors that are expressed by a majority of a group over those expressed by a minority, even when their personal opinions or behaviors will not be known by the other group members.

Conformity bias is particularly relevant because it causes the slow initial growth that often characterizes product diffusion (Henrich 2001: 1001-1003). Conformity is the phenomenon in which an individual displays a particular behavior because it is the behavior most frequently witnessed in others (Claidière and Whiten 2012). Conformity bias, a form of cultural transmission bias, can maintain both similarities within, and differences between, cultural groups (Boyd and Richerson 1996). A rare trait inhibits its own diffusion and a common trait encourages its own diffusion. To explain the phenomenon of conformity, it was suggested that social psychology's distinction between informational and normative conformity be imported into behavioral ecology and evolutionary biology (Claidière and Whiten 2012: 127). In informational conformity, a person conforms in order to find information about reality. In normative conformity, people conform to social rules to maintain and develop group identity. People might also conform in order to reduce their existential uncertainties (Yair 2008).
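The frequency dependence of conformity bias can be written down directly. A minimal sketch, using the standard conformist transmission recursion of Boyd and Richerson, p' = p + D p(1 - p)(2p - 1), where the conformity strength D is a hypothetical parameter: a trait above 50% frequency gains share and a rare trait loses share, which is exactly the rare-trait suppression described above.

```python
def conformist_trajectory(p, D=0.3, generations=50):
    """Iterate the conformist transmission recursion
    p' = p + D * p * (1 - p) * (2p - 1):
    common traits (p > 0.5) spread, rare traits (p < 0.5) decline."""
    trajectory = [p]
    for _ in range(generations):
        p = p + D * p * (1 - p) * (2 * p - 1)
        trajectory.append(p)
    return trajectory

# A trait held by 40% of the group slowly dies out,
# while one held by 60% slowly goes to fixation.
declining = conformist_trajectory(0.4)
spreading = conformist_trajectory(0.6)
```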

Questions relating to cultural transmission catalysis are often addressed under the rubric of irrational herding, which is the phenomenon in which herd members passively mimic others’ choices and refer to others’ decisions as a descriptive social norm (Zhuang and Liu 2012). The role of irrational herding dynamics has been examined for example in online product choice (Huang and Fen 2006) and in stock market investments (Yang et al. 2015). What has been less studied is the question of how to break herds, which is the initiation (vis­à­vis the acceleration) of cultural transmission. Queues are said to play a catalytic role (Debo and Veeraraghavan 2009). Consumer opinions were found to play a more determinative role than expert opinions (Huang and Fen 2006). A mathematical analysis suggested that one method to manipulate herds to ensure higher sales for firms was to sell to groups (Sgroi 2004). Another approach to the problem of cultural transmission catalysis focuses on peer effects/influences, such as those catalyzed by Solar Community Organizations (Noll et al. 2014) and by mobile carriers (de Matos et al. 2014). The two approaches are likely equivalent.


If people are adopting a new product because they see others adopting it, then new product adoption can be catalyzed by selling it under conditions in which consumers can imitate one another. One way to catalyze new product adoption is through group sales, where people can see each other adopting the product. Two types of group sales, roadshows and tent shows, are still common sales venues in India and Kenya for reaching Base of the Pyramid markets. But in the United States, roadshows and tent shows were much more common in the first half of the twentieth century than they are today (e.g., Mintz 1985, Waller 1995).

Methods

The purpose of this study was to identify a single model that could produce the variety of dynamics observed in product adoption and diffusion field studies. To analyze the research question, the case study method was utilized (Yin 2013, Flyvbjerg 2006). In terms of Yin's (2013) five case study components, this study's question was whether a single model could be identified that could explain the varieties of dynamics observed in product adoption and diffusion field studies. This study's proposition was that the diffusion of innovations occurs through replicator dynamics. This study's unit of analysis was the population that was the intended target of the new product. The logic linking the data to the propositions was that replicator dynamics create sigmoidal curves (Henrich 2001), and that imitative dynamics under conditions that do not meet the four assumptions of replicator dynamics (see above, Theoretical Background) often demonstrate bizarre dynamics that qualitatively match the dynamics observed in field studies. The criterion for interpreting the findings was qualitative resemblance between the field study observations and a representative sample from mathematical studies of replicator dynamics.

A theoretical framework was identified, namely replicator dynamics, that displayed the requisite variety of dynamics. That step was easy, as replicator dynamics were already being used to model large scale diffusion. In addition, replicator dynamics research had already been extended to small sample sizes. A representative set of results was then compiled from small sample size replicator dynamics, with the intent of qualitatively comparing them with a representative set of product diffusion studies.

To select the new product diffusion Base of Pyramid cases, two sets of filters were applied to identify three representative cases, and then literature searches were performed on those cases to develop a comprehensive background for each of them. The specific steps of our methods follow.

1. A literature search was performed on the following terms: kw:diffusion AND kw:innovation AND (kw:"base of the pyramid" OR kw:"bottom of the pyramid"). This search utilized 29 databases, including Academic Search Complete and Business Source Complete. It resulted in 265 articles or chapters, 12 books, and 2 items or archival materials.

2. To safeguard the quality of the sources used, the results were limited to peer-reviewed materials. This resulted in 89 articles or chapters.

3. The results were further narrowed by selecting only those articles that reported actual diffusion of innovations at the Base of the Pyramid, either primarily or in a secondary study that included an analysis that went beyond a literature review. Excluding duplicates, these were 7 articles.

4. The articles were then categorized based on whether the reported innovation occurred in materials (M), fabrication & assembly (F&A), or processes (P), because each of those innovations occurs differently (Linton and Walsh 2003, Walsh et al. 2015).

5. One case was then selected from each of the M, F&A and P categories, based on the depth and scope of the available literature.

6. To provide further background information for the cases, a second set of searches was then performed utilizing a separate set of keywords for each of the selected cases. In this step numerous articles were identified relating to each of the selected cases. The keywords and search results were as follows: kw:(Sudan stove) resulted in 68 articles/chapters; kw:("M-Pesa" Kenya) resulted in 350 articles/chapters; and kw:("Tata Nano" India) resulted in 675 articles/chapters.


The cases that were selected and not selected will now be discussed. The principal logic for selecting the three cases is that they were strong examples of their respective M, F&A and P category; and they provided a variety of examples with respect to the replicator dynamics assumptions. We explicitly accounted for and represented the three innovation categories because we were uncertain of their respective effects on product adoption and diffusion. The following cases were selected: Khavul and Bruton (2013) for M (fuel stoves in Darfur); Van den Waeyenberg and Hens (2008) for F&A (the Tata Nano); and Sivapragasam et al. (2011) for P (M­Pesa). The Darfur fuel stove case was selected because it involved a social enterprise model (Yang and Wu 2016) and because it presented a bistable or dual equilibrium. The Tata Nano case was selected because it involved an allegedly unsuccessful launch and a slightly more successful re­launch. The M­Pesa case was selected because it was an unqualified success in Kenya. The Tata Nano case and the M­Pesa cases were also attractive because they involved large sample sizes and likely satisfied assumptions (i)­(iv) of replicator dynamics. The Darfur fuel stove case likely did not satisfy the assumptions, but it was the only materials (M) case in the group of articles.

The following cases were not selected: Patrimonio Hoy (a new construction financing service), e­Choupal (Internet kiosks in Indian villages so farmers can get market information), and Grameen’s Village Phone (cellular phone service to poor rural households in Bangladesh) (Ratcliff and Doshi 2016); toilets (Ramani et al. 2012); the use of mobile telephony in Asia to send remittances (Sivapragasam et al. 2011, but this article was selected for M­Pesa); clean drinking water and household electrification (Khavul and Bruton 2013, but this article was selected for Darfur); motorized transport and mobile phones (for agricultural information in Indonesian farming communities) (Matous et al. 2015); or mobile telephony in Kenya (Foster 2014). Mobile telephony diffused universally across the Base of the Pyramid according to a sigmoidal curve, so there is little to learn from these cases. The other cases likely did not satisfy assumptions (i)­(iv) because the sample sizes were too small.

The replicator dynamics and other game theory examples indicate which characteristics are relevant to understanding Base of the Pyramid new product adoption, and therefore what information should be extracted from the cases. These are the four assumptions (i)-(iv), namely (i) a well-mixed population (vis-à-vis the lack of one, which has spatial correlations due to territorial or physical constraints, and from segregation or group formation); (ii) before selection, players play against a representative sample of the population (vis-à-vis a situation in which players play a small number of games, not against a representative sample of the population, before selection occurs); (iii) a finite set of strategies (vis-à-vis an infinite selection of strategies); and (iv) an infinite population (vis-à-vis a finite population). In terms of field research, assumption (i) can be interpreted as referring to the heterogeneity of the spatial distribution of the population. Assumption (ii) can be interpreted as referring to how much interaction an individual has with other individuals in the population before making a product adoption decision. To some extent, assumption (ii) is subsumed by assumption (i), because it is easier to assume (ii) has occurred when the representative sample is well-mixed. Assumption (iii) can be interpreted as referring to the number of product choices available to the individual. Assumption (iv) is sample size. Thus at a minimum we are interested in whether the population was well-mixed (assumption (i)), and whether the sample size was large (assumption (iv)). We extracted this information and report it below in the Results.

Results and Discussion

For each of the selected cases, the selected article was supplemented with additional articles to fill in the necessary background. The geography of the unit of analysis was compared to the assumptions of replicator dynamics, specifically whether the geography allowed for a well­mixed population and a large sample size. The facts of the case were then compared to the mathematical analyses discussed in the Theoretical Background.

Case 1. Materials Innovation: Fuel Stoves

Since 1997, development and relief agencies have been supplying fuel stoves of various types to refugee camps in Darfur, Sudan. Abdelnour and Branzei (2010) treated the Darfur refugee camps as markets, and sought to determine why some development and relief interventions successfully enabled "subsistence marketplaces" while others delayed or distorted them. They identified three temporal stages of market development in which agencies and other actors played sometimes changing roles. In Stage 1 (1997-2002), the focus was on reducing or eliminating the health risks of wood burning stove smoke in kitchens. In Stage 2 (2002-2005), the focus was on reducing or eliminating the rape of women who left the refugee camp in search of firewood (Abdelnour and Saeed 2016). In Stage 3 (2005-2008), the focus was on building a local economy. Within these stages, the agency Intermediate Technology Development Group (ITDG), which later changed its name to Practical Action (PA), focused on developing individual production skills and consumer-user acumen; the agency Cooperative Housing Foundation, which later changed its name to CHF International, initially focused on developing the local economy (local fabrication and assembly, distribution by female entrepreneurs) through funding a local plant to produce metal stoves; and the Berkeley Lab focused on optimizing the technology by redesigning the metal Tara stove from India. CHF International eventually broadened its focus to encourage increased involvement by the consumer-users.

Across these three stages, a variety of stove technologies were introduced. ITDG/PA introduced mud stoves in 1997. In 2003, ITDG/PA introduced Liquid Petroleum Gas (LPG) cookers. In 2006, CHF International introduced metal stoves in cooperation with Lawrence Berkeley National Laboratory. These metal stoves possessed high combustion efficiency and good heat transfer efficiency; they used less than half the fuel of the traditional three-stone fire, cut emissions in half, and cost $20 to make (Kramer 2012). In mid-2006, International Lifeline Fund introduced brick stoves. The indigenous Darfur stove comprised a three-stone ring, the top of which simultaneously accommodated both a small (16-19 cm diameter) and a large (23-28 cm diameter) round-bottomed pot. The introduced mud stoves also comprised natural materials, as they were constructed of brick covered with mud walls. The tops of the mud stoves also accommodated round-bottomed pots, albeit only a single pot at a time. The metal stoves could accommodate a single round-bottomed pot.

These releases of stove technologies to some extent overlapped, a situation to which some of the agencies responded by competing with each other through product subsidies and free product distributions. These market interventions likely distorted the diffusions that would have occurred based solely on product merits. For example, Christensen et al. (2015) found in rural Malawi that those who paid the deeply discounted price for a water purification product were more likely to re-obtain and use the product than those who paid a moderate price or who took it for free. Yet some observations are relevant and interesting. By Stage 3, 80% of the households had a mud stove and frequently used them. By the end of Stage 3, the demand continued for both mud stoves and brick stoves, but the supply of metal stoves exceeded the demand even at heavily subsidized prices. By 2008, more than 80% of the people surveyed who had received a fuel-efficient stove were still using it, and the majority of stoves in use in the Darfur refugee camps were mud stoves (between 74 and 95%, ProAct Network 2008). Abdelnour and Branzei (2010) similarly reported that, by 2008, 90% of the stove owners in the Darfur refugee camps used mud stoves, and that the 49% who owned both mud and metal stoves preferred the mud stoves.

Fuel Stoves ­ Assumption (iv): the population needs to be infinite (large sample size)

Internally Displaced Persons (IDP) camp population data is difficult to find. United Nations statistics report monthly new arrivals but not censuses. One of the best sources is the press. According to Radio Dabanga, the largest IDP camps in Darfur are the Murnei camp in West Darfur, the Zamzam camp in North Darfur, and the Kalma camp in South Darfur, hosting approximately 125,000, 150,000, and 160,000 displaced persons respectively (Dabanga Jan. 4, 2016). El Salam camp, with more than 80,000 residents, is still considered one of the largest camps in Darfur. Radio Dabanga also reports that "[a]s of mid-June, 2.6 million people remain displaced across Darfur and 1.6 million civilians continue to reside in some 60 camps for internally displaced persons across the region, according to the UN Office for the Coordination of Humanitarian Affairs" (Dabanga June 30, 2016). We cannot assume that the sample size assumption was met. As reported above, the effective sample sizes were 125,000, 150,000 and 160,000, because each camp was a closed sample.

Fuel stoves ­ Assumption (i): The population is well mixed

We begin with a brief overview of the diversity of Sudan. Prior to its division in 2011 into North and South Sudan (Jok 2011), the Sudan was a highly diverse state. It comprised approximately 570 tribes, 56 ethnic groups, 8 major ethnic categories and 113 vernaculars (Gatkuoth 1995: 208). The 8 categories are Arabs (39%), Nilots (20%), Para-Nilots (5%), Westerners of Darfur (13%), Nuba (5%), Nubians (5%), Sudanic (6%), and foreigners (7%). The North is relatively religiously and linguistically homogenous, as Islam and the Arabic language are dominant. South Sudan is more culturally diverse and has struggled with the idea of a national identity (Frahm 2012). At the time of its independence in 2011, South Sudan comprised "more than sixty cultural and linguistic groups" and it had a "history of internal political rivalries along ethnic lines" (Jok 2011: 2).

Sudan has a highly diverse population, but the IDP camps are apparently highly spatially correlated. Tribal members in the IDP camps voluntarily self-segregated into tribes (Moran 2015). In a camp that was mainly Nuer, violence erupted between politically opposed subgroups (Id.). The Nuer were well-mixed but apparently prone to fracture. Evans-Pritchard (1940) discussed the Nuer tribes. There were 300,000 Nuer. Most tribes had a population over 5,000; the largest tribes had populations of 30,000 to 45,000. Each tribe has a dominant clan (a clan is an exogamous system of lineages which trace their descent to a common ancestor). The clans were highly dispersed, and any village comprised members of diverse clans. The dominant clan had aristocratic status with privilege, or at least it had prestige. Clans were so dominant in Nuer tribes because social obligations were expressed through a kinship idiom, and the interrelations of local communities within a tribe were described in terms of an agnatic relationship. When a tribe segments, it does so along lineages. We cannot assume that the well-mixed assumption was met.

Fuel Stoves ­ Discussion

Because the final stove diffusion data comes from Stage 3, our results are limited to that stage. We cannot assume that either assumption (i) or (iv) of replicator dynamics was met in the IDP camps. That does not necessarily exclude the possibility of imitative dynamics, replicator or otherwise. To understand the diffusion of fuel stoves in Sudan, we turn to a report by the ProAct Network (2008). Their researchers visited 18 camps or communities in four states between 30 March and 17 April 2008, for a total of 932 households. They found that 74%-95% of the households were using an improved mud stove (Table 1). The remaining households (5%-26%, respectively) used other stoves, which varied by state. For example, in South Sudan these other stoves were traditional mud stoves, traditional metal stoves that used charcoal, and Tara stoves. In North Sudan these other stoves were rocket stoves and Tara stoves. These may be examples of the bistable or dual equilibria discussed by Madeo and Mocenni (2015) in finite populations, or the clustering of Roca et al. (2009).

Given the extensive diffusion of the improved mud stove, and given the findings of the ProAct Network (2008) interpreted in light of Madeo and Mocenni (2015), a replicator dynamics update rule might have been underlying the product adoption in the IDP camps. This may have been the case at least within the specific assemblages (possibly tribes or clans or lineages) in which new product adoption took place. The dual equilibrium may also be due to a variety of initial conditions, as discussed by Viossat (2007) (in light of Cartwright and Wooders (2014)). It could also be a result of intrinsic noise (Li et al. 2016) or a diffusion time that is too short (Flåm and Morgan 2004). The findings of Viossat (2007) suggest that a failure to reach equilibrium does not foreclose the possibility of an underlying replicator dynamic update rule. Future efforts to catalyze fuel stove adoption should be based on facilitating imitative social dynamics.

Case 2. Process Innovation: M­Pesa

M­Pesa is a mobile phone­based payments system introduced to Kenya by Safaricom, the country’s leading mobile telephony operator. Rather than providing full banking functionality, M­Pesa provides users with the functions of storing and transferring relatively small amounts of money at a relatively low cost. M­Pesa was an unqualified success. It was launched in March 2007. As of June 2010, 46% of the adult population of Kenya was using M­Pesa (Mas and Ng’weno 2010). Mobile telephony did not have social pressure against it in Kenya in 2007 when M­Pesa was introduced. There was already a 34% mobile telephone penetration into the Kenyan adult population (Banka 2013). Mobile telephones are currently “ubiquitous” in rural Kenyan livestock communities (Butt 2015). However, carrying and transferring money using mobile telephones was foreign at the time. Before M­Pesa, money was transferred over distances by bus companies (Eijkman et al. 2010, Mas and Ng’weno 2010) or the post office (Mas and Ng’weno 2010).


Some researchers suggested that a key to M-Pesa's success is its use of a large network of retail shops where consumers can make deposits and withdrawals (Eijkman et al. 2010), but a broader view suggested that M-Pesa was successful because it built awareness and trust through branding, provided a consistent user experience while building an extensive network of retailer-agents, and designed an effective consumer pricing and agent commission structure (Mas and Ng'weno 2010). In this analysis we focus on the branding aspect, which we interpret here as a successful effort to overcome the social pressure to conform by rejecting the product.

In its first year of introducing M-Pesa, Safaricom supplemented its use of traditional mass media advertising with roadshows explaining the product and demonstrating how to use it, a method to which the low end of the market was accustomed (Mas and Ng'weno 2010). This marketing approach sought to overcome the social pressure to conform, because it did not assume Kenyans would immediately adopt the technology-based product simply because it offered a superior solution to one of their problems. Rather, Safaricom apparently believed that the technology would have to be introduced to Kenyans on their own terms, with care taken to show how the values embodied in the product design conformed to the values of the Kenyans themselves. Namely, Safaricom utilized the marketing slogan "Send money home." Initially the firm positioned the M-Pesa consumer value proposition as a new way to make payments on microloans, but after test marketing the product they repositioned M-Pesa as a means for urban workers to make remittance payments to relatives and friends who lived in rural locations. This slogan simultaneously acknowledged the issues facing split urban-rural families in Kenya, while positioning M-Pesa as an aspirational product rather than a poor man's substitute for a bank. Through the road shows, and a policy of having knowledgeable clerks available at retail stores ready to explain how to use the product, the low end of the market quickly adopted the product (Mas and Ng'weno 2010).

M­Pesa ­ Assumption (iv): the population needs to be infinite (large sample size)

The July 2015 population estimate of Kenya was 45,925,301 (CIA Kenya 2016). M-Pesa would be used by the poorest economic classes. In the 0 to 90% economic class, the highest wage was 15,000 Kenyan Shillings. In the 91%-99% economic class, the highest wage was much higher at 100,000 Kenyan Shillings (DPMF 2016). We can assume that the sample size assumption was met.

M­Pesa ­ Assumption (i): The population is well mixed

Kenya comprises over 70 ethnic groups (AFC 2016). They are: Kikuyu 22%, Luhya 14%, Luo 13%, Kalenjin 12%, Kamba 11%, Kisii 6%, Meru 6%, other African 15%, non­African (Asian, European, and Arab) 1% (CIA Kenya 2016). It is reported that inter­ethnic rivalries and resentment over Kikuyu dominance in politics and commerce have hindered national integration (AFC 2016). We cannot assume that the well­mixed assumption was met.

M­Pesa ­ Discussion

We now turn to actual M-Pesa data. From the M-Pesa statistics tool (Safaricom 2016), we have the following annual deposits in million Kenya Shillings (KSh): 2010: 382,786; 2011: 605,377; 2012: 808,567; 2013 (estimate): 967,925.

Because this data is monotonically increasing, we are justified in testing whether a sigmoidal model can fit it. We utilize the Richards model for this purpose. The Richards model is a flexible, four-parameter model that is able to fit the full range of sigmoidal shapes. The Richards model was introduced in 1959 in the context of plant growth (Richards 1959). It was recently applied to technology diffusion data (Marinakis 2012a). The model has been modified and reparameterized by several researchers. As modified by Sugden et al. (1981), the model is:

$$W_t = W_{\infty}\left[1 - (1-m)\,\exp\!\left(\frac{-k\,(t - T^{*})}{m^{m/(1-m)}}\right)\right]^{1/(1-m)}$$

where W_t is the weight or growth at time t, W_∞ is the asymptotic weight, k is the maximum relative growth rate per unit time, T* is the time to asymptote, and m is a shape parameter with the property that m^{1/(1-m)} is the relative weight at time T*. In application to the present study, W_∞ is the asymptotic amount of deposits, k signifies the maximum diffusion per unit time relative to the amount of deposits, T* is the time to asymptote, and m is a shape parameter.


The Richards model fits the data statistically significantly at Pr > F = 0.0035, with m = 11.6966, W_∞ = 1.955E13, T* = 2023 and k = 0.2000 (Figure 1). This sigmoidal curve indicates an underlying replicator dynamic update function. New product adoption was likely catalyzed through imitation, and future new product adoption will also likely be catalyzed through imitation. Future marketing should focus on group sales events that allow for imitation, such as roadshows and tent shows.
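As a rough consistency check, the sketch below evaluates the Sugden et al. (1981) parameterization at the reported parameter values and compares it with the published deposit figures. The unit conversion is our assumption: we read W_∞ in KSh while the deposit series is reported in million KSh.

```python
import numpy as np

def richards(t, W_inf, k, T_star, m):
    """Richards model in the Sugden et al. (1981) parameterization."""
    return W_inf * (1.0 - (1.0 - m) *
                    np.exp(-k * (t - T_star) / m**(m / (1.0 - m))))**(1.0 / (1.0 - m))

# Reported fit for the M-Pesa deposit series
W_inf, k, T_star, m = 1.955e13, 0.2000, 2023, 11.6966

years = np.array([2010, 2011, 2012, 2013])
observed = np.array([382_786, 605_377, 808_567, 967_925])   # million KSh

predicted = richards(years, W_inf, k, T_star, m) / 1e6      # KSh -> million KSh
# predicted roughly tracks the observed series, and at t = T* the curve
# sits at the fraction m**(1/(1 - m)) (about 0.8) of the asymptote W_inf,
# consistent with the shape-parameter property noted above.
```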

Figure 1. The Richards model fit to the M­Pesa deposits. The y­axis is deposits in Kenya Shillings. The x­axis is year.

Case 3. Fabrication & Assembly Innovation: Tata Nano

The Tata Nano, a small automobile that was the brainchild of Indian businessman Ratan Tata, was intended to serve as a safer substitute for two-wheeled family vehicles such as motorcycles and scooters (Ray and Ray 2011). To serve as a substitute, the Tata Nano needed to have a price that was roughly the same as that of motorcycles and scooters. Thus Ratan Tata stipulated that the price would need to be approximately 2,000 USD. Achieving such a price required completely redesigning not only the car but also the processes of designing and manufacturing a car. For example, Tata Motors outsourced approximately 80% of the component design and manufacturing to Indian firms, and used cheap local labor instead of expensive robots to assemble the automobiles (Ray and Ray 2011).

Initially, Tata Motors did not seek to introduce the Tata Nano to the Indians on their own terms. Tata Motors incorrectly assumed that the low end of the market would not be intimidated by large showrooms or a pay-first, drive-later booking model (Singh and Srivastava 2012). They did not make easy financing available. They chose to cut costs by not advertising through the most popular mass media channel of television. They also failed to position the Tata Nano as an aspirational product, though many of the customers turned out to be wealthy and were buying the car to show their national pride (Singh and Srivastava 2012).

Once the initial launch showed poor results, Tata Motors relaunched with low-key showrooms in both urban and rural locations. They made 0% financing available. They launched a roadshow across 104 towns in 5 states. They also sought, through advertisements, to reposition the Tata Nano as a family car (Singh and Srivastava 2012). The result was a double-digit percentage jump in sales.

Tata Nano ­ Assumption (iv): the population needs to be infinite (large sample size)

The Tata Nano was on sale to the entire Indian nation. The population as of July 2015 was estimated to be 1.2 billion (CIA India 2016). Of these, 755 million had an average wealth of less than 10,000 USD; 33 million had an average wealth of 10,000 to 100,000 USD; 2 million had an average wealth of 100,000 USD to 1 million USD; and 185,000 had an average wealth of more than 1 million USD (Statista 2016). We can assume that the sample size assumption was met.

Tata Nano ­ Assumption (i): The population is well mixed

Behera (2007) described India as comprising distinct regional identities that to some extent subsumed constituent ethnic identities, but that nevertheless possessed a Pan-Indian sense of a unified civilization. The unification is achieved as the various strands of culture from one region meet the strands from another region and the two coalesce. Behera (2007: 83-84)

further reports that India has been traditionally divided into five regions, where "each region has its own composition of ethnic and linguistic groups, religious communities and land-based jatis (caste). Each region also has its own specific pattern of economy, craft and trading practices, local history, psychological make-up and behavioural patterns." The diversity of India can be quantitatively appreciated through its linguistic diversity: Hindi 41%, Bengali 8.1%, Telugu 7.2%, Marathi 7%, Tamil 5.9%, Urdu 5%, Gujarati 4.5%, Kannada 3.7%, Malayalam 3.2%, Oriya 3.2%, Punjabi 2.8%, Assamese 1.3%, Maithili 1.2%, other 5.9% (CIA India 2016). In spite of this diversity, India is claimed to still possess a civilizational unity. A counterweight to this claim is the concentration of poverty in rural India in the "scheduled castes" and "scheduled tribes" (Gang et al. 2008). These groups are listed, or scheduled, in the Indian constitution as entitled to special preferential treatment. They respectively comprise 16.2% and 8.2% of the population, and 47.3% of India's rural poor. We cannot assume that the well-mixed assumption was met, but neither can we foreclose the possibility that it was.

Tata Nano ­ Discussion

The Tata Nano cumulative sales numbers are sigmoidal (Figure 2). The Richards model fits the data statistically significantly at Pr > F = 0.0001, with m = 0.8097, W∞ = 287,613, T∞ = 2009, and k = 0.2768 (Figure 2). This sigmoidal curve indicates an underlying replicator dynamic update function. The product appears to have reached market saturation. The automobile allegedly had an unsuccessful launch and a more successful relaunch, but it may rather be that the expectations were flawed. The launch may have successfully reached the only market that was ever going to purchase the automobile, and that market was not the millions of consumers at the BOP; it was the wealthier Indians purchasing the car as a matter of national pride.


Figure 2. Tata Nano diffusion. Sources: Tata Nano Wikipedia (2016), Modi (2016).


Conclusion

The purpose of this study was to identify a single model that could produce the varieties of dynamics observed in product adoption and diffusion field studies. The single model is replicator dynamics. Replicator dynamics produce sigmoidal curves when the sample size is large and the population is well­mixed. When the sample size is small and the population is spatially correlated, the replicator equations produce a variety of bizarre dynamics that qualitatively resemble the results of individual learning­based dynamics. These findings imply that replicator dynamics, hence imitation and biased cultural transmission, may lie at the core of all product adoption and diffusion. This in turn may imply that product adoption can be catalyzed by providing consumers with opportunities to imitate one another. This can be most effectively performed in traditionally appropriate public venues. In some BOP cultures, that means roadshows and tent shows.
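As an illustration of this contrast, the following minimal simulation (our own sketch with illustrative parameter values, not the simulation apparatus of the studies reviewed above) implements payoff-biased pairwise imitation in a well-mixed population; for large N the adoption fraction approximates the deterministic replicator trajectory, a smooth sigmoid, while for small N the trajectory is noisy and step-like:

import numpy as np

rng = np.random.default_rng(42)

def imitation_run(N, generations=200, s=0.5, seed_fraction=0.02):
    # Adopters earn payoff 1 + s, non-adopters earn 1. Each generation
    # consists of N imitation events: a focal individual compares itself
    # with a randomly drawn model and copies an adopting model with
    # probability s / (1 + s) (payoff-biased copying). The expected change
    # per generation is x(1 - x) * s / (1 + s), a discrete replicator
    # equation whose solution is a logistic (sigmoidal) adoption curve.
    adopters = max(1, int(seed_fraction * N))
    trajectory = []
    for _ in range(generations):
        for _ in range(N):
            focal_adopts = rng.random() < adopters / N
            model_adopts = rng.random() < adopters / N
            if model_adopts and not focal_adopts and rng.random() < s / (1 + s):
                adopters += 1
        trajectory.append(adopters / N)
    return np.array(trajectory)

large = imitation_run(N=5000)  # smooth sigmoid: large, well-mixed population
small = imitation_run(N=25)    # jumpy, step-like adoption dynamics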

The present study challenges the theoretical foundation of recent BOP product adoption studies. As discussed above, custom models of product adoption at the BOP are based on individual learning, and the relatively high variance explained by those models has validated that approach. Yet it was shown in the present study that the same mechanism, namely replicator dynamics, may underlie both large-sample-size (e.g., nation) and small-sample-size (e.g., IDP camp) field studies. This is a preliminary and tentative conclusion given our sample of three cases, and should be treated as a working hypothesis. It is possible that the resemblance between individual learning and small-sample-size replicator dynamics is superficial and lacking in significance. But it is also possible that individual learning originates in the mechanisms of imitation. Another possibility is that individual learning dynamics can dominate product adoption at small sample sizes, but that as sample sizes grow large, the effects of individual learning are made inconsequential by the dominant role of social imitation.

The present study also suggests that group identity may be one of the forces at work in product adoption and diffusion. As mentioned above, in normative conformity people conform to social rules to maintain and develop group identity. Conformity bias in product adoption may be a form of normative conformity. This would suggest that stove selection in


Sudanese IDP camps was occurring along tribal or clan divisions; it would also help explain why wealthy Indians were purchasing the Tata Nano and why poor Kenyans were using M-Pesa. In terms of the model, as mentioned above, replicator dynamics are not strong enough to suppress the effects of population structure. It is possible that the role of social imitation goes beyond product adoption. It may be a cultural phenomenon that affects other types of social behaviors, such as the origin and variability of the national cultural dimension of conformity or "tightness" (Harms and Groen 2016) and proenvironmental (green) behavior (Bamberg and Möser 2007, Harms and Linton 2015).

The limitations of this study include the fact that it was not a primary-source case study. The criterion for interpreting the findings was qualitative resemblance between the field study observations and the mathematical studies. Because field research results were directly compared with the results of mathematical simulations and analyses, the present study was necessarily a matter of qualitative research (Bazeley 2013, Flick 2009, Richards and Morse 2012). This was a limitation but not necessarily a detriment. "The advantage of the case study is that it can 'close in' on real-life situations and test views directly in relation to phenomena as they unfold in practice" (Flyvbjerg 2006: 235). However, a quantitative method for comparing the two sets of results would have required calculating some intermediate metric (e.g., something analogous to a Hausdorff dimension or a regression coefficient) for each result and then comparing metrics. In addition, we had only one case for each innovation type (M, F&A, P), and there were other differences between the cases, such as whether the population was spatially correlated. Thus we were unable to draw conclusions as to their effect on product adoption and diffusion. Future research may be directed towards this area.

There is apparently a dynamical equivalent of a physical phase transition (Marinakis 1994) in the replicator dynamics between the small and large sample size modes. Future research may be directed towards investigating this transition, as it may shed light on the nature of the corresponding transition in the social phenomenon. It remains an open question whether product adoption dynamics based on individual learning only superficially resemble the misfirings of a malfunctioning replicator equation, or whether individual learning has roots in imitation.


Literature Cited

Abdelnour, S. and O. Branzei. 2010. “Fuel­efficient stoves for Darfur: The social construction of subsistence marketplaces in post­conflict settings,” Journal of Business Research, 63, 617–629.

Abdelnour, Samer, and Akbar M. Saeed. 2014. "Technologizing Humanitarian Space: Darfur Advocacy and the Rape-Stove Panacea." International Political Sociology 8 (2): 145-163.

AFC. 2016. "Kenya-ethnic groups." Accessed July 23, 2016. https://www.africa.upenn.edu/NEH/kethnic.htm

Agnihotri, Arpita. 2013. "Doing good and doing business at the bottom of the pyramid." Business Horizons 56 (5): 591­599.

Alboszta, Jan, and Jacek Miekisz. 2004. "Stability of evolutionarily stable strategies in discrete replicator dynamics with time delay." Journal of theoretical biology 231 (2): 175­179.

Atkin, D. J., Hunt, D. S., & Lin, C. A. 2015. Diffusion Theory in the New Media Environment: Toward an Integrated Technology Adoption Model. Mass Communication and Society, 18(5): 623­650.

Baek, S. K., Durang, X., & Kim, M. 2013. “Faster Is More Different: Mean­Field Dynamics of Innovation Diffusion.” PloS one 8(7): e68583.

Bamberg, S. and Möser, G. 2007. “Twenty years after Hines, Hungerford, and Tomera: A new meta­analysis of psycho­social determinants of pro­environmental behavior.” Journal of Environmental Psychology 27(1): 14–25.

Banka, H. 2013. “M­PESA at the Point of Sale: Expanding Financial Inclusion & Reducing Demand for Physical Cash,” Journal of Payments Strategy and Systems, 7(4): 359­369.

Bazeley, Patricia. 2013. Qualitative data analysis: Practical strategies. Sage.

Behera, S. 2007. "Identities in India: Region, nationality and nationalism - A theoretical framework." Studies in Ethnicity and Nationalism 7 (2): 79-93.

Borkar, Vivek S., Sanjay Jain, and Govindan Rangarajan. 2004. "Collective behaviour and diversity in economic communities: Some insights from an evolutionary game." In The Application of Econophysics, pp. 330­337. Springer Japan.


Boyd, R. and P.J. Richerson. 1996. "Why culture is common, but cultural evolution is rare." Proceedings of the British Academy 88: 77-93.

Butt, B. 2015. “Herding by Mobile Phone: Technology, Social Networks and the “Transformation” of Pastoral Herding in East Africa,” Human Ecology 43 (1): 1­14.

Cartwright, Edward, and Myrna Wooders. 2014. "Correlated Equilibrium, Conformity, and Stereotyping in Social Groups." Journal of Public Economic Theory 16 (5): 743-766.

Chen, N. H., and Huang, S. C. T. 2016. “Domestic Technology Adoption: Comparison of Innovation Adoption Models and Moderators.” Human Factors and Ergonomics in Manufacturing & Service Industries, 26(2): 177­190.

Christensen, L.J., Enno Siemsen, and Sridhar Balasubramanian. 2015. "Consumer behavior change at the base of the pyramid: Bridging the gap between for-profit and social responsibility strategies." Strategic Management Journal 36: 307-317.

CIA India. 2016. “The world factbook (India).” Accessed July 23, 2016. https://www.cia.gov/library/publications/the­world­factbook/geos/in.html

CIA Kenya. 2016. "The world factbook (Kenya)." Accessed July 23, 2016. https://www.cia.gov/library/publications/the-world-factbook/geos/ke.html

Claidière, N. and A. Whiten. 2012. “Integrating the Study of Conformity and Culture in Humans and Nonhuman Animals.” Psychological Bulletin 138 (1): 126–145.

Davis, F. D. 1989. "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology." MIS Quarterly 13 (3): 319-339.

Davis, F.D., P.R. Bagozzi, and P. Warshaw. 1989. "User acceptance of computer technology: A comparison of two theoretical models." Management Science 35: 982-1003.

Debo, L.G. and S. Veeraraghavan. 2009. "Models of herding behavior in operations management." In Consumer-Driven Demand and Operations Management Models: A Systematic Study of Information-Technology-Enabled Sales Mechanisms, edited by Serguei Netessine and Christopher Tang. International Series in Operations Research & Management Science, Vol. 131.

de Matos, M. G., P. Ferreira, and D. Krackhardt. 2014. "Peer Influence in the Diffusion of iPhone 3G over a Large Social Network." MIS Quarterly 38 (4): 1103-1134.

DPMF. 2016. “A brief general profile on inequality in Kenya.” Accessed July 23, 2016. http://www.dpmf.org/dpmf/index.php?option=com_content&view=article&id=97:a­brief­gen eral­profile­on­inequality­in­kenya&catid=43:social­policy­development­and­governance­in­ kenya&Itemid=94


Eijkman, F., J. Kendall, and I. Mas. 2010. “Bridges to Cash: The Retail End of M­PESA.” Savings & Development 34 (2): 219­252.

Evans­Pritchard, E.E. 1940. "The Nuer of the Southern Sudan." In African Political Systems. M. Fortes and E.E. Evans­Pritchard, eds., London: Oxford University Press, 272­296.

Flåm, Sjur Didrik, and J. Morgan. 2004. "Newtonian mechanics and Nash play." International Game Theory Review 6 (2): 181­194.

Flick, Uwe. 2009. An introduction to qualitative research. Sage.

Flyvbjerg, Bent. 2006. "Five misunderstandings about case-study research." Qualitative Inquiry 12 (2): 219-245.

Foster, Christopher. 2014. "Does quality matter for innovations in low income markets? The case of the Kenyan mobile phone sector." Technology in Society 38: 119­129.

Frahm, Ole. 2012. "Defining the nation: National identity in South Sudanese media discourse." Africa Spectrum Jan.: 21­49.

Gang, I. N., Sen, K., and Yun, M. S. 2008. “Poverty in rural India: caste and tribe.” Review of Income and Wealth, 54(1), 50­70.

Gatkuoth, J.M. 1995. “Ethnicity and nationalism in the Sudan,” The Ecumenical Review 47 (2): 206­216.

Geroski, P. A. 2000. “Models of technology diffusion.” Research policy 29(4): 603­625.

Harms, R., and A. Groen. 2016. “Loosen up? Cultural Tightness and National Entrepreneurial Activity.” Technological Forecasting and Social Change. In press. doi:10.1016/j.techfore.2016.04.013.

Harms, R. and Linton, J.D. 2015. “Willingness to pay for eco­certified refurbished products.” Journal of Industrial Ecology 20(4): 893­904.

Henrich, J. 2001. “Cultural transmission and the diffusion of innovations: Adoption dynamics indicate that biased cultural transmission is the predominate force in behavioral change and much of sociocultural evolution,” American Anthropologist 103 (4): 992­1013.

Hilbe, Christian. 2011. "Local replicator dynamics: A simple link between deterministic and stochastic models of evolutionary game theory." Bulletin of mathematical biology 73 (9): 2068­2087.

Hofbauer, Josef, and Karl Sigmund. 2003. "Evolutionary game dynamics." Bulletin of the American Mathematical Society 40 (4): 479­519.


Huang, J.H., and Fen, C.Y. 2006. “Herding in online product choice,” Psychology & Marketing 23: 413­428.

Jebarajakirthy, Charles, Antonio Lobo, and Chandana Hewege. 2015. "Enhancing youth's attitudes towards microcredit in the bottom of the pyramid markets." International Journal of Consumer Studies 39 (2): 180­192.

Ji, Quan, and Wang Xian-Jia. 2013. "Some Analytical Properties of the Model for Stochastic Evolutionary Games in Finite Populations with Non-uniform Interaction Rate." Communications in Theoretical Physics 60 (1): 37.

Jok, Jok Madut. 2011. Diversity, unity, and nation building in South Sudan. US Institute of Peace.

Karnani, Aneel. 2007. "The mirage of marketing to the bottom of the pyramid: How the private sector can help alleviate poverty." California management review 49 (4): 90­111.

Khavul, S., and G. Bruton. 2013. “Harnessing innovation for change: Sustainability and poverty in developing countries.” Journal of Management Studies 50 (2): 285­306.

Knuuttila, T. 2011. “Modelling and representing: An artefactual approach to model­based representation.” Studies in History and Philosophy of Science Part A 42(2): 262­271.

Kramer, D. 2012. “Stove designed by US national lab improves lives in Darfur.” Physics Today 65 (6): 26­27.

Lal Dey, Bidit, Ben Binsardi, Renee Prendergast, and Mike Saren. 2013. "A qualitative enquiry into the appropriation of mobile telephony at the bottom of the pyramid." International Marketing Review 30 (4): 297­322.

Li, Jiawei, Graham Kendall, and Robert John. 2016. "Computing Nash equilibria and evolutionarily stable states of evolutionary games." IEEE Transactions on Evolutionary Computation 20 (3): 460­469.

Lin, C. A. 2003. “An interactive communication technology adoption model.” Communication Theory, 13(4): 345­365.

Linton, J., and S. Walsh. 2003. “From Bench to Business.” Nature of Materials 2: 287­289.

Madeo, Dario, and Chiara Mocenni. 2015. "Game interactions and dynamics on networked populations." IEEE Transactions on Automatic Control 60 (7): 1801­1810.

Marinakis, Y. D. 1994. “p­germanium links physical and dynamical phase transitions.” Physica A: Statistical Mechanics and its Applications, 209(3­4): 301­308.


Marinakis, Y. D. 2012. “Forecasting technology diffusion with the Richards model.” Technological Forecasting and Social Change, 79(1): 172­179.

Mas, I. and A. Ng’weno. 2010. “Three keys to M­PESA’s success: Branding, channel management and pricing,” Journal of Payments Strategy & Systems 4 (4): 352–370.

Matous, Petr, Yasuyuki Todo, and Ayu Pratiwi. 2015. "The role of motorized transport and mobile phones in the diffusion of agricultural information in Tanggamus Regency, Indonesia." Transportation 42 (5): 771­790.

Mertikopoulos, Panayotis, and Yannick Viossat. 2016. "Imitation dynamics with payoff shocks." International Journal of Game Theory 45 (1­2): 291­320.

Mintz, Lawrence E. 1985. "Standup comedy as social and cultural mediation." American Quarterly 37 (1): 71­80.

Modi, A. 2016. "Nano sticks to the slow lane." Accessed July 21, 2016. http://www.business-standard.com/article/management/nano-sticks-to-the-slow-lane-116011300323_1.html

Moran, B. 2015. "In South Sudan, camps offer no refuge from divisions and violence." Accessed July 20, 2016. http://internationalreportingproject.org/stories/view/in-south-sudan-camps-offer-no-refuge-from-divisions-and-violence

Nakata, C. and Weidner, K. 2012. “Enhancing New Product Adoption at the Base of the Pyramid: A Contextualized Model,” Journal of Product Innovation Management 29(1): 21–32.

Noll, D., C. Dawes, and V. Rai. 2014. “Solar community organizations and active peer effects in the adoption of residential PV.” Energy Policy 67: 330­343.

Paredis, E. 2011. "Sustainability transitions and the nature of technology." Foundations of Science 16 (2-3): 195-225.

Prahalad, C. K. 2006. The Fortune at the Bottom of the Pyramid. Pearson Prentice Hall.

Prahalad, C. K., and S.L. Hart. 2002. “The fortune at the bottom of the pyramid,” Strategy+Business, 26: 54­67.

ProAct Network. 2008. Assessing the Effectiveness of Fuel­efficient Stove Programming: A Darfur­Wide Review, CHF International.


Ramani, Shyama V., Shuan SadreGhazi, and Geert Duysters. 2012. "On the diffusion of toilets as bottom of the pyramid innovation: Lessons from sanitation entrepreneurs." Technological Forecasting and Social Change 79 (4): 676­687.

Ratcliff, Ryan, and Kokila Doshi. 2016. "Using the Bass Model to Analyze the Diffusion of Innovations at the Base of the Pyramid." Business & Society 55 (2): 271­298.

Ray, S. and P.K. Ray. 2011. “Product innovation for the people’s car in an emerging economy,” Technovation 31 (5­6): 216­227.

Richards, F. J. 1959. "A flexible growth function for empirical use." Journal of Experimental Botany 10 (2): 290-301.

Richards, Lyn, and Janice M. Morse. 2012. Readme first for a user's guide to qualitative methods. Sage.

Roca, Carlos P., José A. Cuesta, and Angel Sánchez. 2009. "Evolutionary game theory: Temporal and spatial effects beyond replicator dynamics." Physics of life reviews 6 (4): 208­249.

Rogers, Everett M. 2010 [1962]. Diffusion of innovations. Simon and Schuster.

Safaricom. 2016. “M­Pesa Statistics Tool.” Accessed July 21, 2016. http://www.safaricom.co.ke/personal/tools/m­pesa­tools/m­pesa­statistics

Sandholm, W. H. 2008. “Deterministic evolutionary dynamics.” New Palgrave Dictionary of Economics (2nd ed.). Palgrave Macmillan, New York.

Sarkar, J. 1998. “Technological diffusion: alternative theories and historical evidence.” Journal of economic surveys. 12(2): 131­176.

Schuster, Peter, and Karl Sigmund. 1985. "Towards a dynamics of social behaviour: Strategic and genetic models for the evolution of animal conflicts." Journal of Social and Biological Structures 8 (3): 255­277.

Sen, A. 1999. Development as freedom. Anchor Books.

Sesan, T. 2014. “What's cooking? Evaluating context­responsive approaches to stove technology development in Nigeria and Kenya.” Technology in Society 39: 142­150.

Seyal, A.H., Rahim, M.M., Turner, R. 2011. “Understanding the Behavioral Determinants of M­Banking Adoption: Bruneian Perspectives,” Journal of Electronic Commerce in Organizations 9(4): 22­47.

Sgroi, D. 2004. “Controlling the Herd: Applications of Herding Theory.” Cambridge Working Papers in Economics, No. 0106.


Singh, S., and P. Srivastava. 2012. “The Turnaround of Tata Nano: Reinventing the Wheel,” The Journal of Business Perspective 16 (1): 145­152.

Sivapragasam, Nirmali, Aileen Agüero, and Harsha de Silva. 2011. "The potential of mobile remittances for the bottom of the pyramid: findings from emerging Asia." info 13 (3): 91­109.

Statista. 2016. “Population distribution by average wealth per capita in India, as of 2015.” Accessed July 23, 2016. http://www.statista.com/statistics/482579/india­population­by­average­wealth

Sugden, Lawson G., E. A. Driver, and Michael C.S. Kingsley. 1981. "Growth and energy consumption by captive mallards." Canadian Journal of Zoology 59 (8): 1567­1570.

Tata Nano Wikipedia. 2016. “Tata Nano.” Accessed July 21, 2016. https://en.wikipedia.org/wiki/Tata_Nano

Taylor, Peter D., and Leo B. Jonker. 1978. "Evolutionary stable strategies and game dynamics." Mathematical biosciences 40 (1­2): 145­156.

Taylor, Christine, Drew Fudenberg, Akira Sasaki, and Martin A. Nowak. 2004. "Evolutionary game dynamics in finite populations." Bulletin of mathematical biology 66 (6): 1621­1644.

Taylor, Christine, and Martin A. Nowak. 2006. "Evolutionary game dynamics with non­uniform interaction rates." Theoretical population biology 69 (3): 243­252.

Tobbin, P. 2012. "Towards a model of adoption in mobile banking by the unbanked: a qualitative study." info 14 (5): 74-88.

Traulsen, A., Claussen, J. C., and Hauert, C. 2005. “Coevolutionary dynamics: from finite to infinite populations.” Physical review letters 95(23): 238701.

Traulsen, A., Claussen, J. C., and Hauert, C. 2006. “Coevolutionary dynamics in large, but finite populations.” Physical Review E 74(1): 011901.

Urmee, T. and Gyamfi, S. 2014. "A review of improved Cookstove technologies and programs." Renewable and Sustainable Energy Reviews 33: 625­635.

Van den Waeyenberg, Sofie, and Luc Hens. 2008. "Crossing the bridge to poverty, with low­cost cars." Journal of Consumer Marketing 25 (7): 439­445.

Venkatesh, V., and H. Bala. 2008. “Technology Acceptance Model 3 and a Research Agenda on Interventions.” Decision Science 39 (2): 273­312.

Venkatesh, V. and F.D. Davis. 2000. "A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies." Management Science 46: 186-204.


Viossat, Yannick. 2007. "The replicator dynamics does not lead to correlated equilibria." Games and Economic Behavior 59 (2): 397­407.

Wallace, C., and Young, H. P. 2015. “Stochastic Evolutionary Game Dynamics.” Handbook of Game Theory with Economic Applications 4: 327­380.

Waller, Gregory Albert. 1995. Main street amusements: Movies and commercial entertainment in a southern city, 1896-1930.

Walsh, S. T., Marinakis, Y. D., and Boylan, R. 2015. “Teaching case and teaching note systems equipment division at Ferrofluidics.” Technological Forecasting and Social Change 100: 29­38.

Wejnert, B. (2002). “Integrating models of diffusion of innovations: A conceptual framework.” Annual review of sociology 28:297­326.

Xia, C., Wang, J., Wang, L., Sun, S., Sun, J., & Wang, J. 2012. “Role of update dynamics in the collective cooperation on the spatial snowdrift games: Beyond unconditional imitation and replicator dynamics.” Chaos, Solitons & Fractals 45(9): 1239­1245.

Yair, G., (2008), “Insecurity, Conformity and Community: James Coleman’s Latent Theoretical Model of Action,” European Journal of Social Theory 11(1): 51–70.

Yang, Shih­Jui, Hsu, Ai­Chi, Lai, Show­Yen, and Lee, Chien­Chiang. 2015. “Empirical Investigation of Herding Behavior in East Asian Stock Markets toward the U.S. Market,” International Journal of Business and Finance Research 9(1): 19­32.

Yang, Yung-Kai, and Shu-Ling Wu. 2016. "In search of the right fusion recipe: The role of legitimacy in building a social enterprise model." Business Ethics: A European Review 25 (3): 327-343.

Yin, Robert K. 2013. Case study research: Design and methods. Sage.

Zhang, J. and Liu, P. 2006. "Rational Herding in Microloan Markets." Management Science 58 (5): 892-912.


4. THE PRECAUTIONARY PRINCIPLE: WHAT IS IT?, WHERE DID IT COME FROM?, HOW SHOULD WE USE IT?

Marinakis, Y.D., Rainer Harms, and Steven T. Walsh. Journal of International & Interdisciplinary Business Research.


The Precautionary Principle: what is it, where did it come from, how should we use it?

Abstract

Since being globalized through inclusion in several U.N. declarations and treaties in the 1980s and 1990s, the Precautionary Principle has become a flashpoint internationally among scholars working in the fields of risk, international environmental law, European Union law and even U.S. federal law. The controversy surrounding the Precautionary Principle apparently arose and persists because the Principle is undertheorized. We revisit three fundamental questions: what is the Precautionary Principle, where did it come from, and how should we use it. Because the Precautionary Principle is a legal tool that is used internationally to manage technology, a comprehensive discussion of it is organically international and interdisciplinary. We argue that the Precautionary Principle is an index of formative measures of risk and fear; that its origins should be investigated specifically in relation to the particular legal instrument in question; and that it should be utilized only as an indicator of public perception and not as a prescriptive risk management tool. Because the Precautionary Principle is currently utilized internationally as a prescriptive tool, our recommendation is both controversial and non-trivial.

Keywords: Precautionary Principle; formative measures ​


Introduction

The controversial Precautionary Principle is one answer to the question, "how should governments make decisions about (the fear of) uncertain technological risk?" It answers that question in its enigmatic triple-negative form, namely that the absence of rigorous proof of danger does not justify inaction (the non-preclusion Precautionary Principle; Stewart 2002). In practice, the slightly stronger and more substantive so-called weak form is usually invoked: the lack of scientific evidence does not preclude action if damage would otherwise be serious and irreversible (Mandel and Gathii 2006). The Precautionary Principle is particularly invoked in the commercialization of controversial new and emerging technologies that are widely considered to be characterized by unknown risk and uncertainty, such as nanotechnology (Stebbing 2009, Heselhaus 2010), genetically modified organisms (Giampietro 2002) and (Clarke 2005). The Precautionary Principle is not an obscure legal concept. Rather, it appears in European legislation applicable to technology, viz. in Article 23 (the Safeguard clause) of Directive 2001/18/EC on the release of genetically modified organisms (GMOs) at the European Community level, and in Article 34 of Regulation No. 1829/2003 on the consumption of GMOs as food or feed; and it has been invoked in technology cases before the European Court of Justice (Rogers 2011), such as in cases relating to food safety (Sadeleer 2006), plant fungicide (ECJ 2010), and the legal status of food containing trace amounts of genetically modified DNA (ECJ 2011). And at least one researcher has argued that the Precautionary Principle appeared in U.S. legislation and related case law starting in 1970 (Ashford 2007: 354, 361).

Since being globalized through inclusion in several U.N. declarations and treaties in the 1980s and 1990s, the Precautionary Principle has become a flashpoint internationally among scholars working in the fields of risk, international environmental law, European Union law and even U.S. federal law. On the one hand, the Precautionary Principle has been called a widespread guide to action in modern society (Furedi and Derbyshire 1997, Tudor 2003). On the other hand, it has been called incoherent (Peterson 2006), unprincipled (Marchant 2001) and even dangerous (Sunstein 2002). It has also been asserted that it doesn't matter whether the Precautionary Principle is incoherent, because the relevant question is whether there are contexts in which it makes sense to use it (Dana 2009). A consensus has not

been reached. Yet before being adopted, proposed new legal principles, rules and rights are usually rigorously discussed and tested. Plaintiffs bring novel cases in local courts; legal scholars argue positions through articles and amicus briefs. These proceedings can take decades. The judicial history of the U.S. Supreme Court's 2015 decision on same-sex marriage (Obergefell et al. v. Hodges, Director, Ohio Department of Health, et al., 576 U.S. (2015)) can be traced back to 1970 (Frost 2015). The same procedure applies to positive international law, or treaties. It took the United Nations' International Law Commission (ILC) more than twenty years to adopt, in 1994, a set of thirty-three draft articles on the Law of the Non-Navigational Uses of International Watercourses (Arcari 1997). To complement that work, in 2008 the ILC issued nineteen draft articles on transboundary aquifers (McCaffrey 2009). It is true that the United Nations Convention on the Law of the Non-Navigational Uses of International Watercourses is a codification of customary international law; and it is true that international law does not require a minimum duration of time for custom to ripen into law. But it still took the ILC and the international community more than twenty years to codify the customary law into positive law; and because the Precautionary Principle appears in treaties and statutes, we are concerned here with the process of making positive international law. Some have argued that the Precautionary Principle is part of customary international law (Cameron and Abouchar 1991, McIntyre and Mosedale 1997), but that is a minority position (Bodansky 1991, 1995).

The problem with the Precautionary Principle is that none of this ever happened. Somehow it managed to get into one international treaty, then another, then another. Precautionary thinking bubbled up in American federal legislation (Wagner 2000, Ashford 2007) and, as we argue below, in federal common law. Unease about the Precautionary Principle can be succinctly summarized: it is undertheorized, and that deficiency is amplified by its growing use in international, regional and national law. As with any other legal principle, we still need to understand its fundamental theoretical underpinnings, viz. its origin and place in law and society, and its ontology and epistemology, both as a legal principle and, more generally, as a form of innovation or new information.

In the present article we take a fresh look at the Precautionary Principle, with the intent of revisiting three fundamental questions: what is it, where did it come from, and how

should we use it. While we are aware that the answers to these three questions may appear to some as only loosely related, we assert that the questions themselves are tightly related. The three questions are also interdependent: we should manage technologies only with legal principles that we fully understand. Moreover, since the Precautionary Principle is a legal tool that is used to manage technology, a comprehensive discussion of it is inescapably interdisciplinary.

The present research lies at the confluence of risk management, psychometrics and sociology; international, regional and national law; formative measures; philosophy; and the history of mathematics. The present article should be of interest to those studying and applying the Precautionary Principle in legislative, judicial, regulatory and policy contexts. It should also be of interest to those working in the risk management field, as well as to anthropologists studying the ethnology of risk and fear. It may also interest sociologists studying the so-called rise of the risk culture.

The Precautionary Principle: what is it?

It has been asserted that uncertainty frames the Precautionary Principle (Francot-Timmermans and De Vries 2013). It has also been asserted that the Precautionary Principle is "based on fear of uncertainty" (Barnard and Morgan 2000: 112). We think both positions are correct. To illustrate, we turn to Slovic's (1987) seminal psychometric study on the perception of risk.

Slovic (1987) showed that lay people do not view risk in the same way that experts view risk. Whereas experts view risk as an index of associated estimated annual fatalities, lay people view risk as comprising other hazard characteristics (Slovic 1987: 283). Slovic's (1987) psychometric study parsed risk into two principal components. The first principal component, "dread risk," comprised hazards that were associated with perceived lack of control, dread (great fear or apprehension), catastrophic potential, fatal consequences, and the inequitable distribution of risks and benefits. This component was associated with, for example, nuclear technology. The second component, "unknown risk," comprised hazards


that were considered to be unobservable, unknown, new, and having delayed effects. This component was associated with, for example, chemical technologies.

There is an apparent resonance between Slovic's (1987) study and the Precautionary Principle. According, for example, to Todt and Luján (2014: 2164), the Precautionary Principle is meant to be invoked "whenever there are reasonable indications of possible important (highly damaging, irreversible, systemic, etc.) impacts on human health and the environment, even in the face of inconclusive data, lacunae in scientific knowledge, and doubts about the respective cause-and-effect relationships." Slovic's dread risk (perceived lack of control, dread, catastrophic potential, fatal consequences) maps onto Todt and Luján's "possible important (highly damaging, irreversible, systemic, etc.) impacts on human health and the environment." Slovic's unknown risk (unobservable, unknown, new, and having delayed effects) maps onto Todt and Luján's "inconclusive data, lacunae in scientific knowledge, and doubts about the respective cause-and-effect relationships" (Fig. 1). Similarly, Slovic's (1987) risk components can also be mapped onto Sandin's (1999) dimensions of the Precautionary Principle. Sandin (1999) recast the Precautionary Principle in four so-called dimensions: if there is (1) a threat, which is (2) uncertain, then (3) some kind of action (4) is mandatory. Dread risk relates to the threat dimension, and unknown risk relates to the uncertain dimension (Fig. 2).

Slovic's (1987) risk components → Todt and Luján's (2014) Precautionary Principle definition:

Dread risk ⇝ possible important impacts on human health and the environment
Unknown risk ⇝ inconclusive data, lacunae in scientific knowledge, doubts about cause and effect

Fig 1. Mapping of Slovic's (1987) risk components onto Todt and Luján's (2014) definition of the Precautionary Principle.


Slovic's (1987) risk components → Sandin's (1999) Precautionary Principle dimensions:

Dread risk ⇝ the Threat dimension (ontological)
Unknown risk ⇝ the Uncertain dimension (epistemological)
(no mapping) the Action dimension
(no mapping) the Command dimension

Fig 2. Mapping of Slovic's (1987) risk components onto Sandin's (1999) dimensions of the Precautionary Principle.

These mappings show that the Precautionary Principle is a layperson’s approach to risk management. Stated in terms of Slovic’s (1987) psychometrics, the Precautionary Principle tells judges, law­makers, policy makers, etc., that the layperson’s perception (i.e., the public’s perception) that something comprises an unknown risk is an acceptable reason for taking protective action. The Precautionary Principle apparently validates unknown risk as a motivation for governmental decision making. It apparently equalizes unknown risk with dread risk.

But the Precautionary Principle conflates risk­sensu­experts and risk­sensu­laypersons; thus these mappings also show that the Precautionary Principle is animated not only by risk but also by fear, because risk­sensu­laypersons comprises (in part) fear. Dread risk, as previously mentioned, comprises dread, a form of fear. Unknown risk comprises hazards considered to be unobservable, unknown, new, and having delayed effects. It is likely not a controversial statement to say that laypersons consider these unknown risks to be hazards because they fear them.

Sandin (1999: 892) also asserted that “the threat dimension concerns ontology, the uncertainty dimension concerns epistemology”; thus the mapping of Slovic’s (1987) risk components onto Sandin’s (1999) dimensions of the Precautionary Principle shows us that the Precautionary Principle comprises both ontological and epistemological components. The Precautionary Principle’s epistemology (Aven 2011, Carter and Peterson 2015,


Steglich­Petersen 2015, Carter and Peterson 2016) and ontology (Aven 2011) both have been studied. These philosophical investigations, while valuable, do not relate to the present discussion of psychometrics.

The preceding investigations uncovered the bifurcated onto­epistemic nature of the Precautionary Principle, as having a risk component and a fear (and/or fear of uncertainty) component. The Precautionary Principle expresses public fear. But to express public fear, first it must measure it. This leads us to consider formative measures.

Formative Measures of Risk and Fear

Researchers often seek to operationalize the Precautionary Principle by substituting specific terms for its general terms (e.g., Sandin 1999: 898), that is, by transforming the legal principle into a legal rule; but this is an overly simplistic approach to legal principles. Legal principles have far broader and deeper functions. It has been stated that "[p]rinciples differ from rules in the sense that rules can be more easily directly applied in individual cases, while principles give a general direction for a decision" (Verschuuren 2006: 237). This definition of principle adequately characterizes the Precautionary Principle. The Precautionary Principle gives a general direction for a decision. It does not provide a template for a substantive legal or scientific rule by which cases are adjudicated. It does not provide a template for a legal process. Principles "set the goals that have to be reached with (new) laws" (Verschuuren 2006: 238). What is the goal of the Precautionary Principle? To cause proactive measures to be taken, even in the face of uncertainty. It gives license to a decision maker to prescribe proactive measures in the face of uncertain but possibly substantial risk (Rogers 2011). It gives a decision maker license to measure the relevant fear of uncertainty, possessed by his constituents, that relates to the technology in question, and then to prescribe actions that are commensurate not with the risk but with the fear.

What underlies the Precautionary Principle is an index of formative measures that concretizes not only public perceptions of risk-sensu-experts but also public fear of uncertainty arising from the technology in question. Like other indexes, it comprises a

plurality of formative measures: independent variables that interact with one another to form a single organic or organic-like entity (vis-à-vis a mere aggregate). The formative measurement model is

η = ∑ γ_i x_i + ζ, i = 1 to n,

where γ_i is the contribution of x_i to the latent construct η, and ζ is the residual, such that the latent constructs are functions of the observables (Howell et al. 2007). The latent construct η is also called an index. Formative measurement models were introduced to applied statistics in 1962 (Curtis and Jackson 1962), but economists have been constructing indexes (index numbers) since the beginning of the Nineteenth Century (Boumans 2001).
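As a concrete illustration of the formative measurement model (the indicator values and weights below are invented for illustration; in practice the γ_i would be estimated, for example within a structural equation model):

import numpy as np

# Hypothetical formative indicators x_i (columns) for five respondents
# (rows), e.g., perceived catastrophic potential, novelty, delayed effects.
X = np.array([
    [0.8, 0.2, 0.5],
    [0.1, 0.9, 0.7],
    [0.6, 0.6, 0.2],
    [0.3, 0.1, 0.9],
    [0.9, 0.8, 0.4],
])
gamma = np.array([0.5, 0.3, 0.2])  # illustrative contributions gamma_i
zeta = 0.0                         # residual term, set to zero in this sketch

# eta = sum_i gamma_i * x_i + zeta: the observables form the latent index
eta = X @ gamma + zeta
print(eta)  # one index value per respondent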

The fundamental issue with indexes is the correspondence between theoretical concept and measurable phenomena (Hansen and Lucas 1984: 24). This invites what Zimmermann (2007: 51) calls an onto­epistemic inquiry, which requires a researcher to “visualize physical properties of systems (and hence also biological properties) as a result of human cognition which is initializing the modeling in the first place and defines some sort of specific disposition with respect to the world.” In other words, your choice of model determines how you perceive the world, and the world determines your choice of models. Because we aggregate components into indexes, the world seems to us to be full of indexes; and we construct indexes because their components appear to us to synergistically comprise organic unities.

The value of identifying the Precautionary Principle as an index of formative measures is that it allows us to characterize and address the criticisms of the Precautionary Principle in a philosophically fundamental way. The Precautionary Principle has been said to be incoherent (Peterson 2006), but it has also been said that it is nonetheless rational to use it in certain policy contexts (Dana 2009). Similar criticisms and positive statements have been made regarding indexes of formative measures: e.g., economic index numbers have been supported as pragmatic compromises (Boumans 2001), but they have also been criticized as being not theoretically sound (Edwards 2010). These opposing positions have not been reconciled, but their separate reference frames have been identified (Boumans 2001); and as any mathematician or physical scientist will tell you, once reference frames have been identified, it is often possible to find a


way to translate from one to the other. The reference frame of the critics of formative measures is the Axiomatization Movement.

The Axiomatization Movement

Indexes of formative measures are interpreted in two ways, according to respective schools of thought. The instrumental school of Fisher treats indexes as if they were empirical objects, and the axiomatic school treats indexes as formal axiom­based abstractions (Boumans 2001: 35). Under the instrumental approach, indexes such as the Precautionary Principle have an ontology because they exist separately from our ideas about them. Under the axiomatic approach, indexes have only an epistemology.

The Axiomatization Movement began with Pasch's publication in 1882 of Vorlesungen über neuere Geometrie (Pasch 2013 [1882]), in which he advocated grounding Euclidean geometry in more precise primitive notions and axioms. David Hilbert actually grounded Euclidean geometry in his own set of axioms in his Grundlagen der Geometrie (Hilbert 1899, 1902). He later discussed the broader intention of his program (Hilbert 1918), viz. that the axiomatic method will eventually render all of science subject to the mathematical method, which is important because he often said that every mathematical problem can be solved (Stöltzner 2015: 16).

The lack of axiomatization is now seen by proponents of axiomatization as indicating a lack of rigor (Boumans 2001: 315). Yet Hilbert’s program was more pragmatic (Peckhaus 2003). He spoke in an architectural metaphor, stating that his intent was to provide a method for a “deepening of foundations” (Peckhaus 2003: 145) where foundations were in question, thus enabling continued development of theory and system. Hilbert might have been chagrined to see that his attempts to provide sound logical foundations had the effect of sacrificing pragmatic empirical adequacy and relation to reality for logical consistency. The solution to this paradox lies in the life work of Fisher, who pragmatically sought compromises between incompatible requirements (Boumans 2001), i.e., not everything is a theory, so not everything should be assessed as a theory.


Other Risk Indexes of Formative Measures

The Precautionary Principle is not the only risk index of formative measures, and something may be learned from other risk indexes. Slovic's (1987) psychometric study treated risk as an index comprising formative measures. Rather than asking whether fear reflects certain symptoms such as anxiety, Slovic sought participant responses to hazards. He used hazards as independent variables to a function comprising participants' reactions to those hazards (Fig. 3). These components comprise formative measures; as such, they are indexes. More recently, Siegrist et al. (2005) applied the cognitive maps of hazard perceptions to data taken from individuals rather than to data of aggregated individuals. They verified that the two components also manifest in individuals, but they found variability in component loadings (for example, some individuals scored high on one component and low on the other), and these variations were correlated with general trust and general confidence, where general trust is the belief that other people can be relied on and general confidence is the conviction that everything is under control and uncertainty is low. General trust and general confidence apparently were also formative measures. It is not surprising that an index comprises a distribution rather than a single number. Stochastic index numbers are based on the assumption that indexes have distributions (Clements and Izan 1987). A more precise deployment of the Precautionary Principle, if feasible, would be based on a distribution of individual responses rather than a single index number calculated from aggregated data.

Hazards (nuclear, coal, pollution, biotechnology, nanotechnology, chemicals) → Risk Index: y = f(Hazards)

Fig. 3. Hazards as formative measures for a Risk Index.
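A minimal sketch of the mechanics behind such a psychometric analysis follows (the ratings are random placeholders, so the extracted components are meaningless here; real data would come from survey instruments such as Slovic's):

import numpy as np

rng = np.random.default_rng(1)

# Placeholder ratings: rows = hazards, columns = hazard characteristics
# (e.g., dread, controllability, novelty, delay of effects).
ratings = rng.random((30, 8))

# Principal component analysis via the SVD of the centered rating matrix.
centered = ratings - ratings.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

scores = centered @ Vt[:2].T           # each hazard's position on the first
explained = (S**2 / (S**2).sum())[:2]  # two components ("dread", "unknown")
print(explained)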


Finance provides us with a fear index of formative measures. The Chicago Board Options Exchange Volatility Index (VIX) has been called a proxy for investor sentiment where high values indicate that investors are fearful about future performance of the U.S. stock market (Escueda et al. 2015). “The CBOE Volatility Index® (VIX)® is based on the S&P 500® Index (SPX), the core index for U.S. equities, and estimates expected volatility by averaging the weighted prices of SPX puts and calls over a wide range of strike prices” (CBOE 2016 [webpage]). The publishing of indexes of public precautionary sentiment may prove to be a useful risk management tool.
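The model-free variance calculation that underlies VIX-style indexes can be sketched as follows (a stylized, single-expiry version with hypothetical quotes; the actual CBOE procedure interpolates across two expiries and filters quotes, so this is an illustration of the idea, not the published methodology):

import numpy as np

def vix_style_variance(strikes, option_prices, forward, r=0.0, T=30/365):
    # Strike-weighted average of out-of-the-money option prices:
    # var = (2/T) * sum(dK_i / K_i^2 * e^(rT) * Q(K_i)) - (1/T)*(F/K0 - 1)^2
    strikes = np.asarray(strikes, dtype=float)
    prices = np.asarray(option_prices, dtype=float)
    dK = np.gradient(strikes)               # strike spacing Delta K_i
    K0 = strikes[strikes <= forward].max()  # first strike at or below forward
    var = (2.0 / T) * np.sum(dK / strikes**2 * np.exp(r * T) * prices)
    var -= (1.0 / T) * (forward / K0 - 1.0) ** 2
    return var

# Hypothetical out-of-the-money quotes around a forward price of 100
sigma = np.sqrt(vix_style_variance(
    strikes=[80, 90, 100, 110, 120],
    option_prices=[0.5, 1.8, 4.0, 1.6, 0.4],
    forward=100.0))
print(f"VIX-style annualized volatility: {100 * sigma:.1f}")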

Another risk index of formative measures is risk factor epidemiology. In risk factor epidemiology, “various characteristics, including not only environmental variables but also physiological variables, habits, lifestyle, and so on, have been conceptualized as ‘risk factors’ for diseases” (Giroux 2015: 181). These “risk factors” are formative measures and “risk for disease” is their index. Criticisms of epidemiology resonate with those of formative measures, e.g., “the past 30 years of risk factor epidemiology have also presented us with a baffling and almost endless array of potentially causal observations” (Keyes and Galea 2015: 305); and “[a]s there are no underlying hypotheses for this kind of ‘research,’ beyond a general feeling that ‘diseases of civilization’ are caused by civilization, the method is based on ‘stabs in the dark’ (in Savitz’s terminology)” (Skrabanek 1994: 553, referring to Savitz 1994). This epidemiology would be more scientifically defensible if it were used in conjunction with the experimental method of hypothesis formation and testing. As it stands, epidemiology and other formative measure­based activities have more the look of creative expression than scientific investigation.
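To make that structure concrete, here is a toy sketch of a risk-factor index (synthetic data and invented effect sizes; in real epidemiological practice the coefficients would be estimated from observed cohort data):

import numpy as np

rng = np.random.default_rng(7)

# Synthetic cohort: intercept plus three hypothetical risk factors
# (e.g., smoking, inactivity, diet), each scaled to [0, 1].
n = 2000
X = np.column_stack([np.ones(n), rng.random((n, 3))])
beta_true = np.array([-2.0, 1.5, 0.8, 0.4])           # invented effect sizes
y = rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))  # simulated disease status

# Fit a logistic model by plain gradient ascent on the log-likelihood;
# the fitted linear predictor is the "risk for disease" index formed
# from the risk factors (the formative measures).
beta = np.zeros(4)
for _ in range(10000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (y - p) / n

risk_index = X @ beta    # each subject's risk-for-disease index
print(beta.round(2))     # should roughly recover beta_true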

The Precautionary Principle: where did it come from?

The Precautionary Principle is often said to have sprung from German environmental law like Athena from Zeus' forehead. We will now show that precautionary thinking appeared even earlier, at least as early as 1953, in the U.S. federal common law equitable remedy of the preliminary


injunction; and that it makes more sense to research the appearance of precautionary thinking in separate disciplines, as an indication of a widespread cultural movement (Hanekamp et al. 2005), than to search for a single origin. Along these lines, we are not saying that German law or international law descended from the American preliminary injunction. Rather, we are looking for strong precautionary thinking in modern law that preceded the German law, only to show that the German law was not the first. Neither are we saying that the standard preliminary injunction illustrates strong precautionary thinking. Instead we are showing the failure of a modern attempt to expand the standard preliminary injunction to encompass strong precautionary thinking. Moreover, to be clear and to not overstate our case, we use the term "precautionary thinking" to refer to legal principles that resemble the Precautionary Principle but that are not specifically referred to as the Precautionary Principle. The Precautionary Principle is often said to have first appeared in the guise of the Vorsorgeprinzip (foresight principle) in the 1970 German water protection law (Boehmer-Christiansen 1994, Raffensperger and Tickner 1999). Precautionary thinking also appeared around that time in Sweden's 1973 Act on Products Hazardous to Man or the Environment (Wahlström 1999). Why is the origin of the Precautionary Principle attributed by some to German law rather than to Swedish law? It has been noted that the Germans were the first to introduce precautionary thinking into an international declaration (McIntyre and Mosedale 1997). That early advocacy may have endowed Germany with the appearance of authoring the Principle.

What has hitherto escaped notice is the appearance of precautionary thinking in the U.S. federal common law preliminary injunction. A preliminary injunction restrains a party from going ahead with a course of conduct, or compels a party to continue with a course of conduct, until the case has been decided. According to the U.S. Supreme Court (Winter v. NRDC, Inc., 129 Sup.Ct. 365, 374 (2008)), "A plaintiff seeking a preliminary injunction must establish that he is likely to succeed on the merits, that he is likely to suffer irreparable harm in the absence of preliminary relief, that the balance of equities tips in his favor, and that an injunction is in the public interest." This legal standard resonates with the aforementioned so-called weak form of the Precautionary Principle, which states that the lack of scientific evidence does not preclude action if damage would otherwise be serious and irreversible.


The preliminary injunction dates back to at least the eighteenth century (Leubsdorf 1978); but that does not suggest an eighteenth century origin of the Precautionary Principle, because there is a critical difference between the two. The preliminary injunction requires a showing by the plaintiff that he or she is likely to succeed on the merits. The Precautionary Principle requires no such showing. However, an attempt was recently made to make the preliminary injunction more precautionary, and that attempt failed.

The Ninth Circuit United States Court of Appeals attempted to expand the federal preliminary injunction to protect plaintiffs seeking protection from the possibility of irreparable harm, but in Winter v. NRDC the U.S. Supreme Court struck it down (Bates 2011). The Ninth Circuit was issuing preliminary injunctions on only the possibility of irreparable harm to the plaintiff; it did not require the plaintiff to show the likelihood of irreparable harm. The Supreme Court held that "Issuing a preliminary injunction based only on a possibility of irreparable harm is inconsistent with our characterization of injunctive relief as an extraordinary remedy that may only be awarded upon a clear showing that the plaintiff is entitled to such relief" (Winter v. NRDC, Inc., 129 Sup.Ct. at 375-76). We traced the origins of the Ninth Circuit's "possibility of irreparable harm" test to at least its 1975 decision, William Inglis & Sons Baking Co. v. ITT Continental Baking Co., Inc., 526 F.2d 86 (C.A.9 (Cal.), 1975); that decision cited an earlier Ninth Circuit decision from 1972, Costandi v. AAMCO Automatic Transmissions, Inc., 456 F.2d 941 (9th Cir., 1972), but the language in the 1972 decision is not as clear as the language in the 1975 decision. In the 1975 case the Ninth Circuit also cited two Second Circuit decisions, from 1970 and 1953. The 1953 decision reads: "To justify a temporary injunction it is not necessary that the plaintiff's right to a final decision, after a trial, be absolutely certain, wholly without doubt; if the other elements are present (i.e., the balance of hardships tips decidedly toward plaintiff), it will ordinarily be enough that the plaintiff has raised questions going to the merits so serious, substantial, difficult and doubtful, as to make them a fair ground for litigation and thus for more deliberate investigation," Hamilton Watch Co. v. Benrus Watch Co., 206 F.2d 738, 740 (2 Cir. 1953). To this the judge appends several citations, but it is not necessary to trace the rule back further. We have shown that it pre-dates the 1970 German water protection law. The Ninth Circuit's preliminary injunction jurisprudence (i.e., based on only


the possibility of irreparable harm) suggests that the attempt to identify a single origin of the Precautionary Principle is misguided. Precautionary thinking pre-dates the rise of the risk society (Beck 1992), which allegedly characterizes "late modernity" beginning circa 1960.

It is also notable that the Supreme Court has not ruled on a related precautionary practice, one that is currently used by the Ninth Circuit and also by other federal Courts of Appeals. This is the so-called sliding scale/serious question test for preliminary injunction applications. This test states that a stronger showing on one element of the preliminary injunction may offset a weaker showing on a different element; e.g., a stronger showing of irreparable harm to a plaintiff might offset a lesser showing of likelihood of success on the merits. The serious question test is a narrow slice of the sliding scale test: a preliminary injunction will issue if serious questions going to the merits were raised (i.e., in favor of the plaintiff; the plaintiff presented a plausible case) and the balance of hardships also tips sharply in the plaintiff's favor (Alliance for the Wild Rockies v. Cottrell, 632 F.3d 1127, 1131-32 (9th Cir. 2011)). In contrast to the Ninth Circuit practice that was overturned in Winter v. NRDC, which relates to harm (a plaintiff need show only the possibility of irreparable harm), the serious question test turns on the plausibility of the plaintiff's case and the balance of hardships. This type of approach has been distinguished from the Precautionary Principle as a "precautionary approach" (Barnard and Morgan 2000: 116).

So where did precautionary thinking, and the Precautionary Principle, come from? The question is likely too vague and broad. It is more scholarly and less mystical to ask how it came to be in a specific legal instrument. For American legislation we would turn to the Congressional Record. For international treaties we would turn to the travaux préparatoires. Sociological explanations such as the risk society may help explain the diffusion of the innovation, but they do not help explain its origin.

The Precautionary Principle: how should we use it?

Epidemiological risk factors and other formative measures are criticized as stabs in the dark having no underlying hypotheses (e.g., Skrabanek 1994); but stabs in the dark can have value if their energy is directed. What is missing from the practice of indexes of formative measures is a disciplined follow-up to hypothesis generation, in which explanations are developed of how the formative measures are able to interact to form an organic whole. Even the Ansatz in mathematics must eventually be validated by the results it leads to. What proponents and opponents of formative measures might be able to agree to is an informal convention by which formative measures must be validated when used. Such informal conventions are not uncommon in science, e.g., the use of certain goodness-of-fit statistics for evaluating structural equation models, or the use of 0.01 and 0.05 as p-values. This approach could lead to a new role for the Precautionary Principle: as a tool for identifying social fears, which could then be evaluated using traditional risk assessment techniques.

Risks, according to Beck (1992 [1986]: 19-20), are the unintended consequences of the rapid development of science and technology in late modern (capitalist) society (Rasborg 2012); and these risks may be responsible for late modern society's distinctive culture of fear (Furedi and Derbyshire 1997, Glassner 2010, Hanekamp et al. 2005, Tudor 2003). But our responses to risks, such as our deployment of the Precautionary Principle, also have unintended consequences. For one, the Precautionary Principle validates and operationalizes the fear of uncertainty. But the fear of uncertainty can be a non-propositional fear (Davis 1987) in disguise, i.e., anxiety about technology and its possible but unknowable effects on us. Yet an uncertain future stimulates innovation. The uncertain future is a nothingness that acts against the present to form a creative potential. The great varieties of religion and other forms of innovative metaphysical speculation, for example, would be unimaginable without the influence of the uncertain province of death. Embracing the uncertain future is what gives us freedom: the uncertain future is a blank canvas upon which we paint our lives, limited only by our imagination and our desire to remain free. Fear of uncertainty is fear of freedom. Flight from uncertainty is flight from freedom. Fear of uncertainty stifles innovation. Fear of uncertainty leads us to rend the canvas upon which we would otherwise paint our desired future. Yet the fear of uncertainty also likely saved us, in evolutionary time, from predators, volcanoes and other natural hazards. The fear of uncertainty is the fear of freedom, but lack of fear of uncertainty is potentially suicidal, and that takes us back to the Precautionary Principle. The answer to this paradox, and to the question of the Precautionary Principle, is not to disregard fear; rather, once fear is identified, the answer is to face it head-on. Again, this suggests that the most valuable use of the Precautionary Principle may be to identify fears, which may then be subject to analytic assessment and action based on scientific judgment. What we are recommending is a dynamical process that iterates between fear and analysis. Previously we either felt fear or ignored it. We are suggesting that we feel fear and then analyze it.

Along these lines, usage of the Precautionary Principle should be restricted to its first two dimensions, threat and uncertainty, i.e., its ontology and epistemology. The flashpoints of the Precautionary Principle are its two final dimensions, Action and Command. Those two dimensions should remain within the province of traditional risk assessment. We should act not out of (fear of) uncertainty, but out of rationality and resolve. An example of the successful application of this method is Cerf and Condron's (2006) case study on how milk pasteurization heat treatment procedures were driven by the requirement to destroy Coxiella burnetii and thereby eradicate Q fever. It is still an open question whether Q fever is a foodborne disease and whether pasteurization is scientifically justified for the prevention of Q fever. However, rather than either eliminating milk from our diets or ignoring these fears, precautionary procedures were implemented; and now that more evidence exists, scholars such as Cerf and Condron are beginning to rationally question those precautionary procedures. The minimum actions were taken for what appeared to be self-preservation, but the scholarly community did not adopt epistemic closure with respect to the issue.

Conclusion

In the preceding, we argued that the Precautionary Principle is an index of formative measures of risk and fear. We mapped Todt and Luján's (2014) and Slovic's (1987) risk components onto the Precautionary Principle. This mapping showed that the Precautionary Principle is directed at the layperson's view of risk, i.e., at both dread risk and unknown risk. It also showed that the Precautionary Principle comprises both ontological (threat) and epistemological (uncertainty) components. This approach appears to have considerable explanatory power regarding both the Precautionary Principle and its criticisms.


It may also provide a path forward for opposing parties to work together to investigate, if not implement, the Precautionary Principle at a global level. Given the apparent desire by a large portion of society for such a principle, this potential path is notable. We suggested that the origins of the Precautionary Principle should be investigated specifically in relation to the particular legal instrument in question. We recommended that usage of the Precautionary Principle be restricted to its first two dimensions, threat and uncertainty; and that its two final dimensions, Action and Command, remain within the province of traditional risk assessment.


Literature Cited

Arcari, Maurizio. (1997). The draft articles on the law of international watercourses adopted by the International Law Commission: an overview and some remarks on selected issues. Natural Resources Forum 21(3), 169-179.

Ashford, Nicholas A. (2007). The legacy of the precautionary principle in US law: The rise of cost-benefit analysis and risk assessment as undermining factors in health, safety and environmental protection. In N. De Sadeleer (Ed.), Implementing the Precautionary Principle: Approaches from the Nordic Countries, the EU and the United States (352-378). London: Earthscan.

Aven, Terje. (2011). On different types of uncertainties in the context of the precautionary principle. Risk Analysis 31(10), 1515-1525.

Barnard, Robert C., and Morgan, Donald L. (2000). The National Academy of Sciences offers a new framework for addressing global warming issues. Regulatory Toxicology and Pharmacology 31(1), 112-116.

Bates, Bethany M. (2011). Reconciliation after Winter: The standard for preliminary injunctions in federal courts. Columbia Law Review 111(7), 1522-1556.

Beck, U. (1992 [1986]). Risk Society: Towards a New Modernity. Cambridge: Polity Press.

Bodansky, Daniel. (1991). Law: scientific uncertainty and the precautionary principle. Environment: Science and Policy for Sustainable Development 33(7), 4-44.

Bodansky, D. (1995). Customary (and not so customary) international environmental law. Indiana Journal of Global Legal Studies 3(1), 105-119.

Boehmer-Christiansen, Sonja. (1994). The precautionary principle in Germany: enabling government. In T. O'Riordan (Ed.), Interpreting the Precautionary Principle (31-39). London: Earthscan.

Boumans, Marcel. (2001). Fisher's instrumental approach to index numbers. History of Political Economy 33(5), 313-344.

Cameron, J., and Abouchar, J. (1991). The Precautionary Principle: A fundamental principle of law and policy for the protection of the global environment. BC Int'l & Comp. L. Rev. 14, 1-27.

Carter, J. Adam, and Peterson, Martin. (2015). On the epistemology of the precautionary principle. Erkenntnis 80(1), 1-13.

Carter, J. Adam, and Peterson, Martin. (2016). On the epistemology of the precautionary principle: Reply to Steglich-Petersen. Erkenntnis 81(2), 297-304.

CBOE. (2016). The VIX Index calculation. Retrieved from http://www.cboe.com/micro/vix/part2.aspx

Cerf, O., and Condron, R. (2006). Coxiella burnetii and milk pasteurization: an early application of the precautionary principle? Epidemiology and Infection 134(5), 946-951.

Clarke, S. (2005). Future technologies, dystopic futures and the precautionary principle. Ethics and Information Technology 7(3), 121-126.

Clements, K. W., and Izan, H. Y. (1987). The measurement of inflation: a stochastic approach. Journal of Business & Economic Statistics 5(3), 339-350.

Curtis, R. F., and Jackson, E. F. (1962). Multiple indicators in survey research. American Journal of Sociology 68(2), 195-204.

Dana, David. (2009). The contextual rationality of the Precautionary Principle. Queen's LJ 35(1), 67-95.

Davis, Wayne A. (1987). The varieties of fear. Philosophical Studies 51(3), 287-310.

ECJ. (2010). European Court of Justice: Judgment of the Court (Second Chamber) of 22 December 2010. Gowan Comércio Internacional e Serviços Lda v Ministero della Salute.

ECJ. (2011). European Court of Justice: Advocate General Yves Bot's Opinion in Case C-442/09. 9 February 2011.

Esqueda, Omar A., Luo, Yongli, and Jackson, Dave O. (2015). The linkage between the US "fear index" and ADR premiums under non-frictionless stock markets. Journal of Economics and Finance 39(3), 541-556.

Francot-Timmermans, Lyana, and De Vries, Ubaldus. (2013). Eyes wide shut: On risk, rule of law and precaution. Ratio Juris 26(2), 282-301.

Franklin, Sarah. (2006). The cyborg embryo: our path to transbiology. Theory, Culture & Society 23(7-8), 167-187.

Frost, Tom. (2015). The promise of liberty to all: The long march to marriage equality. Liverpool Law Review 36(2), 171-182.

Furedi, Frank, and Derbyshire, Stuart W.G. (1997). Culture of Fear: Risk Taking and the Morality of Low Expectation. BMJ-British Medical Journal-International Edition 315(7111), 823.

Giampietro, Mario. (2002). The precautionary principle and ecological hazards of genetically modified organisms. AMBIO: A Journal of the Human Environment 31(6), 466-470.

Giroux, É. (2015). Epidemiology and the bio-statistical theory of disease: a challenging perspective. Theoretical Medicine and Bioethics 36(3), 175-195.

Glassner, Barry. (2010). The Culture of Fear: Why Americans Are Afraid of the Wrong Things: Crime, Drugs, Minorities, Teen Moms, Killer Kids, Mutant Microbes, Plane Crashes, Road Rage, & So Much More. New York: Basic Books.

Hanekamp, J. C., Vera Navas, G., and Verstegen, S. W. (2005). The historical roots of precautionary thinking: the cultural-ecological critique and 'The Limits to Growth'. Journal of Risk Research 8(4), 295-310.

Hansen, B., and Lucas, E. F. (1984). On the accuracy of index numbers. Review of Income and Wealth 30(1), 25-38.

Heselhaus, S. (2010). Nanomaterials and the Precautionary Principle in the EU. Journal of Consumer Policy 33(1), 91-108.

Hilbert, D. (1899). Grundlagen der Geometrie: Festschrift zur Feier der Enthüllung des Gauss-Weber-Denkmals in Göttingen. Herausgegeben von dem Fest-Comitee. I. Theil. Teubner.

Hilbert, D. (1902). The Foundations of Geometry. Chicago: Open Court Publishing Company.

Hilbert, D. (1917). Axiomatisches Denken. Mathematische Annalen 78(1), 405-415.

Hintikka, J. (2011). What is the axiomatic method? Synthese 183(1), 69-85.

Howell, R. D., Breivik, E., and Wilcox, J. B. (2007). Reconsidering formative measurement. Psychological Methods 12(2), 205.

Keyes, K., and Galea, S. (2015). What matters most: quantifying an epidemiology of consequence. Annals of Epidemiology 25(5), 305-311.

Krause, D., Arenhart, J. R., and Moraes, F. T. (2011). Axiomatization and models of scientific theories. Foundations of Science 16(4), 363-382.

Leubsdorf, J. (1978). The standard for preliminary injunctions. Harvard Law Review 91(3), 525-566.

Mandel, G. N., and Gathii, J. T. (2006). Cost-benefit analysis versus the precautionary principle: Beyond Cass Sunstein's Laws of Fear. University of Illinois Law Review, 1037.

Marchant, G. E. (2001). The precautionary principle: an 'unprincipled' approach to biotechnology regulation. Journal of Risk Research 4(2), 143-157.

McCaffrey, S. C. (2009). The International Law Commission adopts draft articles on transboundary aquifers. The American Journal of International Law 103(2), 272-293.

McIntyre, O., and Mosedale, T. (1997). The Precautionary Principle as a norm of customary international law. Journal of Environmental Law 9, 221-241.

Pasch, M. (2013 [1882]). Vorlesungen über die neuere Geometrie: Mit einem Anhang von Max Dehn: Die Grundlegung der Geometrie in historischer Entwicklung (Vol. 23). Springer-Verlag.

Peckhaus, Volker. (2003). The pragmatism of Hilbert's programme. Synthese 137(1-2), 141-156.

Peterson, Martin. (2006). The precautionary principle is incoherent. Risk Analysis 26(3), 595-601.

Purnhagen, Kai. (2014). The behavioural law and economics of the precautionary principle in the EU and its impact on internal market regulation. Journal of Consumer Policy 37(3), 453-464.

Raffensperger, C., and Tickner, J. (1999). Introduction: to foresee and forestall. In C. Raffensperger and J. Tickner (Eds.), Protecting Public Health and the Environment: Implementing the Precautionary Principle (1-11). Washington, DC: Island Press.

Randall, Alan. (2009). I already have risk management - do I really need the Precautionary Principle? International Review of Environmental and Resource Economics 3(1), 39-74.

Rasborg, Klaus. (2012). '(World) risk society' or 'new rationalities of risk'? A critical discussion of Ulrich Beck's theory of reflexive modernity. Thesis Eleven 108(1), 3-25.

Rogers, Michael D. (2011). Risk management and the record of the precautionary principle in EU case law. Journal of Risk Research 14(4), 467-484.

Sadeleer, Nicolas de. (2006). The precautionary principle in EC health and environmental law. European Law Journal 12(2), 139-172.

Sandin, Per. (1999). Dimensions of the precautionary principle. Human and Ecological Risk Assessment: An International Journal 5(5), 889-907.

Savitz, David A. (1994). In defense of black box epidemiology. Epidemiology 5(5), 550-552.

Siegrist, Michael, Keller, Carmen, and Kiers, Henk A.L. (2005). A new look at the psychometric paradigm of perception of hazards. Risk Analysis 25(1), 211-222.

Skrabanek, Petr. (1994). The emptiness of the black box. Epidemiology 5(5), 553-554.

Slovic, Paul. (1987). Perception of risk. Science 236(4799), 280-285.

Stebbing, Margaret. (2009). Avoiding the trust deficit: public engagement, values, the precautionary principle and the future of nanotechnology. Journal of Bioethical Inquiry 6(1), 37-48.

Steglich-Petersen, A. (2015). The epistemology of the precautionary principle: Two puzzles resolved. Erkenntnis 80(5), 1013-1021.

Stewart, Richard B. (2002). Environmental regulatory decisionmaking under uncertainty. Research in Law and Economics 20, 71-126.

Stöltzner, Michael. (2015). Hilbert's axiomatic method and Carnap's general axiomatics. Studies in History and Philosophy of Science Part A 53, 12-22.

Sunstein, Cass R. (2002). The paralyzing principle. Regulation 25, 32-37.

Todt, Oliver, and Luján, José Luis. (2014). Analyzing precautionary regulation: do precaution, science, and innovation go together? Risk Analysis 34(12), 2163-2173.

Tudor, Andrew. (2003). A (macro) sociology of fear? The Sociological Review 51(2), 238-256.

Verschuuren, Jonathan. (2006). Sustainable development and the nature of environmental legal principles. Potchefstroom Electronic Law Journal/Potchefstroomse Elektroniese Regsblad 9(1).

Wagner, Wendy E. (2000). The precautionary principle and chemical regulation in the US. Human and Ecological Risk Assessment 6(3), 459-477.

Wahlström, B. (1999). The precautionary approach to chemicals management: a Swedish perspective. In C. Raffensperger and J. Tickner (Eds.), Protecting Public Health and the Environment: Implementing the Precautionary Principle (51-70). Washington, DC: Island Press.

Zimmermann, Rainer E. (2007). Topological aspects of biosemiotics. tripleC: Communication, Capitalism & Critique 5(2), 49-63.


5. INDEXES OF SERIOUS ADVERSE EVENTS REPORTED IN INTERVENTIONAL CLINICAL TRIALS

Marinakis, Yorgos D., Harms, R. and Walsh, S.T., submitted to Journal of Biomedical ​ Informatics


Indexes of serious adverse events reported in interventional clinical trials

ABSTRACT

Sector- and industry-wide indexes can help policy makers, regulators and law-makers engage in contextual monitoring, early trend detection, and forecasting. We constructed novel indexes comprising aggregates of percents of interventional clinical trials reporting serious adverse events, by intervention type, and by intervention type and phase, for the entire U.S. FDA clinical trials database (1997-2014). We demonstrated one use of these indexes by quantitatively testing them for longer term trends. The results, showing decreasing serious adverse events, suggest that the policies and regulations around clinical trials are efficacious. We also introduce the theoretical complexity of index numbers, and discuss them in terms of constructs of formative measurement and Iterated Function Systems. This research will be of interest to practitioners and researchers in the area of clinical research informatics, and to those engaged in evidence-based policy-making (EBPM).

Keywords: clinical research informatics; clinical trials; evidence­based policy­making ​ (EBPM); index numbers; serious adverse event


1. INTRODUCTION

Under U.S. federal regulation 21 CFR §312, medical interventions must be tested in a series of clinical trial phases before being submitted for approval for release to the U.S. market. Phase 1 is usually the first phase of testing on humans. According to the FDA (FDA 2015a), Phase 1 involves 20 to 100 healthy volunteers or people with the disease/condition, lasts several months, and has the purposes of studying safety and dosage. The FDA states that 70% of Phase 1 drugs progress to Phase 2. Phase 2 is often the first phase of testing how the intervention works on persons who actually suffer from the targeted disease (Petsko 2010). Phase 2 involves up to several hundred people with the disease/condition, lasts several months to 2 years, and has the purposes of testing efficacy and side effects. Phase 2 is sometimes divided into Phase 2a, focusing primarily on dosing, and Phase 2b, focusing primarily on efficacy (Petsko 2010). The FDA states that 33% of drugs progress to Phase 3. Phase 3 involves 300 to 3,000 volunteers who have the disease or condition, lasts 1 to 4 years, and has the purposes of determining efficacy and monitoring adverse reactions. The FDA states that 25-30% of drugs move on to submission and Phase 4. Usually two successful Phase 3 trials are required to obtain US or European regulatory approval (Petsko 2010). Phase 4 studies are done after the drug or treatment has been marketed, to gather information on its effects in various populations and on side effects associated with long-term use.
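As a rough back-of-the-envelope check on this funnel (our own multiplication of the FDA's stage-wise rates, assuming independence between stages), $0.70 \times 0.33 \times 0.275 \approx 0.064$: only about 6% of drugs entering Phase 1 would be expected to survive through to submission.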

Recent articles have reported studies on clinical trials data that were aggregated by intervention type (Behavioral, Biological, Device, Drug, etc.) and clinical trial phase (Phase 1, Phase 2, Phase 3). These articles demonstrate an interest in sector- and industry-wide indexes, but they do not explain or justify the use of aggregated data. It was recently reported, for example, that Phase II success rates for reported drug clinical trials, as a whole, fell from 28% in 2006-2007 to 18% in 2008-2009 (Arrowsmith 2011b); that the combined success rate at Phase III and submission for reported drug clinical trials, as a whole, has fallen to about 50% (Arrowsmith 2011b); and that 2010-2012 showed low Phase II success rates but improving Phase III success rates (Arrowsmith and Miller 2013). Knowledge of these success rates is useful "in assessing the impact of changes in development strategy and research area focus by the pharmaceutical industry" (Arrowsmith and Miller 2013: 569). They may even alert us to a looming crisis. It was asserted, for example, that use of a particular index number (a Divisia or Fisher ideal index) could have signaled the 2008 financial crisis (Barnett and Chauvet 2012).

In the present study, we constructed indexes comprising aggregates of percents of interventional clinical trials reporting serious adverse events, by intervention type, and by intervention type and phase, for 1997-2014. To demonstrate the utility of this tool, as an example we used the indexes to address the basic research question of whether clinical trials serious adverse events have been increasing or decreasing since the inception of the U.S. FDA database in 1997. Serious adverse events were of sufficient concern that Congress dealt with them in the Food and Drug Administration Amendments Act (FDAAA) of September 2007. Section 801 of the FDAAA requires a researcher, after September 2009 and for device clinical trials and drug clinical trials, to report all serious adverse events. Serious adverse events are events that result in death, that require inpatient hospitalization or the prolongation of hospitalization, that are life-threatening, or that result in persistent or significant disability or incapacity or in a congenital anomaly or birth defect (ClinicalTrials.gov 2013). This reporting requirement has been characterized as "among the most important elements of a clinical trial publication" (Sivendran 2014: 83).

The present research falls squarely within the discipline of biomedical informatics (AMIA 2016a). In particular, this research is classified as Clinical Research Informatics, which "involves the use of informatics in the discovery and management of new knowledge relating to health and disease. It includes management of information related to clinical trials and also involves informatics related to secondary research use of clinical data" (AMIA 2016b). In the Discussion we relate the relevance of this research to evidence-based policy-making (EBPM).

2. THEORETICAL BACKGROUND


We now describe in more detail the subject of our indexes, namely Serious Adverse Events. We also discuss the aggregates as indexes and as constructs of formative measures. Our purpose is to introduce the reader to the theoretical complexity of index numbers.

Serious Adverse Events

Federal regulation 21 CFR §312.32(a) provides relevant definitions for investigational new drug applications (INDs). Under 21 CFR §312.32(c)(1)(i) a researcher must report any suspected adverse reaction that is both serious and unexpected:

"Adverse event means any untoward medical occurrence associated with the use of a drug in humans, whether or not considered drug related…

"Serious adverse event or serious suspected adverse reaction. An adverse event or suspected adverse reaction is considered "serious" if, in the view of either the investigator or sponsor, it results in any of the following outcomes: Death, a life-threatening adverse event, inpatient hospitalization or prolongation of existing hospitalization, a persistent or significant incapacity or substantial disruption of the ability to conduct normal life functions, or a congenital anomaly/birth defect…

"Unexpected adverse event or unexpected suspected adverse reaction. An adverse event or suspected adverse reaction is considered "unexpected" if it is not listed in the investigator brochure or is not listed at the specificity or severity that has been observed..."

Under Section 801 of the Food and Drug Administration Amendments Act, these regulations apply only to medication and medical device clinical trials after September 2009.

Aggregates of Clinical Trials

Aggregates of clinical trials are troublesome but tantalizing figures. They are probably basic instances of what economists call index numbers. Research on index numbers is performed mainly by economists (e.g., Diewert 2008), such as in productivity indexes (e.g., Diewert 1992, Färe and Primont 2003, Fox 2012). Like index numbers, these aggregates are also constructs of formative measures, and they will possess the problems inhering in those constructs. Index numbers are deceptively simple. They are so much a part of our daily lives as "public numbers" (Porter 1995, Neiburg 2010) that we think we understand them, e.g., stock market indexes (Allen 1975, Fisher and Weaver 1992, Clements et al. 2006, Lan and Tan 2007) and consumer price indexes (Belter et al. 2005). But most of us really know only what they are used for, i.e., to enable the quantitative treatment of composite commodities (Hansen and Lucas 1984). Composite commodities, such as real wages and producer goods, are the totality of commodities that have certain characteristics in common (Hansen and Lucas 1984). How you define index numbers depends on which stream of thought you follow. The economic-theoretic stream derives index numbers from the consumer's utility function or the cost function, whereas Axiomatic Index Theory postulates indexes that must (1) be an average of the prices and quantities of the component commodities, and (2) pass the six analogue tests of identity, proportionality, commensurability, time reversal, factor reversal and circularity (Hansen and Lucas 1984, Clements et al. 2006). These conditions are designed to ensure that the index is a composite of the separate properties of its components, and that these composite properties behave like the component properties (Hansen and Lucas 1984); but this is the definition of formative measurement, in which "measures are combined to form weighted linear composites intended to represent theoretically meaningful concepts" (Edwards 2010: 371). Formative measures suffer from defects in dimensionality, internal consistency, identification, measurement error, construct validity, and causality (Edwards 2010). Yet despite these difficulties, we continue to construct index numbers (formative measures) and use them as benchmarks for some of the most important elements of economies, such as cost of living indexes, stock market indexes, consumer price indexes, producer price indexes, chemical plant cost indexes, Bureau of Labor Statistics indexes, etc.

Aggregates of clinical trials are formative measures because they comprise different study designs and different specific interventions (e.g., different drugs); as such they also suffer from many of the same defects as formative constructs. Not the least defect is that the aggregates channel conceptually distinct measures (different study designs, different drugs) into a single construct: "a construct that presumably carries relationships linking heterogeneous measures to distinct outcomes becomes a conceptual polyglot with no clear interpretation of its own" (Edwards 2010: 379). Economists have already noted these defects in their indexes, such as that formative measures assume no measurement error (Hansen and Lucas 1984). Interesting attempts to remedy the index number problem, such as using stochastic indexes to address the measurement error problem (Clements et al. 2006) and distinguishing index numbers from theory (Boumans 2001), still rely on formative measures. In the characteristic dismal style, two economists concluded that "we must accept the sad facts of life, and be grateful for the more complicated procedures economic theory devises" (Samuelson and Swamy 1974: 592, Hansen and Lucas 1984: 25). The number of studies of aggregates of clinical trials is growing, but the scope of these studies has hitherto been limited to specific conditions. One study, for example, aggregated pediatric cardiovascular trials (Hill et al. 2014). Another study reported an application that enables a researcher to perform statistical analyses over trials aggregated by a medical condition (He et al. 2015). The present study, in contrast, aggregates clinical trials over entire intervention types, and over phases within intervention types. This degree of aggregation results in such a mix of interventions and study designs that it invokes previous work on heterogeneous indicators, namely economic index theory and the theory of formative measures.

Index Numbers: Their Construction and Tests of Their Adequacy

Index numbers should behave like their constituents (Samuelson and Swamy 1974, Hansen and Lucas 1984), but the simplicity of this proposition betrays its complexity in practice. Fisher (1922) postulated that the composite should comprise an average of its constituents' quantities (and prices), and he then developed tests to identify which averages (there are 125 of them: arithmetic, geometric, harmonic, etc.) produce numbers (quantities and prices) that behave like those of a homogeneous good. Criticism of the internal inconsistency of these tests resulted in the subsequent development of Axiomatic Index Theory (Frisch 1930, Eichhorn 1973). These criticisms were part and parcel of the axiomatization movement that was sweeping through mathematics from 1900 to 1960 (Isaac 2012). This movement sought to remove all vestiges of intuition from science and to reconstruct science with axioms and definitions that led to lemmas and theorems. The problem with the axiomatic approach was that it elevated logical consistency over empirical utility, in effect divorcing mathematics from empiricism (Boumans 2001). Fisher's "instrumental approach" (Boumans 2001: 315), in contrast, was empirically driven. He treated the index problem literally as the design problem of an instrument. His proposal for (or design of) a price index, for example, involved a mechanical balance in which a weight at one end represented the money in circulation, a weight at the other end represented goods, the fulcrum represented the velocity of circulation, and the distance to the fulcrum represented the price (Boumans 2001: 325). Frisch (1936) criticized Fisher's physical model on the grounds that the only thing the money and the goods had in common was their weight. This criticism of incommensurability is reminiscent of the contemporary criticism of formative measures (Edwards 2010), which we discuss below.

Index numbers are commonly constructed from percentages. The Laspeyres index and the Divisia index, for example, both use the production share of a sector, where $Y_t$ is total industrial production, $Y_{i,t}$ is the production of industrial sector $i$, and $S_{i,t} = Y_{i,t}/Y_t$ is the production share of sector $i$ (Ang and Zhang 2000). Such percentage-based indexes readily satisfy the tests for gauging the adequacy of an index number. Among the most commonly used today are Fisher's (1922) time reversal test and Fisher's unit test (Diewert 1992). In the time reversal test, if the data for the base and current periods are interchanged, then the resulting index is the reciprocal of the original, i.e., $P_{01} \times P_{10} = 1$, where $P_{01}$ is the index for time 1 on time 0 and $P_{10}$ is the index for time 0 on time 1. To pass the unit test, the formula must be independent of the unit in which the quantities are quoted.
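To make the time reversal test concrete, the following minimal Python sketch (all prices and quantities are invented for illustration; none of these numbers come from the study) computes Laspeyres and Fisher ideal price indexes for two periods and checks whether $P_{01} \times P_{10} = 1$. The Fisher ideal index passes exactly, while the Laspeyres index generally does not.

```python
# Minimal sketch of Fisher's time reversal test on invented two-period data.
# Prices p and quantities q for three commodities at time 0 and time 1.
p0, p1 = [1.0, 2.0, 3.0], [1.1, 2.5, 2.7]
q0, q1 = [10.0, 5.0, 2.0], [9.0, 6.0, 2.5]

def laspeyres(pa, pb, qa):
    # Price index for period b on period a, using period-a quantities.
    return sum(pb_i * qa_i for pb_i, qa_i in zip(pb, qa)) / \
           sum(pa_i * qa_i for pa_i, qa_i in zip(pa, qa))

def paasche(pa, pb, qb):
    # Price index for period b on period a, using period-b quantities.
    return sum(pb_i * qb_i for pb_i, qb_i in zip(pb, qb)) / \
           sum(pa_i * qb_i for pa_i, qb_i in zip(pa, qb))

def fisher(pa, pb, qa, qb):
    # Fisher ideal index: geometric mean of Laspeyres and Paasche.
    return (laspeyres(pa, pb, qa) * paasche(pa, pb, qb)) ** 0.5

for name, product in [
    ("Laspeyres", laspeyres(p0, p1, q0) * laspeyres(p1, p0, q1)),
    ("Fisher",    fisher(p0, p1, q0, q1) * fisher(p1, p0, q1, q0)),
]:
    print(f"{name}: P01 x P10 = {product:.6f}")  # Fisher yields exactly 1.0
```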

Formative Measures

Measure theory comprises the theory of both reflective and formative measures. This dichotomy parallels the axiomatic-instrumental dichotomy in index theory. Just as Fisher's instrumental approach to indexes is seen as an empirical approach rooted in reality that was superseded by the axiomatization movement (Boumans 2001), so formative measures have been criticized as "based almost exclusively on practical or empirical examples," in contrast to the "rich psychometric history underlying reflective measurement" (Hardin and Marcoulides 2011: 759). It is arguable that indexes are simply constructs of formative measures. Therefore we now turn to the specifics of measure theory.


Reflective measures emanate from their constructs, whereas formative measures constitute their constructs (Hardin and Marcoulides 2011, Finn and Wang 2014). The traditional reflective measurement model is

$$x_i = \lambda_i \eta + \varepsilon_i,$$

where $i$ subscripts the indicators, $\lambda_i$ refers to the loading of the $i$th indicator on the latent trait $\eta$, and $\varepsilon_i$ represents the uniqueness and random error for the $i$th indicator. Thus the observable variables $x_i$ are functions of the latent trait $\eta$. The formative measurement model is

$$\eta = \sum_{i=1}^{n} \gamma_i x_i + \zeta,$$

where $\gamma_i$ is the contribution of $x_i$ to the latent construct $\eta$ and $\zeta$ is the residual. In contrast to reflective measures, the latent constructs are functions of the observables (Howell et al. 2007).
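The direction of causality in the two models can be illustrated with a small simulation. The Python sketch below uses invented loadings ($\lambda_i$) and weights ($\gamma_i$): reflective indicators are generated from a common latent trait, while a formative construct is assembled from heterogeneous observables plus a residual.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Reflective model: indicators x_i emanate from the latent trait eta.
eta = rng.normal(size=n)                      # latent trait
loadings = np.array([0.9, 0.7, 0.8])          # invented lambda_i
x_reflective = np.outer(eta, loadings) + rng.normal(scale=0.3, size=(n, 3))

# Formative model: the construct is assembled from the observables.
x_formative = rng.normal(size=(n, 3))         # heterogeneous measures
weights = np.array([0.5, 0.3, 0.2])           # invented gamma_i
zeta = rng.normal(scale=0.1, size=n)          # residual
eta_formative = x_formative @ weights + zeta

# Reflective indicators intercorrelate because they share a common cause;
# formative measures need not correlate with one another at all.
print(np.corrcoef(x_reflective, rowvar=False).round(2))
print(np.corrcoef(x_formative, rowvar=False).round(2))
```

The contrast in the two correlation matrices is the practical signature of the dichotomy: internal consistency is meaningful for reflective indicators but not for formative ones.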

Many of the criticisms against formative measures (Edwards 2010) are the same as those leveled against index numbers. For example, economists of the non-axiomatic type treat index numbers as empirical objects (Boumans 2001: 315), just as constructs of formative measures are treated as empirical objects. In historical context, the criticisms against formative measures can be seen as part of the axiomatization movement. Yet in Fisher's instrumental approach to index numbers, it is more important that an index have an empirical basis than that it have logical consistency with a set of axioms. The same may be said of formative measures. Moreover, what index number theory has, and formative measurement theory lacks, are tests such as Fisher's. Introducing tests for formative measures would shift the dispute from the validity of formative measures to the validity of the tests.

Iterated Function Systems

There is an unexplored conceptual model for both indexes and formative measures, namely the Iterated Function System (IFS). Informally, an IFS is a collection of independent functions for which, at every iteration, one of the functions is (usually) randomly chosen according to a probability schedule and the output from the previous step is fed into it, producing a fractal attractor. In other words, the collection of independent functions is analogous to the pre-indexed data or the formative measures, and the fractal attractor is analogous to an index or a construct, respectively. Thus we have a coherent mathematical system based on the same logical structure as indexes and constructs of formative measures.

Formally, an IFS (Hutchinson 1979, Falconer 2004, Barnsley 2014) is a discrete-time dynamical system represented by a finite set of maps $\{f_0, \ldots, f_n, \ldots, f_{N-1}\}$ on a state space $X$, where a trajectory of the IFS is a sequence of state-space points $\{x_0, \ldots, x_t, x_{t+1}, \ldots\}$ together with a regime sequence $\{n_0, \ldots, n_t, n_{t+1}, \ldots\}$, $n_t \in \{0, 1, \ldots, N-1\}$, such that $x_{t+1} = f_{n_t}(x_t)$ for all $t \in \mathbb{N}$. If the $f_n$ are contraction mappings, then the IFS is said to be hyperbolic, with a stable unique attractor $A$ (Alexander et al. 2012) that is a fractal (Zhang et al. 2008).
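As a concrete illustration, the following Python sketch runs the familiar "chaos game" for a standard textbook three-map hyperbolic IFS whose attractor is the Sierpinski triangle. The maps and probability schedule are illustrative choices, not anything estimated in this study.

```python
import random

# Three contraction maps f_n(x, y): move halfway toward a triangle vertex.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
probs = [1 / 3, 1 / 3, 1 / 3]  # uniform regime (probability) schedule

def step(state):
    # Randomly choose a regime n_t and apply f_{n_t} to the state x_t.
    vx, vy = random.choices(vertices, weights=probs)[0]
    x, y = state
    return ((x + vx) / 2.0, (y + vy) / 2.0)

state = (0.3, 0.3)            # arbitrary initial point x_0
trajectory = []
for t in range(50_000):
    state = step(state)
    if t > 100:               # discard the transient before the attractor
        trajectory.append(state)

# 'trajectory' now samples the fractal attractor (the Sierpinski triangle).
print(len(trajectory), trajectory[:3])
```

Each map here is a contraction with ratio 1/2, so the system is hyperbolic and the sampled points converge onto the unique fractal attractor regardless of the initial point.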

IFS attractors have been used to model a variety of phenomena. Models of objects have included horns, seashells and other natural forms (Stępień 2009); tree crowns (Collin et al. 2011); and vaporish objects (Sokol and Gentil 2015). The data stream from the IFS has been used to model sequences over an arbitrary finite number of symbols (Tiňo 1999) such as DNA (Jeffrey 1990); economies (La Torre et al. 2011); and the formation of episodic memory in the hippocampus (Yamaguti 2011).

3. MATERIALS AND METHODS

Construction of the indexes of Serious Adverse Events was a multi-step process. It required piecing together different tables in the database, determining criteria for suitable data, filtering the data, spot checking the results, constructing the indexes, graphing the results and performing the statistical analyses.

Data and Data Manipulation

Clinical trials data were downloaded from AACT (Tasneem et al. 2012, AACT 2015). This dataset comprises data downloaded by AACT from ClinicalTrials.gov on September 27, 2015 and then placed by AACT into SAS CPORT transport files. The Food and Drug Administration Modernization Act of 1997 (21 USC 31) mandated that the National Institutes of Health (NIH) establish a registry of clinical trials information for both federally and privately funded trials conducted under investigational new drug applications to test the effectiveness of experimental drugs for serious or life-threatening diseases or conditions. This led to the creation of the website ClinicalTrials.gov. Section 801 of the FDA Amendments Act of 2007 further requires, inter alia, the submission of basic results for certain clinical trials, generally no later than 1 year after their Completion Date. The submission of adverse event information ("adverse events") was required beginning in September 2009.

We now describe our data manipulation. The dataset comprises nine intervention types, Phases 0 through 4, adverse events for all nine intervention types, and more (Table 1). Phase 0 studies are exploratory studies involving very limited human exposure to the drug, with no therapeutic or diagnostic goals; Phase 1/Phase 2 trials are a combination of Phases 1 and 2; and so forth (FDA 2015b). For the present study, clinical trial entries that met the following criteria were selected: Phase ≠ "N/A," Completion Date ≠ " ," Overall Status = "Completed," Enrollment Type = "Actual," Study Type = "Interventional," Intervention Name ≠ "Placebo" or "placebo," Sponsor Type = "Lead Sponsor," Outcome Type = "Primary," and Event Type = "Serious." The NCT ID is a unique identification code given to each clinical study registered on ClinicalTrials.gov. Many NCT IDs had several entries, and often there were several primary outcomes; we counted these as separate entries. Only data after 1996 were used because, as stated above, that is the year the Food and Drug Administration Modernization Act mandated that the NIH create the clinical trials registry database. Processed data were spot-checked against www.clinicaltrials.gov. The entire data dictionary is available at ClinicalTrials.gov. The index numbers presented here are percentages; as such they easily satisfy the time reversal test and the unit test (Fisher 1922, Diewert 1992).
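A minimal sketch of this filtering logic is given below in Python. The flat file and column names are simplified placeholders, not the actual AACT schema (the real data are spread over multiple SAS transport tables), so this illustrates the selection criteria rather than reproducing our pipeline.

```python
import pandas as pd

# Hypothetical pre-joined AACT export; file and column names are invented.
df = pd.read_csv("aact_joined.csv")

mask = (
    (df["phase"] != "N/A")
    & df["completion_date"].notna()
    & (df["overall_status"] == "Completed")
    & (df["enrollment_type"] == "Actual")
    & (df["study_type"] == "Interventional")
    & ~df["intervention_name"].str.lower().eq("placebo")
    & (df["sponsor_type"] == "Lead Sponsor")
    & (df["outcome_type"] == "Primary")
    & (df["event_type"] == "Serious")
)
selected = df.loc[mask].copy()

# Keep only trials starting after 1996, the year the registry was mandated.
selected["start_year"] = pd.to_datetime(selected["start_date"]).dt.year
selected = selected[selected["start_year"] > 1996]
```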

The Aggregates

As stated above, the clinical trials aggregates are formative measures, and as such they suffer from the deficiencies of formative measures. To illustrate (Table 1), consider NCT ID 1920854 and NCT ID 1989169. NCT ID 1920854 involved a test of soluble ferric pyrophosphate, which is used for iron replacement therapy in dialysis patients. NCT ID 1989169 involved Midazolam, which is a benzodiazepine muscle relaxant. Soluble ferric pyrophosphate and Midazolam have completely different chemical structures and work via different mechanisms, yet because both clinical trials entered Phase 1 in the same year and because both reported serious adverse events, both comprise part of the 2013 Drug Phase 1 data point (Figure 1e). Consider NCT ID 1844388 (Table 1). This was a Phase 3 study to test a carboxymethyl cellulose sodium based eye drop solution. Because this trial started in 2013 and because it reported a serious adverse event, together with the two aforementioned trials it comprises part of the 2013 data point for drug tests/all phases (Fig. 2e). Consider NCT ID 1867021 (Table 1). This was a Phase 4 test of Agriflu, an influenza virus vaccine for intramuscular injection. Because the trial started in 2013 and reported a serious adverse event, together with the three aforementioned trials it comprises part of the 2013 data point for all intervention types and phases combined (Fig. 3). In each of the foregoing cases (points for 2013 in Figs. 1e, 2e, 3), the resultant data point is a "conceptual polyglot with no clear interpretation of its own" (Edwards 2010: 379).

As bad as this situation appears, it is no worse than the S&P 500 index. That index brings together entirely different companies that meet liquidity requirements and that are publicly listed on certain exchanges. In alphabetical order, the first 10 of these companies include: 3M Company, an industrial conglomerate out of St. Paul, Minnesota; Abbott Laboratories, a health care equipment and services company out of North Chicago, Illinois; Abbvie, a pharmaceuticals company out of North Chicago, Illinois; Accenture plc, an IT consulting and other services company out of Dublin, Ireland; ACE Limited, a property and casualty insurance company out of Zurich, Switzerland; Activision Blizzard, a home entertainment software company out of Santa Monica, California; Adobe Systems Inc., an application software company out of San Jose, California; ADT Corp., a diversified commercial services company out of Boca Raton, Florida; Advance Auto Parts, an automotive retail company out of Roanoke, Virginia; and AES Corporation, an independent power producers & energy traders company out of Arlington, Virginia. It can be argued that the incongruity of the clinical trials aggregates is no worse than that of these first 10 S&P 500 companies.

NCT ID: 1920854. Adverse event: Serious. Lead Sponsor: Rockwell Medical Technologies, Inc. Agency Class: Industry. Intervention Type: Drug. Phase: 1. Intervention Name: Soluble ferric pyrophosphate. Description: [None]. Brief Title: A Single Ascending Dose Study of Soluble Ferric Pyrophosphate Administered Intravenously in Healthy Volunteers. Study Design: Allocation: Randomized, Endpoint Classification: Pharmacokinetics Study, Intervention Model: Parallel Assignment, Masking: Double Blind (Subject, Investigator, Outcomes Assessor), Primary Purpose: Treatment. Arms: 2. Enrollment: 48. Genders: Both. Accepts Healthy Volunteers: Yes. Age: 18-65. Dates: June 2013 to September 2013.

NCT ID: 1989169. Adverse event: Serious. Lead Sponsor: Shire. Agency Class: Industry. Intervention Type: Drug. Phase: 1. Intervention Name: Midazolam. Description: [None]. Brief Title: Pharmacokinetic Interaction Between SSP-004184 (SPD602) and Midazolam in Healthy Adult Subjects. Study Design: Allocation: Randomized, Endpoint Classification: Pharmacokinetics Study, Intervention Model: Crossover Assignment, Masking: Open Label. Arms: 2. Enrollment: 30. Genders: Both. Accepts Healthy Volunteers: Yes. Age: 18-65. Dates: November 2013 to December 2013.

NCT ID: 1844388. Adverse event: Serious. Lead Sponsor: Allergan. Agency Class: Industry. Intervention Type: Drug. Phase: 3. Intervention Name: Carboxymethylcellulose sodium based eye drop solution. Description: 1-2 drops carboxymethyl cellulose sodium based eye drop solution (REFRESH CONTACTS?) in each eye, a minimum of 4 times a day for 90 days. One of the 4 times a day may be to prepare the contact lens for insertion. Brief Title: A Study to Compare a New Eye Drop Formulation With Refresh Contacts? Study Design: Allocation: Randomized, Endpoint Classification: Safety/Efficacy Study, Intervention Model: Parallel Assignment, Masking: Double Blind (Subject, Investigator, Outcomes Assessor), Primary Purpose: Treatment. Arms: 2. Enrollment: 365. Genders: Both. Accepts Healthy Volunteers: Yes. Age: 18-N/A. Dates: April 2013 to December 2013.

NCT ID: 1867021. Adverse event: Serious. Lead Sponsor: Novartis. Agency Class: Industry. Intervention Type: Biological. Phase: 4. Intervention Name: Biological: Agriflu (TIV). Description: [None]. Brief Title: Immunogenicity, Safety and Tolerability of a Trivalent Subunit Inactivated Vaccine in Healthy Subjects 50 Years and Above. Study Design: Allocation: Randomized, Endpoint Classification: Safety/Efficacy Study, Intervention Model: Parallel Assignment, Masking: Double Blind (Subject, Investigator, Outcomes Assessor), Primary Purpose: Prevention. Arms: 2. Enrollment: 2902. Genders: Both. Accepts Healthy Volunteers: Yes. Age: 50-N/A. Dates: May 2013 to December 2013.

Table 1. Examples of clinical trials that were aggregated into constructs.

Logistic Regression

Index numbers are commonly analyzed for trends. This is often done by visual inspection, where trend lines are drawn manually, as in the technical analysis of stock indexes (Marshall et al. 2009). This approach in effect treats time as the independent variable and the index number as a linear combination of time. In the present study we used logistic regression to identify trends in the clinical trial data aggregates.

The percents of clinical trials reporting adverse events were calculated by intervention type and phase, by intervention type alone, and for all intervention types and phases combined. We then performed logistic regressions on the percents (dependent variable) versus the year of the start date (independent variable) of the respective clinical trials, in order to detect the presence of increasing or decreasing trends.

Logistic regression is used to determine the impact of an independent variable on a binomial dependent variable. It has been used on time series when time is a proxy, such as for the impact of traffic cameras on speeding violations (Vanlaar et al. 2014) or the efficacy of a new drug for the treatment of severe chronic constipation (Choi et al. 2005). In our case, time is a proxy for competencies relating to adverse events. The plausibility of our proxy requires no more suspension of disbelief than in the cases of the two cited examples (Choi et al. 2005, Vanlaar et al. 2014), or for that matter in the cases of many if not all formative measures.

Logistic regression was utilized because percent data do not satisfy the assumptions for relevant statistical tools such as Analysis of Variance (ANOVA), namely that the data are normally distributed and that they are free to vary around the mean without limit (Long 1997). The arcsine transform is commonly used to transform percent data into normally distributed data, but logistic regression provides greater interpretability and higher power with binomial data such as ours (Warton and Hui 2011, Shi et al. 2013). Our data are binomial because they are analogous to the percent of correct answers on a test; i.e., the aggregates comprise the percent of "correct answers" (clinical trials that reported adverse events) for intervention type and phase combinations. We tested the significance of these results with Wald Chi Square values and with Maximum Likelihood Chi Square values.
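The trend test can be sketched as follows in Python with statsmodels; the counts below are invented and the column names hypothetical (our own processing worked from the AACT SAS transport files). A binomial GLM with a logit link regresses the proportion of trials reporting serious adverse events on the year of the start date; the sign of the year coefficient gives the direction of the trend, and its Wald test plays the role of the Wald Chi Square reported in Tables 2 and 3.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical aggregated counts: one row per start year for a given
# intervention type and phase (all numbers invented for illustration).
agg = pd.DataFrame({
    "year":        [2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012],
    "with_sae":    [12, 15, 11, 14, 10, 9, 8, 7],     # trials reporting SAEs
    "without_sae": [40, 42, 50, 55, 60, 66, 70, 75],  # trials not reporting
})

# Binomial GLM (logistic regression): proportion reporting SAEs vs. year.
X = sm.add_constant(agg["year"])
y = agg[["with_sae", "without_sae"]]   # (successes, failures) per year
model = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# A negative year coefficient indicates a decreasing trend; its Wald
# z-test p-value corresponds to the Pr>ChiSq columns in the tables.
print(model.summary())
```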

4. RESULTS

We produced and graphed three groups of Serious Adverse Event indexes. First, we produced indexes separately for each of the nine intervention types and separately for each of the Phases 0 through 4, with year as the independent variable (x­axis). We also produced and graphed indexes separately for each of the nine intervention types but aggregating all phases, with year as the independent variable. Finally we produced and graphed an index aggregating all intervention types and all phases, with year as the independent variable. Because of the number of graphs, they are not presented here. Instead, below we present a subset of graphs in the context of an example research question.

Example research question: whether clinical trials serious adverse events are increasing or decreasing since the inception of the U.S. FDA database in 1997

As an example application of these indexes, we asked whether clinical trials serious adverse events have been increasing or decreasing since the inception of the U.S. FDA database in 1997. We tested the indexes for statistically significant trends using logistic regression. The logistic regressions showed that only nine intervention type-phase combinations had statistically significant relationships with the year of the clinical trial start date (Table 2, Figures 1a-1i). These nine were Behavioral Phase 1/Phase 2, Biological Phase 2, Device Phase 3, Device Phase 4, Drug Phase 1, Drug Phase 1/Phase 2, Drug Phase 3, Other Phase 2, and Procedure Phase 4. In seven of the nine, the percent of clinical trials reporting adverse events decreased over time, as indicated by the sign of the Estimate for Start Date Year. As stated above in the Theoretical Background, under the subsection on Serious Adverse Events, Section 801 of the Food and Drug Administration Amendments Act applied only to medication and medical device clinical trials after September 2009. Therefore we would not be surprised to see adverse events reported for devices and drugs increasing after 2009. But the only statistically significant increasing trends were Behavioral Phase 1/Phase 2 (Table 2, Fig. 1a) and Device Phase 3 (Table 2, Fig. 1c).


Intervention Type     Phase              Estimate    Pr>ChiSq (Wald)    R-Square
Behavioral            Phase 1             0.2383      0.2327             0.1324
Behavioral            Phase 1/Phase 2     0.5081      0.0303*            0.4117
Behavioral            Phase 2             0.1307      0.3316             0.0604
Behavioral            Phase 2/Phase 3     0.21        0.1819             0.1788
Behavioral            Phase 3             0.3692      0.1198             0.2724
Behavioral            Phase 4             0.1941      0.3719             0.083
Biological            Phase 1             0.0593      0.7218             0.0088
Biological            Phase 1/Phase 2     0.2793      0.073              0.2383
Biological            Phase 2            -0.2422      0.0261*            0.244
Biological            Phase 2/Phase 3    -0.7902      0.1225             0.3645
Biological            Phase 3            -0.1376      0.1022             0.0991
Biological            Phase 4             0.0205      0.874              0.0017
Device                Phase 0             0.0726      0.8772             0.004
Device                Phase 1            -0.1258      0.5567             0.047
Device                Phase 1/Phase 2     0.1183      0.7587             0.0142
Device                Phase 2            -0.2393      0.1356             0.1593
Device                Phase 2/Phase 3    -0.0824      0.7609             0.0101
Device                Phase 3             0.3834      0.0102*            0.4179
Device                Phase 4            -0.2961      0.0233*            0.283
Dietary Supplement    Phase 1            -0.5675      0.198              0.3483
Dietary Supplement    Phase 1/Phase 2     0           1                  0
Dietary Supplement    Phase 2             0.4172      0.3156             0.1431
Dietary Supplement    Phase 2/Phase 3    -0.3804      0.1745             0.3278
Dietary Supplement    Phase 3            -0.0318      0.8662             0.0025
Dietary Supplement    Phase 4            -0.0451      0.8569             0.0042
Drug                  Phase 0            -6.7313      0.7461             0.5936
Drug                  Phase 1            -0.2233      0.0366*            0.2178
Drug                  Phase 1/Phase 2    -0.4238      0.0032*            0.5191
Drug                  Phase 2            -0.0782      0.3717             0.0358
Drug                  Phase 2/Phase 3     0.1522      0.23               0.0895
Drug                  Phase 3            -0.1715      0.0485*            0.1636
Drug                  Phase 4            -0.1048      0.2298             0.0502
Genetic               Phase 4            -0.15        0.8459             0.0094
Other                 Phase 0             0.4584      0.4144             0.1134
Other                 Phase 1             0.22        0.5288             0.0539
Other                 Phase 1/Phase 2     1.1184      0.1012             0.5125
Other                 Phase 2            -0.3632      0.0299*            0.295
Other                 Phase 2/Phase 3    -0.572       0.2498             0.3035
Other                 Phase 3            -0.0287      0.8139             0.0038
Other                 Phase 4            -0.062       0.6754             0.0158
Procedure             Phase 1            -0.7269      0.1547             0.3513
Procedure             Phase 1/Phase 2    -0.2945      0.3947             0.1388
Procedure             Phase 2            -0.1459      0.1899             0.1084
Procedure             Phase 2/Phase 3    -0.0916      0.6634             0.0426
Procedure             Phase 3            -0.0237      0.8576             0.0028
Procedure             Phase 4            -0.3406      0.0376*            0.2841
Radiation             Phase 1/Phase 2     7.1561      0.6934             0.8748
Radiation             Phase 2             0.106       0.6021             0.0256
Radiation             Phase 4             5.7944      0.8776             0.72

Table 2. Results of logistic regressions of Percent of Clinical Trials reporting serious adverse events vs. Year of Clinical Trial Start Date, by Intervention Type and Phase. Chi Square values that are statistically significant at P=0.05 are indicated with an asterisk (*). Estimate is the coefficient from the regression. Pr>ChiSq for the Wald Chi Square indicates the p-value testing the null hypothesis that an individual predictor's regression coefficient is zero, given that the other predictor variables are in the model; a statistically significant value rejects the null hypothesis.


Figures 1a-1i (nine panels, one per significant combination). Intervention type-phase combinations that resulted in statistically significant logistic regressions. Y-axis: percent of clinical trials reporting serious adverse events. X-axis: year of the start date of the clinical trial.

The logistic regressions also showed that only two intervention types across all phases, namely Behavioral and Procedure, had statistically significant relationships with the year of the clinical trial start date (Table 3, Figures 2a-2i). Of these, only Behavioral showed an increasing trend, with an R-Square of 33%. Section 801 of the Food and Drug Administration Amendments Act does not apply to behavioral health interventions, and it has been suggested that adverse event monitoring guidelines should be integrated specifically into behavioral health interventions (Peterson et al. 2013). These results support that position. In addition, data aggregated for all intervention types and phases combined showed no overall increasing or decreasing trend, although a short-term decreasing trend is apparent (Figure 3).

We conclude that the current clinical trials regulatory mechanism, as it relates to serious adverse events, is efficacious overall, despite isolated excursions of increasing occurrences.

Intervention Type     Estimate    Coefficient Pr>ChiSq    R-Square
Behavioral             0.2968      0.0109*                 0.3302
Biological            -0.0751      0.3506                  0.0383
Device                 0.00934     0.9211                  0.0005
Dietary Supplement    -0.1431      0.2542                  0.0889
Drug                  -0.084       0.2987                  0.0527
Genetic                0.1038      0.781                   0.0126
Other                  0.1579      0.2122                  0.0712
Procedure             -0.2253      0.034*                  0.223
Radiation              0.0963      0.4356                  0.0478

Table 3. Results of logistic regressions of Percent of Clinical Trials reporting serious adverse events vs. Year of Clinical Trial Start Date, by Intervention Type. Chi Square values that are statistically significant at P=0.05 are indicated with an asterisk (*). Estimate is the coefficient from the regression. Pr>ChiSq for the Wald Chi Square indicates the p-value testing the null hypothesis that an individual predictor's regression coefficient is zero, given that the other predictor variables are in the model; a statistically significant value rejects the null hypothesis.

Figures 2a-2i (nine panels, one per intervention type). Y-axis: percent of clinical trials reporting serious adverse events. X-axis: year of the start date of the clinical trial. All phases.

Figure 3. Y-axis: percent of clinical trials reporting serious adverse events. X-axis: year of the start date of the clinical trial. All intervention types and phases.


5. DISCUSSION

Evidence-based policy-making (EBPM) is a normative theory of policy choice asserting that policy decisions, in both formulation and revision, should always be based on the best available evidence (Kay 2011, De Marchi et al. 2012, Ghosh et al. 2014). (We note that regulations and laws should also be based on the best available evidence, such that this discussion applies to them as well.) This evidence is obtained through monitoring (Dutz et al. 2014), and through engaging stakeholders (Lemke and Harris-Wai 2015) and researchers (Goldson et al. 2014). The best available evidence comprises "expert knowledge; published research; existing research; stakeholder consultations; previous policy evaluations; the internet; outcomes from consultation; costings of policy options; outputs from economic and statistical modelling" (Ghosh et al. 2014: 620). Regarding "published research," when a policy is narrowly focused, the study of its effects and efficacy falls within the boundaries of a discipline, and feedback becomes readily available in a specialty journal. But when a policy is broadly applicable, the study of its effects and efficacy may overlap multiple disciplines. A specialty journal might publish relevant studies on the subject that fall within its narrow boundaries, but the impact of the policy in its totality can be neglected by researchers, and a holistic assessment of its efficacy can be difficult.

Evidence-based health policy-making (Biller-Andorno et al. 2002) is one such broadly applicable field. Consider, for example, clinical trials policy. In the United States, clinical trials are observational or interventional, and there are nine types of interventions, each of which may be considered to fall under a separate research discipline (behavioral, biological, device, dietary supplement, drugs, genetic, "other," procedures, radiation). Sponsors (funders) of clinical trials that evaluate new drugs, biologics, and devices are required to monitor those trials, and one recommended monitoring instrument is the Data Monitoring Committee (DMC; FDA 2006; in Europe, CHMP 2005). The DMC reviews summaries of any "serious adverse events" (see below, Theoretical Background), and it may recommend early termination of a clinical trial. Thus the DMC crosses three specialty disciplines. Statistics on the percentage of clinical trials that use the DMC have been published (Keating and Cambrosio 2009), but detailed assessments of the DMC process have been published only in narrow specialty journals (e.g., for drugs, Glover and Kay 2012). Was the DMC a good idea? What are the effects and efficacy of the DMC, in its totality, across the three interventions and across the years? Is additional guidance (i.e., a revision of the FDA 2006 guidance) needed? Should the DMC be recommended for other intervention types? To answer these types of questions, tools are needed that enable policy makers to monitor relevant parameters across intervention types and years.

Evidence-based policy-making is "a forward looking approach to creating public policies with the aim of addressing real problems and relies on evidence, rather than short-term pressure" (Majcen 2016: 1). One of the main questions in EBPM is: what makes for good evidence (Moore 2006, Head 2016)? Types of good evidence are "impact evidence, implementation evidence, descriptive analytical evidence, public attitudes and understanding, statistical modelling, economic evidence, and ethical evidence" (De Marchi et al. 2016: 27). A balance among these is recommended, yet there is a trend towards preferring quantitative and economic evidence in the U.K. (De Marchi et al. 2016) and the EU (Majcen 2016). The present study offers quantitative evidence in the form of statistical modelling. The indexes presented here also comprise impact evidence, presented at the same aggregate scale at which the FDA serious adverse events policy operates. However, it must be emphasized that all indexes are constructs, which must be seen as eroding their quantitative value to some extent.

Questions have been raised regarding the extent to which policy makers actually utilize evidence in their policy­making; the flood of information available to policy makers has also been noted (reviewed in Head 2016). The discontinuity between aspiration and practice may lie in part in the difficulties posed by the volume and scope of data, i.e., it may be a data presentation issue. Indexes, once constructed, are easy to use and understand. They are popular in spite of the conceptual difficulties posed by their nature as constructs. They should be welcomed into the policy maker’s toolkit.


5. CONCLUSION

The preceding analyses demonstrated the utility of an index-based approach to monitoring sector- and industry-wide clinical trial research trends. Indexes were constructed comprising the percentages of clinical trials reporting serious adverse events, aggregated by intervention type and by intervention type and phase. The visual display afforded by indexes enables observers to place current sector- and industry-wide patterns, and the performance of specific research enterprises, in a longer term context. It also allows for longer term trend analysis, which in turn can support strategic vis-à-vis tactical monitoring and management.
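For concreteness, the aggregation just described can be sketched in a few lines of Python with pandas. This is illustrative only; the column names are hypothetical stand-ins for AACT-derived fields, not the study's actual code.

    # Sketch of the index construction: the percent of trials reporting
    # serious adverse events, aggregated by intervention type and start year.
    # Column names are hypothetical.
    import pandas as pd

    def sae_index(trials: pd.DataFrame) -> pd.DataFrame:
        grouped = trials.groupby(["intervention_type", "start_year"])
        pct = grouped["reported_sae"].mean().mul(100)
        return pct.rename("pct_reporting_sae").reset_index()

    # The by-type-and-phase indexes are the same aggregation with "phase"
    # added to the groupby key.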

In our demonstrative analysis of longer term trends, most of the indexes showed no statistically significant trend over time. The few that did mainly displayed a decreasing percent of reported serious adverse events. Indexes can show patterns but cannot explain them, so our interpretation is guarded. The results showing decreasing serious adverse events suggest that the policies and regulations around clinical trials are efficacious; but future research should be directed to performing in-depth case studies to shed light on why trends are statistically significant for some intervention type-phase combinations but not for others.

This study is intended as a proof of concept and is meant to open the research area for further development. The indexes presented here are relatively simple. Future research may consider following the lead of index number researchers in other areas. One potential approach is the index decomposition methodology (Ang and Zhang 2000), which constructs a hypothetical aggregate by holding the structure (e.g., the mix of intervention types) fixed at a base period, and computes the impact of structural change as the difference between this hypothetical aggregate and the observed aggregate; a sketch of this decomposition follows.
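The following is a simplified, Laspeyres-type sketch of that decomposition, under the assumption that the aggregate serious adverse event rate is a share-weighted sum of subgroup rates; Ang and Zhang (2000) survey many more refined variants.

    # Toy index decomposition: holding base-period shares fixed yields a
    # hypothetical aggregate; observed minus hypothetical is the structural
    # effect, hypothetical minus the base aggregate is the intensity effect.
    def decompose(shares0, rates0, shares1, rates1):
        observed0 = sum(s * r for s, r in zip(shares0, rates0))
        observed1 = sum(s * r for s, r in zip(shares1, rates1))
        hypothetical = sum(s * r for s, r in zip(shares0, rates1))  # old mix
        return (observed1 - observed0,          # total change
                observed1 - hypothetical,       # structural effect
                hypothetical - observed0)       # intensity effect

    # Two intervention types; the trial mix shifts toward the riskier type
    # while subgroup rates stay constant, so all change is structural:
    print(decompose([0.7, 0.3], [0.10, 0.30], [0.5, 0.5], [0.10, 0.30]))
    # -> roughly (0.04, 0.04, 0.0)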


Literature Cited

AACT Database for Aggregate Analysis of ClinicalTrials.gov. Durham, NC: Clinical Trials Transformation Initiative, 2015. http://www.ctti-clinicaltrials.org/what-we-do/analysis-dissemination/state-clinical-trials/aact-database, last accessed December 23, 2015.

Allen, Roy George Douglas. Index Numbers in Theory and Practice. London: Macmillan, 1975.

AMIA. "Bioinformatics core competencies," 2016a, online: https://www.amia.org/biomedical-informatics-core-competencies, last accessed July 5, 2016.

AMIA. "Clinical research informatics," 2016b, online: https://www.amia.org/applications-informatics/clinical-research-informatics, last accessed July 5, 2016.

Ang, Beng Wah, and F. Q. Zhang. "A survey of index decomposition analysis in energy and environmental studies." Energy 25, no. 12 (2000): 1149-1176.

Arrowsmith, John. "Trial watch: phase III and submission failures: 2007–2010." Nature Reviews Drug Discovery 10, no. 2 (2011): 87.

Arrowsmith, John. "Trial watch: phase II failures: 2008–2010." Nature Reviews Drug Discovery 10, no. 5 (2011): 328-329.

Arrowsmith, John, and Philip Miller. "Trial watch: phase II and phase III attrition rates 2011-2012." Nature Reviews Drug Discovery 12, no. 8 (2013): 569.

Barnett, William A., and Marcelle Chauvet. "How better monetary statistics could have signaled the financial crisis." Journal of Econometrics 161, no. 1 (2011): 6-23.

Barnsley, Michael F. Fractals Everywhere. Academic Press, 2014.

Belter, Klaus, Tom Engsted, and Carsten Tanggaard. "A new daily dividend-adjusted index for the Danish stock market, 1985–2002: construction, statistical properties, and return predictability." Research in International Business and Finance 19, no. 1 (2005): 53-70.

Biller-Andorno, Nikola, Reidar K. Lie, and Ruud Ter Meulen. "Evidence-based medicine as an instrument for rational health policy." Health Care Analysis 10, no. 3 (2002): 261-275.

Boumans, Marcel. "Fisher's instrumental approach to index numbers." History of Political Economy 33, no. 5 (2001): 313-344.

Cadogan, John W., and Nick Lee. "Improper use of endogenous formative variables." Journal of Business Research 66, no. 2 (2013): 233-241.

Califf, Robert M., Deborah A. Zarin, Judith M. Kramer, Rachel E. Sherman, Laura H. Aberle, and Asba Tasneem. "Characteristics of clinical trials registered in ClinicalTrials.gov, 2007-2010." JAMA 307, no. 17 (2012): 1838-1847.

CHMP (Committee for Medicinal Products for Human Use). Guideline on Data Monitoring Committees. July 2005. EMEA/CHMP/EWP/5872/03, online: http://www.ema.europa.eu/pdfs/human/ewp/587203en.pdf, last accessed July 3, 2016.

Choi, Leena, Francesca Dominici, Scott L. Zeger, and Peter Ouyang. "Estimating treatment efficacy over time: a logistic regression model for binary longitudinal outcomes." Statistics in Medicine 24, no. 18 (2005): 2789.

Clements, Kenneth W., Izan H. Y. Izan, and E. Antony Selvanathan. "Stochastic index numbers: a review." International Statistical Review 74, no. 2 (2006): 235-270.

ClinicalTrials.gov. "Basic Results" Data Element Definitions (2013), online: http://prsinfo.clinicaltrials.gov/results_definitions.html#AdverseEventsDefinition, last accessed March 2015.

Collin, A., A. Lamorlette, D. Bernardin, and O. Séro-Guillaume. "Modelling of tree crowns with realistic morphological features: New reconstruction methodology based on Iterated Function System tool." Ecological Modelling 222, no. 3 (2011): 503-513.

Copeland, Adam, and Dennis Fixler. "Measuring the price of research and development output." Review of Income and Wealth 58, no. 1 (2012): 166-182.

De Marchi, Giada, Giulia Lucertini, and Alexis Tsoukiàs. "From Evidence-Based Policy-Making to Policy Analytics." (2012).

Diewert, W. Erwin. "Fisher ideal output, input, and productivity indexes revisited." Journal of Productivity Analysis 3, no. 3 (1992): 211-248.

Diewert, W. Erwin. "Index Numbers." In: John Eatwell, Murray Milgate, Peter Newman (Eds), The New Palgrave: A Dictionary of Economics, Vol. II. London: Palgrave, 2008, pp. 1-44.

Diewert, W. Erwin. "Cost of living indexes and exact index numbers." Quantifying Consumer Preferences 288 (2009): 207.

Dutz, M. A., Y. Kuznetsov, E. Lasagabaster, and D. Pilat. Making Innovation Policy Work: Learning from Experimentation. OECD/IBRD, Paris, 2014.

Edwards, Jeffrey R. "The fallacy of formative measurement." Organizational Research Methods 14, no. 2 (2010): 370-388.

Eichhorn, Wolfgang. "Zur axiomatischen Theorie des Preisindex." Demonstratio Mathematica 6 (1973): 561-573.

Falconer, Kenneth. Fractal Geometry: Mathematical Foundations and Applications. John Wiley & Sons, 2004.

Färe, Rolf, and Daniel Primont. "Luenberger productivity indicators: aggregation across firms." Journal of Productivity Analysis 20, no. 3 (2003): 425-435.

FDA. Step 3 – Clinical Research (2015a), online: http://www.fda.gov/ForPatients/Approvals/Drugs/ucm405622.htm, last accessed December 26, 2015.

FDA. ClinicalTrials.gov Protocol Data Element Definitions (DRAFT December 2015) (2015b), online: https://prsinfo.clinicaltrials.gov/definitions.html#StudyPhase, last accessed December 26, 2015.

FDA. Guidance for Clinical Trial Sponsors: Establishment and Operation of Clinical Trial Data Monitoring Committees. March 2006. OMB control no. 0910-0581, online: http://www.fda.gov/downloads/RegulatoryInformation/Guidances/ucm127073.pdf, last accessed July 3, 2016.

Finn, Adam, and Luming Wang. "Formative vs. reflective measures: Facets of variation." Journal of Business Research 67, no. 1 (2014): 2821-2826.

Fisher, Irving. The Making of Index Numbers: A Study of Their Varieties, Tests, and Reliability. No. 1. Houghton Mifflin, 1922.

Fisher, Lawrence, and Daniel G. Weaver. "Dealing with short-term anomalies in the relative prices of securities, I: constructing an unbiased equally weighted investment performance index and estimating the standard deviation of the relative 'errors'." Unpublished working paper, Rutgers University (1992).

Fox, Kevin J. "Problems with (dis)aggregating productivity, and another productivity paradox." Journal of Productivity Analysis 37, no. 3 (2012): 249-259.

Frisch, Ragnar. "Necessary and sufficient conditions regarding the form of an index number which shall meet certain of Fisher's tests." Journal of the American Statistical Association 25, no. 172 (1930): 397-406.

Frisch, Ragnar. "Annual survey of general economic theory: The problem of index numbers." Econometrica: Journal of the Econometric Society (1936): 1-38.

Ghosh, A., C. Carmichael, A. J. Elliot, H. K. Green, V. Murray, and C. Petrokofsky. "The Cold Weather Plan evaluation: an example of pragmatic evidence-based policy making?" Public Health 128, no. 7 (2014): 619-627.

Glover, Josephine Mary, and Richard Kay. "Who Advises the Data Monitoring Committee (DMC)? A Review of Regulatory Guidance for Sponsors on DMCs After 5 Years and Advice for DMC Members." Drug Information Journal 46, no. 5 (2012): 525-531.

Goldson, Stephen, Peter Gluckman, and Kristiann Allen. "Getting science into policy." Journal für Verbraucherschutz und Lebensmittelsicherheit 9, no. 1 (2014): 7-13.

Hansen, Bent, and Edward F. Lucas. "On the accuracy of index numbers." Review of Income and Wealth 30, no. 1 (1984): 25-38.

Hardin, Andrew, and George A. Marcoulides. "A commentary on the use of formative measurement." Educational and Psychological Measurement (2011): 0013164411414270.

He, Zhe, Simona Carini, Ida Sim, and Chunhua Weng. "Visual aggregate analysis of eligibility features of clinical trials." Journal of Biomedical Informatics 54 (2015): 241-255.

Head, Brian W. "Toward More 'Evidence-Informed' Policy Making?" Public Administration Review 76, no. 3 (2016): 472-484.

Hill, Kevin D., Karen Chiswell, Robert M. Califf, Gail Pearson, and Jennifer S. Li. "Characteristics of pediatric cardiovascular clinical trials registered on ClinicalTrials.gov." American Heart Journal 167, no. 6 (2014): 921-929.

Howell, Roy D., Einar Breivik, and James B. Wilcox. "Reconsidering formative measurement." Psychological Methods 12, no. 2 (2007): 205.

Hutchinson, John E. Fractals and Self Similarity. University of Melbourne, Department of Mathematics, 1979.

Isaac, Joel. Working Knowledge: Making the Human Sciences from Parsons to Kuhn. Harvard University Press, 2012.

Jeffrey, H. Joel. "Chaos game representation of gene structure." Nucleic Acids Research 18, no. 8 (1990): 2163-2170.

Kay, Adrian. "Evidence-Based Policy-Making: The Elusive Search for Rational Public Administration." Australian Journal of Public Administration 70, no. 3 (2011): 236-245.

Keating, Peter, and Alberto Cambrosio. "Who's minding the data? Data Monitoring Committees in clinical cancer trials." Sociology of Health & Illness 31, no. 3 (2009): 325-342.

La Torre, Davide, Simone Marsiglio, and Fabio Privileggi. "Fractals and self-similarity in economics: the case of a two-sector growth model." Image Analysis and Stereology 30, no. 3 (2011): 143.

Lan, Boon Leong, and Ying Oon Tan. "Statistical properties of stock market indices of different economies." Physica A: Statistical Mechanics and its Applications 375, no. 2 (2007): 605-611.

Lemke, Amy A., and Julie N. Harris-Wai. "Stakeholder engagement in policy development: challenges and opportunities for human genomics." Genetics in Medicine 17, no. 12 (2015): 949-957.

Long, J. S. Regression Models for Categorical and Limited Dependent Variables. Sage Publishing, 1997.

Majcen, Š. "Evidence based policy making in the European Union: the role of the scientific community." Environmental Science and Pollution Research, online 20 Feb 2016.

Marshall, Ben R., Sun Qian, and Martin Young. "Is technical analysis profitable on US stocks with certain size, liquidity or industry characteristics?" Applied Financial Economics 19, no. 15 (2009): 1213-1221.

Moore, R. Andrew. "Evidence-based policy-making." Clinical Chemical Laboratory Medicine 44, no. 6 (2006): 678-682.

Neiburg, Federico. "Sick currencies and public numbers." Anthropological Theory 10, no. 1-2 (2010): 96-102.

Peterson, Alan L., John D. Roache, Jeslina Raj, Stacey Young-McCaughan, and STRONG STAR Consortium. "The need for expanded monitoring of adverse events in behavioral health clinical trials." Contemporary Clinical Trials 34, no. 1 (2013): 152-154.

Petsko, Gregory A. "When failure should be the option." BMC Biology 8, no. 1 (2010): 61.

Porter, Theodore. Trust in Numbers: The Search for Objectivity in Science and Public Life. 1995.

Rigdon, Edward E. "Comment on 'Improper use of endogenous formative variables'." Journal of Business Research 67, no. 1 (2014): 2800-2802.

Samuelson, Paul A., and Subramanian Swamy. "Invariant economic index numbers and canonical duality: survey and synthesis." The American Economic Review (1974): 566-593.

Shi, P. J., H. S. Sandhu, and H. J. Xiao. "Logistic Regression is a Better Method of Analysis Than Linear Regression of Arcsine Square Root Transformed Proportional Diapause Data of Pieris melete (Lepidoptera: Pieridae)." Florida Entomologist 96, no. 3 (2013): 1183-1185.

Siddiqui, Ohidul. "Statistical methods to analyze adverse events data of randomized clinical trials." Journal of Biopharmaceutical Statistics 19, no. 5 (2009): 889-899.

Sivendran, Shanthi, Asma Latif, Russell B. McBride, Kristian D. Stensland, Juan Wisnivesky, Lindsay Haines, William K. Oh, and Matthew D. Galsky. "Adverse event reporting in cancer clinical trial publications." Journal of Clinical Oncology 32, no. 2 (2014): 83-89.

Sokol, D., and C. Gentil. "Intuitive modeling of vaporish objects." Chaos, Solitons and Fractals 81 (2015): 1-10.

Stępień, C. "An IFS-based method for modelling horns, seashells and other natural forms." Computers & Graphics 33, no. 4 (2009): 576-581.

Tasneem, Asba, Laura Aberle, Hari Ananth, Swati Chakraborty, Karen Chiswell, Brian J. McCourt, and Ricardo Pietrobon. "The database for aggregate analysis of ClinicalTrials.gov (AACT) and subsequent regrouping by clinical specialty." PLoS ONE 7, no. 3 (2012): e33677.

Tiňo, Peter. "Spatial representation of symbolic sequences through iterative function systems." IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 29, no. 4 (1999): 386-393.

Vanlaar, Ward, Robyn Robertson, and Kyla Marcoux. "An evaluation of Winnipeg's photo enforcement safety program: Results of time series analyses and an intersection camera experiment." Accident Analysis & Prevention 62 (2014): 238-247.

Warton, David I., and Francis K. C. Hui. "The arcsine is asinine: the analysis of proportions in ecology." Ecology 92, no. 1 (2011): 3-10.

Yamaguti, Yutaka, Shigeru Kuroda, Yasuhiro Fukushima, Minoru Tsukada, and Ichiro Tsuda. "A mathematical model for Cantor coding in the hippocampus." Neural Networks 24, no. 1 (2011): 43-53.


CONCLUSION

In this dissertation I investigated whether the study of technology commercialization models can benefit from a philosophy of science-type approach. In the philosophy of science approach, we reconsider things that are currently being taken for granted and locate issues that are not currently being treated. I concluded that it can. The out-of-bounds, foundation-probing thinking that characterizes the philosophy of science resulted in a number of valuable and interesting findings. I also learned some important lessons about how to perform and communicate this type of research. The single most surprising result from this research was that the philosophy of science, and especially Kuhn (2012 [1962]), was promoting the sustaining-discontinuous innovation dichotomy long before it was popularized in management by Christensen (2013). I will now briefly review my results. I will then discuss the lessons learned. I will conclude with a brief discussion of how the philosophy of science preceded the concept of discontinuous innovation.

Summary of Results

In Chapter One, I applied Sugden et al.'s (1981) four-parameter version of Richards' (1959) model to technology diffusion curves (Rogers 2010). This tool proved valuable in itself, as it facilitates technology diffusion forecasting; a sketch of the fitting procedure follows. The research was also a necessary foundation for the research in Chapter Two.
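As an illustration of what such a fit involves, here is a minimal sketch in Python using a generic four-parameter Richards curve; the exact Sugden et al. (1981) parameterization used in the chapter differs in detail, and the data below are fabricated.

    # Fit a four-parameter Richards (generalized logistic) curve to
    # cumulative-adoption data: A = ceiling, k = growth rate, t0 = inflection
    # location, nu = shape (nu = 1 recovers the ordinary logistic curve).
    import numpy as np
    from scipy.optimize import curve_fit

    def richards(t, A, k, t0, nu):
        return A * (1.0 + nu * np.exp(-k * (t - t0))) ** (-1.0 / nu)

    rng = np.random.default_rng(0)
    t = np.arange(20.0)
    y = richards(t, 100.0, 0.6, 10.0, 0.5) + rng.normal(0.0, 1.0, t.size)

    params, _ = curve_fit(richards, t, y,
                          p0=[y.max(), 0.5, np.median(t), 1.0],
                          bounds=([1.0, 0.01, 0.0, 0.01],
                                  [1e3, 5.0, 40.0, 10.0]))

    # Forecast: evaluate the fitted curve beyond the observed range.
    forecast = richards(np.arange(20.0, 30.0), *params)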

In Chapter Two, I addressed the hitherto unaddressed question of the ontology of technology adoption models, that is, whether they are science or technology; or more fundamentally, whether technology adoption models13 are representational or interventional. I found that they diffused like technology. I interpreted this result within the onto-epistemic perspective of Sandkühler (1990, 1991), in which the ontological and epistemological components of the human mode of grasping the world operate very much on the same footing, i.e., our logic affects how we see the world, and the world affects how we construct our logic. This result has bearing on our notions of the objectivity of the observer ("observer effects"), on the influence of the observer's logic on his observations, and on the relation between understanding and truth.

13 Namely the Technology Acceptance Model (Venkatesh and Davis 2000), the Precautionary Principle (Francot-Timmermans and De Vries 2013), and the diffusion of innovations (Rogers 2010).

In Chapter Three, I showed that there is evidence of replicator dynamics (Henrich 2001) in Base of the Pyramid field studies. This evidence suggests that new products diffuse at the Base of the Pyramid as they do elsewhere: through imitation, not through individual learning. The problem is that all Base of the Pyramid product diffusion studies use individual learning models. A sketch contrasting the two dynamics follows.
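The contrast can be sketched with two difference equations, following the S-shaped versus r-shaped diagnostic in Henrich (2001); the parameters here are illustrative, not estimates from the field studies.

    # Imitation (replicator dynamics) vs. individual learning: the former
    # gives an S-shaped adoption curve, the latter an r-shaped (concave) one.
    import numpy as np

    def adoption(rate, imitation, p0=0.01, steps=100):
        p = np.empty(steps)
        p[0] = p0
        for t in range(1, steps):
            if imitation:   # dp = rate * p * (1 - p): slow-fast-slow
                p[t] = p[t - 1] + rate * p[t - 1] * (1 - p[t - 1])
            else:           # dp = rate * (1 - p): fastest at the outset
                p[t] = p[t - 1] + rate * (1 - p[t - 1])
        return p

    s_curve = adoption(0.2, imitation=True)    # biased cultural transmission
    r_curve = adoption(0.2, imitation=False)   # individual learning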

In Chapter Four, I reexamined the Precautionary Principle (Francot-Timmermans and De Vries 2013), a policy and regulatory model for dealing with potentially dangerous new technologies and technology-based products. I combined Slovic's (1987) psychometric study on the perception of risk with Sandin's (1999) ontological Threat dimension and epistemological Uncertainty dimension. I showed that the Precautionary Principle states that if the public thinks something is an unknown risk, then protective action should be taken. This finding allowed me to interpret the Precautionary Principle as a model that validates and operationalizes the fear of uncertainty.

In Chapter Five, I imported a new model into technology commercialization studies. Specifically, I applied economic index number theory (Boumans 2001) to clinical trial data to develop a tool for clinical trial (bio)informatics. The tool allowed me to visually display large amounts of data. I also used the study as an opportunity to revisit the construct validity of formative measures (Edwards 2010) such as indexes.

Lessons Learned

I will now discuss the lessons learned from performing this research, and the implications for future research.

I recently presented technology forecasts using the Richards model at a conference. Specifically, I presented at the 2016 Commercialization of Micro, Nano, and Emerging Technologies (COMS) conference, a joint conference between the American Society of Mechanical Engineers (ASME) and the Micro, Nano, and Emerging Technologies Commercialization Education Foundation (MANCEF). I presented forecasts of several technologies, including forecasts of the diffusion of sensors. I showed that existing forecasts of sensor diffusion utilized either linear or exponential models, neither of which has theoretical or empirical justification. When I presented my sigmoidal forecast, two members of the audience suggested that my forecast did not account for the possibility of disruptive technology or discontinuous innovation. This comment came even after I showed that 85% of R&D funding goes into three activities that contribute less than 5% to the economy (Clark 2016). I agree that product diffusion models do not account for discontinuous innovation. However, speculations about the possible sudden appearance of disruptive effects (a kind of ancient Greek deus ex machina) would never get past a good reviewer for a peer-reviewed journal. Therefore I stand by my forecasts. The sketch below illustrates how sharply linear and exponential extrapolations diverge from a sigmoidal model.
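The point about model choice can be illustrated with a toy computation: linear and exponential extrapolations fitted to early diffusion data behave very differently from a sigmoidal model as the market saturates. The data here are fabricated, not the sensor data from the talk.

    # Early observations from a (true) logistic diffusion process, then three
    # extrapolations: linear, exponential, and the sigmoidal curve itself.
    import numpy as np

    ceiling = 100.0
    t_obs = np.arange(8.0)                                # early data only
    y_obs = ceiling / (1 + np.exp(-0.6 * (t_obs - 10)))   # true process

    t_fut = np.arange(8.0, 25.0)
    slope, intercept = np.polyfit(t_obs, y_obs, 1)        # linear fit
    linear = intercept + slope * t_fut
    b, log_a = np.polyfit(t_obs, np.log(y_obs), 1)        # exponential fit
    exponential = np.exp(log_a + b * t_fut)
    sigmoidal = ceiling / (1 + np.exp(-0.6 * (t_fut - 10)))

    # The exponential forecast overshoots the ceiling by orders of magnitude,
    # while the sigmoidal forecast saturates near 100:
    print(round(exponential[-1]), round(sigmoidal[-1], 1))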

The question of whether models are technology is traditionally a philosophical question. It has not been investigated empirically. Philosophers did not welcome my empirical investigation; they did not seem to understand it. One reviewer did not appreciate that one might first need to justify asking the question of whether models are technology, before even asking the question. Another reviewer did not understand the difference between justifying the question of whether models are technology, versus asking and proving it ("If sigmoidal growth is an indication of technology, then everything that grows is technology." My point was that if models do not demonstrate sigmoidal growth, then we are not justified in even asking whether models are technology; I was not yet trying to prove that models are technology.). Another reviewer did not understand the idea that the Precautionary Principle could be treated as a model. This is an argument that I need to make more forcefully. In brief, I am treating the Precautionary Principle as a mental model for confronting risk. This approach is not new: a mental model-based approach to risk communication has already been proposed (Morgan 2002). Moreover, a principle is a model for developing a theory. "[A] principle is more than only a postulate expressing a fundamental assumption concerning how one should understand nature, although a principle may involve such a postulate...A principle also serves as guidance for a chain of reasoning in building a physical theory" (Plotnitsky 2015: 1224). Finally, a particular difficulty with this chapter is that the empirical investigation of the question opens up a new line of research. If I review the theoretical literature, then reviewers expect a theoretical investigation. If I do not review the theoretical literature, then I am open to accusations of misrepresentation. All these issues will require careful clarification in future research.

The minority position is that the Precautionary Principle is a matter of customary international law. A jurist holding this position was unable to conceptualize the Precautionary Principle as arising from a common global culture of precautionary thinking. He was not able to see the simultaneous effects of this common culture on German, Swedish, and American law and on international law. He thought I was claiming that American law affects German law. Perhaps it does, but that was not my claim.

I thought regulators would find value in the clinical trials Serious Adverse Events index number, because it indicates the efficacy of regulatory measures across industries. I thought investors would find value in it, because it indicates the success of research in entire sectors. In retrospect I did not make my case. I focused on explaining the index number, rather than on demonstrating its utility. In future research I will focus on demonstrating the utility of the index number.

In summary, this type of research is potentially fruitful, but it requires great care in communication. The philosophy of science takes an unconventional approach not merely to be different, but rather to promote and advance the research subject in question. Thus the emphasis in communicating should be on the value added by the research, which then justifies the unconventional methods.

Discontinuous Innovation and the Philosophy of Science

The most surprising and perhaps most important finding of this dissertation is that the philosophy of science is a form of discontinuous innovation14 in the sense of Christensen (2013), but one that preceded Christensen by many years. The philosophy of science is discontinuous innovation because it disrupts a research field with radical innovation, and thereby accelerates that field's development. What is also surprising is Christensen's unacknowledged debt to Kuhn (2012 [1962]).

14 Following Kassicieh et al. (2002), we refer to discontinuous innovation and disruptive technology, not, for example, discontinuous technology.

In the Introduction to this dissertation, I discussed Chang (1999). Chang (1999) proposed thinking about the philosophy of science as a venue for addressing general questions that science could address but did not, owing to scientific specialization. Chang (1999: 414) stated that the philosophy of science deals with "ideas and questions [that] must be suppressed, since they are heterodox enough to contradict or destabilize those items of knowledge that need to be taken for granted." Chang (1999) treated Kuhn and Popper as a Scylla and Charybdis, but he missed the most significant point. Kuhn (2012 [1962], 1970) asserted that a scientific paradigm cannot function unless it is taken for granted and protected from criticism. Popper (1970) asserted that dogmatism in science was a danger to science and civilization. For Chang (1999: 415), the philosophy of science increases scientific knowledge, not by navigating between Kuhn and Popper, but by "practicing...alongside" science. But it should be noted that Kuhn challenged the prevailing view of normal scientific progress as development by accumulation (Kuhn's concept of revolution was itself a revolution). He proposed instead that periods of continuous scientific development are interrupted by revolutions. Yet this is the basic structure of Christensen's thesis of continuous or sustaining innovation that is interrupted by discontinuous innovation. Christensen has cited Kuhn but has not acknowledged the dependence of the structure of his thesis on Kuhn's.

Kuhn and Christensen were not the only scholars to adopt this Kuhnian model. In 1972, a similar model was proposed in evolutionary biology as punctuated equilibrium, the hypothesis that evolutionary development is marked by isolated episodes of rapid speciation between long periods of little or no change (Eldredge and Gould 1972). Mathematicians similarly study the behavior of systems at the boundaries of phase transitions, as these systems make abrupt transitions from rigid order to chaos (Langton 1990, Kauffman and Johnsen 1991, Hiett 1999). The model has also been applied to institutional change (Young 1996) and public policymaking (Baumgartner et al. 2014). Levinthal (1998) applied it to technological change shortly after Christensen published his book in 1997.


It has been suggested that Modernism (and Postmodernism) is a worldview characterized by self-referentiality, stochasticity or heterogeneity, discontinuity, simultaneous multiple perspectives, radical subjectivity, and nonlocality (Marinakis 2008, Everdell 2009). We may now conjecture that the current Post-Post-Modern worldview is characterized by the Kuhnian model. It is a vision in which our world, and our experience of the world, comprise periods of continuity interrupted or punctuated by abrupt transitions, potentially transitions into chaos. The role of technology, and of the Management of Technology, in such a worldview is both moderating and amplifying. Technology can extend life just as dramatically as it can prematurely end it.


Literature Cited

Baumgartner, Frank R., Bryan D. Jones, and Peter B. Mortensen. "Punctuated equilibrium theory: explaining stability and change in public policymaking." Theories of the Policy Process (2014): 59-103.

Boumans, Marcel. "Fisher's instrumental approach to index numbers." History of Political Economy 33, no. 5 (2001): 313-344.

Chang, Hasok. "History and philosophy of science as a continuation of science by other means." Science & Education 8, no. 4 (1999): 413-425.

Christensen, Clayton. The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Harvard Business Review Press, 2013.

Clark, G. "Winter Is Coming: Robert Gordon and the Future of Economic Growth." The American Economic Review 106, no. 5 (2016): 68-71.

Edwards, Jeffrey R. "The fallacy of formative measurement." Organizational Research Methods 14, no. 2 (2010): 370-388.

Eldredge, Niles, and S. J. Gould. "Punctuated equilibria: an alternative to phyletic gradualism." In T. J. M. Schopf, ed., Models in Paleobiology. San Francisco: Freeman Cooper, 1972, pp. 82-115.

Everdell, William R. The First Moderns: Profiles in the Origins of Twentieth-Century Thought. University of Chicago Press, 2009.

Francot-Timmermans, Lyana, and Ubaldus De Vries. "Eyes Wide Shut: On Risk, Rule of Law and Precaution." Ratio Juris 26, no. 2 (2013): 282-301.

Henrich, Joseph. "Cultural transmission and the diffusion of innovations: Adoption dynamics indicate that biased cultural transmission is the predominate force in behavioral change." American Anthropologist 103, no. 4 (2001): 992-1013.

Hiett, Peter John. "Characterizing critical rules at the 'edge of chaos'." Biosystems 49, no. 2 (1999): 127-142.

Kassicieh, Suleiman K., Steven T. Walsh, John C. Cummings, Paul J. McWhorter, Alton D. Romig, and W. David Williams. "Factors differentiating the commercialization of disruptive and sustaining technologies." IEEE Transactions on Engineering Management 49, no. 4 (2002): 375-387.

Kauffman, Stuart A., and Sonke Johnsen. "Coevolution to the edge of chaos: coupled fitness landscapes, poised states, and coevolutionary avalanches." Journal of Theoretical Biology 149, no. 4 (1991): 467-505.

Kuhn, Thomas S. "Logic of discovery or psychology of research." Criticism and the Growth of Knowledge (1970): 1-23.

Kuhn, Thomas S. The Structure of Scientific Revolutions. University of Chicago Press, 2012 [1962].

Langton, Chris G. "Computation at the edge of chaos: phase transitions and emergent computation." Physica D: Nonlinear Phenomena 42, no. 1 (1990): 12-37.

Levinthal, Daniel A. "The slow pace of rapid technological change: gradualism and punctuation in technological change." Industrial and Corporate Change 7, no. 2 (1998): 217-247.

Marinakis, Yorgos D. "Ecosystem as a topos of complexification." Ecological Complexity 5, no. 4 (2008): 303-312.

Morgan, M. Granger. Risk Communication: A Mental Models Approach. Cambridge University Press, 2002.

Plotnitsky, Arkady. "A Matter of Principle: The Principles of Quantum Theory, Dirac's Equation, and Quantum Information." Foundations of Physics 45, no. 10 (2015): 1222-1268.

Richards, F. J. "A flexible growth function for empirical use." Journal of Experimental Botany 10, no. 2 (1959): 290-301.

Rogers, Everett M. Diffusion of Innovations. Simon and Schuster, 2010.

Sandkühler, H. J. "Onto-Epistemologie." In Europäische Enzyklopädie zu Philosophie und Wissenschaften. Hamburg: Meiner, 1990, pp. 608-615.

Sandkühler, H. J. Die Wirklichkeit des Wissens: Geschichtliche Einführung in die Epistemologie und Theorie der Erkenntnis. Frankfurt: Suhrkamp, 1991.

Slovic, Paul. "Perception of risk." Science 236, no. 4799 (1987): 280-285.

Sugden, Lawson G., E. A. Driver, and Michael C. S. Kingsley. "Growth and energy consumption by captive mallards." Canadian Journal of Zoology 59, no. 8 (1981): 1567-1570.

Young, H. Peyton. "The economics of convention." The Journal of Economic Perspectives 10, no. 2 (1996): 105-122.
