
Scientific Modeling Without Representationalism

A dissertation submitted to

the Graduate School of the University of Cincinnati in partial fulfillment of the requirements for the degree of

Doctor of Philosophy (Ph.D.)

in the Department of Philosophy

of the College of Arts and Sciences

by

Guilherme SANCHES DE OLIVEIRA

M.A. University of São Paulo, 2014

M.A. University of Cincinnati, 2017

Committee Chair: Angela POTOCHNIK, Ph.D.

August 23, 2019

Abstract

Scientists often gain insight into real-world phenomena indirectly, through building and manipulating models. But what accounts for the epistemic import of model-based research? Why can scientists learn about real-world systems (such as the global climate or biological populations) by interacting not with the real-world systems themselves, but with computer simulations and mathematical equations? The traditional answer is that models teach us about certain real-world phenomena because they represent those phenomena. My dissertation challenges this representationalist answer and provides an alternative framework for making sense of scientific modeling.

The philosophical debate about scientific model-based representation has, by and large, proceeded in isolation from the debate about mental representation in philosophy of mind and cognitive science. Chapter one exposes and challenges this anti-psychologism. Drawing from ‘wide computationalist’ embodied cognitive science research, I put forward an account of scientific models as socially-distributed and materially-extended mental representations. This account illustrates how views on mental representation can help advance philosophical understanding of scientific representation, while raising the question of how other views from (embodied) cognitive science might inform philosophical theorizing about scientific modeling.

Chapter two argues that representationalism is untenable because it relies on ontological and epistemological assumptions that undermine one another no matter the theory of representation adopted. Views of scientific representation as mind-independent fail with the ontological claim that ‘models represent their targets’ and thereby undermine the epistemological claim that ‘we learn from models because they represent their targets’. On the other hand, views of scientific representation as mind-dependent support the ontological claim, but they do so in a way that also undermines the epistemological claim: if ‘representation’ entails only use rather than success or accuracy, then the epistemic value of modeling cannot be explained purely in representational terms.

Chapter three focuses on emerging “artifactualist” views of models as tools, artifacts, and instruments. The artifactualism of current accounts is a conciliatory view that is compatible with representationalism and merely promotes a shift in emphasis in theorizing. I argue against this version of artifactualism (which I call “weak artifactualism”), and I put forward an alternative formulation free from representationalism (“strong artifactualism”). Strong artifactualism is not only desirable, but it’s also viable and promising as an approach to making sense of how we learn through modeling.

Chapter four draws from ecological psychology to offer an empirically-informed account of modeling as a tool-building practice. I propose that the epistemic worth of modeling is best understood in terms of the “affordances” that the practice gives rise to for suitably-positioned embodied cognitive agents. This account develops a strong artifactualist view of models (chapter three) and it circumvents the challenges inherent to representationalism (chapter two) because it anchors the epistemic worth of modeling in the models’ affordances, which are agent-relative but mind-independent. Moreover, this account provides an additional reason to give up anti-psychologism in philosophy: not only can views on mental representation help us better understand scientific representation (chapter one), but anti-representational views in psychology can also inform a nonrepresentational understanding of how and why modeling works.

© 2019 Guilherme Sanches de Oliveira. All rights reserved.

Acknowledgements

In the sciences and in philosophy, we use the term “model” to refer to a computational, mathematical or concrete object that’s designed and used to facilitate learning about some other object, typically a system or phenomenon in the real world that’s the ultimate “target” of investigation: in this technical sense, we often think of models as imitations or copies of that target. But this is exactly the opposite of how we use the term in ordinary contexts: in real life, a “model” is the thing to be imitated, not the thing that’s imitating something else. It is in this ordinary sense of the term that I want to acknowledge and thank my models—the people I have been working hard to learn from, whose virtues I will continue trying to emulate.

My committee chair, Angela Potochnik, is my model of intellectual insight and generosity. Angela was the first person to get excited about my dissertation, and I mean that literally: she was excited about my dissertation even before I knew what it was going to be about. She saw potential in the very first paper I wrote in my very first semester at UC, and she encouraged me to keep developing those still rudimentary ideas. Over the course of five years, she was always open to reading my nonsense and giving it the benefit of the doubt. Angela’s generosity was, of course, evident in the feedback she would give me, which was always timely, rigorous, and abundant. But I want to emphasize her generosity in supporting my professional development: the countless letters of recommendation (for grants, awards, fellowships, workshops, summer schools, jobs, you name it); the opportunities to learn new skills (including LaTeX), to get involved in cool initiatives (PhilPapers, the Center for Public Engagement with Science), and to collaborate in research (co-authored paper forthcoming!); and all the advice, which I’m slowly coming to understand (e.g., learning when and what to say “no” to). For all of this and so much more, thank you!

I’m lucky to have found a mentor in Tony Chemero, my interdisciplinary hero, the person who not only told me but also showed me that it’s possible to do good philosophy and good science all at once. Being around Tony made me realize that pretty much all of my clever ideas had already been had by pragmatists and phenomenologists, and that turning to them could help dissolve my philosophical problems. Tony also helped me see the light and embrace the teachings of St. Jimmy (while always keeping a healthy dose of scientific skepticism!), and for this I will be forever grateful. Even though he’s a rockstar (or karate master?), Tony is always ready to share the spotlight with his students, and that speaks volumes about his character.

Completing my committee, Tom Polger and Mike Richardson deserve recognition for their continued support. Tom gave me feedback on the first version of the paper that would become the basis for my dissertation and also on the first related conference presentation at the SSPP meeting in New Orleans; his (hard!) questions, since back then, have helped me avoid going down quite a few rabbit holes. Mike’s insight and encouragement, coming from the perspective of a modeler and ecological psychologist, made me believe in my project’s potential beyond disciplinary confines.

My time at UC would not have been as productive and stimulating if it weren’t for all the amazing people I got to work with here.
For their guidance, I thank the faculty in philosophy (especially Zvi Biener, John McEvoy, Jenefer Robinson, and Rob Skipper), in psychology (especially Paula Silva, Kevin Shockley, Mike Riley, and Tehran Davis), and in the Graduate School (Steve Howe, David Stradling, and Mike Riley again). For their support and company, I thank good friends and colleagues in philosophy (especially Frank Faries, Valentina Petrolini, Walter Stepanenko, Sahar Heydari Fard, Mohan Pillai, and Jonathan McKinney), in psychology (especially Chris Riehm, Patric Nordbeck, Colin Annand, Francis Grover, Ed Baggs, and Patrick Nalepka), and, in both literal and metaphorical walks back and forth between McMicken and Edwards, my buddy Vicente Raja. Vesna Kocani deserves special thanks for her work behind the scenes, keeping everything running smoothly.

I am indebted to Osvaldo Pessoa Jr. and to Caetano Plastino (University of São Paulo) for introducing me to philosophy of science, and cognitive science. I am also indebted to Otávio Bueno (University of Miami) and Nancy Nersessian (Georgia Tech) for their advice as I tried to figure out how to continue studying philosophy at the graduate level.

Working on and completing a Ph.D. takes a lot of time and energy. I wouldn’t have been able to pour so much of myself into this project if it weren’t for my mom, my ultimate model of perseverance and unwavering optimism. My mom lost her parents at fifteen, and had to drop out of high school to work full time; years later, a failed marriage and a divorce made it so that she had to raise my sister and me all by herself. Despite having gone through so many challenges in life (or, somehow, because of them), she has this resolutely positive outlook: if you work hard enough, things may still not turn out the way you wanted, but you’ll be okay. I’m not sure I totally buy that, but I am grateful that my mom does and that she worked hard so that I could have the educational opportunities she didn’t; her grit continues to be a source of inspiration.

I dedicate this accomplishment to Maggie, who—along with Grafite, Friendly &

Bijoux, Pulla & Snuffkin, all the chickens, and the occasional groundhog, fox and coyote—ensured that life, even during the PhD, was full and whole. I can’t wait to meet our little one.

Contents

Abstract

1 Scientific Representation, Mental Representation, and Embodied Cognition

1.1 Introduction
1.2 Anti-Psychologism in the Philosophical Literature on Scientific Models
    1.2.1 Is there a special problem of scientific representation?
1.3 Embodying Cognition: Wide Computationalism
1.4 Wide Computation in Scientific Modeling
1.5 Conclusion

2 Representationalism is a Dead End

2.1 Introduction
2.2 Representationalism and the perplexities of misrepresentation
2.3 Can’t have: representationalism is untenable
    2.3.1 Types of theories of representation
    2.3.2 Mind-independent views of representation fail with OC
    2.3.3 Mind-dependent views of representation fail with EC
2.4 Don’t need: representationalism is unnecessary
    2.4.1 Pragmatic reason

    2.4.2 Viable alternatives
2.5 Conclusion

3 Models as Tools: Making Artifactualism Leaner and Meaner

3.1 Introduction
3.2 The Virtues of Artifactualism (as We Know It)
3.3 Artifactualism as We Know It is Representationalist
3.4 What’s so bad about combining Artifactualism with Representationalism?
    3.4.1 A Bad View of Tools: the of the Linguistic Sign
    3.4.2 A Bad View of Science: The Myth of the Primacy of
    3.4.3 A Bad View of Philosophy: The Allure of Ideal Theory
    3.4.4 A Costly Analytical Strategy: The Representational Inheritance Tax
3.5 Artifactualism doesn’t need Representationalism: Toward a Variety of Artifactualism Worth Wanting
    3.5.1 A Philosophical Fresh Start: Models and/as ‘Simple Tools’
    3.5.2 Prolegomena to Future Strong Artifactualist Accounts

4 An Ecological Approach to Scientific Modeling

4.1 Abstract
4.2 A Primer on Ecological Psychology
    4.2.1 The Ecological Theory of Perception
    4.2.2 Theoretical and Ontological Foundations
4.3 Getting Clearer on Affordances
    4.3.1 Definition
    4.3.2 Empirical Support and Applications

4.4 An Affordance-Based View of Scientific Models as Tools
    4.4.1 A Tale of Two Other Relations
    4.4.2 Strengths
        Wide Applicability
        Discovery and Transfer
        Limits, and Within- and Between-Subject Variation
        Dissolving the Problem of Misrepresentation
        No Loans of Meaning

Bibliography

List of Figures

1.1 According to the extended mind view, an individual’s unextended mind (a) can become extended outward to include properly functionally linked external resources (b). Wide computationalism entails no commitment to the possibility of mind extension, and only posits that computational explanations of cognitive systems involving ‘wide’ computation (b) are not reducible to computational explanation of cognitive systems involving ‘narrow’ computation (a).
1.2 Example of a visualization generated by a computer simulation of the climatic effects of human-produced carbon dioxide emissions. Credits: NASA/GSFC.
1.3 The location of 7,280 temperature stations in the Global Historical Climatology Network catalog. Credits: Robert A. Rohde for the Global Warming Art project and Wikimedia Commons.
1.4 Concrete model built as part of the projected restoration of the Isabella Lake dam, in Kern County, California (left), and map showing the location of the modifications proposed (right). Credits: U.S. Army Corps of Engineers.

2.1 Representation as mind-independent: the model represents the target, and this objective two-place representation relation (r) is defined in terms of, e.g., isomorphism (van Fraassen 1980) or similarity (Giere 1988).
2.2 By using a model to represent some target, scientists establish (e) that the model represents the target (r). Strictly speaking, the representation relation is mind-dependent and therefore not reducible to r: it is necessarily constituted by both r and e.
2.3 Practical engagement with one object (pA) can facilitate the development of skills useful for engaging with another object in a different context (pB): e.g., playing with a beach ball or a toy guitar can help develop the motor skills needed for participating in a real soccer match or for performing with a band. Object A can be seen as a mediator or surrogate for B, but in this pragmatic, developmental view ‘mediation’ and ‘surrogacy’ do not entail a representational relation and do not require explanation in representational terms.

3.1 Representationalism (R) and Artifactualism (A) are independent (but not mutually exclusive) views of models. Current artifactual accounts combine both, adopting “weak artifactualism” (W.A.) and occupying the shaded area where the two views overlap. “Strong artifactualism” (S.A.), in contrast, is in the conceptual space that does not overlap with representational analyses of models.

4.1 The scope of psychological science indicated in red for (a) behaviorism and (b) cognitivism. Behaviorism black-boxed internal processing and instead studied measurable stimuli and responses. Cognitivism, in turn, shifted away from analysis of stimuli and responses (now conceptualized as inputs and outputs) and focused instead on the computational processes that might mediate the two. These differences aside, the overall schema is equivalent.
4.2 Illustration of ecological information. An organism’s relation to the environment generates information that is specific to the organism’s relation to the environment, as in the case of optic flow in (a). Ecological information is dynamic and enables the prospective control of action, as in the case of diving birds guiding their wing position by visual information of time-to-contact with water (b).
4.3 The theoretical scope (in red) and ontological basis of ecological psychology: the lawful informational reciprocity of organism and environment. An organism’s perception-action dynamics generate information that, in turn, specifies the same dynamics. In this sense, ecological information is relational—hence the bidirectional arrows. (Inspired by diagram in Turvey and Carello 1986, p. 143)

4.4 Affordances are agent-relative but mind-independent opportunities for action. On the one hand, an object does not have affordances in and of itself, but only for some agent. On the other hand, however, affordances are matches between characteristics of the agent and of the object, and as such they exist objectively, independently of the agent’s acting on them or even being aware of them. Buttons, switches and knobs objectively afford certain uses, but only to humans with a particular level and type of manual dexterity (and not to other humans nor to, say, elephants).
4.5 As tools, models have affordances, i.e., they offer opportunities for action to particular users. The bidirectional arrow highlights the nature of affordances as relational properties.
4.6 A. W. H. Phillips and the MONIAC (Monetary National Income Analogue Computer), also known as the Phillips Hydraulic Computer.
4.7 Visualizations from the Lotka-Volterra mathematical model. (a) Time evolution of variables x and y given fixed parameter values. (b) Phase-space plot depicting dynamics for a range of initial y values given the same set of fixed initial values for the variable x and for the parameters.
4.8 Scientific modeling is constituted by both user-model relations and user-target relations.

Chapter 1

Scientific Representation, Mental Representation, and Embodied Cognition

Abstract

The booming philosophical literature on model-based scientific representation has by and large developed in isolation from research on mental representation in psychology and cognitive science. Some philosophers of science have even explicitly rejected the idea that cognitive science might have anything to contribute to a philosophical understanding of modeling and simulation. Here I draw from theories of embodied cognition to offer an argument against this type of philosophical anti-psychologism. In particular, inspired by ‘wide computationalist’ approaches to cognition, I sketch a view of scientific models as external, socially-distributed, materially-extended mental representations. This way of thinking about models has meaningful implications for current philosophical debates about the uniqueness of scientific representation, and it also illustrates how interdisciplinary, inter-debate contact in general can transform philosophy of science and open up new avenues for philosophically understanding how we learn through modeling.

1.1 Introduction

Scientists use a wide variety of modeling techniques to study the world. These include concrete models (like Watson and Crick’s tin-and-wire model of the DNA, and the Phillips hydraulic model of the economy), abstract mathematical models (such as the Lotka-Volterra predator-prey equations, and the Haken-Kelso-Bunz motor coordination equations), and computer simulations (from climate modeling in meteorology to agent-based modeling in the social sciences, for example). This widespread practice of building models and simulations inside the lab to understand, explain or predict phenomena that happen in the “real world” outside the lab raises a number of philosophical questions, including ontological and epistemological questions about the nature of models and their capacity to provide knowledge of real-world phenomena.

In attempting to answer these questions arising from model-based scientific research, philosophers have put forward many different accounts of scientific representation to explain how models relate to the target phenomena in the real world being modeled. Prominent accounts have defined scientific representation as a relation of structural correspondence, such as isomorphism, between models and targets (van Fraassen 1980, 2008; Pincock 2012), or as a relation of similarity between models and targets (Giere 1988, 2004, 2010; Weisberg 2012), or, from a deflationary perspective, as any relation of correspondence between models and targets that supports the right inferential or interpretational scientific practices (Suarez 2004, 2015; Contessa 2007; Hughes 1997; Morrison 2015; Frigg and Nguyen 2017).

This booming philosophical literature on model-based scientific representation has, by and large, developed in isolation from debates about mental representation in (the philosophy of) psychology and cognitive science. Some philosophers of science openly reject the idea that thinking about mental representation could help at all in the project of understanding model-based scientific research. In many cases, however, even when this rejection isn’t explicitly endorsed, philosophers working on scientific representation tend to just quietly ignore work on mental representation.

The overarching goal of this chapter is to provide an argument against this form of anti-psychologism and to urge philosophers working on scientific representation to pay closer attention to work on mental representation. I work toward this overarching goal by giving a particular example from research in embodied cognitive science—one of the most exciting and innovative areas in contemporary cognitive science—that illustrates some of the potential benefits of interdisciplinary inter-debate contact. I begin, in Section 2, by further characterizing the philosophical debate about modeling and scientific representation as it stands. In Section 3, I provide a brief introduction to embodied cognitive science, in particular focusing on ‘wide computationalism’ as an ecumenical and broadly appealing option for cognitive scientists of all stripes. Section 4, then, connects the dots: drawing from wide computationalism, I sketch a view of scientific models as external, socially-distributed, materially-extended mental representations. As I show, this view offers an unexpected resolution to current philosophical debates about the uniqueness of scientific representation, and it also demonstrates,
more generally, how work in cognitive science can in fact help advance our philosophical understanding of scientific modeling.
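Before moving on, it may help to make one of the examples above concrete. The Lotka-Volterra predator-prey model mentioned at the start of this chapter is standardly written as the pair of coupled differential equations below; the notation is the conventional textbook one, reproduced here purely for illustration rather than taken from any of the accounts discussed in this dissertation:

\[
\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y,
\]

where \(x\) and \(y\) stand for prey and predator population densities, and \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) are positive parameters governing prey growth, predation, predator reproduction, and predator mortality. Nothing in what follows hinges on the details of these equations; they simply illustrate the kind of abstract mathematical object that, on the received view, is said to represent a real-world target system.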

1.2 Anti-Psychologism in the Philosophical Literature on Scientific Models

The topic of representation occupies center stage in the philosophy of science literature on modeling and simulation. As already indicated in the introduction, the literature includes a wide variety of theories of representation, that is, theories that attempt to give a general account of the representational relation between, on the one hand, scientific models and simulations and, on the other, the real-world systems and phenomena that scientists are ultimately interested in understanding, explaining and predicting. But the literature is also characterized by debate around what can be seen as different meta-theories of representation, that is, different views about what a good theory of representation should be like.

One such meta-theoretic question concerns precisely what a good theory of representation should be about. More specifically, the question here is whether a good theory of representation will be strictly about model-target relations or whether it must encompass an agential dimension and, accordingly, take into account how scientists make use of some model-target correspondences to represent aspects of target phenomena (for analyses of representation at this meta-theoretic level see, e.g., Chakravartty 2010, Knuuttila 2011, Sanches de Oliveira 2018).

Another meta-theoretic question concerns the extent to which our understanding of scientific representation can be enhanced by consideration of views of representation in other domains outside of scientific practice. Along these lines, a number of philosophers have recently drawn insights from views of representation in literature, theater and the visual arts to offer accounts of model-based representation (e.g., Godfrey-Smith 2009a, 2009b, Frigg 2010a, 2010b, Bueno and French 2011, Toon 2012), while others advocate for caution in these comparisons between science and art, especially in their “fictionalist” versions (e.g., Morrison 2015).

In stark contrast with the lively debate on the relation between views of representation in science and in art, little to no attention has been devoted to the analogous meta-theoretic question of whether views of representation in psychology and cognitive science can contribute to our understanding of scientific representation. This is in line with the antagonistic attitude toward psychology that philosophers of science have historically adopted. For much of the twentieth century, the field’s anti-psychologism was framed in terms of Reichenbach’s (1938) influential distinction between the context of discovery and the context of justification—or in Popper’s words, the distinction between “the process of conceiving a new idea, and the methods and results of examining it logically” (Popper 1959/2005, p. 8). Given this division, the scope of philosophy of science was traditionally construed as being limited to the logical aspects of ideal Science, to the exclusion of whatever psychological factors and processes may be at play in actual scientific practice.1

1 Yet see Schickore (2018) on Reichenbach’s more nuanced original views, which included a descriptive dimension, and also Melogno (2019) for a critical analysis of the relevant historiography.

In the current literature on modeling, anti-psychologism manifests itself most clearly in the form of indifference toward theories of mental representation, from psychology and cognitive science, as potential aids to a philosophical understanding of scientific representation. The rejection is sometimes explicit: as Bas van Fraassen has put it, “I will have no truck with mental representation, in any sense” because it “has nothing to contribute to our understanding of scientific representation” (van Fraassen 2008, p. 2). But such forceful statements are the exception rather than the rule. For the most part, contemporary philosophers of science have embodied the field’s longstanding anti-psychologism by simply (and perhaps unwittingly) ignoring the vast literature on mental representation as if it really had nothing to contribute to the enterprise of philosophically making sense of scientific representation. An intuitive and compelling reason to think this way is illustrated in a recent dispute about the uniqueness of scientific representation.

1.2.1 Is there a special problem of scientific representation?

No, say Craig Callender and Jonathan Cohen. In their controversial 2006 paper, Callender and Cohen put forward an argument that supposedly “solves or dissolves the so-called ‘problem of scientific representation’” (Callender and Cohen 2006, p. 67). First of all, they draw a distinction between the normative and the constitutive questions of representation, suggesting that the question of what makes something a good representation is separate from the question of what makes it a representation at all. They concede that scientific representation may be ‘normatively’ unique, such that the standards by which scientists evaluate the quality of their representations are different from the standards used elsewhere (e.g., in art). Yet, Callender and Cohen propose that scientific representation is not ‘constitutively’ sui generis. Even if scientists have different criteria for determining what counts as a correct or successful model of some target, still, these normative criteria for usefulness have nothing to do with what makes the models represent at all. For Callender and Cohen, there is nothing unique to scientific representation that cannot be answered by appeal to mental representation: “the varied representational vehicles used in scientific settings (models, equations, toothpick constructions, drawings, etc.) represent their targets (the behavior of ideal gases, quantum state evolutions, bridges) by virtue of the mental states of their makers/users” (p. 75).

Brandon Boesch (2017) disagrees with Callender and Cohen, and he argues, instead, that scientific representation is in fact sui generis. In particular, Boesch argues that theories of mental representation cannot explain the social or communal nature of scientific model-based representation. In his view, models represent their targets through a process he calls “licensing,” which he describes as “the set of activities of scientific practice by which scientists establish the representational relationship between a vehicle and its target” (Boesch 2017, p. 974). Building and using a model involves the inclusion of specific features into the model and the establishment of particular ways of interpreting those features as corresponding to features of target phenomena. Importantly, beyond agreeing on criteria for normatively evaluating models as successful or correct representations, for Boesch scientists communally negotiate the models’ very representational status. In his view, all scientific representations (even bad ones, i.e., even ones that are normatively defective) can only be representations at all if they are supported by the process of licensing. And, because licensing is inherently a matter of communal or social negotiation, Boesch argues, it is not reducible to mental representation: as a social, collective achievement, “[scientific] representation is not at all ‘in the mind’ of any particular agent” (p. 973).

This clash between Callender and Cohen’s proposal and Boesch’s view nicely illustrates the anti-psychologism that characterizes much of the philosophy of science literature on modeling. On the one hand, Boesch clearly embraces anti-psychologism when he makes the case for the uniqueness of scientific representation.
That is, in maintaining that scientific representation is not reducible to mental representation, Boesch is also suggesting that, even if we had a good understanding of the mental-representational processes at play when humans engage in modeling practices, there would still be a significant explanatory remainder that could only be accounted for by philosophical theorizing that is specific to scientific representation. In short, then, it follows from Boesch’s view that, when it comes to understanding the crucial dimensions of scientific representation, work on the nature of mental representation in psychology and cognitive science has little to offer.

Callender and Cohen’s position might, on the other hand, appear to challenge this form of anti-psychologism—after all, as we have just seen, their controversial claim is precisely that scientific representation is not constitutively unique, but is instead best understood as derivative of the representational mental states of individual scientists. Yet, it is important to note that even in making this point, Callender and Cohen are still broadly in line with our field’s anti-psychologism. To be sure, their proposal is psychologistic in the sense that it identifies the psychological features of scientists (namely, their mental states) as crucial parts of the explanandum of philosophy of science. In a deeper sense, however, Callender and Cohen’s position is still anti-psychologistic in that, rather than engaging with views of representation in psychology and cognitive science, Callender and Cohen’s understanding of mental representation is directly imported from Grice’s work in the philosophy of language, and as such their argument still exemplifies the insularity and parochialism typical of our field.

In the remainder of this chapter, my goal is to challenge the anti-psychologism(s) typical of contemporary philosophy of science, and to do so by showing how careful engagement with research on mental representation in the sciences of the mind can inspire and inform the philosophical debate about scientific representation and modeling. More specifically, I am convinced that this kind of interdisciplinary and inter-debate contact has the potential to (dis)solve philosophical puzzles, and in Section 4 I illustrate this point by showing how ‘wide computationalist’ approaches in embodied cognitive science support an understanding of model-based representation that challenges the dispute between Callender and Cohen (2006) and Boesch (2017), showing both sides to be partly right and partly wrong. But in order to get there, a brief introduction to embodied cognitive science, in general, and wide computationalism, in particular, is in order.

1.3 Embodying Cognition: Wide Computationalism

“Embodied cognition” is an umbrella term that has been used to refer to a wide variety of research projects and approaches in cognitive science, including work on: bodily-formatted representation (e.g., Gallese and Sinigaglia 2011; Goldman and de Vignemont 2009; Goldman 2014); grounded cognition (e.g., Lakoff and Johnson 1980, 1999; Barsalou 2008); the extended mind (e.g., Clark and Chalmers 1998; Menary 2010); predictive processing (e.g., Friston 2009; Hohwy 2012; Clark 2013); ecological psychology (e.g., Gibson 1979; Richardson et al 2008; Chemero 2009); wide computationalism (e.g., Wilson 1994, 2004); and enactivism (e.g., Maturana and Varela 1980; Varela, Thompson and Rosch 1991).

Although it is hard to pinpoint what all of these projects and approaches have in common, a defeasible generalization is that work in embodied cognitive science tends to reject the “smallism” and “localism” of traditional cognitive science (Sanches de Oliveira and Chemero 2015; Chemero and Silberstein 2008): rather than in principle stipulating that cognition is an intracranial process (see, e.g., Adams and Aizawa 2009), embodied cognitive science is open to the possibility that cognitive phenomena are constituted by elements beyond the smallest physical scale and beyond a single agent’s head.

My focus here is on the potential contribution that research in wide computationalist embodied cognitive science can make to the philosophy of science literature on modeling. This choice is partly out of convenience. While some strands of embodied cognitive science such as ecological psychology and enactivism are widely regarded as being “radical” (see, e.g., Clark 1997, 2001, and Chemero 2009), wide computationalism is, by comparison, much more theoretically ecumenical and in principle appealing to a broader range of cognitive scientists. As such, my hope is that the plausibility of wide computationalism will count in favor of my use of it, in Section 4, to sketch an approach to scientific modeling.

Because wide computationalism is similar to the much more well-known ‘extended mind view’, it will be instructive to begin by contrasting the two. The extended mind view, popularized in philosophical circles by Clark and Chalmers (1998), is based on a functional parity argument: for any element inside an agent’s head that plays a functional role such that the element counts as constitutive of that agent’s mind, if an element outside the agent’s head came to play the same functional role as the internal element, this external element should also be understood as constitutive of the agent’s mind. For example, if certain brain states are ordinarily seen as constitutive of one’s memory, but some extra-neural resource comes to play the same functional role (such as Otto’s notebook, in Clark and Chalmers’ example, or perhaps the phone numbers stored in one’s smartphone), then the extra-neural resource in question should, by parity, be seen as constitutive of one’s memory. This view is commonly referred to as the ‘extended mind’ view precisely because it proposes that the boundaries of an individual’s mind are fluid: your mind can, in the right circumstances, extend outward to encompass elements external to your body.
Wide computationalism is similar to the extended mind view in that it also recognizes the possibility that cognition involves elements external to any individual agent’s head or body. But a crucial difference between the two concerns the degree to which they are committed to psychological individualism. Rob Wilson describes psychological individualism as “the view that psychological states must be taxonomized so as to supervene on the intrinsic, physical properties of individuals” (Wilson 1994, p. 351). Thus framed, the extended mind view is fundamentally individualistic insofar as it accepts the individual agent as the starting point for psychological explanation, but adds that an individual agent’s mind may sometimes become extended by encompassing extrinsic elements and resources that play functional roles equivalent to the functional roles otherwise played by intrinsic elements and resources. In contrast, wide computationalism rejects psychological individualism altogether.

According to wide computationalism, the aim of computational psychological explanation is to explain the constitution, organization and behavior of cognitive systems rather than of individuals. The boundaries of some cognitive systems perfectly correspond to the boundaries of particular individual agents: these are systems that involve only “narrow” (intra-cranial) computation. But individual agents aren’t the only kind of cognitive system there is: or, in other words, not all cognitive systems neatly correspond to individual agents. Some cognitive systems are constituted by an organism along with features of its environment, and, as a result, computational psychological explanations of that cognitive system will necessarily involve consideration of elements external to that organism, including “wide” computational processes that “are not fully instantiated in [that] individual” (Wilson 1994, p. 352):

The account of actual implementation [in wide computational systems] is a generalization of that in the case of narrow computational systems: a wide computational system implements the “program” physically stored in the environment with which it causally interacts. (Wilson 1994, p. 361)

Notice that both wide computationalism and the extended mind view are perfectly compatible with the computational-representational focus typical of cognitive science—i.e., both are in line with the “central hypothesis” that “thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures” (Thagard 2005, p. 10). The key difference is that, while the extended mind view describes the individual mind as ‘leaking out into the world’, wide computationalism is agnostic about the actual or potential boundaries of individual minds (see Fig. 1.1). Wide computationalism dissociates computational explanation from psychological individualism, and therefore enables the computational analysis of cognitive systems whatever their constitution (see, e.g., Kersten 2017), that is, regardless of whether the cognitive system in question corresponds to a single individual’s mind, or to individuals in interaction with other individuals, or to individuals in interaction with tools, and so on.

FIGURE 1.1: According to the extended mind view, an individual’s unextended mind (a) can become extended outward to include properly functionally linked external resources (b). Wide computationalism entails no commitment to the possibility of mind extension, and only posits that computational explanations of cognitive systems involving ‘wide’ computation (b) are not reducible to computational explanation of cognitive systems involving ‘narrow’ computation (a).

At this point it will be helpful to consider two brief examples of cognitive systems understood through the lens of wide computationalism. Edwin Hutchins’ work on navigation is probably the textbook case. Hutchins has studied different instances of complex problem solving to reveal how the relevant cognitive processing is distributed across multiple people and technological equipment. In his classic 1995 book Cognition in the Wild, Hutchins described the intricate process of bringing a large ship to a harbor, emphasizing the role played by interpersonal collaboration and tools in making the task possible. And Hutchins has observed much the same in the activity of pilots: “In the cockpit, some of the relevant representational media are located within the individual pilots. Others, such as speech, are located between the pilots, and still others are in the physical structure of the cockpit” (Hutchins and Klausen 1996, p. 32). Ultimately, in these and other cases, Hutchins proposes that studying cognition “in the wild” reveals cognition to be far from limited to what goes on inside any individual’s head: at least some cognitive systems are constituted by multiple people and the material environment.

Another good example is John Sutton’s work on the socially-distributed nature of memory. For people in close relationships, remembering is often a collective task performed through interaction: “couples and families, or other enduring and integrated small groups such as old school friends, veterans, sports teams, committee members, or business partners often and repeatedly jointly remember significant episodes they have gone through together” (Sutton et al 2010, p. 539). Rather than assuming that ‘memory’ is the storage of representations inside an individual’s head, Sutton and colleagues take ‘remembering’ as a more theoretically-neutral starting point. As they explain, their goal is not to make a functional parity argument like the one put forward in the extended mind view: “the focus is not on whether or how much the internal and external resources have features in common, but on how they operate together in driving more-or-less intelligent thought and action” (p. 525). And they suggest that an important part of remembering is in fact performed by socially-distributed “transactive memory systems”—systems that “in certain cases can be highly integrated and enduring, and exhibit high levels of continuous reciprocal causation” (p. 547).

In these and other cases, the wide computationalist approach is to make sense of the behavior observed in terms of the computational processes at play, without any prejudices on what the boundaries of the system might be. In this way, wide computationalism is more ecumenical than views in which cognition is an intra-cranial feature that sometimes ‘leaks out into the world’: the goal in wide computationalism is simply to employ the usual tools and methods cognitive scientists have at their disposal and to understand cognitive systems whatever their spatiotemporal scale and makeup.

1.4 Wide Computation in Scientific Modeling

The views of cognition and mental representation reviewed in Section 3 motivate thinking of scientific modeling as involving similarly wide or distributed computational processing. Consider the case of climate modeling (see Fig. 1.2). Understanding, explanation and prediction of climatic phenomena typically depend on a number of different mathematical models and computer simulations: no single set of parameters or equations is accepted as “the right model,” but a multi-model approach is instead favored to support the epistemic goals of scientific research (see, e.g., Lloyd 2010, Parker 2011). And beyond the technical complexity of working with multiple simulations of multiple equations, the modeling process is also characterized by material and social complexity, involving large datasets that are stored in multiple servers, performing calculations with advanced equipment such as high performance computers, and through the combined activity of large teams of researchers, modelers and technicians that are often stationed in different laboratories around the globe (see Fig. 1.3).

Eric Winsberg (2018a, 2018b) describes the complex and distributed nature of epistemic labor in contemporary climate modeling as follows:

Not only is epistemic agency in climate science distributed across space (the science behind model modules comes from a variety of laboratories around the world) and domains of expertise, but also across time. No state-of-the-art, coupled atmosphere-ocean GCM (AOGCM) is literally built from the ground up in one short surveyable unit of time. They are assemblages of methods, modules, parameterization schemes, initial data packages, bits of code, and coupling schemes that have been built, tested, evaluated, and credentialed over years or even decades of work by climate scientists, mathematicians, and computer scientists of all stripes. No single person—indeed no group of people in any one place, at one time, or from any one field of expertise—is in any position to speak authoritatively about any AOGCM in its entirety. (Winsberg 2018b, p. 229-230)

FIGURE 1.2: Example of a visualization generated by a computer simulation of the climatic effects of human-produced carbon dioxide emissions. Credits: NASA/GSFC.

FIGURE 1.3: The location of 7,280 temperature stations in the Global Historical Climatology Network catalog. Credits: Robert A. Rohde for the Global Warming Art project and Wikimedia Commons.

The primary focus of Winsberg’s argument is the epistemological implications of the distributed nature of climate modeling. But the description he offers quite straightforwardly lends itself to interpretation in line with the wide computationalist approach in embodied cognitive science reviewed in the previous section. From a wide computationalist perspective, as was true of Hutchins’ ships and cockpits and of Sutton’s collective rememberers, the explanatory and predictive outcomes of climate modeling are irreducible to any single individual’s use and understanding of the models and of the exact nature of the models’ relation to the real climate. Instead, the explanatory and predictive outcomes are better understood as accomplishments of the entire cognitive system—a cognitive system that comprises individual researchers, their laboratories, and a complex web of computing machinery and data. This is not to say that individuals never make judgments of their own based on modeling outputs. But if a single researcher comes to adequately understand, explain and predict certain global climatic phenomena, it is only after the cognitive heavy-lifting has been done by the larger techno-scientific system the individual researcher is a part of.

Climate modeling may seem to be unique in its complex social and material distributedness. Arguably, however, it is different only in degree (rather than in kind) from smaller and more local instances of wide computation in model-based problem solving. In fact, according to Rob Wilson’s formulation, even the simple case of solving a long multiplication problem using pencil and paper involves wide computation: “A crucial part of the process of multiplication, namely, the storage of mathematical symbols, extends beyond the boundary of the individual. Considered as multipliers, we are part of wide computational systems” (Wilson 1994, p. 356). The processing and the outcome of pencil-and-paper multiplication cannot be fully accounted for in terms of the states and processes inherent to individual components in isolation from other components that also make up the system: the paper or the pencil by themselves cannot account for the mathematical result; similarly, the person’s hand, the person’s eyes or the person’s brain in and of themselves do not fully account for this particular result either. The pencil-paper-hand-eye-brain system’s behavior is the product of the combined work of all of these components, and it is this system-level performance that wide computational explanation aims to elucidate.

If this approach works for the computational explanation of pencil-and-paper multiplication, then it surely also applies to cases of scientific modeling that fall short of the complexity and distributedness of climate modeling. The use of scale models and concrete mockups is perhaps an intermediary case when it comes to the social and material “width” of the computational processing at play. In his book, Michael Weisberg (2012) uses the San Francisco Bay dam model to illustrate his similarity-based account of scientific representation.
An interesting characteristic of the San Francisco Bay modeling project that Weisberg highlights is the outcome of the project: in that particular case, the success of the modeling project meant that the authorities decided against building the proposed dam, making it so that the target system that the model represented (namely, a potential dam in the San Francisco Bay) never came into existence. For present purposes, however, it is important to see how, in this and other cases of concrete modeling (see a similar but less well-known example in Fig. 1.4), the modeling process and its outcome cannot be adequately described as the product of an individual scientist’s insight and epistemic labor. The relatively large material infrastructure involved, the different types of expertise required for building and operating the model, and the various technical, economic, political and environmental considerations at play in the particular situation all contribute to the modeling project in its execution and results. The epistemic accomplishment is first and foremost the system’s: it is only after the cognitive heavy-lifting has been done by the large system that individual scientists can glean lessons from the model. And, from a wide computationalist perspective, it is at the level of this larger (extra-cranial) system that computational explanations will be best suited to explain the transformation of informational input into epistemic (as well as political, pragmatic, etc.) output.

FIGURE 1.4: Concrete model built as part of the projected restoration of the Isabella Lake dam, in Kern County, California (left), and map showing the location of the modifications proposed (right). Credits: U.S. Army Corps of Engineers.

It might seem like the view I am sketching here isn’t all that novel: after all, at least Ronald Giere (e.g., 2002) and Tarja Knuuttila (e.g., 2017) have in recent years offered accounts of scientific modeling in this vicinity. But their focus on cognitive offloading suggests they are closer to analysis in terms of the extended mind view than of wide computationalism proper: in their accounts, models are seen as extending the scientists’ cognitive abilities, as if instantiating and accelerating or magnifying in external media processes that would otherwise be realized inside individuals’ heads. An extended mind-style analysis may be viable for some cases of scientific modeling, just as it may be viable for some more ordinary cases of extended/offloaded cognition. But cases like the ones I have described here seem to resist reduction to the cognitive states and processes of any single individual, or even of a single individual as offloading her states into tools at hand: instead, as seen above, cases like these call for the computational analysis of explanation and prediction as cognitive achievements realized by entire systems of data, equations, equipment, scientists, engineers, and so on. Moreover, following what was suggested in the previous section, the wide computationalist approach to modeling I am sketching here is actually less controversial and more ecumenical than any extended mind-style analysis: rather than positing that cognition leaks from scientists’ brains and that their mind extends into machines, in the wide computationalist view the computational analysis is simply scaled up to the larger, wide (extra-cranial) level at which the cognitive process is accomplished.

Crucially for present purposes, the wide computationalist approach to modeling I am sketching here is interesting and novel (compared to these other accounts inspired by the extended mind view) because of how it connects with the philosophical debate about whether or not scientific representation is unique and sui generis. Wide computationalism can be understood as a view on the nature of mental representation: in particular, it proposes that mental representations—the computational states investigated in the sciences of the mind—may be ‘narrow’ as well as ‘wide’. And it is the possibility of wide computation/representation that makes wide computationalism particularly relevant for the philosophy of science debate about modeling.

At first blush, the wide computationalist view of scientific modeling seems to vindicate Boesch’s criticism of Callender and Cohen: after all, framing modeling in terms of wide, distributed computation is in line with the recognition of an intrinsically social or communal dimension to model-based representation. At the same time, however, the view sketched here also shows Boesch to be partly wrong: this is because the wide computationalist approach accommodates the communal and social into the mental, framing models as external, socially-distributed, materially-extended mental representations.

This suggests that Callender and Cohen were in fact right to think that scientific representation is not sui generis: we can indeed make sense of scientific representation in terms of mental representation.
At the same time, however, unlike what Callender and Cohen propose, this is not because the representational character of scientific objects is derivative of the internal mental states of individuals: rather, in the approach sketched here, scientific models themselves are the external and socially-extended computational states or mental representations that constitute wide cognitive systems.

The result is a direct threat to the anti-psychologistic assumption that scientific representation is unique and not amenable to explanation from the perspective of the sciences of the mind. For if, as I’m proposing here, models are (or can be) constituents of wide computational cognitive systems, then understanding the model’s role in explanatory, predictive and other epistemic achievements does not require a special theory of scientific representation, but can be accomplished through consideration of how (wide) cognitive or mental representation supports the achievements of (wide) cognitive systems. Importantly, this scientifically-informed result reveals the inadequacy of positions on both sides of this particular philosophical debate, showing that the question itself might just be ill-framed.

1.5 Conclusion

What is the nature of the representational relation between scientific models and the target phenomena in the real world that scientists are interested in understanding? This question has occupied center stage in the recent philosophy of science literature. Yet, in trying to answer it, philosophers rarely if ever take into account how views of mental representation in cognitive science might contribute to an understanding of scientific representation. In this chapter I have challenged this anti-psychologism typical of our field by showing how a wide computationalist approach to modeling collapses the social and scientific into the mental, but without reducing external representation to internal mental states of individuals. This is an interesting result because it shows some of the contemporary debate to be ill-framed, as in the case of the two sides examined here on the question of the uniqueness of scientific representation. Moreover, the approach sketched here is also useful in the way it opens up a new avenue for philosophical (and scientific) investigation: rather than simply focusing on representationally analyzing model-target relations, wide computationalism motivates looking at the whole system’s behavior to elucidate the computational processes underlying model-based problem solving and to determine how the system as a whole turns informational input (data, and the questions guiding research) into scientific output (knowledge claims, understanding, predictions, policy guidance, etc). In challenging the anti-psychologism characteristic of the philosophical discussion, the view I am proposing here also illustrates how philosophers of science can benefit from lending an ear to colleagues in the sciences of the mind: other theories of mental representation will likely have different implications for a philosophical understanding of scientific representation, and we can only gain by pursuing greater interdisciplinary, inter-debate contact.

Chapter 2

Representationalism is a Dead End

Abstract

Representationalism—the view that scientific modeling is best understood in representational terms—is the received view in contemporary philosophy of science. Contributions to this literature have focused on a number of puzzles concerning the nature of representation and the epistemic role of misrepresentation, without considering whether these puzzles are the product of an inadequate analytical framework. The goal of this chapter is to suggest that this possibility should be taken seriously. The argument has two parts, employing the "can't have" and "don't need" tactics drawn from philosophy of mind. On the one hand, I propose that representationalism doesn't work: different ways to flesh out representationalism create a tension between its ontological and epistemological components and thereby undermine the view. On the other hand, I propose that representationalism is not needed in the first place—a position I articulate based on a pragmatic stance on the success of scientific research and on the feasibility of alternative philosophical frameworks. I conclude that representationalism is untenable and unnecessary, a philosophical dead end. A new way of thinking is called for if we are to make progress in our understanding of scientific modeling.

2.1 Introduction

Many contributions to the philosophy of science literature highlight the central role played by mediated or indirect forms of investigation. Rather than directly intervening upon the various real-world phenomena they are interested in, scientists often build and manipulate models that simulate those phenomena. Direct interventions are sometimes impractical, dangerous or even unethical. There are moral limits to how scientists can use human subjects and animals in the lab, just as there are practical barriers to direct experimentation on global climate change due to the phenomenon's complexity and spatio-temporal scale. In these and other cases, modeling enables scientists in all disciplines to indirectly advance their understanding of real-world phenomena. And not only are there different reasons for modeling, there are also many different ways to do it, utilizing distinct methods and modalities. Diagrams, graphs, mathematical equations, and concrete scale models have been used in science for centuries. More recently, technological advances have led to the widespread use of computer simulations and robotic models.

But how is it possible to learn through modeling? It is easy to see how mathematical equations, computer simulations, and robotic agents could help advance our understanding of mathematics, computing, and robotics, respectively. How is it that building and operating models enables scientists to learn something not only about the models themselves, but also about the real-world phenomena scientists are ultimately interested in? In short, why is modeling epistemically valuable? The answer seems obvious and intuitive: models can give us knowledge of their targets because they represent those targets. That is, models are related to real-world target phenomena in such a way that they can stand in for those targets in empirical research and that the outcomes of modeling generate knowledge of the phenomena being modeled. It is because they are representations of certain phenomena that mathematical equations, computer simulations and concrete models are viable indirect routes to understanding and explaining those phenomena.

This intuitive answer has, over the past couple of decades, shaped the literature on scientific modeling and brought to the forefront of philosophers' attention a number of problems relating to the nature of representation and the role of misrepresentation in science. Explaining what representation is and how it works turns out to be no trivial matter. Philosophers generally agree that modeling is a legitimate means to knowledge, and they also generally agree that the intuitive answer above is right, i.e., modeling is epistemically valuable because models stand in a special relation to their targets (namely, one of representation). But how to support this representationalist intuition? What is it about representation that makes it epistemically valuable in the way we take it to be and that justifies attempting to analyze model-based science in representational terms? Some theories of representation stand out as the most influential, but none has escaped criticism unscathed. And the continued development of revamped or brand-new theories to correct what was wrong with their predecessors seems to only multiply the disagreement about what representation is and how it works.
In a situation like this, when an intuitive view gives rise to problems no one ap- pears to be able to solve, it is wise to at least entertain the possibility that our intu- itive view was mistaken. This is what I do in this chapter. I begin, in Sect. 2.2, by Chapter 2. Representationalism is a Dead End 27 giving a brief overview of recent controversies in order to make explicit the represen- tationalist assumptions that pervade the literature. I identify representationalism as a methodological stance that is based on a twofold assumption comprised of an ontolog- ical component and an epistemological component, and I argue that these components create serious challenges for making sense of the role that misrepresentation (e.g., ide- alization and abstraction) is said to play in science. I then provide a two-part argument against representationalism inspired by the “can’t have” and “don’t need” tactics used in a different debate about content in philosophy of mind (see Hutto and Myin 2013, Myin and Hutto 2015). My goal is to show, first, that we cannot have a representation- alist approach to scientific modeling, and, second, that we do not even need one. The “can’t have” argument, presented in Sect. 2.3, proposes that representationalism does not work because, in any way of fleshing out the view, its ontological and epistemo- logical components undermine one another. The “don’t need” argument, in Sect. 2.4, proposes that, regardless of whether representationalism is tenable or not, represen- tationalism is unnecessary: pragmatic and conceptual considerations suggest that we can address what is philosophically interesting about scientific modeling without get- ting into the representationalist quagmire. Representationalism is a familiar way of thinking about model-based science, but it is a philosophical dead end, and we have good reason to pursue alternatives. Chapter 2. Representationalism is a Dead End 28

2.2 Representationalism and the perplexities of misrepresentation

How can building and manipulating models generate knowledge of some target phenomenon?

What makes this question particularly puzzling is the fact that, as many philosophers point out, models always contain idealizations and abstractions that make them imperfect copies of their targets. Models abstract away many of their target's complexities, neglecting details where possible so as not to complicate matters unnecessarily. Models also include intentional distortions. They often posit processes, elements and properties that are absent in the target, or that cannot even be found anywhere in nature, as in the case of infinite populations, frictionless planes, and the rational self-interested agent. These lies "by omission" and "by commission," as it were, pose the challenge of elucidating the contribution of "falsehoods" to scientific modeling, directing philosophers to the question: how can scientists learn through misrepresentation?

A considerable amount of attention in recent years has been devoted to explaining the status of misrepresentation in model-based science. Abstractions, simplifications, approximations and idealizations are recognized as useful means to the future development of "truer theories" (Wimsatt 1987), and as serving at the very least as temporary placeholders for more accurate descriptions. But some in the literature go so far as to claim that falsehoods are not defects of a model, but are often instrumental to its success: "fictions can be genuinely explanatory" (Bokulich 2012, p. 736); "false models can explain, and (...) they often do so in virtue of their idealizations" (Kennedy 2012, p. 332); and "idealizations aid in representation not simply by what they eliminate, such as noise or noncentral influences, but in virtue of what they add, that is, their positive representational content" (Potochnik 2017, p. 50). The options seem clear enough: either we hold that explanatory success requires truth/accuracy and accordingly see the false parts of models as explanatorily superfluous, or we agree with the philosophers just quoted in holding that, at least sometimes, the false parts of models are themselves required for models to be successful. Bokulich explains the situation as follows:

The field has largely split into two camps on this issue: those who think it is only the true parts of models that do explanatory work and those who think the falsehoods play an essential role in the model explanation. Those in the former camp rely on things like de-idealization and harmless analyses to show that the falsehoods do not get in the way of the true parts of the model that do the real explanatory work. Those in the latter camp have the challenging task of showing that some idealizations are essential and some fictions yield true insights. (Bokulich 2017, p. 108)

Whether misrepresentation is defended as a temporary fix or as a legitimate long-term strategy, acknowledging the utility of abstractions and idealizations in model-based research generates the problem of explaining just how falsehoods can contribute to the epistemic goals of science (Elgin 2004, 2017; Batterman and Rice 2014; Morrison 2015; Potochnik 2015, 2017).

Looming behind these philosophical questions about misrepresentation in scientific modeling is the representationalist intuition described in Sect. 2.1. One influential view, for example, treats idealization as a purposeful deviation from accurate representation or a "departure from complete, veridical representation of real-world phenomena" (Weisberg 2012, p. 98). Notice, however, that framing idealizations, abstractions etc. as falsehoods or misrepresentations presupposes, more generally, an understanding of models as representations and of modeling as a representational activity: it only makes sense to think that a model represents some target inaccurately and imperfectly if we think of models as representing at all. Furthermore, interest in elucidating how misrepresentation does not hinder, and perhaps even enhances, a model's explanatory and epistemic import presupposes a connection between the epistemic import of models and their representational character. That is, the fact that we take the role of misrepresentation in modeling to be a question of epistemological concern makes evident our assumption that modeling is an epistemic activity because it is representational.

This view of models and modeling should sound familiar and uncontroversial, and it is illustrated in pronouncements such as the following: that models are "the means by which scientists represent the world—both to themselves and for others" (Giere 1988, p. 62, emphasis added) and that "scientists use models to represent aspects of the world for various purposes" (Giere 2006, p. 63); that modeling is "fundamentally a strategy of indirect representation of the world" (Godfrey-Smith 2006a, p. 730, emphasis added) and an "indirect approach to representing complex or unknown processes in the real world" (Godfrey-Smith 2006b, p. 7, emphasis added), or, alternatively, that models are "candidates for the direct representation of observable phenomena" (van Fraassen 1980, p. 64, emphasis added); that "science is in the business of producing representations of the physical world" (Pincock 2012, p. 3, emphasis added) and that using mathematics in modeling, for example, "makes an epistemic contribution to the success of our scientific representations" by "aiding in the confirmation of the accuracy of a given representation" (p. 8, emphasis added); that models "are by definition incomplete and idealized descriptions of the systems they describe" (Bokulich 2017, p. 104, emphasis added) and that "scientific models are explicitly intended to represent phenomena only partially" (Potochnik 2017, p. 43, emphasis added); that "we need to know the variety of ways models can represent the world if we are to have faith in those representations as sources of knowledge" (Morrison 2015, p. 97, emphasis added); and, similarly, that "models must be representations: they can instruct us about the nature of reality only if they represent the selected parts or aspects of the world we investigate" and, for this reason, "if we want to understand how models allow us to learn about the world, we have to come to understand how they represent" (Frigg and Nguyen 2017a, p. 49, emphasis added).

On the surface, these quotes might all appear to be saying pretty much the same thing. Those familiar with the literature can "see through" the apparent overlap and identify hints of various disagreements philosophers have concerning representation, such as about whether it is 'direct' or 'indirect' (e.g. in the quotes by Godfrey-Smith and van Fraassen), about how mathematics contributes to representation (e.g., in the quote by Pincock), or about the role of misrepresentation (e.g., in the quotes by Bokulich, Potochnik, and Morrison). But I want to suggest that, in an important sense, the surface-level reading of these quotes is right. Despite all of the disagreements philosophers have about representation, one issue they all seem to agree about is that representation matters, that representation is what we should be thinking and disagreeing about as we try to understand scientific modeling. Taken together, the quotes reveal the representationalist intuition that pervades the philosophical literature on modeling and amounts to the following methodological stance:

Representationalism: scientific modeling is best understood representationally; i.e., making sense of model-based science, philosophically, requires analyzing it in representational terms.

As a methodological stance, representationalism is constituted by the following twofold representationalist assumption:

Ontological Component of Representationalism (OC): models are representations; i.e., models stand in a representational relation to target phenomena.

Epistemological Component of Representationalism (EC): modeling is epistemically valuable because of its representational nature; i.e., the representational relation between model and target is what secures the epistemic worth of modeling.

The upshot of representationalism thus construed is as follows. The first component of representationalism—the ontological component (OC)—holds that a model is a representation of some target. This means that, like many other human activities (such as, say, art), scientific modeling is a representational practice, and models, as tangible components of that practice, are representations (just as, for instance, paintings and sculptures can be artistic representations). Moreover, OC holds that a model is a representation of some target phenomenon or system: in this way, a model is a model of some target by virtue of representing that target.

Two points are important to note here. First, philosophers sometimes talk about "target-less modeling" or models with "missing targets"—this is what happens when scientists, knowingly or not, build models of non-real-world phenomena, such as simulations of three-sex biological populations or models of ether and phlogiston. Instances like these might appear not to endorse OC insofar as in these cases there is no actual target that the models represent. But this variety of modeling is still typically understood representationally: models that do not represent some real-world target are still understood as representations—they just represent some abstract, fictional, non-existing target (see, e.g., Godfrey-Smith 2006a and Morrison 2015).

This connects to the second point, which is that OC is a general ontological view that can be fleshed out in different ways. Part of the philosophical literature on models focuses on the question of the "ontology of models," whether they are concrete or abstract objects, set-theoretic structures, sentences, fictions, artifacts, and so on (see Gelfert 2017 for an overview of this specific debate). OC is not an alternative to these views on the ontology of models. Rather, these different takes on the ontological nature of models are typically developed as particular versions of the general ontological intuition behind representationalism (i.e., OC): they propose that models are, for example, objects that represent, or structures that represent, or fictions that represent, and so on. This means that OC should not be seen as a particular view among the many found in the debate about the ontology of models, but instead as distilling what virtually all views in the ontology debate assume, namely that models are the sort of entity that participates in a representation relation. In short, OC motivates representationalism as a methodological stance for understanding scientific modeling because it delineates a general way of thinking about models (i.e., as representations), which can then be developed into various particular accounts of the nature of models.

The second component of representationalism—the epistemological component (EC)—suggests that models contribute to central goals of science and that they do so as representations, that is, only because they represent their targets. Models can be used in communicating results and in science education, for example. Most prominently, however, EC makes the stronger claim that models contribute to explanatory and other epistemic goals of science, leading to understanding and knowledge of some target, and that this happens because models represent their targets—that is, a model's epistemic worth is an outcome of its representational relationship to some target.
In clarifying EC it is important to note that authors differ on whether they use 'representation' as a success term or not. For those who do, representation and accuracy do not come apart: in this view, the false parts of a model do not, strictly speaking, represent; saying that X represents Y entails that X represents Y accurately. This seems to be the case, for example, with Kennedy's (2012) "non representationalist view of model explanation." Her claim is that idealizations can play an important explanatory role. And she calls this a "non representationalist" view because she sees the false parts of models (i.e., idealizations) as falling short of representing (i.e., accurately representing) the target. Other authors who do not use 'representation' as a success term can make similar claims, agreeing with regard to the explanatory import of the false parts of models, while still seeing those false parts as positively (mis)representing—which would amount to a "representationalist" account in Kennedy's sense (see, e.g., Potochnik 2017).

This question will come up again in the next section when we discuss recent approaches that divorce representational status from accuracy. For present purposes it is important to point out that EC is meant as a general formulation that encompasses both senses of 'representation'. Whether one uses 'representation' as a success term or not, the epistemological component of representationalism holds that we need to understand how representation (or accurate representation) works if we are to understand why modeling is successful: this is, most fundamentally, what is communicated in claims that "we need to know the variety of ways models can represent the world if we are to have faith in those representations as sources of knowledge" (Morrison 2015, p. 97, emphasis added) and that "if we want to understand how models allow us to learn about the world, we have to come to understand how they represent" (Frigg and Nguyen 2017a, p. 49, emphasis added). In short, EC motivates representationalism as a methodological stance for understanding scientific modeling because it delineates a general way of thinking about why models are epistemically valuable (namely, because they represent), which can then be developed into different views of how modeling works given particular accounts of the nature of representation.

To conclude this section I want to briefly indicate an initial difficulty attending representationalism, namely that the role that many philosophers assign to idealization and abstraction in scientific modeling creates puzzles connected to each component of the twofold representationalist assumption. Following the ontological component of representationalism, for some object (say, a pair of first-order differential equations or a robotic agent) to be a model of some real-world target, a minimal requirement is that it represent that target. That is, OC entails that at least part of what makes X a model of some target Y is the fact that X represents Y, even if inaccurately (or partially, if you use 'representation' as a success term). Yet, this seems incompatible with the common idea, reviewed above, that introducing idealizations and abstractions does not necessarily stand in the way of a model's success and sometimes even enhances the model.
Put simply, the puzzle is: if what makes X a model of Y is that X represents Y, then how can X still be a model of Y when X misrepresents Y, or falls short of representing Y accurately?

A similar problem is associated with the epistemological component of representationalism. EC frames the epistemic value of modeling in terms of the representational relation between model and target. In other words, EC says that modeling advances scientific knowledge of target systems and phenomena because models represent those targets. The problem, however, is that EC seems to be in tension with the purported role of misrepresentation in science. If, as suggested by EC, models are epistemically valuable in investigations of some target because they represent the target, then how can models sometimes be more epistemically valuable when they misrepresent, i.e., when the representational relationship between the two is faulty?

These two puzzles show that popular views concerning the productive role of misrepresentation (or partial representation) in science are in tension with the representationalist stance they presume and its ontological and epistemological commitments. One might take this as indicating that our understanding of idealization and abstraction is in need of revision, and I think this is right. But perhaps a deeper lesson is that the representationalist intuition we took for granted just pushed our questions to another level instead of answering them. The assumption that models are best understood, ontologically and epistemologically, in representational terms gives rise to controversies surrounding the nature and status of misrepresentation. This assumption creates the need to account for how the parts of a model that do not represent accurately are themselves epistemically valuable or, at least, how they do not get in the way of the parts that are. But these problems only arise if we accept representationalism in the first place.

2.3 Can’t have: representationalism is untenable

I have suggested that the familiar and widespread representationalist approach to model-based science corresponds to a twofold assumption constituted by an ontological commitment (OC) and an epistemological commitment (EC). I have also shown how this twofold representationalist assumption is implicated in the role that misrepresentation (e.g., idealization and abstraction) is said to play in science. The same difficulty also applies to the task of explaining the epistemic import of successful representation. In this section I examine some of the main philosophical accounts of representation. This will involve revisiting a couple of well-known criticisms, but with a different goal than the one critics originally had. Each account of representation can be seen as an attempt to make good on the claims that scientific models are representations and that they are epistemically valuable because of their representational character. Many of the criticisms to follow were originally meant to show that some particular account of representation was mistaken about the nature of representation. Our problem space here is different. Rather than meaning to discredit this or that particular account of representation, the aim of this section is to cast doubt on the feasibility of the entire representationalist project. For this reason, I will not be concerned with whether a given account defines representation the right way or not. On any theory of representation, the ontological and epistemological commitments of representationalism undermine one another, making representationalism an untenable view of scientific modeling. Or so I will argue.

2.3.1 Types of theories of representation

Representation is typically thought of, broadly speaking, as a relation. But exactly what kind of relation is it, and between what sorts of entities does it hold? Traditional views treat representation as a mind-independent relation between model and target, in which the two are related by virtue of some property such as isomorphism or similarity. In views like these, the representation relation exists whenever the appropriate objective correspondence holds between model and target. This traditional conception of representation as mind-independent has been dubbed "informational" (Chakravartty 2010) because it sees models as objectively containing information about target phenomena, and it has also been described as "dyadic" (Giere 2004, Suarez 2004, Knuuttila 2011) in that it sees the representation relation as one that holds between two entities only, i.e., model and target. The designations "informational" and "dyadic" are contrasted with, respectively, "functional" views which see representation as a relation that depends on human insight or use and is established by the activities of cognitive agents (Chakravartty 2010), or "triadic" views which see representation as necessarily holding between three entities: agents, models and targets (Knuuttila 2011). In what follows I will focus first on the mind-independent (or informational, or dyadic) conception of representation and then on the mind-dependent (or functional, or triadic) conception to show how each fails with representationalism in different ways. It is worth noting from the outset that most participants in the literature have moved toward adopting a view of representation as a mind-dependent agential accomplishment—including even those authors often identified as having originally put forward views of representation as mind-independent. Still, working through the dyadic, mind-independent case, even if more briefly, is instructive for understanding the problems facing representationalism in general.
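To fix notation for what follows (the schematic gloss is mine, not drawn from the authors cited above), the contrast between the two theory-types can be put compactly:

\[
\text{dyadic (informational):}\quad R(m, t) \qquad\qquad \text{triadic (functional):}\quad R(s, m, t)
\]

where \(m\) is a model, \(t\) is a target phenomenon, and \(s\) is a scientist or community of model users. On the dyadic reading, whether \(R\) holds is settled by properties of \(m\) and \(t\) alone; on the triadic reading, there is no representation relation at all without the third relatum.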

2.3.2 Mind-independent views of representation fail with OC

Prominent accounts describe representation as a relation of isomorphism or of similarity, and the initial formulations of these accounts are often described as having relied on a conception of representation as being mind-independent. Bas van Fraassen's (1980) view is one of the go-to examples of an isomorphism-based view of representation as mind-independent. In an important passage, van Fraassen claims: a "theory is empirically adequate if it has some model such that all appearances are isomorphic to empirical substructures of that model" (p. 64). By "appearances" van Fraassen means observable phenomena in the real world, or more precisely "structures which can be described in experimental and measurement reports"; as for "empirical substructures," they are the parts of models within a theory that are "candidates for the direct representation of observable phenomena" (p. 64). In order for a theory to be empirically adequate, then, there needs to be a correspondence between the models of the theory and observable phenomena, which is to say that the candidates for direct representation of real-world phenomena need to actually represent them. And representation, for van Fraassen, is a relation of isomorphism, a one-to-one structural mapping or "total identity of structure" (p. 43): a model represents some target system when the two are isomorphic, that is, when there is complete structural identity between the "empirical substructures" of the model and the real-world "appearances," as he calls them.

A second example of a traditional view of representation as mind-independent is provided by Ronald Giere (1988). For Giere, models are representations: models represent "systems found in the real world" such as "springs and pendulums, projectiles and planets, violin strings and drum heads" (p. 62). But what does it mean to say that a model represents some real-world system? In Giere's view, a model represents some system by virtue of being similar to it: "[T]he primary relationship between models and the world is not truth, or correspondence, or even isomorphism, but similarity" (p. 93). Giere explicitly rejects van Fraassen's isomorphism-based view of representation because he takes it to set the bar too high. A model and its target may well be structurally similar to the point of being isomorphic, but this is the exception. In practice, models and targets usually fall short of this degree of structural mapping (p. 80). Different degrees and kinds of similarity may be relevant in different contexts. Still, in Giere's view, it is by virtue of being similar to real-world systems that models represent them.

Giere and van Fraassen thus clearly disagree on what it takes for a model to represent some target system, and yet their views in the works cited here seem to coincide in a crucial respect: both treat representation as a mind-independent relation. Figure 2.1 illustrates this type of view of representation in the usual fashion seen in the literature.

[Diagram: Model → Target, via the representation relation r]

FIGURE 2.1: Representation as mind-independent: the model represents the target, and this objective two-place representation relation (r) is defined in terms of, e.g., isomorphism (van Fraassen 1980) or similarity (Giere 1988).
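For readers who want the structural notion unpacked, here is a minimal formal sketch of isomorphism in the standard sense; the notation is mine and is offered only as a gloss on van Fraassen's phrase "total identity of structure":

\[
M \cong T \;\iff\; \text{there is a bijection } f : D_M \to D_T \text{ such that } R^M_i(x_1, \ldots, x_k) \iff R^T_i\big(f(x_1), \ldots, f(x_k)\big) \text{ for every relation } R_i \text{ and all } x_1, \ldots, x_k \in D_M,
\]

where \(D_M\) and \(D_T\) are the domains of the model's empirical substructure and of the "appearances," respectively. On this reading the two share all of their structure, which is precisely why critics such as Giere regard the requirement as too demanding for most actual model-target pairs.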

These isomorphism- and similarity-based views have been extensively criticized as giving an inadequate account of the nature of representation (see, e.g., Giere 1988, 2004, Suarez 2003, Knuuttila 2010), and, as we will see in the next subsection, they have also been revised (or re-explained) by their own authors. But our level of analysis is different. What matters for present purposes is to understand the theory-type (i.e., the conception of representation as mind-independent) and to determine how it fares with the twofold representationalist assumption. In short, our question is not whether these views get representation right, but rather: is representationalism, as composed of ontological and epistemological components (i.e., OC and EC), supported by a view of representation as a mind-independent, informational, two-place relation?

At first blush, views of representation as mind-independent seem highly promising. They provide a simple way to flesh out OC: saying that models stand in a representational relation to their targets means that there is some actual correspondence between them, be it one-to-one structural identity or some other form of similarity. Views of representation as mind-independent can thus point to objective features of a model and of a target that make the one a representation of the other. And this answer to OC, in turn, lends support to EC. Representation is not something we make up; and the existence of an objective correspondence between a model and its target inspires confidence in that model as a source of knowledge about the target. This makes EC seem perfectly unproblematic: model-based science is epistemically valuable because successful models stand in the appropriate mind-independent representational relation to their targets and can, for this reason, act as sources of knowledge about those targets.

However, the apparent success of mind-independent representation with EC is undermined by its failure with OC. Note how EC relies on OC: the idea that modeling is epistemically valuable because of its representational nature (i.e., EC) relies on the idea that modeling actually has this representational nature (i.e., OC). But mind-independent views notoriously fail as explanations of how scientific representation works, that is, as attempts to flesh out OC. And, by failing with OC, views of representation as mind-independent make representationalism untenable.

A first difficulty attending mind-independent views of representation is the problem of the asymmetry of modeling. For any account of representation as mind-independent to be successful in elucidating how model-based science works, it needs to accommodate the inherent asymmetry that characterizes modeling. Scientists build and manipulate models to learn about real-world systems, but usually not the other way around: scientists do not, for example, intervene on the global climate as a means to understanding how computer simulations work. To say the same using more explicitly representational terms, models represent their targets, but it does not seem right to say that target systems also represent their models. Because representation is an asymmetric relation, accounts of representation need to respect this asymmetry if they are to be plausible. Isomorphism is a symmetric relation and therefore fails in this respect: if A is isomorphic to B, then B is isomorphic to A as well (Suarez 2003, Knuuttila 2010). The same is true for similarity: if a model is similar to its target to some degree and in some respects, then the target will necessarily be similar to the model as well, to the same degree and in the same respects (Suarez 2003).

This difficulty applies to any account of representation as mind-independent. In order to succeed where isomorphism and similarity fail, alternative informational or dyadic views would have to involve an asymmetric relation to ensure that models represent their targets but not the other way around. But it is not at all clear how this could be done within a conception of representation as mind-independent. Notice that, logically speaking, asymmetry is a special kind of non-symmetry. MacBride (2016) provides good examples to illustrate the distinction: love can be symmetrical or non-symmetrical (e.g., if A loves B, B may or may not love A), but love is not asymmetric (e.g., the fact that A loves B does not entail that B doesn't love A); on the other hand, if A is taller than B, this guarantees that B is not taller than A, precisely because 'being taller than' is an asymmetric relation. The challenge, then, is to find some objective, mind-independent feature of models and targets that is neither too restrictive (a common charge against isomorphism) nor too inclusive (a common charge against similarity) and that points in the right direction, such that models represent their targets but targets do not represent their models. While this may not be a logically unsurmountable obstacle, it is in practice challenging enough that no account has been offered in recent decades which adequately addresses it: instead, virtually everyone in the literature has come to see the asymmetry of the representation relation as an agential feature and as reason to account for that relation as being mind-dependent.
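The logical distinction at work in this argument can be stated compactly; the formalization below is mine, following the standard definitions MacBride (2016) draws on, and is only meant to make the contrast explicit:

\[
R \text{ is symmetric} \iff \forall x \forall y\,(Rxy \rightarrow Ryx)
\]
\[
R \text{ is asymmetric} \iff \forall x \forall y\,(Rxy \rightarrow \neg Ryx)
\]
\[
R \text{ is non-symmetric} \iff R \text{ is neither symmetric nor asymmetric}
\]

Isomorphism and similarity both satisfy the first condition, whereas representation is supposed to satisfy the second; so neither relation can, on its own, supply the directionality that modeling exhibits.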
A second problem for mind-independent views of representation is the problem of the diversity of scientific practice, namely that this diversity precludes the identification of a single criterion for representation that applies to all of science. Isomorphism may be a useful approximation to how representation works in some scientific disciplines and in approaches that rely on mathematics to model structures. But many scientific projects are not in the business of mathematically modeling structures. Similarity may well apply in these other cases, but that is because similarity applies everywhere: anything is similar to anything else in various ways (Giere 2004). Scientists in different disciplines and research traditions model real-world phenomena in so many different ways that this diversity inspires skepticism about there being a single criterion that applies to all cases of scientific representation without casting too wide a net. Along similar lines, Suarez claims that isomorphism and similarity do not do justice to the variety of representations scientists use: "an analysis of the means of representation in terms of just one of these conditions would be unduly restrictive and local" (Suarez 2003, p. 230). But, again, this is not a problem only with isomorphism and similarity. The diversity of scientific practice makes it difficult for any view of representation as mind-independent to support OC by fully specifying what grounds the representational relation. Some philosophers have proposed that representation can involve different kinds of interpretative keys (see, e.g., Contessa 2007, Frigg and Nguyen 2017a). This move might help accommodate the diversity of modeling practices, but it is not an option for views of representation as mind-independent, in which the representation relation has to hold between model and target independently of human intention, agency, and interpretation. The challenge, then, is to find a single criterion for this mind-independent relation that can capture the diversity of scientific practice without being too inclusive.

One could perhaps adopt some sort of pluralism about representation as a mind-independent relation. Along these lines, for example, one might attempt to accommodate a number of different criteria to make sense of the diversity of scientific practice by assuming that the various criteria do not all apply to the same mind-independent, two-place relation: instead, each criterion corresponds to a distinct two-place relation, and different two-place relations connect different models and targets. Yet, if one somehow succeeded in identifying these distinct criteria that make up the multiplicity of two-place relations that suffice for representation, then that would actually be a good reason to give up on the notion of representation as mind-independent! After all, "representation" would no longer refer to a single type of two-place relation but instead it would be a merely conventional (and, therefore, mind-dependent) way to designate various distinct types of two-place relations that are only related insofar as we conceive of them all as forms of representation.

In sum, the failures of prominent isomorphism- and similarity-based accounts reveal more general ways in which any view of representation as a mind-independent relation faces serious challenges. For reasons such as the ones examined here, views adopting a conception of representation as mind-independent are unable to give a clear and plausible account of representation with which to support OC. And by failing with OC, these views thereby undermine EC as well. That is, if we have no good account of how and why the (informational, two-place, mind-independent) representational relation holds between model and target, then this motivates rejecting the intuition that models give us knowledge of their targets because of this (informational, two-place, mind-independent) representational relation. This results in the methodological stance of representationalism being unsupported when couched in a conception of representation as mind-independent.

2.3.3 Mind-dependent views of representation fail with EC

According to the views considered so far, the correspondence between a model and a target exists purely because of the features that the model and the target have in common—purely by virtue of what the model is like and what the target is like. But, consider that, while two entities may relate to one another in a number of mind-independent ways, ordinary usage seems to suggest that one represents the other only for humans, that is, only if humans use the one as a representation of the other. Suppose you witness a traffic accident and later you tell a friend about it, using your hands to show how the two cars in front of you collided. You might find it unproblematic to say that in this context your hands represent the two cars. And you might grant that there are mind-independent correspondences between hands and cars, some degree of similarity, such as their roughly approximate width-to-length ratio. Yet, it would seem wrong to say that hands always represent cars: they can represent cars and they do so in situations like the one described; still, what makes hands represent cars is not any objective correspondence between the two, but the fact that we intentionally use them that way, if and when we do so.

The recognition that traditional views failed to account for the role that human intention plays in representation has spurred alternative ways of understanding the representation relation. Rather than framing it as an objective relation between two entities, more recent accounts treat representation as a mind-dependent relation that necessarily involves humans as an agential or intentional component. Rather than thinking that "the model somehow 'directly' represents the object" (Giere 2010, p. 275), in the current perspective a model is a representation of a target only as a result of human activity. As van Fraassen has put it recently, "There is no representation except in the sense that some things are used, made, or taken, to represent some things as thus or so" (van Fraassen 2008, p. 23). In this sense, representation is a functional (rather than informational) term, and there can only be a representational relation between any two things if a third component is in place: models represent only if they are used by scientists for that purpose. In contrast with Figure 2.1, Figure 2.2 illustrates representation as a functional, three-place, mind-dependent relation. The inclusion of an intentional aspect to representation, through the addition of a third relatum, is found in updated versions of both isomorphism- and similarity-based views, as illustrated in the following quotes:

[Diagram: Scientists establish (e) the model's representational use; Model → Target, via the representation relation r]

FIGURE 2.2: By using a model to represent some target, scientists establish (e) that the model represents the target (r). Strictly speaking, the representation relation is mind-dependent and therefore not reducible to r: it is necessarily constituted by both r and e.

A model can (be used to) represent a given phenomenon accurately only if it has a substructure isomorphic to that phenomenon. (That structural relationship to the phenomenon is of course not what makes it a representation, but what makes it accurate: it is its role in use that bestows the representational role.) (van Fraassen 2008, p. 309, italics original)

The formula is: Agents (1) intend; (2) to use model, M; (3) to represent a part of the world, W; (4) for some purpose, P. So agents specify which similarities are intended, and for what purpose. This conception eliminates the problem of multiple similarities and introduces the necessary asymmetry. I propose to call this "The Intentional Conception of Scientific Representation." (Giere 2010, p. 274)

Besides these updated versions of isomorphism and similarity, other original accounts that implicitly or explicitly put forward representation as a mind-dependent relation include the inferential account (Suarez 2003, 2004), the interpretational account (Contessa 2007), the weighted feature-matching account (Weisberg 2012), the Denotation, Demonstration, Interpretation or DDI account (Hughes 1997), and the Denotation, Exemplification, Keying-up, Imputation or DEKI account (Frigg and Nguyen 2017a, 2017b). All of these accounts have in common the idea that representation is not a brute fact about the model-target dyad: rather, representation necessarily involves an agent responsible for establishing the "representational mapping" and determining what represents what, to what extent, and in what way. A given model may be isomorphic to its target or it may be similar to the target in non-structural ways and these are perfectly legitimate two-place relations that may be of scientific interest. But, in a functional view of representation as mind-dependent, neither isomorphism nor similarity (nor any other two-place relation) entails a relation of representation. To be sure, we often take advantage of correspondences between different entities when we use one to represent the other, as in the case of hands and cars discussed above. Still, in views of representation as mind-dependent, it is our use of these correspondences (rather than the correspondences themselves) that creates the representation relation.

Is representation in fact a mind-dependent relation? This is not a question I will attempt to answer here. Instead, what matters for this chapter's argument is to determine how views of representation as mind-dependent fare with OC and EC. This decides whether mind-dependent views can support the representationalist assumptions philosophers of science by and large take for granted. We have seen that views of representation as mind-independent fail with OC. In this respect, mind-dependent views appear to fare better. Indeed, the main motivation for the formulation of mind-dependent views in the first place was precisely to give a better account of the representation relation—in the current terminology, to flesh out OC in a more appropriate way. And that they do. Mind-dependent views accommodate the intuitions many people have about how representation works, most importantly intuitions about how representation involves intention and agency. Mind-dependent views seem to tell a more realistic story about what makes a model a representation of its target: models represent their targets, not independently from human activity, but as a result of scientific practice and the purposes guiding research; i.e., models represent their targets because they are used as representations of those targets.

The problem, however, is that mind-dependent views strengthen OC at the cost of weakening EC. Granting for the sake of the argument that the functional conception is correct and representation is a mind-dependent relation, this secures OC in a way that undermines EC. Mind-dependent views dissociate the question of what makes something represent well from the question of what makes it represent at all (see, e.g., Callender and Cohen 2006, Chakravartty 2010, Knuuttila 2011).
This means that, in a mind-dependent view of representation, anything can by definition represent anything else in some context as long as someone is willing to establish the relevant representational mapping: intention and stipulation are not sufficient to make something represent successfully or accurately, but they are enough to make it a representation. The problem is that even if anything can represent anything else for someone, plausibly not anything can be informative about anything else. Scientists do not use any old representation to model their targets of investigation because not any representation would be explanatory, illuminating, or epistemically valuable in some way. If representation entails only usage, not success or accuracy, then it no longer makes good sense to also hold that the model-target representational relation is what enables us to learn about a target through modeling. Accepting the conception of representation as mind-dependent would render the representationalist stance absurd: in this view we can paraphrase EC and OC, respectively, as stating that scientists can use models to learn about target phenomena because models represent their targets, and that models represent their targets because scientists use them as representations of those targets—in short, this would mean that the reason scientists can use models to study real-world phenomena is that they do use them to study real-world phenomena.

Suarez's (2004) "inferential" account of representation is partly sensitive to the concern I am raising, but it is still unable to avoid the tension between OC and EC that characterizes views of representation as mind-dependent. Suarez gives the following formulation of representation in inferential terms:

[Inf]. A represents B only if (i) the representational force of A points towards B, and (ii) A allows competent and informed agents to draw specific inferences regarding B. (p. 773)

Note, however, that by "representational force" Suarez means "the capacity of a source to lead a competent and informed user to a consideration of the target," and he further adds that this capacity is "fixed and maintained in part by the intended representational uses of the source on the part of agents" (p. 768). This suggests that the definition of 'representational force' already subsumes the second criterion, such that the two criteria of [Inf] collapse into a single one that Suarez describes as being "fixed and maintained" by us and our intentions. In the inferential account, then, representation is still a matter of stipulation, even if it is not entirely "arbitrary" stipulation, as Suarez claims. Constraints make it so that not any model is informative about any target; but what makes a model informative about some target is that it represents that target (i.e., that its representational force points to the target), which in turn is determined by our goals and intentions to use the model as a representation of the target. This simply amounts to a more roundabout way of saying that OC is true because of us, because we make it so. And, as I have argued, this kind of view of what makes X a representation of Y (i.e., this view of OC) undermines the claim that we learn about Y because X represents it (i.e., it undermines EC).

The conception of representation as mind-dependent thus fails with EC in a view like Suarez's. But what about other accounts that also treat representation as mind-dependent? As the brief discussion above makes clear, Suarez's (2004) is a bare-bones, deflationary view of representation: he does not put forward a substantive account of what sort of model-target correspondence needs to be in place for humans to use a model as a representation of some target; instead, he outlines minimum general conditions that he takes to constitute the representation relation in every particular case. In contrast with this minimalist or deflationary approach, other mind-dependent views add more substantive constraints to what creates representation relations, as is the case, for example, with the updated versions of isomorphism- and similarity-based views mentioned above. Could these substantive (non-deflationary) accounts of representation as mind-dependent provide more secure foundations for the representationalist methodological stance?

I don't think so. Notice how, as I proposed above, what makes Suarez's account fail with EC is not the fact that it is deflationary. Suarez's view omits details about the specific sorts of model-target correspondences that need to be in place for the representation relation to exist, and this lack of details makes it a deflationary rather than substantive account. Importantly, however, the lack of these details is not what makes Suarez's account fail with EC: rather, the account fails with EC because of its general formulation of representation as a mind-dependent relation—crucially, it fails precisely because it fleshes out OC in mind-dependent terms, which, in turn, undermines EC.

This suggests that adding details about specific sorts of model-target relations (i.e., putting forward a substantive account rather than a deflationary one) would not suffice to rescue the mind-dependent conception: while the resulting account may or may not be more attractive for other reasons, it would still have to deal with the same problem attending deflationary views because the way it differs from deflationary views does not address what makes deflationary views fail with EC. Not only that, but substantive accounts would likely face additional problems related to the substantive claims they make about the nature of the model-target relation, e.g., if it is isomorphism, similarity, or something else. This means that, in addition to failing with EC in the way a deflationary view does, these accounts may also fail due to the specific way they flesh out OC. Some of the additional criticisms of isomorphism- and similarity-based mind-dependent views would be recapitulations of the familiar ones referred to in Sect. 2.3.2, while other criticisms would be more specific. The important point is that, although much more could be said about substantive mind-dependent views, not much else is needed for the purposes of the present argument—and this because what is at stake here is not what the best account of representation is, but rather how different views of representation fare with representationalism.

If representations are ever epistemically valuable, it cannot be because they are representations. After all, when representational success is separated from representation simpliciter, being a representation entails nothing about quality or accuracy, only about use. The advocate of representationalism could try to defend her view by conceding that being used as a representation suffices for a model to represent at all while also holding that some additional criterion is what makes the model represent its target well or accurately. While this move toward a substantive view may (or may not) lead to a better understanding of how representation works, it would not salvage representationalism because it would not resolve the tension between OC and EC. This becomes clear when we consider that this putative additional criterion for accuracy in a substantive mind-dependent account would itself be either a mind-independent or a mind-dependent relation between model and target. The latter should be rejected from the get-go: saying that the criterion for accuracy is mind-dependent would result in a regress because we would be back to the same position that led us to look for a criterion separate from usage in the first place, and this new criterion would itself also be either mind-independent or mind-dependent, and so on.

But what about the former, that is, if the criterion of accuracy in a mind-dependent account of representation is mind-independent? This is what van Fraassen (2008) proposes in the long quote above—namely, that isomorphism is what makes the model accurate, not what makes it a representation. For reasons like the ones examined in Sect. 2.3.2, we should have little confidence in the possibility that a single mind-independent criterion of accuracy applies in all of science without being too loose, vague or trivial. But even if we succeeded in finding this elusive mind-independent criterion of accuracy with the right scope of application, this would still not enable views of representation as a mind-dependent relation to secure EC.
In this conception of representation, mind-independent relations such as isomorphism and similarity are strictly speaking non-representational: they hold between two entities independently of the two entities being used for representational purposes. And identifying epistemic value with a non-representational criterion is not a way to resolve the tension between EC and OC in the mind-dependent conception of representation because that would, instead, necessitate rejecting EC: in this view, models would be epistemically valuable due to some non-representational relation they bear to their targets, rather than due to the representational relation between the two. And by failing with EC, even this hypothetical substantive mind-dependent account of representation would undermine representationalism.

The advocate of representationalism might recognize that she cannot rescue EC but still resist the conclusion that the representationalist methodological stance is unjustified: she could do so by claiming that she was never committed to EC as framed earlier but rather to an alternative formulation (say, EC*) according to which models are epistemically valuable because of their representational accuracy (rather than because of their representational status simpliciter). But this move would not avoid the criticisms just raised. Instead, the same problem would arise of whether accuracy should be defined in mind-dependent or mind-independent terms. Not only that: this move would also raise additional problems. One problem is related to our earlier discussion about misrepresentation. As seen in Sect. 2.2, an increasingly popular philosophical view is that idealizations, abstractions and other misrepresentations can actually make models better—i.e., that sometimes models are epistemically valuable not despite misrepresenting their targets but precisely because of misrepresentation. Adopting something like EC* and making accuracy the criterion for epistemic value would entail that idealization and abstraction (i.e., inaccurate representation) cannot make models better epistemic tools. This would not only go against a growing philosophical view, but would also fly in the face of scientific practice and make the widespread reliance on idealization and abstraction into a mystery—something that, somehow, is more and more used for epistemic purposes but is not epistemically valuable. And lastly, adopting something like EC* would in fact motivate abandoning the methodological stance of representationalism. If fleshing out our view of models as representations (in a mind-dependent conception) forces us to hold that the epistemic value of modeling is not due to its representational nature (but rather to some notion of accuracy), then we no longer need an account of scientific models in representational terms, because the real epistemological heavy-lifting would be done by the model-target correspondence or informational two-place relation (which is not representational). Accordingly, if our goal was to find out why scientific modeling is successful and how it leads to knowledge of real-world target phenomena, figuring out what the right theory of representation is would not help, because the answer to that question will be some non-representational, mind-independent feature that models and targets have in common.

2.4 Don’t need: representationalism is unnecessary

The previous section proposed a "can't have" argument, showing that the methodological stance of representationalism does not work: the different ways to flesh out the representationalist approach, based on views of representation as mind-independent or as mind-dependent, all undermine representationalism by failing with at least one of its two components (i.e., OC or EC). Naturally, there is no alternative way to flesh out representationalism because there are no alternative conceptions of representation: either representation is mind-independent or it is mind-dependent. What then? In this section I offer a "don't need" argument, claiming that, whether or not representationalism is something we can have, a representationalist approach is not actually required for making sense of model-based science. I discuss two reasons why representationalism is uncalled for, and along the way I indicate promising directions for advancing our understanding of scientific modeling without stepping into the representationalist minefield.

2.4.1 Pragmatic reason

The first reason philosophy of science does not need representationalism is what I call the pragmatic reason. Scientists build and manipulate models of many different sorts to study a wide range of phenomena, and in this process, they learn many different things about how the world works. Now, most scientists clearly do not have a theory of representation. Some chemists use simulations to study chemical reactions, some biologists use mathematical equations to understand population dynamics, and some economists have used a hydraulic analog computer like the Phillips Machine to learn about financial systems—and they all manage to learn about their objects of study via modeling without having an account of what it means for their simulations, equations and machines to represent chemical reactions, populations and financial systems. Of course, scientists have standards, both explicit and tacit, that guide the procedures and tools they use in their research. But these are standards for making "good models," not necessarily criteria for making (accurate) representations of target phenomena. In practice, the precise meaning of "good model" depends on a number of factors. Crucial factors implicated in what makes a "good model" typically include the research project's disciplinary and theoretical context, the methodological and technological background (i.e., which prior "successful" models a given project builds on), and what sorts of questions the project is meant to address (e.g., whether it aims to generate predictions of future events, explain past events, guide real-world interventions, etc.). Modeling is always informed by these disciplinary, theoretical, methodological, technological, erotetic and purposive aspects of particular research projects. And, taking scientific practice at face value, success in following these various kinds of standards is what justifies the epistemic claims coming out of model-based research. That is, models are constrained in their construction and usage by all of these theoretical and practical standards that have come to be accepted by the scientific community as justified paths to knowledge, and, as a result, models have their epistemic justification built in: models are meant "to solve [certain] tasks, and their success or failure at the task at hand is both the measure of their value and the justification of their design" (Isaac 2013, p. 3622).

The representationalist methodological stance is predicated on the assumption that the single most important factor in model-based science, and the factor to be elucidated philosophically, is the (representational) relation between a model and a target system: models are taken to be "about" their targets, and to be evaluated by scientists in terms of the success of this relation. While I will have more to say concerning "aboutness" in Sect. 2.4.2, what matters for the present point is to see how it is more adequate, pragmatically speaking, to bring to the forefront other factors, such as the disciplinary, theoretical, methodological, technological, erotetic and purposive aspects I have indicated here. An example might help illustrate this point. Climate modeling, for instance, typically involves the use of multiple models: no single model is accepted as "the right model" but, together, a number of competing models are used to support explanatory or predictive practices (see, e.g., Lloyd 2010, Parker 2011).
In these typical cases, the various climate models are representationally incompatible (i.e., if analyzed representationally, each model "describes" the climate in ways that are known to be false and which contradict the other models). Yet, following theoretical and practical (methodological, technological, etc.) standards, scientists evaluate their model-based climate research as more or less successful and justified to the degree that it enables them to accomplish relevant goals. The pragmatic stance I am proposing here takes this feature of climate modeling to be the rule in science rather than the exception. The primacy of pragmatic value (understood broadly to include the various theoretical, methodological, technological, and other practical aspects) explains why scientists can advance their knowledge through model-based research without having a theory of representation and without needing to wait for philosophers to provide the correct account of representation that explains the success of good models and the failure of bad ones. In practice, scientists care about whether or not their models "work"—whether the models fruitfully connect with and advance current knowledge, methods, technology, and research goals—not whether the models represent real-world target phenomena in the sense of meeting some formal definition of "representation."

What I have said up to now highlights one way in which this is a "pragmatic" reason, namely in that it takes scientific practice seriously and recognizes the practical epistemic justification of modeling as independent of representational status and representational accuracy. But this reason is also "pragmatic" in the way it connects to the philosophical approach of the American pragmatist tradition. The reader might be willing to grant that scientists do not need a theory of representation in order to be epistemically successful, but still be inclined to hold that philosophers need one in order to adequately make sense of the scientists' epistemic success. Here, though, the pragmatist response is that this would only lead to bad philosophy.

William James (1907) describes the now famous anecdote of a man who goes around a tree to try to see a squirrel while the squirrel is, at the same time, also going around the same tree to try to hide from the man: in going around the tree, does the man also go around the squirrel? James concludes that the disagreement between competing answers to this question is pointless: "If no practical difference whatever can be traced, then the alternatives mean practically the same thing, and all dispute is idle" (p. 45). Later on in the same lecture, generalizing the lesson from the anecdote to various philosophical debates, James claims: "It is astonishing how many philosophical disputes collapse into insignificance the moment you subject them to this simple test of tracing a concrete consequence. There can be no difference anywhere that doesn't make a difference elsewhere" (pp. 49-50, emphasis original).

James' quote could perhaps be read as condemning any kind of abstract philosophizing—but that is more than I will ask the reader to accept here. My point, instead, is to suggest the following: if we accept that thinking in representational terms is not necessary for the scientists' epistemic success, then giving a philosophical account of scientific practice in representational terms becomes a lot less urgent than it would otherwise have been—less urgent, say, than it would have been if scientists in fact used theories of representation to justify, to themselves, their claims to knowledge stemming from model-based research. Accordingly, philosophical disputes about representation might just be complicating things more than is necessary and beyond concrete analytical purchase on what is going on in science. There are many different philosophical views about representation: about whether it is mind-independent or not, whether it is a matter of isomorphism or interpretation or something else, whether its definition includes accuracy or only use, whether misrepresentation is a legitimate long-term strategy or whether idealizations and abstractions are merely place-holders for more accurate representation, and so on. But we ought to consider carefully whether the differences between these views are associated with real differences elsewhere: the cost of not pondering this possibility is practical irrelevance. If scientists get by just fine without conclusively settling any of these matters, then we don't need to settle them either if our goal is to make sense of actual (rather than 'ideal') science: in order to make sense of the epistemic worth of modeling, we should pay attention to factors like the standards and practices mentioned above—that is, we should pay attention to what actually makes a difference for scientists as they use models to learn about the world.

To be sure, this does not mean that philosophical meta-level analysis of science is unimportant, nor, more specifically, that trying to elucidate how we learn from models is pointless. First, because the standards involved in the pragmatic-epistemic justification of model-based research are often tacit or only poorly articulated within scientific practice, philosophers have plenty of work to do helping clarify those standards.
This is work that retains the distinctly meta-level nature characteristic of philosophical inquiry but which has clear real-world implications insofar as it is informed by and closely aligned with actual scientific practice (as opposed to an abstract, armchair reconstruction of science). And, second, it does not follow from the foregoing that the specific goal of elucidating how we learn from modeling is worthless: on the contrary, I take this to be a crucial goal—but one that, following my argument in Sect. 2.3, representationalism cannot help us accomplish. If we are to philosophically make sense of how scientists learn from modeling, we need to take into account what scientists do and what in fact guides the design and evaluation of models in real scientific practice. My contention is that the representationalist framework is not required for making progress here because the scientists' pragmatic-epistemic success is independent from their models' meeting the requirements of formal definitions of representation.

2.4.2 Viable alternatives

The second reason representationalism is unnecessary for making sense of scientific modeling is that viable alternatives exist. Representationalism is popular, but it is not the only game in town. In fact, even within recent representationalist approaches, particularly of the deflationary variety, there are useful philosophical resources for building more successful ways of understanding scientific modeling. Here I focus on the notions of 'surrogate reasoning' and 'mediation' used by Mauricio Suarez and Margaret Morrison.

Morrison has long advocated the view of models as mediators. An early articulation of this view can be found in her collaboration with Mary Morgan (see Morgan and Morrison 1999), where the two proposed an account of models as autonomous mediating instruments. In their view, models contain elements that are shaped by theoretical commitments and by empirical data, but which are not fully determined by either. Models are thus partially dependent on both theory and phenomena (i.e., data) while also being partially independent from both, and, for this reason, models mediate between the two, thus helping advance our understanding of both our theories and the world. More recently Morrison has reiterated this view of models as theory-world mediators while also emphasizing an additional way in which models are mediators: "I use the term 'mediated' here to indicate that the model functions as a kind of 'stand-in' or replacement for the system under investigation and that it furnishes only a partial representation; it is, in essence, one step removed from the real system" (Morrison 2015, p. 153). In this second sense, models are mediators in that they mediate our contact with phenomena: for example, for the biologist using a set of first-order differential equations to study population dynamics, the model takes the place of the actual population, and manipulating the model can replace, say, field work as the method of investigation. For Morrison this means that, in addition to connecting scientific theory and empirical data, models are mediators because they become an indirect link between scientists and phenomena, as alternatives to more direct investigation. And this second understanding of 'mediation' is exactly what Suarez (2003, 2004, 2015) means when he says that models enable 'surrogative reasoning': for him, in modeling, scientists reason about an object (a model) as a means to reasoning about another object (some target), and, in this sense, models act as surrogates for thinking and learning about some other object.

Crucially, neither 'mediation' nor 'surrogative reasoning' is inherently representational. To be sure, both Morrison and Suarez use the two terms representationally: for Morrison, models have both theory and data represented in them (as theory-world mediators) and they also represent phenomena to scientists (as experimental stand-ins); and for Suarez, surrogative reasoning is "the main purpose of representation" (2003, p. 229) and "the primary function of scientific representation" (2004, p. 769). But this representational use of 'mediation' and 'surrogative reasoning' is not conceptually required, and a pragmatic rendering of both provides a representationally-neutral alternative.

Consider how skill development requires practice with objects that are progressively more complex and specialized.
Playing sports competitively or performing a song with a musical instrument while others sing along are abilities that do not arise out of the blue. Rather, learning requires practice, which, for kids, typically begins with simpler tools: plastic bat and ball for baseball, a rubber ball for soccer, a toy keyboard, a plastic guitar. Developing motor fluency in free-form exploration of these toy objects enables the addition of constraints, such as learning how and when to play a certain sequence of musical notes or how and when to kick the ball with the outside of your foot. And practice with simpler objects ultimately enables the learner to shift to using conventional instruments, and go from, say, a toy guitar to a Gibson Les Paul or from a rubber soccer ball to the Telstar 18 (the official ball of the 2018 FIFA World Cup). These simpler, toy instruments can be said to act as mediators and surrogates for more advanced, conventional instruments. One might wish to describe the surrogates as representations of conventional instruments, but this is not required. In fact, if our goal is to make sense of the expert performance of professional athletes or musicians, it is arguably more illuminating to analyze their performance in terms of skill development through practice (see Figure 2.3).

[Figure 2.3: schematic of an Agent practically engaging (pA, pB) with Object A and Object B.]

FIGURE 2.3: Practical engagement with one object (pA) can facilitate the development of skills useful for engaging with another object in a different context (pB): e.g., playing with a beach ball or a toy guitar can help develop the motor skills needed for participating in a real soccer match or for performing with a band. Object A can be seen as a mediator or surrogate for B, but in this pragmatic, developmental view 'mediation' and 'surrogacy' do not entail a representational relation and do not require analysis in representational terms.

The same view of surrogate or mediated engagement applies to skills that are more abstract than sports and music, such as mathematics. Children usually begin to learn mathematics by reasoning about concrete objects, for example, learning to count using oranges and learning fractions with slices of a cake; facility with abstract, symbolic operations can be seen as on a continuum with, and emerging from, these simpler types of concrete engagement. And while it is always possible to interpret surrogate reasoning in representational terms, in a case like this assuming that surrogates always represent what they stand in for leads to the strange conclusion that concrete quantities represent numbers rather than the other way around. That is, one could say that the five oranges represent the number 5 or that the half cake represents the fraction 1/2, but this would go against normal usage, according to which it is the symbols that are abstract representations of various equivalent concrete quantities such as five oranges and five apples or half a cake and half a pizza.

And even if some philosophers use 'mediation' and 'surrogate reasoning' in accounts of model-based representation, these notions do not by themselves entail a representational relation and do not require analysis in representational terms. In a representationally-neutral fashion, 'mediation' and 'surrogacy' can help explain skill development and learning transfer not only across contexts in everyday practices like sports, music, and mathematics, as seen above, but also in the domain of model-based scientific inquiry. The scientific practice of using concrete models as 'mediators' or tools for 'surrogate reasoning' about concrete systems (e.g., the San Francisco Bay model) should not be seen as distinct in kind and independent from what kids do when they play with makeshift toys, using a cardboard box as a fort, a pen as a sword, and so on. As kids grow up, and through formal training from grade school to grad school and beyond, their surrogate reasoning skills gain complexity and are directed to novel domains of practical engagement, yet the different applications are outgrowths of the same cultural and psychological developmental context. To be sure, I am not suggesting that simulating different hydrological conditions in the San Francisco Bay model and playing with Legos are entirely identical. Yet, the crucial difference between the two seems to be a difference in the practical and theoretical constraints and goals involved, rather than a difference in mechanism: depending on what one wants to accomplish and what practice one aims to contribute to (play or science, say), different criteria will define 'success' and 'failure' in each case, but they remain instances of mediated or surrogate practical engagement.

This way of thinking is not limited to concrete models, but also applies to supposedly "abstract" varieties such as mathematical equations and diagrams. These varieties of modeling co-opt skills at work in mathematical reasoning (from ordinary instances such as the ones with oranges and cakes described above) as well as in reading and writing more generally, applying these skills to a new domain where engaging with equations or diagrams on paper or on a computer screen mediates the development of strategies for practical engagement with and reasoning about some other object or entity (the "target").
Given a complex set of background skills and practices, interacting with a pair of differential equations, for example, is then used as a scaffold for thinking about various biological or physical systems, and for generating understanding, forming hypotheses, guiding interventions, and so on. The point is that, whether concrete or abstract, a model need not be analyzed in representational terms as a description of some target: in the view sketched here, a model is instead understood as a distinct, autonomous object that is used for developing skills that are also useful for engaging with those other objects in specified ways (including, of course, reasoning about those other objects).

Skill development and learning transfer in model-based science are often indirect: typically modelers cannot intervene in the systems they model, but instead they have to communicate their findings to others (e.g., policy makers) who are in a position to directly manipulate the target system in question (e.g., regulating carbon emissions to avoid negative outcomes predicted by climate models). This indirect character of engagement and application does not contradict the account I am sketching here: on the contrary, the widespread difficulties with translating scientific findings into public policy can, at least in some cases, be understood as a resistance individuals have to surrogate reasoning processes they are not themselves skillful in due to lack of relevant training. The focus in recent years on active learning in science education points in the same direction. Arguably, non-experts such as school-aged children and grown-up policy-makers become capable of making epistemic use of models not by learning how a model meets criteria that make it a representation of some target, but through direct engagement and the development of relevant practical (motor and reasoning) skills.

Working out further details of a non-representationalist account of modeling in terms of skill development and learning transfer is beyond the scope of this chapter, but this pragmatic and representationally-neutral rendering of 'surrogate reasoning' and 'mediation' indicates possible future directions. Focusing on scientific practice in this way gives us a useful lens through which to think about how models can act as mediators and surrogates for skill development and for transferring insights to novel objects. Through modeling, scientists explore new ways of thinking and acting, and understanding how this happens is crucial for making sense of the epistemic outcomes of model-based research such as explanations, predictions and interventions. Importantly, the sketch presented here shows that representationalism is not needed and that resources currently in use in the representationalist approach do not require a representational analysis and might even provide the starting point for simpler alternatives that avoid pitfalls inherent to representationalism.

To conclude, I want to return to a point that came up in my discussion about the pragmatic reason in Sect. 2.4.1. There, I pointed out that scientists evaluate models as being "good" in terms of how the model fits various disciplinary, theoretical, methodological, technological, erotetic and purposive aspects of a particular research project—and, accordingly, I urged that philosophical work be directed at elucidating precisely these aspects of model-based science.
To be clear, this is a direct challenge to the traditional representationalist methodological stance, which takes the crucial aspect of model-based science to be how models relate to target phenomena: representationalism identifies model-target relations (rather than any of these other factors) as what needs to be understood if we are to make sense of scientific modeling. The argument against representationalism that I have offered in this chapter does not entail that model-target relations don't matter, yet it does motivate rethinking the common idea that models are "about" their targets.

As I pointed out above, it makes little sense to treat a beach ball as representing the Telstar 18 or a toy guitar as representing the Gibson Les Paul. The beach ball is not "about" a professional soccer ball (in the representational sense of aboutness) any more than it is (in a broader sense) about beach sports, or about going on vacation, or about being a child—the beach ball is, arguably, more closely associated with these practices, events and states than with a professional soccer ball. And even without being "about" a professional soccer ball, the beach ball can, as already seen, mediate how we use the professional soccer ball: the skills developed through practical engagement with it can turn out to be useful for engaging with the professional ball. But those skills can also turn out to be useful for learning how to kick the oval-shaped ball used in American football. Would this, then, mean that the beach ball also represents or is "about" American football, and a generic or a specific American football ball at that? The case of musical instruments is even more telling. Free-play with a toy guitar could lead one to develop skills necessary for playing a professional guitar like the Gibson Les Paul the same way that Jimmy Page, Eric Clapton and Keith Richards did in the 1960s. But it can also enable one to learn to play other types of guitars, and to play many other musical genres. Or to learn to play other string instruments, such as the bass. Or even to transition to percussion or brass instruments. If it makes sense to speak of any of these various possibilities (with different genres and instruments) as the "target" of free-play with a "model" toy guitar, either there is nothing inherently representational in this model-target relation, or the model will have to, in some way that goes against common use, represent the various possibilities all at once.

The same applies to models in science. It is commonplace within the representationalist framework to assume that models are best understood as models of some target, where "of" denotes a representational relation. We know, however, that models often come to be used in novel contexts to support investigations of different target phenomena. The same set of equations first used in physics might later be used by biologists for completely different purposes. What, then, is the target of that mathematical model, a physical system or a biological system? Or is it those systems plus all the other systems that will someday come to be investigated using the same equations? Model organisms like fruit flies and mice are now used to model a vast range of phenomena, and the same is true for the Khepera robot, which for an entire generation was the go-to "model organism" for roboticists.
Are these model organisms, then, representations of each and every one of those targets? Speaking of a model's target (as the system or phenomenon it represents or is about) helps to direct our attention to the specific use of the model in a particular context, but it does little more than that. In the pragmatic surrogacy-based view I have sketched here, it is more illuminating to focus on scientists as agents—rather than to focus on models and targets as free-standing objects—and to frame modeling in terms of skill development and learning transfer—rather than in terms of model-target correspondences as abstract relations. Models are, of course, typically useful for guiding how we think and talk about some phenomenon in a given context, but this does not necessitate analyzing the model itself as being 'about' the phenomenon in the sense of being a truth-evaluable description of some representational 'target'. Models are 'about' target phenomena as much as they are 'about' the discipline in which they are used, the theoretical context they are meant to fit and advance, the methodological and technological background they are built upon, the intended users, and the intended goals they are meant to help accomplish. Rather than thinking that models tell us something about the world, it is more adequate to think that scientists are the ones who tell us something about the world, something that they learned by harnessing their skills in particular ways to build and manipulate objects of various sorts.

2.5 Conclusion

This chapter argued that representationalism is a philosophical dead end. In Sect. 2.2 I used discussions about misrepresentation in scientific modeling to reveal the representationalist methodological stance that constitutes the mainstream philosophical view of model-based science. There I identified the twofold assumption underlying representationalism, namely its ontological and epistemological commitments (i.e., OC and EC), and showed how currently popular ideas about the productive role of idealizations and abstractions, understood as misrepresentations, are in tension with the two underlying representationalist commitments.

In Sect. 2.3 I provided a "can't have" argument, showing that representationalism does not work. Beyond problems with particular theories of representation, each of the different types of views on the representation relation (i.e., as mind-independent or mind-dependent) creates a tension between the two central representationalist assumptions: views of representation as mind-independent fail with OC and thereby undermine EC, while views of representation as mind-dependent give more plausible support for OC but in a way that also undermines EC. Ultimately, therefore, no matter the particular theory of representation, whether it frames representation as mind-independent or mind-dependent, the tension between OC and EC leads to the downfall of representationalism.

Sect. 2.4 concluded the discussion by offering a "don't need" argument concerning representationalism about scientific models: whether or not representationalism works, analysis of models in representational terms is not required, and this for reasons having to do with the success of scientific practice as well as with the fact that key philosophical notions are not inherently representational. To be clear, the account sketched in Sect. 2.4 is representationally-neutral in that it does not affirm that models do not represent their targets nor that models will never meet formal definitions of representation: rather, the claim is that modeling is best understood in terms of pragmatic engagement mediated by skill development and learning transfer, and that meeting formal definitions of representation is incidental to the purpose and epistemic value of modeling.

These two arguments show that representationalism is both untenable and unnecessary; together they suggest that it should be abandoned. Our representationalist intuitions appear to work fine so long as we do not confront them directly. As soon as we try to figure out what supports them, it becomes clear that nothing does—that is, nothing other than habit. There is nothing terribly wrong with thinking about models as representations, but that can't tell us why models are epistemically valuable. Representationalist approaches to model-based science have reached a dead end, and the only reasonable way to move forward is to take one step back and change directions. The hard but interesting philosophical question about models concerns how we can learn through modeling: representationalism cannot answer this question, and we don't need representationalism to answer it, so we might as well begin working on developing alternative ways of thinking.

Chapter 3

Models as Tools: Making Artifactualism Leaner and Meaner

Abstract

A powerful idea put forward in the recent philosophy of science literature is that scientific models are best understood as instruments, tools or, more generally, artifacts. This idea has thus far been developed in combination with the more traditional representational approach: accordingly, current artifactualist accounts treat models as representational tools. But artifactualism and representationalism are independent views, and adopting one does not require acceptance of the other. This chapter argues that a leaner version of artifactualism, free of representationalist assumptions, is both desirable and viable. Taking seriously the idea that models are artifacts can elucidate a number of philosophical issues concerning scientific modeling even without reference to representation.

3.1 Introduction

A common feature of contemporary science is the use of concrete, mathematical and computational models to analyze experimental data and to simulate phenomena of interest. While there is often debate about the applicability and limitations of particular modeling techniques in particular contexts, it's hard to deny that modeling practices, in general, are epistemically successful. In the physical sciences, life sciences and social sciences alike, model-based research has helped advance our knowledge of the world in ways that would be unthinkable through more traditional theoretical and experimental means. The big philosophical question, of course, concerns why modeling works: that is, what explains the fact that building and manipulating concrete objects, mathematical equations and computer simulations advances our understanding of the world?

In attempting to answer this question, philosophers have developed a number of different theories of how models represent real-world systems and phenomena. Influential accounts have described the representational model-target relation as a matter of similarity (Giere 1988, 2010, Weisberg 2012) and isomorphism (van Fraassen 1980, 2008), while others advocate instead a deflationary, non-reductive view of representation (Suarez 2015, Morrison 2015). Accounts such as these disagree about the details of what representation is and how it works, but they coincide in holding that understanding representation is essential for understanding scientific modeling: according to this widely held 'representationalist' assumption, "we need to know the variety of ways models can represent the world if we are to have faith in those representations as sources of knowledge" (Morrison 2015, p. 97).

Alongside debates about representation, a powerful idea put forward in the recent philosophy of science literature is that in order to better understand scientific modeling, we should see models as instruments, tools or, more generally, artifacts. The goal of this chapter is to elucidate how this view of models as tools—call it 'artifactualism'—relates to the traditional representationalist view of models. I begin, in Section 2, by providing an overview of recent influential artifactualist accounts in order to highlight the many virtues of philosophically understanding models as tools. I then show, in Section 3, that artifactualism as we know it from current accounts is representationalist at its core, and I explain, in Section 4, why this combination is philosophically problematic. Section 5 concludes by exploring what an alternative approach can look like. In contrast with the hybrid artifactualist-representationalist accounts currently on offer, I argue that a freestanding version of artifactualism is not only appealing and worth wanting, but also viable as a framework for philosophically understanding scientific modeling. Taking seriously the idea that models are instruments, tools and artifacts offers a way to make sense of how we learn through modeling while circumventing representationalist commitments and the problems arising from them.

3.2 The Virtues of Artifactualism (as We Know It)

Broadly construed, artifactualism describes scientific models as artifacts in two distinct but complementary senses. On the one hand, models are akin to ordinary tools in their utilitarian or goal-oriented character. Just as everyday objects help us accomplish a variety of tasks, scientific models are built and used by scientists to achieve some goal. This claim draws attention to the ways in which models are "instrumental" to scientific research, i.e., the ways in which they are useful, functional and important for practical ends that are scientifically interesting. On the other hand, and less figuratively, models are not simply like ordinary tools in being useful for some end: models literally are artifacts created by humans to enable specific forms of manipulation. This is the case for scale models and mockups, robotic agents, and model organisms—all clearly concrete—and it's also the case for supposedly more "abstract" mathematical models or computer simulations: in order to be used by scientists, models must be implemented in some way that enables physical interaction and manipulation. In sum, artifactualism helps us to acknowledge the usefulness of models as well as their usableness, i.e., their workable, manipulable concrete dimension. According to artifactualism, we cannot fully appreciate the role models play in advancing scientific knowledge until we see models as being on a par with other concrete instruments used in science.

This broad characterization delineates some of the key ideas that artifactualists of different stripes will typically agree on. But if what I said above corresponds to the skeleton of artifactualism, it's fair to see any particular artifactualist account on offer in the literature as a different attempt to flesh out the general artifactualist approach. In this section I identify three crucial insights emerging from three distinct artifactualist accounts. As should be clear, besides revealing some of the virtues of artifactualism, these insights also have the potential to enrich the philosophical discussion about scientific modeling more broadly.

The first insight concerns the autonomy or relative independence of modeling with regard to other dimensions of scientific research. This insight—much like the artifactualist view itself in its current form—is due to Margaret Morrison and Mary Morgan's (1999) seminal work. Contrary to the accepted wisdom in philosophy of science at the time, Morrison and Morgan argued that modeling is not subordinated to theorizing nor to experimentation: rather, they proposed, models act autonomously and as "mediating instruments" that connect the two. By this they meant that modeling is never purely determined by theoretical commitments nor is it ever the theory-free exploration of data. Sometimes models contribute more directly to theory building, such as when they aid in the exploration of the implications of a set of theoretical assumptions. Other times models assist more directly in experimentation, as is the case when working with models suggests novel hypotheses to be tested empirically. Either way, models are partially independent from both scientific theory and from phenomena/data because, in their construction and functioning, models are always shaped by extra-theoretical and/or extra-empirical factors.
Philosophers of science now by and large agree that it's too simplistic to think of models as straightforward expressions of either theory or data: rather, the relation between modeling, theorizing and experimentation is complex and calls for careful investigation (see, e.g., Peschard and van Fraassen 2018). But even if this insight now resonates with a large number of philosophers of science, it's worth noting how artifactualism provides a particularly fruitful way to make sense of the autonomy and independence of models. As Morrison and Morgan (1999) propose, models are autonomous in their functioning because they are tools "with a life of their own": in their view, "what it means for a model to function autonomously is to function like a tool or instrument" (p. 11).

Along with bringing attention to the complex relation between modeling and other parts of science, artifactualism also draws attention to complexities internal to model-based scientific research. In line with this, the second insight, stemming from a different formulation of the artifactualist view of models, is that matter matters, or, put more broadly, that the characteristics of particular models and types of models can make a significant difference in the epistemic outcomes of model-based research. This is one of the upshots of Tarja Knuuttila's view of models as "epistemic artifacts" that are "representationally non-transparent."

Knuuttila describes models as "intentionally constructed things that are materialized in some medium" (2005, p. 1266) and which always have "a material, sensuously perceptible dimension that functions as a springboard for interpretation, and theoretical or other inferences" (2017, p. 12). In her view, it's a mistake to think that models are abstract entities that can be constructed in different ways with no significant loss or interference from how the model is materially constituted. On the contrary, Knuuttila argues that the "representational means" of models are never transparent in this way: "the wide variety of representational means modelers make use of (i.e. diagrams, pictures, scale models, symbols, natural language, mathematical notations, 3D images on screen) all afford and limit scientific reasoning in their characteristic ways" (2011, p. 268). Thus, even though a mathematical model and the Phillips hydraulic machine, for example, can both represent the same economic system, because the two are built using different representational means, the explanation of their epistemic import will necessarily differ accordingly. For Knuuttila, models "can play different epistemic roles (...) depending on the representational means in question," and for this reason we cannot adequately understand how models contribute to scientific knowledge unless we take into account the particular (material) representational means of particular models (2017, p. 12). In direct response to Morrison and Morgan's (1999) view of models as mediators, Knuuttila claims: "Without materiality mediation is empty" (2005, p. 1266).

To be sure, philosophers of different backgrounds and persuasions might appreciate the importance of taking into account the features of particular models and of the particular modeling techniques used in different research projects. But this insight is especially amenable to an artifactual understanding of models. What you can and cannot do with ordinary tools is importantly constrained by the specific material features of the tool: there are things you can do with a steak knife that you can't do with a disposable plastic knife, and vice-versa. As tools, models exhibit the same variability in their use because of how they are built, what they are made of, and so on. This suggests that analyses of "scientific models in general" will, at best, be limited. Models can be of many different types, shapes and sizes, and these differences can significantly impact a model's usefulness in different research contexts. Models that are formally and/or representationally equivalent may yield different insights depending on how their material characteristics affect the possibilities for manipulation and intervention. Consider, for example, how you might learn different lessons by interacting with a mathematical model of planetary motion than you would learn by working with an orrery, and vice-versa. Artifactualism helps us make sense of these differences, and brings them to the center of attention for philosophical investigation. According to artifactualism, in order to adequately understand how models contribute to advancing scientific knowledge, we need to recognize the contribution that materiality makes to the epistemic value of particular models and modeling techniques.

Besides emphasizing the epistemological role of the materiality of models as tools, artifactualism also draws attention to the philosophical import of understanding modeling as a tool-building practice. As I suggested earlier, tools aren't simply objects that are useful in some generic sense, but they are always useful for someone and for some end. The specific ways in which tools get used are, of course, related to the tool's materiality: a hammer can only drive nails into a wall because of its shape and rigidity. But the hammer's materiality also makes it useful as a paperweight, a door-stopper, or a measuring stick. This is where understanding the users and goals that make up particular practices becomes important. If a hammer is primarily for driving nails into a wall, it only serves that purpose for beings with certain kinds of arms and hands, and who are surrounded by walls and have nails at their disposal. You can't understand the tool without also understanding how it is used, where, when, by whom, and what for. Analyzing scientific models as tools accordingly motivates considering the different contexts of investigation in which particular models and modeling techniques are used.

This third artifactualist insight resonates with some ideas Adrian Currie (2017) discusses. Currie describes models-as-tools as being constituted by both a vehicle and some content. The model's vehicle is "the medium through which the content is expressed" (p. 773), or the material features of a particular instantiation of the model's content.
As for the content, he describes it as defined by the function and "F-properties" of the vehicle, i.e., the relevant properties that make a given tool suitable for some function F, as opposed to properties which are not relevant for that function. In Currie's example, the size of a sewing needle's eye is relevant for threading, while the needle's color isn't—though presumably the color could matter for other functions. Similarly, a model's F-properties (and, therefore, its content) will vary according to what function the model is meant to fulfill. In line with this, Currie points out that a model's content may well be some target phenomenon it represents: "when we use a model to explain the behavior of a target system (...) the F-properties that matter are those which make for a good representation" (p. 776). But this is not always the case. In design and engineering, Currie explains, models cannot be adequately described as representations of some currently existing target: in these cases, modeling is a step toward the construction of the target, toward bringing the target into existence, and for this reason, what matters in these contexts is how modeling scaffolds that creative process. Crucially for present purposes, it follows from this view that any one-size-fits-all account of how and why "modeling in general" works will be of limited help if it's derived from a single type of modeling in a single context and scientific discipline. Rather, an account of how and why modeling works needs to be sensitive to the way particular types of models/tools are exploited by users engaged in specified activities. Put in other words, it follows that in order to make progress on the epistemology of model-based science we need to take into account the users and goals that make up particular modeling practices, which shape the functions models are built to have in the first place.

Taken together, these three insights underscore the philosophical import of artifactualism. Understanding models as tools, instruments and artifacts sheds light on the complexity of science by revealing the similarities and differences between modeling and the theoretical and experimental dimensions of scientific research. Artifactualism also draws attention to the complexity internal to model-based research, where an adequate understanding of the epistemic value of model-based science requires taking into account the constraints imposed by the materiality of particular models and the different practices in which models and modeling techniques are put to use. Artifactual analyses such as the ones reviewed here thus enrich philosophy of science by revealing otherwise neglected aspects of science and helping us gain a firmer grip on our object of study. Importantly, while some of these aspects of science that artifactualism has drawn attention to may come to be examined through non-artifactualist lenses, they are, as I have shown, especially amenable to a philosophical understanding that explicitly frames models as tools and modeling as a tool-building and tool-using practice.

3.3 Artifactualism as We Know It is Representationalist

As already indicated in the introduction (Sect. 1), the philosophical literature on scientific modeling is, by and large, a literature about representation. Influential philosophical accounts of model-based science typically disagree on precisely how to understand the nature of the representational relation between models and target phenomena in the real world: is representation a relation of similarity or of isomorphism that holds between model and target? Or is it perhaps a relation of denotation and interpretation that is necessarily partly constituted also by the agent(s) doing the representing? If the latter, what role (if any) do two-place relations like similarity and isomorphism play in agential, three-place representation relations? There are as many answers to these questions as there are philosophers writing on these issues—and perhaps more. Still, there is broad agreement in the literature that these are the right questions to ask, that representation is what we need to understand if we hope to get a handle on how and why modeling works.

Sanches de Oliveira (2018) identifies representationalism with two types of commitments: an ontological commitment, concerning the nature of models or what models are, and an epistemological commitment, which establishes why modeling is knowledge-conducing.1 The ontological commitment holds that models are best understood philosophically as representations of some system(s) of interest—a view philosophers endorse when, explicitly or implicitly, they construe modeling as a "practical approach to understanding [real-world phenomena] by constructing simplified and idealized representations of [the phenomena]" (Weisberg 2018, p. 241). The related but distinct epistemological commitment, in turn, holds that representation is at the root of the epistemic worth of modeling: it's by virtue of representing some target phenomena that models can be informative about those phenomena. Even when only tacitly held, this commitment motivates analyzing models representationally, and it turns philosophical work on representation into a necessary step toward making sense of the epistemic success of modeling: "if we want to understand how models allow us to learn about the world, we have to come to understand how they represent" (Frigg and Nguyen 2017, p. 49).

Does artifactualism as we know it from current accounts adhere to representationalist commitments such as these? I believe the answer is a resounding yes. Commenting on Morrison and Morgan's (1999) view of models as autonomous mediating instruments, Peschard and van Fraassen (2018) explain:

That models function as mediators between theory and the phenomena implies then that modeling can enter in two ways. (...) In the first case [the model] is (or is intended to be) an accurate representation of a phenomenon; in the second case it is a representation of what the theory depicts as going on in phenomena of this sort. (Peschard and van Fraassen 2018, pp. 31-32, italics added)

1 Fiora Salis (2019) draws a similar distinction between what she calls the "aboutness condition" and the "epistemic condition" of representationalism.

Although this is not their primary focus, Peschard and van Fraassen's description quite nicely emphasizes the importance of representation in Morrison and Morgan's account. As Morrison and Morgan themselves affirm, in their view models aren't just "simple tools" that enable the user to perform some action, like hammers, but rather they function as "tools of investigation" for understanding some phenomena, and they do so precisely because they represent those phenomena: "the model's representative power allows it to function not just instrumentally, but to teach us something about the thing it represents" (Morrison and Morgan 1999, p. 11). In Morrison and Morgan's version, then, artifactualism openly incorporates the ontological dimension of representationalism: models are tools and instruments, but they are also representations. Not only that, but even the epistemological dimension of representationalism is clearly present: the fact that they are (also) representations is what makes models informative, because representation is "the mechanism that enables us to learn from models" (p. 11). For Morrison and Morgan, through building and manipulating a model/tool, scientists learn both about the model itself and about theory and phenomena to the extent that the model represents them (p. 33). In this view, then, understanding models as tools and instruments is complementary to analyzing their ontological and epistemological nature as representations.

Knuuttila's account is equally (if perhaps more subtly) representationalist. Consider how Knuuttila's emphasis on the materiality of models as tools is couched in thoroughly representational terms, as an emphasis on the models' "representational means": as already seen, she thinks it's important to take into account "the wide variety of representational means modelers make use of" (2011, p. 268, emphasis added) because models "can play different epistemic roles (...) depending on the representational means in question" (2017, p. 12, emphasis added). To be sure, Knuuttila is vocal in her criticism of the representationalist view of models and its narrow focus on model-target correspondences: she claims, for example, that "any abstract analysis of the supposed representational relation between a scientific model and its target will not do" (2017, p. 14). But the way she frames her alternative suggests that, for her, the problem with representational analyses of modeling in terms of model-target relations lies in the abstract nature of these analyses rather than in their representational character. Along these lines, Knuuttila complains that the traditional approach "neglects the actual representational means with which scientists go on representing" (2011, p. 263), and she points out just how ironic this state of affairs is: "Philosophers have been engaged in studying the representational relation between models and their supposed target systems without paying too much attention to the representational artifacts used to accomplish such representational work" (2017, p. 14). In her view, the artifactual approach corrects this ironic neglect by "urg[ing] philosophers to study more in detail how the various kinds of representational modes and media enable scientific inferences and reasoning" (2017, p. 13).
For Knuuttila, then, artifactualism promotes a shift in the emphasis of traditional analyses of models in representational terms: “The philosophical gist of the artifactual account is to consider the actual representational means with which a model is constructed and through which it is manipulated as irreducible parts of the model” (2017, p. 11). Thus construed, artifactualism is perhaps a needed corrective, but it doesn’t offer an alternative to representationalism.

Much the same points apply to Currie’s account of models-as-tools. Like Knuuttila, Currie frames the artifactualist perspective as compatible with and complementary to representationalism. According to Currie, the traditional representational view of models (which he refers to as “fictionalism”) holds that models are “revelatory of the actual world in virtue of bearing some resemblance relation to a target system” (Currie 2017, p. 759). And he explicitly claims that he sees this way of thinking as incomplete: “as an overall account of scientific modeling the [fictional] view is insufficient” (p. 779). Accordingly, he proposes that artifactualism can supplement representationalism to provide a more general approach to models: “understanding models qua tools is deeper, more unified and more metaphysically kosher than understanding models qua fictions” (p. 773). For him, approaching models as vehicles that can have different types of content provides a more comprehensive framework, with the “capacity to flexibly account for both fictional and non-fictional models” (p. 779).

It’s clear, then, that in Currie’s view artifactualism is supposed to subsume or encompass representationalism rather than stand in contrast with it. The details of his account suggest further that, for Currie, the problem with representationalism is the fact that it motivates analyzing models as representations of real-world targets; by contrast, the fact that the view motivates analyzing models as representations at all is just fine. In his proposed analysis of models, Currie relies on the vehicle/content distinction, a distinction that is paradigmatically representational.2 This distinction—between something that is being represented and something that does the representing or ‘carries’ that representational content—is particularly useful for making sense of cases in which different representations have the same meaning. For example, I may refer to water by writing down the word for it, as I have done here, or through sound, as when I vocalize the word in its common American English pronunciation: although the written word and the spoken sound are different vehicles, they have the same meaning because both carry the same content. This distinction is tailor-made for representational analyses, and in using it as the foundation for thinking about models, Currie is building into his account the representational assumption that models are the sorts of things that carry content, that is, that represent. The result is that, even in the case of modeling in design and engineering—which he claims to be non-representational because there is no currently existing target that the models represent—framing models in terms of vehicles and contents motivates thinking that there is some content that the model represents, such as an abstract or imaginary target that does not yet (but may one day) exist as a physical structure in the real world. So while these cases of modeling may not be representational in his sense (i.e., in the sense of resembling and corresponding to targets that exist in the real world), they are still thoroughly representational in the sense that they are vehicles that carry some content, and that thereby represent.

In sum, then, these different artifactualist accounts coincide in framing artifactualism as an approach that complements or corrects the emphasis of traditional representational analyses of models, but which is ultimately compatible with them and even subsumes them. This is clear not only from direct claims artifactualists make to this effect, but also from the categories they use for analyzing models as tools, such as “representational means” and “vehicles” and “contents.”

2See, e.g., Millikan 1991 or Chalmers 1992 for early uses of the distinction, as representational, in different philosophical debates.

3.4 What’s so bad about combining Artifactualism with Representationalism?

In the previous section I showed how, despite their many differences, prominent artifactualist accounts coincide in treating the artifactual view of models as compatible with, and complementary to, more traditional philosophical analyses of models as (epistemically valuable as) representations. The goal of the current section is to explain why combining the two views is philosophically problematic. To be clear from the outset, my goal is not to argue that the combination is impossible and that artifactualism is incompatible with representationalism. I concede that the two ways of thinking about models can be combined, as current artifactualist accounts do. My contention, rather, is that the combination of artifactualism with representationalism rests on, and gives rise to, views that are themselves questionable and better avoided.

3.4.1 A Bad View of Tools: The Fallacy of the Linguistic Sign

Imagine for a moment that you are visiting an archaeological site and you run into a hand-sized ceramic shard with some squiggles inscribed on it. You inspect the object closely but can’t decide if the squiggles are merely decorative. Is this object (part of) an amulet, a religious relic, a burial urn, an ornament, a map, a calendar, a combination of these or maybe something else entirely? How can you make sense of this object’s meaning? You might feel compelled to try to identify what the object represents: does it express dates or locations, or is it perhaps a record of important battles or commercial transactions? What is it that this object describes, refers to, stands for or is about?

Cognitive archaeologist Lambros Malafouris (2013) criticizes this strategy for committing what he calls the “fallacy of the linguistic sign.” As he describes it, this fallacy is “the commonly practiced implicit or explicit reduction of the material sign under the general category of the linguistic sign” (p. 91). In archaeological research, this is the mistake of analyzing prehistoric objects as embodying a representational logic (p. 44) and being ‘meaningful’ in the same way that words and sentences are: for example, “presuppos[ing] that both the vase as a material entity and ‘vase’ as a word mean, or signify, in the same manner” (p. 91). Malafouris proposes that archaeological artifacts are, instead, best understood when we analyze them as embodying an enactive logic: artifacts are “something active with which you engage and interact” (p. 149), and they “mediate, actively shape, and constitute our ways of being in the world and of making sense of the world” (p. 44); for this reason, treating artifacts as linguistic signs prevents us from understanding how the ‘meaning’ of artifacts emerges through material engagement. In our imagined scenario, then, the most appropriate way for you to make sense of the artifact you found would be to ask, not what it represented, but how it was interacted with by its creators and users.

This insight from cognitive archaeology points toward the first problem with combining artifactualism with representationalism—namely, that this combination rests on an inadequate understanding of tools in general. Linguistic signs like words, sentences and stories are the sorts of things that embody a ‘representational logic’ and can be analyzed as descriptions that have some target in the real world (or not) as referents. But tools and artifacts don’t work that way. When it comes to ordinary tools like hammers, forks and needles, asking what they represent or stand for is at best misleading. If we are interested in understanding the meaning of these tools, the proper questions to ask concern, instead, how people manipulate them, how the tools behave, and what the outcomes of user-tool interactions are.

My contention is that artifactualism as we know it from current accounts commits the fallacy of the linguistic sign when it interprets present-day scientific modeling artifacts in representational terms—e.g., as having ‘representational means’ or as ‘vehicles’ that carry some ‘content’. These categories apply perfectly well to linguistic signs: we can use written or spoken language to describe the world, and our sentences can represent states of affairs accurately or inaccurately, and communicate meaning more or less effectively depending on the representational vehicle or means. But treating scientific model-artifacts as linguistic signs mixes up categories (artifactual and linguistic) and, from the start, steers the analysis toward more abstract ontological and epistemological questions that need not come up in (analyses of) tool-using practices. Artifactualism as we know it draws attention to important tool-like features of models, but it still privileges a ‘representational logic’ in its analysis of the meaning of models. This may, of course, be a fine way to go depending on your goals: it’s just not a sensible option if our goal is to understand models precisely as tools.
Following Malafouris, the right way to make sense of tools—be they prehistoric vases or present-day hammers, model organisms and computer simulations—is in terms of their ‘enactive logic’, that is, in terms of how their meaning is enacted in and emerges through use and interaction.

3.4.2 A Bad View of Science: The Myth of the Primacy of Description

Following the current literature, in Section 3 I identified representationalism with a commitment to an underlying ontological assumption about what models are (namely, representations) and an epistemological assumption about why models are knowledge-conducing (namely, because of their representational nature). But it’s important to see that these ontological and epistemological representationalist commitments are just local manifestations, in the philosophical literature on modeling, of a global representationalist picture of science in general—an understanding of science in what Pickering (1994) calls the “representational idiom,” according to which “the defining characteristic of science is its production of representations of nature” (p. 413). This view of science is often only tacitly assumed as the starting point for representational accounts, but there are also authors who endorse it explicitly: see, e.g., Pincock’s (2012) claim that “science is in the business of producing representations of the physical world” (p. 3).

The upshot of this global view is that, because science is first and foremost a descriptive enterprise, the parts of science which are not themselves descriptive are to be made sense of in terms of how they promote (or get in the way of) the parts of science that are themselves descriptive. This gives rise to a host of epistemological puzzles concerning, for example, what makes particular descriptions true (e.g., how are we to understand the description-descriptum relation?), or what makes particular description-generating processes truth-conducive (e.g., how do “values” constrain particular descriptions and description-generating processes?). What is rarely if ever made clear in the literature on these philosophical puzzles is that they are the product of the global descriptivist view of science, and moreover, that this global view itself is optional and far from uncontroversial.

At a conceptual level, this view of science as a descriptive endeavor stems from the suspect rationalistic idea that there is a fundamental gap that separates the human mental world from the extra-human natural world, a gap that needs to be bridged by transcending appearances and getting to the bottom of ultimate reality. Along these lines, Richard Rorty, for example, has described our culture’s conception of science as a secularized throwback to the religious worldview of the medieval era—a secularization of key religious elements such as the sacredness of rituals (the scientific method), concern for orthodoxy (agreement with the body of established scientific knowledge), the authority of priests (the scientist) who have privileged access to revelation (Truth), and, perhaps most importantly, the promise of redemption from our fallen (epistemic) nature (see, e.g., Rorty 1991, p. 35ff). But another (admittedly related) intellectual thread connects the global descriptivist conception of science even further back in history, to a platonic view of an ideal world, separate from the material world, to which the material world provides only confused and imperfect access. In both cases, there is some ultimate reality that is beyond the domain of human experience, but which we hope our carefully generated descriptions provide transcendental epistemic access to.
Of course, this is just one way of construing how humans relate to the world and science’s role in this relation—and it’s an optional perspective, one to which the empiricist tradition has historically been a powerful philosophical alternative.

Besides appearing conceptually odd once its religious/platonic roots get exposed, the descriptivist view of science that underlies representationalism is also questionable in its historical soundness. Up until the middle of the twentieth century, the dominant approach in historical studies of science was to understand the past not on its own terms but in light of the present and, accordingly, to ‘hindsightedly’ interpret past developments as paving the way to the scientific ideas we know would come to succeed.

This interpretive approach was inspired by a view of the history of science as “consist[ing] in a progressive and teleologically oriented struggle between the inexorable agents of cognitive progress and their atavistic opponents” (McEvoy 1997, p. 2). This interpretive lens on the history of science was naturally associated with the systematic neglect of ideas, theories, and practices that have since come to be discredited and seen as unscientific. But this historiographical approach, which saw our science as the true heir of rational and theoretically-oriented natural philosophy, has for decades now given way to a more contextual lens.3 Instead, historians of science now predominantly see science as a bastard child, arising through a complex intermixing of natural philosophy with the “arts,” that is, the domain of practical, productive knowledge encompassing craft and engineering as well as alchemy and occultism (see, e.g., Kearney 1971, Meli 2006, Henry 2008, Lindberg 2010, Principe 2011, McClellan III and Dorn 2015). The idea that science is first and foremost a descriptive endeavor fits nicely with the old historiographical approach, but it is at odds with the emerging demystified picture of science as the combined efforts of people working together, through whatever means available, to try to solve problems.

To the extent that it accommodates representationalism, artifactualism as we know it from current accounts at least indirectly embraces this global picture of science as a descriptive enterprise. In fact, the impulse to think that models must be both tools and representations is symptomatic of (and it also reinforces) the idea that for science to be epistemically respectable, it must be in the business of describing the world. But the myth of the primacy of description is a conceptually and historiographically questionable view of what scientists really do and how they do it.

3Though exceptions persist: see, e.g., A. C. Grayling’s outspokenly “Whiggish, meliorist, progressivist” historical account of the scientific revolution (2016, p. 336).

3.4.3 A Bad View of Philosophy: The Allure of Ideal Theory

In response to the contrast I drew just above, between a view of science as a descriptive endeavor and a view of science as a pragmatic problem-solving activity, some readers might ask: why not both? Can’t science be both? The same applies to the combination of representational and artifactual thinking characteristic of artifactualism as we know it from current accounts. These accounts, we have seen, combine the idea that models are instruments, tools and artifacts with the traditional view that models are representations. Why couldn’t we say models are both?

To be sure, combining the two is logically and ontologically legitimate. That is, the claim that models are artifacts is perfectly compatible with the claim that models are representations, and this because the ontological categories in question are not mutually exclusive: artifacts can be representations and representations can be artifacts—and this is the case even if some artifacts do not represent anything and if some representations are not artifacts (say, if a non-man-made object is used to represent something). Strictly speaking, it is possible to accommodate various elements of a representational analysis into an artifactualist account without generating or committing a category mistake. But “can” does not imply “ought”, and the ontological justification for combining representational with artifactual thinking does not entail that this move makes for good philosophy of science.

Over the course of the 20th century, the traditional philosophical focus on ideal Science and the logical structure of theories—what Karl Popper called “the objective logical relations subsisting among the various systems of scientific statements, and within each of them” (Popper 1959/2005, p. 22)—gave way to a philosophical understanding of science that is informed by scientific practice, both historical and contemporary
(on this shift toward naturalistic philosophy of science see, e.g., Callebaut 1993). And while the modeling literature is itself a product of this turn toward scientific practice, my contention is that the focus on abstract ontological and epistemological questions about representation is a hang-up from the discipline’s positivist historical beginnings that only gets in the way of proper philosophical practice.

In this I see a contrast analogous to the one between ideal and nonideal theory in political philosophy. As John Rawls (1971) put it, his ideal theory of justice was primarily interested in determining “the principles of justice that would regulate a well-ordered society” in which everyone complies fully (1971/2005, p. 8). Ideal theory, Rawls recognized, cannot account for the “pressing and urgent matters” of the injustices present in “everyday life”; yet, for him ideal theory needs to be the starting point because it is “the only basis for the systematic grasp of these more pressing problems” (p. 9). Amartya Sen (2006, 2009) and others have since strongly opposed Rawls’ view, arguing instead that ideal theory is neither sufficient nor necessary for dealing with the demands of everyday life in a nonideal world characterized by extraordinary injustice (see related discussions in, e.g., Simmons 2010, Jubb 2016, and Dieleman, Rondel and Voparil 2017). Whether critics like Sen are right when it comes to theorizing justice and injustice is well beyond the scope of this chapter. What is interesting for the present discussion is the (perhaps loose) applicability of the distinction for understanding the kind of theorizing at play in philosophy of science, whether it is concerned with ideal, rational science (as in the historical beginning of our field) or with real-world science (as has been the tendency since the turn to history and practice).

I concede that the compatibility of the categories “artifact” and “representation” (that is, the fact that, conceptually and ontologically, models can be both) might count in favor of uniting the two in our analysis of models, especially if you are engaged in ideal philosophical theorizing about science. But this does not entail that uniting artifactualism and representationalism is either sufficient or necessary for philosophically making sense of nonideal real-world scientific modeling. Determining the nature of the representation relation, for example, while of perfectly legitimate philosophical concern (especially given a more traditional understanding of the mission of philosophy of science), may well be unimportant for understanding scientific practice: after all, scientists lack a clear definition of representation and still appear to get by just fine. The same goes for questions relating to the epistemic role of idealizations, abstractions and other so-called ‘falsehoods’ at play in modeling: scientists seem to know what matters and what doesn’t in a model, and perhaps framing these differences in terms of misrepresentation just muddies our understanding of how scientists in fact advance their knowledge through the practice of modeling. Advocates of artifactualist accounts such as the ones reviewed in Section 2 rightly point out that considering the instrumental or tool-like features of model-building and model-functioning brings us closer to understanding real-world science.
The risk is that, to the extent that they maintain representational ontological and epistemological commitments, they also counteract the benefits of the artifactualist perspective.

What do we want from a philosophical theory of modeling? Steven French (2010) advocates for quietism about the ontology of models in favor of a direct focus on model-based representation as it’s practiced. I think his position can be taken one step further in the context of the present discussion about the relation between artifactualism and representationalism. If we really want to understand how any practice works, it seems that it’s best not to get bogged down in abstract and ideal considerations: the key thing to understand is what people do when participating in that practice. The current artifactualists’ framing of models in a way that preserves the traditional philosophical concern for abstract representational relations and categories, while logically and ontologically legitimate, is more of a positivist relic than a perspective called for by the scientific practice itself. As such, artifactualism as we know it is implicated in a problematic view not only of what scientists are up to, but also of what we philosophers are up to—and therefore what we need to focus on—when we try to understand science.

3.4.4 A Costly Analytical Strategy: The Representational Inheritance Tax

Combining artifactualism with representationalism also gives rise to a practical problem: this combination inherits from traditional representationalist (non-artifactualist) approaches a number of challenges surrounding the notion of ‘representation’. As indicated in the introduction, the philosophy of science literature has in recent decades seen an increasing number of competing views on the ontological nature of the representation relation and on the epistemic role of misrepresentations in modeling. The puzzles are many and varied: “is representation a matter of isomorphism, similarity, or some other criterion?”; “is representation an objective, mind-independent relation, or is it necessarily agential and intentional?”; “is representation a success-term or is accuracy separate from representational status?”; and “can misrepresentations make a model more explanatory, or are they only useful as temporary place-holders for accurate representations?” Given the diversity of debates, philosophical accounts sometimes disagree not just about what the right answer is, but even about which questions need an answer first.

By developing an analysis of models as tools that also incorporates a representational dimension, current artifactualist accounts are unavoidably faced with the same puzzles and challenges attending more traditional accounts of models. Current artifactualist-representationalist accounts shift the emphasis of philosophical analysis, but they cannot postpone representational puzzles indefinitely. In some of the accounts the inheritance tax might be even higher: at least in Morrison and Morgan’s version, as we have seen, it’s the representationalism of their account (rather than the account’s artifactualism) that does the epistemic heavy-lifting and provides the grounding for the epistemic value of modeling in their view. Highlighting the instrumental or tool-like features of models in accounts like these may well elucidate important aspects of scientific modeling, but if the full account is that models are instruments, tools and artifacts that are epistemically useful as representations, then we will still, at some point, be charged with the more fundamental questions concerning the ontological nature and epistemic status of representation and misrepresentation. In this way, artifactualists end up multiplying their costs, having to deal with philosophical questions about tools and artifacts in addition to the traditional questions about representation.

3.5 Artifactualism doesn’t need Representationalism: Toward a Variety of Artifactualism Worth Wanting

In the previous sections I showed that, despite their differences in focus and emphasis, current artifactualist accounts coincide in developing the basic artifactualist insight
(that models are tools, instruments and artifacts) while holding on to at least some of the more traditional ideas from representationalist analyses of models. I also argued that this combination of artifactualism and representationalism is problematic because it is intertwined with questionable views of tools, of science, and of philosophy of science, all while inheriting the same problems attending non-artifactualist representationalism. In my view, these aren’t reasons to give up on artifactualism, but rather to try to make it better. The goal of this section is to sketch what a leaner and meaner version of artifactualism could be like.

A first step toward this goal is to recognize that artifactualism as we know it from current accounts is but one way of construing artifactualism. As is clear by now, current accounts have in common the fact that they develop the artifactualist insight as a conciliatory view: in this view, the typical emphasis philosophers give to the representational aspects of models is insufficient and must be expanded through careful consideration of the instrumental or tool-like characteristics of models. The accounts considered here fit this characterization because they describe models as instruments, tools and artifacts while also explaining the epistemic contribution of models in traditional representational terms, as depending on the model’s representational means or content. But, again, this construal of artifactualism as a conciliatory view and as a shift in emphasis is not the only form of artifactualism possible. Here I will refer to this version of artifactualism as weak artifactualism to contrast it with the more radical alternative conception I am proposing, which I call strong artifactualism.

Unlike weak artifactualism, strong artifactualism holds that understanding models as tools is not just a shift in emphasis, but a radical departure from traditional philosophical approaches: in this view, analyzing models as artifacts is, both conceptually and methodologically, an alternative to analyzing them in representational terms. Figure 1 illustrates how the two types of artifactualism relate logically to representationalism.


FIGURE 3.1: Representationalism (R) and Artifactualism (A) are independent (but not mutually exclusive) views of models. Current artifactual accounts combine both, adopting “weak artifactualism” (W.A.) and occupying the shaded area where the two views overlap. “Strong artifactualism” (S.A.), in contrast, is in the conceptual space that does not overlap with representational analyses of models.

The problems with artifactualism identified in Section 4 can be seen as reasons why strong artifactualism is attractive. That is, insofar as those problems attend artifactualist accounts because of their reliance on at least some representationalist assumptions or notions, they are really problems that afflict weak artifactualism; and since the strong artifactualism I am proposing is by design a freestanding approach—a version of artifactualism on its own, free from all the extra representational baggage—those problems also show why strong artifactualism is desirable. In what follows I argue that strong artifactualism is not only desirable but also viable and promising: strong artifactualism maintains the benefits of weak artifactualism (Section 2), and it gets the job done without representationalism, thus avoiding the problems with weak artifactualism (Section 4).

3.5.1 A Philosophical Fresh Start: Models and/as ‘Simple Tools’

Because representationalism is so prevalent—because the “representational idiom,” as Pickering puts it, is our native language in contemporary philosophy of science—it’s helpful to find alternative philosophical frameworks that can guide the development of strong artifactualism and can inform how we even begin to think about models as tools in a nonrepresentational fashion. Here I sketch briefly how the philosophical traditions of phenomenology and pragmatism can provide such starting points for strong artifactualism—though I am in principle open to the possibility that other philosophical traditions can play this role equally well or even better.

As seen previously, Morrison and Morgan (1999) argue that, although models are tools, they cannot be ‘simple tools’: models must be representational tools, that is, they must represent some targets if they are to be meaningful and capable of teaching us about those targets. Their argument relies on the implicit assumption that there are such things as “simple tools” that are unable to teach us about things other than themselves. But the understanding of tools arising from both Heideggerian phenomenology and Deweyan pragmatism challenges this implicit assumption.

From Heidegger’s (1927/1962) perspective, tools, practices, and agents are inextricable from one another and are only properly understood in reference to one another. You can begin to understand a tool and what it is “about” by considering what it is made of: “Hammer, tongs, and needle, refer in themselves to steel, iron, metal, mineral, wood, in that they consist of these” (Heidegger 1927/2001, p. 100). But tools aren’t just meaningless lumps of matter built of metal, wood or plastic. For Heidegger, understanding the aboutness of a tool (or “equipment”) involves recognizing it as “something in-order-to.” This means, on the one hand, that a given tool is about what it is for, that is, it is about the practice (or “work”) it supports, as well as about other tools that also constitute the same practice. Heidegger affirms: “Taken strictly, there ‘is’ no such thing as an equipment. To the Being of any equipment there always belongs a totality of equipment, in which it can be this equipment that it is” (Heidegger 1927/2001, p. 97). A soccer ball is ‘about’, or refers to, the game of soccer (i.e., the practice) as much as it refers to goal posts, nets and cleats (i.e., other tools that contribute to the same practice): understanding the tool is constituted by understanding how it relates to these other tools, and how all work together in a specific practice.

Besides being about some specific practice, on the other hand, as “something in-order-to” a tool is also about its users. Heidegger claims: “The work produced refers not only to the ‘towards-which’ of its usability and the ‘whereof’ of which it consists: under simple craft conditions it also has an assignment to the person who is to use it or wear it” (Heidegger 1927/2001, p. 100). Tools are therefore ‘about’ us as much as they are about what they are for. Even mass-produced commercial goods, which are created for some average user rather than a specific individual, retain this basic referentiality: a tool is about us in that it is for us to do something with it, something that is “for-the-sake-of” and determined by the “totality of our involvements” (Heidegger 1927/2001, p. 116), or our “care structure.”

These various referential aspects of tools found in a Heideggerian account provide us with a particularly insightful way to make sense of how, as tools, scientific models can teach us without having to be (understood as) representations of some target. Traditional philosophical analyses take the fundamental meaningfulness or aboutness of models to be their reference to real-world targets: this is why understanding how models represent the world has been of central concern in the current modeling literature. But the Heideggerian perspective on tools motivates thinking of models first and foremost as “things in-order-to” that refer to the practices they are for and to the agents they are used by. Models are, of course, typically for guiding how we think and talk about some phenomenon, but this does not necessitate analyzing the model itself as being ‘about’ the phenomenon (in the sense of being a truth-evaluable representation of some ‘target’) any more than as about ourselves and our projects, goals, and concerns.

Classical American pragmatist John Dewey (1925) offers similar insights into how, properly understood, ‘simple tools’ are meaningful and instructive in the way an artifactualist wants to say models (as tools) are. For Dewey, a tool is always suggestive of its consequences: “Its perception as well as its actual use takes the mind to other things. The spear suggests the feast not directly but through the medium of other external things, such as the game and the hunt, to which the sight of the weapon transports imagination” (p. 103). But for Dewey this is not just a layer of meaning that the mind imposes on an otherwise meaningless ‘simple’ object: the suggestive nature of tools is not a matter of interpretation, but it’s an objective feature of the tool. This is because, in Dewey’s view, a tool is “intrinsically relational, anticipatory, predictive” (p. 153), it’s “a thing in which a connection, a sequential bond of nature is embodied” (p. 103), and by embodying its consequences, a tool is fundamentally also ‘about’ them: “[a tool’s] primary relationship is toward other external things, as the hammer to the nail, and the plow to the soil. Only through this objective bond does it sustain relation to man himself and his activities” (p. 103).

Importantly for the present discussion, Dewey suggests that this understanding of tools does not apply only to the practical affairs of everyday life (where we use hammers, forks and needles) but also to science and to the development of scientific knowledge. For Dewey, as for other pragmatists, science itself is on a continuum with what we might think of as ordinary problem solving: “The history of the development of the physical sciences is the story of the enlarging possession by mankind of more efficacious instrumentalities for dealing with the conditions of life and action” (Dewey 1925/1994, p. 12-13).

Dewey’s work thus already draws a link between our understanding of scientific instruments and our understanding of tools more generally. Inspired by his views, scientific models are to be understood not as ontologically sui generis entities (a special type of tool, different from ‘simple tools’ as Morrison and Morgan put it) but as additions to the incredibly varied toolkit humans already employ in our efforts to deal with the demands of life: for example, for millennia we have worked to secure our access to food by using watering cans and cold frames to extend growing seasons despite changing environmental conditions; to these we now add computational climate simulations, which further extend the spatiotemporal reach of our planning abilities. In this Dewey-inspired perspective—as was the case with the Heidegger-based view—understanding models as tools rather than as representations does not rob models of their meaning and aboutness: as ‘things that embody a sequential bond of nature’, tools (including models) are inherently and objectively meaningful for users engaged in particular practices. Crucially, these philosophical frameworks (and potentially other ones as well) can provide an intellectual fresh start in which models can be properly understood as meaningful and instructive and also as being on a continuum with ‘simple tools’.

3.5.2 Prolegomena to Future Strong Artifactualist Accounts

One task for future work, then, is to explore in greater detail how philosophical foundations such as the ones just sketched can support the development of an account of models that adopts strong artifactualism. Another central task, I propose, is to carefully consider what from the current philosophical way of thinking about models can be preserved. This involves reconsidering the philosophical vocabulary, in some cases redefining terms already used in analyses of modeling, and in other cases doing away with concepts that do not fit the strong artifactualist approach. Traditional notions such as ‘similarity’, ‘abstraction’ and ‘idealization’ are currently used with thoroughly representational meanings. But this need not be the case, and strong artifactualism motivates operationalizing these terms non-representationally.

‘Similarity’, for instance, although so intimately associated with certain accounts of the nature of the representation relation (e.g., Giere 1988, 2010, Weisberg 2012), easily accommodates a non-representational framing that is more appropriate to tools. Consider how ordinary tools can be similar to other objects in the sense of enabling the performance of the same action: for example, in the absence of a screwdriver, a butter knife can often get the job done just fine (despite my wife’s complaints). This is possible because the two are similar in an action-relevant way. Yet, there is no reason to think that this similarity entails anything representational, e.g. that one object is a representation of the other. In much the same way, taking seriously the idea that models are tools and artifacts, a model can advance scientific understanding of some real-world system by being similar to that system in some action-relevant way. This can occur when model-artifacts enable manipulations that are similar to manipulations of interest in some real-world system. Actively intervening in water flow rates in the Phillips hydraulic model, for instance, motivates thinking about how specific interventions like changes in tax or investment rates can affect the economy. But the action-relevant (interventionist) similarity does not entail that one is a representation of the other, just as the similarity between a butter knife and a screwdriver allows me to learn something about how to use the one by manipulating the other without requiring analysis in representational terms.

This very move, I propose, might also enable an artifactualist account to employ notions like ‘abstraction’ and ‘idealization’ without slipping into representational thinking and thereby adopting weak artifactualism.

Consider ‘abstraction’ first. Philosophers of science often speak of abstraction as the process of removing from a model details that, while true of the target phenomena, are irrelevant for particular purposes: abstract models can thus be seen as “minimal models,” models that represent only the crucial features of the target while neglecting or omitting—i.e., not representing—other noncrucial features (see, e.g., Weisberg 2012). But this representational connotation is not necessary, and the notion can alternatively be reframed in terms of action-relevant similarities and dissimilarities. A hammer has the perfect design for driving a nail into wood, but if I cannot find my hammer, a stone of the right dimensions and sturdiness can be improvised to meet simple hammering
needs. The stone in this scenario has the bare minimum features required for hammering, and to use one as a makeshift hammer is to, through a process of abstraction, create a tool that is minimally similar in an action-relevant sense.

While ‘abstraction’ is often framed in the current literature as the process of neglecting or omitting representational detail, ‘idealization’ is typically seen as a “departure from complete, veridical representation of real-world phenomena” through the addition of details known to be false (Weisberg 2012, p. 98; see also, e.g., Woods and Rosales 2010). But the same move toward action-relevant similarities is available here. Consider how the rise of the modern hammer from prehistoric hammerstones was a long process of adding elements to a simple tool to make it better suited to the same tasks and potentially more. Endowing the modern hammer with features known to be absent in primitive hammerstones (such as a handle and claw) made the hammer more dissimilar to hammerstones in some respects but also more effective and easier to manipulate, thus preserving action-relevant similarity.

As in these cases, we can describe ‘abstraction’ and ‘idealization’ in modeling as the introduction of action-relevant similarities and dissimilarities in the scientific tool, without thereby implying anything representational. There is little reason to say that my improvised stone is an abstracted representation of a hammer, or that the modern hammer is an idealized representation of primitive hammerstones, or even to describe dissimilarities as forms of misrepresentation. In modeling also, the action-relevant similarities and dissimilarities between model-artifacts and other systems (i.e., targets) enable scientists to think about interventions in those other systems by means of manipulating the model-artifact, and understanding this enactive logic (in Malafouris’ terms) does not require analysis in terms of representation and misrepresentation.

Other typically representational concepts resist this kind of re-operationalization and have no room within strong artifactualism. This is the case with the vehicle/content distinction, which is a paradigmatically representational distinction, as already seen. There is not much sense in talking about a hammer’s content, and for this reason even calling it a vehicle would be misleading, because one notion implies the other: instead, we more adequately understand a hammer as a tool by knowing how it gets concretely manipulated to get certain actions done. Similarly, understanding the artifactual nature of models requires careful consideration of how models are concretely manipulated and how these manipulations inspire specific interventions in real systems—but it’s hard to see how notions like ‘content’ and ‘vehicle’ would be necessary for this task.

I offer these as tentative illustrations of how certain philosophical concepts might be recast non-representationally in terms that are applicable to a strong artifactual analysis of models. Working out the details of artifactual operationalizations and identifying their limits should reveal what does and does not belong in the artifactualist vocabulary. Still, even as a preliminary sketch, this discussion already gives a glimpse of the promise of strong artifactualism. A leaner artifactualist account, free from representational assumptions, is attractive not only because it avoids the problems inherent to weak artifactualism (as seen in Section 4). Strong artifactualism is also attractive as a viable and promising alternative to more traditional philosophical approaches: importantly, it is an alternative that effectively dissolves conceptual puzzles that haunt representational analyses of models, such as the ‘problem of scientific representation’ and the ‘problem of misrepresentation’, by reframing similarities and dissimilarities in terms that are more appropriate to understanding tools, tool-users, and tool-using practices.

Chapter 4

An Ecological Approach to Scientific Modeling

4.1 Abstract

Gibsonian ecological psychology explains intelligent behavior in terms of an organism’s perception of ‘affordances’ in its environment. This ecological understanding of perception, action and cognition is uniquely positioned to support a philosophical understanding of the epistemic value of scientific models as tools. Here I propose that the epistemic success of modeling is best understood in terms of the affordances or action possibilities that models, as tools, make available to suitably-positioned embodied cognitive agents. This account develops strong artifactualism (Chapter 3) in a fashion that gives up anti-psychologism (Chapter 1). Importantly, it circumvents the challenges inherent to representationalism (Chapter 2) because it anchors the epistemic worth of modeling in the models’ affordances, which are agent-relative but mind-independent.

“Radical embodied cognitive science” (RECS) encompasses a number of theories that adopt an anti-representational approach to psychological phenomena: instead of traditional explanations in terms of an organism’s internal computational/representational processing of information, in the radical embodied view intelligent behavior is understood in terms of constraints imposed by bodily and environmental structure; from this perspective, we ask “not what’s inside your head, but what your head’s inside of” (Mace 1977). In this chapter I draw from ecological psychology, a particular approach within RECS, to develop what I see as the most promising approach to understanding models as tools. As I show, the ecological approach’s ontological and epistemological focus on “affordances” (i.e., action possibilities) as relational functional properties provides a way to harness the resources of RECS to articulate a strong artifactualist account of the sort I advocated in Chapter 3. Importantly, this ecological account of models circumvents the challenges inherent to representationalism (Chapter 2) because it anchors the epistemic worth of modeling in the models’ affordances, which are agent-relative but mind-independent.

I begin in section 2 with a primer on ecological psychology, introducing the relevant technical terminology and situating the ecological framework in its theoretical context. Section 3 gives a more detailed description of affordances, which I then apply, in section 4, to develop a strong artifactualist understanding of scientific models.

4.2 A Primer on Ecological Psychology

The label “ecological psychology” has been used to describe work in a variety of different scientific traditions. Historically, that was the name of the observational research program led by Roger G. Barker, predominantly from the 1940s through the 60s, which shed light on the extent to which the particular environment or “behavior setting” an individual is in at any given time explains the individual’s behavior (see, e.g., Barker, Kounin and Wright 1943; Barker and Wright 1951, 1954; Barker 1965, 1968). More recently, the label “ecological psychology” has come to be used informally, and in some contexts interchangeably with “environmental psychology”, to describe research on the affective, cognitive and other psychological effects of having contact with gardens, forests and other natural environments (e.g., Wells 2000; Tyrväinen, Ojala, Korpela, Lanki, Tsunetsugu and Kagawa 2014) as well as research on our attitudes toward nature and how best to promote pro-environmental behaviors like recycling (e.g., Cheng and Monroe 2010; Zelenski, Dopko and Capaldi 2015).

This chapter is concerned with neither of these ‘ecological psychologies’. Rather, here I will reserve this label for the distinct research tradition in experimental psychology initiated by James J. Gibson in his books The Senses Considered as Perceptual Systems (1966) and The Ecological Approach to Visual Perception (1979).1 This section presents some of the key themes in Gibson’s ecological vision for psychological science. I first provide a brief summary of the ecological theory of perception. I then discuss further theoretical and ontological aspects that are often ignored in more superficial treatments of the ecological theory of perception but which are crucial elements that make Gibsonian ecological psychology truly “ecological” in the relevant sense.

1Although they are strictly independent, there are interesting points of contact between Gibsonian ecological psychology and other types of “ecological psychology” such as the ones mentioned here: see discussion in, e.g., Heft (2001) and Sanches de Oliveira (2018).

4.2.1 The Ecological Theory of Perception

The central ideas in Gibson’s theory of perception are that perception is direct, active, and action-oriented.

The typical cognitivist view holds that perception is an inferential, constructive process in which, through some kind of computation, the mind uses sensory stimulation to build up internal representations of the probable causes of stimulation in the external world (Marr 1982, Fodor 1987). Gibson explicitly rejects this inferential, constructive, computational framing. In the ecological view, perception is direct in that our perceptual access to our surroundings is unmediated by internal representations and reconstructions: “Perceiving is an achievement of the individual, not an appearance in the theater of his consciousness. It is a keeping-in-touch with the world, an experiencing of things rather than a having of experiences” (Gibson 1979, p. 239; see also Michaels and Carello 1981). That is, perceiving is not a matter of experiencing internally represented reconstructions of the external world, but rather a matter of coming in direct contact with the world through exploratory behavior.

Reference to exploratory behavior in this last sentence already points in the direction of what it means for perception to be active. The traditional view of perception indicated above describes the perceptual process as beginning with the passive excitation of our sensory receptors: the perceiving subject is, therefore, ‘subjected’ to external stimuli impinging upon his sense organs. In contrast with this traditional view, for ecological psychologists perception is a success-term for something that organisms do, or, as Gibson put it in the previous quote, it is ‘an achievement of the individual’. Accordingly, if we want to understand visual perception, for example, it follows that studying the retina as a photoreceptor will, at most, provide an incomplete account of the phenomenon: “the eye is part of a dual organ, one of a pair of mobile eyes, and they are set in a head that can turn, attached to a body that can move from place to place”, and together, all of these elements make up our (visual) perceptual system (1979, p. 53). The same applies to all perceptual modalities: “The eyes, ears, nose, mouth, and skin can orient, explore, and investigate. When thus active they are neither passive senses nor channels of sensory quality, but ways of paying attention to whatever is constant in the changing stimulation” (1966, p. 4, emphasis added). In the ecological view, then, perception is not passive stimulation but the activity of a coordinated system of exploratory behavior: “perceiving is an act, not a response, an act of attention, not a triggered impression, an achievement, not a reflex” (1979, p. 149).

In addition to being direct and active, according to ecological psychology perception is also action-oriented. This amounts to a claim about what perception is for—namely, that it is for action, that it works in the service of action, just as much as action works in the service of perception. Gibson explains:

Moving from place to place is supposed to be “physical” whereas perceiving is supposed to be “mental,” but this dichotomy is misleading. Locomotion is guided by visual perception. Not only does it depend on perception but perception depends on locomotion inasmuch as a moving point of observation is necessary for any adequate acquaintance with the environment. So we must perceive in order to move, but we must also move in order to perceive. (Gibson 1979, p. 223)

And besides defining what perception is for, saying that perception is action-oriented also amounts to a claim about what perception is of. Traditional accounts hold that, in perceiving an object, we perceive discrete primary qualities relating to an object’s size, shape and composition and then have to analyze those properties in order to determine how we might relate to the object. Gibson proposed, instead, that we have direct perceptual access to what he called “affordances,” or the opportunities for action our environment offers us. Ordinary examples include the possibility to sit on (in the case of a chair), to pass in between (in the case of an aperture), or to cut with (in the case of a knife): rather than having to infer the utility of these objects, we directly perceive them as ‘sit-on-able’ or ‘pass-in-between-able’ or ‘cut-with-able’, so to speak. In Gibson’s view, then, perception is action-oriented in that it is for action and of action-relevant properties: in perception we directly and actively perceive the affordances of our environment.

4.2.2 Theoretical and Ontological Foundations

The summary of the ecological view of perception just sketched captures some of the main features of Gibson’s work that typically receive attention in psychology textbooks and in parts of the philosophy of mind literature. More often than not, however, ecological psychology is mischaracterized as being only a theory of perception. While it is true that Gibson was particularly interested in perception, his ecological approach was meant as a comprehensive framework for psychological science as a whole. Missing this bigger picture is not only intellectually irresponsible; it would also undermine the present aim of importing insights from ecological psychology into philosophy of science.

A good starting point to understand the Gibsonian ecological framework is to consider its theoretical scope and context. At the time of its inception in the 1960s and
70s, ecological psychology was meant as an alternative to both behaviorism and cognitivism. From a behaviorist perspective, the scope of psychology as a science was limited to observable or otherwise measurable behavioral responses in association with stimuli or reinforcements. This corresponded, in theory and in practice, to black-boxing internal processes, which were seen either as impossible to study scientifically or, for radical behaviorists, as nonexistent. In contrast with behaviorism, from the (then) emerging cognitivist perspective, the task of scientific psychology became to investigate the internal cognitive processing that occurs between sensory input and behavioral output—precisely what behaviorism had long neglected (see, e.g., Miller 2003). With the computer metaphor, cognition came to be seen as the internal computational processing of what had been obtained from, and would then be exhibited by, the peripherals.

The crucial point for present purposes, however, is that, while being different in these important respects, both behaviorism and cognitivism accepted the same theoretical model for scientific psychology, disagreeing mainly on what the focus of inquiry should be (see Figure 1).

Gibson rejected the theoretical and ontological assumptions endorsed by behaviorists and cognitivists alike. Instead, his vision for psychology was “ecological” in that it shifted the theoretical scope of psychology to the study of informational perception-action dynamics, grounded in an ontology of organism-environment systems as single ecological units. Consider first how the traditional model in Figure 4.1 draws a clear distinction between perception and action (i.e., between stimulus and response, or between input and output, in each case). As already indicated, Gibson saw the two as inseparable: perception is an action and it is for action; that is, we perceive by acting and in order to act, and no action is ever divorced from perception.

[Figure 4.1 diagram: (a) stimulus → response; (b) input → cognition → output]

FIGURE 4.1: The scope of psychological science indicated in red for (a) behaviorism and (b) cognitivism. Behaviorism black-boxed internal processing and instead studied measurable stimuli and responses. Cognitivism, in turn, shifted away from analysis of stimuli and responses (now conceptualized as inputs and outputs) and focused instead on the computational processes that might mediate the two. These differences aside, the overall schema is equivalent.

Now, Gibson also spoke of there being a mutuality or reciprocity between organism and environment. He claimed that “information about a world that surrounds a point of observation implies information about the point of observation that is surrounded by a world. Each kind of information implies the other” (1979, p. 75; see also Lombardo 1987). This means that as an organism acts/perceives, it generates ecological information, that is, relational information that is specific to both organism and environment, or the relation between the two.

Optic flow gives a good illustration of ecological information. Locomotion in a given direction at a given rate generates a pattern and rate of change in the optic array that is specific to locomotion in that direction at that rate (see Figure 4.2a). For example, as you walk down a hallway while looking straight ahead, an object on the right-side wall down the hallway (say, a door or a painting) will, at first, appear small and close to the center of your visual field; yet, as you keep walking straight ahead, the object will gradually occupy a greater proportion of your visual field and shift away from the center of your visual field, toward the right, until you walk past it. The direction and rate of displacement are determined by the direction and rate of your movement: if you walk backward, objects near the edge of your visual field will gradually converge toward the center; and the faster you move, the faster the displacement will be. This shows how ‘information about the world’ is also ‘information about the organism’. An organism’s activity is, at once, informative of the structure of the environment and informative of the structure of the organism’s activity: hypothetically, if you didn’t already know whether you were walking forward or backward, the pattern of visual change generated by your activity would tell you precisely how you are moving. Importantly, ecological information is also forward-looking in that it is informative of how you can move. As diving birds plunge down toward the water to catch fish, they adopt a streamlined posture, contracting their wings and thereby avoiding the potentially fatal injury that high-speed impact with the water surface would cause (see Figure 4.2b). This behavior has been shown to be visually guided by ecological information, namely the dynamic variable τ, or time-to-contact with the water (Lee and Reddish 1981). An organism’s exploratory behavior thus generates information that specifies not only the organism’s current relation to the environment, but also a range of possible forms of engagement, enabling the control of action to be informationally (i.e., perceptually) constrained.
In short, ecological information is informative of “affordances,” or the opportunities for action that the environment offers: water can afford both plunging and collision, and the behavior of diving birds is guided by their sensitivity to the information their behavior generates, which specifies the different possibilities they may take advantage of.
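For readers unfamiliar with the variable, a standard gloss of τ (my formulation, offered as an assumption rather than something stated in this chapter) is that, for an approaching surface, the time remaining until contact is optically specified, without any knowledge of distance or speed, by the ratio of the size of the surface’s optical projection to that projection’s rate of expansion:

\[
\tau(t) \;\approx\; \frac{\theta(t)}{\dot{\theta}(t)}
\]

where \(\theta(t)\) is the visual angle subtended by the approaching surface (here, the water) and \(\dot{\theta}(t)\) is its rate of dilation; for small angles and a roughly constant closing speed, \(\tau\) approximates the time-to-contact to which the birds’ posture adjustment is keyed.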

FIGURE 4.2: Illustration of ecological information. An organism’s relation to the environment generates information that is specific to the organism’s relation to the environment, as in the case of optic flow in (a). Ecological information is dynamic and enables the prospective control of action, as in the case of diving birds guiding their wing position by visual information of time-to-contact with water (b).

Ecological psychology is, therefore, “ecological” in related theoretical and ontological senses. First, the theoretical scope of psychology is organism-environment relations (rather than the internal processing of cognitivism or the behavioral conditioning of behaviorism). And second, this is based on the ontological view that there is ecological information inherent to organism-environment relations, which organisms generate through perception-action dynamics and to which they are directly sensitive and responsive (see Figure 4.3).

Understanding these related theoretical and ontological commitments is of central importance for understanding the ecological theory of perception. In saying that perception is direct, active and action-oriented, ecological psychologists are not saying that the reception of stimulation or sensory inputs (i.e., the leftmost box in diagrams 4.1a and 4.1b) is direct, active and action-oriented.

[Figure 4.3 diagram: perception-action ↔ information ↔ environment]

FIGURE 4.3: The theoretical scope (in red) and ontological basis of ecological psychology: the lawful informational reciprocity of organism and environment. An organism’s perception-action dynamics generate information that, in turn, specifies the same dynamics. In this sense, ecological information is relational—hence the bidirectional arrows. (Inspired by the diagram in Turvey and Carello 1986, p. 143)

Rather, the entire explanatory schema is rejected, and, in its place, a view is adopted in which psychological phenomena are situated at the interface of perception-action dynamics and environmental regularities and constraints. At the same time, these theoretical and ontological commitments are only properly understood in concert with the ecological theory of perception. It would be a mistake to think that ecological psychology adopts an information-processing schema and merely posits that the type of information we rely on is ecological information. The assumption of information processing is only required if we see information as always ambiguous or impoverished: only if there is a “poverty of the stimulus” (Chomsky 1959, 1986) is the internal enrichment of that stimulus called for. Given that ecological information is rich and meaningful, perception is better understood as a matter of sensitivity and discrimination rather than information enrichment through internal processing (on ecological psychology and information processing, see Richardson et al 2008, Michaels and Palatinus 2014; and on perception as differentiation vs. enrichment, see J. Gibson and E. Gibson 1955, and E. Gibson 2003).

4.3 Getting Clearer on Affordances

As my goal in this chapter is to draw from ecological psychology to articulate an affordance-based philosophical understanding of scientific models as tools, it will be useful first to think more carefully about the concept of “affordance,” as well as its empirical support and practical applications.

4.3.1 Definition

The English language has convenient words for describing some things by what we can do with them: food is edible, clean water is drinkable, and your bowl is microwavable but not recyclable. Yet, regardless of which convenient words a given language may have or lack, things still enable a variety of uses and forms of interaction. Gibson coined the term “affordance” to capture this characteristic of things: the affordances of the environment are, as he says, what it “offers the animal, what it provides or furnishes, either for good or ill” (1979, p. 127). Some surfaces around us afford walking on, sitting on, or leaning against; some objects afford grasping, throwing, or cutting with. Paper is “write-on-able” and a pen is “write-with-able,” but we don’t need these awkward made-up adjectives to recognize that these are uses the objects in question afford. And while we typically think of positively-valenced possibilities, affordances can just as well be negative or ‘for ill’, as Gibson put it. Basic affordances of fire, for example, include illumination, warmth, and injury to the skin; “once control is learned” fire also affords cooking, boiling water, glazing clay, reducing minerals to metals (1979, p. 39), and to this list we can add slash-and-burn agriculture and injury to others by combining fire with gunpowder. Similarly, trees can afford climbing and shelter from sun or rain, as well as injury through collision, and cutting down to clear an area for agriculture or to extract wood with which to create shelter, tools, or fire.

Influential accounts of the ontological status of affordances define them as relations. Chemero (2003), for example, frames affordances as the interplay of subjective skills and environmental characteristics, or “relations between the abilities of organisms and features of the environment” (p. 189; see also Chemero 2009). Along similar lines but with an eye to accommodating sociocultural variation, Rietveld and Kiverstein (2014) define affordances as “relations between aspects of a material environment and abilities available in a form of life” (p. 335). There are other competing accounts.2 Still, in what follows I will stick with a generic relational characterization as I think it is the most promising and also the closest to Gibson’s original description of affordances as “properties of things taken with reference to an observer” (1979, p. 137).

A crucial point for the purposes of this chapter is that, understood as relational properties, affordances are both agent-relative and mind-independent. As with tango, it takes two for a relational property to exist. You can only have the property of being a brother, for example, if there is at least one other person who you are a brother to. Similarly, affordances are relational properties in that an object does not have affordances in and of itself, but necessarily only for someone or other. Variation can be phylogenetic as well as ontogenetic. Air affords breathing, and water affords drinking and swimming as well as drowning—but these opportunities exist only for terrestrial animals, and not for fish, which can breathe underwater but do not survive outside it.

2See, e.g., the views of affordances as dispositional properties of objects (Turvey 1992, Scarantino 2003) and as resources that exert selection pressure (Reed 1996; but cf. Withagen and van Wermeskerken 2010).

A knife affords cutting, piercing and scraping, and this surely depends on the knife’s own characteristics: it must be rigid, pointed, and have “a sharp dihedral angle” (Gibson 1979, p. 133). But it also depends on the user: a knife affords cutting, piercing and scraping only for animals capable of a prehensile grip. At the same time, a chair that is ‘sit-on-able’ for a given person now probably did not afford sitting to the same person when she was a baby: “knee-high for a child is not the same as knee-high for an adult, so the affordance is relative to the size of the individual” (Gibson 1979, p. 128). As opportunities for action, then, affordances are necessarily agent-relative because they are opportunities an object offers for someone to act.

Although affordances are agent-relative—as Gibson put it, they are “relative to the animal” and “unique for that animal” (1979, p. 127)—they are still thoroughly “objective, real, and physical” (p. 129). The opportunities for action an object offers exist for an animal even if the animal does not act on them. In fact, because different parts of our environment afford a number of different uses at any given time, we are guaranteed to act on only a narrow range of the affordances available to us. Not only that, but presumably affordances exist even independently of our perception of them.3 Manufactured goods are built so as to offer particular action possibilities to the intended users: buttons, switches and knobs, for example, are for pushing, flipping and rotating, respectively (see Figure 4.4). The modes of engagement in these cases are mind-dependent, but only in the trivial sense that they are intentionally chosen by the objects’ designers: still, whether the intended use is in fact possible or not is an objective, mind-independent matter. The match or mismatch between product characteristics and user abilities is something designers always run the risk of getting wrong. It is also something that can be exploited for artistic purposes, as the “deliberately inconvenient everyday objects” created by architect Katerina Kamprani make clear.4

3To deny this would amount to thinking that things only exist when we perceive them, which is a form of idealism: see Chemero 2003, 2009.

FIGURE 4.4: Affordances are agent-relative but mind-independent opportunities for action. On the one hand, an object does not have affordances in and of itself, but only for some agent. On the other hand, however, affordances are matches between characteristics of the agent and of the object, and as such they exist objectively, independently of the agent’s acting on them or even being aware of them. Buttons, switches and knobs objectively afford certain uses, but only to humans with a particular level and type of manual dexterity (and not to other humans nor to, say, elephants).

In sum, while the opportunities for action an object offers are relative to some particular organism, the match or mismatch between the organism’s abilities and features of the object is mind-independent and holds objectively—regardless of whether the organism in question is human or not, whether that organism acts on or perceives the affordance or not, and whether the object in question is man-made or naturally-occurring.

4Pictures of Katerina Kamprani’s “uncomfortable objects” are available on her website: https://www.theuncomfortable.com

4.3.2 Empirical Support and Applications

These ideas are all well and good, but they would be of little consequence if affordances were merely hypothetical or theoretical constructs with no empirical backing. Thankfully, years of experimental research support the claims that affordances are real and available for direct perception.

First of all, there is a wealth of evidence suggesting that people are very good at perceiving affordances for themselves. In an influential study, Warren (1984) found that the boundary between climbable and unclimbable stairways corresponds to a fixed ratio between riser height and leg length (i.e., a relational property) and that participants were perceptually sensitive to that ratio. Warren and Whang (1987) found similar results for the visual guidance of walking through apertures, with an aperture’s passability corresponding to an objective body-scaled ratio that is visually perceivable. Other studies have shown that our perceptual access to such action boundaries fixed at body-scaled ratios gets calibrated with changes in body scale: this varies from the short-term effect that wearing a tall wooden block under one’s shoes has on the perception of opportunities for sitting and stair climbing (Mark 1987) up to the comparatively longer-term effect of bodily changes during pregnancy on (the perception of) the passability of apertures (Franchak and Adolph 2014). Importantly, some of these and other studies have found that participants were wildly inaccurate when asked to estimate absolute properties (such as heights and widths in centimeters or inches), which suggests that the perception of affordances (i.e., agent-relative properties) is more fundamental than, and independent from, the perception of non-agent-relative properties.

Second, research suggests that people are good at perceiving affordances not only for themselves but also for others. This is the case, for example, when we engage in joint actions. A series of studies found that, just as a person tasked with moving wooden planks across a room will shift from one-handed to two-handed grasp of the planks at predictable body-scaled ratios, so will pairs of participants tasked with the same goal shift from individual to joint grasp at similarly predictable relational transition points (Richardson et al 2007; Isenhower et al 2010). Similar results have also been found for participants walking through an aperture individually or in pairs (Davis et al 2010). But we do not even need to be engaged in joint action to be sensitive to other people’s affordances. Stoffregen, Gorday, Shen and Flynn (1999) found that, through observation alone (no interaction or cooperation), participants were able to perceive the maximum and preferred sitting heights for actors of varying heights, not only when viewing the actors live standing next to a chair but also when viewing kinematic point-displays of the scene. In another study, Ramenzoni, Riley, Davis, Shockley and Armstrong (2008) found that participants were able to estimate the maximum height an observed actor could jump to reach an object: interestingly, in one condition the experimenters added weights to the actor’s legs, and found that participants were still able to perceive the actor’s reach-with-jump affordance because the actor’s walking pattern while wearing the weights generated information about the actor’s changed ability to reach-with-jump.
And in a related study, Weast, Shockley and Riley (2011) found that basketball players were more accurate than non-basketball players at perceiving an actor’s maximum reach-with-jump but were equal to non-basketball players at perceiving other affordances, which suggests that training and expertise can improve the perception of skill-related affordances for others.

Lastly, and combining these two strands of findings, studies also show that people are able to perceive nested future affordances, both for themselves and for others. In a series of experiments, Wagman, Caputo and Stoffregen (2016a, 2016b) gave participants different objects with which to perform certain tasks, and instructed them to combine those objects in whatever way they found suitable to create tools for accomplishing the different tasks. The results suggest that participants were sensitive to nested affordances, perceiving both affordances for tool assembly and affordances for tool use. And in another study, Wagman, Stoffregen, Bai and Schloesser (2017) found that participants could similarly perceive the nested affordances available to an observed actor, being prospectively sensitive to how certain manipulations (e.g., using a tool) would enable the actor to perform specified actions.

Affordances are thus clearly well supported empirically. They are also uncontroversial in a number of areas outside experimental psychology, with the concept having been fruitfully adopted in fields as varied as industrial design (e.g., Norman 1988, 1999, You and Chen 2007, Maier and Fadel 2009), architecture (e.g., Koutamanis 2006, Maier, Fadel and Battisto 2009, Zinas and Jusan 2014), pedagogy and educational research (e.g., Billett 2002, Cheng and Tsai 2013, Fiskum and Jacobsen 2013, Pella 2015), human-computer interaction (e.g., Martin, Bowers and Wastell 1997, McGrenere and Ho 2000, Hartson 2003), sport science (e.g., Araujo et al 2009, Silva et al 2013, Seifert, Button and Davids 2013), and communication and media studies (e.g., Majchrzak et al 2013, Nagy and Neff 2015, Evans et al 2016), to mention just a few. My contention is that it can significantly contribute to philosophy of science as well.
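To make the idea of an affordance as a body-scaled ratio concrete, here is a minimal sketch of the stair-climbing case in the spirit of Warren (1984). The critical riser-to-leg-length ratio of roughly 0.88 is the value commonly cited in connection with that study; treat it, along with the function name and sample measurements, as illustrative assumptions of mine rather than as data reported in this chapter.

```python
# Illustrative sketch: an affordance operationalized as a body-scaled ratio
# (in the spirit of Warren 1984). The critical ratio of ~0.88 (riser height /
# leg length) is the commonly cited value; treat it and the sample measurements
# below as assumptions for illustration, not as data reported in this chapter.

CRITICAL_RISER_TO_LEG_RATIO = 0.88

def affords_climbing(riser_height_cm: float, leg_length_cm: float) -> bool:
    """A stair affords (bipedal) climbing for an agent when the riser height,
    scaled to that agent's leg length, falls below the critical ratio."""
    return (riser_height_cm / leg_length_cm) < CRITICAL_RISER_TO_LEG_RATIO

# The same stair, different agents: the affordance is agent-relative,
# but whether the ratio holds for a given agent is an objective matter.
stair_riser_cm = 40.0
for agent, leg_length_cm in [("adult", 80.0), ("small child", 40.0)]:
    verdict = "affords" if affords_climbing(stair_riser_cm, leg_length_cm) else "does not afford"
    print(f"The stair {verdict} climbing for the {agent}.")
```

The point of the sketch is only that the same environmental feature can afford an action for one agent and not for another, while whether the relation holds is not a matter of anyone's beliefs or perceptions.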

4.4 An Affordance-Based View of Scientific Models as Tools

A growing trend in philosophy of science is to think of scientific models as instruments, tools, and artifacts (see, e.g., Morrison and Morgan 1999; Keller 2000, 2002; Knuuttila 2011, 2017; Isaac 2013; Currie 2017). In chapter 2 I showed how prominent artifactualist accounts adopt what I call ‘weak artifactualism’, combining an understanding of models as tools with the representationalist assumption that models can teach us about the world only because, besides being tools, they are also representations of target phenomena. The ecological theory of affordances reviewed here is uniquely positioned to support what I call ‘strong artifactualism’, that is, an understanding of scientific models as tools that is entirely free of representationalist assumptions.

Put briefly, the general idea is that, like ordinary tools and artifacts, scientific models have affordances: they offer a range of opportunities for action that scientists can more or less competently and more or less comprehensively take advantage of, for a number of different purposes. Like the affordances of any other object or part of the environment, the affordances of models are both agent-relative and objective. They are agent-relative in that the opportunities for action a given model offers are unique to individuals with certain abilities and sensorimotor makeup, and the same model may not afford the same uses to other agents. But the affordances of models are also objective in the sense that they are real, physical, and mind-independent: when present, they are present whether or not the relevant agent is aware of them and/or makes use of them; and conversely, when certain affordances are absent, they are absent no matter how hard the agent might wish them to exist. In the remainder of this chapter I flesh out this affordance-based strong artifactualist view and highlight some of its strengths and benefits.

4.4.1 A Tale of Two Other Relations

Traditional philosophical analyses of modeling focus on model-target relations, conceptualizing the representational mapping between model and target either as mind-independent or, more often, as mind-dependent (see Figures 1 and 2 in chapter 1). But from the ecological, affordance-based perspective I advocate, two other relations define modeling and therefore become focal.

The first defining relation is the user-model relation. Models are designed to, in the technical sense, afford certain uses of interest to intended users: for this reason, in order to make sense of modeling we must first consider the relation that holds between models, as tools with affordances, and modelers or users (see Figure 4.5).

[Figure 4.5 diagram: user ↔ affordances ↔ model]

FIGURE 4.5: As tools, models have affordances, i.e., they offer opportunities for action to particular users. The bidirectional arrow highlights the nature of affordances as relational properties.

As an example of a concrete model, consider the Phillips Hydraulic Computer, a machine comprising pipes and tanks through which water flows, which was used in the 1940s and 50s at the London School of Economics to model the flow of money in the economy and relations such as the ones between savings, investment, and consumption (see Figure 4.6). The Phillips machine affords intervening in the water volume of its various tanks by manually adjusting how much water flows through certain pipes; it thus also affords observing and testing hypotheses about flow dynamics, as well as predicting and explaining patterns of change in the relative water volume of the machine’s various tanks.

FIGURE 4.6: A. W. H. Phillips and the MONIAC (Monetary National Income Analogue Computer), also known as the Phillips Hydraulic Computer.

But affordances are not present only in concrete objects: the same analysis applies even in the case of models traditionally considered ‘abstract’, such as mathematical equations and simulations. Consider the Lotka-Volterra model, for example, which is the following pair of first-order differential equations widely used for modeling predator-prey relations in biological populations:

\[
\frac{dx}{dt} = \alpha x - \beta x y \tag{4.1}
\]
\[
\frac{dy}{dt} = \delta x y - \gamma y \tag{4.2}
\]

The equations afford a number of interventions. You can change the initial values of the interacting variables (i.e., x and y) and you can also change the parameter values that determine how they change and interact (i.e., α, β, δ and γ). Given the values assigned to variables and parameters, the equations can be iteratively solved to yield results for incremental time-steps (i.e., t values). Using pre-existing visualization conventions and techniques, modelers can not only plot the time-evolution of the entire system given fixed initial conditions, but they can also explore changes in dynamics for a range of different initial conditions (see Figure 4.7). By offering these action possibilities the model thus also affords the user a range of opportunities for observation, prediction, explanation and hypothesis testing. And even though a mathematical model may seem to be a purely ‘abstract’ structure, it is important to note that the model will always be used concretely, on paper or, more likely, on the computer screen: this makes it possible for there to be specific concrete opportunities for action that the model offers, as well as specific opportunities it doesn’t.

The forms of engagement that are possible thus vary with each modeling technique. But, in every case, the opportunities for action that the model offers are always relative to some user. A model’s affordances cannot be reduced to properties that the model has in and of itself. The features of a hammer do not by themselves make the hammer afford hammering: it is the relation between the hammer’s features and the characteristics and abilities of certain users that makes the hammer afford hammering to those users.

FIGURE 4.7: Visualizations from the Lotka-Volterra mathematical model. (a) Time evolution of variables x and y given fixed parameter values. (b) Phase-space plot depicting dynamics for a range of initial y values given the same set of fixed initial values for the variable x and for the parameters.

So it is with models. To be sure, whether the model affords a given type of manipulation or not is an objective matter. But the affordance is still agent-relative, such that the user’s sensorimotor makeup and abilities are as necessary for the model to afford some use as are the model’s intrinsic features.

It is important to note, however, that the opportunities for action that models offer typically aren’t just fortunate coincidences. As tools, models are designed expressly to enable some use(s) of interest. Models of different types are built to afford a range of manipulations, interventions and interactions that scientists find illuminating in a given research context. It follows that we cannot fully make sense of modeling without taking into account the contexts and interests that make particular model-user (affordance) relations relevant.
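To make the talk of interventions on the equations more concrete, here is a minimal sketch of the kind of manipulation described above: assigning initial values to the variables and parameters and iteratively solving the equations over incremental time-steps. The numerical values, the simple Euler scheme, and the function name are illustrative assumptions of mine rather than anything reported in this chapter; feeding the resulting trajectory to a plotting library would yield time-evolution and phase-space visualizations of the sort shown in Figure 4.7.

```python
# Minimal sketch (illustrative values and a simple Euler scheme, assumed here):
# iteratively solving the Lotka-Volterra equations for incremental time-steps.

def lotka_volterra(x0, y0, alpha, beta, delta, gamma, dt=0.001, steps=100_000):
    """Forward-Euler integration of dx/dt = ax - bxy and dy/dt = dxy - gy."""
    x, y = x0, y0
    trajectory = [(0.0, x, y)]
    for i in range(1, steps + 1):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        trajectory.append((i * dt, x, y))
    return trajectory

if __name__ == "__main__":
    # One "intervention": choose initial prey/predator values and parameter values...
    run = lotka_volterra(x0=10.0, y0=5.0, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4)
    # ...then observe the resulting oscillations at a few of the time-steps.
    for t, x, y in run[::20_000]:
        print(f"t = {t:6.1f}   prey x = {x:7.2f}   predators y = {y:7.2f}")
```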

This brings attention to the second relation that, in my affordance-based strong artifactualist view, defines modeling: the relation between the modeler and the real-world phenomena under investigation, or the user-target relation for short. If a robotic model is built to afford certain manipulations and to display a particular behavior, this may be scientifically interesting, for instance, as a way to test the viability of a cognitive architecture some living organism is hypothesized to have. But the same model (and therefore the same user-model relations) would be irrelevant in other research contexts (characterized by different user-target relations). The Khepera robot, for example, has been useful for some modelers studying cooperative behavior in ants (Krieger, Billeter, and Keller 2000) and phonotaxis in crickets (Reeve, Webb, Horchler, Indiveri, and Quinn 2005), yet it would arguably be of little use if those researchers decided instead to investigate the molecular genetics of ants or crickets.

As these and other cases illustrate, scientific models are indissociable from the research context in which they are built and used. The questions that guide a model-based project play a central role in shaping what the features of the model need to be so that the model can be useful for researchers attempting to answer those questions. It thus follows that the affordances at play in scientific modeling aren’t limited to the affordances of models themselves, but also include the affordances of target phenomena (see Figure 4.8).

The affordances of models can be particularly illuminating in the context of investigations about the opportunities for action offered by a number of phenomena in the world. Consider once again the Phillips Hydraulic Computer. As already seen, the Phillips machine affords manipulating the water flow between tanks, supporting an understanding of fluid dynamics and, with it, enabling the prediction and explanation of a number of changes in water volume at different parts of the system.

[Figure 4.8 diagram: user ↔ affordances ↔ model; user ↔ affordances ↔ targets]

FIGURE 4.8: Scientific modeling is constituted by both user-model relations and user-target relations.

These opportunities for intervention and manipulation were relevant and illuminating for economists in that context because of corresponding affordances that they were interested in understanding in real economic systems. Economists can observe fluctuations in savings, spending, investment, taxation and so on, and they want to know how tweaking one part of the system can lead to changes in other parts. As such, given the conceptualization of money as a finite resource that flows through the economy in the path of least resistance, the Phillips machine made it possible, by analogy, to test hypotheses (e.g., about the relations between different parts of the economy), to explain past events (e.g., a rise in investments or a drop in spending), to predict the effect of certain interventions (e.g., changes in taxation), and so on. Put simply, then, the affordances of the model were interesting and useful because of how they contributed to the agents’ understanding of affordances in the system that was the target of investigation.

Modeling with the Lotka-Volterra equations, for another example, can similarly be understood in terms of a match between user-model and user-target relations. The particular affordances or opportunities for action scientists are interested in, in a given target phenomenon, inform which opportunities for action will be most useful in the model and which will, for this reason, be built into the model. As seen above, the Lotka-Volterra model has been widely influential in the study of predator-prey relations. But the model is much more useful for the biologist who studies population dynamics than for the biologist who seeks to understand how particular predator and prey species relate to each other given, say, their comparative anatomy and physiology. The biologists’ focus on opportunities for observing, predicting, explaining and intervening on population-level dynamics thus makes the affordances of the mathematical model illuminating in ways that they presumably would not be for the biologist investigating opportunities for intervention at the molecular level. In each case the scientists’ interests in particular opportunities for acting on target phenomena inform which types of models (with which affordances) they create to support understanding of the target.

A last point that is important to make clear concerns how, in bringing attention to both user-model and user-target relations, the affordance-based strong artifactualist view I am proposing here promotes a shift away from thinking of models as models ‘of’ to thinking of models as models ‘for’.5 The starting point for traditional representational analyses is the idea that scientific models are always models of some target system or another: as seen in chapter 1, this draws attention to the model-target relation and gives rise to the philosophical task of elucidating what the nature of that relation is such that models represent their targets. In contrast, the affordance-based strong artifactualist analysis draws attention to the fact that, as tools, models are first and foremost models for—‘for’ some use and ‘for’ some user.

5The labels ‘model of’ and ‘model for’ have been used in different ways in different debates, including, for instance, to refer to competing styles of modeling that may be simultaneously used within a single research context. For this and other uses of the model of/for distinction see, e.g., Keller 2000, 2002, Gouvea and Passmore 2017, Ratti 2018.

Models are always for what they afford: they are for the opportunities for action they offer. The ‘what for’ of a model can be given a thin or a thick description.6 Besides affording, under a thin description, manipulating water flow or solving for different values, models afford, as we have seen, testing hypotheses, explaining past events and predicting future events. To these we can add a variety of different uses under a thick description, such as generating understanding, enabling interventions, designing new experiments, simplifying complex problems, and unifying apparently disparate phenomena.

Importantly, models can only be for uses such as these for someone. On the one hand, the affordances of models are agent-relative: models are designed to be used by scientists with specific abilities. On the other hand, the affordances of targets—responsible for determining which model-affordances matter—are also agent-relative. As such, a model can never explain or shed light on some target directly and by itself: a model can only be for contributing to inquiry in the context of research on some phenomenon because it enables someone, through manipulating the model, to answer a range of questions in that context. As ‘models for’, models are, of course, typically also for, under a thick description, guiding how we think and talk about some phenomenon, but this does not necessitate analyzing the model itself as being ‘of’ the phenomenon in the sense of being a truth-evaluable description of some representational target. As tools, models are useful ‘for’ a range of goals and ‘for’ some people interested in pursuing those goals. These goals and interests are typically sidelined in analyses of models as models ‘of’ some particular phenomenon. By situating models, as tools with affordances, in a larger system comprised of user-model and user-target relations, the strong artifactualist view advocated here brings these goals and interests once again to the forefront.

6Here I am following the thin-thick distinction introduced by Ryle (1968). What is the difference between ‘blinking’ and ‘winking’, or between moving your hand so as to ‘inscribe letters on paper’ and to ‘sign a peace treaty and end a war’? Each pair provides, respectively, a thin and a thick description. But rather than being descriptions of different events, the thin and thick versions describe the very same event.

4.4.2 Strengths

Wide Applicability

A major advantage of the affordance-based strong artifactualism I propose here is its ability to capture the wide variety of models and modeling techniques that are used across the physical, life and social sciences.

Some of the models scientists use are physical structures constructed for the purposes of model-based research: this includes the Phillips Hydraulic Computer and the robotic ants and crickets already mentioned, as well as scale mock-ups. Other cases involve the construction of abstract mathematical structures and their exploration through concrete embodiment in computer simulations. Other cases, however, do not consist in the construction of a brand new man-made object, but rather in finding a pre-existing natural object that can support a given model-based research project. Model organisms such as worms, fruit flies, and mice are a good example of this: even if they are artificially shaped through selective breeding and more invasive techniques, they aren’t things that humans have built from scratch in the same way that we have built robots and equations from scratch. Model organisms cannot be straightforwardly accounted for in traditional representational terms as ‘models of’ some target, but they are readily amenable to explanation as tools with affordances. Affordances account for the meaning of objects, surfaces, and parts of the environment regardless of whether those are natural or man-made. For this reason, an affordance-based account of models as tools can accommodate the wide range of models we find in science, applying to more-or-less natural cases like model organisms just as well as it does to concrete and abstract man-made structures.

Discovery and Transfer

Another key advantage of affordance-based strong artifactualism is its ability to shed light on certain features of scientific discovery in the context of modeling.

Model organisms such as mice have been used in a host of studies aimed at understanding phenomena as diverse as learning, nutrition, genetics and cancer. Differential equations like the Lotka-Volterra model, widely known for their application in biology, were initially used in chemistry and have now made their way to applications in economics. And, traveling in the opposite direction, game-theoretic models first developed in economics have come to be used in biology as well. Cases like these show how models often get used in novel ways in novel contexts—something an affordance-based account of models as tools is particularly good at explaining.

Tools are by nature versatile. A hammer affords grasping, swinging and driving a nail into the wall, and it also affords removing a misplaced nail with the claw; but on a windy day a hammer also affords use as a paperweight. To be sure, tools are designed to afford particular uses of interest. But as with any surface and part of the environment, the relation between an object’s features and the abilities of an interacting user may be such that a variety of other action possibilities are present, even if unintentionally, and those action possibilities may come to be exploited as different circumstances arise.

From the representationalist perspective, it can be challenging to explain how a model comes to be used in novel ways within a given discipline or even in other sciences: after all, in this view models are ‘models of’ particular phenomena (and, therefore, not of other phenomena). Affordance-based strong artifactualism, in contrast, easily accommodates this common feature of science as a natural consequence of model-based research, understood as the development and use of a range of affordances offered by a model that are considered relevant in a given inquiry context: as happens with ordinary tools, it is not at all surprising that models will offer unexpected affordances that may make the model useful in new ways and even in new contexts.

Limits, and Within- and Between-Subject Variation

Affordance-based strong artifactualism is also uniquely positioned to account for modeling differences between the sciences as well as for the fraught relations between scientists, policy makers and the general public.

The previous point emphasized how, given the objective character of affordances, models may turn out to have unexpected affordances that make them useful in unforeseen ways and in novel contexts. But, of course, there are limits to the applicability of models: not all models come to be used in all sciences to study all kinds of phenomena. This is to be expected since, just as ordinary tools afford only a range of uses, there will also be a limited range of opportunities for action that a model or modeling technique offers. Because affordances are agent-relative, these limits in application are in part due to the people using a given model, i.e., due to the users’ specific abilities and interests.

Becoming a scientist requires years of sustained effort and intentional preparation—it requires learning, or, in Gibson’s terminology, the “education of attention” (1979, p. 254). One needs skill in order to perceive affordances that are already present as well as to develop new ones, as is evident in the case of highly complex mathematical models, which can strike the uninitiated as gibberish. So there are radical differences an individual scientist undergoes over time, that is, before and after scientific initiation. But there are important differences between scientists as well. Distinct research traditions in different disciplines have different methods as well as different theories and conceptual frameworks with which to understand the world. Naturally, training within a particular discipline or tradition constrains one’s perception in unique ways, making certain features of the world salient (i.e., features of both the target phenomena of investigation and of the models that are built and used to investigate them).7 This helps to explain the intra- and interdisciplinary differences we observe in science. Methods and tools that appear essential to the behavioral ecologist may be dispensable to the molecular biologist and useless to the physicist, and vice versa. Although many of the same affordances exist for members of the same species (i.e., because of species-wide characteristics, including abilities and sensorimotor makeup), some will be uniquely present and available to people with training and experience in one domain of scientific research but not to those in another domain.

The relational nature of affordances also helps to shed light on at least some of the many difficulties attending the translation of scientific findings into public understanding and public policy. If differential training leads to meaningful differences across disciplines within science, it makes sense that there would be even greater differences between scientists and non-scientists in terms of what models and targets afford.

7Constraints are not necessarily negative. Training and experience can also impose a number of enabling constraints on perception: consider, e.g., the heightened discriminatory skill of wine-tasters, chicken-sexers, art critics, and other specialists.

Notice how in the foregoing I have been speaking of user-model and user-target relations (see also Figure 4.8) instead of scientist-model and scientist-target relations. This is because the affordance-based view of models as tools I am proposing here does not entail that models are only useful for scientists, nor that real-world systems and phenomena only offer opportunities for action to scientists. On the contrary, just as ordinary tools and parts of the environment will always afford some use or other to anyone, it is to be expected that models and real-world systems will afford at least some uses to any given person. The difficulty lies in the fact that the affordances for a non-expert will often be different from the affordances that exist for scientists.

For an example in which the differences are benign, imagine that a scientist working with a concrete model like the Phillips Hydraulic Computer brings her child to the lab one day. To that child, the Phillips machine might most readily afford play, e.g., by adding a drop of food coloring in different tanks to see where the colors mix, or adding different breeds of fish to turn the machine into an aquarium. These fun uses may be all that the machine affords to the child, and convincing the child that the model motivates some policy change—say, making changes in taxation to promote an increase in consumption—requires so much background knowledge that it would be a very challenging task, if at all feasible.

Climate modeling is a more dramatic example. Climate models combine mathematical equations with computer simulation techniques to afford the generation of predictions of future states based on current climate data, and this can be illuminating in the context of understanding local and global climate trends and negotiating how strict we want pollution regulations to be, for instance. The difficulty with turning scientific knowledge into effective public policy to address climate change, I propose, arises at least in part from a mismatch between, on the one hand, scientist-model and scientist-target relations, and, on the other, the relations that policy makers, lobbyists and the general public may have with the models and with the real-world systems in question. Scientists have a special understanding of how climate simulations work and, with it, of the opportunities for action in local environments and their impact on the global climate. Some non-experts with different backgrounds may be in a position to grasp the basics and agree with scientists, but others may fail to grasp the basics and either take on faith what the scientific community says or reject science in favor of what is most readily perceptually available to them: forests do afford cutting, rivers do afford dumping toxic chemicals (this stuff needs to go somewhere, right?), and so on.
This is not meant to exculpate climate-change deniers, nor to deny the ideological (rather than factual) character of some deniers’ rejection of climate change.8 But in identifying at least part of why it is difficult to make scientific findings understandable to non-experts (namely, a mismatch in user-model and/or user-target relations), this account also points to a path for mitigating the gap between science and public policy: better, more effective and engaging basic science education, in which students develop facility with methods through hands-on modeling rather than being merely exposed to facts (i.e., outcomes of those methods) in a vacuum.

8Though, as advocates of the Strong Programme in the sociology of scientific knowledge have argued (e.g., Bloor 1976, Brown 1984), it follows from the principle of symmetry that, if ideological factors are part of the explanation of what goes wrong in anti-science and bad science, then ideological factors are also part of the explanation of what goes right in good science: science is never ideologically neutral.

Dissolving the Problem of Misrepresentation

For philosophers of science, one of the most puzzling aspects of modeling concerns the contribution that so-called ‘misrepresentation’ makes to scientific knowledge. From the representationalist perspective, the differences between a model and its target are typically described as falsehoods, or ways in which the model falls short of providing true descriptions or depictions of the target. But models are widely acknowledged to diverge from veridical representation of target phenomena in a number of ways, not only by omitting certain details, but also by positively including elements and characteristics known to be absent in the target: strictly speaking, models are always incomplete and false. If models are epistemically useful as representations of target phenomena, how can they sometimes be more useful because of their incompleteness and falsity?

In the strong artifactualist view proposed here, this problem is ill-posed because the differences between a model and a target are just that—differences between two entities. Tools may be more or less useful for certain purposes, but they are not truth-bearers. As tools, models may be more or less useful in certain investigations, for facilitating understanding of certain targets, and for scientists with specific abilities and goals—but they are neither true nor false of target phenomena. This view makes it possible to understand the difference between “good models” and “bad models” without recourse to representation, and in quite the same terms with which we can appreciate what makes for good and bad tools. A good hammer has to be sturdy, but it should not be so heavy that the intended user cannot operate it; still, a hammer that is too heavy may be more useful if the task is not to drive a nail in, but instead to hold something in place on the ground. Similarly, a model is a better or worse tool for some task and for intended users, depending on relational action possibilities involving scientists and model just as much as scientists and target. This approach makes the explanatory contribution of idealizations and abstractions (or, rather, differences) a lot less mysterious. Just as simplifications sometimes make ordinary tools work better, simplifications can make a model better fit for the scientists’ purposes, such as generating a prediction about the target by simulating a similar behavior, or leading to the identification of potential causal contributors and difference makers. In this perspective, the empirical confirmation of a model is not confirmation that the model represents the target (sufficiently) accurately, but rather that the similarities of interest hold, i.e., that the model offers the action possibilities that scientists value in that context. The differences between a model and some target phenomenon may or may not constitute ways in which the model is inadequate. If the differences make the model more useful—as is typically the case with idealizations and abstractions—then they should be seen as improvements rather than as inadequacies.

No Loans of Meaning

A central goal of ecological psychology is to understand human behavior without taking “loans of intelligence”: if our explanation of a person’s intelligent behavior is in terms of an intelligent internal sub-personal controller (rather than in terms of non-intelligent processes), then we must at some point repay our loan by explaining what makes the sub-personal controller intelligent in the first place (see, e.g., Dennett 1971, Kugler and Turvey 1987, Richardson et al 2008, Turvey and Carello 2012, Turvey 2018). Just as the ecological approach provides a way to understand human behavior without taking loans of intelligence, an affordance-based strong artifactualist account makes it possible to understand scientific models as tools without taking what I call loans of meaning.

It might seem hard to understand how we learn about phenomena in the real world through modeling if models are ‘mere’ tools that we use and engage with in various ways but which do not represent the target phenomena we are interested in learning about. As seen in chapter 2, this is the motivation for the weak artifactualism of all prominent accounts of models as tools: in that view, the artifactual nature of models makes them manipulable and usable, but it is their representational nature that makes them meaningful.

As seen in chapter 1, the logical landscape is such that theories of representation either treat the representation relation as mind-independent (i.e., as holding between model and target only) or as mind-dependent (i.e., as an agential, intentional accomplishment of scientists). And, as was also shown in chapter 1, virtually all current representationalist accounts have shifted away from a mind-independent toward a mind-dependent understanding of representation. It therefore follows, given the prominent understanding of representation, that the meaning of models in weak artifactualist accounts is derivative. In the weak artifactualist view, models as tools are meaningful because they represent target phenomena; but because representation is a mind-dependent relation, a model’s meaning is explained in terms of meaning somewhere else (say, in the mind of individual scientists) that gets bestowed upon the model. This is a loan of meaning that, sooner or later, must be repaid.

By explaining the usefulness of models, as tools, in terms of their affordances, the strong artifactualist view proposed here makes it possible for models to be meaningful without their meaning being derived from meaning somewhere else, such as in the minds of scientists. Tools, objects and parts of the environment are intrinsically functionally meaningful to interacting agents. Agents may, of course, be better or worse at perceiving, and capitalizing on, that meaning. And the tool’s meaning might well be different for different agents. Still, as a relation between an agent’s abilities and features of the tool, the tool’s meaning is objective and real, and accounting for that meaning does not require taking a loan of meaning. This may not be the whole story yet, but it’s a way to start the journey debt-free.

Bibliography

Adams, Fred and Kenneth Aizawa (2009). “Why the mind is still in the head”. In: The Cambridge Handbook of Situated Cognition, pp. 78–95.
Araujo, Duarte et al. (2009). “How does knowledge constrain sport performance? An ecological perspective”. In: Perspectives on Cognition and Action in Sport, pp. 119–132.
Barker, Roger G (1965). “Explorations in ecological psychology.” In: American Psychologist 20.1, p. 1.
Barker, Roger G, Jacob S Kounin, and Herbert F Wright (1943). Child behavior and development: A course of representative studies. McGraw-Hill.
Barker, Roger G and Herbert F Wright (1951). One boy’s day; a specimen record of behavior. Harper.
Barker, Roger Garlock (1968). Ecological psychology; concepts and methods for studying the environment of human behavior. Stanford University Press.
Barker, Roger Garlock and Herbert Fletcher Wright (1954). Midwest and its children: The psychological ecology of an American town. Row, Peterson, Evanston, IL.
Barsalou, Lawrence W (2008). “Grounded cognition”. In: Annual Review of Psychology 59, pp. 617–645.
Batterman, Robert W. and Collin C. Rice (2014). “Minimal Model Explanations”. In: Philosophy of Science 81.3, pp. 349–376.

Billett, Stephen (2002). “Toward a workplace pedagogy: Guidance, participation, and engagement”. In: Adult Education Quarterly 53.1, pp. 27–43.
Bloor, David (1976). Knowledge and social imagery. University of Chicago Press.
Boesch, Brandon (2017). “There is a special problem of scientific representation”. In: Philosophy of Science 84.5, pp. 970–981.
Bokulich, Alisa (2012). “Distinguishing Explanatory From Nonexplanatory Fictions”. In: Philosophy of Science 79.5, pp. 725–737.
— (2017). “Models and explanation”. In: Springer Handbook of Model-Based Science. Springer, pp. 103–118.
Brown, James Robert (1984). Scientific rationality: the sociological turn. Springer Science & Business Media.
Bueno, Otávio and Steven French (2011). “How theories represent”. In: The British Journal for the Philosophy of Science 62.4, pp. 857–894.
Callebaut, Werner (1993). Taking the naturalistic turn, or how real philosophy of science is done. University of Chicago Press.
Callender, Craig and Jonathan Cohen (2006). “There is no special problem about scientific representation”. In: Theoria. Revista de Teoría, Historia y Fundamentos de la Ciencia 21.1, pp. 67–85.
Chakravartty, Anjan (2010). “Informational Versus Functional Theories of Scientific Representation”. In: Synthese 172.2, pp. 197–213.
Chalmers, David J (1992). “Subsymbolic computation and the Chinese room”. In: The Symbolic and Connectionist Paradigm: Closing the Gap. Ed. by John Dinsmore. Hillsdale: Lawrence Erlbaum, pp. 25–48.

Chemero, Anthony (2003). “An Outline of a Theory of Affordances”. In: Ecological Psychology 15.2, pp. 181–195.
— (2009). Radical embodied cognitive science. MIT Press.
Chemero, Anthony and Michael Silberstein (2008). “Defending extended cognition”. In: Proceedings of the Annual Meeting of the Cognitive Science Society. Vol. 30.
Cheng, Kun-Hung and Chin-Chung Tsai (2013). “Affordances of augmented reality in science learning: Suggestions for future research”. In: Journal of Science Education and Technology 22.4, pp. 449–462.
Chomsky, Noam (1959). “A review of B.F. Skinner’s Verbal Behavior”. In: Language 35.1, pp. 26–58.
— (1986). Knowledge of language: Its nature, origin, and use. Greenwood Publishing Group.
Clark, Andy (1997). Being there: Putting brain, body, and world together again. MIT Press.
— (2001). Mindware: An introduction to the philosophy of cognitive science. Oxford University Press.
— (2013). “Whatever next? Predictive brains, situated agents, and the future of cognitive science”. In: Behavioral and Brain Sciences 36.3, pp. 181–204.
Clark, Andy and David Chalmers (1998). “The extended mind”. In: Analysis 58.1, pp. 7–19.
Contessa, Gabriele (2007). “Scientific Representation, Interpretation, and Surrogative Reasoning”. In: Philosophy of Science 74.1, pp. 48–68.
Currie, Adrian (2017). “From Models-as-fictions to Models-as-tools”. In: Ergo 4.27.
Davis, Tehran J et al. (2010). “Perceiving affordances for joint actions”. In: Perception 39.12, pp. 1624–1644.

Dennett, Daniel C (1971). “Intentional systems”. In: The Journal of Philosophy 68.4, pp. 87–106.
Dewey, John (1925/1994). Experience and Nature. Open Court Publishing Company.
Dieleman, Susan, David Rondel, and Christopher J Voparil (2017). Pragmatism and justice. Oxford University Press.
Elgin, Catherine Z. (2004). “True Enough”. In: Philosophical Issues 14.1, pp. 113–131.
Elgin, Catherine Z (2017). True enough. MIT Press.
Evans, Sandra K. et al. (2016). “Explicating Affordances: A Conceptual Framework for Understanding Affordances in Communication Research”. In: Journal of Computer-Mediated Communication 22.1, pp. 35–52. ISSN: 1083-6101. DOI: 10.1111/jcc4.12180. URL: https://doi.org/10.1111/jcc4.12180.
Fiskum, Tove Anita and Karl Jacobsen (2013). “Outdoor education gives fewer demands for action regulation and an increased variability of affordances”. In: Journal of Adventure Education & Outdoor Learning 13.1, pp. 76–99.
Fodor, Jerry A (1987). Psychosemantics: The problem of meaning in the philosophy of mind. MIT Press.
Fraassen, Bas C. van (1980). The Scientific Image. Oxford: Clarendon Press.
— (2008). Scientific Representation: Paradoxes of Perspective. Oxford: Oxford University Press.
Franchak, John M and Karen E Adolph (2014). “Gut estimates: Pregnant women adapt to changing possibilities for squeezing through doorways”. In: Attention, Perception, & Psychophysics 76.2, pp. 460–472.

French, Steven (2010). “Keeping quiet on the ontology of models”. In: Synthese 172.2, p. 231.
Frigg, Roman (2010a). “Fiction and scientific representation”. In: Beyond Mimesis and Convention. Springer, pp. 97–138.
— (2010b). “Fiction in Science”. In: Fictions and Models: New Essays. Ed. by John Woods. Philosophia Verlag. Chap. VI, pp. 247–287.
Frigg, Roman and James Nguyen (2017a). “Models and representation”. In: Springer Handbook of Model-Based Science. Springer, pp. 49–102.
— (2017b). “Scientific representation is representation-as”. In: Philosophy of Science in Practice. Ed. by H.-K. Chao and J. Reiss. Springer, pp. 149–179.
Friston, Karl (2009). “The free-energy principle: a rough guide to the brain?” In: Trends in Cognitive Sciences 13.7, pp. 293–301.
Gallese, Vittorio and Corrado Sinigaglia (2011). “What is so special about embodied simulation?” In: Trends in Cognitive Sciences 15.11, pp. 512–519.
Gelfert, Axel (2017). “The ontology of models”. In: Springer Handbook of Model-Based Science. Springer, pp. 5–23.
Gibson, Eleanor J (2003). “The world is so full of a number of things: On specification and perceptual learning”. In: Ecological Psychology 15.4, pp. 283–287.
Gibson, James J. (1979). The Ecological Approach to Visual Perception. Houghton Mifflin.
Gibson, James J and Eleanor J Gibson (1955). “Perceptual learning: Differentiation or enrichment?” In: Psychological Review 62.1, p. 32.
Gibson, James Jerome (1966). The Senses Considered as Perceptual Systems. Houghton Mifflin.

Giere, Ronald (2002). “Scientific cognition as distributed cognition”. In: The Cognitive Basis of Science. Ed. by Peter Carruthers, Stephen Stich, and Michael Siegal. Cambridge University Press. Chap. 15, p. 285.
— (2010). “An Agent-Based Conception of Models and Scientific Representation”. In: Synthese 172.2, pp. 269–281.
Giere, Ronald N. (1988). Explaining Science: A Cognitive Approach. Chicago: University of Chicago Press.
— (2004). “How Models Are Used to Represent Reality”. In: Philosophy of Science 71.5, pp. 742–752.
— (2006). Scientific Perspectivism. University of Chicago Press.
Godfrey-Smith, Peter (2006a). “The strategy of model-based science”. In: Biology and Philosophy 21, pp. 725–740.
— (2006b). “Theories and Models in Metaphysics”. In: The Harvard Review of Philosophy 14.1, pp. 4–19.
— (2009a). “Abstractions, idealizations, and evolutionary biology”. In: Mapping the Future of Biology. Springer, pp. 47–56.
— (2009b). “Models and fictions in science”. In: Philosophical Studies 143.1, pp. 101–116.
Goldman, Alvin and Frederique de Vignemont (2009). “Is social cognition embodied?” In: Trends in Cognitive Sciences 13.4, pp. 154–159.
Goldman, Alvin I (2014). “The bodily formats approach to embodied cognition”. In: Current Controversies in Philosophy of Mind, pp. 91–108.
Gouvea, Julia and Cynthia Passmore (2017). “‘Models of’ versus ‘Models for’”. In: Science & Education 26.1-2, pp. 49–63.

Grayling, Anthony Clifford (2016). The age of genius: the seventeenth century and the birth of the modern mind. Bloomsbury Publishing.
Hartson, Rex (2003). “Cognitive, physical, sensory, and functional affordances in interaction design”. In: Behaviour & Information Technology 22.5, pp. 315–338.
Heft, Harry (2001). Ecological psychology in context: James Gibson, Roger Barker, and the legacy of William James’s radical empiricism. Psychology Press.
Heidegger, Martin (1927/2001). Being and Time. Ed. by John Macquarrie and Edward Robinson. Blackwell Publishers Ltd.
Henry, John (2008). The scientific revolution and the origins of modern science. Palgrave Macmillan.
Hohwy, Jakob (2012). “Attention and conscious perception in the hypothesis testing brain”. In: Frontiers in Psychology 3, p. 96.
Hughes, Richard IG (1997). “Models and representation”. In: Philosophy of Science 64, S325–S336.
Hutchins, Edwin and Tove Klausen (1996). “Distributed cognition in an airline cockpit”. In: Cognition and Communication at Work, pp. 15–34.
Hutto, Daniel and Erik Myin (2013). Radicalizing enactivism: Basic minds without content. Cambridge, MA: MIT Press.
Isaac, Alistair M.C. (2013). “Modeling Without Representation”. In: Synthese 190, pp. 3611–3623.
Isenhower, Robert W et al. (2010). “Affording cooperation: Embodied constraints, dynamics, and action-scaled invariance in joint lifting”. In: Psychonomic Bulletin & Review 17.3, pp. 342–347.

James, W. (1907). Pragmatism, a New Name for Some Old Ways of Thinking: Popular Lectures on Philosophy. Longmans, Green and Co.
Jubb, Robert (2016). “Norms, Evaluations, and Ideal and Nonideal Theory”. In: Social Philosophy and Policy 33.1-2, pp. 393–412.
Kearney, Hugh (1971). Science and change, 1500–1700. London: Weidenfeld and Nicolson.
Keller, Evelyn Fox (2000). “Models of and models for: Theory and practice in contemporary biology”. In: Philosophy of Science 67, S72–S86.
— (2002). Making Sense of Life: Explaining Biological Development with Models, Metaphors, and Machines. Harvard University Press.
Kennedy, Ashley Graham (2012). “A Non Representationalist View of Model Explanation”. In: Studies in History and Philosophy of Science 43, pp. 326–332.
Kersten, Luke (2017). “A mechanistic account of wide computationalism”. In: Review of Philosophy and Psychology 8.3, pp. 501–517.
Knuuttila, Tarja (2005). “Models, representation, and mediation”. In: Philosophy of Science 72.5, pp. 1260–1271.
— (2010). “Not just underlying structures: Towards a semiotic approach to scientific representation and modeling”. In: Ideas in Action: Proceedings of the Applying Peirce Conference. Ed. by A.V. Pietarinen, M. Bergman, S. Paavola, and H. Rydenfelt, pp. 163–172.
— (2011). “Modelling and representing: An artefactual approach to model-based representation”. In: Studies in History and Philosophy of Science Part A 42.2, pp. 262–271.
— (2017). “Imagination Extended and Embedded: Artifactual Versus Fictional Accounts of Models”. In: Synthese, pp. 1–21.

Koutamanis, Alexander (2006). “Buildings and affordances”. In: Design Computing and Cognition ’06. Springer, pp. 345–364.
Krieger, Michael JB, Jean-Bernard Billeter, and Laurent Keller (2000). “Ant-like task allocation and recruitment in cooperative robots”. In: Nature 406.6799, p. 992.
Kugler, Peter N and Michael T Turvey (1987). Information, Natural Law, and the Self-assembly of Rhythmic Movement. Lawrence Erlbaum.
Lakoff, George and Mark Johnson (1980). Metaphors we live by. University of Chicago Press.
— (1999). Philosophy in the Flesh. Vol. 4. New York: Basic Books.
Lee, David N and Paul E Reddish (1981). “Plummeting gannets: a paradigm of ecological optics”. In: Nature 293.5830, p. 293.
Lindberg, David C (2010). The beginnings of Western science: The European scientific tradition in philosophical, religious, and institutional context, prehistory to AD 1450. University of Chicago Press.
Lloyd, Elisabeth A. (2010). “Confirmation and Robustness of Climate Models”. In: Philosophy of Science 77.5, pp. 971–984.
Lombardo, Thomas J (1987). The reciprocity of perceiver and environment: The evolution of James J. Gibson’s ecological psychology. Lawrence Erlbaum Associates.
MacBride, Fraser (2016). “Relations”. In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta. Winter 2016. Metaphysics Research Lab, Stanford University.
Mace, William M. (1977). “Gibson’s strategy for perceiving: Ask not what’s inside your head, but what your head’s inside of”. In: Perceiving, Acting, and Knowing. Ed. by Robert Shaw and John Bransford. Lawrence Erlbaum Associates, pp. 43–65.

Maier, Jonathan RA and Georges M Fadel (2009). “Affordance based design: a relational theory for design”. In: Research in Engineering Design 20.1, pp. 13–27.
Maier, Jonathan RA, Georges M Fadel, and Dina G Battisto (2009). “An affordance-based approach to architectural theory, design, and practice”. In: Design Studies 30.4, pp. 393–414.
Majchrzak, Ann et al. (2013). “The contradictory influence of social media affordances on online communal knowledge sharing”. In: Journal of Computer-Mediated Communication 19.1, pp. 38–55.
Malafouris, Lambros (2013). How things shape the mind. MIT Press.
Mark, Leonard S (1987). “Eyeheight-scaled information about affordances: A study of sitting and stair climbing.” In: Journal of Experimental Psychology: Human Perception and Performance 13.
Marr, David (1982). Vision: A computational investigation into the human representation and processing of visual information. MIT Press.
Martin, David, John Bowers, and David Wastell (1997). “The interactional affordances of technology: An ethnography of human-computer interaction in an ambulance control centre”. In: People and Computers XII. Springer, pp. 263–281.
Maturana, Humberto R and Francisco J Varela (1980). “Autopoiesis: The organization of the living”. In: Autopoiesis and Cognition: The Realization of the Living 42, pp. 59–138.
McClellan III, James E and Harold Dorn (2015). Science and technology in world history: an introduction. Johns Hopkins University Press.
McEvoy, John G (1997). “Positivism, whiggism, and the Chemical Revolution: A study in the historiography of chemistry”. In: History of Science 35.1, pp. 1–33.

McGrenere, Joanna and Wayne Ho (2000). “Affordances: Clarifying and Evolving a Concept”. In: Proceedings of the Graphics Interface 2000 Conference, May 15–17, 2000, Montreal, Quebec, Canada, pp. 179–186. URL: http://graphicsinterface.org/wp-content/uploads/gi2000-24.pdf.
Meli, Domenico Bertoloni (2006). Thinking with objects: The transformation of mechanics in the seventeenth century. Johns Hopkins University Press.
Melogno, Pablo (2019). “The Discovery-Justification Distinction and the New Historiography of Science: On Thomas Kuhn’s Thalheimer Lectures”. In: HOPOS: The Journal of the International Society for the History of Philosophy of Science 9.1, pp. 152–178.
Menary, Richard (2010). The extended mind. MIT Press.
Michaels, Claire F and Claudia Carello (1981). Direct perception. Prentice-Hall.
Miller, George A (2003). “The cognitive revolution: a historical perspective”. In: Trends in Cognitive Sciences 7.3, pp. 141–144.
Millikan, Ruth Garrett (1991). “Perceptual content and Fregean myth”. In: Mind 100.4, pp. 439–459.
Morgan, M.S. and M. Morrison (1999). Models as Mediators: Perspectives on Natural and Social Science. Ideas in Context. Cambridge University Press. ISBN: 9780521655712.
Morrison, Margaret (2015). Reconstructing reality: Models, mathematics, and simulations. Oxford University Press, USA.
Morrison, Margaret and Mary S. Morgan (1999). “Models as mediating instruments”. In: Models as Mediators: Perspectives on Natural and Social Science. Ed. by Mary S. Morgan and Margaret Morrison. Cambridge University Press, pp. 10–37.

Myin, Erik and Daniel D Hutto (2015). “REC: Just radical enough”. In: Studies in Logic, Grammar and Rhetoric 41.1, pp. 61–71.
Nagy, Peter and Gina Neff (2015). “Imagined affordance: Reconstructing a keyword for communication theory”. In: Social Media + Society 1.2, p. 2056305115603385.
Norman, Donald A (1999). “Affordance, conventions, and design”. In: Interactions 6.3, pp. 38–43.
Norman, Donald A et al. (1988). The psychology of everyday things. Vol. 5. New York: Basic Books.
Oliveira, Guilherme Sanches de (2018a). “Ecological Psychology and the Environmentalist Promise of Affordances”. In: Proceedings of the 40th Annual Conference of the Cognitive Science Society.
— (2018b). “Representationalism is a dead end”. In: Synthese, pp. 1–27.
Oliveira, Guilherme Sanches de and Anthony Chemero (2015). “Against Smallism and Localism”. In: Studies in Logic, Grammar and Rhetoric 41.1, pp. 9–23.
Parker, Wendy S. (2011). “When Climate Models Agree: The Significance of Robust Model Predictions”. In: Philosophy of Science 78.4, pp. 579–600.
Pella, Shannon (2015). “Pedagogical Reasoning and Action: Affordances of Practice-Based Teacher Professional Development.” In: Teacher Education Quarterly 42.3, pp. 81–101.
Peschard, Isabelle F and Bas C Van Fraassen (2018). The Experimental Side of Modeling. University of Minnesota Press.
Pickering, Andy (1994). “After representation: science studies in the performative idiom”. In: PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association. Vol. 1994. 2. Philosophy of Science Association, pp. 413–419.

Pincock, Christopher (2012). Mathematics and scientific representation. Oxford University Press.
Popper, Karl (1959/2005). The logic of scientific discovery. Routledge.
Potochnik, Angela (2015). “The Diverse Aims of Science”. In: Studies in History and Philosophy of Science Part A 53, pp. 71–80.
— (2017). Idealization and the Aims of Science. The University of Chicago Press.
Principe, Lawrence M (2011). “Alchemy restored”. In: Isis 102.2, pp. 305–312.
Ramenzoni, Verónica et al. (2008). “Tuning in to another person’s action capabilities: Perceiving maximal jumping-reach height from walking kinematics.” In: Journal of Experimental Psychology: Human Perception and Performance 34.4, p. 919.
Ratti, Emanuele (2018). “‘Models of’ and ‘Models for’: On the Relation between Mechanistic Models and Experimental Strategies in Molecular Biology”. In: British Journal for the Philosophy of Science. DOI: 10.1093/bjps/axy018. URL: https://doi.org/10.1093/bjps/axy018.
Rawls, John (1971). A Theory of Justice. Harvard University Press.
Reed, Edward S (1996). Encountering the World: Toward an Ecological Psychology. Oxford University Press.
Reeve, Richard et al. (2005). “New technologies for testing a model of cricket phonotaxis on an outdoor robot”. In: Robotics and Autonomous Systems 51.1, pp. 41–54.
Reichenbach, Hans (1938). Experience and prediction: An analysis of the foundations and the structure of knowledge. University of Chicago Press.

Richardson, Michael J, Kerry L Marsh, and Reuben M Baron (2007). “Judging and actualizing intrapersonal and interpersonal affordances.” In: Journal of Experimental Psychology: Human Perception and Performance 33.4, p. 845.
Richardson, Michael J et al. (2008). “Ecological psychology: Six principles for an embodied–embedded approach to behavior”. In: Handbook of Cognitive Science. Elsevier, pp. 159–187.
Rietveld, Erik and Julian Kiverstein (2014). “A rich landscape of affordances”. In: Ecological Psychology 26.4, pp. 325–352.
Rorty, Richard McKay (1991). Objectivity, relativism, and truth: philosophical papers. Vol. 1. Cambridge University Press.
Ryle, Gilbert (2009). “The Thinking of Thoughts: What is ‘Le Penseur’ Doing?” In: Collected Essays 1929–1968: Collected Papers Volume 2. Routledge. Originally published in University Lectures, no. 18, 1968, University of Saskatchewan, pp. 494–510.
Salis, Fiora (2019). “The new fiction view of models”. In: British Journal for the Philosophy of Science.
Scarantino, Andrea (2003). “Affordances Explained”. In: Philosophy of Science 70.5, pp. 949–961.
Schickore, Jutta (2018). “Scientific Discovery”. In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta. Summer 2018. Metaphysics Research Lab, Stanford University.
Seifert, Ludovic, Chris Button, and Keith Davids (2013). “Key properties of expert movement systems in sport”. In: Sports Medicine 43.3, pp. 167–178.
Sen, Amartya (2006). “What do we want from a theory of justice?” In: The Journal of Philosophy 103.5, pp. 215–238.

Sen, Amartya Kumar (2009). The idea of justice. Harvard University Press.
Silva, Pedro et al. (2013). “Shared knowledge or shared affordances? Insights from an ecological dynamics approach to team coordination in sports”. In: Sports Medicine 43.9, pp. 765–772.
Simmons, A John (2010). “Ideal and nonideal theory”. In: Philosophy & Public Affairs 38.1, pp. 5–36.
Suarez, Mauricio (2003). “Scientific Representation: Against Similarity and Isomorphism”. In: International Studies in the Philosophy of Science 17.3, pp. 225–244.
— (2004). “An Inferential Conception of Scientific Representation”. In: Philosophy of Science 71.5, pp. 767–779.
— (2015). “Deflationary Representation, Inference, and Practice”. In: Studies in History and Philosophy of Science Part A 49, pp. 36–47.
Sutton, John et al. (2010). “The psychology of memory, extended cognition, and socially distributed remembering”. In: Phenomenology and the Cognitive Sciences 9.4, pp. 521–560.
Thagard, Paul (2005). Mind: Introduction to Cognitive Science. MIT Press.
Toon, Adam (2012). Models as Make-Believe: Imagination, Fiction, and Scientific Representation. Palgrave Macmillan.
Turvey, Michael T (1992). “Affordances and prospective control: An outline of the ontology”. In: Ecological Psychology 4.3, pp. 173–187.
— (2018). Lectures on Perception: An Ecological Perspective. Routledge.
Turvey, Michael T and Claudia Carello (2012). “On intelligence from first principles: Guidelines for inquiry into the hypothesis of physical intelligence (PI)”. In: Ecological Psychology 24.1, pp. 3–32.

Turvey, MT and Claudia Carello (1986). “The ecological approach to perceiving-acting: A pictorial essay”. In: Acta Psychologica 63.1-3, pp. 133–155.
Varela, Francisco, Evan Thompson, and Eleanor Rosch (1991). The embodied mind: cognitive science and human experience. MIT Press.
Wagman, Jeffrey B, Sarah E Caputo, and Thomas A Stoffregen (2016a). “Hierarchical nesting of affordances in a tool use task.” In: Journal of Experimental Psychology: Human Perception and Performance 42.10, p. 1627.
— (2016b). “Sensitivity to hierarchical relations among affordances in the assembly of asymmetric tools”. In: Experimental Brain Research 234.10, pp. 2923–2933.
Wagman, Jeffrey B et al. (2017). “Perceiving nested affordances for another person’s actions”. In: The Quarterly Journal of Experimental Psychology, pp. 1–11.
Warren, William H (1984). “Perceiving affordances: visual guidance of stair climbing.” In: Journal of Experimental Psychology: Human Perception and Performance 10.5, p. 683.
Warren Jr, William H and Suzanne Whang (1987). “Visual guidance of walking through apertures: body-scaled information for affordances.” In: Journal of Experimental Psychology: Human Perception and Performance 13.3, p. 371.
Weast, Julie A, Kevin Shockley, and Michael A Riley (2011). “The influence of athletic experience and kinematic information on skill-relevant affordance perception”. In: The Quarterly Journal of Experimental Psychology 64.4, pp. 689–706.
Weisberg, Michael (2012). Simulation and similarity: Using models to understand the world. Oxford University Press.
Wilson, Robert A (1994). “Wide Computationalism”. In: Mind 103.411, pp. 351–372.
— (2004). Boundaries of the Mind: The Individual in the Fragile Sciences. Cambridge University Press.

Wimsatt, William C. (1987). “False Models as Means to Truer Theories”. In: Neutral Models in Biology. Ed. by N. Nitecki and A. Hoffman. Oxford: Oxford University Press, pp. 23–55.
Winsberg, Eric (2018a). Philosophy and climate science. Cambridge University Press.
— (2018b). “Values and Evidence in Model-Based Climate Forecasting”. In: The Experimental Side of Modeling. University of Minnesota Press, pp. 218–239.
Withagen, Rob and Margot van Wermeskerken (2010). “The role of affordances in the evolutionary process reconsidered: A niche construction perspective”. In: Theory & Psychology 20.4, pp. 489–510.
Woods, John and Alirio Rosales (2010). “Virtuous distortion”. In: Model-based reasoning in science and technology. Springer, pp. 3–30.
You, Hsiao-chen and Kuohsiang Chen (2007). “Applications of affordance and semantics in product design”. In: Design Studies 28.1, pp. 23–38.
Zinas, Zachariah Bako and MBM Jusan (2014). “Perceived affordances and motivations for housing interior walls finishes preference and choice”. In: Journal of Civil Engineering and Architecture Research 1.1, pp. 71–77.